Value of Science 106: The VALUABLES Impact Assessment Framework

The VALUABLES impact assessment framework can be used to investigate how new data influence decisions and to quantify how those decisions improve societal outcomes, such as lives or dollars saved.


July 22, 2021

Yusuke Kuwayama

Organizations commonly use a system of metrics to track the outcomes of their activities, and these assessments can be critical to demonstrating the return on investment or progress toward the organization’s mission. These assessments also can be important for prioritizing future activities and investments, because they provide evidence of past successes and failures. One popular and well-developed tool is an impact assessment (sometimes also referred to as an impact evaluation)—a rigorous, quantitative study that compares outcomes with and without the activity.

For example, consider an international aid program designed to increase women’s access to agriculture and nutrition training in Bangladesh or a set of proposed regulations that tighten fuel economy standards for vehicles. In both contexts, an impact assessment would compare outcomes with and without the intervention to better understand its effects.

But what happens when the activity that is being assessed is the generation of scientific information? Just like in international development, public policy, and many other fields, scientists can conduct impact assessments to quantify the value of improved scientific information, drawing on tools from the field of economics and, in particular, a technique known as the Value of Information (VOI) method. This type of impact assessment investigates how new data influence decisions and quantifies how these decisions improve societal outcomes, such as lives or dollars saved.


Impact Assessments: Key Concepts

Impact assessments can be conducted using many different methods, but at their core, they are analyses that compare two states of the world: one in which the activity being assessed happens and one in which it doesn’t. When thinking about the value of information, the “activity” is the application of improved information to the decisionmaking process, and the difference in socioeconomically meaningful outcomes between the two states represents the value of the information. A few key concepts help us describe these two worlds in more detail:

  • Reference and counterfactual cases: The two states of the world are often referred to as a reference case and a counterfactual case. The reference case is the world we observe, and the counterfactual case is a hypothetical world that differs from the reference in one specific way. In the case of VOI, if the information that is being assessed exists and is being used in a decision, that is the reference case, and the counterfactual is the world without the information. If, on the other hand, the improved information does not currently exist, then the world without the information is the reference case, and the counterfactual is a world in which the improved information exists and is being used.
  • Ex post vs. ex ante assessments: An ex post assessment is a retrospective evaluation based on the observed outcomes of a past intervention or decision that made use of improved information. An ex ante assessment looks ahead, estimating the outcomes of a future intervention or decision that uses improved information.

The VALUABLES Impact Assessment Framework

These basic concepts underpin the VALUABLES impact assessment framework (Figure 1), a tool that maps out how one can quantify the societal value of improved data from Earth observations in specific decision contexts.

The framework outlines a three-step process that links information, decisions, and outcomes that are important to people and the environment, in two different states of the world:

  • a state in which improved information is available to the decisionmaker (represented by the blue column) and
  • a state in which the improved information is not available (represented by the red column).

As described above, one of these columns represents the reference case, and the other column represents the counterfactual case. Either case may appear in either the blue or the red column, depending on whether the information is already being used in the decision context when the impact assessment is being conducted.

The three rows of the framework represent three steps that a researcher must take to map out the impact assessment:

  1. The first step, represented by the top row of boxes, is to define the improved information (blue column) and the information that would be used in its absence (red column).
  2. The second step, represented by the middle row of boxes, is to identify the decisions that result from using the improved information (blue column) versus the decisions made in its absence (red column).
  3. The third step, represented by the bottom row of boxes, is to compare socioeconomically important outcomes—those that matter to people and the environment—when the decisions are made with (blue column) or without (red column) the improved information.

Once all six boxes of the framework are filled in, the researcher has mapped out the impact assessment. The most important role of the framework is to clearly illustrate the logic behind why the information is beneficial for society, by defining how a world in which decisions are made with the information evolves differently from a world in which decisions are made without the information. Crucially, the difference in socioeconomically meaningful outcomes between the two states—i.e., the difference between the outcomes identified in the two bottom boxes of the framework—represents the value of the information.
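As a minimal sketch, the six-box logic can be written down in code: each column of the framework holds an information source, a decision, and an outcome, and the VOI is the difference between the two bottom-row outcomes. All names and numbers below are hypothetical illustrations, not values from any VALUABLES study.

```python
from dataclasses import dataclass

@dataclass
class Case:
    """One column of the framework: rows 1-3 for a single state of the world."""
    information: str     # row 1: the information used
    decision: str        # row 2: the decision that information leads to
    outcome_cost: float  # row 3: socioeconomic outcome, expressed as a cost in dollars

def value_of_information(with_info: Case, without_info: Case) -> float:
    """VOI is the difference in outcomes between the two states of the world.

    With outcomes expressed as costs, VOI is the cost avoided by making
    decisions with the improved information rather than without it.
    """
    return without_info.outcome_cost - with_info.outcome_cost

# Hypothetical illustration (all values invented):
blue_column = Case("improved satellite data", "weekly advisories", 100_000)
red_column = Case("existing monitoring only", "monthly advisories", 250_000)
print(value_of_information(blue_column, red_column))  # prints 150000
```

Whether the blue column serves as the reference case or the counterfactual depends, as noted above, on whether the improved information is already in use; the arithmetic is the same either way.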

Figure 1. The VALUABLES Impact Assessment Framework


Putting Impact Assessments to Work: Two Case Studies

Researchers in the VALUABLES Consortium are using the VOI framework to conduct a series of impact assessments that measure the socioeconomic benefits of incorporating improved Earth observations into natural resource management. Comparing the designs of two studies in this series helps demonstrate the key concepts and the framework in action.

The first study, by Signe Stroming and other researchers, looks at how Utah’s Department of Environmental Quality uses satellite data to detect cyanobacterial harmful algal blooms (cyanoHABs) and issue recreational advisories for visitors to Utah Lake. The authors conduct an ex post, or retrospective, impact assessment based on a 2017 cyanoHAB event. They compare the agency’s actual decisions, which incorporate the satellite data (the reference case), with a hypothetical situation in which only data collected from site testing and user reports were used (the counterfactual). The study focuses in particular on how using satellite imagery to issue timely lake advisories improves human health outcomes.

Figure 2 shows the corresponding three-step impact assessment framework. The first row shows that the improved information (the reference case, in blue) includes satellite data, while the information that would have been used in its absence (the counterfactual case, in red) consists only of site testing and user reports. The second row shows that, with satellite data, the agency can issue warnings or closures on a weekly basis; without satellite data, proactive decisions can be made much less often, on a monthly basis. The outcomes tied to these decisions, in row three, are the total number of people who fall ill and the associated health care costs, with and without the availability of weekly satellite data. In other words, row three makes clear that the value of the satellite data in this decision context is the difference in health outcomes: the availability of data leads to fewer people getting sick and avoids health care costs.
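To make the row-three comparison concrete, here is a small ex post sketch in the spirit of the Utah Lake case. Every number is an invented placeholder, not a figure from the study.

```python
# Ex post sketch of the cyanoHAB row-three comparison.
# All numbers are hypothetical placeholders, not results from the study.

cost_per_illness_usd = 1_200       # assumed average health care cost per case

illnesses_with_satellite = 300     # weekly advisories: exposure cut off sooner
illnesses_without_satellite = 550  # monthly cadence: more people exposed

avoided_illnesses = illnesses_without_satellite - illnesses_with_satellite
avoided_costs_usd = avoided_illnesses * cost_per_illness_usd

# The difference in outcomes is the value of the satellite data here:
print(avoided_illnesses, avoided_costs_usd)  # prints: 250 300000
```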

Figure 2. VALUABLES Impact Assessment Framework: Satellite Data on Harmful Algal Blooms versus Site Testing and User Reports Only


In the second study that we’ll explore here, Richard Bernknopf, Andrew Steinkruger, and Yusuke Kuwayama design an ex ante, or forward-looking, assessment to examine how satellite data that are available but not yet being used could help marine shipping operators comply with future regulations to conserve Pacific blue whales at lower cost. The conservation method they investigate is adjusting the speed limits of shipping vessels at critical times and locations to avoid fatal collisions with whales. Their impact assessment compares a reference case, in which vessel speed limits depend on whale distribution data from shipboard surveys, with a counterfactual case, in which speed limits depend on whale distributions predicted by WhaleWatch, a federal data product that incorporates information from several remote-sensing technologies to provide near-real-time data on the location of blue whales.

Figure 3 shows the completed VALUABLES VOI framework. Here, the improved information in the first row is whale distributions predicted by WhaleWatch (in blue), and the information that is used in the absence of WhaleWatch is whale distributions predicted by shipboard sightings (in red). The second row shows that the WhaleWatch data help scientists estimate fatal ship strikes with higher confidence compared to shipboard sightings; thus, the data also can help shipping managers implement regulations to slow ships down less often and in fewer areas. The third row describes the outcomes of these decisions, showing how decisions that use the improved information lead to lower estimated whale deaths from ship strikes, which helps conserve blue whales at a lower cost to ship operators.
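The ex ante logic can be sketched the same way: hold the conservation target fixed and compare the compliance cost under each information source. The numbers below are invented for illustration and do not come from the study.

```python
# Ex ante sketch: cost of meeting the same whale-conservation target
# under two information sources. All figures are hypothetical.

# Cost (USD per year) of vessel slowdowns needed to hold expected fatal
# ship strikes below a fixed regulatory cap, by information source.
slowdown_cost_shipboard = 40_000_000   # coarser data: slow ships more broadly
slowdown_cost_whalewatch = 25_000_000  # near-real-time data: targeted slowdowns

# VOI of WhaleWatch in this decision context: compliance cost avoided.
voi_usd_per_year = slowdown_cost_shipboard - slowdown_cost_whalewatch
print(voi_usd_per_year)  # prints 15000000
```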

Figure 3. VALUABLES Impact Assessment Framework: WhaleWatch Data versus Shipboard Sightings

In both of these example studies, the difference in outcomes is quantified in dollar terms. The authors of the algal bloom study estimate that incorporating weekly satellite data in the management of water quality advisories saved $370,000 in health care costs during the 2017 event on Utah Lake. According to the whale conservation impact assessment, the annual VOI of WhaleWatch ranges from $21 million to $332 million, depending on the conservation goals of the modeled regulation.

In other cases, putting a dollar value on the benefits of improved information might not be feasible or necessary—for example, in the case of lives saved or a reduction in the concentration of a pollutant in waterways. While economists have tools to monetize non-market goods, simply quantifying the difference in outcomes often is enough to illustrate the value of the improved information in a compelling way.

