Policy commentary

What Can Policymakers Learn from Experimental Economics?

Oct 15, 2007 | John A. List

Welcome to the RFF Weekly Policy Commentary, which is meant to provide an easy way to learn about important policy issues related to environmental, energy, urban, and public health problems.

This week we are very happy to introduce John List, a professor of economics at the University of Chicago and an RFF University Fellow. List discusses how research on field experiments bears on the issue of how we might quantify the benefits of environmental policies; such valuations are a critical ingredient for judging whether or not individual policies make sense on cost–benefit grounds.

How can we value the benefits of preserving wilderness areas and wetlands, of the recreation afforded by cleaner lakes and rivers, or of reducing air pollution? Good policy requires good data on economic values, and economists generally rely on markets to provide them. But in some areas, notably environmental protection, we often need to know the worth that society assigns to incremental benefits for which there are no markets.

This need is frequently a legal requirement. Ever since President Reagan’s 1981 executive order, federal agencies, including the Environmental Protection Agency, have been required to weigh both the benefits and costs of economically significant regulations before implementation.

Economists rely on several different methods to estimate environmental benefits or damages. For example, one approach to valuing the benefits of cleaner air is to compare how much extra people are willing to pay for houses in regions with good air quality, such as in Laramie, WY, with houses in regions with relatively dirty air, like Los Angeles. The main challenge here is trying to separate out, statistically, the price premium for clean air from all the other factors that may cause property prices to differ across different regions, including local factors such as climate, job opportunities, crime levels, school quality, and so on. Moreover, this approach is limited in that it cannot be used, for example, to value how much people would be willing to pay to know that Alaska’s Arctic National Wildlife Refuge will be passed on to future generations in pristine condition, even though they may never visit the refuge themselves. As opposed to the value of clean air, which people inhale and thus “use,” these latter kinds of values are considered “non-use values.” They pose problems in that they generally lack markets—and therefore prices—that economists could use for analysis.
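The hedonic approach described above amounts to a regression of house prices on air quality alongside the other local attributes, so that the clean-air premium can be isolated statistically. The following toy sketch (all data and coefficients are invented for illustration, not drawn from any actual study) shows the idea:

```python
import numpy as np

# Toy hedonic regression: house price as a function of air quality and
# other local attributes. All numbers below are invented for illustration.
rng = np.random.default_rng(0)
n = 500
air_quality = rng.uniform(0, 10, n)      # higher = cleaner air
school_quality = rng.uniform(0, 10, n)
crime_rate = rng.uniform(0, 10, n)

# Assumed "true" premium: each unit of cleaner air adds $5,000 to price.
price = (200_000 + 5_000 * air_quality + 8_000 * school_quality
         - 6_000 * crime_rate + rng.normal(0, 10_000, n))

# OLS separates the clean-air premium from the other local factors.
X = np.column_stack([np.ones(n), air_quality, school_quality, crime_rate])
coef, *_ = np.linalg.lstsq(X, price, rcond=None)
print(f"estimated clean-air premium per unit: ${coef[1]:,.0f}")
```

Because school quality and crime enter the regression alongside air quality, the estimated coefficient recovers roughly the $5,000 premium built into the simulated data rather than conflating it with the other amenities.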

The most widely used approach to estimating the total value of non-market goods and services is known as contingent valuation (CV). Under this approach, the researcher uses a questionnaire to ask respondents contingent questions concerning how much they would be willing to pay in donations, taxes, or price increases to achieve a certain goal—preservation of an endangered species, perhaps, or the clean-up of a contaminated area. 

Possibly the most celebrated example of CV in an environmental case arose from the 1989 Exxon Valdez oil spill. On behalf of the state of Alaska, a group of economists conducted a large-scale CV study of Americans’ willingness to pay for the avoidance of another oil spill in Prince William Sound, and the state used the resulting figure, $2.8 billion, in court. The final settlement was $1 billion on top of the $2 billion that Exxon itself spent on restoration.

In California, in another notable case, a fight over water rights raised the question of whether it was worth diverting water into Mono Lake to ensure the survival of the lake’s flora and fauna. Certain downstream parties derided it as a choice between the interests of “300 fish versus 28,000 people.” But the state’s Water Resources Control Board was persuaded otherwise and ordered an increase in the flows into the lake that significantly decreased Los Angeles’s water rights.

Even though the CV approach has clearly influenced the policy process, it has remained highly contentious, for it is difficult to know whether people’s answers to hypothetical questions provide a reliable guide to the amounts that they would actually be willing to pay in practice. Here, the techniques of experimental economics are making a significant contribution. Experimental economics sets up choices that people actually make, whether in the laboratory, under carefully controlled conditions, or in the field, where their decisions can sometimes be compared with results in real markets.

In one early use of the technique, a generation ago in Sweden, the economist Peter Bohm compared respondents’ answers to hypothetical questions about the value of admission to a sneak preview of a television show with the prices in an actual market for admission. He found that the hypothetical values were higher, but only moderately so. In a recent meta-analysis of these kinds of studies, Craig Gallet and I found that, on average, hypothetical values are three times larger than what people are actually willing to pay in a market setting. Further laboratory and field experiments should help identify the situations in which contingent valuation is reliable.
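The arithmetic behind such a meta-analytic finding, and one crude way it might be used, can be sketched as follows (the paired values below are invented, and deflating by an average ratio is only an illustrative calibration, not a method endorsed in the literature):

```python
# Toy illustration of hypothetical bias. Each pair is an invented
# (hypothetical stated value, actual payment) for the same good.
stated_vs_actual = [(30.0, 10.0), (60.0, 22.0), (15.0, 5.0), (45.0, 14.0)]

ratios = [stated / actual for stated, actual in stated_vs_actual]
calibration = sum(ratios) / len(ratios)   # average overstatement factor
print(f"average hypothetical/actual ratio: {calibration:.2f}")

# A crude calibration: deflate a new stated value by the average ratio.
new_stated = 90.0
print(f"calibrated estimate: ${new_stated / calibration:.2f}")
```

With these invented pairs the average ratio comes out near three, mirroring the meta-analysis result quoted above.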

Another complication of non-market valuation is that the values elicited depend on how the question is posed. Sometimes people are asked what they would pay to prevent the loss of a certain environmental benefit, such as a wetland. Sometimes researchers reverse the question and ask what respondents would consider fair compensation for suffering that loss. Typically, people demand far more in compensation than they are willing to pay to avoid the same loss.

At first, many economists argued that the compensation answers were unreliable and should not be taken seriously. But laboratory experimentation reinforced the survey evidence, confirming that the gap between willingness to pay and demanded compensation is robust across a wide variety of goods. Field experiments have complemented this lab and survey evidence by revealing its limits. For example, my own work shows that people experienced in trading ordinary private goods, like mugs and candy bars, do not exhibit this value disparity. Other field evidence using public goods, such as improved environmental quality, reinforces these results: the disparity shrinks because experienced people state much lower compensation values.

Experimental research now under way in the field demonstrates that there is much to be gained from designing economic experiments that bridge the laboratory and the world outside, with important implications for economics. Examples include developing new auction formats to distribute pollution permits, exploring compensation mechanisms in social dilemmas, such as those that arise in many endangered-species cases, and examining efficient ways to provide public goods.

What has become clear in this process is that field experiments can play an important role in discovery by allowing us to draw stronger inferences than we could from laboratory or uncontrolled data alone. Much as astronomy draws on the insights of particle physics and classical mechanics to sharpen its own, field experiments can supply the behavioral principles needed for sharper policy advice.


Views expressed are those of the author. RFF does not take institutional positions on legislative or policy questions.



Further Readings:

Bohm, Peter, “Estimating the Demand for Public Goods: An Experiment,” European Economic Review, June 1972, 3(2): 111-130.

Carson, Richard, et al., “Contingent Valuation and Lost Passive Use: Damages from the Exxon Valdez Oil Spill,” Environmental and Resource Economics, 2003, 25: 257-286.

Cummings, Ronald G., Steven Elliott, Glenn W. Harrison, and James Murphy, “Are Hypothetical Referenda Incentive Compatible?” Journal of Political Economy, 1997, 105(3): 609-621.

Harrison, Glenn W., and Laura L. Osborne, “Can the Bias of Contingent Valuation Be Reduced? Evidence from the Laboratory,” Economics Working Paper B-95-03, Division of Research, College of Business Administration, University of South Carolina, 1995.

Cummings, Ronald G., and Laura O. Taylor, “Unbiased Value Estimates for Environmental Goods: A Cheap Talk Design for the Contingent Valuation Method,” American Economic Review, 1999, 89(3): 649-665.

Harrison, Glenn W., “Hypothetical Bias Over Uncertain Outcomes,” in Using Experimental Methods in Environmental and Resource Economics, John A. List, ed., Northampton, MA: Elgar, 2006.

Harrison, Glenn W., and John A. List, “Field Experiments,” Journal of Economic Literature, 2004, 42(4): 1009-1055.

Harrison, Glenn W., and Elisabet Rutström, “Experimental Evidence of Hypothetical Bias in Value Elicitation Methods,” in Handbook of Experimental Economics Results, C.R. Plott and V.L. Smith, eds., forthcoming.

Kahneman, Daniel, Jack L. Knetsch, and Richard H. Thaler, “Experimental Tests of the Endowment Effect and the Coase Theorem,” Journal of Political Economy, December 1990, 98(6): 1325-1348.

List, John A., “Do Explicit Warnings Eliminate the Hypothetical Bias in Elicitation Procedures? Evidence from Field Auctions for Sportscards,” American Economic Review, 2001, 91(5): 1498-1507.

List, John A., “Does Market Experience Eliminate Market Anomalies?” Quarterly Journal of Economics, 2003, 118(1): 41-71.

List, John A., “Neoclassical Theory Versus Prospect Theory: Evidence from the Field,” Econometrica, 2004, 72(2): 615-625.

List, John A., and Craig Gallet, “What Experimental Protocol Influence Disparities Between Actual and Hypothetical Stated Values? Evidence from a Meta-Analysis,” Environmental and Resource Economics, 2001, 20(3): 241-254.

List, John A., Paramita Sinha, and Michael H. Taylor,