Conflicting Measurements Reduce Uncertainty in Climate Science? We Can Work It Out

Nov 8, 2018 | Roger M. Cooke, Bruce A. Wielicki

If the science “isn’t there yet,” scientists are supposed to disagree—without disagreement, science could not advance. Confronted with conflicting evidence, science communicators temporize, and media pundits would have us ignore both sides until the scientists all concur. Not only does this marginalize the science, it is also mathematically wrong. All measurements are subject to errors caused by “noise.” A single measurement could be deflected up or down by the noise, and we have no way of knowing which way it went. Given two conflicting measurements of the same quantity, the lower one has probably been deflected downward by its noise and the higher one upward; conditional on their disagreement, we expect the errors to be negatively correlated.
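A quick Monte Carlo sketch makes this intuition concrete. The numbers below are purely illustrative (they are not from the article): two instruments measure the same true value with independent Gaussian noise, and sorting each pair of readings shows that the lower reading is, on average, deflected below the truth and the higher reading above it.

```python
import random
import statistics

# Illustrative simulation: two equally noisy, independent measurements of
# the same quantity. The true value and noise level are made-up numbers.
random.seed(42)
TRUTH = 3.3      # assumed true value of the quantity being measured
SIGMA = 0.5      # assumed noise standard deviation for both instruments

low_errors, high_errors = [], []
for _ in range(100_000):
    a = random.gauss(TRUTH, SIGMA)
    b = random.gauss(TRUTH, SIGMA)
    low, high = min(a, b), max(a, b)
    low_errors.append(low - TRUTH)    # error of the lower reading
    high_errors.append(high - TRUTH)  # error of the higher reading

# On average, the lower reading's error is negative and the higher
# reading's error is positive: knowing the two disagree tells us
# something about which way each was deflected.
print(f"mean error of lower reading:  {statistics.mean(low_errors):+.3f}")
print(f"mean error of higher reading: {statistics.mean(high_errors):+.3f}")
```

With equal noise levels, the two mean errors are symmetric about zero; the disagreement itself carries information that a single measurement cannot.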

Knowing this gives a great deal of information that we ignore at our peril. In a recent article in Climatic Change, we work this out for current and enhanced satellite-based measurement platforms for equilibrium climate sensitivity (ECS, the amount by which global mean surface temperature will eventually rise after a doubling of atmospheric CO2 concentrations). Good uncertainty quantification paired with good mathematics allows us to combine strongly discordant predictions of ECS while cutting the initial uncertainty dramatically. Most pundits and communicators talking about climate science have it wrong. Getting it right means raising our game with regard to uncertainty accounting. In the meantime, good graphical software can hone our probabilistic intuitions. (A related video by Roger Cooke, a coauthor of this post, demonstrates one such software program, which is freely available at —Ed.)

Most of what we measure are proxies for the real quantities of interest. For past global temperatures we measure tree rings, ice cores, coral bands, isotope fractionation, and so on. ECS is our variable of interest. A 2016 interagency working group memo on the social cost of carbon estimates the expected value of ECS at 3.3°C. Given a business-as-usual emissions scenario, one of the proxies is the rise in global mean surface temperature. A linear increase of 0.2°C per decade would lead to 2°C warming in 100 years. Imagine how difficult it is to measure a 0.2°C change in average surface temperature over 10 years. Most of what we know comes from weather data, including surface measurements as well as those recorded by ships and satellites, most of which are not designed for observing small temperature changes over decades.

We use satellite weather data in our example, detailed in the new article. The Intergovernmental Panel on Climate Change (IPCC) does not use satellite data for its surface temperature trends relative to ECS; instead it uses surface air temperature observations, ship observations, and weather balloon observations. These are not calibrated extremely well and the IPCC relies on the fact that many different observing systems give similar results. Our analysis shows why combining noisy proxies requires knowledge of the noise.

Another proxy is the decadal percentage change in reflected solar cloud radiative forcing (CRF)—this is the ability of clouds to change the Earth’s reflected solar energy, which directly affects the amount of warming expected globally. An increase of 1 percent in CRF in 10 years using current observing systems would shift the predicted ECS to 3.9°C. Spoiler alert: the uncertainty in our 3.9°C prediction would actually be greater than our current uncertainty in the 3.3°C prediction. How is that possible? This phenomenon—negative learning—arises when noisy measurements produce unexpected values. For example, uncertainty in my current prediction of the stock market in the next six months is based on historical data. An unreliable forecaster predicting a large gain in six months might raise my prediction but also make it less certain. Statisticians term this phenomenon the “persistence of prior beliefs.” Noisy measurements cannot—and should not—entirely wipe out prior beliefs.
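The persistence of prior beliefs is easiest to see in the simplest textbook setting: a Gaussian prior updated with one Gaussian measurement. The posterior mean is a precision-weighted average, so a very noisy measurement gets very little weight. This is only a sketch with made-up numbers; the article's full uncertainty quantification is richer (in particular, a surprising measurement there can actually widen the spread—negative learning—which this simple conjugate model cannot reproduce).

```python
def gaussian_update(prior_mean, prior_var, obs, obs_var):
    """Posterior for a Gaussian prior combined with one Gaussian measurement.

    The posterior mean is a precision-weighted average: the noisier the
    measurement (large obs_var), the less it moves the prior.
    """
    weight = prior_var / (prior_var + obs_var)   # weight given to the data
    post_mean = prior_mean + weight * (obs - prior_mean)
    post_var = prior_var * obs_var / (prior_var + obs_var)
    return post_mean, post_var

# Hypothetical numbers (not the article's): a prior belief of 3.3 degrees C
# for ECS, and a noisy proxy that, taken at face value, suggests 5.0.
mean, var = gaussian_update(prior_mean=3.3, prior_var=1.0, obs=5.0, obs_var=9.0)
print(mean, var)   # the prior persists: the posterior mean barely moves
```

The noisier the observation, the closer the posterior stays to the prior—exactly the persistence effect described above.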

Engineers at NASA have spent years designing more accurate systems of measurement. In order to better understand the value of these new systems, they have exhaustively characterized the uncertainty in both the current and proposed enhanced systems (see references below). It turns out that a measured value of 1 percent per decade with the new system would yield a prediction for ECS of 6.1°C. Greater accuracy reduces the persistence of prior beliefs. Imagine these two teams duking it out at a conference: 3.9°C versus 6.1°C, after they each measured the same rate of 1 percent per decade.

Disagreement contains a great deal of useful information—this is perhaps the most important insight to be gained from a proper accounting of uncertainty in science, with valid mathematical propagation of uncertainty through proxies. Suppose the current system for measuring decadal temperature rise finds a value of 0.1°C per decade leading to a predicted ECS value of 2.3°C. Suppose the current team measuring CRF finds a high value of 1 percent per decade with a predicted ECS value of 3.9°C. When we combine these two values, our resulting uncertainty about ECS actually drops to about one half of its prior value. Measuring the same values with the enhanced systems would slash uncertainty further—to one quarter of its prior value.
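The arithmetic behind that drop can be sketched with inverse-variance weighting, the standard way to pool independent estimates with known noise levels. The variances below are placeholders rather than the article's values, but the qualitative point carries over: precisions add, so combining two equally uncertain, independent estimates cuts the variance in half.

```python
def combine_estimates(m1, v1, m2, v2):
    """Pool two independent estimates (mean, variance) of the same quantity.

    Inverse-variance weighting: precisions (1/variance) add, so the
    combined estimate is always tighter than either input.
    """
    precision = 1.0 / v1 + 1.0 / v2
    mean = (m1 / v1 + m2 / v2) / precision
    return mean, 1.0 / precision

# Hypothetical: the temperature-trend proxy suggests ECS = 2.3, the cloud
# proxy suggests ECS = 3.9; assume equal placeholder variances of 1.0.
mean, var = combine_estimates(2.3, 1.0, 3.9, 1.0)
print(mean, var)   # combined variance is half of either input's
```

The article's analysis goes further—accounting for the negative correlation of errors given the disagreement, and for the proxies' differing noise—but the pooling logic is the same.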

We really can work it out: Rigorous uncertainty quantification and valid mathematical propagation can replace fussing and fighting. Of course, there are assumptions and caveats, which we cover in our new article and in the background literature listed below.

Funding for this research from NASA, NNX17AD55G, is gratefully acknowledged.

The views expressed in RFF blog posts are those of the authors and should not be attributed to Resources for the Future.