Quantifying Uncertainty on Thin Ice

Jan 14, 2013 | Roger M. Cooke

The IPCC's fourth assessment report, projecting sea level rise in 2100 of 18 to 59 cm, excluded the contribution from ice sheets because the ice sheet models were not up to snuff. They still aren't, but researchers Bamber and Aspinall at the University of Bristol have found a work-around: structured expert judgment (SEJ). Their first results were published in Nature Climate Change on Jan. 6, 2013 and reveal a contribution to sea level rise from ice sheets in 2100 whose median estimate is 29 cm and whose 95th percentile is 84 cm.

Ouch. Taking into account contributions from glaciers and ice caps (12.4 ± 4 cm) and thermal expansion of the ocean (14-32 cm), we're looking at a range of 33-132 cm in 2100, according to Bamber and Aspinall.

The media and blogosphere are abuzz. See here, here, here, here, here, here, here, here, here, and here.

The article's supplementary online material gives a feel for structured expert judgment. Figure S1 below gives "range graph plots showing the individual responses by experts to the key quantitative questions in the 2010 survey (blue) and repeat 2012 survey (red). Also shown are the Decision Maker (DM) pooled estimates based on self weighting (“confidence” multiplied by “expertise” from items 1-3, see questionnaire), equal weights and performance weighting using the Classical Model of Cooke. In general the latter, performance-based DM solutions have the smallest 90 percent credible ranges, providing an optimized pooling estimate of the associated uncertainty range."

Reprinted by permission from Macmillan Publishers Ltd: Nature Climate Change, An expert judgement assessment of future sea level rise from the ice sheets, J. L. Bamber, W. P. Aspinall, copyright 2013

"M" denotes modeler, "O" denotes observationalist, the results concern contributions in year 2100. Experts were combined according to "self-weights" (they assess their own expertise and confidence), "equal weights" and "Performance based weights". The latter are determined by each expert's statistical accuracy as measured on questions from their field to which true values are known post hoc, and informativeness. Note that self-weights and equal weights tell the same story, but Performance weights yield a more informative decision maker.

There is lots of good stuff in this article. The assessment of experts' and decision makers' statistical accuracy was greatly enhanced in a follow-up elicitation conducted last October, in which I was involved. On this occasion a serious effort was made to capture tail dependence in the uncertainty of drivers of ice sheet dynamics. The write-up is proceeding with all deliberate speed.

Is structured expert judgment science? Well, experts' performance is measured with standard statistical tools of hypothesis testing, based on their assessments of variables from their field to which answers are known post hoc. In this very real sense, the results of an SEJ are falsifiable (pace Karl Popper), and are sometimes falsified. Is ignoring effects we cannot competently model better science? SEJ is for quantifying uncertainty, not removing it. For that we must do real science.
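The hypothesis test behind that claim can be sketched concretely. If an expert's 5%, 50%, and 95% quantiles are statistically accurate, realizations of the seed variables fall into the four inter-quantile bins with probabilities (0.05, 0.45, 0.45, 0.05), and 2N times the relative entropy of the observed bin frequencies with respect to these probabilities is asymptotically chi-square with 3 degrees of freedom. The sketch below, using invented seed data and the standard closed-form chi-square survival function for 3 degrees of freedom, computes that p-value; a score near zero means the expert's assessments are, in effect, falsified.

```python
import math

# Theoretical masses of the four inter-quantile bins for 5/50/95 quantiles.
P_BINS = (0.05, 0.45, 0.45, 0.05)

def bin_fractions(quantiles, realizations):
    """Fraction of known answers falling in each inter-quantile bin."""
    counts = [0, 0, 0, 0]
    for (q05, q50, q95), x in zip(quantiles, realizations):
        if x <= q05:
            counts[0] += 1
        elif x <= q50:
            counts[1] += 1
        elif x <= q95:
            counts[2] += 1
        else:
            counts[3] += 1
    n = len(realizations)
    return [c / n for c in counts]

def calibration(quantiles, realizations):
    """P-value of the hypothesis that the expert's quantiles are accurate:
    2N * KL(empirical || theoretical) is ~ chi-square with 3 df."""
    n = len(realizations)
    s = bin_fractions(quantiles, realizations)
    kl = sum(si * math.log(si / pi) for si, pi in zip(s, P_BINS) if si > 0)
    x = 2 * n * kl
    if x == 0:
        return 1.0
    # Closed-form chi-square survival function for 3 degrees of freedom.
    return math.erfc(math.sqrt(x / 2)) + math.sqrt(2 * x / math.pi) * math.exp(-x / 2)

# Invented seed questions with known answers, and two invented experts.
answers = [10.0, 20.0, 30.0, 40.0]
wide = [(5, 11, 25), (12, 19, 33), (22, 31, 45), (33, 41, 55)]     # realistic ranges
narrow = [(11, 12, 13), (21, 22, 23), (31, 32, 33), (41, 42, 43)]  # overconfident

print(calibration(wide, answers))    # comfortably above a 5% cutoff
print(calibration(narrow, answers))  # near zero: effectively falsified
```

In the Classical Model the calibration score is then multiplied by an informativeness score (omitted here) to produce the performance-based weight, with poorly calibrated experts cut off entirely.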