Expert Forecasting with and without Uncertainty Quantification and Weighting: What Do the Data Say?

An analysis of expert forecasting shows how quantifying expert uncertainty, and weighting experts by performance, can yield more accurate estimates.

Date: July 27, 2020

Authors: Roger Cooke, Deniz Marti, and Thomas Mazzuchi

Publication: Journal Article in International Journal of Forecasting

Abstract

Post-2006 expert judgment data have been extended to 530 experts assessing 580 calibration variables from their fields. New analysis shows that point predictions formed as medians of combined expert distributions outperform combinations of the experts' individual medians, and that medians of performance-weighted combinations outperform medians of equally weighted combinations. Relative to the equal-weight combination of medians, using the medians of performance-weighted combinations yields a 65% improvement; using the medians of equally weighted combinations yields a 46% improvement. The Random Expert Hypothesis underlying all performance-blind combination schemes, namely that differences in expert performance reflect random stressors rather than persistent properties of the experts, is tested by randomly scrambling expert panels. Generating distributions for a full set of performance metrics, the hypotheses that the original panels' performance measures are drawn from the distributions produced by random scrambling are rejected at significance levels ranging from 10⁻⁶ to 10⁻¹². Random stressors cannot produce the variations in performance seen in the original panels. In- and out-of-sample validation results are updated.
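
The abstract contrasts two ways of turning a panel's judgments into a point prediction: the median of a combined (mixture) distribution versus a combination of the experts' individual medians, under performance weights or equal weights. The sketch below illustrates only the mechanics on a hypothetical four-expert panel; the expert distributions, weights, and function names (`median_of_combination`, `combination_of_medians`) are invented for illustration and are not the paper's data or code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical four-expert panel: each expert's uncertainty about the
# quantity is summarized by a normal distribution (illustrative numbers).
expert_means = np.array([8.0, 10.0, 14.0, 11.0])
expert_sds = np.array([2.0, 1.0, 4.0, 1.5])

# Hypothetical weights: performance-based weights of the kind produced by
# scoring experts on calibration variables, versus equal weights.
perf_w = np.array([0.10, 0.50, 0.05, 0.35])
equal_w = np.full(4, 0.25)

def median_of_combination(weights, n=200_000):
    """Median of the weighted mixture of the expert distributions."""
    idx = rng.choice(len(weights), size=n, p=weights)  # pick expert by weight
    draws = rng.normal(expert_means[idx], expert_sds[idx])
    return np.median(draws)

def combination_of_medians(weights):
    """Weighted combination of the experts' individual medians."""
    return weights @ expert_means  # a normal's median equals its mean

for name, w in [("performance", perf_w), ("equal", equal_w)]:
    print(f"{name:>11} weights: median of combination = "
          f"{median_of_combination(w):.3f}, "
          f"combination of medians = {combination_of_medians(w):.3f}")
```

The scrambling result rests on a permutation argument: reassign experts across panels at random, recompute the performance metrics, and ask whether the original panels' values are plausible draws from the scrambled distribution. The toy below shows that logic only, with made-up scores and a made-up spread statistic standing in for the paper's full set of performance metrics.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical per-expert performance scores for three panels of four experts.
panel_scores = np.array([[0.90, 0.02, 0.30, 0.01],
                         [0.70, 0.50, 0.001, 0.10],
                         [0.05, 0.80, 0.20, 0.02]])

def total_spread(panels):
    # Statistic: summed within-panel spread (max minus min score).
    return float((panels.max(axis=1) - panels.min(axis=1)).sum())

observed = total_spread(panel_scores)

# Null distribution: scramble experts across panels and recompute.
null = np.array([
    total_spread(rng.permutation(panel_scores.ravel()).reshape(3, 4))
    for _ in range(10_000)
])
print(f"observed spread = {observed:.3f}, "
      f"scrambling p-value ~ {(null >= observed).mean():.4f}")
```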
