Expert Elicitation: Using the Classical Model to Validate Experts’ Judgments

Date

Feb. 2, 2018

Authors

Abigail Colson and Roger Cooke

Publication

Journal Article

Expert judgments are an unavoidable input, alongside other forms of data, in science, engineering, and decision making. Expert elicitation refers to formal procedures for obtaining and combining such judgments, and it is needed precisely when existing data and models cannot supply the required information. That makes validation difficult: because expert judgments are used where other data do not exist, there is rarely anything against which to measure their accuracy. This article examines the classical model of structured expert judgment, an elicitation method that builds validation against empirical data into the process. In the classical model, experts assess both the unknown target questions and a set of calibration questions drawn from their field whose true values are known. Experts are scored on their performance on the calibration questions, and those scores are used to form performance-weighted combinations of their judgments. Between 2006 and March 2015, the classical model was used in thirty-three unique applications. Fewer than one-third of the individual experts in these studies were statistically accurate, underscoring the need for validation. Overall, the performance-weighted combination of experts produced by the classical model is both more statistically accurate and more informative than an equal weighting of the experts.
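To make the scoring step concrete, the sketch below shows one common form of the classical model's statistical-accuracy (calibration) score: each expert supplies 5%, 50%, and 95% quantiles for calibration questions with known answers, the realizations are tallied into the four inter-quantile bins, and the score is the p-value of the resulting chi-squared statistic. This is a minimal illustration only; the function names and toy data are hypothetical, and the full classical model also computes an information score and forms weights (roughly calibration times information, with a cutoff) that this sketch omits.

```python
# Minimal sketch of the calibration-scoring step in Cooke's classical model.
# Assumes each expert gives 5%/50%/95% quantiles for N calibration questions
# whose true values have been observed. Names and data are illustrative, not
# taken from any published implementation or study.

import numpy as np
from scipy.stats import chi2

# Theoretical probabilities of the four inter-quantile bins implied by
# 5%, 50%, and 95% quantile assessments.
P_BINS = np.array([0.05, 0.45, 0.45, 0.05])

def calibration_score(quantiles, realizations):
    """Statistical-accuracy (calibration) score for one expert.

    quantiles    : array of shape (N, 3) -- the expert's 5%, 50%, 95%
                   quantiles for N calibration questions
    realizations : array of shape (N,)   -- the observed true values
    """
    quantiles = np.asarray(quantiles, dtype=float)
    realizations = np.asarray(realizations, dtype=float)
    n = len(realizations)

    # Count how many realizations fall in each inter-quantile bin.
    counts = np.zeros(4)
    for q, x in zip(quantiles, realizations):
        if x <= q[0]:
            counts[0] += 1
        elif x <= q[1]:
            counts[1] += 1
        elif x <= q[2]:
            counts[2] += 1
        else:
            counts[3] += 1
    s = counts / n  # empirical bin frequencies

    # Relative entropy I(s, p) between empirical and theoretical frequencies.
    nonzero = s > 0
    rel_entropy = np.sum(s[nonzero] * np.log(s[nonzero] / P_BINS[nonzero]))

    # 2*N*I(s, p) is asymptotically chi-squared with 3 degrees of freedom;
    # the calibration score is the upper-tail p-value of that statistic.
    return chi2.sf(2 * n * rel_entropy, df=3)

# Toy usage: two hypothetical experts judged on five calibration questions.
truths = np.array([2.0, 10.0, 0.5, 7.0, 3.0])
expert_a = np.array([[1, 2, 4], [6, 9, 14], [0.2, 0.6, 1.0], [5, 8, 11], [2, 3, 5]])
expert_b = np.array([[3, 5, 6], [1, 2, 3], [2, 3, 4], [10, 12, 15], [6, 7, 9]])

for name, q in [("A", expert_a), ("B", expert_b)]:
    print(f"Expert {name}: calibration score = {calibration_score(q, truths):.4f}")
```

In this toy example, expert B's intervals repeatedly miss the true values, so B's calibration score is orders of magnitude smaller than A's. In the full classical model, such low-scoring experts receive little or no weight in the performance-weighted combination, whereas equal weighting ignores the scores entirely.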
