Assessing the Accuracy of Norwegian Expert Predictions on the War in Ukraine
FFI-Report
2024
This publication is only available in Norwegian
About the publication
Report number: 24/00157
ISBN: 978-82-464-3515-2
Format: PDF document
Size: 1.4 MB
Language: Norwegian
This study assesses the accuracy of Norwegian experts’ predictions about the war in Ukraine. Expert predictions play an important role in shaping expectations about political and military developments. Despite this, our understanding of the ability of experts to predict the future is relatively limited. Previous studies have attempted to measure the accuracy of expert predictions through surveys, where the experts had to assign numerical probabilities to specific outcomes. These studies suggest that the accuracy of the experts is on par with pure guesswork. However, experts do not normally make predictions through tournaments and surveys. Rather, they generally predict outcomes only within their area of expertise, and they use words, rather than numbers, when describing probabilities. Here, we assess the accuracy of expert predictions based on their own statements to the media.
Our study is based on 173 predictions from 10 of the most cited experts in the Norwegian media on questions about the war in Ukraine. We manually surveyed the expert statements to identify predictions. We define predictions as statements about future outcomes that include a probability assessment. By probability assessment, we mean words and expressions that describe how likely the outcome is, such as ‘not likely’ and ‘probable’. Accuracy is measured by examining how often the outcome that experts pointed to as the most probable actually occurred.
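As an illustration, the hit-rate measure described above amounts to the share of predictions where the outcome judged most probable actually occurred. The sketch below shows this calculation in Python on a hypothetical set of prediction records; the field names and example entries are placeholders, not data from the study.

```python
# Minimal sketch (not from the report) of the hit-rate measure: the share of
# predictions where the outcome the expert judged most probable actually occurred.
# The record structure and example entries are hypothetical placeholders.

predictions = [
    {"expert": "Expert A", "topic": "full-scale invasion", "correct": False},
    {"expert": "Expert A", "topic": "use of nuclear weapons", "correct": True},
    {"expert": "Expert B", "topic": "full-scale invasion", "correct": True},
    # ... one record per identified prediction (173 in the study)
]

def hit_rate(records):
    """Fraction of records where the outcome deemed most probable occurred."""
    return sum(r["correct"] for r in records) / len(records)

print(f"Overall hit rate: {hit_rate(predictions):.0%}")

# Per-expert hit rates. Small, uneven per-expert sample sizes make individual
# comparisons unreliable, as discussed in the summary.
for expert in sorted({r["expert"] for r in predictions}):
    subset = [r for r in predictions if r["expert"] == expert]
    print(f"{expert}: {hit_rate(subset):.0%} on {len(subset)} predictions")
```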
The results show that experts anticipated the correct outcome on more than four out of five predictions (81 percent). However, the experts performed significantly worse on questions about whether Russia would engage in a full-scale invasion and how the invasion would unfold (48 percent). Interestingly, the experts who failed to anticipate the invasion were not significantly less accurate in their predictions about other outcomes than the experts who correctly predicted it. Individual hit rates range from 68 percent to 100 percent, but the number of predictions each expert made varies significantly. Our dataset does not provide a basis for claims about systematic individual differences in predictive ability, because experts in the media make predictions about different topics and at different times. In addition, the use of verbal rather than numerical probability assessments makes it hard to identify experts who pointed to the same outcome but may not have been equally convinced that it would occur.
The surprisingly high accuracy compared to previous studies is most likely explained by the type of outcomes experts make verifiable predictions about in the media. Many of the expert predictions identified in this study concern outcomes that are considered very unlikely, such as the use of nuclear weapons. By comparison, consistently predicting ‘no change’ on the same questions would have achieved the same level of accuracy as the experts (80 percent).
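For comparison, the ‘no change’ baseline mentioned above can be computed as the share of questions where the status quo held. The snippet below sketches that comparison; the question records are hypothetical placeholders, and only the 81 and 80 percent figures come from the report.

```python
# Illustrative sketch of the 'no change' baseline: a strategy that always
# predicts the status quo is correct exactly when no change occurred.
# The question records below are hypothetical placeholders.

questions = [
    {"question": "use of nuclear weapons", "change_occurred": False},
    {"question": "full-scale invasion", "change_occurred": True},
    # ... one record per question covered by the expert predictions
]

baseline_accuracy = sum(not q["change_occurred"] for q in questions) / len(questions)
expert_accuracy = 0.81  # overall hit rate reported in the study

print(f"Experts: {expert_accuracy:.0%}  'No change' baseline: {baseline_accuracy:.0%}")
```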
Although the results suggest that the most explicit expert predictions in the media will usually be correct, the generalizability of these findings to other situations where experts are used is uncertain. If the high hit rates on questions about the war in Ukraine are partly explained by the fact that experts only make verifiable statements about relatively rare phenomena, the same level of accuracy cannot be expected for questions about more common types of events.