Document Type

Article

Publication Date

1-2024

Abstract

Despite the importance of predictive judgments, individual human forecasts are frequently less accurate than those of even simple prediction algorithms. At the same time, not all forecasts are amenable to algorithmic prediction. Here, we describe the evaluation of an automated prediction tool that enabled participants to create simple rules that monitored relevant indicators (e.g., commodity prices) to automatically update their forecasts. We examined these rules in both a pool of previous participants in a geopolitical forecasting tournament (Study 1) and a naïve sample recruited from Mechanical Turk (Study 2). Across the two studies, we found that automated updates tended to improve forecast accuracy relative to initial forecasts and were comparable to manual updates. Additionally, making rules improved the accuracy of manual updates. Crowd forecasts likewise benefited from rule-based updates. However, when presented with the choice of whether to accept, reject, or adjust an automatic forecast update, participants showed little ability to discriminate between automated updates that were harmful versus beneficial to forecast accuracy. Simple prospective rule-based tools are thus able to improve forecast accuracy by offering accurate and efficient updates, but ensuring that forecasters make use of such tools remains a challenge.
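
As a rough illustration of the kind of rule the abstract describes (a monitored indicator triggering an automatic forecast update), the sketch below implements a hypothetical threshold rule. The function name, indicator, threshold, and adjustment values are illustrative assumptions for exposition, not details taken from the paper or its tool.

```python
# Hypothetical sketch of a prospective forecast-update rule: if a monitored
# indicator crosses a threshold, shift the probability forecast by a fixed
# amount; otherwise leave the original forecast unchanged.

def apply_rule(forecast: float, indicator_value: float,
               threshold: float, adjustment: float) -> float:
    """Return the updated probability forecast, clamped to [0, 1]."""
    if indicator_value >= threshold:
        return min(1.0, max(0.0, forecast + adjustment))
    return forecast

# Example (illustrative only): a forecaster sets the event probability at 0.40
# and adds a rule "if oil exceeds $95/barrel, raise my forecast by 15 points."
print(apply_rule(forecast=0.40, indicator_value=97.0,
                 threshold=95.0, adjustment=0.15))  # -> 0.55
```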

Comments

This version is a pre-print.

DOI

10.1037/dec0000227

