Superforecasting: Supremely Awesome
I’m already an unabashed Tetlock fanboy. But his latest book, Superforecasting: The Art and Science of Prediction (co-authored with Dan Gardner but still written in the first person) takes my fandom to new levels. Quick version: Philip Tetlock organized one of several teams competing to make accurate predictions about matters we normally leave to intelligence analysts. Examples: “Will the president of Tunisia flee to a cushy exile in the next month?” “Will an outbreak of H5N1 in China kill more than ten in the next month?” “Will the euro fall below $1.20 in the next twelve months?” Tetlock then crowdsourced, carefully measuring participants’ traits as well as their forecasts. He carefully studied the traits of the top 2% of performers, dubbing them “superforecasters.” And he ran lots of robustness checks and side experiments along the way.
Quick punchline:
The strongest predictor of rising into the ranks of superforecasters is perpetual beta, the degree to which one is committed to belief updating and self-improvement. It is roughly three times as powerful a predictor as its closest rival, intelligence.
But no punchline can do justice to the richness of this book. A few highlights…
An “obvious” claim we should all internalize:
Obviously, a forecast without a time frame is absurd. And yet, forecasters routinely make them… They’re not being dishonest, at least not usually. Rather, they’re relying on a shared implicit understanding, however rough, of the timeline they have in mind. That’s why forecasts without timelines don’t appear absurd when they are made. But as time passes, memories fade, and tacit time frames that seemed so obvious to all become less so.
The outrageous empirics of how humans convert qualitative claims into numerical probabilities:
In March 1951 National Intelligence Estimate (NIE) 29-51 was published. “Although it is impossible to determine which course of action the Kremlin is likely to adopt,” the report concluded, “we believe that the extent of [Eastern European] military and propaganda preparations indicate that an attack on Yugoslavia in 1951 should be considered a serious possibility.” …But a few days later, [Sherman] Kent was chatting with a senior State Department official who casually asked, “By the way, what did you people mean by the expression ‘serious possibility’? What kind of odds did you have in mind?” Kent said he was pessimistic. He felt the odds were about 65 to 35 in favor of an attack. The official was startled. He and his colleagues had taken “serious possibility” to mean much lower odds.
Disturbed, Kent went back to his team. They had all agreed to use “serious possibility” in the NIE so Kent asked each person, in turn, what he thought it meant. One analyst said it meant odds of about 80 to 20, or four times more likely than not that there would be an invasion. Another thought it meant odds of 20 to 80 – exactly the opposite. Other answers were scattered between these extremes. Kent was floored.
A deep finding that could easily reverse if widely known:
How can we be sure that when Brian Labatte makes an initial estimate of 70% but then stops himself and adjusts it to 65% the change is meaningful? The answer lies in the tournament data. Barbara Mellers has shown that granularity predicts accuracy: the average forecaster who sticks with tens – 20%, 30%, 40% – is less accurate than the finer-grained forecaster who uses fives – 20%, 25%, 30% – and still less accurate than the even finer-grained forecaster who uses ones – 20%, 21%, 22%. As a further test, she rounded forecasts to make them less granular… She then recalculated Brier scores and discovered that superforecasters lost accuracy in response to even the smallest-scale rounding, to the nearest 0.05, whereas regular forecasters lost little even from rounding four times as large, to the nearest 0.2.
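If the rounding test sounds abstract, here is a minimal sketch of the idea, in Python with made-up forecasts and outcomes rather than Mellers’ actual data or method: compute a Brier score, then round the forecasts to a coarser grid and see how much accuracy disappears. I use the simple binary Brier score, the mean of (probability minus outcome) squared; the tournament’s original two-alternative version is just twice this for yes/no questions.

```python
# Illustrative sketch only: how rounding probabilistic forecasts to a coarser
# grid changes Brier scores. Data below is simulated, not tournament data.
import random

def brier(probs, outcomes):
    """Mean squared error between forecast probabilities and 0/1 outcomes.
    (Brier's original two-alternative score is exactly twice this value.)"""
    return sum((p - o) ** 2 for p, o in zip(probs, outcomes)) / len(probs)

def round_to(probs, step):
    """Round each forecast to the nearest multiple of `step` (e.g. 0.05 or 0.2)."""
    return [round(round(p / step) * step, 10) for p in probs]

# Hypothetical questions: each resolves yes with 40% base rate, and the
# forecaster is somewhat informative (higher probability when the event occurs).
random.seed(0)
outcomes = [random.random() < 0.4 for _ in range(1000)]
forecasts = [min(max(0.4 + (0.35 if o else -0.35) + random.gauss(0, 0.1), 0.0), 1.0)
             for o in outcomes]

print(f"original forecasts: {brier(forecasts, outcomes):.4f}")
for step in (0.05, 0.1, 0.2):
    rounded = round_to(forecasts, step)
    print(f"rounded to {step:4}:   {brier(rounded, outcomes):.4f}")
```

The point of the book’s finding is that for superforecasters even the 0.05 rounding visibly worsens the score, because their extra digits carry real information; for regular forecasters the extra precision is mostly noise, so rounding costs them little.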
More highlights coming soon – the book is packed with them. But if any book is worth reading cover to cover, it’s Superforecasting.