Nassim Taleb is famous, among other things, for his focus on “fat tails.” If events are normally distributed, the chance a year will be three standard deviations worse than average is about 1-in-a-1,000. Yet annual stock market returns come in three SDs worse than average far more often than that. Global war deaths, similarly, span so many orders of magnitude that you need a logarithmic scale just to chart them.
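As a quick sanity check on that figure (my own back-of-the-envelope sketch, not part of the original post), the one-sided tail of a normal distribution beyond three standard deviations works out to roughly 1-in-740, which rounds loosely to the 1-in-1,000 quoted above:

```python
import math

# One-sided probability that a standard normal lands at least 3 SD below the mean
p = 0.5 * (1 + math.erf(-3 / math.sqrt(2)))
print(p)      # ~0.00135
print(1 / p)  # ~741, i.e. "about 1-in-a-1,000" in round numbers
```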
The lesson: Either humanity has been getting absurdly unlucky, or important distributions in the real world must have “fat tails.” Taleb, as you probably recall, has a special name for fat-tail disasters: “black swans.” And he’s famously angry that we don’t take black swans much more seriously. A typical Taleb passage:
Let us illustrate one of the problems of thin-tailed thinking with a real-world example. People quote so-called "empirical" data to tell us we are foolish to worry about ebola when only two Americans died of ebola in 2016. We are told that we should worry more about deaths from diabetes or people tangled in their bedsheets. Let us think about it in terms of tails. But if we were to read in the newspaper that 2 billion people have died suddenly, it is far more likely that they died of ebola than of smoking or diabetes or tangled bedsheets.
The final sentence is eminently reasonable. But on reflection, Taleb’s claim would still stand even if ebola deaths did follow a normal distribution! How? Just imagine that the universe “throws the dice” for bedsheet deaths a billion times a day, but throws the dice for ebola deaths only once per decade - or once per century. Even if every dice throw is normally distributed, you are far more likely to get 2 billion deaths in the latter scenario, because each rare throw settles the fate of a huge share of humanity at once, while each bedsheet throw settles the fate of a single person. In fact, conditional on 2 billion people dying, ebola is trillions of times more likely than bedsheets to be the cause.
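To see why the number of dice throws matters so much, here is a rough sketch. The per-person bedsheet risk and the epidemic distribution below are illustrative assumptions of mine, not figures from the post; the point is only that a sum of trillions of tiny independent throws almost never strays far from its mean, while a single throw that scales with the whole population can reach 2 billion deaths within a few standard deviations:

```python
import math

# Illustrative numbers only -- not from the post. The point: 2 billion bedsheet
# deaths would require the sum of trillions of tiny independent "throws" to land
# millions of SDs from its mean, while one big epidemic "throw" per decade needs
# only a handful.
WORLD = 8e9
TARGET = 2e9                       # 2 billion deaths

# Bedsheets: one independent throw per person per day, for a decade.
p_daily = 1e-9                     # assumed per-person daily risk
n = WORLD * 365 * 10               # number of throws in a decade
mean_b = n * p_daily
sd_b = math.sqrt(n * p_daily * (1 - p_daily))
print(f"bedsheets: {(TARGET - mean_b) / sd_b:,.0f} SDs above the mean")  # ~12 million

# Epidemic: one throw per decade for the share of humanity killed, modeled
# (generously for the argument) as Normal(0.1% of the world, SD 5% of the world).
mean_e, sd_e = 0.001 * WORLD, 0.05 * WORLD
print(f"epidemic:  {(TARGET - mean_e) / sd_e:.1f} SDs above the mean")   # ~5
```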
Scary? That still depends on the unconditional odds of 2 billion people dying. If that is a one-in-a-billion event, complacency remains the path of wisdom. As I explained in my 2018 conversation with Taleb:
Suppose we’re talking about the possibility that one day you were going to eat something lethally poisoned. One thing you could do is take a lot of precautions against eating poison. You could hire a poison taster, or you could go and read up about what kinds of foods are hard to poison.
On the other hand, you could just go and say, “Hardly anyone today gets poisoned by lethal poison put in their food. So low probability, and I’m not worried about it.”
The latter reaction is actually my reaction. Am I wrong? Should I have been worried about being lethally poisoned or not?
My general point: The case for caution depends on many factors. Sure, “fat tails” are one reason to be nervous. Another is: “the universe throws the dice rarely.” Then there’s: “The cost of caution is low” and “The bad outcomes are truly awful.” And never forget, “Caution won’t greatly amplify the risk of other awful outcomes.” Pace Taleb, we shouldn’t fixate on fat tails. We should weigh all these factors and more, then do standard cost-benefit analysis.
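A minimal sketch of that cost-benefit framing, with placeholder numbers of my own invention (the post supplies none):

```python
def caution_worth_it(p_disaster: float, disaster_cost: float,
                     risk_reduction: float, caution_cost: float) -> bool:
    """True if the expected harm avoided by caution exceeds what caution costs."""
    return p_disaster * risk_reduction * disaster_cost > caution_cost

# Hypothetical inputs: a 1-in-1,000 disaster costing a million units, where
# caution halves the risk but itself costs 2,000 units.
print(caution_worth_it(p_disaster=1e-3, disaster_cost=1e6,
                       risk_reduction=0.5, caution_cost=2000))  # False (500 < 2,000)
```

The same arithmetic flips as soon as the disaster is bad enough or caution is cheap enough, which is exactly why all of the listed factors matter.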
At this point, I’m tempted to opine on the possibility that AI will exterminate humanity. I know that many smart people are terrified; I even bet one of the smartest such people. Obviously several of the preceding factors - fat tails included - weigh in favor of fear. But others weigh against. The cost of caution is very high; unless humanity shuts down the internet, AI will keep improving rapidly. And yes, crushing AI really does amplify the risk of other awful outcomes. Like nuclear war, as my doomsday betting partner hair-raisingly admits:
Make it explicit in international diplomacy that preventing AI extinction scenarios is considered a priority above preventing a full nuclear exchange, and that allied nuclear countries are willing to run some risk of nuclear exchange if that’s what it takes to reduce the risk of large AI training runs.
Despite my temptation to opine, I won’t. I’ve listened to the fearful past the point of patience, and I remain convinced that the risks of AI extermination are barely worth thinking about. And for once, the vast majority of humanity seems to agree with me: Even if you can get most people to say we’re doomed, only a few nerds are panicking. If you’re in the worried minority, I don’t know how to change your mind. All I can do is affirm my continued willingness to bet against AI disaster.
Bryan’s old post on Batman vs. Superman is highly relevant to the AI x-risk debate.
https://www.econlib.org/archives/2016/12/the_dumbest_thi.html
https://twitter.com/ageofinfovores/status/1658121358763606020?s=46&t=yzu7Smeja7K2V_b0NhPGBA
1. An extreme asymmetry exists between a global catastrophe and a global extinction event. Eliminating a majority of the human population preserves the possibility of a long future for humanity, because we can rebound, but total extinction means all that future human utility is lost. Parfit makes this point in Reasons and Persons. The number of possible future people is extraordinarily large, so the loss from extinction is really, really bad (see Bostrom's article "Astronomical Waste"). For this reason, I weigh a nuclear war as far less bad than AI extinction.
2. Using past data about existential catastrophes can be misleading because of an issue that Ćirković et al. (2010) call the "anthropic shadow." We will necessarily underestimate the frequency of world-destroying events, because had they occurred, we would not be around to record them. (A toy simulation of this selection effect appears below.)
3. Certain new technologies (AI, bioweapons, nanobots, etc.) are potentially much more lethal than past wars and may make your poison analogy a poor fit. Imagine that for the past few thousand years humanity created one new substance per decade that you had to try, but in the current decade we are making hundreds, and you have to consume them all. Eventually, one might be lethal. (The second sketch below puts rough numbers on this.)
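A toy simulation of item 2's "anthropic shadow" (my own illustration, with made-up numbers): whenever a world-ending event strikes, it deletes the observers who would have recorded it, so the survivors' historical estimate of the risk is biased toward zero no matter how high the true risk is.

```python
import random

random.seed(0)
TRUE_RISK = 0.01        # assumed per-century chance of an extinction-level event
CENTURIES = 100
WORLDS = 100_000

survivor_estimates = []
for _ in range(WORLDS):
    wiped_out = any(random.random() < TRUE_RISK for _ in range(CENTURIES))
    if not wiped_out:
        # Surviving observers look back on a spotless record and infer zero risk.
        survivor_estimates.append(0.0)

print(f"true per-century risk:         {TRUE_RISK}")
print(f"average estimate by survivors: {sum(survivor_estimates) / len(survivor_estimates)}")  # 0.0
print(f"worlds with anyone left to estimate: {len(survivor_estimates) / WORLDS:.2f}")         # ~0.37
```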
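And rough numbers for item 3's analogy (again, the per-substance probability is an assumption for illustration): even if each new "substance" carries the same small chance of being lethal, trying hundreds per decade instead of one changes the cumulative odds dramatically.

```python
P_LETHAL = 0.001   # assumed chance that any single new substance is lethal

def chance_at_least_one_lethal(n_substances: int) -> float:
    return 1 - (1 - P_LETHAL) ** n_substances

print(chance_at_least_one_lethal(1))     # ~0.001 (one new substance per decade)
print(chance_at_least_one_lethal(300))   # ~0.26  (hundreds per decade)
```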