I’m a big fan of the Effective Altruism movement, but I have noticed an unfortunate tendency to obsess over bizarre issues primarily because they are intellectually engrossing. The Doomsday Argument. The “AIs will exterminate us” panic. Shrimp welfare. Which, as this passage from Tolstoy’s Anna Karenina illustrates, is a long-standing problem among self-conscious humanitarians:
Konstantin Levin regarded his brother as a man of great intelligence and education, noble in the highest sense of the word, and endowed with the ability to act for the common good. But, in the depths of his soul, the older he became and the more closely he got to know his brother, the more often it occurred to him that this ability to act for the common good, of which he felt himself completely deprived, was perhaps not a virtue but, on the contrary, a lack of something - not a lack of good, honest and noble desires and tastes, but a lack of life force, of what is known as heart, of that yearning which makes a man choose one out of all the countless paths in life presented to him and desire that one alone.
The more he knew his brother, the more he noticed that Sergei Ivanovich and many other workers for the common good had not been brought to this love of the common good by the heart, but had reasoned in their minds that it was good to be concerned with it and were concerned with it only because of that. And Levin was confirmed in this surmise by observing that his brother took questions about the common good and the immortality of the soul no closer to heart than those about a game of chess or the clever construction of a new machine.
Of course, this is a minor flaw compared to the Leninist practice of negligently doing massive evil in the vague hope of realizing a “greater good.” As I’ve previously explained:
The key difference between a normal utilitarian and a Leninist: When a normal utilitarian concludes that mass murder would maximize social utility, he checks his work! He goes over his calculations with a fine-tooth comb, hoping to discover a way to implement beneficial policy changes without horrific atrocities. The Leninist, in contrast, reasons backwards from the atrocities that emotionally inspire him to the utilitarian argument that morally justifies his atrocities.
While I’m on the topic, Hanania is right that EA must either become anti-woke or die:
If you’re giving a lecture on a university campus and a young student starts crying about how your ideas are putting her in danger, the only real options are to turn her into an object of mockery and contempt, or to submit and let her decide what you’re allowed to say. Conservatives take the former approach, while the vast majority of journalistic and academic institutions have taken the latter option. Either path can lead to a stable equilibrium. What isn’t stable is taking a sort of in-between position.
P.S. I’m always happy to speak to EA clubs whenever I’m nearby. I’m most eager to sell EAs on open borders and housing deregulation, but we could even talk about this very post!
Many intelligent people are interested in intellectually engrossing subjects that are unimportant. The unifying theme of EA is that its subjects have a plausible claim to being very important.
If the Doomsday Argument is true, we will likely all die soon. If the AI extinction scenarios are true, we will likely all die soon. And if animal welfare advocates are right about the suffering of animals, then that suffering is immense and may overwhelm all human suffering.
Yes, these topics are "bizarre," but why should that matter if a plausible case can be made for their importance? For those who think we should not spend time on low-probability existential catastrophes and doomsday arguments: at what probability of human extinction would it become reasonable for some Effective Altruists to worry about these things? 10%? 1%? 0.1%?
I think there's a third option for dealing with the person in the audience: ignore her.