Podcaster Joe Walker recently interviewed Peter Singer and wound up confirming my “noble lie” explanation for Singer’s moral revisionism. Excerpt reprinted with Walker’s permission.
WALKER: Fair enough. Okay, some questions about esoteric morality.
So you have this really interesting paper with de Lazari-Radek called, “Secrecy in Consequentialism: A Defence of Esoteric Morality,” which actually Bryan Caplan brought to my attention after you had a recent debate with him.
SINGER: But you're not promoting our book, The Point of View of the Universe. Because, again, that's a paper that we developed, and it became an entire chapter in that book.
WALKER: Yeah, of course.
SINGER: So those who want to read all of these interesting things you are talking about, please order The Point of View of the Universe.
WALKER: Yeah, exactly. Great book for anyone interested in these issues!
SINGER: Thank you, thank you [laughs].
WALKER: Could you briefly outline the broad argument of the paper, and then I'll ask some specific questions.
SINGER: Sure. And this also takes its lead from something that Sidgwick wrote in The Methods of Ethics. And the question here is: to what extent should a utilitarian follow generally accepted moral rules?
That's a large debate that's been going on for some time between utilitarians and opponents who say that there are moral rules that we ought to keep. And utilitarians like Sidgwick want to say, “No, you shouldn't stick to a moral rule no matter what the circumstances. There could be cases where you should break even generally accepted moral rules.” But moral rules do, in general, tend to lead us to make sound decisions. So utilitarians don't think that in absolutely every decision you make, you should always try and calculate the consequences from scratch. They would say, let's say, you're walking down the street near your home, and a stranger comes up to you and says, “Can you tell me where the nearest train station is?” And you know this very well, so you should tell the stranger where the nearest train station is. That will normally be a good thing to do. You could, of course, lie. And you could say the train station is thataway, when you know that's the opposite direction. But why would you do that? Generally speaking, helping strangers who ask for information does good. So you don't have to try and do those calculations.
But there are some circumstances in which you might produce better consequences by not following a rule. The problem with saying to a utilitarian, “Don't follow the rule” in these circumstances is that it might weaken trust in the rule or it might weaken respect for the rule. So if other people know that utilitarians are going around breaking rules all the time, then maybe that will lead to a worse state overall, because people will break rules when they really shouldn't be breaking rules, or break rules for their own convenience, or because of some irrelevant emotion that they have at the time. And that won't be a good thing.
So Sidgwick then raises the question: what should utilitarians do in circumstances where you could do more good if you break the rule, except for the fact that you will weaken support for the rule, and that will be a larger bad consequence than the good consequence that you'd achieve by breaking the rule?
And Sidgwick then says, “Well, sometimes it may be the case that you can only do good if you can keep what you're doing secret.” So this is what's known as esoteric morality: the idea that sometimes you should do something, and the fact that it's the right thing to do will be true if you can keep it secret, but if you can't keep it a secret it won't be the right thing to do. So that's essentially the sense of keeping morality esoteric.
And that's been a controversial doctrine for Sidgwick. And it’s another point on which utilitarians, and Sidgwick in particular, were attacked by Bernard Williams, because Bernard Williams refers to this as “government house morality.” What he means by that is: government house in the heyday, let's say, of the British Empire, where the British colonised various peoples in other parts of the world. And you imagine them living in their nice white-painted Victorian-style government house building, making rules for the betterment of the “natives” of those places and saying, “Well, of course, they're rather simple people, they don't really know what's the best thing to do. So we need to make some rules which apply to them. And we’ll educate them or bring them up, if you like, indoctrinate them into believing that these are the right things to do. But of course, we sophisticated government bureaucrats will know that, actually, it's not always the right thing to do. And we will sometimes break those rules ourselves for the general good, but we wouldn't actually tell the local people that we're breaking those rules, because then they would not keep the rules that it would be best for them to keep.”
So essentially, Williams is saying, this idea of esoteric morality divides people into the uneducated masses, who have to be brought up with simple rules, and the more powerful elites, who think that the rules don't apply to them. And that's obviously an unpleasant way to view morality.
In the article that you mentioned, and then also in the chapter in The Point of View of the Universe, Katarzyna and I defend Sidgwick and say that, of course, the whole attitude that Williams is talking about — the idea that our nation (white people, presumably) has the right to rule over others, is wiser than they are, and knows more about their situation than they do — is objectionable. But that is not an inherent part of esoteric morality. There may be many circumstances in which you don't have those assumptions. But it may still be the case that you ought to breach some rule, and that it would be better if other people did not know about the breach, and therefore did not have their confidence in the rule weakened.
WALKER: Thank you. So some specific questions.
In the paper, you consider the standard, originally proposed in your famous paper 'Famine, Affluence, and Morality' — the standard that people should give everything they can spare to the global poor. But you and Katarzyna write: “Perhaps advocating so demanding a standard will just make people cynical about morality as a whole. If that is what it takes to live ethically, they may say, ‘Let's just forget about ethics and just have fun.’ If, however, we were to promote the idea that living ethically involves donating, say, 10% of your income to the poor, we may get better results.”
So, am I correct in thinking that the 10% recommendation is just a straight up example of esoteric morality? And because this is only audio and we're not doing video, if you want to imply the opposite meaning of your verbal answer, just give me a wink.
SINGER: [laughs] Okay, no winks. I don't need to get winks here. And because this has been a fairly sophisticated philosophical discussion, I'm going to assume that the people who have listened to this point — and I'm relying on you not to put this upfront as the very first thing in the program — I'm assuming that people who listen to this point can follow the idea that, yes, we may want to promote a standard in general that is a reasonably simple standard, that is one that's easy to remember, that also picks up various religious traditions about the tithe (10% of your income donated to the poor), and that encourages people to do that, rather than produce a more demanding standard, which will (as in the quote you read) mean that fewer people actually follow it. And even though there are some people who then give significantly more than 10%, the total amount raised for people in extreme need is less than it would be if we'd promoted the 10% standard.
So if that is the case — and obviously it's a factual assumption whether it is the case — then I do think that that's an example of esoteric morality that, yeah, we will say, “Give 10%,” and many people will do that. But if people really want to inquire and think about this and challenge us and say, “Well, why 10%? I could give 20%, and that would do more good. I could give 40% and that would do still more good.” Then we'll be prepared to say, “Alright, so since you have thought through this and not just accepted the 10% guideline, then we'll acknowledge to you that this guideline was done for general acceptance to produce the most good, but if you understand the situation, and you are prepared to do more than 10%, great. Then you should do more than 10%.”
Let me say, by the way, that I haven't myself — certainly not in the last few years — endorsed the 10% guideline. I did talk about it, I think, at one stage many years ago. But in the book, The Life You Can Save (which your listeners can download free from thelifeyoucansave.org), at the back of the book, I have a kind of progressive table, more like an income tax table, that starts with something much lower than 10% for people who are really on fairly low incomes but still have a little bit more than they need, and goes up to 33.3% (a third of your income) for people who are really earning a lot. And essentially, even that 33.3% is not enough; I think people who are very wealthy ought to be giving more than that. But what I'm doing is, anyway, a step towards what I think people should really be doing; a step closer to that than just the 10% figure.
WALKER: So my next question is an empirical question. I see a possible tension between your approach to giving to the global poor and your approach to the treatment of animals. So with respect to animals, one could argue that the meat boycott you called for in Animal Liberation was a very demanding standard. And maybe it was better to just encourage people into pescetarianism, or something else, to avoid the greater evil of intensive farming. But instead, you went pretty hard in calling for a meat boycott. So why does giving to the global poor fall into the more esoteric bucket? Because I can see plenty of reasons why you might actually get better outcomes by publicly and consistently calling for the more demanding standard. I can also think of historical examples where that has been successful. For example, maybe you could view the abolition of slavery as a case where people kind of radically self-sacrificed over quite a short period of time.
SINGER: It's an interesting question. Because one difference between these is that with giving to the poor, there just is a continuum, right? There's no reason why you should use 10% rather than 11%, or 11% rather than 12%. It's a constant continuum. Whereas with slavery, to take that example, freeing the slaves is a demand that you can make, and that ends the evil you're trying to combat. Whereas reducing slavery, while it would do some good, still leaves this problem, essentially, as it was. And the other thing you have to remember about the abolition of the slave trade, or of slavery in general, is that there was never universal support for slavery. And this is a difference from… getting now to the animal issue.
For example, when British ships were transporting slaves from Africa to the United States, slavery was not legal in Britain. And if somebody who was a slave landed in Britain, they were free. And in the United States, where the slaves were going, slavery was, of course, not universally accepted. It was accepted in the southern states, basically. And the northern states opposed it. So I think the demand to end slavery was a demand that always had a good prospect of success. The demand to give enough money to end poverty is much more difficult. And as I said, there are endless degrees. And in some sense there will no doubt always be some people who have fewer resources than others.
With animals, the differences go both ways. Because, as I said, there is certainly a very clear majority, even an overwhelming majority, accepting the consumption of animals, of meat. And that creates a particular problem.
You could imagine something like the abolition of slavery, an end to this problem. But because we're always going to be interacting with animals, there’ll always be questions of conflicts between our interests and their interests, so it's hard to imagine that we're ever going to get completely to a situation that resembles the abolition of slavery. Although we could certainly get a lot closer to it.
You also raise the question of how demanding it is to ask people not to eat meat. At the time I wrote Animal Liberation, I had stopped eating meat, as had my wife. We made this a joint decision. And we didn't find it particularly difficult, I have to say. Or in a way, the main difficulty was that you had to keep explaining this to people and justifying what you're doing. And some of your friends would look at you, as if you'd become a crank. There were kind of those social difficulties. But in terms of having enjoyable meals, cuisines that we love to cook, and feeling good on a vegetarian diet, feeling perfectly healthy, and zestful, all the rest of it, there was no problem at all. So to me, that's actually not so demanding an ask. Maybe it's a more demanding ask than asking reasonably comfortably off people to give 10% of their income to the poor. But it's not much more demanding than that. And it's certainly less demanding than giving a larger sum.
WALKER: But within a consequentialist framework, you might get better results just arguing for pescetarianism or something like that.
SINGER: Well, pescetarian, I don't think is a good example. Because I think…
WALKER: Fish suffer greatly.
SINGER: Fish definitely suffer. And because they tend to be small, there are more of them suffering. You're going to eat more of them. In a way, I think you could argue that, just from the animal welfare point of view — let's put climate change reasons for being vegetarian aside for the moment — from an animal welfare point of view, it's better to eat cows than fish, because one cow can feed quite a few people, and especially if the cows have reasonably good lives. Whereas with fish, either they're coming out of aquaculture, which is just factory farming for fish, and I think they have pretty terrible lives. Or they're scooped out of the oceans, in which case, their lives were good, their deaths were horrible, and there's a lot of overfishing going on, and we're running down a sustainable fish stock.
WALKER: Okay, interesting. I accept that. So let me ask you a few more questions about esoteric morality. If esoteric morality is a necessary part of a consequentialist theory, that implies that there must be some cap, short of 100%, on the optimal proportion of consequentialists in the population, correct? Because if everyone were a consequentialist, and therefore knew that everyone else would practise esoteric morality, that would potentially lead to a degradation in trust, which would be a bad outcome. So in light of esoteric morality, there is an optimal proportion of consequentialists.
SINGER: I think the point about esoteric morality works in a society where not everyone is a consequentialist and some people believe in certain moral rules and follow those rules because they think that they are kind of right. And you don't want to weaken that trust. Because if you did, they might just become egoists, for example. They might just think about their own interests.
If, on the other hand, you accept the possibility that everyone is a utilitarian, I don't think the situation is the same. Because if you really believe that everybody — or even virtually everybody, so if you meet a stranger, it's overwhelmingly probable that they're a utilitarian — then there's a sense in which you can trust them; you can trust them to do the most good. Now, if you ask them to promise that they will meet you at a certain place at noon tomorrow, it's true that you can't trust that they will turn up there, because if there is greater utility in them doing something else, then they will do something else. That's true. But you will want them to do something else, because you are a utilitarian. Now that we all have mobile phones, of course, you would expect them to call you up and say, “Sorry, I can't meet you as we arranged because I've got to drive a sick person to hospital,” or whatever else it might be. But I don't think that there is a limit if you assume that everybody could function well as a utilitarian.
WALKER: Okay, interesting. So there's like this valley, where as the proportion of utilitarians increases, trust diminishes. But then at a certain point, trust starts to increase again.
SINGER: Yes, and the benefits then overcome the disadvantages.
WALKER: Okay. So, this is kind of an empirical question, but at what point would you start to worry about trends like the spread of atheism and “WEIRDness”, to use Joe Henrich’s acronym, that kind of potentially drive consequentialism?
SINGER: I don't accept the implication of the acronym that consequentialism is weird.
WALKER: But you know that it means Western, Educated, Industrialised, Rich and Democratic?
SINGER: Yes.
WALKER: The Western kind of psychology, isn't that kind of correlated with utilitarianism in a way?
SINGER: I don't think so, actually. I think utilitarianism is a more universal tendency. There's a little book that, again, Katarzyna and I wrote in the Oxford University Press’ Very Short Introductions series, Utilitarianism: A Very Short Introduction, in which we regard Mozi, the Chinese philosopher from the Warring States period, as likely to be a utilitarian, although we don't have a lot of extant writings of his. But there seems to be a utilitarian tendency in his thinking. Among the Greeks, there were some people with a utilitarian tendency. Certainly, Epicurus was a hedonist (not necessarily a universal hedonist), but hedonism is about maximising pleasure and minimising pain, and it has been around for a long time. There are some tendencies in Buddhist thinking, I think, towards reducing suffering and improving happiness. So I think there are utilitarian tendencies that are non-Western.
WALKER: Your paper got me wondering whether the Effective Altruism movement is not being consequentialist enough in light of esoteric morality. To explain what I mean: I think that a lot of scientific and intellectual breakthroughs come about through irrational optimism — people just irrationally persisting and solving a problem whose value isn't obvious to their contemporaries. There are many such cases. And one concern I have about the EA movement is that if it uses base rates to give people career advice, it might persuade some people not to work on things that could turn out to be really important. So here, my claim is that EA gives advice that's rational for the individual, but that collectively could result in worse outcomes. So, perhaps from the perspective of esoteric morality, there may be cases where EA should not push “the outside view” (to use Daniel Kahneman’s term) when giving career advice. Do you have a reaction to that?
SINGER: You might be right. I don't really know how you would calculate how often you will get those extraordinary benefits from people pursuing these strange obsessions. But yeah, it is an empirical question. And it's possible that you're right. And if you're right, then, yes, effective altruists should not be persuading people to go for what has the best strike average.
WALKER: So esoteric morality is related to this idea of Straussianism, if we think about Strauss' book Persecution and the Art of Writing, where philosophers write very cryptically for an audience. Their work may be published more broadly, but only a very select few can actually understand and interpret what they're trying to say; that is how, throughout history, philosophers have conveyed and discussed uncomfortable truths while avoiding persecution.
When I think of examples of noble lies, I can really only call to mind examples where the noble lie kind of blows up in the face of the liar. Things like at the beginning of the pandemic, when the US Surgeon General told people that masks aren't effective in preventing the spread, potentially because they wanted to reserve the supplies of masks for medical professionals. And that just seemed to diminish trust in institutions even further in the US. Maybe the only reason I can think of bad examples of noble lies blowing up is because the good ones, by definition, stay hidden. But I'm curious whether you are aware of any historical examples where Straussianism has worked successfully, on the part of philosophers or anyone else — someone who tried to be esoteric in their circumstances and succeeded, and now, with the benefit of hindsight, we can recognise what they were trying to do.
SINGER: Oh, that's a very good question. I'm not sure that I can think of that off the top of my head. So when you start talking about Straussians, then what I think about actually is the group around George W. Bush, who acknowledged the influence of Strauss. And I think that led them into the catastrophic invasion of Iraq. I think some of them at least knew that Saddam did not have weapons of mass destruction, but they thought that they could create a democracy in the Middle East, and that that would be a good thing and would increase American influence. So that certainly is not what you're looking for. That's an example of it coming very badly unstuck. There surely are examples of noble lies that have worked.
Very interesting. It's a bit surprising that in that long conversation, they never discuss the propensity of brilliant people to delude themselves into thinking they're acting for the common good when they're really serving selfish purposes. That propensity makes this "esoteric morality" very dangerous.
Sam Bankman-Fried of FTX seems like a perfect example of a utilitarian who went wrong by applying precisely the philosophy Singer describes here.
And of course the Covid "experts", Iraq War strategists, and "government house" officials are examples of it going wrong in a just slightly less explicitly utilitarian way.
"Famine, Affluence, and Morality" is confused in three crucial ways: https://jclester.substack.com/p/peter-singers-famine-affluence-and