Yet Another Reply to Huemer on the Ethical Treatment of Animals
At the risk of taxing readers’ patience, here’s my rejoinder to Mike Huemer’s last guest post on the ethical treatment of animals. He’s in blockquotes; I’m not.
My main reactions:
I. The argument from insects has too many controversial assumptions to be useful. We should instead look more directly at Bryan’s theoretical account of how factory farming could be acceptable.
My argument may not change the minds of people who are convinced that factory farming is unacceptable, but I think it’s very useful for (a) changing the minds of people who are genuinely undecided, and (b) clarifying the views of people who have modest doubts about conventional treatment of animals.
II. That theory is ad hoc and lacks intrinsic intuitive or theoretical plausibility.
III. There are much more natural theories, which don’t support factory farming.
Disagree on both counts; see below.
I.
To elaborate on (I), it looks like (after the explanations in his latest post) Bryan is assuming:
a. Insects feel pain that is qualitatively like the suffering that, e.g., cows on factory farms feel.
b. If (a) is true, it is still permissible to kill bugs indiscriminately, e.g., we don’t even have good reason to reduce our driving by 10%.

(a) and (b) are too controversial to be good starting points to try to figure out other controversial animal ethics issues. I and (I think) most others reject (a).
You might be right about how many people believe (a), but I suspect my view is actually more common. Pain has great evolutionary value. So why wouldn’t bugs feel pain?
I also think (b) is very non-obvious (especially to animal welfare advocates).
As my original post noted, even conscientious people like Mike put little mental effort into investigating whether bugs feel pain, which to me strongly suggests they find (b) pretty obvious.
Finally, note that most animal welfare advocates claim that factory farming is wrong because of the great suffering of animals on factory farms (not just because of the killing of the animals), which is mostly due to the conditions in which they are raised. Bugs aren’t raised in such conditions, and the amount of pain a bug would endure upon being hit by a car (if it has any pain at all) might be less than the pain it would normally endure from a natural death.

I haven’t investigated how horribly bugs suffer when humans accidentally kill them. But it seems entirely possible that humans condemn trillions of bugs to excruciating, drawn-out deaths every year. My moral theory implies there’s little need for me to investigate this issue. But if you really doubt (b), it’s a vital question. And since animal welfare advocates put little time into investigating it, I infer they probably tacitly agree with me.

So I think Bryan would also have to use assumption (c):

c. If factory farming is wrong, it’s wrong because it’s wrong to painfully kill sentient beings, not, e.g., because it’s wrong to raise them in conditions of almost constant suffering, nor because it’s wrong to create beings with net negative utility, etc.
I could be wrong, but I think most people, regardless of their views on the ethical treatment of animals, would see little difference between these moral theories. Either you find them all plausible, or you find none plausible. Hence (c) is barely at issue.
II.
What would be more promising? Let’s just look at Bryan’s account of the badness of pain and suffering. (Note: I include all forms of suffering as bad, not merely sensory pain.) I think his view must be something like the graph described below.
As your intelligence increases, the moral badness of your pain increases. But it’s a non-linear function. In particular:
i. The graph starts out almost horizontal. But somewhere between the intelligence of a typical cow and that of a typical human, the graph takes a sharp upturn, soaring up about a million times higher than where it was for the cow IQ. This is required in order to say that the pain of billions of farm animals is unimportant, and yet also claim that similar pain for (a much smaller number of) humans is very important.
ii. But then the graph very quickly turns almost horizontal again. This is required in order to make it so that the interests of a very smart human, such as Albert Einstein, don’t wind up being vastly more important than those of the rest of us. Also, so that even smarter aliens can’t inflict great pain on us for the sake of minor amusements for themselves.
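To make the shape concrete, here is one curve with roughly these features. (The logistic form, the symbols, and the exact parameters are only my illustration; nothing in Bryan’s posts commits him to this particular equation.)

$$
B(x) \;=\; b \;+\; \bigl(10^{6}-1\bigr)\,b\,\sigma\!\bigl(k\,(x-x_{0})\bigr),
\qquad
\sigma(t)=\frac{1}{1+e^{-t}},
$$

where $x$ is the creature’s intelligence, $B(x)$ is the moral badness of a given pain for that creature, $b$ is the (small) badness at cow-level intelligence, $x_{0}$ lies somewhere between cow and human intelligence, and $k$ is large. For $x$ well below $x_{0}$ the curve is nearly flat at $b$, then near $x_{0}$ it shoots up by roughly a factor of a million (feature i); for $x$ well above $x_{0}$ it flattens again at about $10^{6}b$ (feature ii).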
Sure, this is a logically possible (not contradictory) view.
Your graph accurately describes my view.
However, it’s also very plausible to me that the gap in intelligence between cows and the average human is so enormous that even a linear value function would yield similar results. This is trivially true on a conventional IQ test, where all bugs and cows would score zero. But it seems substantively true for any reasonable intelligence test: Bugs’ and cows’ inability to evaluate or construct even simple logical arguments stems from their deficient intellects, not their inability to communicate. Or at least that seems clear to me.
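A toy calculation, with purely hypothetical numbers, shows why. Suppose that on some cardinal measure of intelligence a cow scores $\varepsilon$ (near zero) and a typical human scores $h \gg \varepsilon$. Then even the straight line

$$
B(x) = c\,x
\quad\Longrightarrow\quad
\frac{B(\varepsilon)}{B(h)} = \frac{\varepsilon}{h} \approx 0,
$$

so cow pain carries only a negligible fraction of the weight of human pain, with no kink required.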
But it is very odd and (to me) hard to believe. It isn’t obvious to begin with why IQ makes a difference to the badness of pain. But assuming it does, features (i) and (ii) above are very odd. Is there any explanation of either of these things? Can someone even think of a possible explanation? If you just think about this theory on its own (without considering, for example, how it impacts your own interests or what it implies about your own behavior), would anyone have thought this was how it worked? Would anyone find this intuitively obvious? As a famous ethical intuiter, I must say that this doesn’t strike me as intuitive at all.
Then we have a deep clash of intuitions. My bug hypotheticals were meant to overcome such clashes. But if those seem lame to you, we are at an impasse.
Now, that graph might be a fair account of most people’s implicit attitudes. But what is the best explanation for that:
1) That we have directly intuited the brute, unexplained moral facts that the above graph depicts, or
2) That we are biased?

I think we can know that explanation (1) is not the case. We can know that because we can just think about the major claims in this theory, and see if they’re self-evident. They aren’t.
Again, the idea that the well-being of creatures of human intelligence is much more morally important than the well-being of cows or bugs seems quite self-evident to me. And it also seems self-evident to the vast majority of creatures capable of comprehending the idea.
To me, explanation (2) thrusts itself forward. How convenient that this drastic upturn in moral significance occurs after the IQ level of all the animals we like
the taste of, but before the IQ level of any of us. Good thing the inexplicable upturn doesn’t occur between bug-IQ and cow-IQ (or even earlier). Good thing it goes up by a factor of a million before reaching human IQ, and not just a factor of a hundred or a thousand, because otherwise we’d have to modify our behavior anyway.

And how convenient again that the moral significance suddenly levels off again.
Good thing it doesn’t just keep going up, because then smart people or even smarter aliens would be able to discount our suffering in the same way that we discount the suffering of all the creatures whose suffering we profit from.
I don’t see how your case for alleged bias is any stronger than the standard utilitarians’ claim that affluent First Worlders are too biased to see their moral duty to give all their surplus wealth to the global poor. The same goes for numerous other onerous-and-implausible moral duties, like our duty to create as many children as possible, or perhaps the duty to adopt as many needy orphans as possible.
In all of these cases, I admit, we should calmly reflect on our potential bias. I’ve genuinely tried. But even when I bend over backwards to adjust for my alleged bias against animals, I keep getting the same answer: They barely matter.
[…]
Imagine a person living in the slavery era, who claims that the moral significance of a person’s well-being is inversely related to their skin pigmentation (this is a brute moral fact that you just have to see intuitively), and that the graph of moral significance as a function of skin pigmentation takes a sudden, drastic drop just after the pigmentation level of a suntanned European but before that of a typical mulatto.
One of my strongest intuitions is that mental traits have a large effect on moral value, while physical traits do not. You really don’t share this intuition?
III.
A more natural view would be, e.g., that the graph of “pain badness” versus IQ would just be a line. Or maybe a simple concave or convex curve. But then we
wouldn’t be able to just carry on doing what is most convenient and enjoyable for us.

I mentioned, also, that the moral significance of IQ was not obvious to me. But here is a much more plausible theory that is in the same neighborhood. Degree of cognitive sophistication matters to the badness of pain, because:
1. There are degrees of consciousness (or self-awareness).
2. The more conscious a pain is, the worse it is. E.g., if you can divert your attention from a pain that you’re having, it becomes less bad. If there could be a completely unconscious pain, it wouldn’t be bad at all.
3. The creatures we think of as less intelligent are also, in general, less conscious. That is, all their mental states have a low level of consciousness. (Perhaps bugs are completely non-conscious.)

I think this theory is much more believable and less ad hoc than Bryan’s theory. Point 2 strikes me as independently intuitive (unlike the brute declaration that IQ matters to badness of pain). Points 1 and 3 strike me as reasonable, and while I wouldn’t say they are obviously correct, I also don’t think there is anything odd or puzzling about them.
What’s odd/puzzling to me is (3). Why would bugs or cows be less conscious of their pain than we are? If anything, I’d think less intelligent creatures’ minds would focus on their physical survival, while smart creatures’ minds often wander to impractical topics.
This theory does not look like it was just designed to give us the moral results that are convenient for us.
Of course, the “cost” is that this theory does not in fact give us the moral results that are most convenient for us… But it just isn’t plausible that the difference in level of consciousness is so great that the human pain is a million times worse than the (otherwise similar) cow pain.
I fear we’re at an impasse. But let me say this: If Mike convinced me that animal pain were morally important, I would live as he does. I’d stop eating meat and wearing leather. I’d desperately search for a cruelty-free way to get dairy; if I couldn’t find one, I’d even give up ice cream. I understand Mike.* What I don’t understand is people who claim to agree with him, but don’t repent and live a cruelty-free life.
* Well, almost. If I thought like Mike, I would also evangelically focus my intellectual career on the ethical treatment of animals, and spend thousands of hours reading biology journals to learn more about which animals feel how much pain. I know Mike hasn’t done the former*, and doubt he’s done the latter. And I don’t understand why he hasn’t.
* Since I wrote this piece, Huemer did publish a book on the topic, Dialogues on Ethical Vegetarianism.

The post appeared first on Econlib.