Knowledge, Reality, and Value: Huemer’s Response, Part 4
The latest from Huemer:
Thanks, everyone, for the discussion! Here are my responses to the comments about part 4:
Bryan’s Comments
“BC” indicates Bryan’s comments; “MH” is me, from the book.
(1)
BC: When defending moral realism, Huemer places a fair amount of weight on linguistic evidence … I find this evidence less probative than he does. Why? Because human beings often frame non-assertions as assertions for rhetorical effect. “Yay for the Dodgers!” is almost equivalent in meaning to “Dodgers rule!”
More about the linguistic evidence: Each of the linguistic tests for proposition-expressing sentences corresponds to a non-linguistic, metaphysical truth. E.g., the reason why it’s linguistically odd to say “I believe hurray for the Dodgers” is that belief is an attitude toward a proposition; it doesn’t make sense to speak of believing something that isn’t a proposition. Granted, one could always claim that the word “believe” has a special, alternative meaning for ethical contexts. But it’s going to be a pretty incredible coincidence if that happens with every single expression in the language that normally gets attached to proposition-expressing clauses.
I think the linguistic evidence is pretty good, partly because it’s relatively objective. There is so much in philosophy that can be fudged, and people will just claim to have different intuitions. But it’s really uncontroversial that statements like “I believe taxation is wrong” and “It’s possible that abortion is permissible” are meaningful.
As to the Dodgers: It may be that “the Dodgers rule” just means something like “the Dodgers are extremely good.” (Let’s not worry about pinning down the exact meaning.) If so, that is a proposition-expressing phrase (as I’ve argued). Now, even when making a factual assertion, people are often not very interested in the fact they’re asserting and may instead be trying to make some move in a social game. So a Dodgers fan might be asserting (with no justification) that the Dodgers are extremely good in order to express his quasi-tribal affiliation as a Dodgers fan. That doesn’t stop it from being an assertion, though. Compare: a person might assert that Barack Obama is a Muslim as a way of expressing hostility to the Democrats and tribal affiliation with the alt-right. That doesn’t stop “Obama is a Muslim” from being a proposition-expressing phrase.
(2)
BC: Is moderate deontology fully intellectually satisfactory? No. But why the doleful “least bad” rather than the hopeful “rather good”?
Basically, many deontological intuitions lead to paradoxes in certain cases. See my papers:
“Lexical Priority and the Problem of Risk,” http://www.owl232.net/papers/absolutism.pdf
“A Paradox for Moderate Deontology,” https://philpapers.org/archive/HUEAPF.pdf
I mentioned this in the chapter, but those papers give more detail. Those aren’t the only puzzles, just the ones I’ve written about. Many puzzles arise when you try to think through deontological principles in order to explain and justify them. Here’s another challenging paper, by Caspar Hare: “Should We Wish Well to All?”, http://web.mit.edu/~casparh/www/Papers/CJHareWishingWell.pdf
BC: And for the “two or more actions” problems, what’s wrong with implicit or hypothetical consent…?
This was in reference to the following scenario: You have a chance to perform an action, x, that harms person A while benefitting person B by a greater amount (and with respect to the same good). You also have available an independent action, y, that would harm B while benefitting A by a greater amount (with respect to the same good). Each action is wrong, considered by itself, according to deontologists. But the combination of actions simply benefits both A and B. So x is wrong, and y is wrong, but (x & y) is okay.
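To make the structure concrete, here is a minimal sketch with invented payoffs (the numbers are mine, chosen purely for illustration, not taken from the paper):

```python
# Invented payoffs: each action harms one person by 5 units of the good
# while benefitting the other by 10.
x = {"A": -5, "B": +10}   # wrong by itself, say deontologists: it harms A
y = {"A": +10, "B": -5}   # wrong by itself: it harms B

# The combination of the two actions leaves both parties strictly better off.
combined = {person: x[person] + y[person] for person in ("A", "B")}
print(combined)  # {'A': 5, 'B': 5} -- yet each conjunct, taken alone, was wrong
```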
Does consent save us from the paradox? This is discussed in section 4 of my paper (https://philpapers.org/archive/HUEAPF.pdf). Suppose A does not consent. A wants you to perform only the action that benefits him while harming B; he won’t consent to the action that harms him while benefitting B (not even conditional on your doing the other action simultaneously). Now what? It looks to me like we still have the original problem.
I also thought the book made some worthwhile points on behalf of utilitarianism, in sections 14.3.2 and 14.5.3.
(3)
BC: If you regress fertility on income alone, higher income predicts lower fertility. However, if you add more variables, the picture changes. At least in the US, for example, the highest-fertility people have high income combined with low education.
(Bryan includes a link to another post where he says that education decreases fertility, but if you control for education, then income increases fertility.)
That’s very interesting, so thanks for that.
The context of the discussion in the book was that some people think we shouldn’t aid people in poor countries because that will just cause them to have more babies. I was rejecting this, saying that aid will more likely decrease fertility. Taking account of Bryan’s point about education vs. income, I still think my main point is true. If we successfully help the poor, their income will probably go up together with their education levels; they won’t get increased income but somehow be forced to stay at the same education levels. So aid to the poor will still likely decrease fertility.
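Here is a minimal synthetic-data sketch of the sign flip Bryan describes. All the data-generating coefficients are invented purely to illustrate the omitted-variable point; they are not estimates from any real dataset.

```python
import numpy as np

# Invented setup: education raises income but lowers fertility, while
# income (holding education fixed) raises fertility.
rng = np.random.default_rng(0)
n = 10_000
educ = rng.normal(size=n)                             # standardized education
income = 2 * educ + rng.normal(size=n)                # education raises income
fert = 0.5 * income - 2 * educ + rng.normal(size=n)   # opposing direct effects

def ols(y, *regressors):
    """Least-squares coefficients, intercept first."""
    X = np.column_stack([np.ones(len(y)), *regressors])
    return np.linalg.lstsq(X, y, rcond=None)[0]

print(ols(fert, income))        # income coefficient ~ -0.3: negative alone
print(ols(fert, income, educ))  # income coefficient ~ +0.5 once educ is controlled
```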
(4)
MH: … if we can alleviate world poverty, this would actually reduce population growth.
BC: If true, this is probably the best consequentialist argument against alleviating world poverty. After all, most poor people are happy to be alive. If saving the lives of the most miserable of the world’s poor causes their total population to greatly shrink, how is that a win?
Because living standards will be higher.
This brings up a debate in population ethics that I didn’t go into in the book, because it would be a long and challenging discussion.
If you have a fixed set of conscious beings, then it’s straightforward how consequentialist reasoning works. You just always produce the largest net benefit you can (summing your action’s benefits minus its costs, regardless of who the beneficiaries and victims are). This maximizes the total utility of the fixed set of beings; it also maximizes their average utility.
But what should you do if your choices not only produce benefits and costs but also affect how many and which people will even exist in the future? Should we try to maximize total utility, such that we’d have reason to try to create lots of new people to rack up the utils? Or maybe we should try to maximize the average level of welfare of the population? Or maybe we should do something in between? Or none of the above?
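Here is a minimal sketch with invented welfare numbers showing why the two criteria agree whenever the population is fixed (the average is just the total divided by a constant) but come apart once population size itself depends on our choice:

```python
# Two hypothetical futures an agent could bring about (numbers invented):
#   "small": 10 people, each at welfare 8
#   "large": 100 people, each at welfare 2
outcomes = {"small": [8] * 10, "large": [2] * 100}

for name, welfares in outcomes.items():
    total = sum(welfares)
    average = total / len(welfares)
    print(f"{name}: total = {total}, average = {average}")

# small: total = 80,  average = 8.0  -> average utilitarianism prefers this
# large: total = 200, average = 2.0  -> total utilitarianism prefers this
# For a fixed population of size N, average = total / N with N constant, so
# the two criteria rank options identically; once N varies, they diverge.
```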
This area of ethics is full of difficult and confusing problems, which we don’t have time to go into (see Derek Parfit’s Reasons and Persons and my article “In Defence of Repugnance”). Suffice it to say that there’s no generally accepted theory, and every theory has at least one highly counter-intuitive consequence (usually more than one). So I’m not going to try to resolve the issues in population ethics now.
However, I would note that people in developed countries are really a lot better off than people in the poorest nations of the world. Even if you think that increasing population is good ceteris paribus, it’s plausible that the increase in welfare levels resulting from economic development outweighs the decrease in population. Consider: If you had the chance to go back in time and somehow sabotage the industrial revolution, so that Europe and America (etc.) would never have industrialized and never have become fabulously wealthy as we are today, would you do it? Before answering, note that our population today would probably be much larger, though also more miserable.
I bet that most people would answer “no”.
(5)
BC: Memory is highly fallible. Memory varies so much between people. Yet we can’t do without it. Even math relies on memory! … The same applies at least as strongly to natural science. Unless you’re directly staring at something, natural science is based not on observation and experimentation, but on what we remember about past observation and experimentation.
Yep. Btw, it’s not even just what we remember about past observations – nearly all scientific beliefs are almost entirely based on what we remember hearing from other people. Science depends on memory, testimony, observation, reasoning, and intuition. And none of those information sources can be checked without relying on that very source, with the possible exception of testimony (though even there, we have not in fact done much to check the reliability of other people). All those sources are fallible. Yet science is still pretty good.
Reader Comments
I can’t address everything, because I have other stuff to do, but here are a few of the comments:
(1) “It’s hard to see how demandingness is a particularly strong objection to utilitarianism. All moral theories can be extremely demanding in certain circumstances.”
Yes, they can (almost all). I assume Bryan would say that utilitarianism is extremely demanding in a much wider range of circumstances; also that its demands are much less intuitive.
(2) “Huemer is overlooking the fact that in utilitarianism future people count just as much as present people. If I invest my surplus wealth, I will benefit future people…”
Sure, if you’re good at investing, then maybe you should invest everything you don’t need to survive, in order to grow your fortune to the maximum amount possible before you die, then donate it to charity in your will. This is still just as demanding as the original idea (of giving your money to charity continuously).
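A toy calculation, with invented figures, shows why the invest-then-donate strategy is no less demanding:

```python
# Invented numbers: $1,000 of surplus today, a 5% real annual return, 40 years.
surplus, real_return, years = 1_000.0, 0.05, 40

give_now = surplus
give_at_death = surplus * (1 + real_return) ** years   # about $7,040

print(f"donate now: ${give_now:,.0f}; invest, then donate: ${give_at_death:,.0f}")
# Either way the agent keeps only what is needed to survive; investing changes
# *when* the money is given away, not how much of one's life it claims.
```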
(3) “without empirical data, Huemer is not appealing to linguistic or introspective evidence in any systematic and rigorous way: he’s simply appealing to his intuitions about how language works”
I’m not sure what we’re talking about. I used premises like these:
(a) “John thinks that taxation is bad” makes sense.
(b) “I wonder whether abortion is wrong” makes sense.
(c) “If abortion is wrong, then fetuses are people” makes sense.
(d) “John thinks that ouch” doesn’t make sense.
(e) “I wonder whether please pass the salt” doesn’t make sense.
Etc.
Is that what we’re talking about? Is the commenter saying that I need to conduct a rigorous scientific study to figure out whether (a)-(e), etc., are true? I find that kind of bizarre.
(4) “But neither his nor my personal experiences, introspective reports, or intuitions are very good evidence of whether most people are moral realists.”
I did not say most people are moral realists. My argument is about what’s true, not what most people believe.
(5) “Even the earlier studies found equivocal evidence, with most participants expressing inconsistent and mixed metaethical standards.”
That’s pretty close to what I said in the preface to Ethical Intuitionism.
(6) [Commenter #1] “To you, does ‘if murder is wrong, then assassination is wrong’ really seem just as well formed as ‘if boo Dodgers, then go Giants’?”
[Commenter #2] “No, but I’m not a noncognitivist. And this would only stand as at best an objection to very old and flat-footed noncognitivist accounts. Even some of those could probably handle this fairly well, but contemporary expressivist accounts can handle the semantics of moral language even better…”
Three points:
First, notice that this is a completely different response. Lance’s first response was “Huemer is just using intuitions, so his so-called ‘evidence’ is worthless.” This new response seems to grant that the non-cognitivist actually needs to accommodate (“handle”) my evidence.
Second, I don’t agree that they “handle” those well. I don’t have time to write a lengthy discussion here, but I discussed this in Ethical Intuitionism.
Third, conditionals were just one example of a much more general type of problem. Non-cognitivists have spent a lot of time trying to interpret conditionals with moral clauses, and a few other types of sentences. But the problem isn’t just with a few types of sentences. The problem is with every single linguistic context in which you can embed a proposition in a larger proposition. Someone has to explain why every single one of those works with moral statements.
You can of course claim that, coincidentally, every single word or phrase that lets you embed a proposition also has a different meaning on which it embeds moral sentences (even though moral sentences are non-propositional) yet doesn’t embed any of the uncontroversial examples of non-propositional sentences. The willingness to say this sort of thing is a good illustration of why philosophers make a lot less progress than scientists.
(7) “Would we look for an answer to scientific realism by checking how people in fact use the language of science? … Why would it be any different for moral language?”
Because we were talking about semantic theories. Non-cognitivism is a semantic theory. It says moral statements don’t express propositions. (No one that I know of holds a non-cognitivist view of science.)
(8) “I’m less confident that the Trolley Problem is a useful critique of utilitarianism. It’s quite far outside the normal range of human experience…”
Just to be clear, the Trolley Problem is typically not used to criticize utilitarianism. The example is more often used to support utilitarianism, because most people intuitively judge that you should switch the trolley away from the five toward the one.
There was, by the way, a real case like this in Los Angeles in 2003. A group of 31 Union Pacific freight cars started rolling downhill out of control, due to improperly set brakes, heading for downtown Los Angeles. This could have been a disaster. The Union Pacific officials decided to switch the train to a side track through the City of Commerce, a much less populated area. The train derailed in Commerce. Miraculously, no one was hurt.
I would say the Union Pacific employees did the right thing. (Given the situation. Of course, the really right thing is to set the brakes properly at the start!) I guess Bryan would say no, they should have let the train continue into downtown Los Angeles.
Now, you could say, “Let’s avoid train accidents altogether.” And sure, that’s the right thing to say if our actual interest was train accidents. But that’s not really the point when people raise such examples in ethics. The trolley example is given to illustrate something about utilitarianism – it’s interesting because it’s a case in which most people have the utilitarian intuition.
(9) “Given the world as it is, utilitarianism makes some pretty stringent active demands on citizens of wealthy countries. But any moral theory that doesn’t make these demands thereby makes even more stringent passive demands on those in poverty. […] They have to die of a preventable disease.”
This isn’t really a demand of the moral theory in the same sense. The non-utilitarian theory doesn’t say that people are obligated to die of a preventable disease. It doesn’t, for example, say that if you have a way of saving your life, you’re obligated not to take it. One could consistently say that (a) other people aren’t obligated to give their money to help you, but (b) you are permitted to steal money to save your life, if you have the chance to do that. (Which I actually think is plausible.)
Of course, most people don’t in fact have that chance, so many will die of preventable diseases each year. But that’s not because a moral theory is telling them to do that.