People love to trot out this argument when arguing against utilitarianism. "See? When we follow utilitarianism, we end up in this dystopian nightmare world where doctors kill you to save strangers." To me, it seems absurd that something many people would describe as a "dystopian nightmare world" would ever maximize utility. And so perhaps killing one man to give his organs to save five others is not actually utilitarian, when you consider the long run.
You're just evading the point of the thought experiment, which is that utilitarian conclusions can conflict with widely-held core moral intuitions. To make it clearer, one could stipulate that this is a one-off case that gets no publicity, sets no precedent, and has zero long-term or large-scale implications. People still won't say, "Oh, well, in that case, go ahead then doc, knock him out and cut him up." You could reply that the human moral intuitive drive simply CAN'T be disciplined into ignoring the precedent-setting worry, even when the thought experiment stipulates it away, but now you are layering salvaging kludge upon salvaging kludge and skating on thin ice.
I'm saying that utilitarian conclusions are defined by widely-held core moral intuitions. If a decision which seems utilitarian violates everyone's morals, it's usually not utility maximizing.
According to survey data, about 90% of people would redirect a trolley away from five people to kill one. And yet, I agree that very few people will say "knock him out and cut him up" in the one-off, no-precedent version of the surgeon thought experiment you proposed. Since these are effectively the same thought experiment, the only way to explain the inconsistent answers is that human moral intuitions aren't good enough to handle this. That doesn't feel kludgey to me at all, and we have to explain the discrepancy SOMEHOW.
An alternate explanation is that core moral intuitions are not utilitarian, at least not exclusively.
Certainly not. Every society in history has had some kind of concept of property rights. Even many animals seem to. Seems to be baked into our genetics (for fairly obvious reasons).
There's a legal saying, "hard cases make bad law". Clearly our innate moral sense has some element of property rights as well as an element of utilitarianism.
It's why most societies have thought it's not OK to pick a random innocent person and throw them to the lions in the arena to be torn to bits for the entertainment of thousands, but it's OK to conscript random innocent people to fight an invader.
We weigh both elements - rights and utility. (And other considerations.) Our moral intuitions don't follow clean logical (legal-type) rules - they're the result of millions of years of evolution - the morals we have are what worked for our ancestors' survival in the (very different from today) environment they lived in.
Not having a system of rights to restrain utopians of any kind (democrats, utilitarians...) has lots of disutility. Democracy and human rights are in tension.
Yes. I've heard this called "rule utilitarianism," and it's why I support following the rule of law, including the U.S. Constitution (or the Canadian Charter of Rights and Freedoms, Grundgesetz in Germany, etc.) even in cases in which the law doesn't yield the result we might want in a vacuum.
A system in which medical practitioners can't be absolutely trusted not to deliberately harm patients would have horrible consequences relative to one in which we laymen believe doctors follow medical ethics. If this surgeon kills one to save five, then word will get around. Some people who need help won't visit doctors, or they'll come too late. People will decline to be organ donors because they're afraid of this. People will refuse to get vaccines.
The same can't really be said about the trolley problem, whether the main version or the "do you push the gigantic man off the bridge" version. If the consequences of the surgery could somehow be confined to those six patients, then of course the correct answer would be to do it, just as you'd pull the lever or push the huge dude off the bridge.
Why does nobody ever mention the bargaining/probabilistic option here? Don't kill the healthy person; get agreement among the sick to roll dice among themselves for who dies a day early so the others can live.
You have two patients (A and B) with healthy hearts and kidneys but no usable lungs, two (C and D) with bad kidneys but healthy lungs and hearts, and one (E) with healthy kidneys and lungs who needs a heart. Solutions:
Kill E; the others all live.
Kill any two of the others; the three remaining live.
Set the odds so that ALL five agree that they prefer to gamble rather than not. Roll the dice, kill the losers.
Nothing involuntary needed here.
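The "set the odds" option is easy to make concrete. Here's a minimal sketch; the two-branch lottery structure, the uniform choice of which two get sacrificed in the second branch, and the baseline assumption that all five die without transplants are my illustrative additions, not given in the thread:

```python
from fractions import Fraction

# Hypothetical lottery over the two "kill someone" solutions above:
#   branch 1 (probability p):     E is sacrificed; A, B, C, D survive
#   branch 2 (probability 1 - p): two of {A, B, C, D}, chosen uniformly
#                                 at random, are sacrificed; the other
#                                 two plus E survive
# Baseline assumption: without any transplant all five die, so any
# lottery with positive survival odds beats refusing, for every patient.

def survival_probabilities(p):
    e = 1 - p                             # E lives only in branch 2
    other = p + (1 - p) * Fraction(1, 2)  # A..D die with prob 1/2 in branch 2
    return {name: other for name in "ABCD"} | {"E": e}

# Equalize everyone's odds: solve 1 - p = (1 + p) / 2  =>  p = 1/3
p = Fraction(1, 3)
probs = survival_probabilities(p)
print(probs)                # each patient survives with probability 2/3
print(sum(probs.values()))  # expected survivors: 10/3, vs. 4 from "kill E"
```

At p = 1/3 every patient faces the same 2/3 survival odds, so all five can rationally consent; the price of unanimity is a lower expected headcount (10/3) than simply killing E (4).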
I wonder if anyone would still think this to be unethical.
Bonus: what if they are already sedated and there's no way to get their consent? To me, it's still definitely ethical, but I'm sure some would disagree.
I think this is the key distinction. Consent makes the question easy. Non-consent triggers our intuitions about autonomy and individual rights that override group needs.
Thus the massive practical advantage in framing a problem in such a way that there is a consensual solution. Cases where there is no such option are going to be controversial and different humans will have different intuitions about the values.
That's not the point of the exercise but indeed I also thought the same
What if we go to an edge case and instead of a 5:1 ratio it's 8 billion to 1? E.g., an asteroid hitting Earth, or a global pandemic with a 100% death rate, where only one person can save the day if sacrificed. My guess is a majority, maybe a large majority, would say butcher him up (especially if they actually had to vote on the decision in real life, as opposed to just speculating philosophically).
Please read "The Jigsaw Man" by Larry Niven (1967). There's a Wikipedia page if you can't find it.
As a cyclist who often sees people looking at their phones while driving, I endorse this message.
Jeremy Bentham and utilitarianism need to be constrained in economics by natural rights. Henry George made some telling criticisms in "A Perplexed Philosopher," his debate with Herbert Spencer. Economists do not know that taxes are aids, gifts, and benevolences to the sovereign; the theory is one of common consent, like club dues.
Just as I was beginning to have sympathy for the utilitarians, someone argues that my organs would do more good in 5 other people. Oh well.
Because property rights. I own my body and get to decide what to do with it.
It doesn't matter how much value it has to other people.
As with anything else I own.
Except - if you forcibly take things I own other than my body, I can in principle be compensated for the loss with money (more than the market price; we are talking about a forced sale).
That doesn't apply to my life.
Having a "right" to property means you can STOP other people from doing things with your property.
It doesn't mean you can force them to do things with it.
You can't force a mechanic to break your car into parts either, if he doesn't want to.
That's fair.
A challenge is the slippery slope problem. If we can kill 1 person to save 5, why not kill 50,000 healthy people to save 250,000 sick people? The problem is that ideas morph over time and, while Bryan is not making this case, incrementally, the third cycle of this thinking can lead to human killing factories.
Killing the one is bad *and* killing the 50,000 is bad too. This is a case where you don't even need to make a slippery slope argument!
You are correct. This is where utilitarian arguments fall down. Good call.
In a nice, clear, thought experiment like this one it might work out. However, in real life almost every time someone forcibly harms someone else "for the greater good", the harm appears and the good doesn't.
This is an overused and weak argument against consequentialism. I'm not a consequentialist, but this strikes me as one-dimensional. Obviously the consequences of a doctor killing a patient would be disastrous for health once nobody goes to doctors anymore.
Think one extra step. Just one.
In the example, taking the needed organs from the patient needing a heart would save 4 of the 5!
I know in reality there are compatibility issues, but why not require everyone who receives an organ to donate one kidney or something else?