Utilitarianism does not lead to repugnant conclusions

My previous post triggered a lot of comments that need to be addressed.  Let’s imagine a vaccine that we could give to all 7 billion people.  The vaccine would prevent lots of cases of an unpleasant illness that lasts for one week, but one out of the 7 billion will die from nasty side effects.  Does it make sense to do this?  Most intelligent people would say yes.  (To simplify things, suppose the cost is almost zero, and it’s voluntary.  The program is government funded.)  We make lots of other similar trade-offs in life, as with speed limits of 65 mph rather than 25 mph, even though the higher speed leads to more fatal accidents.  There are trade-offs between risk and pleasure.

If you are not with me so far, stop reading.  If you are willing to concede that a small risk of death might be a price worth paying for a more pleasant life, then consider a universe where there are 6 million Earth-like planets, all facing the same dilemma.  If it makes sense for Earth to provide this vaccine, then presumably it makes sense on those other 6 million planets.  Now the cost of the vaccine is that 42 quadrillion people have more pleasant lives, at a cost of 6 million deaths.  That doesn’t sound like such a good trade-off, does it?  That’s a lot of dead people for merely the benefit of making life better for the rest of us.  Does utilitarianism fail at large numbers?
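
To spell out the arithmetic behind that figure (a back-of-the-envelope check, taking exactly 7 billion people per planet):

6,000,000 planets × 7,000,000,000 people per planet = 4.2 × 10^16 ≈ 42 quadrillion happier people, at a cost of 6,000,000 deaths (one per planet).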

No, utilitarianism doesn’t fail; what fails is our brainpower.  Our brains can visualize that a cost of 6 million lives is much worse than a cost of one life. But they can’t visualize the fact that the benefit of 42 quadrillion happier people is vastly greater than the benefit of 7 billion happier people.  When I try to visualize the number of stars in the universe, it’s not much different from my attempt to visualize the number of stars in our galaxy, even though the former number is vastly larger.

Many of the “repugnant conclusion refutations” of utilitarianism rely on similar cognitive failures.  Another set of tricks postulates some horrific practice that occurred in the past, and then asks: “What if slave owners got more pleasure from slavery than the slaves suffered?  Would slavery be OK then?”  The trick here is that we (or at least I) find slavery repulsive largely for utilitarian reasons.  So the question puts us in an awkward position.  If we endorse utilitarianism then we seem to be endorsing actual real-world slavery, even though our utilitarianism has actually caused us to reject slavery.  Indeed I’d go further, and argue that slavery was abolished in the 19th century largely for utilitarian reasons. The 19th century saw an enormous boom in utilitarian thinking.

One commenter pointed out that utilitarianism goes against certain instincts that are hard-wired into our brains.  Maybe so, but lots of those can change over time, as culture changes.  Our modern culture is more utilitarian than past cultures.  Even during my lifetime, I’ve seen the rise of utilitarian thinking in many areas (gay marriage, anti-bullying campaigns in schools, making it illegal to rape one’s spouse, civil rights for blacks, etc., etc.)

These utilitarian instincts come from the narrative arts (literature and film): art that makes you aware of the suffering of others.  Since young Americans saw many more sympathetic gay characters on TV than old Americans did, they are far more in favor of gay marriage, and far less likely to vote for Trump.  Milan Kundera said that Europeans are the children of the novel.  There are reasons why Denmark is more utilitarian than Saudi Arabia, and why American politicians who read novels (Obama) are more utilitarian than those who do not (Trump).  If Trump read this horrific exposé in the NYT (highly recommended), showing the human costs of Duterte’s slaughter of thousands of Filipino drug users, he might reconsider his support for the program.  Even better, if he read a novel about the lives of the victims, and their loved ones.

PS.  In fairness, utilitarianism is no cure-all.  Although Obama is more of a utilitarian, Trump’s views on labor market and financial regulation are actually more utilitarian than Obama’s, for reasons I discussed in my previous post.  You must also avoid cognitive illusions in economics.

PPS.  Don’t waste time writing comments claiming that I don’t really believe what I write.  You don’t know me, and I’m not like you.




70 Responses to “Utilitarianism does not lead to repugnant conclusions”

  1. Gravatar of Dtoh Dtoh
    10. December 2016 at 08:14

    Scott, perhaps you really do believe it’s OK to rob Peter to pay Paul, but surely you must also know that that view is so abhorrent that it is likely to result in Peter either resorting to violence or voting for Donald Trump.

  2. Gravatar of ssumner ssumner
    10. December 2016 at 08:35

    Dtoh, If you mean do I literally favor robbing Peter to pay Paul (in a lawless manner), then the answer is obviously no, as I clearly indicated in the previous comment section. If you mean do I favor redistribution programs like the EITC that would metaphorically “rob” Peter to pay Paul, then the answer is yes, but very few Americans find those programs to be “repugnant.” Indeed even Trump favors these programs. Indeed he campaigned against other GOP candidates who promised cuts in these programs, by saying he would avoid cutting the welfare state. Trump’s more of a socialist than I am—I favor a much smaller welfare state than he does.

  3. Gravatar of Dtoh Dtoh
    10. December 2016 at 08:57

    Scott
    Why would you favor domestic programs like EITC and not programs to transfer money outside of the US to poorer countries where the marginal utility of a dollar is higher?

    Do you favor legal theft because you think it more ethical or just because it has fewer bad consequences?

  4. Gravatar of entirelyuseless entirelyuseless
    10. December 2016 at 08:59

    If you trace utilitarianism to cultural change that comes from reading novels, that should make you very skeptical of utilitarianism considered as a truth about how people should behave.

  5. Gravatar of Dtoh Dtoh
    10. December 2016 at 09:04

    Scott, I also think your vaccine analogy is not a good one. In that scenario everyone’s initial expected outcome is positive. It’s an easy decision. Everyone upfront would agree to it. If you know in advance who the losers are, then you have a very different ethical decision that you have to make.

  6. Gravatar of Dtoh Dtoh
    10. December 2016 at 09:05

    Scott, why is what’s happening in the Philippines a bad thing? If it’s legal and leads to an increase in aggregate utility, isn’t it a good thing?

  7. Gravatar of Dhruv Dhruv
    10. December 2016 at 09:26

    Completely unrelated but it would be great to get your thoughts on the Indian demonetisation drive

  8. Gravatar of Leigh Caldwell Leigh Caldwell
    10. December 2016 at 10:14

    Novels can perhaps help explain that part of utilitarianism which arises from empathy for one’s fellow human beings – the aggregation of “the greater good” over 7 billion people. But there’s another important part of being utilitarian: the ability to aggregate utility over time. In the moment that any decision is made, this requires a capability to relate to, and act on, one’s own future pleasure or pain.

    I think that this “empathy” for one’s future self is the more fundamental capability, and the origin of the other. Empathy for others is likely to arise as a secondary consequence of this. Novels (and the arts in general) are an amplifier for this kind of empathy but it doesn’t start from there.

  9. Gravatar of Christian List Christian List
    10. December 2016 at 10:15

    Ssumner again is acting like the dubious “utility” (in the theory of utilitarianism) is something obvious for everyone to see, something humans can objectively determine and agree on. But it’s not. It’s totally subjective and ever-changing.

    And why would you bring up Duterte as an example? That’s one of the worst examples you could have picked. In his view he follows a strict utilitarian approach: getting very tough on a few criminals so that millions of other people are better off.


    horrific exposé in the NYT (highly recommended), showing the human costs of Duterte’s slaughter of thousand of Filipino drug users

    So that’s your idea of a utilitarian approach? A one-sided article that appeals to emotion?

  10. Gravatar of E. Harding E. Harding
    10. December 2016 at 11:10

    “Since young Americans saw many more sympathetic gay characters on TV than old Americans, they are far more in favor of gay marriage, and far less likely to vote for Trump.”

    -Trump won White millennials, despite a high Johnson vote. Adjusted for race, there is little to no evidence of “far less likely to vote for Trump”. The age gap was certainly smaller than in 2012 and 2008. Young people were much more likely to vote for Bernie Sanders in the primary than old people, and much less likely to vote for Hillary Clinton and John Kasich in the primary than old people. That’s it. If the electorate was limited to millennials with the same racial composition as the actual total 2016 electorate, I’d like Trump’s chances.

  11. Gravatar of E. Harding E. Harding
    10. December 2016 at 11:12

    “There are reasons why Denmark is more utilitarian that Saudi Arabia, and why American politicians who read novels (Obama) are more utilitarian than those who do not (Trump).”

    -There is no evidence Obama is more utilitarian than Trump.

  12. Gravatar of E. Harding E. Harding
    10. December 2016 at 11:16

    BTW, Sumner, you should read my comment on marginal support for the Republican nominee this year. Econlog isn’t publishing it:

    “This all may seem to be about trade, but it’s actually about automation and low-skilled men who feel emasculated.”

    -It’s not even about that. It’s about education. The sheer extent and broadness of Trump’s gains relative to Romney (and Clinton’s losses relative to Obama) with the non-college-educated of every race, location, and sex is something which cannot be explained by the loss of manufacturing jobs, racism, or any other facile and wrong explanation. Trump even made gains (percentage-wise, at least) relative to Mitt Romney in most counties in Texas, even as he got a smaller percentage of the vote than Mitt Romney nationwide (and in Texas, due to the cities shifting against him). Most counties in Texas have never had an interest in protection of manufacturing jobs. Almost all were solidly Democratic (pro-free-trade) in every sense until 1928. In the Indiana county that went heaviest for Cruz in the primary, Trump in November got a higher percentage of the vote and a higher number of votes than Mitt Romney did in 2012.

    Focusing on manufacturing jobs (proof: rural Texas), racism (proof: 1964 election, Trump’s gains among Blacks and Hispanics), sexism (proof: Hillary 2008, Trump’s gains among non-college women), or even primary vote (proof: Indiana 2016 primary), as an explanation for Trump’s gains over Romney is a fool’s errand. It’s about education.

  13. Gravatar of Rob Rob
    10. December 2016 at 11:20

    By making the vaccine voluntary in your hypothetical – you don’t merely simplify it, you take away rights-based (libertarian) objections. It’s simply a risk vs. reward calculation for individuals to make for themselves.

    I don’t see how answering your hypothetical elucidates the conflict between rights-based approaches and utilitarianism. (Or perhaps I’ve missed the point of the hypothetical.)

    Thanks for blogging – always enjoyable and educational.

  14. Gravatar of Vladimir Vladimir
    10. December 2016 at 11:31

    Dtoh, being a utilitarian doesn’t mean being stupid. I can think of at least 50 better ways to tackle rampant drug use in the Philippines that don’t involve mass murder. This is why deontological rules are so helpful to humans. We’re quick to assume the best solution is to kill our rivals, so to be true consequentialists (utilitarians) we developed these deontological rules of thumb (like don’t ever kill ppl) which help correct for our distorted brainpower.

    http://lesswrong.com/lw/uv/ends_dont_justify_means_among_humans/

  15. Gravatar of Tom Brown Tom Brown
    10. December 2016 at 11:52

    Re: novels: Scott, you may find this interesting: the case against empathy:
    http://rationallyspeakingpodcast.org/show/rs142-paul-bloom-on-the-case-against-empathy.html
    There’s a transcript at the bottom. He makes a distinction between compassion & empathy.

  16. Gravatar of ssumner ssumner
    10. December 2016 at 12:25

    dtoh, Good question. I feel that the best way to help the world’s poor is by increasing immigration. If that can’t be done for political reasons, then send money directly to the world’s poor (not their governments). If that can’t be done, then do the EITC.

    entirelyuseless, I don’t follow your reasoning.

    dtoh, You said:

    “Everyone upfront would agree to it. If you know in advance who the losers are, then you have a very different ethical decision that you have to make.”

    This is the “girl stuck at the bottom of the well” example, which does raise an interesting dilemma. It seems that the millions of people watching the struggle to get her out (on TV) get a lot of disutility from not “doing all we can”. We’d do more than we would to save a statistical life through highway improvements. Whether that is consistent with utilitarianism is an interesting question.

    Dtoh, Why is it a bad thing to murder thousands of innocent people (maybe millions if Duterte gets his way)? Why do I even have to answer that question? And no, it does not make the Philippines a happier place.

    And no, it’s not being done lawfully.

    Dhruv, Seems like a bad idea. I’ll try to do a post soon.

    Leigh, Long time no see. Thanks for bringing back memories of my old comment section, when people like you frequently left thoughtful comments. Before my Trump derangement brought it down into the gutter.

    Some people believe that the advantage of saving for the future is stronger in climates with long winters, and that leads to cultural change which promotes saving, investment, and ultimately, capitalism. (That’s not to say that capitalism can’t thrive in warm places (Singapore) just that it was unlikely to start there.)

    Christian, You said:

    “Ssumner again is acting like the dubious “utility” (in the theory of utilitarianism) is something obvious for every one to see, something humans can objectively determine and agree on.”

    No I am not.

    As far as your seeming to condone mass murder, I have nothing to say–except read Vladimir.

    Harding, Your facts are wrong about the young, but there’s some truth to your claim that the less well educated supported Trump.

    Vladimir, Good comment.

    Thanks Tom.

  17. Gravatar of E. Harding E. Harding
    10. December 2016 at 12:33

    “Your facts are wrong about the young”

    -No, they’re not wrong.

  18. Gravatar of Eric Falkenstein Eric Falkenstein
    10. December 2016 at 12:50

    6M to 1 is easy, especially when each individual treated has the same ex ante probability of being helped or hurt: the PV for everyone is the same. What about the case of drafting 1 random healthy individual for his organs to save 10 people? It seems 10 lives>1 life makes it good from a utilitarian perspective. Most people would say it is not right.

  19. Gravatar of Jack L Jack L
    10. December 2016 at 13:05

    @Eric because most people make the intuitive decision by considering the context around the scenario, whereas the ‘utilitarian’ decision is forced by the thought experiment to ignore all the context.

    Utilitarianism also doesn’t claim a particular utility function; most utilitarians would agree that they don’t know what the right utility function or aggregation method is. Utilitarianism is more about acknowledging that there exists a utility function (without constructing it), and avoiding behaviours that are inconsistent with a UF existing. This usually reduces to consequentialism and Von Neumann–Morgenstern coherency.

    So, possible factors in a utility function that might explain why killing one unwilling person to save ten might be wrong:

    1. Everyone from now on needs to live in fear of being selected for harvesting. Fear is disutility.
    2. The selection process might not be fairly random, and thus be exploitable; this gives defectors too much power and means the harvesting isn’t de-facto a slightly higher accident rate, but instead could be controlled by agents that mean us harm. (compare: harvesting natural disaster victims for organs; is it *as* bad as picking a person to harvest? Doesn’t feel like it)
    3. And then of course is the option of people actually being wrong about their intuitions; Omelas might seem like a horror in far thinking mode, but I highly doubt many of us would walk away despite what we might think and say now.

    And lastly, of course, does it really matter if our morality system gives you the intuitive answer in situations that don’t actually ever happen? The way to save those 10 people isn’t ever going to be to take one healthy person apart; it’s to work and improve medications, artificial organs, etc. Taking people apart doesn’t scale.

  20. Gravatar of Major.Freedom Major.Freedom
    10. December 2016 at 13:08

    Let’s imagine a vaccine that we could give to all 7 billion people.  The vaccine would prevent lots of cases of an unpleasant illness that lasts for one week, but one out of the 7 billion will die from nasty side effects.  Does it make sense to do this?  Most intelligent people would say yes.  (To simplify things suppose the cost is almost zero, and it’s voluntary.  The program is government funded.)

     

    The seemingly innocuous comments that follow “To simplify things…” actually turns the scenario into one that does not speak to utilitarianism at all.  To “simplify” this scenario by having each individual decide for themselves whether they want to ingest the vaccine or not, turns it into a scenario where utilitarian ethics plays absolutely no role at all.  Each individual decides for him or herself what to do, based on their own intentions, values, and happiness.  It doesn’t matter that the vaccine statistically affects the population in such a way that 7 billion live while one person dies.  Each individual is making the choice and assuming the risk voluntarily.  They are as individuals deciding that the benefits of the vaccine to themselves outweigh the risks of death to themselves.  This scenario is not even related to utilitarian ethics.  Such a society could very well be a pure anarcho-capitalist ethics.

    We make lots of other similar trade-offs in life, as with speed limits of 65/mph rather than 25mph, even though the higher speed leads to more fatal accidents.  There are trade-offs between risk and pleasure.

    But again this “trade off” is what the individual makes for themselves.

    If you are willing to concede that a small risk of death might be a price worth paying for a more pleasant life

    That is sloppy reasoning.  Who exactly is introducing risks of a given individual’s death here?  Who exactly is going to end up having a more pleasant life given that risk was introduced?  Are they the same person?  Or is one person introducing risks of death on another, against their will, so that the first person can live a more pleasant life?  You just aren’t paying sufficiently close attention to the most important factors that need to figure in all ethical reasoning, including utilitarian ethical reasoning.

    Yes, I as an individual can make the choice to introduce risks of death to myself, if I as an individual expect the payoff to be worth it, to me.  If everyone did this, after which it turns out there were a few deaths, this doesn’t justify utilitarian ethics at all.

    No no, in order for a person to justify utilitarian ethics in this scenario, they have to at least be willing to act upon the forceful ending of life a few people here and there, as a means to have the rest benefit.

    If you insist that every scenario to test utilitarianism has to in the first place be one of individual voluntarism, then you are no longer talking about utilitarianism ethics, at best you would only be referring to the outcomes which you expect to look like utilitarian intentions prior, which is “greatest good for greatest number”.

    …then consider a universe where there are 6 million Earth-like planets, all facing the same dilemma.  If it makes sense for Earth to provide this vaccine, then presumably it makes sense on those other 6 million planets.  Now the cost of the vaccine is that 42 quintillion people have more pleasant lives, at a cost of 6 million deaths.  That doesn’t sound like such a good trade-off, does it?  That’s a lot of dead people for merely the benefit of making life better for the rest of us.  Does utilitarianism fail at large numbers?

    No, utilitarianism doesn’t fail; what fails is our brainpower.  Our brains can visual that a cost of 6 million lives is much worse than a cost of one life. But they can’t visualize the fact that the benefit of 42 quintillion happier people is vastly greater than the benefit of 7 billion happier people.

    Sumner, your brain is failing with that comment because it is an economic fallacy, which by the way begs the question.

    By blithely asserting that the “benefit of 42 quintillion happier people” is “vastly greater” than the “benefit of 7 billion happier people”, you are assuming the conclusion you need to prove, namely the “vastly greater” claim, to already be true.  

    In fact, that claim is wrong.  It is wrong not because the contra-positive claim is true, but because it is a claim that contradicts what is actually taking place.  It is an economic fallacy to believe that one can add or subtract or multiply or divide inter-personal subjective values.  You cannot add or subtract utilities across different people.  There is no such thing as “social” utility apart from the distinct and non-summable utilities of the individual people.  If one person is worse off relative to their own counterfactual range of possible outcomes, while say ten or 20 people are each better off relative to their own counterfactual range of possible outcomes as individuals, then it is an utter fallacy to imagine the existence of some “higher” or “aggregate” entity or being or concept or reality that “gains on net”.  The reality is that one person is worse off.

    You can imagine that it is morally justified to aggress against that one person, either because it makes you personally better off, or because it makes you and say one other person better off, but what you cannot claim is the existence of some sort of “net” or “aggregate” gain.  There is nothing that experiences any aggregate or net gain.  It is all in your personal, cognitive failure filled, delusional and warped philosophy.  Intellectually you are on par with a person who imagines the existence of a God who “benefits” when a few innocent people are slaughtered to the cheers of the many on Earth.  All you did was secularize God and named it “Humanity” or “Society”.

    Many of the “repugnant conclusion refutations” of utilitarianism rely on similar cognitive failures.

    Wait what?  How can your failed explanation of utilitarianism and fallacy filled scenarios and assumptions, possibly constitute a showing of the cognitive failures of the critics of utilitarianism?  This is just laughably obtuse reasoning.

    Another set of tricks postulates some horrific practice that occurred in the past, and then ask “What if slave owners got more pleasure from slavery than the suffering of the slaves.  Would slavery be OK then?”  The trick here is that we (or at least I) find slavery repulsive largely for utilitarian reasons.  So the question puts us in an awkward position.  If we endorse utilitarianism then we seem to be endorsing actual real world slavery, even though our utilitarianism has actually caused us to reject slavery.

    This is just pure avoiding and dodging of the question.  The question is what do you have to say about utilitarianism in a world where most people actually believe slavery is a good thing.  You are just being conceited by insisting that your own personal belief about slavery being a bad thing is “the one true” belief regarding utilitarianism.

    No, utilitarianism is not an objective ethics where “true” utilitarianism necessarily and forever includes “no slavery”.  It depends on the people’s beliefs at the time and their happiness when acting on those beliefs at the time.  You are just labelling “no slavery” as utilitarian TODAY simply because you are fortunate enough to live in a world where most people, for the most part, are happier without slavery.  You can’t do that.  The question was about the past, when most people were happier WITH slavery.  At that time, utilitarianism sanctions slavery.  It would not be until most people disliked slavery that the utilitarians could pretend to participate and say: aha, see?  Since ending slavery is a good thing, and since most people are against it, utilitarianism works!

    Um, no.

    What is actually happening is that people change their views over time for non-utilitarian reasons, and that is what utilitarianism completely depends on.  Utilitarianism is really a historical explanation of what most people were happy about at any given time.  It cannot serve as an ethic of what people ought to do and what they ought not do going forward. We need to first wait and see what makes most people happiest, and the only way we can wait and see for that is if another set of ethics has already been presupposed, by virtue of whatever people actually did from then up until now, that gave you the information on what most people are most happy about.

    What, did you first conduct a study on what most people tell you will make them happiest next week or next year, before you decided what you yourself ought to do and what you ought not do until next week or next year?  Of course not!  Did anyone else?  Of course not!  How can anyone do this when everyone has to wait for everyone else?

    Nobody can actually follow utilitarian ethics.  What utilitarianism really is, is a strategy for states to test their aggressions against the populace.  Do you think it is a coincidence that every Utilitarian has always framed their advocacies in terms of what states ought to do or not do?  No individual person, company owner or otherwise, physically comes into contact with anywhere close to the bodies or property of “most people”.

    It is not a coincidence that Sumner can only imagine a government vaccine program.

    The first Utilitarians were people who wanted states to behave in such a way that the fewest people were harmed and the most people were benefitted.

    Some utilitarians were fascist eugenicists.  Others, far fewer of course, were radical atheists who wanted states to shut down entirely.  None of them, though, were really utilitarians in how they acted and in what they believed they themselves ought to do or not do.

    Utilitarianism is grounded on the happiness of people at the time, and accepts people being “wrong” according to other, non-utilitarian ethics.  As long as “most people find the most happiness” in a class of actions, then utilitarianism accepts those actions as justified.

    Indeed I’d go further, and argue that slavery was abolished in the 19th century largely for utilitarian reasons. The 19th century saw an enormous boom in utilitarian thinking.

    But only if most people in fact were happiest with abolishing slavery can abolishing slavery be “utilitarian” as you define it.  If in fact most people believed slavery was moral, which was the case for tens of thousands of years in human history, then slavery is in fact utilitarian.

    PS.  In fairness, utilitarianism is no cure-all.  Although Obama is more of a utilitarian, Trump’s views on labor market and financial regulation are actually more utilitarian that Obama’s, for reasons I discussed in my previous post.  You must also avoid cognitive illusions in economics.

    TIL Utilitarianism = What Sumner believes is ethical.

  21. Gravatar of Major.Freedom Major.Freedom
    10. December 2016 at 13:15

    To clarify, the utilitarian ethics Sumner is espousing is one among many versions. His is a collectivist version. The hedonistic, egoistic version, similar to the virtue-free hedonism of Bentham, is also a utilitarianism. Hitler was a utilitarian. He did what he thought gained him the most utility and that served as what was just to him.

    Disclaimer: Even though Sumner and Hitler were both Utilitarians, they did have some differences.

  22. Gravatar of engineer engineer
    10. December 2016 at 13:17

    Why do I even have to answer that question? And no, it does not make the Philippines a happier place. And no, it’s not being done lawfully.

    Does not nearly every leader think he is doing the greater good, at least in the long run, and consider himself a utilitarian? I am sure that Jim Jones thought he was doing the greater good when he handed out the Kool-Aid. I am sure Duterte thinks he is doing the greater good with his war on drugs. Crystal meth is the problem drug in the Philippines. The impact, in ruined lives, drug overdoses, inter-gang killings, etc., is enormous…A short-term brutal crackdown would be justified as a utilitarian solution if it resulted in a major decrease in drug use and a better society in the long run, would it not? After all, these Chinese drug gangs are smart and brutal; they cannot be handled by ordinary police tactics.

  23. Gravatar of Major.Freedom Major.Freedom
    10. December 2016 at 13:30

    Sumner, you can test your collectivist utilitarianism by honestly answering this question:

    Suppose your spouse and children, and your closest friends, were the only beings in the universe who somehow had a biological trait that when extracted from all, and only all, could cure cancer and all other diseases for everyone forever. We’re talking trillions and trillions of lives.

    Unfortunately, extracting the trait from a person results in their death.

    Sumner, would you murder your spouse, children and closest friends to save the lives of strangers? And, would you tell them this to their faces today? “If your body is discovered to have the cure for cancer and disease, I will murder you, just so you know.” Would you tell that to a child? If not, why not? Would it be a good or bad thing to say based on the few people in your life, or based on “the greatest good for the greatest number”?

    You said utilitarianism does not lead to repugnant conclusions. Prove to your readers how your answer to the above is not repugnant.

  24. Gravatar of Major.Freedom Major.Freedom
    10. December 2016 at 13:37

    What most people are most happy about doing is NOT the same thing as what most people ought to be most happy about doing. Sumner keeps conflating these two things.

  25. Gravatar of Christian List Christian List
    10. December 2016 at 13:58


    As far as your seeming to condone mass murder…

    I’m not condoning mass murder, quite the opposite is true. I was pointing out that Duterte (in his opinion) follows a strict utilitarian approach, that millions of Filipinos seem to agree with him, and that they have a point: it is in fact a utilitarian approach. A bad one, but it is one.

    I’ve read Vladimir and I liked it. I assume I liked it because he is right but also because he is basically saying what I’ve been saying all the time:

    Utilitarianism is the best navigational system but it needs (deontological) “rules of thumb” as a foundation, like “You shall not kill” for example. Rules that have been established over millennia. Rules that existed before utilitarianism and cannot necessarily be derived from utilitarianism alone.

    Pure Utilitarianism comes into play “only” at the margin, at the boundaries, for example when people realize that a certain millennia-old rule of thumb is not working in a specific case.

    Maybe we can meet at this point?


    We’re quick to assume the best solution is to kill our rivals, so to be true consequentialists (utilitarians) we developed these deontological rules of thumb (like don’t ever kill ppl) which help correct for our distorted brainpower.

    That’s Hegelian dialectic at its very best. It’s really good. It tries to distract from the fact that utilitarianism lacks exactly those basic rules of thumb by drawing this lovely synthesis. I really think Hegel would have loved it.

    I also should point out that I really admired your “The Great Danes” article, especially the parts about the Great Depression. And all your articles about organ trade of course. I’m always very harsh on you when in fact you are right at least 95% of the time. I’m just trying to attack you on the 0-5% because that’s where there’s room for discussion.

  26. Gravatar of TravisV TravisV
    10. December 2016 at 14:31

    John Cochrane on China:

    http://johnhcochrane.blogspot.com/2016/12/the-next-crisis.html

    WSJ editorial board on China (new):

    http://www.wsj.com/articles/trumps-chinese-currency-manipulation-1481155139

  27. Gravatar of Dtoh Dtoh
    10. December 2016 at 14:56

    Scott,
    I’m not saying that what is happening in the Philippines is either legal or increases happiness. What I asked is if it WERE legal and it did increase happiness would it then be OK under your utilitarian value system?

    Also are you using happiness and utility interchangeably?

  28. Gravatar of Tiago Tiago
    10. December 2016 at 15:14

    Have you read “moral tribes” by Joshua Greene? If not, you will like it a lot

  29. Gravatar of B Cole B Cole
    10. December 2016 at 16:14

    Since young Americans saw many more sympathetic gay characters on TV than old Americans, they are far more in favor of gay marriage, and far less likely to vote for Trump. —-Sumner.

    It was Trump who stood in front of the GOP convention and condemned Orlando as an attack on gays and said that should not happen in our country. For all of his flaws, Trump seems to have lifted the GOP out of homophobia. Then Peter Thiel too.

    Ted Cruz was shrieking about transsexuals in bathrooms.

  30. Gravatar of Doug S. Doug S.
    10. December 2016 at 16:21

    Here’s an interesting ethical dilemma:

    A private company wants to build a bridge. However, bridge building is dangerous, and it expects that its employees will end up at significant risk of death. The company proposes two different ways to build the bridge: if it uses method A, one specific employee, named John, will die. If it uses method B, there will be three deaths, but they will be randomly chosen from a group of 100 employees, and nobody will know which ones until it’s too late to save them. The company can’t force its employees to risk themselves, but it can bribe them. The employees won’t agree to accept a 100% chance of death for any amount of money, but they will agree to face a 3 in 100 risk of death for a price that the company is willing to pay.

    As a government employee, you now have three choices:
    1) Force John to sacrifice himself so the bridge can be built using method A.
    2) Allow the company to build the bridge using method B, resulting in three dead employees.
    3) Forbid the bridge from being built at all.

    What should be done?

  31. Gravatar of Carl Carl
    10. December 2016 at 18:03

    Does it change your calculus if the 6 million people in your example all reside in your home town? Or do you stand up to convince your neighbors that they should take their shots and die quietly so the people on Dune will not have to risk reinfection?

  32. Gravatar of Dan W. Dan W.
    10. December 2016 at 21:16

    Utilitarianism justified the US economic policy that it was OK to put American workers out of a job if that job could be done for less by a non-citizen. It further rationalized that it was for the greater good if the jobs American workers wanted to do were done in nations with dismal standard of living.

    The Utilitarians had it all figured out except one thing: The American workers had a vote. And so they helped elect a US president who sees the world differently than do the Utilitarians.

    The Utilitarians see the election of Trump as a repugnant conclusion. But said election was a consequence of Utilitarian philosophy punishing a group of Americans who had a vote while rewarding non-Americans who had no vote.

    I am very curious to know how the Utilitarians will correct this deficiency.

  33. Gravatar of Major.Freedom Major.Freedom
    10. December 2016 at 22:04

    Doug S., is the government employee being forced against their will to choose among only 1), 2), or 3)?

    If you asked me, the government employee should allow the owner of the company, and the employees, to choose for themselves. I doubt John will agree to participate in any method that guarantees his own death.

  34. Gravatar of blacktrance blacktrance
    10. December 2016 at 23:02

    The main problem with utilitarianism is its neutralism. From a neutral perspective, the happiness of 7 billion outweighs that of one, and that of 42 quintillion outweighs that of 7 billion. But our perspective isn’t that of a neutral disinterested central planner who cares about everyone equally. The happiness of several strangers outweighs the happiness of one, but our own happiness or that of a single friend easily outweighs that of numerous strangers.
    For example, when it comes to redistributive policy, the proper question is not just “will the poor benefit by more than I lose?”, but “is it worth it for me to lose this money in exchange for the poor being somewhat better off?”. If not, one should oppose the policy even if it produces a net benefit from a neutral perspective.

  35. Gravatar of Tom Brown Tom Brown
    11. December 2016 at 00:36

    O/T: Scott, you ever take one of those questionnaires to see where you land on the 2-dimensional political grid?
    https://pbs.twimg.com/media/CzXhrrzWQAAXMXe.jpg

    ?

    If that one doesn’t suit you, there’s always this:
    https://pbs.twimg.com/media/CyEIGi8WIAAg63b.jpg:large

  36. Gravatar of weareastrangemonkey weareastrangemonkey
    11. December 2016 at 05:46

    Major List, let me answer in Scott’s place
    (he can correct me if he thinks I get this wrong)

    It’s a really horrible choice you’ve painted. It would be easy if all you had to do was sacrifice your own life – but sacrificing your family and friends is far worse. If you think the right thing to do is to save your family and friends in that example then, given the numbers you gave, you are effectively allowing a holocaust to occur every year for more than a millennium of millennia, for longer than all of human history, creating mountains of bodies that literally dwarf all the mountain ranges on earth, an unfathomable and all but endless litany of pain and misery all so you can save your family and your friends. Utilitarianism, on the other hand, says you make the supreme sacrifice in order to prevent an atrocity that would make the combined actions of Hitler, Stalin and Mao a footnote in history. So obviously the utilitarian choice is the more moral and least repugnant option. Of course, I expect many people would lack the courage and morality to take this choice. And of course, I wouldn’t feel the necessity to have conversations with our children about such horrible variants of the trolley problem. Satisfied?

  37. Gravatar of mbka mbka
    11. December 2016 at 06:21

    Scott,

    you are confusing me here because Derek Parfit’s Repugnant Conclusion is a very specific case of making a utility calculus. It is not about cost-benefit trade offs. It is about treating happiness as fungible and scalable. It is about the dangers of treating people as mathematical entities.

    He poses the following dilemma:

    1- assume your philosophy is maximizing total happiness
    2- assume a small population p1 has a large individual happiness per capita h1. Total happiness (1) = p1 x h1
    3- assume you could make it happen that a much larger population p2 has a very poor, but positive per capita happiness h2 with total happiness (2) = p2 x h2.
    4- by mercilessly maximizing total happiness you have to conclude that total happiness (2) > total happiness (1) and that therefore you would have to prefer a world of many marginally happy people to a world of a few very happy people. That is the repugnant conclusion.

    I call this a failure of algorithmic reasoning. In honor of Scott Alexander’s “Beware the man of one study” I’d call this “Beware the man of one thought system”. Maximising total happiness clearly becomes meaningless here; it wasn’t meant to increase the number of individually fairly miserable people just for the sake of scoring a mathematical victory in maximising total happiness.

    See here for more:
    https://plato.stanford.edu/entries/repugnant-conclusion/

  38. Gravatar of mbka mbka
    11. December 2016 at 06:42

    Edit:
    4- should have read:

    4- by mercilessly maximizing total happiness you have to conclude that IF total happiness (2) > total happiness (1), then you would have to prefer a world of many marginally happy people to a world of a few very happy people. That is the repugnant conclusion.
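
    To put purely illustrative numbers on that comparison (mine, not Parfit’s): if population (1) is 1 billion people at happiness 100 each, total happiness (1) = 10^9 x 100 = 10^11; if population (2) is 1 trillion people at happiness 1 each, total happiness (2) = 10^12 x 1 = 10^12 > 10^11, so maximizing the total forces you to prefer the huge, barely-happy world.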

  39. Gravatar of weareastrangemonkey weareastrangemonkey
    11. December 2016 at 07:03

    Major.Freedom,

    Regarding your horrible choice, it would have been easier if we could just sacrifice our own life rather than those of our family and friends. Regardless, if we don’t sacrifice our family and friends then it will be as if there were a rolling holocaust longer than all human history, creating mounds of corpses that dwarf all the mountain ranges on earth. Utilitarianism would have us make the ultimate sacrifice to prevent a tragedy that would make the combined acts of Hitler, Stalin and Mao all but a footnote in the annals of atrocities. It is pretty obvious that the utilitarian option is both the most moral and least repugnant of these two highly repugnant choices.

  40. Gravatar of ssumner ssumner
    11. December 2016 at 07:51

    Everyone, No one noticed my bad math? It should have been 42 quadrillion. You guys are more innumerate than I am!

    Eric, Good question. I’ve done a lot of posts on the need for organ markets, which dominate this policy. The issues also apply to the military draft—in WWII lots of young American men were drafted (and died) to make life better for the rest of us.

    But I do agree that most people would not favor the policy you describe. Again, there are other policies that dominate it, but it is a good counterexample

    (BTW, I doubt you could generate ten healthy lives by sacrificing one—but your example is worth thinking about even if you cannot do so today.)

    One other point, if you did go down that road, the best option would be to take the organs from murderers, like the Chinese used to do, not via lotteries.

    Engineer, No they don’t all think that, which is why there is so much opposition to utilitarianism. I could give you lots of examples. If you say that people enjoy hedonistic activities, religious fundamentalists will often respond that it doesn’t matter if they enjoy them, it’s sinful. If you say more immigration would help poor third world people, commenters like Harding will tell you that their welfare doesn’t matter, only Americans matter.

    Christian, I don’t agree that Duterte is a utilitarian. He seems to relish the thought of killing millions of drug users. I doubt that he cares about their welfare. Similarly, Hitler didn’t care about the welfare of the millions of Jews he killed.

    Dtoh, Yes, but read my post on how misleading that question is. It’s like the “suppose slavery made us happier” example in my post.

    Ben, You may not know this, but Trump was perceived as a bigot by both his opponents and many of his supporters. They LIKED his idea of banning Muslims. I agree that he was not a specifically anti-gay bigot; I was referring to his bigotry more broadly.

    Doug, Choose the option that maximizes aggregate utility, whatever that is.

    Carl, You seem to confuse two issues—what moral system is best, and how do selfish people like me choose to live?

    Dan, Dumb comment. The public did not vote against trade, they voted for Trump. Polls show the public thinks trade is good. And Trump said he favors freer trade than Obama favored.

    blacktrance, That’s not a problem with utilitarianism, that’s a problem with us. Your comment is like saying the problem with Christianity is that people don’t always love their enemies. You missed the point.

    mbka, You said:

    “That is the repugnant conclusion.”

    That doesn’t seem at all repugnant to me. Or to the Catholics, I might add.

    You said:

    “individually fairly miserable people”

    Now you are cheating. These people are not fairly miserable, they have to be happier than not. They must have a positive utility. I agree that having more miserable people is no net gain.

    Think of it this way. Suppose that during our lives we have a set of moments that are either plus one or minus one in utility. A happy life is one where that aggregates to a positive number (and I have no idea if it is positive for me, BTW). In that case, you want to maximize the total number of happy moments, minus the total number of unhappy moments. It makes no difference whether you do that with one person or one quadrillion people.

    Yes, the actual numbers are not exactly plus and minus one, but the same idea applies if there are differences in intensity of happiness and pain.
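
    To illustrate with made-up numbers: a life with 60 happy moments and 40 unhappy ones scores 60 - 40 = +20, and two lives that each score +10 also add up to +20. On this simple additive view the totals are interchangeable, which is why 42 quadrillion somewhat happier people swamp 7 billion.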

  41. Gravatar of Dan W. Dan W.
    11. December 2016 at 08:38

    Scott,

    Are you now saying Trump is not deplorable? I must have misunderstood all your rantings about his awfulness. I’m happy for you that you have accepted Trump’s election, but what about the tens of millions who still believe Trump is the worst thing ever? Is not their utility lower? Yet you seem to be saying that Trump’s election increased aggregate utility and is thus a good thing! E. Harding is elated.

    What I interpret your Utilitarianism position to be is a restatement of Jefferson’s “Declaration” that individuals deserve the right to Life, Liberty and their Pursuit of Happiness. At least that is the Utilitarianism that you defend. What you have not shown is how Utilitarianism can form the basis of a societal contract.

    One limitation is no person or group of persons knows what will make society most happy. A second limitation is there is no stable form of government that can be empowered to actively do what is best for all of society without abusing that power. The only model that works is that of the benevolent dictator. Good luck finding one of those.

  42. Gravatar of Marty Bormann Marty Bormann
    11. December 2016 at 08:42

    Sumner demonstrates yet again that utilitarians are the masters of special pleading.

  43. Gravatar of Marty Bormann Marty Bormann
    11. December 2016 at 08:44

    “Doug, Choose the option that maximizes aggregate utility, whatever that is.”

    Kinda like, the Fed just needs to do “whatever it takes” to raise NGDP growth? I’m starting to see a pattern here.

  44. Gravatar of Carl Carl
    11. December 2016 at 09:12

    @ssumner
    I probably am confusing them. But I wonder if there is a tie between your last couple of posts on nationalism and utilitarianism. Does utilitarianism have barriers of scale or just of time? I think you are arguing it simply has barriers of time. As people become wiser they expand the circle of people whose utility they care about maximizing: people of different sexual persuasions, different colors, nationalities and so forth. But the reaction to globalism, the nationalism you wrote about a few posts ago, seems a reaction against expanding the circle. That may be ignorance (i.e., a barrier of time), or it may be a problem of scale (our happiness increases when the group of which we are a part is smaller, both because there is greater reciprocity and identity). And there may be scale effects on effort. Aggregate effort may be greater when everyone feels that he is trying to maximize the utils of a smaller group rather than a larger group. I recognize that competition can benefit all and that you are advocating free markets, competition and private property, but to truly compete there must be barriers between the competitors. And those barriers can be things like borders and property.
    When you decide to stiff the Dunians by boycotting the vaccine, you may be hurting the Dunians in the moment but in the long run you may be supporting a moral system that works better by being truer to the limited scope of its practitioners.

  45. Gravatar of blacktrance blacktrance
    11. December 2016 at 11:10

    blacktrance, That’s not a problem with utilitarianism, that’s a problem with us. Your comment is like saying the problem with Christianity is that people don’t always love their enemies. You missed the point.

    Scott,
    Non-neutralism isn’t just (descriptively) what people do, it’s also what they should do. Neutralist philosophies like utilitarianism ignore the central importance of motivation and its connection to action. If an action would increase overall utility but reduce mine, it wouldn’t give me a reason to act – and it’d be paradoxical for me to be obligated to act when I don’t have a reason to do so. (If it did give you a reason to act, then you’d be irrational, because you’d be choosing a lower-utility option over a higher one.)
    Or, from a different angle, if the claim that someone should act morally can’t be derived from their actual desires, what relevance does morality have? Either morality is necessarily what one should do, in which case it must be derived from our actual motivations (which implies non-neutralism), or it’s independent of us, in which case we have no reason to care about it.

  46. Gravatar of Major.Freedom Major.Freedom
    11. December 2016 at 11:34

    weareastrangemonkey:

    Thanks for giving your honest answer.

    Yes, it *would* be “easier” to think individuals can do what is demanded of them “from above” and kill themselves. It would make it very easy on the utilitarian.

    That is, interestingly enough, at root the design, the intention, of Sumner’s collectivist utilitarianism. The good of the many not only outweighs the good of the individual in the abstract argumentative sense, but each individual must accept and stand ready and willing to destroy themselves in the march towards what others, such as Sumner in his mind, and every other utilitarian in their own minds, define as “social” progress.

    Notice how Sumner’s only rebuttal to mbka’s comment “That is the repugnant conclusion”, was:

    “That doesn’t seem at all repugnant to me.”

    Utilitarians cannot rationally agree with anyone else on a particular set of outcomes as being “utilitarian” or otherwise. There can be only secular faith-based agreement or disagreement. The fact that people are persuaded of something is enough.

    But wait, if we cannot, according to Sumner, know if anything is objectively true, that all we can do is be persuaded, how can anyone know whether they themselves or anyone else has in fact been persuaded by an argument? That would also have to be determined on the basis of persuasion. People would first need to be persuaded about whether or not they or anyone else was persuaded that an argument is true or false. And they would need to be persuaded of whether THAT persuasion exists. And so on…

    In other words, what Sumner is arguing about is by his own account not what he says it is. What he is doing is by his own account totally and utterly unknowable meaningless gibberish. But then how do we explain what has actually occurred on this blog over the years? The answer is this: Sumner can only claim to be presenting a meaningful theory or argument about anything, by virtue of the fact that what he says is true about the human mind and knowledge as such, is in fact false with a capital T. In other words, if Sumner’s epistemology were accurate, then nothing he says is or can be regarded as true. Only if it is inaccurate can the content of what he says have the truth value he claims it does have.

    Sumner refuses, or is perhaps too scared, to introspect, so he turns outward, to find himself in everyone and everything else, and the only choice we have is to either arbitrarily agree or arbitrarily disagree. If we agree, we’re being pragmatic…to Sumner’s interests because he finds himself in us that way. If we disagree we are not being pragmatic…to Sumner’s interests because he doesn’t find himself in us that way.

    There is in fact no room in Sumner’s utopia for disagreement. Everything you think and do is judged as either pragmatic and utilitarian, or it isn’t, as judged by Sumner, and you have to convince him of this first (how? Not rationally, hence his avoidance of my comments, it must be an appeal to his passions and prejudices to decide what is right and what is wrong).

    Sumner has declared war on everyone’s minds, which he ironically believes is justified by the terms of the human mind. He declared to everyone on this blog that he denies the existence of a common rational ground among human minds which we can all appeal to in order to know which ideas are objectively wrong and which are objectively right. All of his readers are told that they are categorically unfit to know the truth. Anyone who thinks they can know any truth about anything is a mortal danger. And yet we are supposed to take all THAT as an objective truth.

    Back to the topic of utilitarianism now: if 1 person is killed or harmed, or 10 million people are killed or harmed, there is no rational basis for concluding that the happiness of the remainder of the (unharmed, or living) population is this much higher or that much higher so that “social” utility goes up. Utilitarianism is really a cover for a person’s desire to introduce aggression in society in order for the utilitarian to get what they want, either the good for themselves, or even worse, what they believe they have the wherewithal to know is good for everyone else. All while attacking everyone’s minds in the process. It is a vicious circle for these people.

  47. Gravatar of Major.Freedom Major.Freedom
    11. December 2016 at 13:10

    Sumner wrote:

    These people are not fairly miserable, they have to be happier than not. They must have a positive utility. I agree that having more miserable people is no net gain.

    Net gain TO WHOM?!?!!???

    For the love of Pete, this is a totally delusional fantasyland of fakery.

    Gains and losses are only experienced by individuals.

    All you are doing when you say you perceive a “net gain” is taking YOUR OWN personal experience, warped and twisted into some arrogant objectification applicable to and imposed on everyone else.  The individuals who are harmed and sacrificed according to their own judgments DO NOT experience a gain, and their loss is not “counter-acted” or “outweighed” by the individuals who experience a benefit and are not sacrificed.  You are not referring to anyone or anything real by invoking these mystical concepts of “net” gain, or “social” gain, or “aggregate” gain.

    Think of it this way. Suppose that during our lives we have set of moments that are either plus one or minus one in utility. A happy life is where that aggregates to a positive number (and I have no idea if it positive for me, BTW). In that case, you want to maximize the total number of happy moments, minus the total number of unhappy moments. It makes no difference whether you do that with one person or one quadrillion people.

    OF COURSE it makes a difference.  The difference is that you cannot add or subtract utilities across different individual people.  What you are doing when referring to “happy moments” which are “added up” for one individual and for many individuals, is invoking just another variation of the long-refuted concept of the “util”.

    You cannot add up “happy moments” for people. Not for one individual and not for millions. Happiness is not a binary on/off, yes/no concept whereby one individual can be observed to have experienced 145 happy moments in their lifetime while another individual experienced 156 happy moments, such that the second person “lived a happier life”. For one thing, one happy moment is not equal in subjective experience to every other happy moment. Giving a starving person a morsel of food every minute of every day for a year, only to see them suffer from one big horrific disease, is not an example whereby you can say they “lived a happy life on net” on the basis of observing what you believe to be many thousands of happy moments from the food, but only one unhappy moment from the horrific disease.

    Oh, and the nonsense cannot be saved by then attributing a larger “number” to the one big horrific disease, in a desperate attempt to outweigh the many thousands of moments of happiness, all so that you can reach your desired conclusion, which is really based on totally different criteria that are not part of this social engineering accounting nightmare.

    You are in la la land.

    This:

    Yes, the actual numbers are not exactly plus and minus one, but the same idea applies if there are differences in intensity of happiness and pain.

    Is just an admission that the method is bogus.

  48. Gravatar of Major.Freedom Major.Freedom
    11. December 2016 at 15:10

    weareastrangemonkey:

    To respond to your comments directly:

    If you think the right thing to do is to save your family and friends in that example then, given the numbers you gave, you are effectively allowing a holocaust to occur every year for more than a millennia of millennia, for longer than all of human history, creating mountains of bodies that literally dwarf all the mountain ranges on earth, an unfathomable and all but endless litany of pain and misery all so you can save your family and your friends.

    To be accurate, what actually kills those people is not any positive action I took, and in addition there is no outstanding contract I have entered into such that my inaction would violate their property rights. What actually kills them is the natural world as it stands against human welfare. We would be standing against viruses, bacteria, and genetic flaws and imperfections. I would not be the creator of the mountains of bodies.

    Now to be sure, it is very easy to fuzzy the concepts here and attribute the responsibility for all those deaths on what you can more easily understand, namely, the maliciousness of humanity, focused on a single individual. You can then go about your days with the urgent and anxious belief that a grand and benevolent program of stopping a lot of death and suffering is within your reach, within your grasp, that all it would take is just a few murders.

    Be a mortal villain to just a few people, in order to become a hero to trillions of people.

    I avoid using phrases like “repugnant” to describe certain beliefs, unless that is the context of the debate. I will say that for me, I would not murder my spouse, no matter how many people would otherwise die from cancer or disease. The universe can collapse in on itself, and I would still choose my spouse. I don’t expect you to understand or to accept that, because it is totally and completely indifferent to your life. Not hostile, because I am not the creator of cancer, but indifferent IF I were faced with having to murder my spouse to save you. What I would do is support, financially and with my time, ways to cure cancer and disease using the knowledge that my spouse has something in their body that can be used to cure cancer and disease. If the example is that ONLY their body has the cure at this time, then surely cloning is a science to support.

    Utilitarianism, on the other hand, says you make the supreme sacrifice in order to prevent an atrocity that would make the combined actions of Hitler, Stalin and Mao a footnote in history. So obviously the utilitarian choice is the more moral and least repugnant option.

    Why exactly would it be more moral and less repugnant? That is the task, the challenge, I am putting out there. It is not enough to just describe the various outcomes and then conclude with “so obviously”. For one thing, there is a big difference between millions of people being murdered by human action and millions of people dying from non-human causes like cancer and disease. I dispute the way you conflate and equivocate on “atrocity”. Dying from cancer is not an “atrocity” akin to people being murdered by human hands. Just because death results in both cases, that does not mean that the causes are equivalent.

    Billions and billions of people have been killed by nature itself since the dawn of mankind. Is nature evil like Hitler and Stalin were evil? Of course not!

    Of course, I expect many people would lack the courage and morality to take this choice.

    Are you sure about that? The way you frame it, murdering one’s spouse is the easy solution, because of the tremendous immorality you say is inherent in the choice of not murdering them. Surely a choice so incredibly immoral is easy to avoid, is it not?

    Is it not more courageous to choose not to murder one’s spouse, and to accept trillions and trillions of people dying from cancer and disease? What is courageous? What is moral? What if I said that protecting my spouse, even against the prospect of trillions and trillions of people dying from cancer and disease, is one of the most courageous and moral things I could ever do? It would be me against a LOT of people looking to convince me, and likely taking it upon themselves, to murder my spouse, wouldn’t it? Would I need to be courageous in this scenario, or would I need to be a coward?

    And of course, I wouldn’t feel the necessity to have conversations with our children about such horrible variants of the trolley problem. Satisfied?

    How about adults?

    To be fair to myself, I have to test what I think and subject it to extreme scenarios. Flipping things around, suppose my spouse were suffering from a deadly disease, and ONLY the murder of every last human being on Earth, other than myself, would stop them from dying. Suppose there was a button whereby, if I pressed it, everyone else would die and my spouse would be cured. Would I press that button? This is by design the most awkward and tough ethical question I could imagine being faced with. What would I do? Well, since you were honest, so will I be honest: I would not.

  49. Gravatar of engineer engineer
    11. December 2016 at 16:22

    From the perspective of arguing for a public policy, it is important to try to justify it with a utilitarian argument. But when a utilitarian defense butts up against a liberty, I usually side with the liberty. Take gun control. The proponents of strict gun control base their case on a utilitarian argument: strict control will give fewer deaths overall in society…translating to more happiness. But I like my freedom and reject that argument, and although I can think of utilitarian rebuttals, the loss of personal freedom alone is enough for me to reject it.

    In your personal life, utilitarianism will never trump self-interest for the vast majority of people….the communists tried to change human nature on a massive scale and it did not work. Religion is the only belief system that I know of that has been able to change this, and then only with the promise of a better afterlife.

  50. Gravatar of weareastrangemonkey weareastrangemonkey
    11. December 2016 at 19:25

    major.freedom,

    A longer response than necessary. The less repugnant option is not to let trillions of people die so you can avoid doing something significantly less bad but far more personal. We can dance around and paint it in as many ways as you might like – but if someone would prioritise their family over trillions of other human lives, or even millions, then they have done something terrible but very human. I expect many people would make the immoral choice because morality is a weak motivator.

  51. Gravatar of weareastrangemonkey weareastrangemonkey
    11. December 2016 at 20:16

    Major.Freedom

    “That is the task, the challenge, I am putting out there. It is not enough to just describe the various outcomes and then conclude with “so obviously”. ”

    You attempted to paint a picture in which the repugnant option (your words) was the non-utilitarian option, for exactly the purpose of concluding with “so obviously”. If you find allowing trillions to die for your own self-interest (the well-being of your family) less repugnant, then fair enough. I personally think it very convenient that your morality would lead you to not have to make any personal sacrifices.

  52. Gravatar of Marty Bormann Marty Bormann
    11. December 2016 at 20:22

    @monkey boy

    Why is prioritizing your family over millions of others immoral?

    Assuming that’s not too hard to answer, which millions are you talking about? Millions of fellow countrymen, humanity in the abstract, sentient life in the galaxy, what, exactly?

  53. Gravatar of weareastrangemonkey weareastrangemonkey
    11. December 2016 at 20:52

    @bormann

    Read his example.

  54. Gravatar of Major.Freedom Major.Freedom
    11. December 2016 at 21:13

    weareastrangemonkey:

    The less repugnant option is not to let trillions of people die so you can avoid doing something significantly less bad but far more personal.

    You keep saying that my murdering my spouse is less “repugnant”, and less “bad”, but that is what I am asking you to explain. WHY would doing so be less repugnant, less bad? Is it merely a question of numbers to you? That as X increases, where X = the number of deaths, the level of “repugnance” increases, regardless of whether the deaths are caused by murder or disease?

    Where is the relevance of cancer and disease in your answer?

    We can dance around and paint it in as many ways as you might like – but if someone would prioritise their family over trillions of other human lives, or even millions, then they have done something terrible but very human. I expect many people would make the immoral choice because morality is a weak motivator.

    Again, you keep using these terms, “terrible”, “immoral”, etc, but these terms presuppose a set of ethics. That is what I am asking about. WHY is it “terrible” to not murder one’s spouse in this scenario?

    What if I tell you that murdering one’s spouse is more repugnant and less moral than not doing so in a context of cancer killing lots of people?

    What if I said your preferred choice is the more repugnant choice? You would probably expect me to explain how it is more repugnant, right? What is your explanation for why it is less repugnant? My answer for why it is more repugnant is that repugnance is not divorced from values. If I were faced with the choice of murdering my spouse to stop cancer from killing many murderers, rapists, and thieves, then I would easily reject the numbers game without much thought. Numbers alone are not enough.

    What is your explanation?

  55. Gravatar of weareastrangemonkey weareastrangemonkey
    11. December 2016 at 22:10

    major.freedom

    You framed the question in a fashion that suggested letting trillions die was less morally repugnant. If that was not your intention, if it was merely to raise the fact that we have no firm meta-ethics, why bother with the elaborate example? If that was your intent, then I find it convenient that you think it more moral to let trillions die so that your family can live.

  56. Gravatar of weareastrangemonkey weareastrangemonkey
    11. December 2016 at 22:16

    major.freedom

    If you cannot see how allowing trillions of people to die to save your family members is selfish and repugnant (even if potentially understandable as the decision of a human being), then I cannot communicate with you about morality. Just as I could not explain to a true psychopath why it would be wrong to rape and murder children.

  57. Gravatar of Student Student
    12. December 2016 at 05:22

    @weareastrangemonkey,

    MajorFreedom’s morality is: as long as he gets his, all is well. As long as it’s not his starving or dying or suffering, who cares.

  58. Gravatar of Marty Bormann Marty Bormann
    12. December 2016 at 05:25

    @monkey

    That’s what I thought. More special pleading from utilitarians.

  60. Gravatar of Marty Bormann Marty Bormann
    12. December 2016 at 05:27

    BTW, is it trillions, or millions? You seem to go back and forth. How about 999,999?

  61. Gravatar of ssumner ssumner
    12. December 2016 at 08:50

    Dan, Don’t be an idiot. Of course I still believe Trump is the worst.

    Carl, You said:

    “I recognize that competition can benefit all and that you are advocating free markets, competition and private property, but to truly compete there must be barriers between the competitors.”

    Why?

    Blacktrance: You said:

    “Neutralist philosophies like utilitarianism ignore the central importance of motivation and its connection to action. If an action would increase overall utility but reduce mine, it wouldn’t give me a reason to act”

    Of course you’d have a reason to act. Your reason would be “to make the world a better place”. Why do you think people give to charity?

    Engineer, You ignore the fact that societies all over the world are becoming more utilitarian over time. So “human nature” need not be a barrier to positive change.

    Marty, You said:

    “Why is prioritizing your family over millions of others immoral?”

    Are you from Sicily? You think it’s OK to give that government job to a sibling who is less qualified than a stranger?

  62. Gravatar of Carl Carl
    12. December 2016 at 09:21

    @ssumner
    If my company outcompetes your company and then has to share all our profits with your company, what are we competing for?

  63. Gravatar of Marty Bormann Marty Bormann
    12. December 2016 at 10:27

    Scott, have you grown so bored with pushing the boundaries of stupidity in economics, that you feel you have to move into moral philosophy?

  64. Gravatar of blacktrance blacktrance
    12. December 2016 at 11:57

    Scott,
    “Make the world a better place” is compatible with non-neutralism, so you need more than that for utilitarianism. It’s entirely possible to give to charity and such while still being partial to yourself and those close to you. Utilitarianism says not only to make the world a better place, but also that the satisfaction of your motivations and desires only matters as part of increasing global utility – and that’s the part that ignores motivation. It doesn’t provide me with a reason for that kind of self-disregard.

  65. Gravatar of ssumner ssumner
    13. December 2016 at 09:00

    Carl, Who said anything about sharing profits. Reread what I wrote.

    Marty, Yes. But mostly I do it to annoy people like you.

    Blacktrance, I think you misunderstand several things about utilitarianism.

    1. It’s not a prediction about how people will act. I know that I’m selfish, and I know that I do not act in a way that maximizes global utility. I know that the world would be happier if I behaved differently. The motivation is supposed to come through moral education, whether religion or the narrative arts (I prefer the arts). We try to do better, understanding that perfection is not a realistic goal. And indeed some cultures do better than others, and also better than they themselves did in the past.

    2. It’s efficient for people to devote more time to those close to them, even if you believe in maximizing global utility. Structures like families and work units have real value to society; we are social animals. Infants cannot raise themselves. The key is finding the right balance. It’s OK to give your kids Christmas presents, and not give them to strangers, but it’s not OK to hire your incompetent cousin for a government job.

    I see utilitarianism as a benchmark for government policy decisions. It can also be used in one’s personal life, but I agree that people aren’t likely to use utilitarianism for daily life decisions, rather they rely on rules of thumb like don’t lie, cheat, and steal. Or work hard and save.

  66. Gravatar of blacktrance blacktrance
    13. December 2016 at 13:04

    Scott,

    I think we may be disagreeing about the content of the motivation. If you mean that people can be persuaded to change their behavior in a way that makes the world a better place, that seems uncontroversially true. But that’s not enough for utilitarianism – it requires setting aside your own desires (or pleasures or whatever) except as part of global utility. That’s the part that can’t motivate a rational person, because it requires choosing a lower-value option over a higher-value one. It’s true that it’s a higher-value option from the point of view of the world, but that’s not your point of view and there’s no reason why you should act as if it were.
    It’s not a prediction about how people will act, but if it’s true that it’s how people should act, then it must be derived from how it’d be rational for them to act, which depends on their particular individual desires, so utilitarianism (being neutralist) is out.

    There’s nothing special about government policy that makes it different from using utilitarianism in one’s personal life. Whether you should support or oppose a policy is subject to the same decisionmaking criteria as all of those in your everyday life. There’s no fundamental difference between, say, “Is it worth it for me to buy a new car?” and “Is it worth it for me to endorse having some of my money taken away to be given to others [in some particular policy]?”

  67. Gravatar of Jason Smith Jason Smith
    13. December 2016 at 17:15

    Scott,

    Back in my college days, I wrote a paper for my ethical theories philosophy class about utilitarianism where I concluded (generally) that while a single utilitarian moral choice did not lead to ‘repugnant’ conclusions, a series of utilitarian moral choices could do so, much like an optimizing algorithm can get stuck in a local optimum. You have to make some pretty strong assumptions about the space of possible moral choices (essentially, a pretty monotonic/convex structure of the ‘utility manifold’) or allow a diversity of moral choice algorithms (virtue ethics, categorical imperatives, irrationality) in order to ensure society doesn’t get stuck in a sub-optimal dystopia. The idea is similar to Noah Smith’s analysis of Twitter as a dystopian technology:

    http://noahpinionblog.blogspot.com/2016/12/is-twitter-dystopian-technology.html
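
    To make the “stuck in a local optimum” analogy concrete, here is a minimal, purely illustrative Python sketch (the two-peaked landscape and the greedy step rule are hypothetical, not taken from the paper or the linked post). A myopic improve-one-step-at-a-time rule stalls at the nearer, lower peak unless it happens to start close enough to the higher one:

    def utility(x):
        # Hypothetical two-peaked "utility landscape": a local peak at x = 1
        # (value 1) and a higher global peak at x = 4 (value 2).
        return max(-(x - 1) ** 2 + 1.0, -(x - 4) ** 2 + 2.0)

    def greedy_climb(x, step=0.1, iters=200):
        # Repeatedly pick whichever of {left, stay, right} looks best,
        # i.e. a myopic "choose the locally best option" rule.
        for _ in range(iters):
            x = max((x - step, x, x + step), key=utility)
        return x

    print(round(greedy_climb(0.0), 2))  # stalls near 1.0, the local optimum
    print(round(greedy_climb(3.0), 2))  # reaches about 4.0, the global optimum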

  68. Gravatar of Carl Carl
    14. December 2016 at 00:17

    @ssumner
    What was the correct utilitarian response of the Native Americans to the arrival of the Europeans? And what was the correct utilitarian course of action for the Europeans to follow? Should we look at the two groups as inhabiting two distinct moral realms or do we need to aggregate their utils to maximize utility?

    @Blacktrance
    Why is choosing the selfless short-term pleasure necessarily the higher-value option for an individual? Compassion, shame, and the esteem of others can be powerful motivators. You don’t have to assume that individuals are acting on behalf of maximizing global utility to posit a theory in which the consequence of individuals acting to maximize their own utils is maximum global utility.

  69. Gravatar of ssumner ssumner
    14. December 2016 at 06:34

    blacktrance. You said:

    “There’s no fundamental difference . . . ”

    I am a philosophical pragmatist, and hence have little or no interest in fundamental differences. I accept your point, but all that matters to me is practical differences. And I think as a practical matter utilitarianism is more useful in policy decisions. That’s partly for the reasons you suggest: at the individual level it’s harder to set aside self-interest.

    You said:

    “That’s the part that can’t motivate a rational person, because it requires choosing a lower-value option over a higher-value one. It’s true that it’s a higher-value option from the point of view of the world, but that’s not your point of view and there’s no reason why you should act as if it were.”

    Somehow we are talking past each other. Elsewhere you concede that cultural change can make people less selfish and more civic minded, but this seems to deny that proposition. I’m missing something. Why do you think people often anonymously give to charity?

    Jason, The beauty of utilitarianism is that it always allows you to take one step back, into a more and more generalized version of “rules utilitarianism”. The most famous example is the First Amendment. Banning some speech would probably make the world a happier place, but it may be wise to allow all speech, to avoid the bad side effects of trying to figure out which sort of speech to ban.

    Ironically it was Noah Smith’s tweets that convinced me I didn’t want to do twitter.

    Carl, The short answer is I don’t know, but I’m pretty sure the Europeans should have treated the natives better. As for the optimal solution, my hunch is that it would have involved the whites buying enough land to suit their purposes, not stealing it. (Unfortunately this still would have been a disaster for NAs, for standard reasons like disease, alcoholism, etc.) I’m afraid I don’t have a good answer at the moment. History is so full of complex counterfactuals, it’s difficult to evaluate. For instance, the discovery of America led to new foods which allowed the populations of China and India to expand by hundreds of millions relative to pre-1500 levels. How do you factor all that in? It’s hard.

  70. Gravatar of blacktrance blacktrance
    14. December 2016 at 14:19

    Carl,
    The point of utilitarianism is that individuals ought to act on behalf of maximizing global utility. It’d be quite a coincidence if maximum global utility could be achieved without people aiming at it. It’s certainly possible to construct incentives that lead to people being more pro-social, but that’s different from them not giving special priority to whatever they happen to value.

    Scott,

    A position like “utilitarianism is a good rule of thumb for policy decisions” is different from “utilitarianism is true”. If you’re only endorsing the former, then I’ll agree 90% of the time. But it sounded like you were endorsing the latter, and that’s what I’ve been arguing against.

    “Elsewhere you concede that cultural change can make people less selfish and more civic minded, but this seems to deny that proposition.”

    That’s not the proposition I’m denying. Education, cultural change, and the like have two mechanisms of action: they affect what people value, and they can cause them to think more carefully about what their values imply and to weigh them against each other to make themselves more consistent. For example, maybe if I’m raised to care about other people more, then I’ll value their well-being more highly. Or if I think about my beliefs and actions and reflect on how they affect other people, I might change my behavior based on how much I already value them (if I wasn’t acting in accordance with that previously). But it can’t make it right for me to choose an option that’s lower-value for me over one that’s higher-value.

    To construct a toy example, let’s say that my utility is the sum of two components: one from the utility of a beggar (say, 0.1x), and one from the amount of money I have. I have a choice to give some money to Beggar, to increase his utility by 5 and decrease the amount I get from money by 2. The payoffs then are:
    Give: U_beggar = 10, U_blacktrance = 99 (1 from beggar, 98 from money)
    Don’t Give: U_beggar = 5, U_blacktrance = 100.5 (0.5 from beggar, 100 from money)
    So the rational course of action for me is Don’t Give (100.5 > 99), but the utilitarian action is Give (10 + 99 > 5 + 100.5).
    Now suppose that I think I value the beggar’s utility at 0.1x, but I’m actually wrong and I get more out of his well-being than I thought, such as 0.5x. Or maybe I’m instead raised in a way that causes me to value the beggar at 0.5x. So then:
    Give: U_beggar = 10, U_blacktrance = 103 (5 from beggar, 98 from money)
    Don’t Give: U_beggar = 5, U_blacktrance = 102.5 (2.5 from beggar, 100 from money)
    So now the rational action for me would be Give.
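
    The same arithmetic as a small, purely illustrative Python sketch (the 5-point gain to the beggar, the 2-point money cost, and the 0.1 / 0.5 weights are just the numbers from the toy example above):

    def payoffs(weight, give):
        # Returns (U_beggar, U_blacktrance) for a given altruism weight.
        u_beggar = 10 if give else 5    # giving raises the beggar's utility from 5 to 10
        u_money = 98 if give else 100   # giving costs 2 units of money-utility
        return u_beggar, u_money + weight * u_beggar

    for weight in (0.1, 0.5):
        for give in (True, False):
            u_b, u_bt = payoffs(weight, give)
            # A utilitarian maximizes the sum u_b + u_bt; "I" maximize only u_bt.
            print(weight, "Give" if give else "Don't Give", u_bt, u_b + u_bt)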

    It’s sometimes possible to move from the first scenario to the second. In the second scenario, I genuinely do have an overriding reason to give to the beggar. But the point is that utilitarianism requires me to give to the beggar even in the first scenario, when it would be irrational for me to do so – where I don’t have enough of a reason. In other words, it’s possible to change people’s payoffs, but utilitarianism requires them to *ignore* their payoffs as such and only to maximize the sum (or average or whatever) of everyone’s payoffs. That’s the part they don’t have a reason to do.

    (As you say, it may be more efficient for them to sometimes act as if they were giving their own payoffs special consideration, because they have local knowledge. But that’s different from actually giving their payoffs special consideration, and if it would maximize global utility for them to act otherwise, as it often would, then utilitarianism says they ought to do so.)
