Utilitarianism Doesn’t Add Up
On the impossibility of comparing and aggregating utility, and what this means for ethics
Introduction
Utilitarianism begins with the simple idea that morality is about maximising happiness. But what does that really mean?
In this post, I argue that it is conceptually incoherent to compare or aggregate happiness between different individuals. There can be no objective scale of happiness between individuals, even in theory.
Then, I sketch a more coherent, realistic, and intuitive alternative. One based not on calculations or sacrifices for "the greater good", but on free cooperation and autonomy — a morality of mutual happiness, built together.
What is utilitarianism?
Utilitarianism is the theory that morality consists in taking those actions that lead to “the greatest happiness for the greatest number of people”. To do this, utilitarians ideally try to quantify and aggregate happiness (utility): for example, if a particular course of action brings about 10 units of happiness for one person and -20 for another, it scores -10 overall. If another course of action scores higher, it is to be preferred. Simple![1]
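For concreteness, here is a minimal sketch of the calculus the theory imagines, using the made-up numbers above plus an invented second option (the names and figures are purely illustrative):

```python
# A minimal sketch of the utilitarian calculus described above, with made-up numbers.
# Each action maps the people affected to a utility score; the "best" action is
# simply the one with the highest total.
actions = {
    "course_of_action_1": {"person_a": 10, "person_b": -20},  # totals -10
    "course_of_action_2": {"person_a": 3, "person_b": 2},     # totals 5
}

def aggregate_utility(scores: dict[str, int]) -> int:
    """Sum utility across everyone affected, as if each person's units were comparable."""
    return sum(scores.values())

best = max(actions, key=lambda action: aggregate_utility(actions[action]))
print(best)  # "course_of_action_2" is preferred, since 5 > -10
```

The rest of this post is about whether those per-person numbers can be defined or compared at all.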
At first glance, this appears to be a promising definition of morality. It treats people as equals. It prescribes caring for others’ happiness and sadness, not just one’s own. It aims at increasing the total happiness in the world — who could possibly oppose that?
Problem 1: Valuation is not linear in any clear way
How would we go about assigning these numbers? Firstly, play a big game of “would you rather?” Would you rather be happy or sad? Would you rather the pleasure of eating your favourite cake, or the bliss of a desperately needed wee? Would you rather live forever as a bumblebee, or live one human life? You can go through this process for all the experiences you can imagine, each time placing the experience along a line going from highest to lowest utility.
That's a good start, but now how do we attach numbers to the line? As far as I know, there is no way to effectively define one unit of utility. We cannot, for example, define one utiliton (the unit of utility/happiness) as the pleasure of eating one Belgian chocolate, because how much pleasure this gives us is not constant. Nor is it clear how to extend such units, if we could define them. Would n units be the pleasure from eating n Belgian chocolates? That won't work, because each bite will vary.[2]
Setting that issue aside, the basic principle is that we can order our different experiences along a line from best to worst, by repeatedly asking which of two options we would choose. If we had option A already, would we be willing to exchange it for option B? If so, we say that option B is better.
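As an illustration of that procedure, here is a minimal sketch in code, assuming a hypothetical would_rather judgment that only I can supply for my own experiences. Note that it produces an ordering without ever assigning a number:

```python
from functools import cmp_to_key

# My own (entirely subjective, made-up) ranking, best first. Only I can supply this.
MY_PRIVATE_RANKING = ["a desperately needed wee", "my favourite cake", "a life as a bumblebee"]

def would_rather(current: str, offered: str) -> bool:
    """If I already had `current`, would I exchange it for `offered`?"""
    return MY_PRIVATE_RANKING.index(offered) < MY_PRIVATE_RANKING.index(current)

def compare(a: str, b: str) -> int:
    # If I'd willingly swap a for b, then b sits higher on the line than a.
    if a == b:
        return 0
    return -1 if would_rather(a, b) else 1

experiences = ["my favourite cake", "a life as a bumblebee", "a desperately needed wee"]
print(sorted(experiences, key=cmp_to_key(compare)))  # ordered worst to best, for me alone
```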
Problem 2: Interpersonal comparison is impossible in principle
The trouble is, it is not only our own experiences that we need to place on the line: we must also judge the experiences of others. But we have no way to objectively compare our own experiences with someone else’s, in order to say which is preferable. I cannot step into your mind and experience your experiences. Nor can I simply assume they are consistent with my own experiences of the same stimuli. As we saw above, even my own experiences of a constant stimulus are inconsistent. The problem of other minds tells me I can hardly know you're conscious at all, so how can I begin to judge how intense your joys or sufferings are?
But the problem is deeper still, because we cannot separate the contents of experience from the experiencer. There is no experience separate from the one experiencing it, and no one experiencing it separate from what is being experienced.[3] Our experiences and the values we attach to them are inseparable from ourselves in the act of experiencing/valuing. The idea that we could separate the value of our experiences from the one evaluating them is wrong. So then, if we are to truly compare the values of our experiences, we will need to not just experience the same thing as others, but to experience being them.
This is more than comparison being difficult in practice. The problem is it’s impossible even in theory! There is no objective way to evaluate what constitutes the same amount of utility in two different people. We cannot give an objective valuation to an experience, because the act of valuation is inalienably subjective. Just as there is no experience without an experiencer, there is no value without the one evaluating.
The Utility Monster
In this context we can take a fresh look at the utility monster thought experiment. Wikipedia explains:
> A hypothetical being, which Nozick calls the utility monster, receives much more utility from each unit of a resource that it consumes than anyone else does. For instance, eating an apple might bring only one unit of pleasure to an ordinary person, but could bring 100 units of pleasure to a utility monster. If the utility monster can get so much pleasure from each unit of resources, it follows from utilitarianism that the distribution of resources should acknowledge this. If the utility monster existed, it would justify the mistreatment and perhaps annihilation of everyone else, according to the mandates of utilitarianism, because, for the utility monster, the pleasure it receives outweighs the suffering it may cause.
But what does it even mean to say that the utility monster receives 100 times as many “units of pleasure”? How are we meant to operationalise this? And how do I convince a utilitarian that I'm a utility monster and they must serve me?[4]
This is a big problem! How can you quantify another being’s happiness compared to your own? It seems as though we are imagining happiness is produced by tiny “happiness particles”. We could assume that creatures like us (i.e. fellow humans) experience joy and suffering in roughly the same amounts as we do. But this is an arbitrary assumption, and we have no more reason to believe it than to believe that I experience 100 times as much pleasure and suffering as everyone else. I do not want to be accused of “scientism”, but what sense are we meant to make of the notion of utility until we have at least some hypothetical way to measure it?
The experience machine
We might try to solve this by borrowing Nozick’s experience machine and reprogramming it so that you experience someone else’s life perfectly, to the point that you experience being that person and not yourself. But even then, you have no way to objectively compare the two experiences. You would only be able to make such comparisons after you have been removed from the machine and restored to being yourself, at which point you are not experiencing their experiences as them, but are experiencing yourself remembering them. Your judgment would not be a fair, impartial, objective evaluation, but would be thoroughly tainted by your own perspective — you cannot escape your own point of view.
We might similarly imagine two people using the experience machine to share a memory with each other. After the experience session is complete, they ask each other which experience they prefer. Is there anything to prevent them disagreeing over which experience they preferred? I do not know of any. Each is evaluating the two experiences from their own perspective, and there can be no valuation separate from their perspectives, so what is there to guarantee agreement?
Admittedly, the more similar the two subjects are in their perspectives (beliefs, background, species etc) the more likely they will agree. But we need to compare across all sentient beings. We need, for example, to compare the value of our experiences against those of bees. And even in those cases where there is agreement, what we have is intersubjective agreement, not objective fact. This point will be important later.
Perhaps we could resort to a hypothetical, perfectly rational, impartial third party experiencing the memories of both in the experience machine, and then rendering their judgment? Again though, what is there to guarantee that rationality and impartiality alone will give a consistent judgment? And more importantly, what right do they have for their judgment to be taken as authoritative over others? Why do they get to decide the value of my experiences?
And once we have resorted to such a perspective, we can no longer claim to be maximising the happiness of the people involved, but are instead maximising happiness as valued by our hypothetical third party. The happiness of those involved can be dismissed entirely! All we care for is this rational, impartial (practically divine) judgment over what's really good. We've already abandoned utilitarianism at this point.
There is a distinct air of authoritarianism and paternalism in all this, enforcing one’s judgments on others on the grounds that “we know better”. This is the kind of attitude found historically in communism and colonialism, and it was used to justify actions we now widely consider horrific abuses.
Relativity
From this problem it emerges that the value/utility of an experience is observer-relative, like space and time. If I run north at 6 mph on a train heading south at 10 mph, what is my speed, and which way am I running? There is no absolute answer, only frame-dependent answers. Likewise, which experience is better than another is experiencer-relative.[5]
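Worked out with the numbers above (the sign convention is my own choice for illustration):

```python
# The train example from the text, worked numerically.
# Convention (my assumption): north is positive, south is negative, speeds in mph.
my_velocity_relative_to_train = +6        # running north along the carriage
train_velocity_relative_to_ground = -10   # the train is heading south

# Relative to the train, I am moving at 6 mph north.
# Relative to the ground, the velocities simply add:
my_velocity_relative_to_ground = my_velocity_relative_to_train + train_velocity_relative_to_ground
print(my_velocity_relative_to_ground)  # -4: the very same motion is 4 mph south in this frame
```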
This is also similar to the situation in economics, where there is no objective value of any particular good. Not even currency, gold, or potatoes. Instead, prices are agreed on between buyers and sellers, such that both parties feel they are profiting from the exchange. For the utilitarian to put a price on someone else’s experiences is like unilaterally setting a price on someone else’s property and forcing the trade (i.e. stealing). Except in the case of utilitarianism, it might not even be that person who receives the “preferable” experience it was exchanged for.
Compassion and empathy
What about compassion and empathy? Don't these require us to infer the experiences and valences of others? Isn’t this the heart of morality?
It’s true that empathy and compassion lie at the heart of morality, and I think this is why utilitarianism is initially so appealing and intuitive. But compassion and empathy do not consist in inferring what others actually experience. Instead, they consist in our sharing in the feelings of others. I cannot experience your suffering as you, but I can experience it as shared/transmitted to myself. Your suffering becomes my suffering — I do not become you.
It's not about accurately simulating (let alone calculating) the suffering of others; it's about allowing them to transmit that suffering to you, to share it with you, in order that you may share your kindness back. In this way, we pool together our problems and our solutions with those we care about.
I'll also note that it is not moral to share in evil joys and sorrows. We shouldn't weep with those mourning a failed genocide, or rejoice with those who enjoy torturing animals. These pleasures and pains should not be given the same moral status and concern as others. Empathy should discriminate.
Cooperation and "mutual happiness"
So, we cannot exchange our experiences with others and thereby compare and rank them directly. What can we do? We can evaluate and rank our own experiences, and then freely cooperate with others who are doing the same. In this way we have a proxy for exchanging experiences. I give up experience A and gain experience B, while you give up experience Γ and gain experience Δ, and both of us consider ourselves to have traded a lesser happiness for a greater. The happiness of both parties has been successfully increased![6]
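As a sketch of that proxy exchange, under the assumption that each party can consult only their own ranking (all names and rankings below are invented):

```python
# A minimal sketch of the "proxy exchange" above. Each party has only their own
# ranking of their own experiences; the trade goes ahead only if *both* judge the
# experience they gain to be better than the one they give up.

def prefers(ranking: list[str], gained: str, given_up: str) -> bool:
    """Does this party, by their own ranking (best first), prefer what they gain?"""
    return ranking.index(gained) < ranking.index(given_up)

my_ranking = ["B", "A"]    # I rank experience B above experience A
your_ranking = ["Δ", "Γ"]  # you rank experience Δ above experience Γ

trade_is_mutual = (
    prefers(my_ranking, gained="B", given_up="A")
    and prefers(your_ranking, gained="Δ", given_up="Γ")
)
print(trade_is_mutual)  # True: both parties, by their own lights, come out better off
```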
It is therefore crucial to promote free cooperation, community, and friendship, as the way to increase our own and everyone’s happiness. Not an abstract “aggregate happiness”, but the happiness of each and every one.[7]
Via such cooperation, community, and friendship, we collectively produce a morality that increases what we might call “mutual happiness”. Morality is not about aggregate happiness, but about our being happy together, and happy in each other’s happiness. It is a dialectical process by which we find ways to live together in harmony, setting shared goals, rights, and mutual expectations. We might think of this as a kind of moral “market price”, laying out our rights, obligations, and norms. It is not the “objective” valuation, but an intersubjective, communal valuation.
In this way, we ground morality not in abstract measures of utility, nor in the judgments of hypothetical, rational, impartial observers. Both approaches require taking a single evaluator as objectively correct, much like divine command theory. Instead, morality arises from the interplay of our free cooperation, allowing practical alignment within a plurality of perspectives.
Conclusion
Utilitarianism begins by promising equality. But this turns out to be an authoritarian equality, where one perspective must judge the value of all, and make decisions on others’ behalf. The happiness of each is disposable for “the greater good”. We might say that each perspective is equally worthless. And yet, there is no way to even objectively aggregate our subjective happiness!
Instead, we should try to respect each other’s autonomy and subjectivity, and attempt to cooperate for mutual benefit. Each perspective is priceless, because it is the very thing that sets value. It is not about making unilateral decisions that sacrifice others, but about finding ways of living together in harmony and increasing our mutual happiness. This is the heart of morality.
What do you think?
OK… I’ve been a bit ambitious with this post, and now I must invite your thoughts and especially criticisms.
Can utility be objectively aggregated or compared in some way I haven’t thought of?
How could you try to measure another being’s happiness?
Do we have the right to measure and evaluate the experiences of other beings?
Is there reason to believe there’s an objective quantity of happiness, even if it’s impossible for us to find it?
Should morality give any weight to “evil” joys/sufferings, such as those of sadists who enjoy animal suffering?
Let me know what you think in the comments!
Footnotes

[1] In practice, utilitarians generally don't actually attach numbers and perform such calculations all the time, but if it were possible and practical to do so accurately, that would be the ideal, according to the theory.

[2] Nor could we just extend the duration of the experience. Eating a chocolate slowly is not the same as eating it quickly multiplied by the extra time; it is a different experience. The only way to truly extend the length without changing the experience itself would be for the experiencer to have their experience slowed down by the same factor, at which point it would be indistinguishable from the original.

[3] I discussed this more in my post ‘There Is No Conscious Observer’.

[4] I don't even need to convince them all. I just need one or two rich utilitarians to pay my bills...

[5] This fits into my more general relationalist metaphysics, which I discussed previously in ‘Everything Is Empty’.

[6] Supposing utilitarianism is correct, this setup also guarantees that total utility will be increased, even if one party is unwittingly a utility monster. It is only by working together freely, sacrificing no one, that we can guarantee we aren't reducing total utility! And if there's a nonzero chance someone may be an infinite utility monster, getting infinitely more utility than others, it may even be obligatory under utilitarianism to take this path.

[7] This leads to a distinctly libertarian view, though a left-libertarianism rather than the right-wing version (which I do not view as truly libertarian). Democracy, too, should be understood and pursued as a forum for free cooperation and compromise as a society.
Comments

Interesting post, has some good challenges. I love philosophy, man.
I think problem 1 is a good point and requires a defense, probably rooted in moral realism or using the constructivism-esque argument you talk about later, but I think most of the rest of the post focuses on the *difficulty* of comparing utility instead of countering its correctness.
See, I don’t have to give you a full list of the objectively correct utility rankings to show utilitarian hedonism is right — all I need to do is get you to admit that a sad person suffering, being in debilitating pain, is WORSE than the experience of a happy person being loved and fulfilled.
If you concede that one experience is *worse* than the other, you’ve already conceded the ball game that we can compare states in this way, and that there’s states we prefer and can aim for. The “middle ground” that’s more difficult to compare is just this exact same idea but more difficult, and probably too difficult for a human to judge with our poor brains.
You say we can’t compare two lives by having someone perfectly rational experience both and choose, but your solution is this:
> We can evaluate and rank our own experiences
This is a much, much worse method than the idea of someone rational ranking. You can’t rank any moment except the one you’re in by your own logic! You have never lived in the past, and by what you’ve laid out, you don’t have authority because all you have is the memory — all humans can cling onto is one moment we label the present. Your solution is a flimsier version of the thing you’ve countered in the rest of the post.
I argue that the utility monster is easy for utilitarianism to sidestep here:
https://ramblingafter.substack.com/p/the-repugnant-conclusion-is-easy
But I agree with everything else!!!
In fact, it makes me want to switch from viewing myself as a utilitarian to something different, though similar/related. The cooperation you talk about at the end sounds like a promising start... cooperativism? Mutualism? Though, like, the idea needs more fleshing out I think. Or more defining.