The Greatest Happiness of the Greatest Number

Any good ethics textbook will tell you that “the greatest happiness for the greatest number” is something of a useless chimera of an ethical precept—imagine a gazelle with the legs of a tuna. There are two rather different principles jammed together here. “Promote the greatest happiness” is a principle exhorting us to maximize the quantity of happiness. “Promote happiness for the greatest number” tells us to seek the widest distribution of happiness. But these two principles don't necessarily jibe—they can flatly contradict each other.

Suppose the population is evenly divided between blue people and green people. Green people are usually just a little bit happy, averaging, say, +3 on a scale from −10 to +10. (I'm going to use averages here, for convenience's sake. The example doesn't create a difference between average and total utility.) But blue people are either extremely happy (+10) or barely happy (+1), depending on how happy green people are. If green people are not happy at all (below +1), then blue people are elated (+10); otherwise, blue people are barely happy (+1). The “greatest happiness” principle then says that we want a world in which green people are not at all happy. That's a world with an average of 5 on our scale. The “greatest number” principle seems to say we want a world in which everyone is at least a little happy. That's the world with an average of 2 on our scale.
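If you like, the arithmetic can be checked with a toy calculation. This is just a sketch of the example above; the population labels and the particular numbers (greens at 0 in the first world, +3 in the second) are illustrative stand-ins, not anything the principle itself dictates.

```python
# Toy model of the blue/green example: two worlds, two principles.
# Scale runs from -10 to +10; each group is half the population.

def average(world):
    """Average happiness across the (evenly divided) groups."""
    return sum(world.values()) / len(world)

# World A: greens not at all happy (0), so blues are elated (+10).
world_a = {"greens": 0, "blues": 10}

# World B: greens a little happy (+3), so blues are barely happy (+1).
world_b = {"greens": 3, "blues": 1}

print(average(world_a))  # 5.0 -- the "greatest happiness" world
print(average(world_b))  # 2.0 -- the "greatest number" world
```

The maximizing principle picks World A (average 5); the distribution principle picks World B (average 2), where everyone is at least a little happy.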

Eminent utilitarians like Bentham, Mill, Sidgwick, and Parfit end up embracing the maximizing principle and simply dropping the distribution principle. But what is left over fails badly to capture the upshot of the Enlightenment conception of “public happiness” or “social happiness” that the “greatest happiness” principle is attempting to express. If screwing over the green people is what maximizes the total… well, nobody said morality is easy. Well, harumph.

I propose that the maximizing utilitarian interpretation, as influential as it has been, is a wrong turn down a dead end—a heretical gloss on Enlightenment gospel. As any member of the Eastern Christian church will tell you, orthodoxy and heresy are not matters of popularity. Therefore, the fact that it is a minority view will not stop me from declaring that what I will call the “fuzzy contract” interpretation is the correct and therefore orthodox interpretation.

The two parts of the “greatest happiness for the greatest number” principle reflect two separate but conceptually related aspects of the Enlightenment creed. First, happiness is each person's moral goal. Second, people and their lives are of equal worth. Mainstream heretical utilitarianism chokes on both ideas.

Mill's attempt to cross the chasm from individual to aggregate happiness is an infamous example of the fallacy of composition. A classic example of the fallacy would be: atoms are invisible, therefore aggregates of atoms are invisible. Mill's argument—that since happiness is good for each of us, the general happiness is good for the aggregate of people—really is like that. Sidgwick, the most clear-headed of heretical utilitarians, leaves us at the end of The Methods of Ethics with the famous “dualism of practical reason,” unable to reconcile heretical utilitarianism with orthodox Enlightenment moral individualism. On Sidgwick's account, we arrive at the value of the aggregate only through a mysterious intuition.

Regarding the egalitarianism embodied in the “greatest number” principle, heretical utilitarianism does even worse. Utilitarians make a big deal out of the fact that each person's pleasures and pains count equally. But the equality of pleasures and pains is a far cry from the equality of persons. Rawls's and Nozick's “separateness of persons” criticisms get it right. The thing that counts equally is not feelings, but lives. To conceive of us as containers for pleasures and pains simply doesn't take persons and their life-constituting projects seriously.

Let's step back and think again about the “greatest happiness for the greatest number.” It's not a bad principle, really. And there's a way of parsing it so that it makes good sense. Don't start with “greatest happiness.” Start with “greatest number.” The greatest number of people in society is, well, everybody—each individual, that is. So we're thinking about each person. Got it? Now we move on to “greatest happiness.” For each person, we want the greatest happiness for them. For each person, we're going to try to see it their way.

This puts us in the neighborhood of the contract view. Everybody desires to achieve happiness by successfully implementing his or her life-plan. Now, imagine we're all deliberating together about policy. Gary proposes policy P, because it's good for him. Lucy, who is well-informed and rational about her own interests, testifies that P would seriously hinder her ability to realize her life-plan and achieve happiness. So P doesn't promote the greatest happiness for Lucy. Now, on the interpretation I'm after, the “greatest happiness for the greatest number” principle states a presumption against imposing P, even if Gary's gain in happiness is bigger than Lucy's loss. The Pareto conception of social welfare is sort of like this. We consider a change in policy an improvement just in case it makes someone better off and no one worse off. (Pareto wasn't talking about happiness, though, he was talking about preference satisfaction—ophelimity!)
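The Pareto criterion just described can be sketched in a few lines. This is a toy illustration under my own assumptions, not a formal welfare economics model: the utility numbers for Gary and Lucy are invented, and the side-payment figures simply stand in for the kind of compensation the bargaining story imagines.

```python
# Sketch of the Pareto criterion: a move counts as an improvement only if
# at least one person is better off and no one is worse off.

def pareto_improvement(before, after):
    """True if `after` makes someone better off and no one worse off."""
    someone_better = any(after[p] > before[p] for p in before)
    no_one_worse = all(after[p] >= before[p] for p in before)
    return someone_better and no_one_worse

# Illustrative (invented) utility numbers:
status_quo = {"Gary": 4, "Lucy": 5}
policy_p   = {"Gary": 9, "Lucy": 2}   # Gary gains more than Lucy loses...

print(pareto_improvement(status_quo, policy_p))        # False: Lucy is worse off

# ...but with a side payment that fully compensates Lucy, P can pass:
p_with_transfer = {"Gary": 7, "Lucy": 5}
print(pareto_improvement(status_quo, p_with_transfer)) # True
```

Note that raw maximizing would already approve the uncompensated P (total rises from 9 to 11); the Pareto test withholds approval until Lucy is made whole, which is the shape of the bargaining story below.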

Now, on the strict contractarian bargaining model, each person has a veto. Unanimity is required to make a move. Now, there's a lot you can do to achieve unanimity. If P is worth $n + 0.01 to Gary, then he'll put $n on the table for Lucy to get her to change her vote. Etc. Of course, in the real world, we can't always actually bargain, can't actually offer each other side-payments, and can rarely get a unanimous decision. So our contractarian method is going to have to be fuzzy around the edges to work. The degree of gains and losses matters. What matters is not so much the quantity of feelings, as the impact on a life. If P would make Gary, Sara, and Delores rather better off, and would make Lucy just a little worse off, and no alternative that would be better for Lucy would be as good as P for the others, then we should probably go ahead and just implement P. Sorry Lucy! Now, this is in the neighborhood of Scanlon's “reasonable rejectability” criterion. (But it isn't the same: Scanlon rejects the idea that happiness is the sole consideration.) Even if Lucy knows P won't be as good for her as some alternative, she's benevolent, cares about other people, and knows their projects count too. And when other future hard choices arise, Lucy hopes that others will not hold out for every last scrap of satisfaction when doing so would place a significant burden on others. So it just wouldn't be “reasonable” for Lucy to object to P. However, if P really screwed up Lucy's projects and happiness in a deep way, then it wouldn't be reasonable for the others to press it.

There is a balancing to perform between the “greatest happiness” for each individual and that of all the other individuals, “the greatest number.” The fact that this balancing is required isn't a symptom of incoherence. Quite the opposite: it's a sign of realism in a world in which the pursuit of happiness is intricately interdependent. Our diverse ends aren't automatically reconciled—our interests aren't harmonized by magic. “Public happiness” requires ongoing give and take. Almost every real-world change produces a loser. We should aim to keep losses small and gains broad, to create a stable system of institutions where everyone in pursuit of happiness is able to take a lot, and is required to give only a little. That's what I think the “fuzzy contract” view comes to. That's what I think the pre-heretical exponents of “the greatest happiness for the greatest number” had in mind, and that's largely what the American Founders were thinking when they talked about public or social happiness.

May the true principle of “the greatest happiness for the greatest number” be blessed. And may heretics be damned!