Part I: Evolution of cooperation - Chapter 4
MAXIMIN
Since litigating the merits and demerits of Enlightenment-era philosophical arguments forms a central pillar of Western political philosophy, we are not going to do it here. But one person who did take up this debate was John Rawls, in his seminal 1971 book, A Theory of Justice.

For Rawls, justice is ultimately about fairness. According to him, a just society and a legitimate government can be established when we create their rules without knowing what position in that society we will occupy. To do this, he proposed a famous thought experiment in which we create the rules for our society from behind a “veil of ignorance”–not knowing whether we will be rich or poor, black or white, sick or healthy, and so on.

What Rawls was concerned about was distributive justice, or how we distribute the primary social goods that our society produces. Our coexistence produces untold benefits, but it is not obvious who these benefits belong to or how they should be shared. Rawls argued that given this task, a rational actor would follow the maximin decision rule found in game theory, whereby you would maximize the minimum share the system guaranteed its least well-off member. This way, you could guarantee that even in the worst-case scenario you (and everybody else put in that situation) would get the best possible outcome.
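The maximin rule itself is simple to state computationally: among the available options, pick the one whose worst-case outcome is largest. A minimal sketch in Python (the candidate societies and payoff numbers are invented purely for illustration):

```python
# Maximin decision rule: choose the option whose worst-case
# outcome (minimum payoff) is the largest.

def maximin_choice(options):
    """options maps each choice to a list of possible payoffs."""
    return max(options, key=lambda name: min(options[name]))

# Hypothetical social arrangements, scored by the share each
# social position receives (illustrative numbers only).
societies = {
    "laissez-faire": [1, 5, 20],   # worst-off member gets 1
    "strict equality": [6, 6, 6],  # everyone gets 6
    "mixed economy": [8, 10, 15],  # worst-off member gets 8
}

print(maximin_choice(societies))  # -> mixed economy
```

Note that the rule does not pick the arrangement with the highest total or the most equal distribution, but the one that guarantees the most to whoever ends up at the bottom.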

This could perhaps be called a form of impartial utilitarianism. It also reminds us of the tenets of negative utilitarianism, as articulated by Karl Popper. Instead of trying to maximize happiness or well-being, negative utilitarianism seeks to minimize suffering, or negative utility.

A simple application of the maximin rule arises when you are tasked with cutting a cake. If you didn’t know which slice you were going to get, or you knew you would get the last slice, how would you cut it? If your aim was to ensure the largest slice for yourself, the best way to cut the cake would be to cut it into equal-sized pieces. This follows the maximin rule, maximizing the smallest share anybody would receive.
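The cake example can be checked directly with the same logic, comparing a few candidate ways of cutting a cake of size 1 into three slices (the candidate cuts are invented for illustration):

```python
# Among candidate ways to cut a cake of size 1, the maximin rule
# prefers the cut whose smallest slice is largest: the equal cut.

def min_slice(cuts):
    return min(cuts)

candidates = [
    [0.5, 0.3, 0.2],     # unequal: smallest slice is 0.2
    [0.4, 0.35, 0.25],   # smallest slice is 0.25
    [1/3, 1/3, 1/3],     # equal: smallest slice is ~0.333
]

best = max(candidates, key=min_slice)
print(best)  # -> the equal cut [1/3, 1/3, 1/3]
```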

At the level of society, however, Rawls didn’t believe that socialism or equality of outcomes would actually produce the biggest slice for the least well-off. Instead, he believed that another system would be able to bake a bigger cake. This, then, posed a question for Rawls: What kind of inequality should be acceptable in society? Rawls’s answer was that inequalities are justified so long as they also benefit the least well-off members of society. This is called the difference principle. It goes beyond the efficiency principle (Pareto efficiency), which holds only that an arrangement is efficient when nobody can be made better off without making somebody else worse off.

Since not all inequalities could be abolished, Rawls wanted to eliminate the advantages conferred by the morally arbitrary inequalities handed out by the birth lottery. To him, the only legitimate inequalities were those resulting from work or effort, since these would benefit society as a whole.

Rawls saw himself as a Kantian in the sense that, according to him, society should be constructed on universal principles that would be seen as just no matter which end of the stick you ended up with.

In his categorical imperative, Immanuel Kant states:
Act only according to that maxim whereby you can at the same time will that it should become a universal law.
Instead of morality being determined by the outcome of a given action, as utilitarians held, Kant and Rawls were concerned with the intention behind an action. Is a law proposed out of good will or out of selfish motivations? Rawls’s thought experiment tried to eliminate selfish motivations by shielding lawmakers from knowing exactly what would benefit them personally. This would then lead to fair and impartial universal rules that everybody could accept.

The explicit goal we set for ourselves at the beginning of this book was to articulate the cooperative terms that maximize human well-being within the carrying capacity of the planet. Having heard Rawls’s arguments, we can now specify that a part of that task is to maximize the minimum share received by the least well-off members of society.

We choose to pursue the maximin outcome also because, rather than a philosophical concept that can be endlessly debated, it is an established idea within game theory. Since game theory is a branch of mathematics, the hope is that we can keep our inquiry within the boundaries of hard science–and that the solution I propose can one day be expressed as mathematical equations, which can be embedded in computer code. To morally justify the maximin outcome as our chosen objective, we defer to John Rawls’s arguments.

That being said, I believe the solution I propose in this book is compatible with many, if not most, moral philosophies, including those discussed in the previous chapter.

The solution is based on the conviction that all humans have inherent value, are created equal and are deserving of their natural rights. At the same time, the solution proposed in this book is explicitly a social contract people can voluntarily adopt if they so choose. And since our aim is to maximize a particular utility, in this case well-being, the proposal is also compatible with the tenets of classical utilitarianism. The law of diminishing marginal utility states that we can create more collective well-being by lifting people out of poverty than by giving more to people who already have plenty. And finally, since the coexistence we seek precludes any form of exploitation, the proposal is also compatible with the requirements set by Marx.
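The diminishing-marginal-utility argument can be made concrete with a logarithmic utility function–a standard textbook choice, not something the book prescribes. Under it, moving income from someone with plenty to someone with little raises total utility (the income figures below are invented for illustration):

```python
import math

# With diminishing marginal utility (modeled here as log utility),
# a transfer from a richer to a poorer person raises total utility,
# because each extra unit matters more to whoever has less.

def total_utility(incomes):
    return sum(math.log(x) for x in incomes)

before = [1_000, 100_000]   # a poor and a rich person
after = [10_000, 91_000]    # 9,000 transferred to the poorer person

print(total_utility(after) > total_utility(before))  # -> True
```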

Now, political philosophers like Rawls argue only about the larger principles and rarely get into the nuts and bolts of how those principles should be implemented in practice. Rawls argued about how we should arrive at the rules but never spelled out exactly what those rules should be. This is the task we take on in this book. Using the latest science and technological innovations, our task is to articulate how we can maximize the minimum level of well-being we can guarantee everybody in perpetuity within the planetary boundaries.

For our purposes, well-being consists of at least three components–physical, social and psychological–and the best way to measure these is by surveying people’s subjective experience of well-being.

Renewable and non-renewable natural resources set the upper limit to our physical well-being, meaning the food we eat, the houses we build and the energy we use for transportation. But while these resources are finite, it doesn’t necessarily follow that our resource allocation is reduced to a zero-sum game. What we are essentially looking for is to bake the biggest possible cake within the carrying capacity of the planet. Doing so would create a win-win scenario resembling a non-zero-sum game.

Since our definition of well-being also consists of social and psychological well-being, which are not as directly tied to natural resources, it becomes much easier to turn our optimization challenge into a genuine positive-sum game. We can, after all, generate social and psychological well-being by building a free, just and kind system that serves everybody.