Since genes act strictly selfishly in their quest to reproduce, an enduring mystery of evolutionary biology has been how natural selection could ever favor cooperation and altruism. In its most basic form, cooperation is an interaction between two parties in which the donor bears the cost and the recipient receives the benefit. Because of this imbalance, evolutionary theory would seem to predict that genes which promote altruism should be driven out of the gene pool.
And yet, cooperation is actually a winning evolutionary strategy. All complex life forms are cooperative endeavors made up of simpler life forms. The most cooperative animals, we humans and eusocial insects like ants, have effectively conquered the planet.
To explain cooperation, evolutionary biologists have identified specific circumstances in which cooperation makes selfish sense from the perspective of genes. Caring for your relatives, for example, has a selfish logic, since animals share part of their genome with their direct relatives. The math behind kin selection is straightforward. Risking your life to save an identical twin, two full siblings, four half-siblings or eight cousins would, on average, preserve the equivalent of an animal's own genome.
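The arithmetic above is an instance of Hamilton's rule, the standard formalization of kin selection: an altruistic gene spreads when r·b > c, where r is the relatedness between actor and recipient, b the benefit to the recipient, and c the cost to the actor. A minimal sketch, using the standard relatedness coefficients behind the twin/sibling/cousin figures in the text:

```python
# Hamilton's rule: an altruistic act is favored by selection when
# r * b > c, where r is genetic relatedness, b the benefit to the
# recipient, and c the cost to the actor.

RELATEDNESS = {
    "identical twin": 1.0,
    "full sibling": 0.5,
    "half-sibling": 0.25,
    "cousin": 0.125,
}

def altruism_favored(r: float, benefit: float, cost: float) -> bool:
    """Return True if Hamilton's rule predicts selection for the act."""
    return r * benefit > cost

# Sacrificing your life (cost = 1 genome-equivalent) to save n relatives
# (benefit = n lives) breaks even exactly at the numbers from the text:
# 1 twin, 2 full siblings, 4 half-siblings, or 8 cousins.
for kin, r in RELATEDNESS.items():
    break_even = 1 / r  # relatives whose survival offsets the sacrifice
    print(f"{kin}: saving {break_even:g} offsets the sacrifice")
```

The break-even counts simply invert the relatedness coefficients, which is why the classic quip runs "two brothers or eight cousins."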
But what about instances where non-relatives cooperate? Why would an animal temporarily lower its own evolutionary fitness to improve the fitness of somebody else? Reciprocal altruism explains this behavior in cases where the cooperation ultimately helps an animal leave more copies of its genes. It is also known as delayed return altruism, since it is predicated on the idea that an animal helps another now so that the other will return the favor in the future.
The central problem of cooperation is that there is a great incentive to cheat, to free ride and to not return favors. This is evident in delayed return altruism: Why should the other reciprocate when they have already collected the benefit and now they are expected to pay the cost? If there is no way to enforce the cooperation or punish the cheater, many cooperative opportunities fall apart in their infancy.
For cooperation to be stable, it has to make selfish sense. To do so, the benefits of cooperation have to outweigh its costs. By helping the groups we belong to, we also help ourselves. The cooperative logic we are looking for in this book follows the logic of delayed return altruism. This means that we must find a way which ensures that the benefits of cooperation exceed the costs for everybody.
One explanation of why genes that promote altruism remain in the gene pool is that while selfish behavior benefits the individual, cooperative behavior benefits the group the individual is a part of. Many species, for example, give alarm calls to warn others of predators. But giving such a call can be costly because it draws the predator's attention to the caller.
The problem occurs when every member of the group acts selfishly, and nobody gives a warning call. Such a group becomes much more vulnerable to predators, making it less likely that its members will pass on their genes. Compare this with a group where warning calls are readily given. It is likely that this group as a whole can better escape predators and that their reproductive success is greater.
While cooperative genes always lose out against selfish genes within a group, between groups, cooperative genes thrive. The share of altruistic genes shrinks within each group, but because highly cooperative groups outcompete selfish ones, the share of their genes in the gene pool as a whole increases. Group selection is still an actively debated subject, but evolutionary biologists like David Sloan Wilson have made a strong case for this kind of multilevel selection in their work.
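The seemingly paradoxical result — cooperators decline inside every group yet increase overall — is an instance of Simpson's paradox, and a toy calculation makes it concrete. The fitness function and the parameter values here (benefit b = 5, cost c = 1, a 90/10 and a 10/90 group) are illustrative assumptions, not the author's model:

```python
# Toy multilevel-selection model. Everyone in a group gains a benefit
# proportional to the group's cooperator fraction; cooperators
# additionally pay a personal cost. Fitness = offspring count.

def step(coop, defe, b=5.0, c=1.0):
    """Advance one group by one generation; return new (coop, defe) counts."""
    frac = coop / (coop + defe)
    w_coop = 1 + b * frac - c   # cooperators pay the cost c
    w_defe = 1 + b * frac       # defectors free-ride
    return coop * w_coop, defe * w_defe

# Group 1 is mostly cooperative, group 2 mostly selfish.
g1 = step(90, 10)   # cooperator share falls: 0.90 -> ~0.88
g2 = step(10, 90)   # cooperator share falls: 0.10 -> ~0.04

global_before = (90 + 10) / 200                       # 0.50
global_after = (g1[0] + g2[0]) / (sum(g1) + sum(g2))  # ~0.68
```

Within each group the defectors gain ground, but the cooperative group grows so much faster that cooperators rise from half to roughly two-thirds of the total gene pool in a single generation.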
To better understand the dynamics of cooperation, we should turn to game theory, a branch of mathematics that quantifies decision-making by studying selfish and cooperative choices under various game scenarios. Game theory has become a powerful tool for analyzing evolutionary processes.
What makes game theory so powerful is that its results are mathematically quantifiable, and therefore objective.
The puzzle that preoccupies game theory is the prisoner's dilemma and its numerous variations. The prisoner's dilemma is a non-zero-sum game: depending on the players' joint choices, both can come out relatively well or both badly, so one player's win is not simply the other's loss. The basic premise of the game is that the police interrogate two prisoners in separate cells. Each has a choice to either cooperate with the other and stay quiet, or defect and implicate the other in return for a lighter sentence. The specific story behind the game is irrelevant, as the same principles apply to all forms of cooperative interactions.
There are four possible outcomes in this game. If the prisoners stay loyal to each other and they both cooperate, they will each only get two years in prison due to lack of evidence. If one of the players defects and implicates the other while the other doesn’t, the defector receives a one-year sentence while the other, who has been implicated, will get four years. If they both defect and implicate the other, they will each get a three-year sentence.
Game theory uses a tool called the payoff matrix to evaluate the costs and benefits of the various strategies. One player's potential strategies are listed in rows, the other player's in columns, and each cell records the sentences that the resulting pair of choices carries. By comparing the outcomes of the clashing strategies, we can identify the best strategy for each player depending on which strategy the other player follows. The catch is that the players have to decide before they know what the other player has chosen.
While mutual cooperation produces the shortest total sentence of four years, viewed from an individual player's perspective, defection is always the best choice in a one-off game. No matter what the other player does, by defecting you are guaranteed a lighter sentence. This makes defection a dominant strategy. Since selfish behavior is so clearly rewarded, as we already discovered in the context of natural selection, how can cooperation ever emerge?
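The payoff matrix and the dominance argument can be checked mechanically. This sketch encodes the sentences exactly as given in the interrogation story above (two years each for mutual cooperation, one and four years for unilateral defection, three years each for mutual defection):

```python
# Payoff matrix for the prisoner's dilemma described in the text.
# Entries are years in prison (lower is better) for (row, column) player.
C, D = "cooperate", "defect"
YEARS = {
    (C, C): (2, 2),  # both stay quiet: lack of evidence
    (C, D): (4, 1),  # the implicated cooperator gets 4, the defector 1
    (D, C): (1, 4),
    (D, D): (3, 3),  # mutual betrayal
}

# Defection is a dominant strategy: whatever the other player does,
# defecting yields a strictly shorter sentence for you.
for other in (C, D):
    assert YEARS[(D, other)][0] < YEARS[(C, other)][0]

# Yet mutual cooperation minimizes the combined sentence:
totals = {pair: sum(years) for pair, years in YEARS.items()}
assert min(totals, key=totals.get) == (C, C)  # 4 years combined
```

The two assertions together are the dilemma in miniature: individually rational defection, collectively optimal cooperation.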
Mathematical biologist Martin Nowak has dedicated his life to answering this fundamental problem of cooperation. In his book SuperCooperators: Altruism, Evolution, and Why We Need Each Other to Succeed, Nowak identifies five special circumstances that induce cooperation.
The first circumstance is when the prisoner’s dilemma is played multiple times. In a repeated game, a cooperator who has been betrayed can always return the favor in the next round. Since mutual cooperation produces the best collective result, direct reciprocity can establish itself: you scratch my back, and I scratch yours.
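Direct reciprocity in a repeated game can be illustrated with the classic tit-for-tat strategy: cooperate first, then copy the opponent's previous move. The strategy and the ten-round length are our illustration rather than the author's; the sentence payoffs are the ones from the interrogation story:

```python
# Repeated prisoner's dilemma with the sentence payoffs from the text
# (years in prison; lower is better).
C, D = "cooperate", "defect"
YEARS = {(C, C): (2, 2), (C, D): (4, 1), (D, C): (1, 4), (D, D): (3, 3)}

def play(strategy_a, strategy_b, rounds=10):
    """Return total years served by each player over the repeated game."""
    hist_a, hist_b = [], []
    total_a = total_b = 0
    for _ in range(rounds):
        a, b = strategy_a(hist_b), strategy_b(hist_a)
        hist_a.append(a)
        hist_b.append(b)
        years_a, years_b = YEARS[(a, b)]
        total_a += years_a
        total_b += years_b
    return total_a, total_b

def tit_for_tat(opp_history):
    return C if not opp_history else opp_history[-1]  # copy last move

def always_defect(opp_history):
    return D

print(play(tit_for_tat, tit_for_tat))    # (20, 20): mutual cooperation
print(play(tit_for_tat, always_defect))  # (31, 28): betrayal punished
```

Two reciprocators serve 20 years each over ten rounds, while a defector facing tit-for-tat still racks up 28 — it wins the first round, then gets nothing but mutual defection, which is how a betrayed cooperator "returns the favor."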
The second circumstance is when the players become aware that the choices they make will be known by their community and affect their reputation. This is indirect reciprocity, which follows the logic: I scratch your back so that somebody else will scratch mine.
The third circumstance has to do with spatial clustering, where cooperators seek out cooperative neighborhoods. The fourth is group selection, where competition with other groups induces cooperation within the groups. The fifth is kin selection, where we are induced to cooperate with our direct relatives in the spirit of nepotism.
Game theory is widely used in economics, the social sciences and political negotiations. What is striking is that it not only predicts how humans and many animals behave in cooperative situations, it can even predict how single-celled bacteria behave. The same mathematical principles apply even when there is no conscious decision-maker involved.
These insights gave rise to evolutionary game theory, originally developed by John Maynard Smith. The field takes ideas developed within game theory and applies them to the natural world, studying the survival strategies that lead to reproductive success in various species. Maynard Smith was especially interested in identifying evolutionarily stable strategies (ESS). An ESS is a strategy that, once adopted by every member of a population, cannot be invaded by any rare mutant strategy.
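The ESS condition can be checked directly for the one-shot prisoner's dilemma from earlier. This is a toy check under one stated assumption: since the payoffs are years in prison, they are costs, so the usual ESS inequality is flipped (a mutant fails to invade when it serves *more* years against the resident than the resident serves against itself):

```python
# In the one-shot prisoner's dilemma, universal defection is an ESS:
# a rare cooperative mutant in a population of defectors serves more
# years than the resident defectors do, so it cannot invade.
C, D = "cooperate", "defect"
YEARS = {(C, C): 2, (C, D): 4, (D, C): 1, (D, D): 3}  # row player's sentence

def is_ess(resident, strategies):
    """Strict ESS condition for cost payoffs: every mutant does worse
    (longer sentence) against the resident than the resident does
    against itself."""
    return all(
        YEARS[(mutant, resident)] > YEARS[(resident, resident)]
        for mutant in strategies
        if mutant != resident
    )

assert is_ess(D, [C, D])      # defection resists invasion by cooperators
assert not is_ess(C, [C, D])  # cooperation is invaded by defectors
```

This is the book's problem in ESS language: in a one-shot world, all-out defection is the stable state, and the five mechanisms above are what it takes to make a cooperative strategy stable instead.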
Game theory and evolutionary game theory are scientific explorations of cooperation and offer a highly useful perspective for this book. One of the stated intentions of this book series is to articulate an evolutionarily stable strategy for all of humankind. But instead of trying to maximize reproductive success, the focus should be on maximizing well-being. In terms of regular game theory, we are looking for an optimal strategy that cannot be dominated by other strategies. Ideally, this would be a strategy that can be played by everyone at the same time.
To do this, we can use ideas from game theory to design life into the best possible positive-sum game we can think of. If we manage to create a win-win scenario for everybody, this could lay the foundation for enduring cooperation. Creating a stable cooperative framework that spans the globe would in turn complete the major evolutionary transition. The hope is that once this transition is complete, humanity will be capable of overcoming the existential threats that now imperil our existence.