Playing games with morality
Our ingrained moral modules interact with one another. With simple games, we can simulate how people in societies deal with each other. On power, distribution, revenge and trust.
Let us assume for a moment that we already have a picture of our genetically determined morality; the previous episode in this series about morality and toleration covered that. But what about the rest of our morality? It is a big step from family first, the concept of property, empathy and reciprocity to the complex moral systems we have today.
We're going to talk about moral evolutionary game theory. As an introduction, watch this short video, in which the game theorist and philosopher Jason McKenzie Alexander explains it well.
In this newsletter we are going to discover on the basis of game theory:
how communities with shared morality arise,
why cooperative communities are often suspicious of outsiders,
how societies with different moral systems can coexist,
and much more.
First, we will look at traditional game theory, in which we go through a number of interesting games. Think of those games as an introduction, where you'll ask yourself: what would I do? What would the other person do? And: what is rationally the best game strategy?
After that introduction to game theory, it becomes even more fun. Game theory also turned out to be very suitable for gaining insight into biological, evolutionary processes. After discussing evolutionary game theory, we then come to the core of this article: moral evolutionary game theory, essentially a derivative of evolutionary game theory.
This moral theory is essentially still in its infancy. A handful of philosophers are pioneering it; the modelling is full of maths; the theory is a bit nerdy. Although the results are still rather primitive, it is a fascinating approach to a complex problem. What can game theory teach us about the evolutionary development of moral issues encountered in every culture? We are talking about issues such as cooperation, trust, scarcity, distribution and retribution.
But now for the traditional game theory.
Game theory
The origins of game theory were mathematical. Games such as chess were analysed at a higher level of abstraction: which strategy would yield the most benefit in the longer term? In 1944, John von Neumann and Oskar Morgenstern showed that the principles of game theory can also be applied to economic theory. An abstract analysis of game strategies also came in handy in warfare. Which strategy is best if the opponent also has a choice of multiple strategies? In most cases, you cannot coordinate with the other player. So you have to estimate what the other is going to do, while they face exactly the same problem.
In game theory, a number of morally inspired games were developed. We deal successively with the prisoner's dilemma, hawks and doves, the distribution of a loaf of bread, and finally the ultimatum game, because they can all be played in the evolutionary variant of game theory.
Let’s kick off with the basics: traditional game theory.
The prisoner's dilemma
The most famous game is the prisoner's dilemma. The game is essentially about cooperation and trust. When working together, everyone wins. Win-win, 1+1=3. But what if an individual has the opportunity to make a higher individual profit at the expense of cooperation? We encounter this problem on a daily basis, in small and large issues.
I spend a lot of time in the Balkans, in candidate countries for accession to the European Union. A major obstacle to accession in several of those countries is political corruption. Suppose you are a leader of such a country. It is in the interest of the entire country that it joins, but as a leader you rake in millions with corruption. Let’s say that you have to choose: to let your country join the EU, or to secretly amass a fortune of 50 million. What would you do?
The game goes like this. A serious crime has been committed and the police arrest two suspects. We call them Anna A. and Betty B. The police suspect them of having committed the crime together, but only have evidence of a lesser offence. The lesser offence carries seven years in prison, the serious offence up to fifty years. For a conviction for the serious crime, the police need a confession.
Anna and Betty are held in separate cells and cannot communicate with each other. The police interrogate them individually and present each with the same deal: if she betrays her accomplice, she gets only two years in prison, while her accomplice gets the full fifty. If they both keep their mouths shut, the police only have evidence of the lesser offence, and they both get seven years. But beware: if they betray each other, they both confess to the serious crime, and each spends thirty years in prison.
Note that the sum of Anna's and Betty's sentences is smallest if both keep their mouths shut (14 years) and largest if both defect (60 years). If only one keeps her mouth shut, the total sentence (52 years) is still lower than if A and B both defect.
If the game is played once, betraying the other is the dominant strategy: whatever the other player does, you are better off defecting. That is the irony of the game: by each choosing the option with the best individual outcome regardless of what the other does, we end up with a collectively suboptimal result.
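That betrayal wins against either choice of the other player can be checked mechanically. A minimal sketch in Python, encoding the sentences from the story above (my encoding; lower numbers are better):

```python
# Prison sentences in years for (Anna's move, Betty's move):
# both silent -> 7 each, both betray -> 30 each,
# lone betrayer -> 2, the betrayed partner -> 50.
SENTENCES = {
    ("silent", "silent"): (7, 7),
    ("silent", "betray"): (50, 2),
    ("betray", "silent"): (2, 50),
    ("betray", "betray"): (30, 30),
}

def best_response(other_move):
    """Anna's best move (fewest years) given Betty's fixed move."""
    return min(["silent", "betray"],
               key=lambda my: SENTENCES[(my, other_move)][0])

# Whatever Betty does, betraying is better for Anna:
assert best_response("silent") == "betray"   # 2 years beats 7
assert best_response("betray") == "betray"   # 30 years beats 50
```

Because the game is symmetric, the same holds for Betty, and both end up with thirty years instead of seven.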
I wrote about Thomas Hobbes in another series. This is essentially what he meant: to work together on a win-win basis, a degree of coercion (or at least alignment or trust) is needed. If there is none, it is rational to choose one's own interest. Without an external stimulus, it is rational not to trust the other, especially a stranger.
Several paragraphs below, we will look at what that means for the relationship between low trust and high trust societies.
The hawk and the dove
Another game about cooperation has been devised: the hawk and the dove. A limited amount of feed (scarcity!) lies on the ground, and two kinds of animals are interested. The hawk fights for the feed until it is wounded or the other gives up; the dove abandons the feed as soon as the other shows aggressive behaviour.
Two hawks will fight until one of them gives up. Two doves will share the feed without fighting. Between a dove and a hawk, the hawk wins by definition, because the dove capitulates immediately.
In a community of doves, there are no losers: the feed is shared fairly and no one is injured. In a community of hawks, you have winners and losers: one hawk gets the feed and the other doesn't. Moreover, in every hawk-versus-hawk fight, one of them ends up wounded. This is bad for the hawk population: many a hawk succumbs to its injuries. Doves do not have that problem; they share the feed without fighting. Doves therefore deal with scarcity more efficiently.
The problem in this game is that while hawks deal with scarcity inefficiently, they still win the game. Because in a world with only doves and hawks, the hawks get all the feed, and the doves will starve.
Aggression pays off. Peaceful, cooperative societies thrive better than aggressive, violent societies, but peaceful societies have no defence against aggression. Or do they? We'll see later.
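These dynamics can be made concrete with the standard hawk-dove payoff matrix from the game-theory literature. A minimal sketch, where V (the value of the feed) and C (the cost of an injury) are illustrative numbers of my own choosing, with injury costing more than the feed is worth:

```python
V, C = 4.0, 10.0   # value of the feed, cost of an injury (C > V)

def payoff(me, other):
    """Expected payoff for `me` ('hawk' or 'dove') against `other`."""
    if me == "hawk" and other == "hawk":
        return (V - C) / 2          # win half the time, risk injury
    if me == "hawk" and other == "dove":
        return V                    # dove flees, hawk takes everything
    if me == "dove" and other == "hawk":
        return 0.0                  # capitulate immediately
    return V / 2                    # two doves share peacefully

# A lone hawk among doves does better than the doves around it,
# so an all-dove population is not evolutionarily stable:
assert payoff("hawk", "dove") > payoff("dove", "dove")

# When C > V, the stable mix has a hawk fraction of V/C; at that mix
# both roles earn exactly the same expected payoff:
p = V / C
hawk_ev = p * payoff("hawk", "hawk") + (1 - p) * payoff("hawk", "dove")
dove_ev = p * payoff("dove", "hawk") + (1 - p) * payoff("dove", "dove")
assert abs(hawk_ev - dove_ev) < 1e-9
print(f"Stable hawk fraction: {p:.0%}")
```

So pure-dove populations are invadable, pure-hawk populations are self-destructive, and (in the standard model) a mixed population settles at a hawk fraction of V/C.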
The Nash distribution game
The third game involves a loaf of bread that can be divided in any conceivable way, and two players who cannot communicate with each other. Each tells the referee what share of the loaf she wants. If their demands together amount to the whole loaf or less, each receives the share she requested. But if together they demand more than the whole loaf, both get nothing.
It is obvious to assume that everyone chooses 50/50. But why? The players have to decide in advance how much they will settle for. If a player is hungry and only asks for ten percent, he has a greater chance of getting bread than if he asks fifty percent. If the other player does the same, they have only divided twenty percent of the bread; the rest is lost.
On the other hand, an equal distribution is the strategy in which everyone asks for the maximum achievable without running a significant risk of losing bread. Anyone who demands more than fifty percent is very likely to end up with nothing, unless he knows in advance that the other will demand less than fifty percent.
What is interesting about this game is that any distribution that adds up to one hundred percent is stable. Once a player is used to getting ten, fifty or eighty percent while the other demands the remaining share, neither can change strategy without hurting herself. A situation in which no player can improve her outcome by unilaterally changing strategy is called a Nash equilibrium, after the mathematician John Nash, who devised this game.
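The stability claim is easy to verify by brute force. A small sketch, with demands in whole percentages of the loaf (checking deviations in 1% steps is my simplification of the continuous game):

```python
def share(my_demand, other_demand):
    """What a player actually receives for a given pair of demands."""
    return my_demand if my_demand + other_demand <= 100 else 0

def is_nash(a, b):
    """True if neither player gains by unilaterally changing her demand."""
    a_ok = all(share(a2, b) <= share(a, b) for a2 in range(0, 101))
    b_ok = all(share(b2, a) <= share(b, a) for b2 in range(0, 101))
    return a_ok and b_ok

assert is_nash(50, 50)      # the fair split is stable...
assert is_nash(80, 20)      # ...but so is a very unequal one
assert not is_nash(40, 40)  # waste: either player could demand more
```

The unequal 80/20 split passes the same test as the fair one: the weaker player can only lose by demanding more, which is exactly why such distributions persist.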
An obvious conclusion is that shifting power relations are inevitably accompanied by suboptimal outcomes for the players collectively for a while, until the players settle with their new position of power. Grist to the mill for conservative thinkers. And those who strive for emancipation have to deal with it.
The ultimatum game
The fourth and final game is called the ultimatum game. That game is about fairness, power and revenge.
There are two players and a sum of money. Player A may make a proposal on how to distribute the money. Player B is not allowed to negotiate this: the only choice he has is to accept or reject the proposal. If the proposal is accepted, each player will receive the sum in accordance with player A's proposal. If the proposal is rejected by player B, neither of them will receive anything.
Intuitively, one would say that player A offers to share the spoils: half each. The risk of B refusing is small. But A can exploit his position of power. Suppose he proposes that he gets eighty percent. B then has two options: grudgingly take the remaining twenty percent, or punish A for his exploitation. Rationally speaking, B is better off accepting that he has less power, so he should eat the crumbs off the table. But if he refuses, A is punished for his exploitation and taught a lesson. That is in fact altruistic behaviour: it costs player B his twenty percent, but he is willing to pay that price to make it clear to A that exploitation doesn't pay.
This ultimatum game has been played all over the world, in more than ten rounds in a row, and in some experiments the players were also given several days to think about it. In general, the spoils were divided 50/50. But that's boring. More interesting is how often A dared to exploit his position of power, and how often B punished him for it. Cultural differences showed up here. In Japan and Israel, player A's modal demand rose to 55 to 60 percent after a number of rounds, while in Yugoslavia and the United States it stuck at 50 percent. If A demands more than eighty percent, the offer is usually rejected.
On average, players are willing to teach 'unethical' opponents a lesson in the interest of the community, but only when player A goes overboard, and only as long as the punishment does not cost the injured party too much. The degree of that willingness, and the extent to which the powerful party dares to exploit its position, are subject to cultural differences.
The latter was especially evident when anthropologists introduced the game to fifteen different small-scale, tribal communities around the world. The results showed large differences. The average offer varied by community from 42% to 74%. In some communities, no offer of A was refused by player B. But there were also communities where offers were refused that player B perceived as too generous: cultures where the recipient of generosity feels obligated to the giver. In collectivist cultures, 50% bids were usually accepted, and unfair bids were generally accepted as well.
Evolutionary game theory
In the 1960s, biologists discovered that the principles of game theory can also be applied to the analysis of evolutionary processes. A specialised version of game theory emerged in evolutionary biology, to gain more insight into the interaction between different animal species.
We have discussed before how different animal species can play different games: how some animals live eusocially, almost like a single organism, while at the other end of the spectrum others survive completely solitary. We saw in the game of the hawk and the dove that game roles can also come into conflict with each other. Animals with the wrong combination of physical characteristics and game roles lose out; species and mutations with a better combination thrive.
Characteristic of evolutionary game theory is that the game is repeated endlessly. There are multiple players with different game patterns and they all interact with each other. Players with bad strategies die out, so the best strategies end up facing each other, or coexist.
Interesting evolutionary games are by nature a toss-up. A game where one group eats its own children and the other does not ends quickly: the first group rapidly goes extinct. The opposite extreme is not interesting either: in a game between left- and right-handers, neither strategy has any real evolutionary advantage over the other.
What evolutionary game theory mainly looks for are evolutionarily stable strategies: outcomes that do not change when outsiders or mutants who apply a different strategy join the game.
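The workhorse behind such analyses is the replicator dynamic: strategies that earn more than the population average grow, the rest shrink. A minimal sketch, applied to the hawk-and-dove game from earlier (the payoff numbers are illustrative assumptions, V = 4 and C = 10):

```python
def replicate(freqs, payoff, steps=1000, dt=0.01):
    """Iterate a discrete-time replicator dynamic over strategy frequencies."""
    for _ in range(steps):
        # fitness of each strategy against the current population mix
        fitness = [sum(payoff[i][j] * freqs[j] for j in range(len(freqs)))
                   for i in range(len(freqs))]
        avg = sum(f * w for f, w in zip(freqs, fitness))
        # above-average strategies grow, below-average ones shrink
        freqs = [f * (1 + dt * (w - avg)) for f, w in zip(freqs, fitness)]
        total = sum(freqs)
        freqs = [f / total for f in freqs]   # guard against rounding drift
    return freqs

# Hawk-dove payoffs with V=4, C=10: rows are (hawk, dove) vs columns (hawk, dove).
hawk_dove = [[-3.0, 4.0],
             [ 0.0, 2.0]]
hawks, doves = replicate([0.9, 0.1], hawk_dove)
assert abs(hawks - 0.4) < 0.01   # converges to the analytical V/C mix
```

From a wide range of starting frequencies, the simulated population settles on the same 40% hawk mix that the analytical condition V/C predicts.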
Another characteristic of evolutionary game theory is that rationality plays no role. In traditional game theory, we are dealing with rational players, who have goals and weigh probabilities. Evolutionary game theory is merely about clashing game patterns. More about human rationality later, but remember this: people are (boundedly) rational; they have goals that they consciously pursue. Animals usually don't; they have behavioural patterns without adapting their behaviour to the longer-term consequences. For evolutionary game theory, this does not matter: it only looks at behaviour. Which game patterns will eventually dominate, and what effect will that have on the size of the population?
In monogamy versus polygamy, for example, both patterns occur among both humans and animals. People can worry deeply about the pros and cons of a monogamous marriage, divorce, or cheating, and make a conscious decision about it. But effectively, it comes down to some people being monogamous, some polygamous, and many people being something in between or serially monogamous. Whatever the individual considerations, those do not change the overall human pattern. To the extent that that pattern is influenced, those are cultural influences rather than individual considerations.
Evolutionary game theory usually involves animal behaviour, and thus primarily genetically determined behaviour. Some animals, notably species with von Economo neurons such as the great apes, elephants and cetaceans, can develop culture, but this culturally developed behaviour plays a less important role than it does with humans. Culture has a major influence on human behaviour. Therefore, before we talk about evolutionary game theory applied to human morality, we must first talk about the phenomenon of cultural evolution.
Cultural evolution and morality
The difference between our genetically determined morality and what you and I think is right or wrong can be summarised as cultural morality. Your parents played a major role in this, but also, for example, your friends, your upbringing at school and all kinds of opinions that you find in society. But of course, those views that you pick up here and there also have their origin, and so on. Moral views are not the same everywhere, and they change over time.
We call this process cultural evolution. This cultural evolution has similarities with biological evolution, but also, for example, with the way in which languages have developed over the millennia, a process of continuous fragmentation, mutual influence and regrouping. To illustrate, you might watch how Indo-European languages evolved over the centuries in the video below.
To put it simply, for tens of thousands of years we have had a whole patchwork of cultures that competed with each other. For example, one group was strictly monogamous, while the neighbouring group was much easier about it. The most successful groups (in terms of size, prosperity, territory, number of offspring, life expectancy, health, etc.) apparently had a successful combination of norms and customs. (Genes may also have played a role, but we'll leave those aside for now.) Successful groups grew, while groups with a less successful combination languished. The most successful groups, in turn, came into contact with other successful groups, and so on. Again: I simplify, the reality was of course much more complex. Coincidence, natural circumstances and disasters could also play a role, for example, but you understand what it is all about. Successful cultures thrived, at the expense of deficient cultures. For example, if a culture was too peaceful, or just the opposite, or economically not resilient enough, it would lose out.
Morality and evolutionary game theory
With the publication in 1984 of The Evolution of Cooperation by the American game theorist Robert Axelrod, a new discipline emerged that applies the techniques of evolutionary game theory to morality. Just as in evolutionary game theory, the game is repeated endlessly, and there are many different players with various strategies interacting with each other.
But this time, the players are rational and human: they have goals, and they fit their behaviour to their goals. And they can form cultures: their behaviour can be determined by cultures, and those cultures can clash with other cultures.
Unlike animals, humans can adjust their game strategy. If a strategy doesn't work, we look at strategies that do work and adopt them. Animals often cannot. In the animal world, altered game strategies arise mainly through kin selection and mutation.
Let's dwell on our rationality for a moment. We should not idealise it. People are only rational to a limited extent. In everyday reality, we cannot foresee all the consequences of our behaviour: we are not supercomputers, and generally we are lazy thinkers. We are also notoriously bad at statistics and probability, and we suffer from cognitive biases (such as groupthink and confirmation bias). And even when we do identify the rationally best behaviour, emotional barriers get in the way: fears, egos, risk aversion, complacency, a preference for short-term results.
Much of our behaviour is determined by expectations, habits, and heuristics. Some of it has been passed on to us genetically. That is what the previous newsletter was about: our morality has a number of modules that are ingrained in us. That we always want to give our own offspring preferential treatment, for example, and that we always feel an urge to retaliate against injustice.
Another part is cultural in nature: we have acquired our behaviour partly through upbringing and through interactions with the people around us. Based on this, we apply rules of thumb for our own behaviour, and we have expectations of the behaviour of others. That is why we queue at the checkout and shake hands.
Those are fairly trivial examples, but moral evolutionary game theory has identified some major moral themes that lend themselves to simulations. Cooperation and trust, distribution, and retribution, to name a few important ones. Power, conflict and aggression play a role in it, transaction costs, scarcity, and dealing with strangers who play a different moral game. You can see why this is such an important theory for those who want to delve into toleration.
Remember that cooperation, distribution and retribution are themes that also appear in the list of genetic moral modules from the previous article. It is hard to determine whether game-theoretic insights concern genetic or cultural morality. If a game has been played among our ancestors for at least tens of thousands of years, it could have influenced us genetically as well. But the cultural influence will be greater: if a particular strategy of cooperation benefits a distinctive culture, it offers that society a greater chance of prosperity. To what extent game theory bears on genetic versus cultural morality has, as far as I know, not yet been investigated. For the game-theoretic method, it matters little. The game is repeated endlessly between different players. Whether their strategy is genetically or culturally motivated, and whether the players play rationally or instinctively, makes little difference to the simulation. Successful strategies gain players, either because more players adopt that strategy, or because those players have a greater chance of survival.
That moral game strategy can also be genetically inclined is well known in biology. In 1987, for example, the German biologist Manfred Milinski published an article on the behaviour of three-spined sticklebacks towards predatory fish. A stickleback and a predatory fish were observed in an aquarium equipped with mirrors and glass walls. The stickleback saw its own reflection and took it for a conspecific. "Together" they then saw a predatory fish in the tank. Some prey fish, when they are together, go exploring to see how dangerous the predator is. If they approach together, the chance that either one will be eaten is halved. Apparently, the benefits of such an exploration evolutionarily outweigh the risk of being eaten during it.
If one stickleback scouts out on its own while the other stays behind, the scout runs a much greater chance of being eaten than the straggler. As in the prisoner's dilemma, they have a shared interest in cooperation, but individually each runs less risk by staying behind. Milinski observed that sticklebacks exhibit strategic moral behaviour in this experiment. A stickleback that finds a companion will initially explore, trusting the other to come along. But if that companion stayed behind, it will not initiate another exploration next time. This is unlikely to be a learned strategy; presumably it is genetically inclined.
As in biological evolutionary game theory, moral evolutionary game theory looks for evolutionarily stable strategies (ESS): a situation that remains stable even when newcomers or mutants join the game with another strategy. In most cases, such an ESS consists of a number of strategies that keep each other in check, as it were. Cats eat mice, but mice do not die out, because they biologically play a different game. Cats and mice balance each other game-theoretically.
Human games also balance each other, for example in situations where both cooperative and non-cooperative behaviour occurs. In every human society, we have people working together and we have spoilsports who save their own skin. There are individualistic cultures and collectivist cultures, and within those cultures there are people who are more individualistic than average, or more collectivist.
Let's see what insights moral evolutionary game theory offers us when we apply it to the four games discussed earlier.
The prisoner's dilemma
If betraying each other in the prisoner's dilemma is the rational strategy, why is it that people all over the world work together and trust each other? The answer must be found in evolutionary game theory. Suppose that we repeat the game endlessly, what will be the outcome?
The sting of the game is that mutual trust is not evolutionarily stable. A group can keep playing the game on the basis of mutual trust for a long time, but as soon as newcomers join who betray their fellows, the trust is gone. The game is only evolutionarily stable when everyone betrays everyone: then outsiders with a new strategy can no longer improve the outcome.
But there is also a glimmer of hope. Computer simulations show that under special circumstances, a better strategy can thrive: tit-for-tat. Start by cooperating: don't betray the other player. But as soon as you are betrayed, betray the other in the next game; if the other cooperates, cooperate again next time. The stickleback above used that strategy as well, though how it can thrive evolutionarily with it remains unexplained, because under normal circumstances tit-for-tat loses out to the strategy of betrayal.
But human societies have structure: we remember who has betrayed us before, or who can be trusted, and we can gossip about each other. We rarely play the game with a random other earthling, one of the 8 billion. We have family, neighbours, friends, colleagues, business partners. If we don't know someone yet, we can inquire. Could that help?
According to computer simulations, that does matter, but only if we can voluntarily choose whom we play with. It would go too far to list all the variables under which a strategy of cooperation can become evolutionarily stable, but it is important that this cooperation takes place in smaller networks. As the network grows and players know each other less well, the chance of a society of betrayal increases.
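The tit-for-tat story can be illustrated with a small Axelrod-style repeated game. A sketch using the textbook point values T=5, R=3, P=1, S=0 (higher is better; these numbers are a convention from the literature, not the prison sentences from the story above):

```python
# Points per round for (my move, their move): C = cooperate, D = defect.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(opponent_history):
    # cooperate first, then copy the opponent's previous move
    return "C" if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    return "D"

def always_cooperate(opponent_history):
    return "C"

def play(p1, p2, rounds=200):
    """Total scores of two strategies over repeated rounds."""
    h1, h2, s1, s2 = [], [], 0, 0
    for _ in range(rounds):
        m1, m2 = p1(h2), p2(h1)       # each strategy sees the other's history
        a, b = PAYOFF[(m1, m2)]
        s1, s2 = s1 + a, s2 + b
        h1.append(m1)
        h2.append(m2)
    return s1, s2

# Tit-for-tat cooperates fully with itself, and loses only the
# first round against a pure defector:
assert play(tit_for_tat, tit_for_tat) == (600, 600)
assert play(tit_for_tat, always_defect) == (199, 204)
```

Tit-for-tat never beats a defector head-to-head; it only limits the damage to one round. Its strength lies in cooperating fully with cooperators, which is exactly why it needs a network with enough fellow cooperators, and repeated encounters, to thrive.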
Low-trust societies are the rule, high-trust societies the fragile exception. But those high-trust societies obviously do better: the prisoner's dilemma is played with a more favourable outcome. An open question is how successful high-trust societies can protect themselves so that they do not succumb to their own success. Because prosperous societies inevitably attract outsiders who play the game differently. The fragile cooperation based on trust runs the risk of being undermined by outsiders, back to the evolutionarily stable strategy of mutual distrust. The investigation is still ongoing.
Hawk and dove
Back to the hawk and the dove. The dove population is clearly better off, because every dove receives feed and there are no injuries. Every hawk-versus-hawk contest leaves one hawk injured. The hawk population can sustain itself as long as the average harm from injuries does not exceed the yield of the feed. If it does, the hawk population will perish from its injuries.
A dove entering a community of hawks is going to starve to death: every time there is food on the ground, it loses to the hawk. A hawk entering a dove community is having the time of its life. It gets as much food as it wants, and it doesn't even have to fight for it. Thus, where a dove community deals most efficiently with scarcity, and the hawk community the least efficiently, the hawks dominate each dove community.
In essence, this is the outcome of the game: the dove community loses out and dies. Only hawks are left. But they too struggle for survival: the hawk community also dies out if there is not enough feed available to sustain the injured hawks.
An unsatisfactory outcome, also according to game theorists. Hence, they try in various ways to change the rules of the game. For example, by giving the doves limited power to defend themselves against hawks, as a herd of buffaloes does against a predator. Or by making doves serve as hawk feed as well, so that hawks need a certain percentage of doves to survive. One can also make the game more complex by adding other animal species and seeing what the outcome is. The most interesting addition, let's call it the retaliator, starts out behaving like a dove. It plays the role of a dove towards other retaliators and towards doves, but it fights back against a hawk. Simulations show that the retaliator can be very successful from an evolutionary point of view. How successful depends on the value assigned to the feed relative to the damage of a fight.
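The retaliator's advantage can be seen without a full simulation, by asking which rare mutants can invade which resident populations. A sketch using Maynard Smith's classic three-strategy payoff matrix (the values V = 4 and C = 10 are illustrative assumptions):

```python
V, C = 4.0, 10.0   # value of the feed, cost of an injury
# P[row][col] = payoff of the row strategy against the col strategy.
# The retaliator plays dove against doves and retaliators, but fights hawks.
P = {
    "hawk":       {"hawk": (V - C) / 2, "dove": V,     "retaliator": (V - C) / 2},
    "dove":       {"hawk": 0.0,         "dove": V / 2, "retaliator": V / 2},
    "retaliator": {"hawk": (V - C) / 2, "dove": V / 2, "retaliator": V / 2},
}

def invades(mutant, resident):
    """Can a rare mutant out-earn a resident population?"""
    return P[mutant][resident] > P[resident][resident]

# Hawks invade a pure dove community, but not a retaliator community:
assert invades("hawk", "dove")            # 4 > 2
assert not invades("hawk", "retaliator")  # -3 < 2
# And retaliators hold their own among doves (equal payoff, no loss):
assert P["retaliator"]["dove"] == P["dove"]["dove"]
```

A community of retaliators behaves like a dove community internally, yet offers an invading hawk nothing but costly fights, which is exactly the combination that makes it evolutionarily viable.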
But whatever we come up with, the conclusion is inescapable: perfectly peaceful, conflict-free societies are the most prosperous, but they have no defence against aggressive peoples. A degree of aggression is game-theoretically unavoidable. As long as the fruits of cooperation outweigh the costs of infighting, that is the only evolutionarily stable outcome.
Distribution according to Nash
Now the distribution of the bread. Remember? There are two players. If their wishes add up to the whole bread or less, they both get the desired share. But if they add up to more than the whole bread, they both get nothing.
Recall that in this game any distribution that adds up to one hundred percent constitutes a Nash equilibrium. Individual players can no longer improve their strategy. But not only that: every distribution (except 0 for one player and 100 for another) is evolutionarily stable: newcomers who do not respect the usual distribution are outplayed unless they conform.
This explains why the caste system in India is so persistent, why slavery was considered the norm for thousands of years, and why discrimination is so hard to combat. If an oppressed group revolts, this can have a socially disruptive effect.
This raises the question whether scenarios can be devised in computer simulations in which an unequal distribution gradually turns into an equal distribution. And they appear to exist. The magic ingredient is correlation: players are no longer randomly assigned to each other, but they can choose their opponents. It is obvious that fair players (who demand fifty percent) prefer to play with each other. Those who are inclined to demand more than fifty percent will have a hard time finding a player who voluntarily settles for less. If the greedy player does not find another player, he will be forced to settle for an equal distribution.
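The effect of correlation can be sketched with a toy population. The three demand levels and the population mix below are my own illustrative choices, not taken from a specific published model:

```python
# Demands as percentages of the loaf.
DEMAND = {"fair": 50, "greedy": 70, "modest": 30}

def expected_payoff(kind, pool):
    """Average share a `kind` player gets against a random pool of partners."""
    total = sum(DEMAND[kind]
                for other in pool
                if DEMAND[kind] + DEMAND[other] <= 100)
    return total / len(pool)

# Random matching: greedy players only score against the modest ones.
mixed = ["fair"] * 50 + ["greedy"] * 25 + ["modest"] * 25
assert expected_payoff("greedy", mixed) == 17.5
assert expected_payoff("fair", mixed) == 37.5

# Correlation: fair players pair with each other and lock in the full
# fair share, a payoff no greedy demand can ever reach against them.
fair_pool = ["fair"] * 100
assert expected_payoff("fair", fair_pool) == 50.0
```

Once fair players can find each other, demanding half becomes the most profitable strategy, and the greedy are left to fight over whatever modest players remain.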
But what does correlation really require: under what circumstances are you truly free to choose your own partner? Suppose everyone is formally free to ask what they want, but one group has been used to playing for less than half from an early age, while society contains a huge group of players who systematically agree to play only for more than half. Formally, there may be correlation. In practice, there is not.
As with the other games, there's a lot more to say about it. All kinds of social conditions can be modelled. If you find it really interesting, and don't shy away from some higher mathematics, then I can recommend Alexander's books.
The ultimatum game
You remember the ultimatum game. There is a sum of money on the table, and there are two players. Player A may determine how that sum is divided between players A and B. If B accepts, each receives his share in accordance with A's proposal. If B refuses, they both get nothing. We saw that the distribution is slightly in favour of A's dominant position: on average, A takes just over 50 percent. But if A demands about 80 percent or more, B often rejects the proposal.
Now we throw the game into the computer simulation with endless repetitions of the game and all kinds of players with various playing strategies. What evolutionarily stable strategy emerges from that?
The most resilient playing styles are the following:
The opportunist: takes advantage of his position of power by always demanding more than half as player A, and accepts any bid as player B.
The moralist: as player A always demands half, as player B always accepts half, and as player B refuses any demand of more than half.
The easy rider: as player A always demands half, and as player B accepts every bid.
The madman: as player A always demands more than half, as player B accepts every demand of more than half but refuses every demand of half or less.
In the simulation in which all player types are initially represented equally and play against each other, two player types eventually survive. The opportunists win the game. The moralists are eliminated. The opportunists take advantage of their position of power and have no incentive to punish exploitation by others. Interestingly, however, a small minority of madmen remain: 13 percent to be exact.
But the outcome is dramatically different if the moralists are overrepresented at the start. In that simulation the roles are completely reversed: the opportunists and the madmen perish, the moralists win the game, and a minority of about 40 percent easy riders survives alongside them. Moralists share fairly and punish anyone who does not share fairly, even at their own expense. Easy riders follow the same strategy, but they do not punish unfair players. Where those two roles dominate, there is no place for opportunists and madmen.
So there are two evolutionarily stable situations: one in which the opportunists dominate, and one in which the moralists dominate. According to the lessons of this game, there can thus be two types of stable societies: communities where it is every man for himself, and communities where selfishness is punished. The interesting question is what happens when these two types of societies come into contact with each other. If I interpret the results correctly, thirty percent moralists at the start are enough for the moralists to dominate in the end.
Note that in both societies, money initially evaporates because demands from player A are rejected, but once an ESS has been reached, no more money leaks out of the game. In the opportunistic society there is always a small minority of madmen who punish fair demands, but fair demands no longer occur in that society. In the moralistic society, no one demands more than half of the sum, so no one is punished anymore. Money only evaporates while both moralists and opportunists are still in play.
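The four playing styles and the two stable outcomes can be sketched with replicator dynamics. The concrete demand level of 0.7 for "more than half", the starting populations and the number of generations are illustrative assumptions of mine; the exact figures in the text (13 percent, 40 percent) come from the published models with other parameters, so this sketch only reproduces the qualitative pattern.

```python
# Ultimatum game with the four playing styles from the text.
# A strategy is (demand as player A, acceptance rule as player B),
# where the rule receives the proposer's demand d.
STRATEGIES = {
    "opportunist": (0.7, lambda d: True),      # demands more, accepts anything
    "moralist":    (0.5, lambda d: d <= 0.5),  # fair, punishes greed
    "easy_rider":  (0.5, lambda d: True),      # fair, never punishes
    "madman":      (0.7, lambda d: d > 0.5),   # greedy, punishes fairness
}

def fitness(name, pop):
    """Expected payoff of one strategy, playing both roles against the field."""
    my_demand, my_accept = STRATEGIES[name]
    total = 0.0
    for other, share in pop.items():
        their_demand, their_accept = STRATEGIES[other]
        as_a = my_demand if their_accept(my_demand) else 0.0          # proposing
        as_b = (1 - their_demand) if my_accept(their_demand) else 0.0  # responding
        total += share * (as_a + as_b)
    return total

def evolve(pop, generations=500):
    # Discrete replicator dynamics: strategies grow in proportion to fitness.
    for _ in range(generations):
        fit = {s: fitness(s, pop) for s in pop}
        avg = sum(pop[s] * fit[s] for s in pop)
        pop = {s: pop[s] * fit[s] / avg for s in pop}
    return pop

# Equal starting shares: opportunists win, a minority of madmen survives.
equal = evolve({s: 0.25 for s in STRATEGIES})
# Moralists overrepresented: moralists win, easy riders ride along.
moral = evolve({"opportunist": 2/15, "moralist": 0.6,
                "easy_rider": 2/15, "madman": 2/15})
```

Run from equal shares, the moralists and easy riders go extinct while opportunists dominate with a residue of madmen; run from a moralist majority, the opportunists and madmen go extinct and a share of easy riders persists, just as the text describes.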
Conclusion
So much for game theory. We have seen that from a game-theoretic point of view, communities navigate between self-interest and cooperation, between trust and mistrust, between xenophobia and openness to strangers, between aggression and non-aggression, between opportunism and moralism. We have also seen how beneficial correlation can be: being able to choose your partner based on previous behaviour. The role of gossip within communities and the emergence of a moral culture can greatly improve the outcome of the game. And finally, we have seen why communities can have an interest in protecting themselves from outsiders with different styles of play.
The tricky thing about evolutionary game theory is that there are so many variables imaginable. How big is the community, how well do people know each other, how big are the sanctions, how many interactions are there, what is at stake? Et cetera. The game-theoretic methodologies are not yet sophisticated enough to take all variables into account. Still, it's a theory to keep an eye on.
For the theme of toleration, it is good to remember that cooperation, fair distribution and trust are fragile. A fragile harmony can easily tip over into a society where no one trusts anyone, where there is no cooperation, where aggressive behaviour is rewarded and power is abused. It is an indication that toleration is a delicate game. Fragile societies are less able to tolerate dissent, deviant behaviour and outsiders.
Further reading
Manfred Milinski, TIT FOR TAT in sticklebacks and the evolution of cooperation, Nature (1987)
Brian Skyrms, Evolution of the social contract (1996/2014)
William Harms, Brian Skyrms, Evolution of moral norms, in: Michael Ruse (ed.), Oxford handbook on the philosophy of biology (2007)
Alex Mesoudi, Peter Danielson, Ethics, evolution and culture, Theory in Biosciences (2008)
Alex Mesoudi, Cultural evolution (2011)
Philip Kitcher, The ethical project (2011)
Charles C. Cowden, Game theory, Evolutionary Stable Strategies and the evolution of biological interactions, Nature Education Knowledge (2012)
Wei Chen et al., Evolutionary dynamics of N-person Hawk-Dove games, Scientific Reports (2017)
Jason McKenzie Alexander, The structural evolution of morality (2007)
Jason McKenzie Alexander, Evolutionary game theory (2023)
In closing
This was the second episode in the series on morality and toleration. The episodes so far are:
The morality that everyone is born with
About the moral modules that all people have in common. About kin selection, cooperation, empathy and much more.
Playing games with morality
Our ingrained moral modules interact. With simple games you can simulate how people in societies interact with each other. About dealing with power, division, revenge and trust.
The morality of our inner hunter-gatherer, farmer and citizen
Our morality is layered: every phase of human history has left its mark. Culturally, there are still layers of hunter-gatherer, farmer, and citizen in our ethics.
Annoying questions about good and bad
Is there such a thing as moral knowledge? And how do we find out? About self-doubt, man-eaters, emotional judgements, and the difference between theft and vegetables.
Good people are happier. But how to become a good and happy person?
Aristotle's virtue ethics along the empirical ruler. Do we even need an ethical system if everyone is virtuous and happy? And positive psychology: how to become a happier and better person?
Next time we’ll move on to cultural moral evolution: how our morality has evolved over millennia. But first a new episode about the separation of church and state, in the series on Christianity and tolerance.
I have now washed up in Niš, in southern Serbia, fleeing the holidaymakers. Not a tourist in sight here. Fittingly, it is the birthplace of Constantine the Great, the first Christian Roman emperor.
Politically, Serbs have bad PR, but on a personal level I get along fine with them. The Serbian cities of Niš, Novi Sad and Belgrade are pleasant cities, which I can highly recommend.