Following the publication of On the Origin of Species, the work from which the infamous phrase “survival of the fittest” (coined by Herbert Spencer) was derived, Charles Darwin presented The Descent of Man, and Selection in Relation to Sex, applying evolutionary theory to human social evolution. Unlike other species, which regularly let their weak members die off for the greater good of the population, humans often create frameworks within society that preserve the weaker members of our species.
Perhaps surprisingly, Darwin does not advocate for the former. Instead, he argues that sympathy for our fellow man is in fact an evolutionary strength: a trait that proves advantageous and is therefore likely to be passed on to future generations. It may seem contradictory that Darwin’s theory of social evolution is in line with his survivalist leanings, but the thought experiment of the prisoner’s dilemma has provided empirical support for Darwin’s view. This paper will explain what the prisoner’s dilemma is, how it is a form of game theory, how the rationalizations of the human mind can help us understand the way we make and interpret decisions, and the ways in which our decisions ultimately inform basic morality.
At its simplest, the prisoner’s dilemma is a thought experiment: a situation is posited in which a person must make a choice. The details vary, but the takeaway is the same: while each player would individually benefit the most from defecting, both players together would benefit the most through cooperation (King, 2015).
The gist of the story of the prisoner’s dilemma is that two men, Smith and Jones, are brought into the police station, held in solitary confinement, and told they are being charged with a crime. The prosecutors offer a sort of deal: if Smith betrays Jones by confessing while Jones remains silent, Smith goes free and Jones will be sentenced to ten years in prison. Alternatively, Jones might rat while Smith remains silent, resulting in Jones going free while Smith is sentenced to ten years in prison.
If both players confess, both are sentenced to five years in prison. If neither confesses, both men will be held for just one year. The dilemma lies in knowing the best solution (where neither man confesses) while also knowing how appealing it might look to the other player to confess and go free, sticking the other man with the sentence for a crime he may or may not have committed (King, 2015). The likelihood of both men confessing grows when we add another piece of information to the storyline: the men are strangers, essentially owing nothing to each other, and acting simply out of self-interest.
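The sentence structure just described can be written out as a small table. Below is a minimal sketch in Python; the move names and dictionary layout are my own illustration, not part of the original thought experiment:

```python
# Sentences (in years of prison) for each pair of moves, as described above.
# Lower is better for the player involved.
SENTENCES = {
    # (smith_move, jones_move): (smith_years, jones_years)
    ("confess", "silent"):  (0, 10),  # Smith rats, Jones stays silent
    ("silent",  "confess"): (10, 0),  # Jones rats, Smith stays silent
    ("confess", "confess"): (5, 5),   # both confess
    ("silent",  "silent"):  (1, 1),   # both stay silent
}

# Summing the two sentences shows why mutual silence is the best joint outcome.
for moves, years in SENTENCES.items():
    print(moves, "->", years, "| combined years:", sum(years))
```

Mutual silence yields only two combined years of prison, while every other outcome yields ten, which is exactly why the text calls it “the best solution.”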
The thought experiment of the prisoner’s dilemma can be considered a form of game theory, which attempts to explore decision making through the mathematical analysis of interactions between rational thinkers. Game theory has shown analysts how to think about decision-making problems in a systematic fashion, by assigning mathematical values to certain behaviors (Ross, 2014). These games typically set two or more opponents against each other, in either real or hypothetical experiments, in order to assess the choices players make.
Game theory can be applied to economics, military strategy, psychology, business strategy, and so on. These games simulate situations in which one player might take advantage of the other player’s trust in order to best serve his interests: to win. As such, the game becomes a kind of abridged example of competitive interaction between humans, using strategies found in Darwin’s evolutionary theory and survival of the fittest (King, 2015). It is in every player’s interest to act strategically, based on what he thinks the other player will do.
The prisoner’s dilemma and game theory can be used to help us understand how we rationalize, as well as how we translate such rationalizations into decisions. In the case of the prisoner’s dilemma, the rational thinker might choose to confess, because it is reasonable to believe the other player will also confess, in which case the first player earns either freedom (if player two does not confess) or five years in prison (if both players confess); either outcome is considerably more appealing than the ten years he risks by remaining silent while the other man confesses.
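That dominance argument can be checked mechanically. The sketch below restates the sentence table in code and computes Smith’s best reply to each possible move by Jones; the names and layout are illustrative, not from the source:

```python
# Sentences in years: (smith_years, jones_years) for each (smith, jones) move pair.
SENTENCES = {
    ("confess", "silent"):  (0, 10),
    ("silent",  "confess"): (10, 0),
    ("confess", "confess"): (5, 5),
    ("silent",  "silent"):  (1, 1),
}

def best_reply(jones_move):
    """Smith's rational choice: the move that minimizes his own sentence,
    holding Jones's move fixed."""
    return min(("confess", "silent"),
               key=lambda smith_move: SENTENCES[(smith_move, jones_move)][0])

# Whatever Jones does, Smith does better by confessing:
print(best_reply("confess"))  # 5 years beats 10 -> "confess"
print(best_reply("silent"))   # 0 years beats 1  -> "confess"
```

Because confessing is the better reply to both of Jones’s moves, it is a dominant strategy, which is precisely the rational pull toward mutual confession the dilemma turns on.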
To test this theory, in 1980 Robert Axelrod held a tournament of iterated prisoner’s dilemma games, inviting game theorists to submit computer programs to play against one another. Each program played repeated games against the other entrants, with the ultimate goal of accumulating the highest number of points over the course of hundreds of games (DeWitt, 2010). Surprisingly to most, the simplest program won the tournament.
It was called Tit for Tat (TfT), and it worked as follows: on its first move against a new program, TfT would always cooperate; on every move thereafter, it would either cooperate or retaliate, simply copying whatever the other program had done on the previous move. TfT did not merely win the most points, but won by a surprising margin. Thus, in evolutionary terms, TfT was able to reproduce the greatest number of times, providing empirical data about how cooperative behavior is advantageous on an evolutionary scale. This data helps us understand how cooperative behavior emerges in situations that seem particularly oriented toward promoting only self-interest (DeWitt, 2010). While at the outset cooperation seems like the fool’s move, it has the greatest number of reported successes in long-term, grand-scale iterations of the original prisoner’s dilemma game. This suggests that cooperation can evolve naturally through reason (King, 2015), giving credence to the decision to cooperate despite initial apprehension.
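TfT’s rule is simple enough to state in a few lines. The sketch below is my own illustration, not Axelrod’s actual code, and it assumes the conventional iterated-dilemma point values (3 each for mutual cooperation, 5/0 when one player defects against a cooperator, 1 each for mutual defection); the essay itself describes outcomes as prison sentences instead:

```python
# Conventional iterated-PD payoffs (higher is better), assumed for illustration.
POINTS = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(opponent_history):
    # Cooperate on the first move; afterwards, copy the opponent's last move.
    return "C" if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    return "D"

def play(strategy_a, strategy_b, rounds=10):
    """Run an iterated game and return each side's accumulated points."""
    seen_by_a, seen_by_b = [], []  # each player's record of the other's moves
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(seen_by_a)
        move_b = strategy_b(seen_by_b)
        points_a, points_b = POINTS[(move_a, move_b)]
        score_a += points_a
        score_b += points_b
        seen_by_a.append(move_b)
        seen_by_b.append(move_a)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # two TfTs cooperate throughout
print(play(tit_for_tat, always_defect))  # TfT is burned once, then retaliates
```

Two TfTs settle into steady cooperation, while against a relentless defector TfT loses only the first round and matches defection thereafter, which is the mix of niceness and retaliation that made it so hard to exploit.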
Taking this one step further, the prisoner’s dilemma helps us understand how we interpret our decisions on a moral basis. In later evolutionary versions of Axelrod’s experiment, in which the programs reproduced and played against each other hundreds and thousands of times, the population shifted between long cycles of “Old Testament”-like behavior (be nice, but retain an “eye for an eye” rationale) and long cycles of “New Testament”-like behavior (be nice, forgive, a “turn the other cheek” mentality), reverting back and forth between the two over the course of many generations (Holodny, 2016).
As applied mathematician Steven Strogatz explains, the evolutionary shifts expressed in these computer simulations sound a lot like human history, where civilizations that became too nice were eventually taken over by brutal cultures, which were in turn replaced by softer civilizations, and so on (Holodny, 2016). The way behavior shifts according to what decision is reasonable in each subsequent incarnation of civilization in these simulations suggests that we can extrapolate normative ethics, or morals, from evolutionary evidence. One account of moral reasoning describes it as 1) how we recognize moral considerations, 2) how those considerations move us to act, and 3) an opportunity for gathering insight about what we ought to do from how we reason about what we ought to do (Richardson, 2018). Thus, we can assert which decisions are moral based on evolutionary experience, because morality is self-organizing, forming as a result of natural selection (Holodny, 2016). This is how we extract morality from the iterated prisoner’s dilemma.
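The population dynamics described above can be gestured at with a toy replicator-style loop, in which each strategy reproduces in proportion to how well it scores against the current population. This is my own illustrative sketch, not Strogatz’s or Axelrod’s actual simulation; the three strategies and the point values are assumptions made for the example:

```python
POINTS = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(history):       # "eye for an eye": copy the opponent's last move
    return "C" if not history else history[-1]

def always_defect(history):     # the "brutal culture"
    return "D"

def always_cooperate(history):  # "turn the other cheek": never retaliates
    return "C"

def match(strat_a, strat_b, rounds=20):
    """Iterated game; returns each side's total points."""
    seen_a, seen_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        move_a, move_b = strat_a(seen_a), strat_b(seen_b)
        pts_a, pts_b = POINTS[(move_a, move_b)]
        score_a, score_b = score_a + pts_a, score_b + pts_b
        seen_a.append(move_b)
        seen_b.append(move_a)
    return score_a, score_b

strategies = {"TfT": tit_for_tat, "Defector": always_defect, "Nice": always_cooperate}
shares = {name: 1 / 3 for name in strategies}  # equal starting population

for generation in range(5):
    # A strategy's fitness is its average score against the whole population.
    fitness = {name_a: sum(shares[name_b] * match(fn_a, fn_b)[0]
                           for name_b, fn_b in strategies.items())
               for name_a, fn_a in strategies.items()}
    mean = sum(shares[name] * fitness[name] for name in strategies)
    shares = {name: shares[name] * fitness[name] / mean for name in strategies}
    print(generation, {name: round(share, 3) for name, share in shares.items()})
```

Qualitatively, the unconditional cooperators are exploited and shrink, while defectors initially thrive; as their exploitable prey disappears, the retaliating cooperators gain ground, a compressed version of the civilizational cycles described above.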
Having explored what the prisoner’s dilemma is, how it is a form of game theory, how humans tend to rationalize and interpret decisions, and how we can extrapolate morality from those rational decisions, we can see how the prisoner’s dilemma can help us understand moral actions. It would be unwise, however, to proclaim that the data gathered from the iterated prisoner’s dilemma provides a clear-cut path to morality. For starters, the prisoner’s dilemma begins from the assumption that cooperation is morally superior to defection, when either can be better depending on the circumstances.
Certainly, there are some situations in which cooperation would be the less moral thing to do. Take cheating, for example: a group of students sharing answers on an exam are cooperating, but the moral action is not to cheat; in this case, the defector would be the more moral agent (Hayden, 2013). Furthermore, Axelrod’s experiment produces an idealized version of real life by assuming that we perfectly understand the other player’s intentions at all times. In reality, we are likely to encounter misunderstandings, in which players retaliate against perceived defection even when their opponent intends to cooperate. “In a social dilemma, an individual’s prediction of how others will behave can reveal the outcome this individual is intending to realize…the judgments of morality and competence then become an interactive function of observed behavior and perceived intention” (Krueger & DiDonato, 2010). As such, we might begin to extract morality from the prisoner’s dilemma, but we cannot declare the morals derived from it to be true in all circumstances.
Extracting Morality from the Prisoner's Dilemma. (2021, Nov 29). Retrieved from https://studydriver.com/extracting-morality-from-the-prisoners-dilemma/