The prisoner’s dilemma is a famous example of game theory at work. It models the behaviour of two people who must each decide whether to cooperate with the other or act purely in their own interest. The story goes as follows:
Two accomplices in crime are arrested by the police and interrogated in separate rooms. Because the police have insufficient evidence, they offer each prisoner the same deal: confess that the two of you committed the crime, or deny it. The deal is:
- If you confess and your partner denies taking part in the crime, you go free and your partner will serve ten years (and vice versa).
- If you both confess, you each go to prison for four years.
- If you both deny taking part in the crime, you both go to prison for two years.
Assuming the prisoners act rationally (i.e. in their own best interest, minimising their own jail time), each will choose to confess, because confessing is the better choice no matter what the partner does: if your partner denies, you serve 0 years instead of 2; if your partner confesses, you serve 4 years instead of 10. However, because both prisoners reason this way, the result is that both confess and end up with four years each. So although each prisoner acts rationally, their inability to trust one another leaves them collectively worse off (the joint benefit is not maximised).
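A minimal sketch of that reasoning, assuming the sentences described above (the `YEARS` table and the `best_response` helper are just illustrative names, not part of any standard library):

```python
# Years in prison for (you, partner), given each player's action,
# using the sentences from the deal above.
YEARS = {
    ("confess", "confess"): (4, 4),
    ("confess", "deny"):    (0, 10),
    ("deny",    "confess"): (10, 0),
    ("deny",    "deny"):    (2, 2),
}

def best_response(partner_action: str) -> str:
    """Return the action that minimises your own jail time, given the partner's action."""
    return min(("confess", "deny"), key=lambda me: YEARS[(me, partner_action)][0])

for partner in ("confess", "deny"):
    print(f"If your partner plays {partner!r}, your best response is {best_response(partner)!r}")

# Both lines print 'confess': confessing is the better reply either way,
# yet (confess, confess) costs 4 years each while (deny, deny) costs only 2.
```

Running it shows the paradox in miniature: the best response to every possible partner action is to confess, yet the outcome both players are driven to is worse for each of them than mutual denial.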
If the two had trusted each other (each assuming the other would also deny) and cooperated, both would have served half the time. But people assume (correctly, as far as the incentives go) that the other person will betray them for their own selfish gain, and this win-win result is unattainable.
But what if the other prisoner were you? Let us assume that the prisoner’s dilemma game is played by you and an exact copy of you. A copy that thinks like you, acts like you and is identical to you in every single way. Can you trust yourself? Do you trust yourself enough to deny the crime, when it is entirely possible that your copy rats you out to walk free while you suffer for ten years? How do you know that your copy loves you more than itself?
Your greatest enemy is you.