Posted in Philosophy

Pascal’s Wager

In the 17th century, French philosopher Blaise Pascal made the following argument for believing in a god:

  1. There is a god or there is not.
  2. You can choose to believe in a god or not (the wager).
  3. If there is a god, you will be rewarded eternally in the afterlife for your faith, but be punished eternally if you do not believe.
  4. If there is no god, you lose a finite amount of your time and maybe some material wealth for believing in a god.
  5. Ergo: As the rewards and punishments in the case of a god existing are infinite, while the costs of belief are finite, it is better to bet that there is a god, no matter how infinitesimal the odds may be (see the sketch below).
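
In decision-theoretic terms, this is an expected value argument. Here is a minimal sketch in Python – the specific payoff numbers are illustrative assumptions, not Pascal’s own figures:

  # Illustrative expected values for the wager.
  # The payoff numbers are assumptions for demonstration only.
  p = 1e-9                      # any non-zero chance that a god exists
  reward_if_god = float("inf")  # infinite reward for belief
  cost_of_belief = -1.0         # finite cost (time, wealth) if there is no god

  ev_believe = p * reward_if_god + (1 - p) * cost_of_belief
  ev_disbelieve = p * float("-inf") + (1 - p) * 0.0

  print(ev_believe)     # inf  - belief "wins"...
  print(ev_disbelieve)  # -inf - ...no matter how tiny p is

As soon as p is greater than zero, the infinite term swamps every finite cost, which is exactly the mathematical pull of the wager.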

Pascal’s wager does not deal with whether gods actually exist; that is irrelevant to the wager. He merely argues that the odds favour believing. But is this really the case?

To begin with, what Pascal promotes through this wager is not true belief or faith, but a calculated choice to believe – something that is not really possible, as belief is not a product of reasoning. Furthermore, if there really were an omniscient god, would he not easily see the impure motives behind your “faith”?

Secondly, how do you know that the god you believe in is the true god? There have been thousands upon thousands of religions throughout history. Who is to say that the deity you face in the afterlife will not be Hades, Odin or Yama? If that is the case, then you will have lined up behind the wrong god and will be punished for your “idol worship”. This objection nullifies the mathematical advantage of infinite rewards that Pascal claims.

Lastly, should a god exist, there is no way of knowing whether that god is benevolent or malevolent. Pascal’s wager only deals with the two possibilities of a benevolent god and the absence of a god, but if a malevolent, wrathful god exists, then what is gained from worshipping him? When you kill an insect, do you judge whether that insect has faith in you and then reward or punish it accordingly? In this scenario, worshipping such a god would likely be a waste of time, and you would be better off not believing at all.

In 1990, the American philosopher Michael Martin presented a counter-wager to Pascal’s – the so-called atheist’s wager. He argued that if a benevolent god existed, he would reward good deeds regardless of your faith. If a god does not exist, then your good deeds will leave a good legacy and the world will (hopefully) be a slightly better place to live in after you pass away.

Ergo, the wager we should be making is not on whether a god exists, but on being good.

(If you are interested in this, you should read The God Delusion by Richard Dawkins; he explains this very elegantly.)

Posted in Psychology & Medicine

Tit For Tat

In human society, there are many ways for a person to interact with others in a group setting. Some may choose to be selfish and look out only for their own interests, while others may choose altruism and cooperate with each other. The mathematical model that tries to predict behaviour and outcomes in these settings is the Prisoner’s Dilemma – a cornerstone of game theory. Tit for tat is one strategy that can be employed in such a setting.

The basis of tit for tat is equivalent exchange. A tit for tat player always chooses to cooperate unless provoked. As seen in the Prisoner’s Dilemma, if both players cooperate, both benefit (let us say 3 points each); if one player defects, that player gains more than from cooperation (5 points) while the cooperating tit for tat player gains 0 points.
If a tit for tat player is provoked, that player will retaliate. However, the player is also quick to forgive. Ergo, if the other player chose to cooperate on the last round, the tit for tat player (following the principle of equivalent exchange) will also cooperate; if the other player defected, the tit for tat player loses that round and retaliates by defecting on the next.
Note that the tit for tat strategy only works over repeated games, so that the player has a chance to retaliate.
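
The strategy is simple enough to state in a few lines of code. Here is a minimal sketch in Python (the names and the point values, including the 1 point each for mutual defection used in the example below, are assumptions for illustration):

  # Points awarded for (my move, their move); "C" = cooperate, "D" = defect.
  PAYOFFS = {
      ("C", "C"): (3, 3),  # mutual cooperation
      ("C", "D"): (0, 5),  # I cooperate, they defect
      ("D", "C"): (5, 0),  # I defect, they cooperate
      ("D", "D"): (1, 1),  # mutual defection
  }

  def tit_for_tat(opponent_history):
      """Cooperate on the first round, then mirror the opponent's last move."""
      if not opponent_history:
          return "C"
      return opponent_history[-1]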

Let us use an example to illustrate why the tit for tat strategy works. In this scenario, two tit for tat players and two defectors each play every other player, six rounds per match, using the above point system (if both defect, they each receive 1 point per round). The results are as follows:
  • Tit for tat vs defector: Tit for tat loses first round, both defect for next 5 rounds (5 vs 10)
  • Tit for tat vs tit for tat: Both cooperate on every round (18 vs 18)
  • Defector vs defector: Both defect on every round (6 vs 6)

When the points are added up, a tit for tat player gains 28 points (5 + 5 + 18) while a defector only gains 26 points (6 + 10 + 10). This is a surprising turn of events, as the defectors never lost a round and the tit for tat players never “won” a round. This goes to show how cooperation leads to better long-term results, even though selfishness prevails in any single encounter.
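
Continuing the sketch above, the whole tournament can be replayed in a few more lines (again, the helper names are my own):

  def always_defect(opponent_history):
      """A pure defector ignores history and always defects."""
      return "D"

  def play_match(strategy_a, strategy_b, rounds=6):
      """Play one six-round match and return both players' totals."""
      history_a, history_b = [], []
      score_a = score_b = 0
      for _ in range(rounds):
          move_a = strategy_a(history_b)  # each player sees the other's history
          move_b = strategy_b(history_a)
          points_a, points_b = PAYOFFS[(move_a, move_b)]
          score_a += points_a
          score_b += points_b
          history_a.append(move_a)
          history_b.append(move_b)
      return score_a, score_b

  # Each player's total across their three matches:
  tft_total = (play_match(tit_for_tat, tit_for_tat)[0]
               + 2 * play_match(tit_for_tat, always_defect)[0])
  defector_total = (play_match(always_defect, always_defect)[0]
                    + 2 * play_match(always_defect, tit_for_tat)[0])
  print(tft_total, defector_total)  # 28 26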

There are shortcomings to this strategy. If there is a failure in communication and one tit for tat player mistakes the other’s actions for an “attack”, they will retaliate. The other player then retaliates in turn and a vicious cycle is formed. This is the basis of many conflicts, ranging from schoolyard fights to wars (although interestingly, the tit for tat strategy is also found during wars in the form of “live and let live”). One way to prevent this is tit for tat with forgiveness, where a player randomly cooperates to try to break the cycle (a defector would respond negatively, while a tit for tat player will accept the cooperation), or tit for two tats, where the player waits a turn before retaliating, giving the opponent a chance to “make up for their mistake”.
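
Both variants are small tweaks to the function above. A sketch, where the 10% forgiveness rate is an arbitrary assumption:

  import random

  def tit_for_tat_with_forgiveness(opponent_history, forgiveness=0.1):
      """Mirror the opponent, but occasionally cooperate after a defection,
      which can break a retaliation cycle."""
      if not opponent_history or opponent_history[-1] == "C":
          return "C"
      return "C" if random.random() < forgiveness else "D"

  def tit_for_two_tats(opponent_history):
      """Only retaliate after two consecutive defections, giving the
      opponent a chance to make up for a one-off mistake."""
      if opponent_history[-2:] == ["D", "D"]:
          return "D"
      return "C"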

Computer simulations have repeatedly shown that the tit for tat strategy (especially the two forgiving variants mentioned just before) is extremely effective in such games. In fact, it is considered one of the best-performing strategies in the iterated Prisoner’s Dilemma.

In human societies, there is usually a mix of “nice people” and “selfish people”. By cooperating and trusting each other, we can produce a much greater gain over time than by being selfish. And since society unfortunately still has “defectors”, you can retaliate against those who refuse to cooperate by defecting on them too. Ergo, a good approach to life is to initially reach out your hand to whoever you meet, then treat them according to how they respond. If they take your hand and want to cooperate, treat them with altruism and help them out. If they swat your hand away and try to use you for their selfish gain, it is fine to shun them and withhold your help.

Through cooperation, understanding and connection, we can build a far more productive and efficient society, just like the ants.

Posted in Philosophy

Prisoner’s Dilemma

The prisoner’s dilemma is a famous example of how game theory functions. It models the behaviour of two people who must each choose between cooperating with and betraying the other. The story goes as follows:

Two accomplices in crime are arrested by the police. They are interrogated in separate rooms. As the police have insufficient evidence, they offer each prisoner a deal: confess that the two of you committed the crime, or deny it. The terms are:

  • If you confess and your partner denies taking part in the crime, you go free and your partner will serve ten years (and vice versa).
  • If you both confess you will go to prison for four years each.
  • If you both deny taking part in the crime, you both go to prison for two years.

Assuming the prisoners act rationally (i.e. in their best interest, minimising their jail time), each prisoner will choose to confess, as confessing is the better choice no matter what the partner does: 0 years instead of 2 if the partner denies, and 4 years instead of 10 if the partner confesses. However, because both prisoners reason this way, the result is almost always that both confess and end up with four years each. Therefore, because the two cannot trust each other, individually rational choices produce a collectively worse outcome (the benefit is not maximised).
If the two had trusted each other (each assuming the other would deny too) and cooperated, both would have served half the time. But each correctly assumes that the other will betray them for selfish gain, and this win-win result remains unattainable.
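
The dominance argument can be checked mechanically. A minimal sketch in Python, with the jail terms taken from the deal above:

  # Years in prison for (my choice, partner's choice); lower is better.
  YEARS = {
      ("confess", "deny"): 0,
      ("confess", "confess"): 4,
      ("deny", "deny"): 2,
      ("deny", "confess"): 10,
  }

  # Whatever the partner does, confessing yields less prison time,
  # so confessing is the dominant strategy for a rational prisoner.
  for partner in ("confess", "deny"):
      best = min(("confess", "deny"), key=lambda me: YEARS[(me, partner)])
      print(f"if partner chooses {partner}, best response is {best}")

  # Yet mutual confession (4 years each) is worse than mutual denial
  # (2 years each) - rational play locks both into the worse outcome.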

But what if the other prisoner was yourself? Let us assume that the prisoner’s dilemma was played by you and an exact copy of you – a copy that thinks like you, acts like you and is identical to you in every single way. Can you trust yourself? Do you trust yourself enough to deny the crime, when it is entirely possible that your copy will rat you out to walk free while you suffer for ten years? How do you know that your copy loves you more than they love themselves?

Your greatest enemy is you.