Daniel Pink talks about how he makes it a habit to assume that people operate from good intentions. By doing so, he chooses a default option that can have surprisingly large effects downstream.
This effect was demonstrated in a study of organ donation rates across European countries by Eric Johnson and Daniel Goldstein, popularized by Dan Ariely. The default option on a driving license form can make the difference between 5–20% enrollment and 85–99.9% enrollment for organ donation in the event of death.
Moreover, the default state can reinforce a strong feedback loop, turning it into a self-fulfilling prophecy. If you assume that people act with good intent, you will notice the actions that confirm this assumption, which in turn strengthens it. The same holds if the default is to assume bad intent: one starts seeing bad intentions everywhere, and that belief only grows stronger. It was primarily this difference that caused two characters in the Mahabharata, Yudhishthira and Duryodhana, to view the world so differently.
Assuming good intent isn’t being naive. It is merely giving people the benefit of the doubt. The tit-for-tat algorithm in game theory starts with an assumption of good intent in every transaction and revises that belief only when it has evidence to the contrary. In Robert Axelrod’s tournaments, this algorithm competed directly with several other programs – some of them rigged to cheat and deceive, others that mimicked “nice” behaviours.
The result? The tit-for-tat algorithm, with its “good intent until proven otherwise” default, emerged as the most effective long-term strategy across many games.
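The strategy above can be sketched in a few lines of code. This is a minimal illustration of tit-for-tat in an iterated prisoner's dilemma, not the actual tournament code; the payoff values are the standard prisoner's dilemma payoffs, and the `always_defect` opponent stands in for the "rigged to cheat" programs mentioned above.

```python
COOPERATE, DEFECT = "C", "D"

# Standard prisoner's dilemma payoffs for the first player in the pair.
PAYOFF = {
    (COOPERATE, COOPERATE): 3,  # mutual cooperation
    (COOPERATE, DEFECT): 0,     # exploited
    (DEFECT, COOPERATE): 5,     # exploiting
    (DEFECT, DEFECT): 1,        # mutual defection
}

def tit_for_tat(opponent_history):
    """Assume good intent: cooperate first, then mirror the opponent's last move."""
    return COOPERATE if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    """A 'cheating' strategy that defects every round."""
    return DEFECT

def play(strategy_a, strategy_b, rounds=10):
    """Run an iterated game; each strategy sees only the opponent's past moves."""
    moves_a, moves_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(moves_b)
        move_b = strategy_b(moves_a)
        moves_a.append(move_a)
        moves_b.append(move_b)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
    return score_a, score_b
```

Against a habitual defector, tit-for-tat loses only the first round and then stops being exploited; against a copy of itself, both sides cooperate throughout and earn the maximum mutual payoff: `play(tit_for_tat, always_defect)` returns `(9, 14)` over ten rounds, while `play(tit_for_tat, tit_for_tat)` returns `(30, 30)`.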
Further reading: The Evolution of Cooperation by Robert Axelrod