
Why Forgiveness Builds Cooperation — and When It Backfires

  • Writer: Ilana
  • Aug 25
  • 12 min read

Updated: Aug 26

Why do we sometimes retaliate and other times forgive?

Why do we risk trust with one person but shut down with another?



At some level, we are all strategists. Each choice we make — whether in love, friendship, or politics — is an attempt to maximize two things at once:

  • our utility function (the outcomes we value, like love, safety, dignity, prosperity)

  • and our resilience (the capacity to keep playing the long game without being destroyed by setbacks).


The trouble is, most of these decisions are guided by our unconscious “database” of past experiences.

If betrayal left a deep mark, we may overestimate the risk of being hurt again.

If kindness was once rewarded, we may extend trust too easily.

Our internal calculator is powerful, but it’s also biased: it magnifies wounds, discounts surprises, and generalizes from too little data.


This is where game theory comes in.

Born from mathematics and economics, it offers a framework to rationally analyze choices when our outcome depends on someone else’s move — a move we can’t predict with certainty. Unlike our biased intuition, game theory helps us identify which strategies reliably maximize our utility and our resilience in the long run. Having some knowledge of it allows us to combine our unconscious statistical calculator with more rigorous probabilistic models.

In practice, it can shift our mindset: instead of repeating the same reflexes, we can experiment with new strategies — and by doing so, feed our internal database with new responses.


One of its simplest and most famous models, the Prisoner’s Dilemma, shows why cooperation, retaliation, and forgiveness each have a role to play — and why balancing them wisely is the key not only to healthy relationships but also to robust societies.



The Prisoner’s Dilemma: Why We Don’t Always Cooperate

To see how game theory works, let’s start with its most famous model: the Prisoner’s Dilemma.


Imagine two partners in crime, held in separate cells.

Each must choose: stay silent (cooperate) or betray the other (defect). The possible outcomes:

  • If both cooperate, each gets a light sentence.

  • If one defects while the other cooperates, the defector goes free while the cooperator pays a heavy price.

  • If both defect, both get medium sentences.


Mathematicians represent the consequences of your choice with a simple payoff matrix. The numbers show your utility — higher is better:

You \ The Other    Cooperate (C)             Defect (D)
Cooperate (C)      Light sentence (7 pts)    Heavy sentence (0 pts)
Defect (D)         Free (10 pts)             Medium sentence (4 pts)

At first glance, defection seems the smartest move. Whatever the other person does, you protect yourself by defecting:

  • If they cooperate, you walk away with the biggest reward.

  • If they defect, you still avoid the worst outcome.

That’s why game theorists say "Both Defect" is the Nash equilibrium of the one-shot Prisoner’s Dilemma: neither player can improve their outcome by unilaterally changing strategy.
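To see why defection dominates, here is a minimal Python sketch of the one-shot game, using the payoff numbers above (the `PAYOFF` dictionary and `best_response` helper are illustrative names, not anything standard):

```python
# Payoffs from the article's matrix: PAYOFF[(my_move, their_move)] = my points.
PAYOFF = {
    ("C", "C"): 7,   # both cooperate: light sentence
    ("C", "D"): 0,   # I cooperate, they defect: heavy sentence
    ("D", "C"): 10,  # I defect, they cooperate: free
    ("D", "D"): 4,   # both defect: medium sentence
}

def best_response(their_move):
    """My move that maximizes my payoff against a fixed opponent move."""
    return max("CD", key=lambda my: PAYOFF[(my, their_move)])

# Defection is the best response to both opponent moves (a dominant strategy),
# so (D, D) is the Nash equilibrium -- even though mutual cooperation pays more.
print(best_response("C"), best_response("D"))  # D D
print(PAYOFF[("D", "D")], "<", PAYOFF[("C", "C")])  # 4 < 7
```

That last line is the paradox in miniature: the equilibrium outcome (4 points each) is strictly worse for both players than mutual cooperation (7 points each).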


That logic makes betrayal look like the “rational” move.

But here’s the paradox: when both people reason this way, both defect, and the result is worse than if they had trusted each other and cooperated.

In other words, rational self-interest produces collective loss.


And this is exactly what the model reveals: trust creates the best collective result, but fear of betrayal makes mistrust the individually rational choice.


This explains why cooperation in human affairs is always fragile: the temptation to defect is often built into the structure of the interaction itself.


But of course, real life is rarely a one-shot game. We see each other again tomorrow, next week, next year. And once the game is repeated, the whole logic changes — and cooperation can suddenly become not only possible, but rational.



When the Game Repeats: Retaliation and Forgiveness

In real life, we rarely play once and walk away.

Relationships, communities, even rival nations all interact again and again. This repetition changes everything.


In a single round of the Prisoner’s Dilemma, defection is the best strategy to maximize our utility. But when the game is repeated, today’s betrayal can trigger tomorrow’s punishment. Suddenly, cooperation can become rational — not out of pure altruism, but because it protects long-term outcomes and the very ability to keep playing (resilience).


Game theorists have studied how to play depending on what has happened before.

Let’s look at the most important strategies:

  • Always Cooperate: Whatever happens, keep trusting. This looks generous but is easy to exploit — in relationships, it’s the person who becomes a doormat!

  • Always Defect: Never trust, always betray. This avoids exploitation, but guarantees a cold, destructive dynamic — you never get the best outcome of mutual cooperation.

  • Grim Trigger: Cooperate until betrayed — then defect forever. Harsh, unforgiving. One strike and the relationship is over.

  • Tit-for-Tat (TFT): Start with cooperation, then simply copy the other’s last move. Friendly but firm. If they cooperate, you reward it; if they defect, you retaliate. This strategy proved highly effective in Robert Axelrod’s famous computer tournaments in the 1980s: it maximizes both utility and resilience.

  • Tit-for-Tat with Forgiveness: Like TFT, but occasionally lets a defection slide. This prevents endless retaliation loops caused by errors or misunderstandings, while still discouraging consistent betrayal.


What Axelrod’s experiments revealed was striking: simple reciprocity strategies like Tit-for-Tat, especially with a touch of forgiveness, outperformed both naive kindness and cynical betrayal.

The lesson is clear: the most robust players were neither saints nor villains, but those who combined cooperation with boundaries — and forgiveness when it mattered.
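The strategies above can be pitted against each other in a toy round-robin, in the spirit of Axelrod’s tournaments. This is a minimal sketch with the article’s payoffs and only four deterministic entrants (the real tournaments had dozens of programs), so the exact scores are illustrative:

```python
# Each strategy maps (my_history, their_history) -> "C" or "D".
def always_cooperate(my_hist, their_hist): return "C"
def always_defect(my_hist, their_hist): return "D"
def grim_trigger(my_hist, their_hist):
    return "D" if "D" in their_hist else "C"   # one betrayal, then defect forever
def tit_for_tat(my_hist, their_hist):
    return their_hist[-1] if their_hist else "C"  # copy their last move

PAYOFF = {("C", "C"): 7, ("C", "D"): 0, ("D", "C"): 10, ("D", "D"): 4}

def play(strat_a, strat_b, rounds=200):
    """Play a repeated game; return each player's total score."""
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        a = strat_a(hist_a, hist_b)
        b = strat_b(hist_b, hist_a)
        score_a += PAYOFF[(a, b)]
        score_b += PAYOFF[(b, a)]
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b

# Round-robin (including self-play, as Axelrod did): sum each strategy's scores.
strategies = [always_cooperate, always_defect, grim_trigger, tit_for_tat]
totals = {s.__name__: 0 for s in strategies}
for s in strategies:
    for t in strategies:
        score, _ = play(s, t)
        totals[s.__name__] += score

# In this tiny, noiseless pool, tit_for_tat and grim_trigger tie for first,
# ahead of always_defect and always_cooperate.
print(totals)
```

Even in this stripped-down version, the reciprocal strategies come out ahead of both the doormat and the cynic; adding noise (below) is what separates forgiving reciprocity from unforgiving reciprocity like Grim Trigger.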


And in real life, repetition often comes with something even more powerful: reputation. You are rarely playing just against one person; you are also being observed by others — friends, colleagues, allies, communities. Every move becomes part of your record.


Reputation magnifies the shadow of the future:

  • If you defect today, you don’t just face retaliation from one partner tomorrow — you may lose the trust of many others.

  • If you cooperate reliably, you don’t just earn one ally — you build a network of people willing to engage with you.


This is why reciprocity becomes rational in communities, businesses, or nations. The cost of betrayal is not only direct punishment but also a loss of reputation, which shrinks your future opportunities. Reputation acts like an invisible scorecard, extending the reach of Tit-for-Tat far beyond individual relationships.


And in our time, this scorecard has become literal. We rate almost everything: restaurants, drivers, hosts, products, even professionals — and sometimes people themselves. Platforms amplify reputation, making it instantly visible and harder to escape. In such a society, every act of cooperation or defection is not just a private choice — it’s part of a public record that can shape future payoffs. Betray once, and you may not just lose a partner, but access to the entire network.


Game theorists call this indirect reciprocity: cooperation spreads and stabilizes because trustworthy behavior is rewarded with opportunities, while betrayal closes doors. In a world of ratings and reviews, the shadow of the future is longer and sharper than ever.
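Indirect reciprocity can be sketched as a tiny “image scoring” simulation: each player carries a public standing, and discriminating cooperators help only partners in good standing. The population sizes, payoffs, and standing rule here are illustrative assumptions, not from the article:

```python
import random

def run(n_discriminators=20, n_defectors=2, rounds=5000, seed=7):
    """Toy indirect-reciprocity model: random donor/recipient pairs, where
    discriminators help only recipients in good public standing."""
    n = n_discriminators + n_defectors
    is_defector = [False] * n_discriminators + [True] * n_defectors
    good = [True] * n          # everyone starts in good standing
    score = [0] * n
    rng = random.Random(seed)
    for _ in range(rounds):
        donor, recipient = rng.sample(range(n), 2)
        helps = (not is_defector[donor]) and good[recipient]
        if helps:
            score[donor] -= 1      # helping costs the donor a little...
            score[recipient] += 3  # ...and benefits the recipient more
            good[donor] = True
        else:
            good[donor] = False    # refusing (or defecting) is publicly noted
    avg = lambda idx: sum(score[i] for i in idx) / len(idx)
    return avg(range(n_discriminators)), avg(range(n_discriminators, n))

disc_avg, def_avg = run()
print(disc_avg, def_avg)  # discriminators end well ahead of defectors
```

The mechanism is visible in the scores: defectors are quickly marked bad and cut off from help, while cooperation keeps circulating — profitably — among those in good standing. That is the “invisible scorecard” at work.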



Why Forgiveness Matters

Tit-for-Tat looks like the perfect balance: start with trust, reward cooperation, and punish betrayal. It’s fair, simple, and remarkably effective.


But there is a hidden fragility.

In real life, mistakes happen. Someone miscommunicates, a gesture is misread, or a person is simply tactless without any harmful intention. If both players are using Tit-for-Tat, even this kind of innocent misstep can trigger retaliation. That retaliation then looks like betrayal to the other, who retaliates again, and so on. Soon both are locked in a spiral of punishment, and cooperation collapses — not because either side truly intended harm, but because neither allowed room for error.


This is why forgiveness matters. It acknowledges human imperfection. People will sometimes fail, stumble, or act clumsily — not always out of malice, but often out of limitation. Without forgiveness, these small cracks in behavior harden into permanent fractures. With forgiveness, cooperation becomes resilient: punishment still discourages exploitation, but occasional grace allows the relationship to reset after noise, accidents, or flaws in execution.


Robert Axelrod’s computer tournaments in the 1980s confirmed this. Strategies with a touch of forgiveness consistently outperformed those that were purely retaliatory. Forgiveness, far from being naive, proved to be a stabilizing force — and at the same time, it is a recognition of our shared imperfection.
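The retaliation spiral, and how a little grace defuses it, can be simulated by adding noise to two Tit-for-Tat players. This is a sketch with made-up parameters (a 5% chance each move is flipped by mistake, a 20% chance a retaliation is forgiven):

```python
import random

PAYOFF = {("C", "C"): 7, ("C", "D"): 0, ("D", "C"): 10, ("D", "D"): 4}

def play_noisy(forgiveness, rounds=2000, noise=0.05, seed=1):
    """Two Tit-for-Tat players whose moves are occasionally flipped by noise.
    With forgiveness > 0, an intended retaliation is sometimes let slide.
    Returns the average points per player per round."""
    rng = random.Random(seed)
    last_a, last_b = "C", "C"   # treat the pre-game state as mutual cooperation
    total = 0
    for _ in range(rounds):
        a, b = last_b, last_a   # TFT: copy the opponent's last observed move
        if a == "D" and rng.random() < forgiveness: a = "C"  # let it slide
        if b == "D" and rng.random() < forgiveness: b = "C"
        if rng.random() < noise: a = "D" if a == "C" else "C"  # execution error
        if rng.random() < noise: b = "D" if b == "C" else "C"
        total += PAYOFF[(a, b)] + PAYOFF[(b, a)]
        last_a, last_b = a, b
    return total / (2 * rounds)

strict = play_noisy(forgiveness=0.0)    # retaliation spirals drag the average down
generous = play_noisy(forgiveness=0.2)  # occasional grace restores cooperation
print(strict, generous)
```

With strict Tit-for-Tat, a single accidental defection echoes back and forth for many rounds; with a modest forgiveness probability, the pair resets to mutual cooperation and earns noticeably more per round.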



From Individuals to Societies

What holds for individuals also scales up to entire cultures. Strategies of cooperation, retaliation, and forgiveness are not only personal choices — they are embedded in the norms, laws, and institutions of societies.


Religions were among the first to codify these strategies.

Judaism emphasized proportionate reciprocity: “an eye for an eye” was not a call for vengeance, but a rule of fairness — punishment should be neither harsher nor lighter than the offense.

Christianity later shifted the emphasis toward forgiveness: “turn the other cheek” elevated mercy as a counterweight to strict justice.

Together, these traditions captured the same lesson that Axelrod’s tournaments later proved mathematically: reciprocity without forgiveness is brittle, forgiveness without reciprocity is fragile. The real resilience lies in holding both.


At the level of whole societies, reputation becomes law and culture. A community remembers who defected and who cooperated, not only through gossip or personal memory, but through legal codes, moral traditions, and social norms. This collective memory acts like a giant scorecard:

  • In a strict Tit-for-Tat society, the punishment for betrayal is severe and lasting. One crime, one mistake, and your reputation is ruined. This deters some defectors, but also pushes others into the role of “permanent outlaws,” since the cost of re-entry is too high. Rationally, if you can never rebuild trust, your best strategy is to keep defecting. The system enforces order, but at the cost of turning some members into long-term enemies.

  • In a strong-forgiveness society, the opposite happens. Reputation damage is light, second chances abound, and punishment is quickly lifted. This prevents permanent exclusion, but also tempts opportunists to exploit the system. Rationally, if tomorrow’s cost is negligible, you may treat every interaction as if it were a one-shot game — and in one-shots, defection dominates. The future no longer disciplines the present: the calculation becomes “exploit today, and try again tomorrow.” This encourages a small but significant fraction of people to game the system, slowly eroding trust for everyone else.


Both extremes destroy the very conditions that make cooperation sustainable.

Strict Tit-for-Tat creates permanent outlaws.

Strong forgiveness creates permanent opportunists.

In both cases, the reputation mechanism breaks: it no longer balances deterrence with reintegration.


The lesson is that societies, like individuals, need the same triad of robustness:

  • Conditional cooperation as the default.

  • Proportionate retaliation that deters without permanently excluding.

  • Limited forgiveness that allows repair without erasing accountability.


In modern societies — where everything from drivers to doctors to companies is publicly rated — this balance becomes even more crucial. Too harsh a penalty, and reputation systems destroy lives for single mistakes. Too lenient, and the ratings lose meaning, enabling exploitation.

The art of resilience, at every scale, is finding the middle path where reputation enforces trust while still leaving room for human imperfection.



When the Model Breaks: Strategies for the Exceptions

So far, we’ve worked with the rational model: each player tries to maximize their own outcome, and reciprocity plus forgiveness can sustain cooperation.

But in real life, not everyone plays by those rules. What can we do when the model breaks down?


1. Destructive utility – “your loss is my gain.”

Some actors don’t seek to maximize their own payoff, but simply to make you lose, even at great cost to themselves. Their utility function is inverted: your suffering = my satisfaction.

Suicide bombers, vindictive ex-partners, or scorched-earth politicians fall into this category.

  • Implication: cooperation is mathematically impossible, because there is no shared interest in survival or mutual benefit.

  • Strategy (in the model): the only rational moves are to remove them from the game (destruction/elimination, if possible) or to isolate/contain them (minimize interaction as much as possible so they cannot impose losses). Forgiveness and reciprocity have no meaning here, because there is no payoff structure where cooperation is rewarded.


2. Random or irrational behavior – inconsistency without strategy.

Others don’t follow a stable logic at all. Their choices seem erratic, impulsive, or chaotic, driven by moods, addictions, or hidden motives. You don’t know whether kindness will be met with kindness or with defection. That uncertainty makes trust impossible to build.

  • Implication:  in game-theory terms, it’s as if the “noise” is so high that the interaction never stabilizes. Reciprocity and forgiveness only work when there’s at least some consistency to respond to. Without it, the relationship remains unstable — one day up, the next day down — with no possibility of building lasting trust.

  • Strategy (in the model): the rational response is to exit the game. If the opponent has no strategy, you cannot build one around them.

  • The brain’s trap: and yet, humans often stay. Our dopamine system responds strongly to unpredictability — intermittent affection or rejection creates addictive loops, just like a slot machine. From a game-theory perspective, leaving is optimal; from a neurological perspective, we often get hooked and keep playing.


3. The madman bluff – calculated unpredictability.

A third category is strategic. The player may be rational, but they deliberately cultivate the image of being irrational. Nixon’s Cold War “madman theory” was exactly this: if the opponent believes you might just do something extreme, they will hesitate.

  • Implication: uncertainty shifts the payoff structure. You don’t know if they’re bluffing or truly unstable, so you overestimate the risk of pushing back.

  • Strategy (in the model): the rational counter is to establish credible deterrence and clear boundaries — making sure that escalation carries real costs for them, while refusing to be manipulated by their bluff. The goal is to re-anchor the game in rational payoffs, where cooperation or restraint again make sense.



In each of these cases, the neat balance of Tit-for-Tat and forgiveness collapses. The model itself points us toward harsher or more decisive strategies: destruction or isolation for purely destructive players, exit in the face of randomness, and deterrence against bluffers.


And when our brain resists these moves — clinging to chaos or tolerating destruction — it is not the model that fails, but our own wiring that tempts us to keep playing a game that cannot be won.



Why Humans Don’t Always Play Rationally

So far we’ve built a rational framework: strategies like reciprocity, retaliation, and forgiveness can sustain cooperation; extremes of harshness or leniency collapse the game; and when actors don’t play by the rules at all, the model points us toward destruction, exit, or deterrence.


But human beings are not machines. Even when the rational move is clear, we often fail to take it. Our brains don’t evaluate payoffs like cold mathematicians — they run on chemistry, emotions, and stories. One of the strongest biases we carry is our response to unpredictability.


The dopamine system in our brain doesn’t just reward us when something good happens; it spikes when something good happens unexpectedly. That surprise produces a bigger rush than a predictable reward. This is why slot machines are addictive: it’s not the size of the win that hooks us, but the uncertainty of whether a win will come at all.


And dopamine does more than create a rush — it shapes memory. Surprising events get “over-encoded,” stamped into our internal database more strongly than routine ones. Rationally, we should weigh outcomes by frequency: ten betrayals should count for more than one act of kindness. But because that one kindness was unexpected, dopamine amplifies it in memory. Our internal “statistical model” becomes biased: we give too much weight to rare, surprising rewards, and too little to consistent patterns.
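This memory bias can be illustrated with a toy learning rule: compare a plain running average against one whose update step grows with the size of the prediction error, so surprising outcomes are “over-encoded.” The numbers and scaling here are illustrative assumptions, not a neuroscience model:

```python
import random

def estimate_kindness(outcomes, surprise_scaling):
    """Track a running estimate of a partner's kindness rate and return its
    average over time. With surprise_scaling, bigger prediction errors get
    bigger update steps -- a toy stand-in for dopamine over-encoding."""
    est, history = 0.5, []
    for r in outcomes:
        error = r - est
        lr = 0.1 * (1 + 4 * abs(error)) if surprise_scaling else 0.1
        est += lr * error
        history.append(est)
    return sum(history) / len(history)

rng = random.Random(3)
# A partner who is kind only 10% of the time (1 = kindness, 0 = betrayal).
outcomes = [1 if rng.random() < 0.1 else 0 for _ in range(2000)]
true_rate = sum(outcomes) / len(outcomes)

plain = estimate_kindness(outcomes, surprise_scaling=False)     # tracks ~true rate
biased = estimate_kindness(outcomes, surprise_scaling=True)     # noticeably higher
print(true_rate, plain, biased)
```

The plain average converges to roughly the true kindness rate, while the surprise-weighted one settles well above it: rare acts of kindness produce large errors and therefore large upward jumps, while routine betrayals produce small errors and small corrections. That is the wishful-thinking loop in miniature.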


The result is wishful thinking loops. We keep hoping the unpredictable partner will finally become reliable, or that a destructive organization will suddenly turn cooperative. Rationally, the game has shown us its structure; but emotionally, our memory keeps us playing, because that one bright exception was etched too deeply.


This is why people sometimes remain in dynamics that no payoff matrix could justify. Rationally, cooperation is impossible with randomness or malice. But emotionally, unpredictability intoxicates us: it makes us hope for more than the game itself can deliver.


Knowing this doesn’t make us immune, but it does help us step back, recognize the bias, and choose more consciously when to keep playing — and when to walk away.



The Final Lesson: Forgive, But Know When Not To

If there is one lesson that game theory leaves us with, it is this: cooperation is fragile, but possible. 

The strategies that endure — whether in individuals, societies, or nations — are those that combine three elements:

  • Conditional cooperation as the default.

  • Proportionate retaliation to deter betrayal.

  • Limited forgiveness to repair mistakes and absorb human imperfection.


Without forgiveness, cooperation cannot last in the long term. Noise, misunderstandings, and clumsy behavior are inevitable; if every slip ends trust, then all relationships collapse into brittle cycles of punishment at some point. Forgiveness is the buffer that keeps cooperation alive.


But forgiveness is only rational inside a shared framework of reciprocity and mutual utility.

If both players value their own outcomes and have an interest in preserving the relationship, forgiveness stabilizes cooperation. It helps reputation recover, makes re-entry possible after mistakes, and keeps the social fabric intact.


Outside of that framework, however, forgiveness loses its strategic power and can become self-sabotage.

  • With destructive actors whose payoff is simply your loss, forgiveness is wasted — the model tells us the only rational responses are isolation or elimination.

  • With random or irrational actors, forgiveness cannot build trust, because you never know if kindness will be met with kindness or betrayal. Our brain may tempt us to stay, hooked by dopamine and unpredictability, but mathematically, leaving is the only move that preserves resilience.

  • With madman bluffers, forgiveness can be read as weakness. Here, deterrence and boundaries matter more than mercy.


This is the paradox at the heart of human cooperation: forgiveness is at once the key to sustaining relationships, and the boundary we must not cross when reciprocity disappears. Forgive too little, and the web of trust and cooperation breaks.

Forgive where there is no shared framework, and you invite endless exploitation or harm.


The art of robustness — in love, in society, in politics — lies in knowing the difference.

Forgive when it strengthens cooperation.

Refuse forgiveness when the game itself has collapsed.
