
Altruistic punishment? Maybe not.

Punishing cheaters selects against cheating, but what selects for punishing? Are the answers different, depending on whether the species involved have brains? A recent internet experiment suggests that altruistic punishment, perhaps unique to humans, doesn't promote cooperation as effectively as previously thought.

My own research focuses on cooperation in species without brains. We showed that "sanctions" imposed by legume plants limit the evolution of "cheating" rhizobium bacteria (those that divert more plant resources to their own reproduction, relative to other rhizobia, by investing less in fixing the nitrogen needed by the plant). We think individual plants help themselves by imposing sanctions that limit wasteful resource use by less-beneficial rhizobia - they don't do it for the benefit of other legumes.

In theory "altruistic punishment" (paying some cost or taking some risk to punish noncooperators) could help explain why there is more cooperation among unrelated humans than might otherwise be expected. (Cooperation among relatives is explained by kin selection.) But how much are individuals willing to pay to punish noncooperators?
The latest experiments addressing this question were just published online in Proceedings of the Royal Society by Martijn Egas and Arno Riedl: The economics of altruistic punishment and the maintenance of cooperation.

A total of 846 Dutch-speaking people who responded to the ad "Play to get rich over the internet" participated in the experiments. This might not be a random sample, but at least they weren't all college students; a majority were male, but income was near the national average. I would be interested in seeing results from other groups.

In each experiment, three people interacted anonymously, over the internet. Each person got $20 - I'm using $ to symbolize a monetary unit - and could invest some of it in a common "project." As is typical in these "public goods" experiments, the reward for investing $1 was $1.50, split among the participants. So if everyone invests $20, everyone gets $30 back, but an individual could invest nothing and still get a share of what others invest. (Like rhizobia benefiting from nitrogen fixed by other rhizobia; but I digress.) Then, each participant had a chance to punish others for being stingy, at some cost to the punisher. The experiment was repeated, so those punished in the first round had a chance to reform, but they interacted with different individuals in successive rounds.
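To make the payoff arithmetic concrete, here is a minimal Python sketch of the public-goods stage as I've described it. The names (ENDOWMENT, MULTIPLIER, public_goods_payoffs) are my own labels, not from the paper; only the $20 endowment, the 1.5 return per dollar, and the equal split come from the setup above.

```python
# Minimal sketch of the 3-player public-goods stage described above.
# Variable and function names are mine, not the paper's.

ENDOWMENT = 20    # starting money per player ($)
MULTIPLIER = 1.5  # each $1 invested returns $1.50 to the group as a whole

def public_goods_payoffs(investments):
    """Return each player's payoff: what they kept, plus an equal share of the pool."""
    pool = sum(investments) * MULTIPLIER
    share = pool / len(investments)
    return [ENDOWMENT - inv + share for inv in investments]

print(public_goods_payoffs([20, 20, 20]))  # [30.0, 30.0, 30.0] -- full cooperation
print(public_goods_payoffs([0, 20, 20]))   # [40.0, 20.0, 20.0] -- the free-rider wins
```

The second example shows the core dilemma: investing nothing always pays better for the individual, even though everyone investing everything pays better for the group.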

The interesting thing about this study is that, in different experiments, they varied the cost of punishing. In treatment T13, you could pay $1 to fine another person $3, whereas in treatment T11 it cost $1 to fine someone $1.
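Here's a similarly hedged sketch of how the punishment stage might work, assuming (my simplification) that punishment points translate directly into deductions from the payoffs above. The function and variable names are hypothetical, but the 1:3 (T13) and 1:1 (T11) cost-to-fine ratios are from the paper.

```python
# Sketch of the punishment stage (my simplification): points[i][j] is how many
# punishment points player i assigns to player j. The punisher pays `cost` per
# point; the target loses `impact` per point. T13: cost=1, impact=3; T11: impact=1.

def apply_punishment(payoffs, points, cost=1, impact=3):
    out = list(payoffs)
    for i, row in enumerate(points):
        for j, p in enumerate(row):
            out[i] -= cost * p    # the punisher's "altruistic" expense
            out[j] -= impact * p  # the fine on the (presumably stingy) target
    return out

# Player 0 free-rode; players 1 and 2 each spend $1 to fine player 0 under T13:
print(apply_punishment([40.0, 20.0, 20.0], [[0, 0, 0], [1, 0, 0], [1, 0, 0]]))
# [34.0, 19.0, 19.0] -- the cheater's lead shrinks from $20 to $15
```

Note that punishing shrinks the cheater's lead but also shrinks the group total (from $80 to $72 here), which previews the welfare result below.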

Without the punishment option, cooperation got worse in successive rounds. When it was cheap to punish (T13), cheaters got punished and the average investment increased from about $9 in the first round to about $11 in round 6. Both results are similar to previous experiments. But when it was expensive to punish (T11), there was less punishment and cooperation declined over rounds. Furthermore, even in T13, the benefit of greater cooperation was less than the cost of punishment, at least in terms of total payoff to the group. The authors concluded that "altruistic punishment leads to an overall loss of individual and group welfare."

These results show that altruistic punishment isn't an "easy" solution to the problem of cooperation. Things were slowly getting better in the groups with altruistic punishment, however, and worse without it, so maybe long-term results would be different. If altruistic punishment is consistent enough, it may create and maintain cooperation. But for it to be consistent, it can't be too expensive for the punisher.

If I understand the experimental setup, the punishment was totally altruistic, because you wouldn't interact again with those whose behavior you may have improved by punishment. So this would not represent the situation in a small group of chimps or humans. (Let's ignore kin selection, for simplicity.) There, punishing noncooperation would be somewhat altruistic -- you might prefer that someone else take the risk of confronting him -- but you could be among the beneficiaries of future cooperation. The experiments in this paper seem more relevant to someone living in a big city. What are the chances that that dangerous-looking car thief is going to steal your own car next?


Comments

Since humans are primarily social animals, anything that invests in one's "tribe/society" is an investment in one's DNA.
I can see a benefit in trying to assure that you invest in a society that won't cheat your DNA of the goods a well-ordered society can provide.

I agree that investing in one's tribe or society makes sense even from a "selfish-gene" perspective, if the individual opportunity cost is small enough and the collective benefit is great enough.

“but income was near the national average”

What “national average” ? There are a lot of very different national averages in the world.

The Dutch national average. Certainly not representative of all humans.

Interesting. But the thing I wonder about is the nature of the equilibrium or steady state that may be reached; in particular, the extremes. E.g., what happens when there is a benefit to imitating the punishers? Rather than a general drift toward a society where all are contributors, is there an equilibrium where most are punishers, and what does that mean for a group? Does this setup force mutation to become latent or hidden in some way?
