How does a group of people succeed in coming together to contribute to a collective project over an extended period of time? For example, what leads a group of unemployed workers to travel to the capital to lobby for an extension of unemployment benefits, or a group of expatriate Burmese people in London to attend demonstrations against the junta? What motivations are relevant at the individual level? And what circumstances are most conducive to creating and sustaining collective action?
Purely self-interested egoists won’t make it — that is the message of Mancur Olson’s The Logic of Collective Action: Public Goods and the Theory of Groups. The maximizing egoist reasons that the activity will either succeed or fail independent of his or her own participation. If it succeeds, he will enjoy the benefits of cooperation; if it fails, he will have avoided the wasted costs of participation. Either way the egoist does better by refraining from participation. So collective action in pursuit of a public good is all but impossible within a society of rationally self-interested egoists. As Amartya Sen observes in “Rational Fools,” “The purely economic man is indeed close to being a social moron.”
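Olson's free-rider reasoning can be made concrete with a small expected-utility calculation. The numbers below (benefit, cost, probability of success) are invented for illustration; the point is only that when one actor's choice does not change the probability that the collective action succeeds, abstaining dominates contributing:

```python
def expected_payoff(participate: bool,
                    p_success: float,
                    benefit: float = 100.0,  # value of the public good (illustrative)
                    cost: float = 10.0) -> float:
    """Expected utility for an egoist whose own choice does not
    affect the probability that the collective action succeeds."""
    return p_success * benefit - (cost if participate else 0.0)

# Whatever the chance of success, the egoist does better by abstaining:
# participation only subtracts the cost of contributing.
for p in (0.1, 0.5, 0.9):
    assert expected_payoff(False, p) > expected_payoff(True, p)
```

The dominance holds for any probability of success, which is exactly why the egoist's reasoning leads to universal free riding.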
But we know that this conclusion does a bad job of describing real social life. People in villages, communities, political parties, religious organizations, public television audiences, and ethnic groups do in fact often succeed in getting themselves organized and mobilized in pursuit of a public good for the group. Often the level of mobilization is below the level that would be optimal for production of the good for the population; often it is fairly straightforward to identify the symptoms of incipient free-riding; but ordinary social experience and history alike are replete with examples of voluntary collective action.
Many theories can be articulated in order to account for the spontaneous occurrence of collective action. People may be irrational; they may be motivated entirely by non-utility considerations; they may be governed by norms of solidarity beyond their rational control; they may be disciplined by grassroots organizations that punish defectors; there may be an evolutionary basis hard-wired into the human cognitive-deliberative system that favors cooperation; or, for that matter, there may be a hard-wired impulse towards punishing defectors from common projects that tips the balance of utility calculation for would-be free-riders.
But here is a factor that seems to be a credible observation about social motivation and that still makes sense of the behavior in deliberative terms. Many real social actors seem to be what might be called “conditional altruists”: they are willing to contribute some effort or personal resource to a collective project if they have grounds for confidence that a reasonable number of other members of the group will contribute as well. (Jon Elster explores the idea in The Cement of Society: A Study of Social Order.) And it isn’t that these actors make a calculation error along the lines of the fallacy of unanimity — “I want the benefits of the collective action, and it won’t occur without me.” Instead, they seem to reason in ways that would please a communitarian: “I’m a member of this group, I believe that other members will do what’s good for the group, and I’m willing to do my part as well.” This is a fairly explicit willingness to sacrifice the benefits of free riding. But the conditional part is important as well: the conditional altruist is calculating about the likelihood of success in the collective undertaking, and is willing to participate only if he or she judges that enough other people will contribute as well to make the undertaking feasible.
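A toy threshold model can illustrate how a population of conditional altruists behaves (this is a sketch in the spirit of the description above, not Elster's own formalization; the thresholds are invented). Each actor joins if the share of participants he or she expects meets a personal threshold, and expectations are iterated until participation stabilizes:

```python
def stable_participation(thresholds, initial_share=0.0, max_rounds=100):
    """Iterate expectations until the participation share stabilizes.

    thresholds -- each actor's minimum expected share of participants
                  required before he or she will contribute.
    initial_share -- the share of participation actors initially expect.
    """
    share = initial_share
    for _ in range(max_rounds):
        joiners = sum(1 for t in thresholds if share >= t)
        new_share = joiners / len(thresholds)
        if new_share == share:
            break
        share = new_share
    return share

# A few unconditional contributors (threshold 0.0) can seed a cascade:
stable_participation([0.0, 0.1, 0.2, 0.3, 0.9])        # -> 0.8
# Identical conditional altruists with no assurance never start:
stable_participation([0.5] * 5)                         # -> 0.0
# The same group, given assurance that half will join, all participate:
stable_participation([0.5] * 5, initial_share=0.5)      # -> 1.0
```

The second and third cases show the same actors producing opposite outcomes depending only on their initial assurance about one another, which is the crux of the conditional-altruist account.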
Conditional altruism thus attributes a common moral psychology to social actors, which we might refer to as the “fairness factor.” Individuals are willing to factor collective goods into their calculation of the costs and benefits of action, and they have some degree of motivation to act in accordance with a proposed collective action that would benefit them even if they could evade participation. They are disposed to act fairly: “If I benefit from the action, I should take my fair share of creating the benefit.” (Allan Gibbard’s Wise Choices, Apt Feelings: A Theory of Normative Judgment offers an effort to bring together the evolutionary history of the species with a philosopher’s analysis of moral reasoning.)
If fairness or conditional altruism are real components of human agency (for all or many human beings), then we can identify a few factors that are likely to promote cooperation and collective action. Measures that increase the actor’s assurance about the behavior of others will have the effect of eliciting higher levels of collective action. And it is possible to think of quite a few social circumstances that have this effect. A shared history of success in collective action is clearly relevant to current actors’ level of assurance about future cooperation. Shared history can be made more powerful in the present through the currency of songs, stories, and performances that highlight earlier successes (Michael Taylor, Community, Anarchy and Liberty). Researchers who study peasant village communities emphasize the importance of face-to-face relations among villagers; individuals know a good deal about the past behavior of their neighbors, which can provide a better basis for predicting their future cooperative behavior (Robert Netting, Smallholders, Householders: Farm Families and the Ecology of Intensive, Sustainable Agriculture). And members of small, stable communities also know that they will need to interact with each other long into the future — increasing the cost of non-cooperation today (Robert Axelrod, The Evolution of Cooperation: Revised Edition).
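The strategic structure at issue here is the assurance game (the stag hunt): mutual cooperation is the best outcome for everyone, but defection is the safer choice when an actor doubts the other side. A minimal sketch, with purely illustrative payoff numbers:

```python
# Assurance (stag-hunt) payoffs: (my move, other's move) -> my payoff.
# Numbers are illustrative; what matters is their ordering:
# mutual cooperation beats everything, but cooperating alone is worst.
payoff = {
    ("C", "C"): 4, ("C", "D"): 0,
    ("D", "C"): 3, ("D", "D"): 2,
}

def best_reply(other_move):
    """The move that maximizes my payoff given the other's move."""
    return max(("C", "D"), key=lambda m: payoff[(m, other_move)])

# Two pure equilibria: mutual cooperation and mutual defection.
assert best_reply("C") == "C"   # if assured of cooperation, cooperate
assert best_reply("D") == "D"   # if expecting defection, defect
```

Unlike the prisoner's dilemma, defection is not dominant here: which equilibrium obtains depends entirely on each actor's expectation about the other, which is why assurance-building circumstances like shared history and repeated interaction matter so much.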
What is particularly interesting about this topic is the fact that actual social outcomes show a wide range of variations in the degree of self-interest and fairness that seems to be present. Some groups seem to act more like Mancur Olson egoists; others (like Welsh coal miners) seem to act as though they have a very high “solidarity and fairness” quotient. So no single answer to the question of collective action seems to work: “people are rational egoists,” “people are altruists,” or “people are conditional altruists.” Rather, a given opportunity for collective action seems to display a mix of all these styles of reasoning. These variations could be the result of several independent factors: differences in the formation of individuals’ moral psychology (emphasizing individualism or community from infancy); differences in current institutional settings (arrangements that make future interactions seem more likely to each participant); even potentially differences in personality or the genetic basis of decision-making across individuals.
I’m sure that there is work in experimental economics that probes the boundaries of this feature of practical reasoning. Ordinary social experience informs us that people have different levels of willingness to undertake sacrifice for a group’s projects. And having a more nuanced empirical understanding of how people behave in the settings of potential cooperation and collective action would help refine our understanding of the thought-processes and styles of reasoning through which individuals decide what to do. Here is an interesting paper by Ernst Fehr and Klaus Schmidt titled “The Economics of Fairness, Reciprocity and Altruism – Experimental Evidence and New Theories.”
One Reply to “Assurance game”
It seems to me that you're missing the literature on 'parochial altruism' (for example, this paper by Bowles and Choi, or this paper by Bernhard, Fischbacher and Fehr) and related work on ingroup and outgroup preferences and that kind of conditional cooperation. The flip side of which is a rather dangerous predilection for violence toward, or non-cooperation with, outsiders.