One of Adam Smith’s contributions to the study of philosophical ethics is his book, The Theory of Moral Sentiments. It is an interesting work, one part descriptive moral psychology, one part theory of the emotions. Here is the opening paragraph:
How selfish soever man may be supposed, there are evidently some principles in his nature, which interest him in the fortune of others, and render their happiness necessary to him, though he derives nothing from it except the pleasure of seeing it. Of this kind is pity or compassion, the emotion which we feel for the misery of others, when we either see it, or are made to conceive it in a very lively manner. That we often derive sorrow from the sorrow of others, is a matter of fact too obvious to require any instances to prove it; for this sentiment, like all the other original passions of human nature, is by no means confined to the virtuous and humane, though they perhaps may feel it with the most exquisite sensibility. The greatest ruffian, the most hardened violator of the laws of society, is not altogether without it.
So Smith asserts as a matter of empirical fact that there are common moral emotions and feelings — sympathy, pity, compassion — that underlie human social and moral behavior. And the most basic kinds of morally motivated behavior — altruism in particular — are explained by the workings of these natural emotions of empathy with other human beings. So Smith posed a fundamental question: is there an innate human moral psychology, beyond the reach of training and teaching, that accounts for our willingness to give to others and sometimes sacrifice important interests for the good of others? Why do firemen rush into the highly dangerous environment of a large fire in order to rescue the people inside?
Now fast-forward to the post-Darwinian world; look at the human organism from the point of view of the study of primate behavior; and ask this key question: Is there an evolutionary basis for social behaviors? Are there emotions supporting cooperation that were selected for through our evolutionary history? Is a moral capacity hardwired?
Philosophers have treated this question in the past. Allan Gibbard’s Wise Choices, Apt Feelings: A Theory of Normative Judgment is a particularly good example. Here is how Gibbard describes the situation.
Consider now human beings evolving in hunting-gathering societies. We could expect them to face an abundance of human bargaining situations, involving mutual aid, personal property, mates, territory, use of housing, and the like. Human bargaining situations tend to be evolutionary bargaining situations. Human goals tend toward biological fitness, toward reproduction. The point is not, of course, that a person’s sole goal is to maximize his reproduction; few if any people have that as a goal at all. Rather, the point concerns propensities to develop goals. Those propensities that conferred greatest fitness were selected; hence in a hunting-gathering society, people tended to want the various things it was fitness-enhancing for them to want. Conditions of primitive human life must have required intricate coordination–both of the simple cooperative kinds involved, say, in meeting a person, and of the kind required for bargaining problems to yield mutually beneficial outcomes. Propensities well coordinated with the propensities of others would have been fitness-enhancing, and so we may view a vast array of human propensities as coordinating devices. Our emotional propensities, I suggest, are largely the results of these selection pressures, and so are our normative capacities. (67)
One of Gibbard’s key points is an analytical one. He argues against the idea of there being specific moral content, ethical principles, or moral emotions that are embodied in the central nervous system (CNS) as a result of variation and selection. Instead, he argues for there being a hardwired set of more abstract capacities that have CNS reality and selection advantage: the ability to learn a norm and to act in accordance with it. (Richard Joyce makes a similar point: “Evolutionary psychology does not claim that observable human behavior is adaptive, but rather that it is produced by psychological mechanisms that are adaptations. The output of an adaptation need not be adaptive” (5).)
This is the part that seems counter-intuitive from a simple Darwinian point of view. Wouldn’t an organism possessing a genetically determined disposition to act contrary to its material interests almost necessarily have less reproductive success? So shouldn’t such a gene quickly lose out to a more opportunistic alternative? Gibbard considers the evolutionary arguments surrounding the topic of altruism (including Richard Dawkins’s The Selfish Gene), and concludes — not necessarily. It is possible to mount an evolutionary argument that establishes the fitness-enhancing characteristics of some specific kinds of altruistic behavior.
So what does the current research on this topic add to what we already knew? And, can we draw any interesting connections back to the venerable Smith?
In fact, there seems to be a new surge of interest in the topic. A number of philosophers and psychologists are now interested in treating moral psychology as an empirical question, and they are interested in working back to the evolutionary environment in which these human capacities emerged. (For example, Richard Joyce, The Evolution of Morality and Walter Sinnott-Armstrong, ed., Moral Psychology, Volume 1: The Evolution of Morality: Adaptations and Innateness.) Particularly interesting is research by Michael Tomasello and his collaborators. Tomasello is the co-director of the Max Planck Institute for Evolutionary Anthropology. He argues that human beings are hardwired for cooperation, empathy, and social intentionality in a very interesting recent book, Why We Cooperate. A great deal of his research has to do with experiments and observations of human children (9-24 months) and of young non-human primates. He finds, essentially, that infants and children display a range of behaviors that seem to reveal a natural readiness for altruism, sharing, coordination, and eventually norm-following. “I only propose that the kinds of collaborative activities in which young children today engage are the natural cradle of social norms of the cooperative variety. This is because they contain the seeds of the two key ingredients” (89-90). He presents a range of experimental data supporting these ideas:
- Human infants have a precultural disposition to be helpful and empathetic (12-14 months)
- Human toddlers adjust their cooperative and normative behavior in light of the behavior of others: they are generous to the generous and not to the ungenerous.
- Human infants and toddlers have a precultural disposition to absorb and enforce norms.
- The emotions of guilt and shame appear to be hardwired supports for conformity to norms.
- Infants appear to take a “we” intentional stance without learning. They are able to quickly figure out what another agent is trying to do.
- Chimps differ from human infants in virtually all of these respects.
Here is a particularly interesting piece of evidence that Tomasello offers in support of the idea that human evolution was shaped by selection pressures that favored social coordination: the whites of the human eye. Almost all non-human primates have eyes that are primarily dark, whereas human eyes feature a large and conspicuous circle of white (the sclera). The whites of the eyes permit an observer to determine what another individual is looking at — allowing human individuals to achieve a substantially greater degree of shared attention and coordination. “My team has argued that advertising my eye direction for all to see could only have evolved in a cooperative social environment in which others were not likely to exploit it to my detriment” (76).
So does this recent work on the evolutionary basis of moral emotions have anything to do with Smith and the moral sentiments? What the two bodies of thought have in common is the idea that there is a psychological foundation to moral behavior, cooperation, altruism, and helping. Pure maximizing rationality doesn’t get you to “helping”; rather, there needs to be some psychological impulse to improve things for the other person. Where evolutionary psychology differs from Smith is precisely in the nature of the explanation that is offered for this moral psychology; we have the advantage of having a pretty good idea of how natural selection works on biological traits, and we are therefore in a better position than Smith was to explain why human beings possess moral sentiments. What we cannot yet answer is how these moral sentiments are embodied in the human organism: what the mechanism is at the level of the central nervous system or the cognitive system.
(It is interesting to contrast this line of argument with that of Thomas Nagel in The Possibility of Altruism. Nagel argues against the moral psychology of Hume — very similar to that of Smith — and argues that altruism is actually a feature of rationality. We behave altruistically, fundamentally, because we have a rational representation of the reality of the external world and of other persons; and to recognize the reality of another person is immediately to have a reason to help the other person. So no “motor” of moral emotion is needed in order to explain altruistic behavior. On this approach, we don’t need to postulate moral sentiments to explain moral behavior; all we need is a rich conception of practical rationality.)