Deficiencies of practical rationality in organizations

Suppose we are willing to take seriously the idea that organizations possess a kind of intentionality — beliefs, goals, and purposive actions — and suppose that we believe that the microfoundations of these quasi-intentional states depend on the workings of individual purposive actors within specific sets of relations, incentives, and practices. How does the resulting form of “bureaucratic intelligence” compare with human thought and action?

There is a major set of differences between organizational “intelligence” and human intelligence that turns on the unity of human action compared to the fundamental disunity of organizational action. An individual human being gathers a set of beliefs about a situation, reflects on a range of possible actions, and chooses a line of action designed to bring about his/her goals. An organization is disjointed in each of these activities. First, the belief-setting part of an organization usually consists of multiple separate processes culminating in an amalgamated set of beliefs or representations. And this amalgamation often reflects deep differences in perspective and method across various sub-departments. (Consider the inputs into an assessment of an international crisis, incorporating judgments from intelligence, military, and trade specialists.)

Second, individual intentionality possesses a substantial degree of practical autonomy. The individual assesses and adopts the set of beliefs that seem best to him or her in current circumstances. The organization, in its belief acquisition, is subject to conflicting interests, both internal and external, that bias the belief set in one direction or another. (This is a central theme in the work of scholars of science policy like Naomi Oreskes.) The organization is not autonomous in its belief-formation processes.

Third, an individual’s actions have a reasonable level of consistency and coherence over time. The individual seeks to avoid being self-defeating by doing X and Y while knowing that X undercuts Y. An organization is entirely capable of pursuing a suite of actions which embody exactly this kind of inconsistency, precisely because the actions chosen are the result of multiple disagreeing sub-agencies and officers.

Fourth, we have some reason to expect a degree of stability in the goals and values that underlie actions by an individual. But organizations, exactly because their behavior is a joint product of sub-agents with conflicting plans and goals, are entirely capable of rapid change of goals and values. Deepening this instability are the fluctuating powers and interests of external stakeholders who apply pressure for different values and goals over time.

Finally, human thinkers are potentially epistemic agents — capable, at least in principle, of following disciplines of analysis, reasoning, and evidence in their practical engagement with the world. By contrast, because of the influence of interests, both internal and external, organizations are perpetually subject to the distortion of belief, intention, and implementation by actors who have an interest in the outcome of the project. And organizations have little ability to apply rational standards to their processes of belief formation, intention formation, and implementation. Organizational intentionality lacks overriding rational control.

Consider more briefly the topic of action. Human actors suffer various deficiencies of performance when it comes to purposive action, including weakness of the will and self-deception. But organizations are altogether less capable of effectively mounting the steps needed to fully implement a plan or a complicated policy or action. This is because of the looseness of the linkages that exist between executive and agent within an organization, the perennial possibility of principal-agent problems, and the potential interference with performance created by interested parties outside the organization.

This line of thought suggests that organizations lack “unity of apperception and intention”. There are multiple levels and zones of intention formation, and much of this plurality persists throughout real processes of organizational thinking. This disunity affects belief, intention, and action alike. Organizations are not univocal at any point. Belief formation, intention formation, and action remain fragmented and multivocal.

These observations are somewhat parallel to the paradoxes of social choice and the voting systems governing a social choice function. Kenneth Arrow demonstrated that it is impossible to design a voting system that guarantees consistent group choice by a set of individually consistent voters (given a few reasonable conditions on the aggregation rule). The analogy here is the idea that no organizational design can guarantee a high degree of consistency and rationality in large organizational decision processes at any stage of quasi-intentionality, including belief acquisition, policy formulation, and policy implementation.
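The flavor of Arrow's result can be seen in the classic Condorcet cycle. Here is a minimal sketch; the three voter rankings are illustrative, chosen only to produce the paradox:

```python
# Three voters with individually consistent (transitive) rankings,
# listed best-to-worst over options A, B, C.
voters = [
    ["A", "B", "C"],
    ["B", "C", "A"],
    ["C", "A", "B"],
]

def majority_prefers(x, y):
    """True if a majority of voters rank x above y."""
    wins = sum(1 for v in voters if v.index(x) < v.index(y))
    return wins > len(voters) / 2

# Pairwise majority voting yields a cycle: A beats B, B beats C, C beats A.
for x, y in [("A", "B"), ("B", "C"), ("C", "A")]:
    print(x, "beats", y, ":", majority_prefers(x, y))  # True in each case
```

Each voter is internally consistent, yet the group preference produced by majority voting is intransitive; a small-scale analogue of the organizational disunity described above.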

What is a norm?

The role of norms in social behavior is a key question for sociology. Is a norm a sociological reality? And do individuals behave in conformance to norms?

We can offer mundane examples of social norms deriving from a wide range of social situations: norms of politeness, norms of fairness, norms of appropriate dress, norms of behavior in business meetings, norms of gendered behavior, and norms of body language and tone of voice in police work. In each case we suppose that (a) there is a publicly recognized norm governing the specified conduct within a specific social group, (b) the norm influences individual behavior in some way, and (c) sanctions and internal motivations come into the explanation of conformant behavior. Norm-breakers may come in for rough treatment by the people around them — which may induce them to honor the norm in the future. And norm-conformers may do so because they have internalized a set of inhibitions about the proscribed behavior.

Here are a number of key empirical and conceptual questions that are raised by norms.

  • What is a norm?
  • How are social norms embodied in behavior and structure?
  • How do individuals internalize norms?
  • How do norms influence behavior?
  • Why do individuals conform their behavior to a set of local norms?
  • What factors stabilize a norm system over time?
  • What social factors influence change in a norm system?

Before we can go much further into this issue, we need to have a fairly clear idea of what we mean by a norm. We might define a norm as —

a socially embodied and individually perceived imperative that such-and-so an action must be performed in such-and-so a fashion.

We can then separate out several other types of questions: First, what induces individuals to conform to the imperative? How do individuals come to have the psychological dispositions to conform to the norm? Second, how is the norm embodied in social relations and behavior? And third, what are the social mechanisms or processes that created the imperative within the given social group? What mechanisms serve to sustain it over time?

To the first question, there seem to be only three possible answers — and each is in fact socially and psychologically possible. The imperative may be internalized into the motivational space of the individual, so he/she chooses to act according to the imperative (or is habituated to acting in such a way). There may be an effective and well-known system of sanctions that attach to violations of the norms, so the individual has an incentive to comply. These sanctions may be formal or informal. The sanction may be as benign as being laughed at for wearing a Hawaiian shirt to a black-tie ball (I’ll never do that again!), or as severe as being beaten for seeming gay in a cowboy bar. Or, third, there may be benefits from conformance that make conformance a choice that is in the actor’s rational self-interest. (Every time one demonstrates that he/she can choose the right fork for dessert, the likelihood of being invited to another formal dinner increases.) Each of these would make sense of the fact that an individual conforms his/her behavior to the requirements of a norm, and would help to answer the question: why do individuals conform to norms?
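The three routes to conformance can be put schematically as a single decision rule. This is only a toy model; the function and all of its parameters are hypothetical illustrations, not a claim about actual psychology:

```python
# Toy model of the three mechanisms of norm conformance described above.
# All parameters are hypothetical.

def conforms(internalized, p_sanction, sanction_cost,
             benefit_of_conformance, gain_from_violation):
    """An agent conforms if any of the three mechanisms tips the balance."""
    if internalized:
        return True  # (1) the norm is part of the agent's own motivations
    expected_cost = p_sanction * sanction_cost           # (2) sanctions
    net_violation_payoff = gain_from_violation - expected_cost
    # (3) positive benefits of conformance enter the same comparison
    return benefit_of_conformance >= net_violation_payoff

# Sanction route: likely, costly punishment outweighs a small gain.
print(conforms(False, 0.8, 50.0, 0.0, 5.0))   # True
# No mechanism engaged: violation pays.
print(conforms(False, 0.01, 50.0, 0.0, 5.0))  # False
```

The point of the sketch is only that the three mechanisms are distinct inputs that can each independently produce the same observed behavior, which is why observed conformance alone does not tell us which mechanism is at work.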

The questions about the social embodiment of a norm are the most difficult. Does the embodiment of a given norm consist simply in the fact that a certain percentage of people in fact behave in accordance with the rule — for whatever reason? Does the norm exist in virtue of the fact that people consciously champion the norm and impose sanctions on violators? Might we imagine that human beings are normative animals and absorb normative systems in the way that we absorb grammatical systems — by observing and inferring about the behavior of others?

As for the third cluster of questions about genesis and persistence, there is a range of possibilities here as well. The system may have been designed by one or more deliberate actors. It may have emerged through a fairly random process that is guided by positive social feedback of some sort. It may be the resultant of multiple groups advocating for one set of norms or another to govern a given situation of conflict and/or cooperation. And, conceivably, it may be the result of something analogous to natural selection across small groups: the groups with a more efficient set of norms may out-perform competing groups.

For example, how should we explain the emergence and persistence of a particular set of norms of marriage and reproduction in a given society? Is it causally relevant to observe that “this set of norms results in a rate of fertility that matches the rate of growth of output”? How would this functionally desirable fact play a causal role in the emergence and persistence of this set of norms? Is there any sort of feedback process that we can hypothesize between “norms at time T”, “material results of behavior governed by these norms at T+1”, and “persistence/change of norms at time T+2”? The business practices of a company are consciously adjusted over time to bring about better overall performance; but what about spontaneously occurring sets of social norms? How do these change over time? Do individuals or groups have the ability to deliberately modify the norms that govern their everyday activities?

It seems inescapable that norms of behavior exist in a society and that individuals adjust their behavior out of regard for relevant norms. The microfoundations of how this works are obscure, however, in that we don’t really have good answers to the parallel questions: how do individuals internalize norms? How do informal practices of norm enforcement work? And what social-causal factors play a role in the emergence, persistence, and change of a system of norms at a given time?

(The photo of the ice rink at Rockefeller Center is intended to evoke several observations about social norms: the facts that the skaters are largely moving in a counter-clockwise direction, no one is carrying a hockey stick, and there are many children present all reflect one aspect or another of the norms governing skating in public arenas.)

Habits, plans, and improvisation

How does thought figure in our ordinary actions and plans? To what extent are our routine actions the result of deliberation and planning, and to what extent do we function on auto-pilot and habit?

It is clear that much of one’s daily activity is habitual: routine actions and social responses that reflect little internal deliberation and choice. Habitual behavior comes into all aspects of life — daily morning routines (exercise, shower, choose a tie, make a fast breakfast), routine work activities (turn on the computer, check the email account, riffle through the paperwork in the inbox, review the morning’s business reports), and routine social contacts (greet the co-worker in the parking lot, gossip about the local news with the admin assistant, laugh about a lopsided weekend football score with a faculty colleague).

Particularly interesting is the last category of behavior — the fairly specific modes of interaction we’ve learned in response to typical social situations. What do you do if you bump into a person with your shoulder at a buffet line? How do you respond to a person who greets you familiarly but whom you don’t know? How do you interact with your boss, your peer, and your subordinate? How do you queue with other passengers when exiting a crowded airplane? When do you make a joke in a small group, and when is it better to keep quiet? In these and hundreds of other stereotyped social encounters we have learned stylized ways of behaving, so when the occasion arises we slip into habitual gear. And it seems certain that there are highly patterned differences in the repertoires of social habits associated with different cultures and sub-cultures — how to greet, how to handle minor conflicts, how to comport oneself. These repertoires of habits and stereotyped behavioral scenarios are an important component of the “culture” we wear.

It is interesting to reflect a bit on how habits are socially and psychologically embodied, and to consider whether this is an avenue through which social differences among groups are maintained. (This topic parallels earlier postings on local cultures and practices.)

What is “habitual” about these forms of behavior is the idea that they seem to be learned patterns of response, involving little reflection or deliberation. They become small “programs” of behavior that we have internalized through past experience; and they are invoked by the shuffling of the cards of ordinary experience. It is as if the “action executive” of the mind consults a library of routines and deploys a relevant series of behaviors in the context of a particular social environment.

But of course, not all action is habitual. The opposite end of the spectrum includes both deliberation and improvisation. These categories themselves are different from each other. Deliberation involves explicit consideration of one’s goals, the opportunities that are currently available within the environment of choice, and the pros and cons of the various choices. Deliberation results in deliberate, planned choice. This represents the category of agency that is partially captured by rational choice theory: deliberate analysis of means and ends, and a calculating choice among possible actions. Planning is an extended version of this process, in which the actor attempts to orchestrate a series of actions and responses in such a way as to bring about a long-term goal.

Improvisation differs from both habit and deliberation. Improvisation is a creative response to a current and changing situation. It involves intelligent, fluid adaptation to the current situation, and seems more intuitive than analytical. The skilled basketball player displays improvisational intelligence as he changes his dribble, stutter-steps around a defender, switches hands, and passes to a teammate streaking under the basket for the score. At each moment there are shifting opportunities that appear and disappear as defenders lose their man, teammates slip into view, and the shot clock winds down. This series of actions is unplanned but non-habitual, and it displays an important aspect of situational intelligence. Bourdieu captures a lot of this aspect of intelligent behavior in his concept of habitus in Outline of a Theory of Practice.

Unintended consequences

International relations studies offer plentiful examples of the phenomenon of unintended consequences — for example, wars that break out unexpectedly because of actions taken by states to achieve their security, or financial crises that erupt because of steps taken to avert them. (The recent military escalations in Pakistan and India raise the specter of unintended consequences in the form of military conflict between the two states.) But technology development, city planning, and economic development policy all offer examples of the occurrence of unintended consequences deriving from complex plans as well.

Putting the concept schematically — an actor foresees an objective to be gained or an outcome to be avoided. The actor creates a plan of action designed to achieve the objective or avert the undesired outcome. The plan is based on a theory of the causal and social processes that govern the domain in question and the actions that other parties may take. The plan of action, however, also creates an unforeseen or unintended series of developments that lead to a result that is contrary to the actor’s original intentions.

It’s worth thinking about this concept a bit. An unintended consequence is different from simply an undesired outcome; a train wreck or a volcano is not an unintended consequence, but rather simply an unfortunate event. Rather, the concept fits into the framework of intention and purposive action. An unintended consequence is a result that came about because of deliberate actions and policies that were set in train at an earlier time — so an unintended consequence is the result of deliberate action. But the outcome is not one of the goals to which the plan or action was directed; it is “unintended”. In other words, analysis of the concept of unintended consequences fits into what we might call the “philosophy of complex action and planning.” (Unlikely as this sub-specialty of philosophy might sound, a good example of work in this field is Michael Bratman, Intention, Plans, and Practical Reason. Robert Merton wrote about the phenomenon of unintended consequences quite a bit, based on his analysis of the relationships between policy and social science knowledge, in Social Theory and Social Structure.)

But there is also an element of paradox in our normal uses of the concept of an unintended consequence — the suggestion that plans of action often contain elements that work out to defeat them. The very effort to bring about X creates a dynamic that frustrates the achievement of X. This is suggested by the phrase, the “law of unintended consequences.” (I think this is what Hegel refers to as the cunning of reason.)

There is an important parallel between unintended and unforeseen consequences, but they are not the same. A harmful outcome may have occurred precisely because it was unforeseen — it might have been easily averted if the planner had been aware of it as a possible consequence. An example might be the results of the inadvertent distribution of a contaminant in the packaging of a food product. But it is also possible that an undesired outcome is both unintended and fully foreseen. An example of this possibility is the decision of state legislators to raise the speed limit to 70 mph. Good and reliable safety statistics make it readily apparent that the accident rate will rise. Nonetheless the officials may reason that the increase in efficiency and convenience more than offsets the harm of the increase in the accident rate. In this case the harmful result is unintended but foreseen. (This is the kind of situation where cost-benefit analysis is brought to bear.)

Is it essential to the idea of unintended consequences that the outcome in question be harmful or undesirable? Or is the category of “beneficial unintended consequence” a coherent one? There does seem to be an implication that the unintended consequence is one that the actor would have avoided if possible, so a beneficial unintended consequence violates this implicature. But I suppose we could imagine a situation like this: a city planner sets out to design a park that will give teenagers a place to play safely, increase the “green” footprint of the city, and draw more families to the central city. Suppose the plan is implemented and each goal is achieved. But it is also observed that the rate of rat infestation in surrounding neighborhoods falls dramatically — because the park creates habitat for voracious rat predators. This is an unintended but beneficial consequence. And full knowledge of this dynamic would not lead the planner to revise the plan to remove this feature.

The category of “unintended but foreseen consequences” is easy to handle from the point of view of rational planning. The planner should design the plan so as to minimize avoidable bad consequences; then do a cost-benefit analysis to assess whether the value of the intended consequences outweighs the harms associated with the unintended consequences.
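As a toy version of the speed-limit calculation above — all figures are hypothetical, chosen only to illustrate how a foreseen harm is priced into the comparison:

```python
# Toy cost-benefit sketch for the speed-limit example above.
# All figures are hypothetical.

time_saved_value = 120.0   # annual value of efficiency gains ($M)
extra_accidents = 150      # predicted additional accidents per year
cost_per_accident = 0.5    # average social cost per accident ($M)

foreseen_harm = extra_accidents * cost_per_accident   # 75.0 ($M)
net_benefit = time_saved_value - foreseen_harm        # 45.0 ($M)

# The harmful consequence is foreseen and priced in; the plan goes
# ahead only if the net benefit is positive.
print(f"foreseen harm: {foreseen_harm}, net benefit: {net_benefit}")
```

On these (made-up) numbers the plan goes forward despite the foreseen harm; the unforeseen-consequences problem discussed next is harder precisely because no such term can be entered into the calculation in advance.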

The category of consequences of a plan that are currently unforeseen is more difficult to handle from the point of view of rational decision-making. Good planning requires that the planner make energetic efforts to canvass the consequences the plan may give rise to. But of course it isn’t possible to discover all possible consequences of a line of action; so the possibility always exists that there will be persistent unforeseen negative consequences of the plan. The most we can ask, it would seem, is that the planner should exercise due diligence in exploring the most likely collateral consequences of the plan. And we might also want the planner to incorporate some sort of plan for “soft landings” in cases where unforeseen negative consequences do arise.

Finally, is there a “law of unintended consequences”, along the lines of something like this:

“No matter how careful one is in estimating the probable consequences of a line of action, there is a high likelihood that the action will produce harmful unanticipated consequences that negate the purpose of the action.”

No; this statement might be called “reverse teleology” or negative functionalism, and certainly goes further than empirical experience or logic would support. The problem with this statement is the inclusion of the modifier “high likelihood”. Rather, what we can say is this:

“No matter how careful one is in estimating the probable consequences of a line of action, there is the residual possibility that the action will produce harmful unanticipated consequences that negate the purpose of the action.”

And this statement amounts to a simple, prudent observation of theoretical modesty: we can’t know all the possible results of an action undertaken. Does the possibility that any plan may have unintended harmful consequences imply that we should not act? Certainly not; rather, it implies that we should be as ingenious as possible in trying to anticipate at least the most likely consequences of the contemplated actions. And it suggests the wisdom of action plans that make allowances for soft landings rather than catastrophic failures.

(Writers about the morality of war make quite a bit of the moral significance of consequences of action that are unintended but foreseen. Some ethicists refer to the principle of double effect, and assert that moral responsibility attaches differently to intended versus unintended-but-foreseen consequences. The principles of military necessity and proportionality come into the discussion at this point. There is an interesting back-and-forth about the doctrine of double effect in the theory of just war in relation to Gaza on Crooked Timber and Punditry.)


Trust

What is the role of trust in ordinary social workings? I would say that a fairly high level of trust is simply mandatory in any social group, from a family to a workplace to a full society. Lacking trust, each agent is forced into a kind of Hobbesian calculation about the behavior of those around him or her, watching for covert strategies in which the other is trying to take advantage of oneself. The cost of self-protection is impossibly high in a zero-trust society. Gated communities don’t help; we would need to have gated and solitary lives. Even our brothers and sisters, spouses, and offspring would have to be watched suspiciously. We would live like Howard Hughes at the end of his life.

To begin, what is trust? It is a condition of reliance on the statements, assurances, and basic good behavior of others. The status of commitments over time is essential to trust. We need to consider whether we can trust a neighbor who has promised to return a lawn mower — will he keep his promise? Can we trust the car park attendant not to take the iPod from the glove compartment? Can we trust the phone company not to add hidden fees to our bill, on the corporate calculation that they won’t be noticed by most consumers?

It is sort of a commonplace in moral philosophy that you can’t trust a pure egoist or an act utilitarian. The reason is simple: trust means reliance on the correct behavior of other agents even when there is an opportunity for gain in incorrect behavior and the probability of detection and sanctions is low. The egoist will reason on the basis of the advantage he/she anticipates and will discount the low likelihood of sanction. But likewise, the act utilitarian will add up all the utilities created by “correct action” and “incorrect action”, and will be bound to choose the action with the greater utility. The fact of an existing promise or other obligation will not change the calculation. So the act utilitarian cannot be trusted to honor his promises and obligations, no matter what.

Standards of “correct behavior” are difficult to articulate precisely, but here’s a start: telling the truth, keeping promises and assurances when they come into play, acting according to generally shared rules of professional and social ethics, and respecting the rights of others. We sometimes describe people and organizations whose behavior conforms to these sorts of characteristics as possessing “integrity”.

In general, agents whose behavior is governed solely by calculation of consequences cannot be trusted, since occasions requiring trust are precisely those in which we need to rely on others to do the right thing in spite of consequences that would favor doing the wrong thing. (For example, taking the iPod in circumstances where there are dozens of attendants and the theft cannot be attributed to one person; keeping the lawnmower if the owner is in a state of rapid-onset dementia; adding the phony charges in a business environment where it can be predicted with confidence that only 5% of customers will notice and the penalties are trivial.)

So there are two basic models of action that people can choose: consequentialist and “constrained by obligations” (deontological). The first approach is opportunistic and myopic; the other reflects integrity and the validity of long-term obligations.
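The contrast between the two models can be put in schematic expected-utility terms. A minimal sketch, with hypothetical payoffs (the iPod case from above):

```python
# Schematic contrast between a pure consequentialist chooser and an
# obligation-constrained (deontological) chooser. Payoffs hypothetical.

gain_from_defecting = 10.0   # e.g., value of the stolen iPod
penalty_if_caught = 100.0
p_detection = 0.05           # occasions requiring trust: detection unlikely

def consequentialist_choice():
    # Weighs only expected outcomes; an existing promise adds no term.
    ev_defect = gain_from_defecting - p_detection * penalty_if_caught
    ev_comply = 0.0
    return "defect" if ev_defect > ev_comply else "comply"

def deontological_choice(has_obligation):
    # An obligation acts as a constraint, screening off the calculation.
    return "comply" if has_obligation else consequentialist_choice()

print(consequentialist_choice())   # defect: 10 - 0.05 * 100 = 5 > 0
print(deontological_choice(True))  # comply, regardless of the payoffs
```

The structural point is that for the consequentialist the promise never enters the arithmetic, while for the constrained agent it pre-empts the arithmetic altogether; which is why only the latter can be trusted on exactly those occasions where defection pays.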

But here we have a problem. The most ordinary social transactions become almost impossible in a no-trust environment. If I can’t trust my bank to hold my savings honestly, or my employer to keep its commitments about my retirement accounts, or the passenger on the seat next to me on a long airplane flight to not go through my briefcase if I drift off to sleep — then I am forced into a condition of exhausting, sleepless vigilance. And, of course, we do generally trust in these circumstances.

But it is an interesting problem for research to consider whether different societies and groups elicit and sustain different levels of trust in ordinary life, and what the institutional factors are that affect this outcome. Is there a higher level of trust in Bloomington, Illinois than in Chicago or Houston? Is trust a feature of the learning environment through which people gain their social psychologies? Are there institutional features that encourage or discourage dispositions towards trust? And what are the compensating mechanisms through which social interactions proceed in a low-trust environment? Is that where “trust but verify” comes in?

Bad behavior

How do we explain the occurrence of anti-social behavior that we witness in everyday life? For that matter, how do we explain the more common occurrence of good behavior?

There are numerous extreme examples of anti-social behavior. But more prosaic examples are more interesting.

  • A passenger on a jet airliner becomes enraged at being denied additional alcohol; screams at and punches flight attendants; attempts to open the hatch at 20,000 feet.
  • A couple continue to talk loudly on their cellphones during a black-tie dinner, interfering with the keynote speaker’s presentation. When asked to be quiet, they say indignantly, “this is important.”
  • A business traveler marches to the front of the security line and squeezes in front, saying, “I’m in a rush.”
  • A parent enters a crowded elevator with a three-year-old child and stands by as the child presses all 15 buttons.

Most people are “polite”. Most people treat others with consideration and respect. Most recognize the limits imposed on their behavior by the needs, wants and rights of others. But some do not — they behave badly.

I’m mostly interested here in the minor forms of bad behavior — disturbing or endangering others, confronting others with aggressively rude behavior, taking more than a reasonable amount of “space” in public settings. Behaving boorishly is what I’m talking about — noisy, intrusive, rude, and self-centered actions that impose on others or that greatly privilege one’s own immediate wants. This is the kind of behavior that once was attributed to American tourists, though today it seems to be the monopoly of no particular nationality. (I’ve just been on vacation, so I’ve been exposed to a lot of it.)

So now to hypotheses. Perhaps people behave badly because —

  • They don’t see how their behavior affects other people.
  • They haven’t internalized the norms defining appropriate behavior in public.
  • They reason that the norms don’t apply to them in these circumstances.
  • They overvalue their own importance in a social setting. “My needs are more important than yours.”
  • They think “I deserve this — I’ve worked for it and these other people can take it or leave it.”

What these hypotheses amount to is either a failure to recognize the nature of one’s behavior in the circumstances, a failure to have adequately internalized the relevant social norms of behavior, an inability to recognize the legitimate and normal wants of others, or combinations of all these.

This subject is relevant to “understandingsociety” because it fundamentally has to do with social behavior, norms, and the cognitive-practical frameworks through which people generate their actions. In order to understand this behavior we need to know how people understand their own presence within a social setting. We need to know how they construct an ongoing representation along these lines: “What’s going on here? What’s my role in this social encounter? What’s expected of me? How much entitlement do I have to shape the encounter, versus the others present?” And we need to know how important conformance to local norms is to them. The oilman talking too loudly in the dining room at the Paris Ritz-Carlton may not know that local standards call for more decorous conversation; he may be thinking he’s in his own private club back in Houston — or he may just not care about the standards and the peace and quiet of the other guests.

Seen properly, then, this is an occasion for verstehen — interpretation of the puzzling actions of others in terms of an extended hypothesis about the states of mind and motive from which the action emanated and “makes sense”. And there is a lot of social cognition — or failures of cognition — that goes into bad behavior.

What is a “moral intuition”?

We have all had this experience: we hear of a complicated social or personal event, and we think inwardly, “that’s wrong!” A co-worker tells us an embarrassing private story about another co-worker; we hear on the news that the number of children in poverty has increased; we read about a mining company that has dumped toxic chemicals into fishing rivers for years. And we have a moral intuition about what we hear — not only a quick judgment about its goodness or badness, but also a sketch of reasons: “She shouldn’t violate Frank’s privacy that way”; “so much suffering of the innocent is awful”; “how can a company have such disregard for the people whose lives depend on those rivers.” In other words, we have intuitions about complex situations that evidently rest upon some kind of reasoning — but we haven’t deliberated about the case.

What is the nature of such an intuition, and what kind of cognition does it represent?

As I’ve sketched it, these “intuitions” are cognitively complex — not just an inward “ugh!”, but a sketchy representation of facts, assumptions, and relevant principles or rules. Our thought processes have somehow organized the description of the event into a story of sorts, along with some collateral judgments or principles about how people and institutions ought to behave. In the moment of intuition, we are also engaged in judgment; and judgment involves something like reasoning and analysis. And yet the intuition itself arrives in a moment — not as a developing piece of analysis and deliberation, but as an apparently instantaneous act of moral perception.

If this description is phenomenologically correct, then moral intuitions are a feature of human judgment that involves complexity in much the way that an experienced investigator sizes up an auto accident — a quick cognition of the likely speeds and directions of the vehicles, the evasive action that appears to have occurred (skid marks), and so on. Following Kant, we might call this an act of “apperception”, happening below the level of consciousness but bringing to bear quite a bit of analysis, knowledge, and principle in the construction of the resultant perception. And we might refer to an individual’s overall set of interpretive frameworks as his or her “moral sensibility”. Our moral sensibility provides us with a set of framing possibilities within which we can begin to understand and represent the complicated human situations we encounter.

Now we can give a bit more content to the roles that moral analysis and principle play in this story. We are more or less forced to hypothesize that the person possesses a mental process of moral and social cognition that assembles a representation of a situation, incorporating a background set of principles or interpretive rules as well as a set of facts about the case, and that eventuates in a morally tagged picture or narrative. The person can dig down into some of the underlying architecture of this picture if pressed; this accounts for the fact that we can give some analysis or explanation of our moral intuitions when asked, “why do you think this situation is wrong?” And the principles or rules that play an evaluative role are themselves learned through some concrete process of social development. (Though it is interesting to consider whether some component of social cognition might be hard-wired through our evolutionary history; see Alan Gibbard, Wise Choices, Apt Feelings: A Theory of Normative Judgment.)

So one’s moral intuitions are not grounded in something like “direct apprehension of moral facts”; rather, they are the result of a complex, conditioned, and fallible process of sub-conscious reconstruction of the circumstances of a case. And for this reason we might suspect that there will be significant differences across individuals and across cultures in the contents of people’s moral intuitions about cases.

(It is worthwhile to contrast the idea of a moral intuition with John Rawls’s idea of a “considered judgment”. Rawls’s idea captures the conception of a full, deliberative consideration of a case in detail, considering all the relevant facts and principles, the fit between a given judgment and many other judgments we make, and a host of other constraints of coherence. This picture is one of full, transparent moral reasoning and deliberation. The account of moral intuition just provided is non-deliberative but not for that reason non-rational.)

(This treatment of moral intuitions converges somewhat with Malcolm Gladwell’s Blink: The Power of Thinking Without Thinking.)
