The mind of government

We often speak of government as if it has intentions, beliefs, fears, plans, and phobias. This sounds a lot like a mind. But this impression is fundamentally misleading. “Government” is not a conscious entity with a unified apperception of the world and its own intentions. So it is worth teasing out the ways in which government nonetheless arrives at “beliefs”, “intentions”, and “decisions”.

Let’s first address the question of the mythical unity of government. In brief, government is not one unified thing. Rather, it is an extended network of offices, bureaus, departments, analysts, decision-makers, and authority structures, each of which has its own reticulated internal structure.

This has an important consequence. Instead of asking “what is the policy of the United States government towards Africa?”, we are driven to ask subordinate questions: what are the policies towards Africa of the State Department, the Department of Defense, the Department of Commerce, the Central Intelligence Agency, or the Agency for International Development? And for each of these departments we are forced to recognize that each is itself a large bureaucracy, with sub-units that have chosen or adapted their own working policy objectives and priorities. There are chief executives at a range of levels — President of the United States, Secretary of State, Secretary of Defense, Director of CIA — and each often has the aspiration of directing his or her organization as a tightly unified and purposive unit. But it is perfectly plain that the behavior of functional units within agencies is only loosely controlled by the will of the executive. This does not mean that executives have no control over the activities and priorities of subordinate units. But it does reflect a simple and unavoidable fact about large organizations. An organization is more like a slime mold than it is like a control algorithm in a factory.

This said, organizational units at all levels arrive at something analogous to beliefs (assessments of fact and probable future outcomes), assessments of priorities and their interactions, plans, and decisions (actions to take in the near and intermediate future). And governments make decisions at the highest level (leave the EU, raise taxes on fuel, prohibit immigration from certain countries, …). How does the analytical and factual part of this process proceed? And how does the decision-making part unfold?

One factor is particularly evident in the current political environment in the United States. Sometimes the analysis and decision-making activities of government are short-circuited and taken by individual executives without an underlying organizational process. A president arrives at his view of the facts of global climate change based on his “gut instincts” rather than an objective and disinterested assessment of the scientific analysis available to him. An Administrator of the EPA acts to eliminate long-standing environmental protections based on his own particular ideological and personal interests. A Secretary of the Department of Energy takes leadership of the department without requesting a briefing on any of its current projects. These are instances of the dictator strategy (in the social-choice sense), where a single actor substitutes his will for the collective aggregation of beliefs and desires associated with both bureaucracy and democracy. In this instance the answer to our question is a simple one: in cases like these government has beliefs and intentions because particular actors have beliefs and intentions and those actors have the power and authority to impose their beliefs and intentions on government.

The more interesting cases involve situations where there is a genuine collective process through which analysis and assessment take place (of facts and priorities), and through which strategies are considered and ultimately adopted. Agencies usually make decisions through extended and formalized processes. There is generally an organized process of fact gathering and scientific assessment, followed by an assessment of various policy options with public exposure. Finally, a policy is adopted (the moment of decision).

The decision by the EPA to ban DDT in 1972 is illustrative (link, link, link). This was a decision of government which thereby became the will of government. It was the result of several important sub-processes: citizen and NGO activism about the possible toxic harms created by DDT, non-governmental scientific research assessing the toxicity of DDT, an internal EPA process designed to assess the scientific conclusions about the environmental and human-health effects of DDT, an analysis of the competing priorities involved in this issue (farming, forestry, and malaria control versus public health), and a decision, recommended to the Administrator and adopted, that the priority of public health and environmental safety was weightier than the economic interests served by the use of the pesticide.

Other examples of agency decision-making follow a similar pattern. The development of policy concerning science and technology is particularly interesting in this context. Consider, for example, Susan Wright (link) on the politics of regulation of recombinant DNA. This issue is explored more fully in her book Molecular Politics: Developing American and British Regulatory Policy for Genetic Engineering, 1972-1982. This is a good case study of “government making up its mind”. Another interesting case study is the development of US policy concerning ozone depletion; link.

These cases of science and technology policy illustrate two dimensions of the processes through which a government agency “makes up its mind” about a complex issue. There is an analytical component in which the scientific facts and the policy goals and priorities are gathered and assessed. And there is a decision-making component in which these analytical findings are crafted into a decision — a policy, a set of regulations, or a funding program, for example. It is routine in science and technology policy studies to observe that there is commonly a substantial degree of intertwining between factual judgments and political preferences and influences brought to bear by powerful outsiders. (Here is an earlier discussion of these processes; link.)

Ideally we would like to imagine a process of government decision-making that proceeds along these lines: careful gathering and assessment of the best available scientific evidence about an issue through expert specialist panels and sections; careful analysis of the consequences of available policy choices measured against a clear understanding of goals and priorities of the government; and selection of a policy or action that is best, all things considered, for forwarding the public interest and minimizing public harms. Unfortunately, as the experience of government policies concerning climate change in both the Bush administration and the Trump administration illustrates, ideology and private interest distort every phase of this idealized process.

(Philip Tetlock’s Superforecasting: The Art and Science of Prediction offers an interesting analysis of the process of expert factual assessment and prediction. Particularly interesting is his treatment of intelligence estimates.)

Patient safety

An issue which is of concern to anyone who receives treatment in a hospital is the topic of patient safety. How likely is it that there will be a serious mistake in treatment — wrong-site surgery, incorrect medication or radiation dose, exposure to a hospital-acquired infection? The current evidence is alarming. (Martin Makary et al estimate that over 250,000 deaths per year result from medical mistakes — making medical error now the third leading cause of mortality in the United States (link).) And when these events occur, where should we look for assigning responsibility — at the individual providers, at the systems that have been implemented for patient care, at the regulatory agencies responsible for overseeing patient safety?

Medical accidents commonly demonstrate a complex interaction of factors, from the individual provider to the technologies in use to failures of regulation and oversight. We can look at a hospital as a place where caring professionals do their best to improve the health of their patients while scrupulously avoiding errors. Or we can look at it as an intricate system involving the recording and dissemination of information about patients; the administration of procedures to patients (surgery, medication, radiation therapy). In this sense a hospital is similar to a factory with multiple intersecting locations of activity. Finally, we can look at it as an organization — a system of division of labor, cooperation, and supervision by large numbers of staff whose joint efforts lead to health and accidents alike. Obviously each of these perspectives is partially correct. Doctors, nurses, and technicians are carefully and extensively trained to diagnose and treat their patients. The technology of the hospital — the digital patient record system, the devices that administer drugs, the surgical robots — can be designed better or worse from a safety point of view. And the social organization of the hospital can be effective and safe, or it can be dysfunctional and unsafe. So all three aspects are relevant both to safe operations and the possibility of chronic lack of safety.

So how should we analyze the phenomenon of patient safety? What factors can be identified that distinguish high-safety hospitals from low-safety ones? What lessons can be learned from the study of accidents and mistakes that cumulatively lead to a hospital's patient safety record?

The view that primarily emphasizes expertise and training of individual practitioners is very common in the healthcare industry, and yet this approach is not particularly useful as a basis for improving the safety of healthcare systems. Skill and expertise are necessary conditions for effective medical treatment; but the other two zones of accident space are probably more important for reducing accidents — the design of treatment systems and the organizational features that coordinate the activities of the various individuals within the system.

Dr. James Bagian is a strong advocate for the perspective of treating healthcare institutions as systems. Bagian considers both technical systems characteristics of processes and the organizational forms through which these processes are carried out and monitored. And he is very skilled at teasing out some of the ways in which features of both system and organization lead to avoidable accidents and failures. I recall his description of a safety walkthrough he had done in a major hospital. He said that during the tour he noticed a number of nurses’ stations which were covered with yellow sticky notes. He observed that this is both a symptom and a cause of an accident-prone organization. It means that individual caregivers were obligated to remind themselves of tasks and exceptions that needed to be observed. Far better was to have a set of systems and protocols that made sticky notes unnecessary. Here is the abstract from a short summary article by Bagian on the current state of patient safety:

Abstract

The traditional approach to patient safety in health care has ranged from reticence to outward denial of serious flaws. This undermines the otherwise remarkable advances in technology and information that have characterized the specialty of medical practice. In addition, lessons learned in industries outside health care, such as in aviation, provide opportunities for improvements that successfully reduce mishaps and errors while maintaining a standard of excellence. This is precisely the call in medicine prompted by the 1999 Institute of Medicine report “To Err Is Human: Building a Safer Health System.” However, to effect these changes, key components of a successful safety system must include: (1) communication, (2) a shift from a posture of reliance on human infallibility (hence “shame and blame”) to checklists that recognize the contribution of the system and account for human limitations, and (3) a cultivation of non-punitive open and/or de-identified/anonymous reporting of safety concerns, including close calls, in addition to adverse events.

(Here is the Institute of Medicine study to which Bagian refers; link.)

Nancy Leveson is an aeronautical and software engineer who has spent most of her career devoted to designing safe systems. Her book Engineering a Safer World: Systems Thinking Applied to Safety is a recent presentation of her theories of systems safety. She applies these approaches to problems of patient safety with several co-authors in “A Systems Approach to Analyzing and Preventing Hospital Adverse Events” (link). Here is the abstract and summary of findings for that article:

Objective:

This study aimed to demonstrate the use of a systems theory-based accident analysis technique in health care applications as a more powerful alternative to the chain-of-event accident models currently underpinning root cause analysis methods.

Method:

A new accident analysis technique, CAST [Causal Analysis based on Systems Theory], is described and illustrated on a set of adverse cardiovascular surgery events at a large medical center. The lessons that can be learned from the analysis are compared with those that can be derived from the typical root cause analysis techniques used today.

Results:

The analysis of the 30 cardiovascular surgery adverse events using CAST revealed the reasons behind unsafe individual behavior, which were related to the design of the system involved and not negligence or incompetence on the part of individuals. With the use of the system-theoretic analysis results, recommendations can be generated to change the context in which decisions are made and thus improve decision making and reduce the risk of an accident.

Conclusions:

The use of a systems-theoretic accident analysis technique can assist in identifying causal factors at all levels of the system without simply assigning blame to either the frontline clinicians or technicians involved. Identification of these causal factors in accidents will help health care systems learn from mistakes and design system-level changes to prevent them in the future.

Crucial in this article is this research group’s effort to identify causes “at all levels of the system without simply assigning blame to either the frontline clinicians or technicians involved”. The key result is this: “The analysis of the 30 cardiovascular surgery adverse events using CAST revealed the reasons behind unsafe individual behavior, which were related to the design of the system involved and not negligence or incompetence on the part of individuals.”

Bagian, Leveson, and others make a crucial point: in order to substantially increase the performance of hospitals and the healthcare system more generally when it comes to patient safety, it will be necessary to extend the focus of safety analysis from individual incidents and agents to the systems and organizations through which these accidents were possible. In other words, attention to systems and organizations is crucial if we are to significantly reduce the frequency of medical and hospital mistakes.

(The Makary et al estimate of 250,000 deaths caused by medical error has been questioned on methodological grounds. See Aaron Carroll’s thoughtful rebuttal (NYT 8/15/16; link).)

Nuclear accidents

 
(diagrams: Chernobyl reactor before and after)
 

Nuclear fission is one of the world-changing discoveries of the mid-twentieth century. The atomic bomb projects of the United States led to the atomic bombing of Japan in August 1945, and the hope for limitless electricity brought about the proliferation of a variety of nuclear reactors around the world in the decades following World War II. And, of course, nuclear weapons proliferated to other countries beyond the original circle of atomic powers.

Given the enormous energies associated with fission and the dangerous and toxic properties of radioactive components of fission processes, the possibility of a nuclear accident is a particularly frightening one for the modern public. The world has seen the results of several massive nuclear accidents — Chernobyl and Fukushima in particular — and the devastating results they have had on human populations and the social and economic wellbeing of the regions in which they occurred.

Safety is therefore a paramount priority in the nuclear industry, in research labs and in military and civilian applications alike. So what is the situation of safety in the nuclear sector? Jim Mahaffey’s Atomic Accidents: A History of Nuclear Meltdowns and Disasters: From the Ozark Mountains to Fukushima is a detailed and carefully researched attempt to answer this question. And the information he provides is not reassuring. Beyond the celebrated and well-known disasters at nuclear power plants (Three Mile Island, Chernobyl, Fukushima), Mahaffey refers to hundreds of accidents involving reactors, research laboratories, weapons plants, and deployed nuclear weapons that have had less public awareness. These accidents resulted in a very low number of lives lost, but their frequency is alarming. They are indeed “normal accidents” (Perrow, Normal Accidents: Living with High-Risk Technologies). For example:

  • a Japanese fishing boat is contaminated by fallout from Castle Bravo test of hydrogen bomb; lots of radioactive fish at the markets in Japan (March 1, 1954) (kl 1706)
  • one MK-6 atomic bomb is dropped on Mars Bluff, South Carolina, after a crew member accidentally pulled the emergency bomb release handle (February 5, 1958) (kl 5774)
  • Fermi 1 liquid sodium plutonium breeder reactor experiences fuel meltdown during startup trials near Detroit (October 4, 1966) (kl 4127)

Mahaffey also provides detailed accounts of the most serious nuclear accidents and meltdowns of the past forty years: Three Mile Island, Chernobyl, and Fukushima.

The safety and control of nuclear weapons is of particular interest. Here is Mahaffey’s summary of “Broken Arrow” events — the loss of atomic and fusion weapons:

Did the Air Force ever lose an A-bomb, or did they just misplace a few of them for a short time? Did they ever drop anything that could be picked up by someone else and used against us? Is humanity going to perish because of poisonous plutonium spread that was snapped up by the wrong people after being somehow misplaced? Several examples will follow. You be the judge. 

Chuck Hansen [U.S. Nuclear Weapons – The Secret History] was wrong about one thing. He counted thirty-two “Broken Arrow” accidents. There are now sixty-five documented incidents in which nuclear weapons owned by the United States were lost, destroyed, or damaged between 1945 and 1989. These bombs and warheads, which contain hundreds of pounds of high explosive, have been abused in a wide range of unfortunate events. They have been accidentally dropped from high altitude, dropped from low altitude, crashed through the bomb bay doors while standing on the runway, tumbled off a fork lift, escaped from a chain hoist, and rolled off an aircraft carrier into the ocean. Bombs have been abandoned at the bottom of a test shaft, left buried in a crater, and lost in the mud off the coast of Georgia. Nuclear devices have been pounded with artillery of a foreign nature, struck by lightning, smashed to pieces, scorched, toasted, and burned beyond recognition. Incredibly, in all this mayhem, not a single nuclear weapon has gone off accidentally, anywhere in the world. If it had, the public would know about it. That type of accident would be almost impossible to conceal. (kl 5527)

There are a few common threads in the stories of accident and malfunction that Mahaffey provides. First, there are failures of training and knowledge on the part of front-line workers. The physics of nuclear fission is often counter-intuitive, and the idea of critical mass does not fully capture the danger of a quantity of fissionable material. The geometry in which the material is stored makes a crucial difference to whether it goes critical. Fissionable material is often transported and manipulated in liquid solution; and the shape and configuration of the vessel in which the solution is held makes a difference to the probability of exponential growth of neutron emission — leading to runaway fission of the material. Mahaffey documents accidents that occurred in nuclear materials processing plants that resulted from plant workers applying what they knew from industrial plumbing to their efforts to solve basic shop-floor problems. All too often the result was a flash of blue light and the release of a great deal of heat and radioactive material.
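
To make the point about geometry and runaway fission a little more concrete, here is a minimal numerical sketch (my own illustration, not drawn from Mahaffey): whether a configuration of fissile material is dangerous depends on whether the effective neutron multiplication factor k, which the geometry of the vessel helps determine, sits above or below 1.

```python
# Minimal illustration (not from Mahaffey): each fission "generation"
# multiplies the neutron population by the effective multiplication factor k.
# Geometry matters because it affects how many neutrons leak away before
# causing further fissions, and hence the value of k.

def neutron_population(k: float, generations: int, n0: float = 1.0) -> float:
    """Neutron population after a number of fission generations."""
    return n0 * k ** generations

for k in (0.95, 1.00, 1.05):
    print(f"k = {k:.2f}: population after 100 generations = "
          f"{neutron_population(k, 100):.3f}")

# k = 0.95 -> the chain reaction dies out (about 0.006 of the starting population)
# k = 1.00 -> steady state
# k = 1.05 -> roughly 130-fold growth after only 100 generations
```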

Second, there is a fault at the opposite end of the knowledge spectrum — the tendency of expert engineers and scientists to believe that they can solve complicated reactor problems on the fly. This turned out to be a critical problem at Chernobyl (kl 6859).

The most difficult problem to handle is that the reactor operator, highly trained and educated with an active and disciplined mind, is liable to think beyond the rote procedures and carefully scheduled tasks. The operator is not a computer, and he or she cannot think like a machine. When the operator at NRX saw some untidy valve handles in the basement, he stepped outside the procedures and straightened them out, so that they were all facing the same way. (kl 2057)

There are also clear examples of inappropriate supervision in the accounts shared by Mahaffey. Here is an example from Chernobyl.

[Deputy chief engineer] Dyatlov was enraged. He paced up and down the control panel, berating the operators, cursing, spitting, threatening, and waving his arms. He demanded that the power be brought back up to 1,500 megawatts, where it was supposed to be for the test. The operators, Toptunov and Akimov, refused on grounds that it was against the rules to do so, even if they were not sure why. 

Dyatlov turned on Toptunov. “You lying idiot! If you don’t increase power, Tregub will!”  

Tregub, the Shift Foreman from the previous shift, was officially off the clock, but he had stayed around just to see the test. He tried to stay out of it. 

Toptunov, in fear of losing his job, started pulling rods. By the time he had wrestled it back to 200 megawatts, 205 of the 211 control rods were all the way out. In this unusual condition, there was danger of an emergency shutdown causing prompt supercriticality and a resulting steam explosion. At 1:22:30 a.m., a read-out from the operations computer advised that the reserve reactivity was too low for controlling the reactor, and it should be shut down immediately. Dyatlov was not worried. “Another two or three minutes, and it will be all over. Get moving, boys!” (kl 6887)

This was the turning point in the disaster.

A related fault is the intrusion of political and business interests into the design and conduct of high-risk nuclear actions. Leaders want a given outcome without understanding the technical details of the processes they are demanding; subordinates like Toptunov are eventually cajoled or coerced into taking the problematic actions. The persistence of advocates for liquid sodium breeder reactors represents a higher-level example of the same fault. Associated with this role of political and business interests is an impulse towards secrecy and concealment when accidents occur and deliberate understatement of the public dangers created by an accident — a fault amply demonstrated in the Fukushima disaster.

Atomic Accidents provides a fascinating history of events of which most of us are unaware. The book is not primarily intended to offer an account of the causes of these accidents, but rather of the ways in which they unfolded and the consequences they had for human welfare. (Generally speaking his view is that nuclear accidents in North America and Western Europe have had remarkably few human casualties.) And many of the accidents he describes are exactly the sorts of failures that are common in all large-scale industrial and military processes.

(Large-scale technology failure has come up frequently here. See these posts for analysis of some of the organizational causes of technology failure (link, link, link).)

Trust and organizational effectiveness

It is fairly well agreed that organizations require a degree of trust among the participants in order for the organization to function at all. But what does this mean? How much trust is needed? How is trust cultivated among participants? And what are the mechanisms through which trust enhances organizational effectiveness?

The minimal requirements of cooperation presuppose a certain level of trust. As A plans and undertakes a sequence of actions designed to bring about Y, his or her efforts must rely upon the coordination promised by other actors. If A does not have a sufficiently high level of confidence in B’s assurances and compliance, then he or she will be rationally compelled to choose another series of actions. If Larry Bird had not trusted his teammate Dennis Johnson, the famous steal would not have happened.

 

First, what do we mean by trust in the current context? Each actor in an organization or group has intentions, engages in behavior, and communicates with other actors. Part of communication is often in the form of sharing information and agreeing upon a plan of coordinated action. Agreeing upon a plan in turn often requires statements and commitments from various actors about the future actions they will take. Trust is the circumstance that permits others to rely upon those statements and commitments. We might say, then, that A trusts B just in case —

  • A believes that when B asserts P, this is an honest expression of B’s beliefs.
  • A believes that when B says he/she will do X, this is an honest commitment on B’s part and B will carry it out (absent extraordinary reasons to the contrary).
  • A believes that when B asserts that his/her actions will be guided by his/her best understanding of the purposes and goals of the organization, this is a truthful expression.
  • A believes that B’s future actions, observed and unobserved, will be consistent with his/her avowals of intentions, values, and commitments.
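
Read as a conjunction, these four conditions lend themselves to a very simple schematic encoding. The sketch below is purely illustrative; the field names are mine, not the author's.

```python
from dataclasses import dataclass

@dataclass
class TrustAssessment:
    """A's beliefs about B, one flag per condition in the list above."""
    honest_assertions: bool      # B's assertions of fact are sincere
    honest_commitments: bool     # B's commitments are sincere and will be kept
    organization_guided: bool    # B's avowed orientation to organizational goals is truthful
    consistent_unobserved: bool  # B's unobserved conduct matches B's avowals

    def a_trusts_b(self) -> bool:
        # On this reading, trust requires all four conditions to hold.
        return all((self.honest_assertions,
                    self.honest_commitments,
                    self.organization_guided,
                    self.consistent_unobserved))

# Example: A doubts B's behavior when unobserved, so trust fails.
print(TrustAssessment(True, True, True, False).a_trusts_b())  # False
```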

So what are some reasons why mistrust might rear its ugly head between actors in an organization? Why might A fail to trust B?

  • A may believe that B’s private interests are driving B’s actions (rather than adherence to prior commitments and values).
  • A may believe that B suffers from weakness of the will, an inability to carry out his honest intentions.
  • A may believe that B manipulates his statements of fact to suit his private interests.
  • Or less dramatically: A may not have high confidence in these features of B’s behavior.
  • B may have no real interest or intention in behaving in a truthful way.

And what features of organizational life and practice might be expected to either enhance inter-personal trust or to undermine it?

Trust is enhanced by individuals having the opportunity to get acquainted with their collaborators in a more personal way — to see from non-organizational contexts that they are generally well intentioned; that they make serious efforts to live up to their stated intentions and commitments; and that they are generally honest. So perhaps there is a rationale for the bonding exercises that many companies undertake for their workers.

Likewise, trust is enhanced by the presence of a shared and practiced commitment to the value of trustworthiness. An organization itself can enhance trust in its participants by performing the actions that its participants expect the organization to perform. For example, an organization that abruptly and without consultation ends an important employee benefit undermines its employees’ trust that the organization has their best interests at heart. This abrogation of prior obligations may in turn lead individuals to behave in a less trustworthy way, and lead others to have lower levels of trust in each other.

How does enhancing trust have the promise of bringing about higher levels of organizational effectiveness? Fundamentally this comes down to the question of the value of teamwork and the burden of unnecessary transaction costs. If every expense report requires investigation, the amount of resources spent on accountants will be much greater than a situation where only the outlying reports are questioned. If each vice president needs to defend him or herself against the possibility that another vice president is conspiring against him, then less time and energy are available to do the work of the organization. If the CEO doesn’t have high confidence that her executive team will work wholeheartedly to bring about a successful implementation of a risky investment, then the CEO will choose less risky investments.
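
The point about transaction costs can be put in rough numbers. The figures in the sketch below are invented purely for illustration; what matters is the gap between the two review regimes, not the specific values.

```python
# Illustrative only: invented numbers comparing "audit every expense report"
# with "audit only flagged outliers". The point is the difference in review
# cost, not the specific values.

reports_per_year = 10_000
cost_per_review = 40.0          # staff cost of investigating one report
outlier_rate = 0.05             # share of reports flagged as unusual

audit_everything = reports_per_year * cost_per_review
audit_outliers_only = reports_per_year * outlier_rate * cost_per_review

print(f"Review every report:  ${audit_everything:,.0f}")    # $400,000
print(f"Review outliers only: ${audit_outliers_only:,.0f}")  # $20,000
```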

In other words, trust is crucial for collaboration and teamwork. And an organization that manages to cultivate a high level of trust among its participants is likely to perform better than one that depends primarily on supervision and enforcement.

The culture of an organization

It is often held that the behavior of a particular organization is affected by its culture. Two banks may have very similar organizational structures but show rather different patterns of behavior, and those differences are ascribed to differences in culture. What does this mean? Clifford Geertz is one of the most articulate theorists of culture — especially in his earlier works. Here is a statement couched in terms of religion as a cultural system from The Interpretation Of Cultures. A religion is …

(1) a system of symbols which act to (2) establish powerful, pervasive, and long-lasting moods and motivations in men by (3) formulating conceptions of a general order of existence and (4) clothing these conceptions with such an aura of factuality that (5) the moods and motivations seem uniquely realistic. (90)

And again:

The concept of culture I espouse, and whose utility the essays below attempt to demonstrate, is essentially a semiotic one. Believing, with Max Weber, that man is an animal suspended in webs of significance he himself has spun, I take culture to be those webs, and the analysis of it to be therefore not an experimental science in search of law but an interpretive one in search of meaning. (5)

On its face this idea seems fairly simple. We might stipulate that “culture” refers to a set of beliefs, values, and practices that are shared by a number of individuals within the group, including leaders, managers, and staff members. But, as we have seen repeatedly in other posts, we need to think of these statements in terms of a distribution across a population rather than as a uniform set of values.

Consider this hypothetical comparison of two organizations with respect to employees’ attitudes towards working with colleagues of a different religion. (This example is fictitious.) Suppose that the employees of two organizations have been surveyed on the topic of their comfort level at working with other people of different religious beliefs, on a scale of 0-21. Low values indicate a lower level of comfort.

The blue organization shows a distribution of individuals who are on average more accepting of religious diversity than the gold organization. The weighted score for the blue population is about 10.4, in comparison to a weighted score of 9.9 for the gold population. This is a relatively small difference between the two populations; but it may be enough to generate meaningful differences in behavior and performance. If, for example, the attitude measured here leads to an increased likelihood for individuals to make disparaging comments about the co-worker’s religion, then we might predict that the gold group will have a somewhat higher level of incidents of religious intolerance. And if we further hypothesize that a disparaging work environment has some effect on work productivity, then we might predict that the blue group will have somewhat higher productivity.
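
Since the underlying chart is not reproduced here, the sketch below simply illustrates how such a weighted score would be computed from a survey histogram. The frequencies are invented and are not the data behind the 10.4 and 9.9 figures quoted above.

```python
# Hypothetical survey histograms on the 0-21 comfort scale described above.
# The counts are invented for illustration; they are NOT the data behind the
# blue/gold comparison in the text.

def weighted_score(histogram: dict[int, int]) -> float:
    """Mean comfort score, weighting each scale value by its frequency."""
    total = sum(histogram.values())
    return sum(score * count for score, count in histogram.items()) / total

blue = {6: 5, 8: 10, 10: 30, 12: 35, 14: 20}   # skews toward higher comfort
gold = {6: 15, 8: 25, 10: 30, 12: 20, 14: 10}  # skews toward lower comfort

print(f"blue: {weighted_score(blue):.1f}")  # 11.1
print(f"gold: {weighted_score(gold):.1f}")  # 9.7
```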

Current discussions of sexual harassment in the workplace are often couched in terms of organizational culture. It appears that sexual harassment is more frequent and flagrant in some organizations than others. Women are particularly likely to be harassed in a work culture in which men believe and act as though they are at liberty to impose sexual language and action on female co-workers and in which the formal processes of reporting of harassment are weak or disregarded. The first is a cultural fact and the second is a structural or institutional fact.

We can ask several causal questions about this interpretation of organizational culture. What are the factors that lead to the establishment and currency of a given profile of beliefs, values, and practices within an organization? And what factors exist that either reproduce those beliefs or undermine them? Finally we can ask what the consequences of a given culture profile are in the internal and external performance of the organization.

There seem to be two large causal mechanisms responsible for establishment and maintenance of a particular cultural constellation within an organization. First is recruitment. One organization may make a specific effort to screen candidates so as to select in favor of a particular set of values and attitudes — acceptance, collaboration, trustworthiness, openness to others. And another may favor attitudes and values that are thought to be more directly related to profitability or employee malleability. These selection mechanisms can lead to significant differences in the overall culture of the organization. And the decision to orient recruitment in one way rather than another is itself an expression of values.

The second large mechanism is the internal socialization and leadership processes of the organization. We can hypothesize that an organization whose leaders and supervisors both articulate the values of equality and respect in the workplace and who demonstrate that commitment in their own actions will be one in which more people in the organization will adopt those values. And we can likewise hypothesize that the training and evaluation processes of an organization can be effective in cultivating the values of the organization. In other words, it seems evident that leadership and training are particularly relevant to the establishment of a particular organizational culture.

The other large causal question is how and to what extent cultural differences across organizations have effects on the performance and behavior of those organizations. We can hypothesize that differences in organizational values and culture lead to differences in behavior within the organization — more or less collaboration, more or less harassment, more or less bad behavior of various kinds. These differences are themselves highly important. But we can also hypothesize that differences like these can lead to differences in organizational effectiveness. This is the central idea of the field of positive organizational studies. Scholars like Kim Cameron and others argue, on the basis of empirical studies across organizational settings, that organizations that embody the values of mutual acceptance, equality, and a positive orientation towards each other’s contributions are in fact more productive organizations as well (Competing Values Leadership: Second Edition; link).

A new model of organization?

In Team of Teams: New Rules of Engagement for a Complex World General Stanley McChrystal (with Tantum Collins, David Silverman, and Chris Fussell) describes a new, 21st-century conception of organization for large, complex activities involving thousands of individuals and hundreds of major sub-tasks. His concept is grounded in his experience in counter-insurgency warfare in Iraq. Rather than being constructed as centrally organized, bureaucratic, hierarchical processes with commanders and scripted agents, McChrystal argues that modern counter-terrorism requires a more decentralized and flexible system of action, which he refers to as “teams of teams”. Information is shared freely, local commanders have ready access to resources and knowledge from other experts, and they make decisions in a more flexible way. The model hopes to capture the benefits of improvisation, flexibility, and a much higher level of trust and communication than is characteristic of typical military and corporate organizations.

 

One place where the “team of teams” structure is plausible is in the context of a focused technology startup company, where the whole group of participants needs to be in regular and frequent collaboration with each other. Indeed, Paul Rabinow’s 1996 ethnography of the Cetus Corporation’s pursuit of PCR (polymerase chain reaction), Making PCR: A Story of Biotechnology, reflects a very similar topology of information flows and collaboration links across and within working subgroups (link). But the vision does not fit very well the organizational and operational needs of a large hospital, a railroad company, or a research university. It seems plausible that the challenges the US military faced in fighting Al-Qaeda and ISIL are not really analogous to those faced by less dramatic organizations like hospitals, universities, and corporations. The decentralized and improvisational circumstances of urban warfare against loosely organized terrorists may be sui generis.

McChrystal proposes an organizational structure that is more decentralized, more open to local decision-making, and more flexible and resilient. These are unmistakable virtues in some circumstances; but not in all circumstances and all organizations. And arguably such a structure would have been impossible in the planning and execution of the French defense of Dien Bien Phu or the US decision to wage war against the Vietnamese insurgency ten years later. These were situations where central decisions needed to be made, and the decisions needed to be implemented through well organized bureaucracies. The problem in both instances is that the wrong decisions were made, based on the wrong information and assessments. What was needed, it would appear, was better executive leadership and decision-making — not a fundamentally decentralized pattern of response and counter-response.

One thing that deserves comment in the context of McChrystal’s book is the history of bad organization, bad intelligence, and bad decision-making the world has witnessed in the military experiences of the past century. The radical miscalculations and failures of planning involved in the first months of the Korean War, the painful and tragic misjudgments made by the French military in preparing for Dien Bien Phu, the equally bad thinking and planning done by Robert McNamara and the whiz kids leading to the Vietnam War — these examples stand out as sentinel illustrations of the failures of large organizations that have been tasked to carry out large, complex activities involving numerous operational units. The military and the national security establishments were good at some tasks, and disastrously bad at others. And the things they were bad at were both systemic and devastating. Bernard Fall illustrates these failures in Hell In A Very Small Place: The Siege Of Dien Bien Phu, and David Halberstam does so for the decision-making that led to the war in Vietnam in The Best and the Brightest.

So devising new ideas about command, planning, intelligence gathering and analysis, and priority-setting that are more effective would be a big contribution to humanity. But the deficiencies in Dien Bien Phu, Korea, or Vietnam seem different from those McChrystal identifies in Iraq. What was needed in these portentous moments of policy choice was clear-eyed establishment of appropriate priorities and goals, honest collection of intelligence and sources of information, and disinterested implementation of policies and plans that served the highest interests of the country. The “team of teams” approach doesn’t seem to be a general solution to the wide range of military and political challenges nations face.

What one would have wanted to see in the French military or the US national security apparatus is something different from the kind of teamwork described by McChrystal: greater honesty on all parts, a commitment to taking seriously the assessments of experts and participants in the field, an openness to questioning strongly held assumptions, and a greater capacity for institutional wisdom in arriving at decisions of this magnitude. We would have wanted to see a process that was not dominated by large egos, self-interest, and fixed ideas. We would have wanted French generals and their civilian masters to soberly assess the military function that a fortress camp at Dien Bien Phu could satisfy; the realistic military requirements that would need to be satisfied in order to defend the location; and an honest effort to solicit the very best information and judgment from experienced commanders and officials about what a Viet-Minh siege might look like. Instead, the French military was guided by complacent assumptions about French military superiority, which led to a genuine catastrophe for the soldiers assigned to the task and to French society more broadly.

There are valid insights contained in McChrystal’s book about the urgency of breaking down obstacles to communication and action within sprawling organizations as they confront a changing environment. But it doesn’t add up to a model that is well designed for most contexts in which large organizations actually function.

How organizations adapt

Organizations do things; they depend upon the coordinated efforts of numerous individuals; and they exist in environments that affect their ongoing success or failure. Moreover, organizations are to some extent plastic: the practices and rules that make them up can change over time. Sometimes these changes happen as the result of deliberate design choices by individuals inside or outside the organization; so a manager may alter the rules through which decisions are made about hiring new staff in order to improve the quality of work. And sometimes they happen through gradual processes over time that no one is specifically aware of. The question arises, then, whether organizations evolve toward higher functioning based on the signals from the environments in which they live, or whether, on the contrary, organizational change is stochastic, without a gradient of change towards more effective functioning. Do changes within an organization add up over time to improved functioning? What kinds of social mechanisms might bring about such an outcome?

One way of addressing this topic is to consider organizations as mid-level social entities that are potentially capable of adaptation and learning. An organization has identifiable internal processes of functioning as well as a delineated boundary of activity. It has a degree of control over its functioning. And it is situated in an environment that signals differential success/failure through a variety of means (profitability, success in gaining adherents, improvement in market share, number of patents issued, …). So the environment responds favorably or unfavorably, and change occurs.

Is there anything in this specification of the structure, composition, and environmental location of an organization that suggests the possibility or likelihood of adaptation over time in the direction of improvement of some measure of organizational success? Do institutions and organizations get better as a result of their interactions with their environments and their internal structure and actors?

There are a few possible social mechanisms that would support the possibility of adaptation towards higher functioning. One is the fact that purposive agents are involved in maintaining and changing institutional practices. Those agents are capable of perceiving inefficiencies and potential gains from innovation, and are sometimes in a position to introduce appropriate innovations. This is true at various levels within an organization, from the supervisor of a custodial staff to a vice president for marketing to a CEO. If the incentives presented to these agents are aligned with the important needs of the organization, then we can expect that they will introduce innovations that enhance functioning. So one mechanism through which we might expect that organizations will get better over time is the fact that some agents within an organization have the knowledge and power necessary to enact changes that will improve performance, and they sometimes have an interest in doing so. In other words, there is a degree of intelligent intentionality within an organization that might work in favor of enhancement.

This line of thought should not be over-emphasized, however, because there are competing forces and interests within most organizations. Previous posts have focused on current organizational theory based on the idea of a “strategic action field” of insiders and outsiders who determine the activities of the organization (Fligstein and McAdam, Crozier; link, link). This framework suggests that the structure and functioning of an organization is not wholly determined by a single intelligent actor (“the founder”), but is rather the temporally extended result of interactions among actors in the pursuit of diverse aims. This heterogeneity of purposive actions by actors within an institution means that the direction of change is indeterminate; it is possible that the coalitions that form will bring about positive change, but the reverse is possible as well.

And in fact, many authors and participants have pointed out that it is often not the case that the agents’ interests are aligned with the priorities and needs of the organization. Jack Knight offers a persuasive critique of the idea that organizations and institutions tend to increase in their ability to provide collective benefits in Institutions and Social Conflict. CEOs who have a financial interest in a rapid stock price increase may take steps that worsen functioning for short-term market gain; supervisors may avoid work-flow innovations because they don’t want the headache of an extended change process; vice presidents may deny information to other divisions in order to enhance appreciation of the efforts of their own division. Here is a short description from Knight’s book of the way that institutional adjustment occurs as a result of conflict among players of unequal powers:

Individual bargaining is resolved by the commitments of those who enjoy a relative advantage in substantive resources. Through a series of interactions with various members of the group, actors with similar resources establish a pattern of successful action in a particular type of interaction. As others recognize that they are interacting with one of the actors who possess these resources, they adjust their strategies to achieve their best outcome given the anticipated commitments of others. Over time rational actors continue to adjust their strategies until an equilibrium is reached. As this becomes recognized as the socially expected combination of equilibrium strategies, a self-enforcing social institution is established. (Knight, 143)

A very different possible mechanism is unit selection, where more successful innovations or firms survive and less successful innovations and firms fail. This is the premise of the evolutionary theory of the firm (Nelson and Winter, An Evolutionary Theory of Economic Change). In a competitive market, firms with low internal efficiency will have a difficult time competing on price with more efficient firms; so these low-efficiency firms will go out of business occasionally. Here the question of “units of selection” arises: is it firms over which selection operates, or is it lower-level innovations that are the object of selection?
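
A toy simulation can make the selection mechanism vivid: if less efficient firms are more likely to exit and be replaced by new entrants, the average efficiency of the surviving population drifts upward even though no individual firm improves. This is my own illustrative sketch, not a model taken from Nelson and Winter.

```python
import random

random.seed(0)

# Illustrative sketch of market selection operating over firms: each firm has
# a fixed efficiency level in (0, 1); in each period, less efficient firms are
# more likely to exit and be replaced by new entrants.

def step(firms: list[float]) -> list[float]:
    survivors = [e for e in firms if random.random() < e]  # exit risk falls with efficiency
    entrants = [random.uniform(0.3, 0.9) for _ in range(len(firms) - len(survivors))]
    return survivors + entrants

firms = [random.uniform(0.3, 0.9) for _ in range(200)]
print(f"initial mean efficiency: {sum(firms) / len(firms):.2f}")
for _ in range(50):
    firms = step(firms)
print(f"mean efficiency after 50 periods: {sum(firms) / len(firms):.2f}")
# The mean rises because survival, not learning, does the work here.
```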

Geoffrey Hodgson provides a thoughtful review of this set of theories here, part of what he calls “competence-based theories of the firm”. Here is Hodgson’s diagram of the relationships that exist among several different approaches to the study of the firm.

The market mechanism does not work very well as a selection mechanism for some important categories of organizations — government agencies, legislative systems, or non-profit organizations. This is so because the criterion of selection is “profitability / efficiency within a competitive market”, and government and non-profit organizations are not importantly subject to the workings of a market.

In short, the answer to the fundamental question here is mixed. There are factors that unquestionably work to enhance effectiveness in an organization. But these factors are weak and defeasible, and the countervailing factors (internal conflict, divided interests of actors, slackness of the corporate marketplace) leave open the possibility that institutions change but do not evolve in a consistent direction. And the glaring dysfunctions that have afflicted many organizations, both corporate and governmental, make this conclusion even more persuasive. Perhaps what demands explanation is the rare case where an organization achieves a high level of effectiveness and consistency in its actions, rather than the many cases that come to mind of dysfunctional organizational activity.

(The examples of organizational dysfunction that come to mind are many — the failures of regulation of the civilian nuclear industry (Perrow, The Next Catastrophe: Reducing Our Vulnerabilities to Natural, Industrial, and Terrorist Disasters); the failure of US anti-submarine warfare in World War II (Cohen, Military Misfortunes: The Anatomy of Failure in War); and the failure of chemical companies to ensure safe operations of their plants (Shrivastava, Bhopal: Anatomy of Crisis). Here is an earlier post that addresses some of these examples; link. And here are several earlier posts on the topic of institutional change and organizational behavior; link, link.)

Errors in organizations

Organizations do things — process tax returns, deploy armies, send spacecraft to Mars. And in order to do these various things, organizations have people with job descriptions; organization charts; internal rules and procedures; information flows and pathways; leaders, supervisors, and frontline staff; training and professional development programs; and other particular characteristics that make up the decision-making and action implementation of the organization. These individuals and sub-units take on tasks, communicate with each other, and give rise to action steps.

And often enough organizations make mistakes — sometimes small mistakes (a tax return is sent to the wrong person, a hospital patient is administered two aspirins rather than one) and sometimes large mistakes (the space shuttle Challenger is cleared for launch on January 28, 1986, a Union Carbide plant accidentally releases toxic gases over a large population in Bhopal, FEMA bungles its response to Hurricane Katrina). What can we say about the causes of organizational mistakes? And how can organizations and their processes be improved so mistakes are less common and less harmful?

Charles Perrow has devoted much of his career to studying these questions. Two books in particular have shed a great deal of light on the organizational causes of industrial and technological accidents, Normal Accidents: Living with High-Risk Technologies and The Next Catastrophe: Reducing Our Vulnerabilities to Natural, Industrial, and Terrorist Disasters. (Perrow’s work has been discussed in several earlier posts; link, link, link.) The first book emphasizes that errors and accidents are unavoidable; they are the random noise of the workings of a complex organization. So the key challenge is to have processes that detect errors and that are resilient to the ones that make it through. One of Perrow’s central findings in The Next Catastrophe is the importance of achieving a higher level of system resilience by decentralizing risk and potential damage. Don’t route tanker cars of chlorine through dense urban populations; don’t place nuclear power plants adjacent to cities; don’t create an Internet or a power grid with a very small number of critical nodes. Kathleen Tierney’s The Social Roots of Risk: Producing Disasters, Promoting Resilience (High Reliability and Crisis Management) emphasizes the need for system resilience as well (link).

Is it possible to arrive at a more granular understanding of organizational errors and their sources? A good place to begin is with the theory of organizations as “strategic action fields” in the sense advocated by Fligstein and McAdam in A Theory of Fields. This approach imposes an important discipline on us — it discourages the mental mistake of reification when we think about organizations. Organizations are not unitary decision and action bodies; instead, they are networks of people linked in a variety of forms of dependency and cooperation. Various sub-entities consider tasks, gather information, and arrive at decisions for action, and each of these steps is vulnerable to errors and shortfalls. The activities of individuals and sub-groups are stimulated and conveyed through these networks of association; and, like any network of control or communication, there is always the possibility of a broken link or a faulty action step within the extended set of relationships that exist.
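
The "broken link" observation has a simple quantitative side: even when each step in a chain of communication or control is individually quite reliable, the chance that the whole chain completes without a fault drops quickly as the chain lengthens. A toy calculation, with illustrative numbers only:

```python
# Toy calculation: probability that a multi-step chain of communication or
# action completes with no error, assuming each step succeeds independently
# with the same probability. The numbers are illustrative, not empirical.

def chain_reliability(p_step: float, n_steps: int) -> float:
    return p_step ** n_steps

for n in (1, 5, 10, 20):
    print(f"{n:>2} steps at 99% each: {chain_reliability(0.99, n):.2%} error-free")

# Twenty reasonably reliable steps already leave roughly a 1-in-5 chance of a
# fault somewhere in the chain.
```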

Errors can derive from individual mistakes; they can derive from miscommunication across individuals and sub-units within the organization; they can derive from more intentional sources, including self-interested or corrupt behavior on the part of internal participants. And they can derive from conflicts of interest between units within an organization (the manufacturing unit has an interest in maximizing throughput, the quality control unit has an interest in minimizing faulty products).

Errors are likely in every part of an organization’s life. Errors occur in the data-gathering and analysis functions of an organization. A sloppy market study is incorporated into a planning process leading to a substantial over-estimate of demand for a product; a survey of suppliers makes use of ambiguous questions that lead to misinterpretation of the results; a vice president under-estimates the risk posed by a competitor’s advertising campaign. For an organization to pursue its mission effectively, it needs to have accurate information about the external circumstances that are most relevant to its goals. But “relevance” is a judgment issue; and it is possible for an organization to devote its intelligence-gathering resources to the collection of data that are only tangentially helpful for the task of designing actions to carry out the mission of the institution.

Errors occur in implementation as well. The action initiatives that emerge from an organization’s processes — from committees, from CEOs, from intermediate-level leaders, from informal groups of staff — are also vulnerable to errors of implementation. The facilities team formulates a plan for re-surfacing a group of parking lots; this plan depends upon closing these lots several days in advance; but the safety department delays in implementing the closure and the lots have hundreds of cars in them when the resurfacing equipment arrives. An error of implementation.

One way of describing these kinds of errors is to recognize that organizations are “loosely connected” when it comes to internal processes of information gathering, decision making, and action. The CFO stipulates that the internal audit function should be based on best practices nationally; the chief of internal audit interprets this as an expectation that processes should be designed based on the example of top-tier companies in the same industry; and the subordinate operationalizes this expectation by doing a survey of business-school case studies of internal audit functions at 10 companies. But the data collection that occurs now has only a loose relationship to the higher-level expectation formulated by the CFO. Similar disconnects — or loose connections — occur on the side of implementation of action steps as well. Presumably top FEMA officials did not intend that FEMA’s actions in response to Hurricane Katrina would be as ineffective and sporadic as they turned out to be.

Organizations also have a tendency towards acting on the basis of collective habits and traditions of behavior. It is easier for a university’s admissions department to continue the same programs of recruitment and enrollment year after year than it is to rethink the approach to recruitment in a fundamental way. And yet it may be that the circumstances of the external environment have changed so dramatically that the habitual practices will no longer achieve similar results. A good example is the emergence of social media marketing in admissions; in a very short period of time the 17- and 18-year-old young people whom admissions departments want to influence went from willing recipients of glossy admissions publications in the mail to “Facebook-only” readers. Yesterday’s correct solution to an organizational problem may become tomorrow’s serious error, because the environment has changed.

In a way the problem of organizational errors is analogous to the problem of software bugs in large, complex computer systems. It is recognized by software experts that bugs are inevitable; and some of these coding errors or design errors may have catastrophic consequences in unusual settings. (Nancy Leveson’s Safeware: System Safety and Computers provides an excellent review of these possibilities.) So the task for software engineers and organizational designers and leaders is similar: designing fallible systems that do a pretty good job almost all of the time, and are likely to fail gracefully when errors inevitably occur.
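To make the idea of graceful failure concrete, here is a minimal sketch in Python (the function names and the weather-service scenario are hypothetical, chosen only for illustration) of a common defensive pattern: wrap an unreliable primary operation in a fallback, so that an inevitable error degrades the result rather than halting the whole system.

```python
import logging

logger = logging.getLogger(__name__)

def fetch_live_forecast(region):
    """Primary source: a hypothetical call that may fail in unusual settings."""
    raise TimeoutError("upstream weather service did not respond")

def fetch_cached_forecast(region):
    """Fallback: stale but safe data kept locally for exactly this case."""
    return {"region": region, "outlook": "unknown", "stale": True}

def get_forecast(region):
    """Degrade gracefully: log the error, serve the fallback, never crash."""
    try:
        return fetch_live_forecast(region)
    except Exception as exc:  # the error is treated as expected, not exceptional
        logger.warning("live forecast failed (%s); serving cached data", exc)
        return fetch_cached_forecast(region)

if __name__ == "__main__":
    print(get_forecast("Great Lakes"))  # still answers, flagged as stale
```

The design choice mirrors the organizational point: the system is built on the assumption that some component will fail, and the interesting question is what happens next.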

Positive organizational behavior

source: Rob Cross, Wayne Baker, Andrew Parker, “What creates energy in organizations?” (link)

Organizations need to be studied for several important reasons. One is their ubiquity in modern life — almost nothing that we need in daily life is created by solo producers. Rather, activity among a number of individuals is coordinated and directed through organizations that produce the goods and services we need — grocery chains, trucking companies, police departments, universities, small groups of cooperating chimpanzees.

A second reason for studying organizations is that existing theories of human behavior don’t do a very good job of explaining organizational behavior. The theory of rational self-interest — the premise of the market — doesn’t work very well as a sole theory of behavior within an organization. But neither does the normative theory of a Durkheim or a Weber. We need better theories of the forms of agency at work within organizations — the motives individuals have, the ways in which the rules and incentives of the organization affect behavior, the ways the culture of the workplace influences behavior, and the role that local-level practices play in shaping individual behavior in ways that make a difference to the functioning of the organization.

Here are a few complications from current work in sociology and economics.

Economist Amartya Sen observes that the premises of market rationality make social cooperation all but impossible. This is Sen’s central conclusion in “Rational Fools” (link), and it is surely correct: “The purely rational economic man is indeed close to being a social moron”. Sen’s work demonstrates that social behavior — even conceding the point that it derives from the thought processes of individuals — is substantially more layered and multi-factored than neoclassical economics postulates. Sen’s own addition to the mix is his theory of commitments — the idea that individuals have priorities that don’t map conveniently onto utility schemes — and that lots of ordinary collective behavior depends on these behavioral characteristics.

Sociologist Michele Lamont argues that a major difference between upper-middle class French and American men is their attitudes towards their own work in the office or factory. In Money, Morals, and Manners: The Culture of the French and the American Upper-Middle Class she finds that professional-class French men express a certain amount of contempt for their hard-working American counterparts. Her findings suggest substantial differences in the “culture of work and profession” in different national and regional settings. (Here is an earlier post on Lamont’s work; link.)

Experimental economist Ernst Fehr finds that workplaces create substantial behavioral predispositions that are triggered by the frame of the workplace (link). In unpublished work he finds that individuals in the banking industry are slightly more honest than the general population when they think in the frame of their personal lives, but that they are substantially less honest when they think in the frame of the banking office. Fehr and his colleagues demonstrate the power of cultural cues in the workplace (and presumably other well-defined social environments) in influencing the way that individuals make decisions in that environment.

Fehr has also made a major contribution through his research in experimental economics on the subject of altruism. He finds — context-independently — that decision makers are generally not rationally self-interested maximizers. And using some results from the neurosciences he argues that there is a biological basis for this “pro-social” element of behavior. Here is an example of Fehr’s approach:

If we randomly pick two human strangers from a modern society and give them the chance to engage in repeated anonymous exchanges in a laboratory experiment, reciprocally altruistic behaviour emerges spontaneously with a high probability…. However, human altruism even extends far beyond reciprocal altruism and reputation-based cooperation taking the form of strong reciprocity. (Fehr and Fischbacher 2005:7; link)

(Here is an article by Jon Elster on Fehr’s experimental research on altruism; link.)
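As a rough illustration of the claim about repeated exchange (a toy simulation of my own, not Fehr and Fischbacher’s experimental design; the payoffs and strategies are placeholder assumptions), consider a conditional cooperator in a repeated exchange compared with a purely self-interested defector: reciprocal cooperation earns far more over fifty rounds than the “rational fool” strategy does.

```python
# Illustrative payoffs for a single exchange: "C" = cooperate, "D" = defect.
PAYOFFS = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
           ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def reciprocator(partner_history):
    """Conditional cooperator: start by cooperating, then mirror the partner."""
    return "C" if not partner_history else partner_history[-1]

def rational_fool(partner_history):
    """Purely self-interested player: defect, since defection pays more in any single round."""
    return "D"

def play(strategy_a, strategy_b, rounds=50):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        a, b = strategy_a(hist_b), strategy_b(hist_a)
        pay_a, pay_b = PAYOFFS[(a, b)]
        score_a, score_b = score_a + pay_a, score_b + pay_b
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b

if __name__ == "__main__":
    print("two reciprocators:       ", play(reciprocator, reciprocator))
    print("reciprocator vs defector:", play(reciprocator, rational_fool))
```

The point of the sketch is only that conditional cooperation can be sustained and rewarding once interactions are repeated, which is consistent with the spontaneous emergence of reciprocal altruism that Fehr describes.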

So what can we discover about common features of behavior that can be observed in different kinds of organizations? There is a degree of convergence between the theoretical and experimental results that have come out of this research in sociology and economics and the organizational theories of what is now referred to as positive organizational studies. Here is a brilliant collection of research in this area edited by Kim Cameron and Gretchen Spreitzer, The Oxford Handbook of Positive Organizational Scholarship. Cameron and Spreitzer define the field in their introduction in these terms:

Positive organizational scholarship is an umbrella concept used to unify a variety of approaches in organizational studies, each of which incorporates the notion of ‘the positive.’ … “organizational research occurring at the micro, meso, and macro levels which points to unanswered questions about what processes, states, and conditions are important in explaining individual and collective flourishing. Flourishing refers to being in an optimal range of human functioning” [quoting Jane Dutton] (2).

The POS research community places a great deal of importance on the impact that positive social behavior has on the effectiveness of an organization. And these scholars believe that specific institutional arrangements and actions by leaders can increase the levels of positive social behavior in a work environment.

Studies have shown that organizations in several industries (including financial services, health care, manufacturing, and government) that implemented and improved their positive practices over time also increased their performance in desired outcomes such as profitability, productivity, quality, customer satisfaction, and employee retention. That is, positive practices that were institutionalized in organizations, including providing compassionate support for employees, forgiving mistakes and avoiding blame, fostering the meaningfulness of work, expressing frequent gratitude, showing kindness, and caring for colleagues, led organizations to perform at significantly higher levels on desired outcomes. (6)

In a sense they point to the possibility of high-level and low-level equilibria within roughly the same set of rules. And organizations that succeed in promoting positive behavioral motivations will be more successful in achieving their goals. Adam Grant and Justin Berg analyze these positive motives in their contribution to the Handbook, “Prosocial Motivation at Work”.

What motivates employees to care about making a positive difference in the lives of others, and what actions and experiences does this motivation fuel? (29)

It is both a theoretical premise of the POS research community and an empirical subject of inquiry for these researchers that it is possible to elicit “prosocial” motivations through suitable institutional arrangements and leadership. Interestingly, this seems to be an implication of the work by Ernst Fehr mentioned above as well.
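The idea of high-level and low-level equilibria under the same rules can be made concrete with a minimal coordination game (the payoff numbers below are illustrative assumptions, not drawn from the POS literature): when colleagues contribute discretionary prosocial effort it pays each person to do the same, and when no one does it pays to withhold it, so both the “flourishing” and the “minimal compliance” outcomes are stable.

```python
from itertools import product

# Illustrative payoffs: PAYOFF[(row_action, col_action)] = (row_payoff, col_payoff).
# "P" = prosocial effort, "M" = minimal compliance.
PAYOFF = {("P", "P"): (4, 4), ("P", "M"): (1, 3),
          ("M", "P"): (3, 1), ("M", "M"): (2, 2)}

def is_nash(row, col):
    """A profile is a pure-strategy Nash equilibrium if neither player
    gains by unilaterally switching actions."""
    r_pay, c_pay = PAYOFF[(row, col)]
    best_row = all(r_pay >= PAYOFF[(alt, col)][0] for alt in "PM")
    best_col = all(c_pay >= PAYOFF[(row, alt)][1] for alt in "PM")
    return best_row and best_col

if __name__ == "__main__":
    for row, col in product("PM", repeat=2):
        if is_nash(row, col):
            print(f"equilibrium: ({row}, {col}) with payoffs {PAYOFF[(row, col)]}")
    # Both (P, P) and (M, M) are equilibria: a high-level and a low-level
    # outcome under exactly the same rules.
```

On this reading, which equilibrium an organization lands in is a question of expectations, culture, and leadership rather than of the formal rules alone.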

Positive organizational scholarship is a timely contribution to the social sciences because it sits at the intersection of the need for better theories of the actor and the imperative to improve the performance of organizations. Hospitals, manufacturing companies, universities, and non-profit organizations all want to improve their performance in a variety of ways: improve patient safety, reduce costs, improve product quality, improve student retention, improve the delivery of effective social services, and the like. POS is an empirically grounded approach to arriving at a better understanding of the range of social behaviors that can potentially motivate participants and lead to better collective performance. And the category of “prosocial motivation” that underlies the POS approach is an important dimension of behavior for further research and investigation.

What drives organizational performance?

[images: musical ensembles]

We have a pretty good idea of the characteristics that support very high individual performance in a variety of fields, from jazz to track to physics to business. An earlier post discussed some of the different combinations of features that characterize leaders in several different professions (link). And it isn’t difficult to sketch out qualities of personality, character, and style that make for a great teacher, researcher, entrepreneur, a great soccer player, or an exceptional police investigator. So we might imagine that a high-performing organization is one that has succeeded in assembling a group of high-performing individuals. But this is plainly untrue — witness the New York Yankees during much of the 2000s, the dot-com company WebVan during the late 1990s, and the XYZ Orchestra today. (Here is a thoughtful Mellon Foundation study of quality factors in symphony orchestras; link.) In each case the organization consisted of high-performing stars in their various disciplines, but somehow the ensemble performed poorly. The lesson from these examples is an obvious one: the performance of an organization is more than the sum of the abilities of its component members.

In fact, it seems apparent that organizational performance, like physical health, is a function of a number of separate parameters:

  • clarity about mission
  • appropriateness of internal functional specialization
  • quality of internal communication and collaboration across units and individuals
  • quality of individual members and the intensity of their effort
  • quality of internal motivation
  • quality of leadership

We might say that an organization is like a physical mechanism in the sense that its overall performance depends on the quality of the design, the appropriate interconnections among the parts, and the quality of the individual components.
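As a purely illustrative sketch (the factor names echo the list above, but the scores and the functional form are placeholder assumptions, not a validated model), one can capture the “mechanism” intuition by letting any single weak parameter drag overall performance down, whether by multiplying the factor scores or by taking their minimum.

```python
from math import prod

# Hypothetical 0-1 scores for the parameters listed above (placeholder values only).
factors = {
    "mission_clarity": 0.9,
    "functional_specialization": 0.8,
    "communication_and_collaboration": 0.4,   # the weak link
    "individual_quality": 0.9,
    "internal_motivation": 0.8,
    "leadership": 0.85,
}

def multiplicative_performance(scores):
    """Performance as a product: any near-zero factor collapses the total."""
    return prod(scores.values())

def weakest_link_performance(scores):
    """Performance bounded by the weakest factor."""
    return min(scores.values())

if __name__ == "__main__":
    print("multiplicative:", round(multiplicative_performance(factors), 3))
    print("weakest link:  ", weakest_link_performance(factors))
```

Either functional form expresses the same intuition: strengthening an already strong factor helps much less than repairing the weakest one.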

So what else goes into determining great organizational performance besides the quality of the individuals who make it up? A few things are obvious. Of course it is true that having individual participants who have the right kinds of talents is crucial. A technology company needs excellent engineers and designers. But it also needs highly talented marketing professionals, financial experts, and strategic planners. And it needs these talented specialists in a number of critical areas. Why did Xerox PARC fail in spite of the excellence of its scientists and engineers, and the innovativeness of the products that they created? Because the organization lacked the ability — and the individuals — to turn those ideas and innovations into products that the public wanted to buy. (Here is Malcolm Gladwell’s take on Xerox PARC in the New Yorker; link.)

A key aspect of the problem of designing and tuning an organization’s features to ensure high performance is being able to determine with precision what the mission of the organization is. What is the organization fundamentally established to bring about? If the Red Cross is an organization that is intended to deliver resources and assistance to communities that have suffered extensive disasters, that implies one set of functional needs to be satisfied by divisions and specialists within the organization. If it is primarily a fund-raising and marketing organization aimed at raising public awareness and generating large amounts of public donations to be used for disaster relief, that implies a different set of internal specialists. So being clear about the overall mission of the organization is crucial for the designers; only then can they skillfully assemble a set of divisions, specialists, and work processes that work together effectively to carry out the tasks necessary to achieve the mission.

This point highlights the fact that an organization needs to have a functional structure in which individuals or departments carry out specialized tasks. These sub-units depend upon the work of other departments or individuals, and the functional structure of the organization can be more or less appropriate to the task. The organization succeeds to the extent that its component parts succeed in identifying the needs and opportunities facing the organization and in carrying out their roles in responding to those needs and opportunities. Poor performance in one department can undermine the organization’s ability to carry out its mission — even if other departments are highly successful in carrying out their tasks. Charles Perrow highlights this kind of organizational deficiency in Normal Accidents: Living with High-Risk Technologies.

Here is another important variable in bringing about organizational effectiveness: the procedures within the organization that are designed to encourage high-quality effort and results on the part of the individuals who occupy roles throughout the organization. One line of response to this issue flows through a system of supervision and assessment. This approach emphasizes measurement of performance, along with positive and negative incentives to motivate satisfactory performance. Supervisors are tasked to ensure that employees are exerting themselves and that their work product is of satisfactory quality.

But a different response proceeds through a theory of internal motivation. Leaders and supervisors encourage high-quality effort and achievement by articulating the valuable goals that the organization is pursuing and by offering employees the reward of participating in effective work that they care about. This positive motivational feature is strengthened if the organization visibly maintains its commitment to treat its employees fairly and decently. If an employee is proud to work for Ben and Jerry’s, he or she is strongly motivated to make the best contribution possible to the work of the company. In a nutshell this is the theory that underlies the very interesting literature of positive organizational scholarship (Kim Cameron and Gretchen Spreitzer, The Oxford Handbook of Positive Organizational Scholarship).

A fifth facet of organizational performance plainly has to do with internal communication, coordination, and collaboration. The eventual success or failure of an organizational initiative will depend on the activities of individuals and units spread throughout the organization. The work of those units can be made more or less effective by the ease and seriousness with which they are able to communicate with each other. Suppose a car company is designing a new model. Many units will be involved in bringing the design to fruition. If the body designers, the power train designers, and the manufacturing engineers haven’t talked to each other, there is a likelihood that solutions chosen by one set of specialists will create major problems for the other specialists. (The Saab 900 of the late 1970s was a beautiful and high-performing vehicle; but because the design process had not taken into account the need for convenient servicing, it was necessary to remove the engine to carry out some common kinds of repair.) Thomas Hughes provides an excellent analysis of the organizational deficiencies of the design process used in the United States military aerospace sector in the 1950s and 1960s in Rescuing Prometheus: Four Monumental Projects That Changed the Modern World. Here is his comparison of good and bad organizational forms:

The top diagram is entirely hierarchical, with decision-makers at the top deciding the flow of work below and essentially no communication across sub-units. The bottom diagram, by contrast, involves a great deal of internal communication, allowing for adjustment of design and timing decisions so that the eventual plan has the greatest likelihood of success. The latter permits the implementation of systems engineering rather than component engineering. Hughes also provides a depiction of what happens when an organization lacks good internal communication and coordination.
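A minimal sketch of the structural contrast Hughes draws (the five units below are hypothetical stand-ins, not his actual diagrams) is to count how many pairs of units can communicate directly under a pure hierarchy versus a hierarchy supplemented by lateral links among interdependent groups.

```python
# Hypothetical five-unit project: one program office and four design groups.
units = ["program_office", "airframe", "propulsion", "guidance", "ground_support"]

# Pure hierarchy: every message flows through the program office.
hierarchy = {("program_office", u) for u in units[1:]}

# Systems-engineering style: add lateral links between interdependent groups.
lateral = {("airframe", "propulsion"), ("propulsion", "guidance"),
           ("airframe", "ground_support"), ("guidance", "ground_support")}
networked = hierarchy | lateral

def direct_channels(links):
    """Count pairs of units that can talk without routing through a superior."""
    return len(links)

if __name__ == "__main__":
    print("hierarchy only:    ", direct_channels(hierarchy), "direct channels")
    print("with lateral links:", direct_channels(networked), "direct channels")
    # More direct channels give interdependent design groups a chance to
    # discover conflicts before they propagate to the top of the hierarchy.
```

The count itself is not the point; the point is that the two forms give interdependent design groups very different opportunities to surface and resolve conflicts early.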

What this implies is that improving organizational performance is a bit like tuning a piano: we need to continually adjust the factors (motivation, collaboration, mission, leadership, specialization) in such a way as to create a joint system of activity that succeeds at a high level in creating the desired results.

(I used images of musical ensembles to open this topic. But how good is the analogy? Actually, it is not a particularly good analogy. The issue of the quality of the players is obviously relevant, and quality of leadership has an exact parallel in the symphony orchestra. But the task of giving an excellent performance of Dvorak’s ninth symphony is much simpler than that of bringing about a successful intervention by FEMA in response to a hurricane. There is a score for the musicians; there is a central conductor who keeps them in step with each other; and most crucially, there is no uncertainty about what to do once the third movement is finished; the musicians turn the page and move on to the fourth movement. Perhaps the jazz ensemble pictured above is a slightly better metaphor for a complex organization in that it leaves room for improvisation by the players. But even here, the activity is orders of magnitude simpler and easier to coordinate than a large organization whose actions take place over months or years, dispersed over thousands of miles and multiple sites of activity. So organizational effectiveness is a more complex process than musical coordination and performance.)

(I emphasize here the importance of collaboration as a variable in organizational effectiveness. This suggests examples drawn from team activities like soccer or a research laboratory. But some experts doubt the idea that teams are always superior to more hierarchical structures. Here is J. Richard Hackman on the positives and negatives of teams (link).)
