Organizations and dysfunction

A recurring theme in recent months in Understanding Society is organizational dysfunction and the organizational causes of technology failure. Helmut Anheier’s volume When Things Go Wrong: Organizational Failures and Breakdowns is highly relevant to this topic, and it makes for very interesting reading. The volume includes contributions by a number of leading scholars in the sociology of organizations.

And yet the volume seems to miss the mark in some important ways. For one thing, it is unduly focused on the question of “mortality” of firms and other organizations. Bankruptcy and organizational death are frequent synonyms for “failure” here. This frame is evident in the summary the introduction offers of existing approaches in the field: organizational aspects, political aspects, cognitive aspects, and structural aspects. All bring us back to the causes of extinction and bankruptcy in a business organization. Further, the approach highlights the importance of internal conflict within an organization as a source of eventual failure. But it gives no insight into the internal structure and workings of the organization itself, the ways in which behavior and internal structure function to systematically produce certain kinds of outcomes that we can identify as dysfunctional.

Significantly, however, dysfunction does not routinely lead to the death of a firm. (Seibel’s contribution to the volume raises this possibility, which he refers to as “successful failures”.) This is a familiar observation from political science: what looks dysfunctional from the outside may be perfectly well tuned to a different set of interests (for example, in Robert Bates’s account of pricing boards in Africa in Markets and States in Tropical Africa: The Political Basis of Agricultural Policies). In their introduction to this volume Anheier and Moulton refer to this possibility as a direction for future research: “successful for whom, a failure for whom?” (14).

The volume tends to look at success and failure in terms of profitability and the satisfaction of stakeholders. But we can define dysfunction in a more granular way by linking characteristics of performance to the perceived “purposes and goals” of the organization. A regulatory agency exists in order to effectively protect the health and safety of the public. In this kind of case, failure is any outcome in which the agency flagrantly and avoidably fails to prevent a serious harm — release of radioactive material, contamination of food, a building fire resulting from defects that should have been detected by inspection. If the agency does not perform this function as well as it might, then it is dysfunctional.

Why do dysfunctions persist in organizations? It is possible to identify several causes. The first is that a dysfunction from one point of view may well be a desirable feature from another point of view. The lack of an authoritative safety officer in a chemical plant may be thought to be dysfunctional if we take the safety of workers and the public to be a primary goal of the plant (link). But if profitability and cost-savings are the primary goals from the point of view of the stakeholders, then the cost-benefit analysis may favor the lack of the safety officer.

Second, there may be internal failures within an organization that are beyond the reach of any executive or manager who might want to correct them. The complexity and loose-coupling of large organizations militate against house cleaning on a large scale.

Third, there may be powerful factions within an organization for whom the “dysfunctional” feature is an important component of their own set of purposes and goals. Fligstein and McAdam argue for this kind of disaggregation with their theory of strategic action fields (link). By disaggregating purposes and goals to the various actors who figure in the life cycle of the organization – founders, stakeholders, executives, managers, experts, frontline workers, labor organizers – it is possible to see the organization as a whole as simply the aggregation of the multiple actions and purposes of the actors within and adjacent to the organization. This aggregation does not imply that the organization is carefully adjusted to serve the public good or to maximize efficiency or to protect the health and safety of the public. Rather, it suggests that the resultant organizational structure serves the interests of the various actors to the fullest extent each actor is able to manage.

Consider the account offered by Thomas Misa of the decline of the steel industry in the United States in the first part of the twentieth century in A Nation of Steel: The Making of Modern America, 1865-1925. Misa’s account seems to point to a massive dysfunction in the steel corporations of the inter-war period, a deliberate and sustained failure to invest in research on new steel technologies in metallurgy and production. Misa argues that the great steel corporations — US Steel in particular — failed to remain competitive in their industry in the early years of the twentieth century because management persistently pursued short-term profits and financial advantage through domination of the market, relying on that dominance rather than on research and development as the source of revenue and profits.

In short, U.S. Steel was big but not illegal. Its price leadership resulted from its complete dominance in the core markets for steel…. Indeed, many steelmakers had grown comfortable with U.S. Steel’s overriding policy of price and technical stability, which permitted them to create or develop markets where the combine chose not to compete, and they testified to the court in favor of the combine. The real price of stability … was the stifling of technological innovation. (255)

The result was that the modernized steel industries in Europe leap-frogged the previous US advantage, eventually leaving the United States with unviable production technology.

At the periphery of the newest and most promising alloy steels, dismissive of continuous-sheet rolling, actively hostile to new structural shapes, a price leader but not a technical leader: this was U.S. Steel. What was the company doing with technological innovation? (257)

Misa is interested in arriving at a better way of understanding the imperatives leading to technical change — better than neoclassical economics and labor history. His solution highlights the changing relationships that developed between industrial consumers and producers in the steel industry.

We now possess a series of powerful insights into the dynamics of technology and social change. Together, these insights offer the realistic promise of being better able, if we choose, to modulate the complex process of technical change. We can now locate the range of sites for technical decision making, including private companies, trade organizations, engineering societies, and government agencies. We can suggest a typology of user-producer interactions, including centralized, multicentered, decentralized, and direct-consumer interactions, that will enable certain kinds of actions while constraining others. We can even suggest a range of activities that are likely to effect technical change, including standards setting, building and zoning codes, and government procurement. Furthermore, we can also suggest a range of strategies by which citizens supposedly on the “outside” may be able to influence decisions supposedly made on the “inside” about technical change, including credibility pressure, forced technology choice, and regulatory issues. (277-278)

In fact Misa places the dynamic of the relationship between producers and large consumers at the center of the imperatives towards technological innovation:

In retrospect, what was wrong with U.S. Steel was not its size or even its market power but its policy of isolating itself from the new demands from users that might have spurred technical change. The resulting technological torpidity that doomed the industry was not primarily a matter of industrial concentration, outrageous behavior on the part of white- and blue-collar employees, or even dysfunctional relations among management, labor, and government. What went wrong was the industry’s relations with its consumers. (278)

This relative “callous treatment of consumers” was profoundly harmful when international competition gave large industrial users of steel a choice. When US Steel had market dominance, large industrial users had little choice; but this situation changed after WWII. “This favorable balance of trade eroded during the 1950s as German and Japanese steelmakers rebuilt their bombed-out plants with a new production technology, the basic oxygen furnace (BOF), which American steelmakers had dismissed as unproven and unworkable” (279). Misa quotes a president of a small steel producer: “The Big Steel companies tend to resist new technologies as long as they can … They only accept a new technology when they need it to survive” (280).

*****

Here is an interesting table from Misa’s book that sheds light on some of the economic and political history of the United States since the post-war period, leading right up to the populist politics of 2016 in the Midwest. The table provides mute testimony to the decline of the rustbelt industrial cities. Michigan, Illinois, Ohio, Pennsylvania, and western New York account for 83% of the steel production it records. When American producers lost the competitive battle for steel production in the 1980s, the Rustbelt suffered disproportionately, and eventually blue-collar workers lost their places in the affluent economy.

Nuclear power plant siting decisions

Readers may be skeptical about the practical importance of the topic of nuclear power plant siting decisions, since very few new nuclear plants have been proposed or approved in the United States for decades. However, the topic is one for which there is an extensive historical record, and it is a process that illuminates the challenge government faces in balancing risk and benefit, private gain and public cost. Moreover, siting inherently raises issues of concern both to the public in general (throughout a state or region of the country) and to the citizens who live in close proximity to the recommended site. The NIMBY problem is unavoidable — it is someone’s backyard, and it is a worrisome neighbor. So this is a good case through which to think creatively about the responsibilities of government for ensuring the public good in the face of risky private activity, and about the detailed institutions of regulation and oversight that would make wise public outcomes more likely.

I’ve been thinking quite a bit recently about technology failure, government regulation, and risky technologies, and there is a lot to learn about these subjects by looking at the history of nuclear power in the United States. Two books in particular have been interesting to me. Neither is particularly recent, but both shed valuable light on the public-policy context of nuclear decision-making. The first is Joan Aron’s account of the processes that led to the cancellation of the Shoreham nuclear power plant on Long Island (Licensed To Kill?: The Nuclear Regulatory Commission and the Shoreham Power Plant), and the second is Donald Stever, Jr.’s account of the licensing process for the Seabrook nuclear power plant in Seabrook and the Nuclear Regulatory Commission: The Licensing of a Nuclear Power Plant. Both are fascinating books and well worth studying as a window into government decision-making and regulation. Stever’s book is especially interesting because it is a highly capable analysis of the licensing process, both at the state level and at the level of the NRC, and because Stever himself was a participant: as an assistant attorney general in New Hampshire he was assigned the role of Counsel for the Public throughout the state licensing process.

Joan Aron’s 1997 book Licensed to Kill? is a detailed case study of the effort to establish the Shoreham nuclear power plant on Long Island in the 1980s. LILCO had proposed the plant to meet rising demand for electricity on Long Island as population and energy use grew. And Long Island is a long, narrow island on which traffic congestion at certain times of day is legendary. Evacuation planning was both crucial and, in the end, perhaps impossible.

This is an intriguing story, because it led eventually to the NRC’s cancellation of the plant’s operating license after construction was complete. And the cancellation resulted largely from the effectiveness of public opposition and interest-group political pressure. Aron provides a detailed account of the decisions made by the public utility company LILCO, the AEC and NRC, New York state and local authorities, and citizen activist groups that led to the costliest failed investment in the history of nuclear power in the United States.

In 1991 the NRC made the decision to rescind the operating license for the Shoreham plant, after completion at a cost of over $5 billion but before it had generated a kilowatt-hour of electricity.

Aron’s basic finding is that the project collapsed in costly fiasco because of a loss of trust among the diverse stakeholders: LILCO, the Long Island public, state and local agencies and officials, scientific experts, and the Nuclear Regulatory Commission. The Long Island tabloid Newsday played a role as well, sensationalizing every step of the process and contributing to public distrust. Aron finds that the NRC and LILCO underestimated the need for full analysis of the safety and emergency-preparedness issues raised by the plant’s design, including the issue of evacuating an island of two million people, with few routes of escape, in the event of disaster. LILCO’s decision to upscale the capacity of the plant in the middle of the process contributed to the failure as well. And the Three Mile Island disaster in 1979 gave new urgency to the concerns of citizens living within fifty miles of the Shoreham site about the risks of a nuclear plant.

As we have seen, Shoreham failed to operate because of intense public opposition, in which the governor played a key role, inspired in part by the utility’s management incompetence and distrust of the NRC. Inefficiencies in the NRC licensing process were largely irrelevant to the outcome. The public by and large ignored NRC’s findings and took the nonsafety of the plant for granted. (131)

The most influential issue was public safety: would it be possible to perform an orderly evacuation of the population near the plant in the event of a serious emergency? Clarke and Perrow (included in Helmut Anheier, ed., When Things Go Wrong: Organizational Failures and Breakdowns) provide an extensive analysis of the failures that occurred during tests of the emergency evacuation plan designed by LILCO. As they demonstrate, the errors that occurred during the evacuation test were both “normal” and potentially deadly.

One thing that comes out of both books is the fact that the commissioning and regulatory processes are far from ideal examples of the rational development of sound public policy. Rather, business interests, institutional shortcomings, lack of procedural knowledge by committee chairs, and dozens of other factors lead to outcomes that appear to fall far short of what the public needs. But in addition to ordinary intrusions into otherwise rational policy deliberations, there are other reasons to believe that decision-making is more complicated and less rational than a simple model of rational public policy formation would suggest. Every decision-maker brings a set of “framing assumptions” about the reality concerning which he or she is deliberating. These framing assumptions impose an unavoidable kind of cognitive bias into collective decision-making. A business executive brings a worldview to the question of regulation of risk that is quite different from that of an ecologist or an environmental activist. This is different from the point often made about self-interest; our framing assumptions do not feel like expressions of self-interest, but rather simply secure convictions about how the world works and what is important in the world. This is one reason why the work of social scientists like Scott Page (The Difference: How the Power of Diversity Creates Better Groups, Firms, Schools, and Societies) on the value of diversity in problem-solving and decision-making is so important: by bringing multiple perspectives and cognitive frames to a problem, we are more likely to get a balanced decision that gives appropriate weight to the legitimate interests and concerns of all involved.

Here is an interesting concrete illustration of cognitive bias (with a generous measure of self-interest as well) in Stever’s discussion of siting decisions for nuclear power plants:

From the time a utility makes the critical in-house decision to choose a site, any further study of alternatives is necessarily negative in approach. Once sufficient corporate assets have been sunk into the chosen site to produce data adequate for state site review, the company’s management has a large enough stake in it to resist suggestions that a full study of site alternatives be undertaken as a part of the state (or for that matter as a part of the NEPA) review process. Hence, the company’s methodological approach to evaluating alternates to the chosen site will always be oriented toward the desired conclusion that the chosen site is superior. (Stever 1980: 30)

This is the bias of sunk costs, both inside the organization and in the cognitive frames of independent decision makers in state agencies.

Stever’s central point here is a very important one: the pace of site selection favors the energy company’s choices over the concerns and preferences of affected groups, because the company has already dedicated substantial resources to developing the preferred site proposal. Likewise, scientific experts have a difficult time making their concerns about habitat or traffic flow heard in this context.

But here is a crucial thing to observe: the siting decision is only one of dozens in the development of a new power plant, which is itself only one of hundreds of government/business decisions made every year. What Stever describes is a structural bias in the regulatory process, not a one-off flaw. At bottom, this is the task that government faces when considering the creation of a new nuclear power plant: “to assess the various public and private costs and benefits of a site proposed by a utility” (32); and Stever’s analysis makes it doubtful that existing public processes do this in a consistent and effective way. Stever argues that government needs to have more of a role in site selection, not less, as pro-market advocates demand: “The kind of social and environmental cost accounting required for a balanced initial assessment of, and development of, alternative sites should be done by a public body acting not as a reviewer of private choices, but as an active planner” (32).

Notice how this scheme shifts the pace and process from the company to the relevant state agency. The preliminary site selection and screening is done by a state site planning agency, with input then invited from utility companies and interest groups, and with a formal environmental assessment. This places the power squarely in the hands of the government agency rather than the private owner of the plant — reflecting the overriding interest the public has in ensuring health, safety, and environmental controls.

Stever closes a chapter on regulatory issues with these cogent recommendations (38-39):

  1. Electric utility companies should not be responsible for decisions concerning early nuclear-site planning.
  2. Early site identification, evaluation, and inventorying is a public responsibility that should be undertaken by a public agency, with formal participation by utilities and interest groups, based upon criteria developed by the state legislature.
  3. Prior to the use of a particular site, the state should prepare a complete environmental assessment for it, and hold adjudicatory hearings on contested issues.
  4. Further effort should be made toward assessing the public risk of nuclear power plant sites.
  5. In areas like New England, characterized by geographically small states and high energy demand, serious efforts should be made to develop regional site planning and evaluation.
  6. Nuclear licensing reform should focus on the quality of decision-making.
  7. There should be a continued federal presence in nuclear site selection, and the resolution of environmental problems should not be delegated entirely to the states. 

(It is very interesting to me that I have not been able to locate a full organizational study of the Nuclear Regulatory Commission itself.)

Is the Xerox Corporation supervenient?

Supervenience is the view that the properties of some composite entity B are wholly fixed by the properties and relations of the items A of which it is composed (link, link). The transparency of glass supervenes upon the properties of the atoms of silicon and oxygen of which it is composed and their arrangement.

Can the same be said of a business firm like Xerox when we consider its constituents to be its employees, stakeholders, and other influential actors and their relations and actions? (Call that total field of factors S.) Or is it possible that exactly these actors at exactly the same time could have manifested a corporation with different characteristics?

 
Let’s say the organizational properties we are interested in include internal organizational structure, innovativeness, market adaptability, and level of internal trust among employees. And S consists of the specific individuals and their properties and relations that make up the corporation at a given time. Could this same S have given rise to a Xerox with different properties?

One thing is clear. If a highly similar group of individuals had been involved in the creation and development of Xerox, it is entirely possible that the organization would have been substantially different today. We could expect that contingent events and a high level of path dependency would have led to substantial differences in organization, functioning, and internal structure. So the company does not supervene upon a generic group of actors defined in terms of a certain set of beliefs, goals, and modes of decision making over the history of its founding and development. I have sometimes thought that this path dependency is itself enough to refute supervenience.

But the claim of supervenience is not a temporal or diachronic claim, but instead a synchronic claim: the current features of structure, causal powers, functioning, etc., of the higher-level entity today are thought to be entirely fixed by the supervenience base (in this case, the particular individuals and their relations and actions). Putting the idea in terms of possible-world theory, there is no possible world in which exactly similar individuals in exactly similar states of relationship and action would underlie a business firm Xerox* which had properties different from the current Xerox firm.
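
Stated a bit more formally, here is a minimal sketch using the standard strong-supervenience schema, with S taken as the base of individual-level properties and relations and B as the set of organizational properties discussed above:

```latex
% "No B-difference without an S-difference": if two total configurations of
% the firm agree on every base property F in S, they agree on every
% organizational property G in B.
\forall x \,\forall y \;
\Big[ \big( \forall F \in S :\ F(x) \leftrightarrow F(y) \big)
\;\rightarrow\; \big( \forall G \in B :\ G(x) \leftrightarrow G(y) \big) \Big]
```

Here x and y range over total synchronic states of the firm across possible worlds, not over histories.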

 
One way in which this counterfactual might be true is if a property P of the corporation depended on the states of the agents plus something else — say, the conductivity of copper in its pure state. In the real world W copper is highly conductive, while in W* copper is non-conductive. And in W*, let’s suppose, Xerox has property P* rather than P. On this scenario Xerox does not supervene upon the states of the actors, since these states are identical in W and W*. This is because dependence on the conductivity of copper makes a difference that is not reflected in a difference in the states of the actors.
 
But this is a pretty hypothetical case. We would only be justified in thinking that Xerox does not supervene on S if we had a credible candidate for another property that would make such a difference, and I am hard pressed to come up with one.
 
There is another possible line of response for the hardcore supervenience advocate in this case. I’ve assumed the conductivity of copper makes a difference to the corporation without making a difference for the actors. But I suppose it might be maintained that this is impossible: only the states of the actors affect the corporation, since they constitute the corporation; so the scenario I describe is impossible. 
 
The upshot seems to be this: there is no way of resolving the question at the level of pure philosophy. The best we can do is to do concrete empirical work on the actual causal and organizational processes through which the properties of the whole are constituted through the actions and thoughts of the individuals who make it up.

But here is a deeper concern. What makes supervenience minimally plausible in the case of social entities is the insistence on synchronic dependence. But generally speaking, we are always interested in the diachronic behavior and evolution of a social entity. And here the idea of path dependence is more credible than the idea of moment-to-moment dependency on the “supervenience base”. We might say that the property of “innovativeness” displayed by the Xerox Corporation at some periods in its history supervenes moment-to-moment on the actions and thoughts of its constituent individuals; but we might also say that this fact does not explain the higher-level property of innovativeness. Instead, some set of events in the past set the corporation on a path that favored innovation; this corporate culture or climate influenced the selection and behavior of the individuals who make it up; and the day-to-day behavior reflects both the path-dependent history of its higher-level properties and the current configuration of its parts.

(Thanks, Raphael van Riel, for your warm welcome to the Institute of Philosophy at the University of Duisburg-Essen, and for the many stimulating conversations we had on the topics of supervenience, generativity, and functionalism.)

Deficiencies of practical rationality in organizations

Suppose we are willing to take seriously the idea that organizations possess a kind of intentionality — beliefs, goals, and purposive actions — and suppose that we believe that the microfoundations of these quasi-intentional states depend on the workings of individual purposive actors within specific sets of relations, incentives, and practices. How does the resulting form of “bureaucratic intelligence” compare with human thought and action?

There is a major set of differences between organizational “intelligence” and human intelligence that turn on the unity of human action compared to the fundamental disunity of organizational action. An individual human being gathers a set of beliefs about a situation, reflects on a range of possible actions, and chooses a line of action designed to bring about his/her goals. An organization is disjointed in each of these activities. The belief-setting part of an organization usually consists of multiple separate processes culminating in an amalgamated set of beliefs or representations. And this amalgamation often reflects deep differences in perspective and method across various sub-departments. (Consider inputs into an international crisis incorporating assessments from intelligence, military, and trade specialists.)

Second, individual intentionality possesses a substantial degree of practical autonomy. The individual assesses and adopts the set of beliefs that seem best to him or her in current circumstances. The organization, in its belief-acquisition, is subject to conflicting interests, both internal and external, that bias the belief set in one direction or the other. (This is a central theme in the work of scholars of science policy like Naomi Oreskes.) The organization is not autonomous in its belief formation processes.

Third, an individual’s actions have a reasonable level of consistency and coherence over time. The individual seeks to avoid being self-defeating by doing X and Y while knowing that X undercuts Y. An organization is entirely capable of pursuing a suite of actions which embody exactly this kind of inconsistency, precisely because the actions chosen are the result of multiple disagreeing sub-agencies and officers.

Fourth, we have some reason to expect a degree of stability in the goals and values that underlie actions by an individual. But organizations, exactly because their behavior is a joint product of sub-agents with conflicting plans and goals, are entirely capable of rapid change of goals and values. Deepening this instability are the fluctuating powers and interests of external stakeholders, who apply pressure for different values and goals over time.

Finally, human thinkers are potentially epistemic thinkers — they are at least potentially capable of following disciplines of analysis, reasoning, and evidence in their practical engagement with the world. By contrast, because of the influence of interests, both internal and external, organizations are perpetually subject to the distortion of belief, intention, and implementation by actors who have an interest in the outcome of the project. And organizations have little ability to apply rational standards to their processes of belief formation, intention formation, and implementation. Organizational intentionality lacks overriding rational control.

Consider more briefly the topic of action. Human actors suffer various deficiencies of performance when it comes to purposive action, including weakness of the will and self deception. But organizations are altogether less capable of effectively mounting the steps needed to fully implement a plan or a complicated policy or action. This is because of the looseness of linkages that exist between executive and agent within an organization, the perennial possibility of principal-agent problems, and the potential interference with performance created by interested parties outside the organization.

This line of thought suggests that organizations lack “unity of apperception and intention”. There are multiple levels and zones of intention formation, and much of this plurality persists throughout real processes of organizational thinking. And this disunity affects belief, intention, and action alike. Organizations are not univocal at any point. Belief formation, intention formation, and action remain fragmented and multivocal.

These observations are somewhat parallel to the paradoxes of social choice that arise for voting systems and social choice functions. Kenneth Arrow demonstrated that it is impossible to design a voting system that guarantees consistency of choice by a group of individually consistent voters. The analogy here is the idea that there is no organizational design that guarantees a high degree of consistency and rationality in large organizational decision processes at any stage of quasi-intentionality, including belief acquisition, policy formulation, and policy implementation.
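
To make the analogy concrete, here is a minimal sketch in Python of the Condorcet cycle that underlies Arrow-style impossibility results. The sub-unit names ("operations", "finance", "engineering") and the policies A, B, C are hypothetical illustrations, not drawn from any example in the text: three internally consistent sub-units, aggregated by pairwise majority, yield a cyclic "organizational preference".

```python
from itertools import permutations

# Each (hypothetical) sub-unit ranks policies A, B, C from most to least preferred.
rankings = {
    "operations":  ["A", "B", "C"],
    "finance":     ["B", "C", "A"],
    "engineering": ["C", "A", "B"],
}

def majority_prefers(x, y):
    """Return True if a majority of sub-units rank policy x above policy y."""
    votes = sum(1 for r in rankings.values() if r.index(x) < r.index(y))
    return votes > len(rankings) / 2

# Pairwise majority comparisons over all ordered pairs of policies.
for x, y in permutations("ABC", 2):
    if majority_prefers(x, y):
        print(f"majority prefers {x} over {y}")

# Prints: A over B, B over C, and C over A -- a cycle, so the aggregate
# "organizational preference" names no stable best policy even though each
# sub-unit's own ranking is perfectly consistent.
```

Every sub-unit is transitive in its own rankings, yet the aggregated result is cyclic; the same structure reappears whenever an organization's "decision" is the joint product of internally consistent but disagreeing parts.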

The mind of government

We often speak of government as if it has intentions, beliefs, fears, plans, and phobias. This sounds a lot like a mind. But this impression is fundamentally misleading. “Government” is not a conscious entity with a unified apperception of the world and its own intentions. So it is worth teasing out the ways in which government nonetheless arrives at “beliefs”, “intentions”, and “decisions”.

Let’s first address the question of the mythical unity of government. In brief, government is not one unified thing. Rather, it is an extended network of offices, bureaus, departments, analysts, decision-makers, and authority structures, each of which has its own reticulated internal structure.

This has an important consequence. Instead of asking “what is the policy of the United States government towards Africa?”, we are driven to ask subordinate questions: what are the policies towards Africa of the State Department, the Department of Defense, the Department of Commerce, the Central Intelligence Agency, or the Agency for International Development? And for each of these departments we are forced to recognize that each is itself a large bureaucracy, with sub-units that have chosen or adapted their own working policy objectives and priorities. There are chief executives at a range of levels — President of the United States, Secretary of State, Secretary of Defense, Director of the CIA — and each often has the aspiration of directing his or her organization as a tightly unified and purposive unit. But it is perfectly plain that the behavior of functional units within agencies is only loosely controlled by the will of the executive. This does not mean that executives have no control over the activities and priorities of subordinate units. But it does reflect a simple and unavoidable fact about large organizations. An organization is more like a slime mold than it is like a control algorithm in a factory.

This said, organizational units at all levels arrive at something analogous to beliefs (assessments of fact and probable future outcomes), assessments of priorities and their interactions, plans, and decisions (actions to take in the near and intermediate future). And governments make decisions at the highest level (leave the EU, raise taxes on fuel, prohibit immigration from certain countries, …). How does the analytical and factual part of this process proceed? And how does the decision-making part unfold?

One factor is particularly evident in the current political environment in the United States. Sometimes the analysis and decision-making activities of government are short-circuited and taken by individual executives without an underlying organizational process. A president arrives at his view of the facts of global climate change based on his “gut instincts” rather than an objective and disinterested assessment of the scientific analysis available to him. An Administrator of the EPA acts to eliminate long-standing environmental protections based on his own particular ideological and personal interests. A Secretary of the Department of Energy takes leadership of the department without requesting a briefing on any of its current projects. These are instances of the dictator strategy (in the social-choice sense), where a single actor substitutes his will for the collective aggregation of beliefs and desires associated with both bureaucracy and democracy. In this instance the answer to our question is a simple one: in cases like these government has beliefs and intentions because particular actors have beliefs and intentions and those actors have the power and authority to impose their beliefs and intentions on government.

The more interesting cases involve situations where there is a genuine collective process through which analysis and assessment of facts and priorities take place, and through which strategies are considered and ultimately adopted. Agencies usually make decisions through extended and formalized processes. There is generally an organized process of fact gathering and scientific assessment, followed by an assessment of various policy options with public exposure. Finally a policy is adopted (the moment of decision).

The decision by the EPA to ban DDT in 1972 is illustrative (link, link, link). This was a decision of a government agency that thereby became the will of government. It was the result of several important sub-processes: citizen and NGO activism about the possible toxic harms created by DDT, non-governmental scientific research assessing the toxicity of DDT, an internal EPA process designed to assess the scientific conclusions about the environmental and human-health effects of DDT, an analysis of the competing priorities involved in this issue (farming, forestry, and malaria control versus public health), and a recommendation to the Administrator, ultimately adopted, that the priority of public health and environmental safety was weightier than the economic interests served by the use of the pesticide.

Other examples of agency decision-making follow a similar pattern. The development of policy concerning science and technology is particularly interesting in this context. Consider, for example, Susan Wright (link) on the politics of regulation of recombinant DNA. This issue is explored more fully in her book Molecular Politics: Developing American and British Regulatory Policy for Genetic Engineering, 1972-1982. This is a good case study of “government making up its mind”. Another interesting case study is the development of US policy concerning ozone depletion; link.

These cases of science and technology policy illustrate two dimensions of the processes through which a government agency “makes up its mind” about a complex issue. There is an analytical component in which the scientific facts and the policy goals and priorities are gathered and assessed. And there is a decision-making component in which these analytical findings are crafted into a decision — a policy, a set of regulations, or a funding program, for example. It is routine in science and technology policy studies to observe that there is commonly a substantial degree of intertwining between factual judgments and political preferences and influences brought to bear by powerful outsiders. (Here is an earlier discussion of these processes; link.)

Ideally we would like to imagine a process of government decision-making that proceeds along these lines: careful gathering and assessment of the best available scientific evidence about an issue through expert specialist panels and sections; careful analysis of the consequences of available policy choices measured against a clear understanding of goals and priorities of the government; and selection of a policy or action that is best, all things considered, for forwarding the public interest and minimizing public harms. Unfortunately, as the experience of government policies concerning climate change in both the Bush administration and the Trump administration illustrates, ideology and private interest distort every phase of this idealized process.

(Philip Tetlock’s Superforecasting: The Art and Science of Prediction offers an interesting analysis of the process of expert factual assessment and prediction. Particularly interesting is his treatment of intelligence estimates.)

Patient safety

An issue which is of concern to anyone who receives treatment in a hospital is the topic of patient safety. How likely is it that there will be a serious mistake in treatment — wrong-site surgery, incorrect medication or radiation dose, exposure to a hospital-acquired infection? The current evidence is alarming. (Martin Makary et al estimate that over 250,000 deaths per year result from medical mistakes — making medical error now the third leading cause of mortality in the United States (link).) And when these events occur, where should we look for assigning responsibility — at the individual providers, at the systems that have been implemented for patient care, at the regulatory agencies responsible for overseeing patient safety?

Medical accidents commonly demonstrate a complex interaction of factors, from the individual provider to the technologies in use to failures of regulation and oversight. We can look at a hospital as a place where caring professionals do their best to improve the health of their patients while scrupulously avoiding errors. Or we can look at it as an intricate system involving the recording and dissemination of information about patients and the administration of procedures to them (surgery, medication, radiation therapy); in this sense a hospital is similar to a factory with multiple intersecting locations of activity. Finally, we can look at it as an organization — a system of division of labor, cooperation, and supervision by large numbers of staff whose joint efforts lead to health and accidents alike. Obviously each of these perspectives is partially correct. Doctors, nurses, and technicians are carefully and extensively trained to diagnose and treat their patients. The technology of the hospital — the digital patient record system, the devices that administer drugs, the surgical robots — can be designed better or worse from a safety point of view. And the social organization of the hospital can be effective and safe, or it can be dysfunctional and unsafe. So all three aspects are relevant both to safe operations and to the possibility of chronic lack of safety.

So how should we analyze the phenomenon of patient safety? What factors distinguish high-safety hospitals from low-safety ones? What lessons can be learned from the study of accidents and mistakes that cumulatively determine a hospital’s patient safety record?

The view that primarily emphasizes expertise and training of individual practitioners is very common in the healthcare industry, and yet this approach is not particularly useful as a basis for improving the safety of healthcare systems. Skill and expertise are necessary conditions for effective medical treatment; but the other two zones of accident space are probably more important for reducing accidents — the design of treatment systems and the organizational features that coordinate the activities of the various individuals within the system.

Dr. James Bagian is a strong advocate for the perspective of treating healthcare institutions as systems. Bagian considers both technical systems characteristics of processes and the organizational forms through which these processes are carried out and monitored. And he is very skilled at teasing out some of the ways in which features of both system and organization lead to avoidable accidents and failures. I recall his description of a safety walkthrough he had done in a major hospital. He said that during the tour he noticed a number of nurses’ stations which were covered with yellow sticky notes. He observed that this is both a symptom and a cause of an accident-prone organization. It means that individual caregivers were obligated to remind themselves of tasks and exceptions that needed to be observed. Far better was to have a set of systems and protocols that made sticky notes unnecessary. Here is the abstract from a short summary article by Bagian on the current state of patient safety:

Abstract

The traditional approach to patient safety in health care has ranged from reticence to outward denial of serious flaws. This undermines the otherwise remarkable advances in technology and information that have characterized the specialty of medical practice. In addition, lessons learned in industries outside health care, such as in aviation, provide opportunities for improvements that successfully reduce mishaps and errors while maintaining a standard of excellence. This is precisely the call in medicine prompted by the 1999 Institute of Medicine report “To Err Is Human: Building a Safer Health System.” However, to effect these changes, key components of a successful safety system must include: (1) communication, (2) a shift from a posture of reliance on human infallibility (hence “shame and blame”) to checklists that recognize the contribution of the system and account for human limitations, and (3) a cultivation of non-punitive open and/or de-identified/anonymous reporting of safety concerns, including close calls, in addition to adverse events.

(Here is the Institute of Medicine study to which Bagian refers; link.)

Nancy Leveson is an aeronautical and software engineer who has spent most of her career devoted to designing safe systems. Her book Engineering a Safer World: Systems Thinking Applied to Safety is a recent presentation of her theories of systems safety. She applies these approaches to problems of patient safety with several co-authors in “A Systems Approach to Analyzing and Preventing Hospital Adverse Events” (link). Here is the abstract and summary of findings for that article:

Objective:

This study aimed to demonstrate the use of a systems theory-based accident analysis technique in health care applications as a more powerful alternative to the chain-of-event accident models currently underpinning root cause analysis methods.

Method:

A new accident analysis technique, CAST [Causal Analysis based on Systems Theory], is described and illustrated on a set of adverse cardiovascular surgery events at a large medical center. The lessons that can be learned from the analysis are compared with those that can be derived from the typical root cause analysis techniques used today.

Results:

The analysis of the 30 cardiovascular surgery adverse events using CAST revealed the reasons behind unsafe individual behavior, which were related to the design of the system involved and not negligence or incompetence on the part of individuals. With the use of the system-theoretic analysis results, recommendations can be generated to change the context in which decisions are made and thus improve decision making and reduce the risk of an accident.

Conclusions:

The use of a systems-theoretic accident analysis technique can assist in identifying causal factors at all levels of the system without simply assigning blame to either the frontline clinicians or technicians involved. Identification of these causal factors in accidents will help health care systems learn from mistakes and design system-level changes to prevent them in the future.

Crucial in this article is this research group’s effort to identify causes “at all levels of the system without simply assigning blame to either the frontline clinicians or technicians involved”. The key result is this: “The analysis of the 30 cardiovascular surgery adverse events using CAST revealed the reasons behind unsafe individual behavior, which were related to the design of the system involved and not negligence or incompetence on the part of individuals.”

Bagian, Leveson, and others make a crucial point: in order to substantially increase the performance of hospitals and the healthcare system more generally when it comes to patient safety, it will be necessary to extend the focus of safety analysis from individual incidents and agents to the systems and organizations through which these accidents were possible. In other words, attention to systems and organizations is crucial if we are to significantly reduce the frequency of medical and hospital mistakes.

(The Makary et al estimate of 250,000 deaths caused by medical error has been questioned on methodological grounds. See Aaron Carroll’s thoughtful rebuttal (NYT 8/15/16; link).)

Nuclear accidents

 
[Diagrams: the Chernobyl reactor before and after the accident]
 

Nuclear fission is one of the world-changing discoveries of the mid-twentieth century. The atomic bomb projects of the United States led to the atomic bombing of Japan in August 1945, and the hope for limitless electricity brought about the proliferation of a variety of nuclear reactors around the world in the decades following World War II. And, of course, nuclear weapons proliferated to other countries beyond the original circle of atomic powers.

Given the enormous energies associated with fission and the dangerous and toxic properties of radioactive components of fission processes, the possibility of a nuclear accident is a particularly frightening one for the modern public. The world has seen the results of several massive nuclear accidents — Chernobyl and Fukushima in particular — and the devastating results they have had on human populations and the social and economic wellbeing of the regions in which they occurred.

Safety is therefore a paramount priority in the nuclear industry, in research labs and in military and civilian applications alike. So what is the situation of safety in the nuclear sector? Jim Mahaffey’s Atomic Accidents: A History of Nuclear Meltdowns and Disasters: From the Ozark Mountains to Fukushima is a detailed and carefully researched attempt to answer this question. And the information he provides is not reassuring. Beyond the celebrated and well-known disasters at nuclear power plants (Three Mile Island, Chernobyl, Fukushima), Mahaffey describes hundreds of accidents involving reactors, research laboratories, weapons plants, and deployed nuclear weapons that have received far less public attention. These accidents resulted in a very low number of lives lost, but their frequency is alarming. They are indeed “normal accidents” (Perrow, Normal Accidents: Living with High-Risk Technologies). For example:

  • a Japanese fishing boat is contaminated by fallout from Castle Bravo test of hydrogen bomb; lots of radioactive fish at the markets in Japan (March 1, 1954) (kl 1706)
  • one MK-6 atomic bomb is dropped on Mars Bluff, South Carolina, after a crew member accidentally pulled the emergency bomb release handle (February 5, 1958) (kl 5774)
  • Fermi 1 liquid sodium plutonium breeder reactor experiences fuel meltdown during startup trials near Detroit (October 4, 1966) (kl 4127)

Mahaffey also provides detailed accounts of the most serious nuclear accidents and meltdowns of the past forty years: Three Mile Island, Chernobyl, and Fukushima.

The safety and control of nuclear weapons is of particular interest. Here is Mahaffey’s summary of “Broken Arrow” events — the loss of atomic and fusion weapons:

Did the Air Force ever lose an A-bomb, or did they just misplace a few of them for a short time? Did they ever drop anything that could be picked up by someone else and used against us? Is humanity going to perish because of poisonous plutonium spread that was snapped up by the wrong people after being somehow misplaced? Several examples will follow. You be the judge. 

Chuck Hansen [U.S. Nuclear Weapons – The Secret History] was wrong about one thing. He counted thirty-two “Broken Arrow” accidents. There are now sixty-five documented incidents in which nuclear weapons owned by the United States were lost, destroyed, or damaged between 1945 and 1989. These bombs and warheads, which contain hundreds of pounds of high explosive, have been abused in a wide range of unfortunate events. They have been accidentally dropped from high altitude, dropped from low altitude, crashed through the bomb bay doors while standing on the runway, tumbled off a fork lift, escaped from a chain hoist, and rolled off an aircraft carrier into the ocean. Bombs have been abandoned at the bottom of a test shaft, left buried in a crater, and lost in the mud off the coast of Georgia. Nuclear devices have been pounded with artillery of a foreign nature, struck by lightning, smashed to pieces, scorched, toasted, and burned beyond recognition. Incredibly, in all this mayhem, not a single nuclear weapon has gone off accidentally, anywhere in the world. If it had, the public would know about it. That type of accident would be almost impossible to conceal. (kl 5527)

There are a few common threads in the stories of accident and malfunction that Mahaffey provides. First, there are failures of training and knowledge on the part of front-line workers. The physics of nuclear fission is often counter-intuitive, and the idea of critical mass does not fully capture the danger of a quantity of fissionable material. The geometry in which the material is stored makes a critical difference to whether it goes critical. Fissionable material is often transported and manipulated in liquid solution, and the shape and configuration of the vessel in which the solution is held makes a difference to the probability of exponential growth of neutron emission — leading to runaway fission of the material. Mahaffey documents accidents in nuclear materials processing plants that resulted from plant workers applying what they knew from industrial plumbing to their efforts to solve basic shop-floor problems. All too often the result was a flash of blue light and the release of a great deal of heat and radioactive material.
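
The standard reactor-physics bookkeeping makes the point about geometry concrete (this is textbook material, not something drawn from Mahaffey's account): the chain reaction becomes self-sustaining when the effective multiplication factor reaches one, and that factor is the product of a term fixed by the material's composition and a non-leakage probability fixed largely by the shape and size of the vessel.

```latex
% k_infinity depends on the composition of the material; P_NL, the probability
% that a neutron does not leak out before causing another fission, depends on
% geometry: a compact sphere leaks fewer neutrons than a thin, spread-out layer.
k_{\mathrm{eff}} = k_{\infty} \, P_{\mathrm{NL}}, \qquad
\begin{cases}
k_{\mathrm{eff}} < 1 & \text{subcritical: the reaction dies out} \\
k_{\mathrm{eff}} = 1 & \text{critical: self-sustaining} \\
k_{\mathrm{eff}} > 1 & \text{supercritical: exponential growth}
\end{cases}
```

On this way of putting it, pouring the same solution from a flat tray into a compact vessel raises the non-leakage probability and can push the multiplication factor past one even though nothing about the material itself has changed; that is the counter-intuitive point the front-line workers in these accidents missed.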

Second, there is a fault at the opposite end of the knowledge spectrum — the tendency of expert engineers and scientists to believe that they can solve complicated reactor problems on the fly. This turned out to be a critical problem at Chernobyl (kl 6859).

The most difficult problem to handle is that the reactor operator, highly trained and educated with an active and disciplined mind, is liable to think beyond the rote procedures and carefully scheduled tasks. The operator is not a computer, and he or she cannot think like a machine. When the operator at NRX saw some untidy valve handles in the basement, he stepped outside the procedures and straightened them out, so that they were all facing the same way. (kl 2057)

There are also clear examples of inappropriate supervision in the accounts shared by Mahaffey. Here is an example from Chernobyl.

[Deputy chief engineer] Dyatlov was enraged. He paced up and down the control panel, berating the operators, cursing, spitting, threatening, and waving his arms. He demanded that the power be brought back up to 1,500 megawatts, where it was supposed to be for the test. The operators, Toptunov and Akimov, refused on grounds that it was against the rules to do so, even if they were not sure why. 

Dyatlov turned on Toptunov. “You lying idiot! If you don’t increase power, Tregub will!”  

Tregub, the Shift Foreman from the previous shift, was officially off the clock, but he had stayed around just to see the test. He tried to stay out of it. 

Toptunov, in fear of losing his job, started pulling rods. By the time he had wrestled it back to 200 megawatts, 205 of the 211 control rods were all the way out. In this unusual condition, there was danger of an emergency shutdown causing prompt supercriticality and a resulting steam explosion. At 1:22:30 a.m., a read-out from the operations computer advised that the reserve reactivity was too low for controlling the reactor, and it should be shut down immediately. Dyatlov was not worried. “Another two or three minutes, and it will be all over. Get moving, boys!” (kl 6887)

This was the turning point in the disaster.

A related fault is the intrusion of political and business interests into the design and conduct of high-risk nuclear actions. Leaders want a given outcome without understanding the technical details of the processes they are demanding; subordinates like Toptunov are eventually cajoled or coerced into taking the problematic actions. The persistence of advocates for liquid sodium breeder reactors represents a higher-level example of the same fault. Associated with this role of political and business interests is an impulse towards secrecy and concealment when accidents occur and deliberate understatement of the public dangers created by an accident — a fault amply demonstrated in the Fukushima disaster.

Atomic Accidents provides a fascinating history of events of which most of us are unaware. The book is not primarily intended to offer an account of the causes of these accidents, but rather the ways in which they unfolded and the consequences they had for human welfare. (Generally speaking his view is that nuclear accidents in North America and Western Europe have had remarkably few human casualties.) And many of the accidents he describes are exactly the sorts of failures that are common in all large-scale industrial and military processes.

(Large-scale technology failure has come up frequently here. See these posts for analysis of some of the organizational causes of technology failure (link, link, link).)

Trust and organizational effectiveness

It is fairly well agreed that organizations require a degree of trust among the participants in order for the organization to function at all. But what does this mean? How much trust is needed? How is trust cultivated among participants? And what are the mechanisms through which trust enhances organizational effectiveness?

The minimal requirements of cooperation presuppose a certain level of trust. As A plans and undertakes a sequence of actions designed to bring about Y, his or her efforts must rely upon the coordination promised by other actors. If A does not have a sufficiently high level of confidence in B’s assurances and compliance, then he will be rationally compelled to choose another series of actions. If Larry Bird had not trusted his teammate Dennis Johnson, the famous steal would not have happened.

 

First, what do we mean by trust in the current context? Each actor in an organization or group has intentions, engages in behavior, and communicates with other actors. Part of communication is often in the form of sharing information and agreeing upon a plan of coordinated action. Agreeing upon a plan in turn often requires statements and commitments from various actors about the future actions they will take. Trust is the circumstance that permits others to rely upon those statements and commitments. We might say, then, that A trusts B just in case —

  • A believes that when B asserts P, this is an honest expression of B’s beliefs.
  • A believes that when B says he/she will do X, this is an honest commitment on B’s part and B will carry it out (absent extraordinary reasons to the contrary).
  • A believes that when B asserts that his/her actions will be guided by his/her best understanding of the purposes and goals of the organization, this is a truthful expression.
  • A believes that B’s future actions, observed and unobserved, will be consistent with his/her avowals of intentions, values, and commitments.
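Read as a definition, these four conditions form a conjunction: A trusts B exactly when A holds all of them. The minimal sketch below renders that structure explicitly; the class and field names are my own illustrative choices, not anything proposed in the text.

```python
from dataclasses import dataclass

@dataclass
class BeliefsAboutB:
    """A's beliefs about another actor B (labels are illustrative only)."""
    asserts_honestly: bool       # B's assertions express B's actual beliefs
    keeps_commitments: bool      # B's stated commitments are honest and will be carried out
    guided_by_org_goals: bool    # B's avowed orientation to the organization's goals is truthful
    acts_consistently: bool      # B's future actions, observed or not, match B's avowals

def a_trusts_b(beliefs: BeliefsAboutB) -> bool:
    """A trusts B just in case all four conditions hold."""
    return (beliefs.asserts_honestly
            and beliefs.keeps_commitments
            and beliefs.guided_by_org_goals
            and beliefs.acts_consistently)

# A doubts that B follows through on commitments, so trust fails on this definition.
print(a_trusts_b(BeliefsAboutB(True, False, True, True)))  # False
```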

So what are some reasons why mistrust might rear its ugly head between actors in an organization? Why might A fail to trust B?

  • A may believe that B’s private interests are driving B’s actions (rather than adherence to prior commitments and values).
  • A may believe that B suffers from weakness of the will, an inability to carry out his honest intentions.
  • A may believe that B manipulates his statements of fact to suit his private interests.
  • Or less dramatically: A may not have high confidence in these features of B’s behavior.
  • B may have no real interest or intention in behaving in a truthful way.

And what features of organizational life and practice might be expected to either enhance inter-personal trust or to undermine it?

Trust is enhanced when individuals have the opportunity to get acquainted with their collaborators in a more personal way: to see, in non-organizational contexts, that they are generally well-intentioned; that they make serious efforts to live up to their stated intentions and commitments; and that they are generally honest. So perhaps there is a rationale for the bonding exercises that many companies undertake for their workers.

Likewise, trust is enhanced by the presence of a shared and practiced commitment to the value of trustworthiness. An organization can also enhance trust by performing the actions that its participants expect it to perform. For example, an organization that abruptly and without consultation ends an important employee benefit undermines its employees’ confidence that the organization has their best interests at heart. This abrogation of prior obligations may in turn lead individuals to behave in a less trustworthy way, and lead others to have lower levels of trust in each other.

How does enhancing trust promise to bring about higher levels of organizational effectiveness? Fundamentally this comes down to the value of teamwork and the burden of unnecessary transaction costs. If every expense report requires investigation, the resources spent on accountants will be much greater than in a situation where only the outlying reports are questioned. If each vice president needs to defend himself or herself against the possibility that another vice president is conspiring against him or her, then less time and energy are available to do the work of the organization. If the CEO does not have high confidence that her executive team will work wholeheartedly to bring about the successful implementation of a risky investment, then the CEO will choose less risky investments.
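The expense-report point is really a piece of arithmetic about monitoring costs. The sketch below compares the two policies under wholly invented numbers; the report volume, cost per audit, and outlier rate are all assumptions, not figures from the post.

```python
# Hypothetical comparison of audit burden under a low-trust and a high-trust policy.
# Every figure below is an invented assumption used only to illustrate the point.

n_reports = 10_000           # expense reports filed per year (assumed)
cost_per_audit = 25.0        # reviewer cost per audited report, in dollars (assumed)
outlier_rate = 0.03          # share of reports flagged as outliers (assumed)

low_trust_cost = n_reports * cost_per_audit                  # investigate every report
high_trust_cost = n_reports * outlier_rate * cost_per_audit  # question only the outliers

print(f"Audit every report:  ${low_trust_cost:,.0f}")                    # $250,000
print(f"Audit outliers only: ${high_trust_cost:,.0f}")                   # $7,500
print(f"Resources released:  ${low_trust_cost - high_trust_cost:,.0f}")  # $242,500
```

Even granting made-up figures, the qualitative conclusion is the one in the text: routine mistrust converts organizational resources into monitoring overhead.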

In other words, trust is crucial for collaboration and teamwork. And an organization that manages to cultivate a high level of trust among its participants is likely to perform better than one that depends primarily on supervision and enforcement.

The culture of an organization

It is often held that the behavior of a particular organization is affected by its culture. Two banks may have very similar organizational structures but show rather different patterns of behavior, and those differences are ascribed to differences in culture. What does this mean? Clifford Geertz is one of the most articulate theorists of culture — especially in his earlier works. Here is a statement couched in terms of religion as a cultural system from The Interpretation Of Cultures. A religion is …

(1) a system of symbols which act to (2) establish powerful, pervasive, and long-lasting moods and motivations in men by (3) formulating conceptions of a general order of existence and (4) clothing these conceptions with such an aura of factuality that (5) the moods and motivations seem uniquely realistic. (90)

And again:

The concept of culture I espouse, and whose utility the essays below attempt to demonstrate, is essentially a semiotic one. Believing, with Max Weber, that man is an animal suspended in webs of significance he himself has spun, I take culture to be those webs, and the analysis of it to be therefore not an experimental science in search of law but an interpretive one in search of meaning. (5)

On its face this idea seems fairly simple. We might stipulate that “culture” refers to a set of beliefs, values, and practices that are shared by a number of individuals within the group, including leaders, managers, and staff members. But, as we have seen repeatedly in other posts, we need to think of these statements in terms of a distribution across a population rather than as a uniform set of values.

Consider a hypothetical comparison of two organizations with respect to employees’ attitudes towards working with colleagues of a different religion. (The example is fictitious.) Suppose the employees of both organizations have been surveyed about their comfort level in working with people of different religious beliefs, on a scale of 0-21, where low values indicate a lower level of comfort.

The blue organization shows a distribution of individuals who are on average more accepting of religious diversity than the gold organization. The weighted score for the blue population is about 10.4, compared to a weighted score of 9.9 for the gold population. This is a relatively small difference between the two populations, but it may be enough to generate meaningful differences in behavior and performance. If, for example, lower comfort levels lead to an increased likelihood that individuals will make disparaging comments about a co-worker’s religion, then we might predict that the gold group will have a somewhat higher rate of incidents of religious intolerance. And if we further hypothesize that a disparaging work environment has some effect on work productivity, then we might predict that the blue group will have somewhat higher productivity.
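The “weighted score” here is simply the average survey response, with each scale value weighted by the number of employees who gave it. Since the underlying distributions are not reproduced in this post, the sketch below uses invented frequency counts chosen only so that the averages land at the reported 10.4 and 9.9.

```python
# Weighted comfort scores on the 0-21 scale for the two hypothetical organizations.
# The frequency counts are invented; only the approximate averages (10.4 and 9.9)
# come from the text.

def weighted_score(freq_by_response):
    """Mean response, weighting each scale value by the number of employees giving it."""
    total = sum(freq_by_response.values())
    return sum(value * count for value, count in freq_by_response.items()) / total

blue = {6: 10, 8: 18, 10: 30, 12: 26, 14: 16}   # assumed counts; mean = 10.4
gold = {6: 14, 8: 22, 10: 30, 12: 23, 14: 11}   # assumed counts; mean = 9.9

print(round(weighted_score(blue), 1))  # 10.4
print(round(weighted_score(gold), 1))  # 9.9
```

The exercise also underlines how small the gap is, about half a point on a 21-point scale, which is why the behavioral consequences are framed as hypotheses rather than foregone conclusions.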

Current discussions of sexual harassment in the workplace are often couched in terms of organizational culture. It appears that sexual harassment is more frequent and flagrant in some organizations than others. Women are particularly likely to be harassed in a work culture in which men believe and act as though they are at liberty to impose sexual language and action on female co-workers and in which the formal processes of reporting of harassment are weak or disregarded. The first is a cultural fact and the second is a structural or institutional fact.

We can ask several causal questions about this interpretation of organizational culture. What factors lead to the establishment and currency of a given profile of beliefs, values, and practices within an organization? What factors either reproduce those beliefs or undermine them? And, finally, what are the consequences of a given cultural profile for the internal and external performance of the organization?

There seem to be two large causal mechanisms responsible for establishment and maintenance of a particular cultural constellation within an organization. First is recruitment. One organization may make a specific effort to screen candidates so as to select in favor of a particular set of values and attitudes — acceptance, collaboration, trustworthiness, openness to others. And another may favor attitudes and values that are thought to be more directly related to profitability or employee malleability. These selection mechanisms can lead to significant differences in the overall culture of the organization. And the decision to orient recruitment in one way rather than another is itself an expression of values.

The second large mechanism is the internal socialization and leadership processes of the organization. We can hypothesize that an organization whose leaders and supervisors both articulate the values of equality and respect in the workplace and who demonstrate that commitment in their own actions will be one in which more people in the organization will adopt those values. And we can likewise hypothesize that the training and evaluation processes of an organization can be effective in cultivating the values of the organization. In other words, it seems evident that leadership and training are particularly relevant to the establishment of a particular organizational culture.

The other large causal question is how, and to what extent, cultural differences across organizations affect their performance and behavior. We can hypothesize that differences in organizational values and culture lead to differences in behavior within the organization: more or less collaboration, more or less harassment, more or less bad behavior of various kinds. These differences are important in themselves. But we can also hypothesize that differences like these lead to differences in organizational effectiveness. This is the central idea of the field of positive organizational studies. Scholars such as Kim Cameron argue, on the basis of empirical studies across organizational settings, that organizations that embody the values of mutual acceptance, equality, and a positive orientation towards each other’s contributions are in fact more productive organizations as well (Competing Values Leadership: Second Edition; link).

A new model of organization?

In Team of Teams: New Rules of Engagement for a Complex World, General Stanley McChrystal (with Tantum Collins, David Silverman, and Chris Fussell) describes a new, 21st-century conception of organization for large, complex activities involving thousands of individuals and hundreds of major sub-tasks. His concept is grounded in his experience in counter-insurgency warfare in Iraq. McChrystal argues that modern counter-terrorism cannot be conducted as a centrally organized, bureaucratic, hierarchical process with commanders and scripted agents; it requires instead a more decentralized and flexible system of action, which he refers to as “teams of teams”. Information is shared freely, local commanders have ready access to resources and knowledge from other experts, and they make decisions in a more flexible way. The model aims to capture the benefits of improvisation, flexibility, and a much higher level of trust and communication than is characteristic of typical military and corporate organizations.


One place where the “team of teams” structure is plausible is a focused technology startup, where the whole group of participants needs to be in regular and frequent collaboration. Indeed, Paul Rabinow’s 1996 ethnography of the Cetus Corporation’s pursuit of PCR (the polymerase chain reaction), Making PCR: A Story of Biotechnology, reflects a very similar topology of information flows and collaboration links across and within working subgroups (link). But the vision does not fit the organizational and operational needs of a large hospital, a railroad company, or a research university very well. It seems plausible that the challenges the US military faced in fighting Al-Qaeda and ISIL are not really analogous to those faced by less dramatic organizations like hospitals, universities, and corporations. The decentralized and improvisational circumstances of urban warfare against loosely organized terrorists may be sui generis.

McChrystal proposes an organizational structure that is more decentralized, more open to local decision-making, and more flexible and resilient. These are unmistakable virtues in some circumstances, but not in all circumstances and all organizations. And arguably such a structure would have been impossible in the planning and execution of the French defense of Dien Bien Phu or the US decision to wage war against the Vietnamese insurgency ten years later. These were situations in which central decisions needed to be made, and the decisions needed to be implemented through well-organized bureaucracies. The problem in both instances is that the wrong decisions were made, based on the wrong information and assessments. What was needed, it would appear, was better executive leadership and decision-making, not a fundamentally decentralized pattern of response and counter-response.

One thing that deserves comment in the context of McChrystal’s book is the history of bad organization, bad intelligence, and bad decision-making the world has witnessed in the military experiences of the past century. The radical miscalculations and failures of planning involved in the first months of the Korean War, the painful and tragic misjudgments made by the French military in preparing for Dien Bien Phu, the equally bad thinking and planning done by Robert McNamara and the whiz kids leading to the Vietnam War — these examples stand out as sentinel illustrations of the failures of large organizations that have been tasked to carry out large, complex activities involving numerous operational units. The military and the national security establishments were good at some tasks, and disastrously bad at others. And the things they were bad at were both systemic and devastating. Bernard Fall illustrates these failures in Hell In A Very Small Place: The Siege Of Dien Bien Phu, and David Halberstam does so for the decision-making that led to the war in Vietnam in The Best and the Brightest.

So devising more effective approaches to command, planning, intelligence gathering and analysis, and priority-setting would be a big contribution to humanity. But the deficiencies at Dien Bien Phu, in Korea, and in Vietnam seem different from those McChrystal identifies in Iraq. What was needed in these portentous moments of policy choice was a clear-eyed establishment of appropriate priorities and goals, honest collection of intelligence and sources of information, and disinterested implementation of policies and plans that served the highest interests of the country. The “team of teams” approach does not seem to be a general solution to the wide range of military and political challenges nations face.

What one would have wanted to see in the French military or the US national security apparatus is something different from the kind of teamwork described by McChrystal: greater honesty on all parts, a commitment to taking seriously the assessments of experts and participants in the field, an openness to questioning strongly held assumptions, and a greater capacity for institutional wisdom in arriving at decisions of this magnitude. We would have wanted to see a process that was not dominated by large egos, self-interest, and fixed ideas. We would have wanted French generals and their civilian masters to soberly assess the military function that a fortress camp at Dien Bien Phu could satisfy; the realistic military requirements that would need to be satisfied in order to defend the location; and an honest effort to solicit the very best information and judgment from experienced commanders and officials about what a Viet-Minh siege might look like. Instead, the French military was guided by complacent assumptions about French military superiority, which led to a genuine catastrophe for the soldiers assigned to the task and to French society more broadly.

There are valid insights in McChrystal’s book about the urgency of breaking down obstacles to communication and action within sprawling organizations as they confront a changing environment. But these insights do not add up to a model that is well designed for most contexts in which large organizations actually function.
