Organizations as open systems

Key to understanding the “ontology of government” is the empirical and theoretical challenge of understanding how organizations work. The activities of government encompass organizations across a wide range of scales, from the local office of the Department of Motor Vehicles (40 employees) to the Department of Defense (861,000 civilian employees). Having the best possible understanding of how organizations work and fail is crucial to understanding the workings of government.

I have given substantial attention to the theory of strategic action fields as a basis for understanding organizations in previous posts (link, link). The basic idea in that approach is that organizations are a bit like social movements, with active coalition-building, conflicting goals, and strategic jockeying making up much of the substantive behavior of the organization. It is significant that organizational theory as a field has moved in this direction in the past fifteen years or so as well. A good example is Scott and Davis, Organizations and Organizing: Rational, Natural and Open System Perspectives (2007). Their book is intended as a “state of the art” textbook in the field of organizational studies. And the title expresses some of the shifts that have taken place in the field since the work of March, Simon, and Perrow (link, link). The word “organizing” in the title signals the idea that organizations are no longer looked at as static structures within which actors carry out well-defined roles, but are instead dynamic processes in which active efforts by leaders, managers, and employees define goals and strategies and work to carry them out. And the “open system” phrase highlights the point that organizations always exist and function within a broader environment — political constraints, economic forces, public opinion, technological innovation, other organizations, and, today, climate change and environmental disaster.

Organizations themselves exist only as a complex set of social processes, some of which reproduce existing modes of behavior and others that serve to challenge, undermine, contradict, and transform current routines. Individual actors are constrained by, make use of, and modify existing structures. (20)

Most analysts have conceived of organizations as social structures created by individuals to support the collaborative pursuit of specified goals. Given this conception, all organizations confront a number of common problems: all must define (and redefine) their objectives; all must induce participants to contribute services; all must control and coordinate these contributions; resources must be garnered from the environment and products or services dispensed; participants must be selected, trained, and replaced; and some sort of working accommodation with the neighbors must be achieved. (23)

Scott and Davis analyze the field of organizational studies in several dimensions: sector (for-profit, public, non-profit), levels of analysis (social psychological level, organizational level, ecological level), and theoretical perspective. They emphasize several key “ontological” elements that any theory of organizations needs to address: the environment in which an organization functions; the strategy and goals of the organization and its powerful actors; the features of work and technology chosen by the organization; the features of formal organization that have been codified (human resources, job design, organizational structure); the elements of “informal organization” that exist in the entity (culture, social networks); and the people of the organization.

They describe three theoretical frameworks through which theorists have approached the empirical analysis of organizations. First, the rational framework:

Organizations are collectivities oriented to the pursuit of relatively specific goals. They are “purposeful” in the sense that the activities and interactions of participants are coordinated to achieve specified goals…. Organizations are collectivities that exhibit a relatively high degree of formalization. The cooperation among participants is “conscious” and “deliberate”; the structure of relations is made explicit. (38)

From the rational system perspective, organizations are instruments designed to attain specified goals. How blunt or fine an instrument they are depends on many factors that are summarized by the concept of rationality of structure. The term rationality in this context is used in the narrow sense of technical or functional rationality (Mannheim, 1950 trans.: 53) and refers to the extent to which a series of actions is organized in such a way as to lead to predetermined goals with maximum efficiency. (45)

Here is a description of the natural-systems framework:

Organizations are collectivities whose participants are pursuing multiple interests, both disparate and common, but who recognize the value of perpetuating the organization as an important resource. The natural system view emphasizes the common attributes that organizations share with all social collectivities. (39)

Organizational goals and their relation to the behavior of participants are much more problematic for the natural than the rational system theorist. This is largely because natural system analysts pay more attention to behavior and hence worry more about the complex interconnections between the normative and the behavioral structures of organizations. Two general themes characterize their views of organizational goals. First, there is frequently a disparity between the stated and the “real” goals pursued by organizations—between the professed or official goals that are announced and the actual or operative goals that can be observed to govern the activities of participants. Second, natural system analysts emphasize that even when the stated goals are actually being pursued, they are never the only goals governing participants’ behavior. They point out that all organizations must pursue support or “maintenance” goals in addition to their output goals (Gross, 1968; Perrow, 1970:135). No organization can devote its full resources to producing products or services; each must expend energies maintaining itself. (67)

And the “open-system” definition:

From the open system perspective, environments shape, support, and infiltrate organizations. Connections with “external” elements can be more critical than those among “internal” components; indeed, for many functions the distinction between organization and environment is revealed to be shifting, ambiguous, and arbitrary…. Organizations are congeries of interdependent flows and activities linking shifting coalitions of participants embedded in wider material-resource and institutional environments.  (40)

(Note that the natural-system and “open-system” definitions are very consistent with the strategic-action-field approach.)

Scott and Davis also provide a useful table contrasting the three approaches to organizational studies.

An important characteristic of recent organizational theory has to do with the way that theorists think about the actors within organizations. Instead of looking at individual behavior within an organization as being fundamentally rational and goal-directed, primarily responsive to incentives and punishments, organizational theorists have come to pay more attention to the non-rational components of organizational behavior — values, cultural affinities, cognitive frameworks and expectations.

This emphasis on culture and mental frameworks leads to another important shift in next-generation ideas about organizations: attention to the informal practices, norms, and behaviors that exist within organizations. Rather than looking at an organization as a rational structure implementing mission and strategy, contemporary organization theory emphasizes that informal practices, norms, and cultural expectations are ineliminable parts of organizational behavior. Here is a good description of the concept of culture provided by Scott and Davis in the context of organizations:

Culture describes the pattern of values, beliefs, and expectations more or less shared by the organization’s members. Schein (1992) analyzes culture in terms of underlying assumptions about the organization’s relationship to its environment (that is, what business are we in, and why); the nature of reality and truth (how do we decide which interpretations of information and events are correct, and how do we make decisions); the nature of human nature (are people basically lazy or industrious, fixed or malleable); the nature of human activity (what are the “right” things to do, and what is the best way to influence human action); and the nature of human relationships (should people relate as competitors or cooperators, individualists or collaborators). These components hang together as a more-or-less coherent theory that guides the organization’s more formalized policies and strategies. Of course, the extent to which these elements are “shared” or even coherent within a culture is likely to be highly contentious (see Martin, 2002)—there can be subcultures and even countercultures within an organization. (33)

Also of interest is Scott’s earlier book Institutions and Organizations: Ideas, Interests, and Identities, which first appeared in 1995 and is now in its 4th edition (2014). Scott looks at organizations as a particular kind of institution, with differentiating characteristics but commonalities as well. The IBM Corporation is an organization; the practice of youth soccer in the United States is an institution; but both have features in common. In some contexts, however, he appears to distinguish between institutions and organizations, with institutions constituting the larger normative, regulative, and opportunity-creating environment within which organizations emerge.

Scott opens with a series of crucial questions about organizations — questions for which we need answers if we want to know how organizations work, what confers stability upon them, and why and how they change. Out of a long list of questions, these seem particularly important for our purposes here: “How are we to regard behavior in organizational settings? Does it reflect the pursuit of rational interests and the exercise of conscious choice, or is it primarily shaped by conventions, routines, and habits?” “Why do individuals and organizations conform to institutions? Is it because they are rewarded for doing so, because they believe they are morally obligated to obey, or because they can conceive of no other way of behaving?” “Why is the behavior of organizational participants often observed to depart from the formal rules and stated goals of the organization?” “Do control systems function only when they are associated with incentives … or are other processes sometimes at work?” “How do differences in cultural beliefs shape the nature and operation of organizations?” (Introduction).

Scott and Davis’s work is of particular interest here because it supports analysis of a key question I’ve pursued over the past year: how does government work, and what ontological assumptions do we need to make in order to better understand the successes and failures of government action? What I have called organizational dysfunction in earlier posts (link, link) finds a very comfortable home in the theoretical spaces created by the intellectual frameworks of organizational studies described by Scott and Davis.

Herbert Simon’s theories of organizations

Herbert Simon made paradigm-changing contributions to the theory of rational behavior, including particularly his treatment of “satisficing” as an alternative to “maximizing” economic rationality (link). It is therefore worthwhile examining his views of organizations and organizational decision-making and action — especially given how relevant those theories are to my current research interest in organizational dysfunction. His highly successful book Administrative Behavior went through four editions between 1947 and 1997 — more than fifty years of thinking about organizations and organizational behavior. The more recent editions consist of the original text and “commentary” chapters that Simon wrote to incorporate more recent thinking about the content of each of the chapters.

Here I will pull out some of the highlights of Simon’s approach to organizations. There are many features of his analysis of organizational behavior that are worth noting. But my summary assessment is that the book is surprisingly positive about the rationality of organizations and the processes through which they collect information and reach decisions. In the contemporary environment where we have all too many examples of organizational failure in decision-making — from Boeing to Purdue Pharma to the Federal Emergency Management Agency — this confidence seems to be fundamentally misplaced. The theorist who invented the idea of imperfect rationality and satisficing at the individual level perhaps should have offered a somewhat more critical analysis of organizational thinking.

The first thing that the reader will observe is that Simon thinks about organizations as systems of decision-making and execution. His working definition of organization highlights this view:

In this book, the term organization refers to the pattern of communications and relations among a group of human beings, including the processes for making and implementing decisions. This pattern provides to organization members much of the information and many of the assumptions, goals, and attitudes that enter into their decisions, and provides also a set of stable and comprehensible expectations as to what the other members of the group are doing and how they will react to what one says and does. (18-19).

What is a scientifically relevant description of an organization? It is a description that, so far as possible, designates for each person in the organization what decisions that person makes, and the influences to which he is subject in making each of these decisions. (43)

The central theme around which the analysis has been developed is that organization behavior is a complex network of decisional processes, all pointed toward their influence upon the behaviors of the operatives — those who do the actual ‘physical’ work of the organization. (305)

The task of decision-making breaks down into the assimilation of relevant facts and values — a distinction that Simon attributes to logical positivism in the original text but makes more general in the commentary. Answering the question, “what should we do?”, requires a clear answer to two kinds of questions: what values are we attempting to achieve? And how does the world work such that interventions will bring about those values?
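In decision-theoretic terms (my gloss, not Simon’s own formalism), the decomposition is compact: let $f$ answer the factual question by mapping each available action to its expected outcome, and let $V$ answer the value question by scoring outcomes. The administrative choice is then

$$a^{*} = \arg\max_{a \in A} V\big(f(a)\big),$$

and Simon’s bounded-rationality point is that real organizations know $f$ only partially, hold contested versions of $V$, and can search only a fraction of $A$, so they satisfice rather than maximize.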

It is refreshing to see Simon’s skepticism about the “rules of administration” that various generations of organizational theorists have advanced — “specialization,” “unity of command,” “span of control,” and so forth. Simon describes these as proverbs rather than as useful empirical discoveries about effective administration. And he finds the idea of “schools of management theory” to be entirely unhelpful (26). Likewise, he is skeptical about the value of the economic theory of the firm, which in his view abstracts from all of the arrangements among participants that are crucial to the internal processes of the organization. He recommends an approach to the study of organizations (and the design of organizations) that focuses on the specific arrangements needed to bring factual and value claims into a process of deliberation leading to decision — incorporating the kinds of specialization and control that make sense for a particular set of business and organizational tasks.

An organization has only two fundamental tasks: decision-making and “making things happen”. The decision-making process involves intelligently gathering facts and values and designing a plan. Simon generally approaches this process as a reasonably rational one. He identifies three kinds of limits on rational decision-making:

  • The individual is limited by those skills, habits, and reflexes which are no longer in the realm of the conscious…
  • The individual is limited by his values and those conceptions of purpose which influence him in making his decision…
  • The individual is limited by the extent of his knowledge of things relevant to his job. (46)

And he explicitly regards these points as being part of a theory of administrative rationality:

Perhaps this triangle of limits does not completely bound the area of rationality, and other sides need to be added to the figure. In any case, the enumeration will serve to indicate the kinds of considerations that must go into the construction of valid and noncontradictory principles of administration. (47)

The “making it happen” part is more complicated. This has to do with the problem the executive faces of bringing about the efficient, effective, and loyal performance of assigned tasks by operatives. Simon’s theory essentially comes down to training, loyalty, and authority.

If this is a correct description of the administrative process, then the construction of an efficient administrative organization is a problem in social psychology. It is a task of setting up an operative staff and superimposing on that staff a supervisory staff capable of influencing the operative group toward a pattern of coordinated and effective behavior. (2)

To understand how the behavior of the individual becomes a part of the system of behavior of the organization, it is necessary to study the relation between the personal motivation of the individual and the objectives toward which the activity of the organization is oriented. (13-14)

Simon refers to three kinds of influence that executives and supervisors can have over “operatives”: formal authority (enforced by the power to hire and fire), organizational loyalty (cultivated through specific means within the organization), and training. Simon holds that a crucial role of administrative leadership is the task of motivating the employees of the organization to carry out the plan efficiently and effectively.

Later he refers to five “mechanisms of organization influence” (112): specialization and division of task; the creation of standard practices; transmission of decisions downwards through authority and influence; channels of communication in all directions; and training and indoctrination. Through these mechanisms the executive seeks to ensure a high level of conformance and efficient performance of tasks.

What about the actors within an organization? How do they behave as individual actors? Simon treats them as “boundedly rational”:

To anyone who has observed organizations, it seems obvious enough that human behavior in them is, if not wholly rational, at least in good part intendedly so. Much behavior in organizations is, or seems to be, task-oriented — and often efficacious in attaining its goals. (88)

But this description leaves out altogether the possibility and likelihood of mixed motives, conflicts of interest, and intra-organizational disagreement. When Simon considers the fact of multiple agents within an organization, he acknowledges that this poses a challenge for rationalistic organizational theory:

Complications are introduced into the picture if more than one individual is involved, for in this case the decisions of the other individuals will be included among the conditions which each individual must consider in reaching his decisions. (80)

This acknowledges the essential feature of organizations — the multiplicity of actors — but fails to treat it with the seriousness it demands. He attempts to resolve the issue by invoking cooperation and the language of strategic rationality: “administrative organizations are systems of cooperative behavior. The members of the organization are expected to orient their behavior with respect to certain goals that are taken as ‘organization objectives’” (81). But this simply presupposes the result we might want to occur, without providing a basis for expecting it to take place.

With the hindsight of half a century, I am inclined to think that Simon attributes too much rationality and hierarchical purpose to organizations.

The rational administrator is concerned with the selection of these effective means. For the construction of an administrative theory it is necessary to examine further the notion of rationality and, in particular, to achieve perfect clarity as to what is meant by “the selection of effective means.” (72)

These sentences, and many others like them, present the task as one of defining the conditions of rationality of an organization or firm; this takes for granted the notion that the relations of communication, planning, and authority can result in a coherent implementation of a plan of action. His model of an organization involves high-level executives who pull together factual information (making use of specialized experts in this task) and integrate the purposes and goals of the organization (profits, maintaining the health and safety of the public, reducing poverty) into an actionable set of plans to be implemented by subordinates. He refers to a “hierarchy of decisions,” in which higher-level goals are broken down into intermediate-level goals and tasks, with a coherent relationship between intermediate and higher-level goals. “Behavior is purposive in so far as it is guided by general goals or objectives; it is rational in so far as it selects alternatives which are conducive to the achievement of the previously selected goals” (4). And the suggestion is that a well-designed organization succeeds in establishing this kind of coherence of decision and action.
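Simon’s “hierarchy of decisions” can be pictured as a goal tree in which each higher-level goal is refined into intermediate goals and finally into operative tasks. Here is a minimal sketch in Python (my illustration, with hypothetical names; the decomposition is invented, not drawn from Simon’s text):

```python
from dataclasses import dataclass, field

@dataclass
class Goal:
    """A node in a 'hierarchy of decisions': a goal is either directly
    executable or decomposed into subgoals that jointly achieve it."""
    name: str
    subgoals: list["Goal"] = field(default_factory=list)

    def plan(self, depth: int = 0) -> None:
        # Walk the tree top-down: higher-level goals are refined into
        # intermediate goals and finally into operative tasks.
        print("  " * depth + self.name)
        for g in self.subgoals:
            g.plan(depth + 1)

# Illustrative decomposition for a hypothetical public-health agency.
mission = Goal("protect public health", [
    Goal("inspect food producers", [
        Goal("schedule inspections"),
        Goal("train inspectors"),
    ]),
    Goal("respond to outbreaks", [
        Goal("maintain surveillance data"),
    ]),
])
mission.plan()
```

The coherence Simon attributes to well-designed organizations amounts to assuming that walking such a tree top-down yields a consistent plan; the literatures on dysfunction discussed below question exactly that assumption.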

It is true that he also asserts that decisions are “composite” —

It should be perfectly apparent that almost no decision made in an organization is the task of a single individual. Even though the final responsibility for taking a particular action rests with some definite person, we shall always find, in studying the manner in which this decision was reached, that its various components can be traced through the formal and informal channels of communication to many individuals … (305)

But even here he fails to consider the possibility that this compositional process may involve systematic dysfunctions that require study. Rather, he seems to presuppose that the composite process itself proceeds logically and coherently. In commenting on a case study by Oswyn Murray (1923) on the design of a post-WWI battleship, he writes: “The point which is so clearly illustrated here is that the planning procedure permits expertise of every kind to be drawn into the decision without any difficulties being imposed by the lines of authority in the organization” (314). This conclusion is strikingly at odds with most accounts of science-military relations during World War II in Britain — for example, Frederick Lindemann’s pernicious interference in Patrick Blackett’s efforts to create an operations-research basis for anti-submarine warfare (Blackett’s War: The Men Who Defeated the Nazi U-Boats and Brought Science to the Art of Warfare). His comments about the processes of review that can be implemented within organizations (314 ff.) are likewise excessively optimistic — contrary to the literature on principal-agent problems in many areas of complex collaboration.

This is surprising, given Simon’s contributions to the theory of imperfect rationality in the case of individual decision-making. Against this confidence, the sources of organizational dysfunction that are now apparent in several literatures on organizations make it difficult to imagine that organizations can have a high success rate in rational decision-making. If we were seeking a Simon-like phrase for organizational thinking to parallel the idea of satisficing, we might come up with the notion of “bounded localistic organizational rationality”: locally rational, frequently influenced by extraneous forces, incomplete information, incomplete communication across divisions, rarely coherent over the whole organization.

Simon makes the point emphatically in the opening chapters of the book that administrative science is an incremental and evolving field. And in fact, it seems apparent that his own thinking continued to evolve. There are occasional threads of argument in Simon’s work that seem to point towards a more contingent view of organizational behavior and rationality, along the lines of Fligstein and McAdam’s theories of strategic action fields. For example, when discussing organizational loyalty Simon raises the kind of issue that is central to the strategic action field model of organizations: the conflicts of interest that can arise across units (11). And in the commentary on Chapter I he points forward to the theories of strategic action fields and complex adaptive systems:

The concepts of systems, multiple constituencies, power and politics, and organization culture all flow quite naturally from the concept of organizations as complex interactive structures held together by a balance of the inducements provided to various groups of participants and the contributions received from them. (27)
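Simon’s inducements-contributions balance has a simple formal core, which can be sketched as follows (a toy model with invented utilities, not Simon’s own formulation): each group of participants continues to participate only while the inducements offered are worth at least as much to it as the contributions asked of it.

```python
def remains(inducement_value: float, contribution_cost: float) -> bool:
    """Barnard-Simon equilibrium condition for one participant group:
    participation continues only while perceived inducements are at
    least as valuable as the contributions demanded."""
    return inducement_value >= contribution_cost

# A toy organization with three groups of participants:
# (value of inducements received, cost of contributions made).
participants = {
    "employees":    (100.0, 80.0),   # wages and status vs. effort
    "customers":    (50.0, 45.0),    # value of product vs. price paid
    "shareholders": (20.0, 30.0),    # dividends vs. capital committed
}

# The organization holds together only if every group chooses to stay.
stable = all(remains(v, c) for v, c in participants.values())
print(stable)  # False: in this illustration the shareholders defect
```

On this reading, “power and politics” enter because each group bargains over the terms of its own inducements-contributions ratio.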

The book has been a foundational contribution to organizational studies. At the same time, if Herbert Simon were at the beginning of his career and were beginning his study of organizational decision-making today, I suspect he might have taken a different tack. He was plainly committed to empirical study of existing organizations and the mechanisms through which they worked. And he was receptive to the ideas surrounding the notion of imperfect rationality. The current literature on the sources of contention and dysfunction within organizations (Perrow, Fligstein, McAdam, Crozier, …) might well have led him to write a different book altogether, one that gave more attention to the sources of failures of rational decision-making and implementation alongside the occasional examples of organizations that seem to work at a very high level of rationality and effectiveness.

Theorizing about organizations

The fields of organizational studies and organizational sociology originated in the early twentieth century but flourished in the post-war period. This makes a certain amount of historical sense. The emergence in the nineteenth century of large, complex organizations in business and government became a factor in modern society that dwarfed the impact of the organizations of the past — universities, religious societies, and guilds. There was therefore a new sociological topic that demanded study. How do corporations and large government departments work? What concepts permit insightful analysis of large, complex organizations? Max Weber’s theory of bureaucracy provided a beginning, but organizations proved to have greater variety and more perplexing features than Weber’s ideas could account for.

Large, complex organizations are the most pervasive social structure in the modern world. They structure the food we eat, the ways we work, the compensation we receive for our labors, the technologies that inform our daily lives, the ways that wars occur, and the modes through which governments function. And, as any observant person will recognize, large organizations create some of the most important dysfunctions that our modern society confronts. So it is enormously important to have a better idea of what a large organization is and how it works. We need to understand the variety, structures, and dynamics of large organizations if we are to have realistic ideas about how to make a more humane world.

Charles Perrow has been one of the most insightful contributors to organizational sociology since the 1960s. His research on the topic of safety within high-risk industries (space, nuclear power, marine transport, chemicals) has been highly influential, including especially his 1984 book, Normal Accidents: Living with High-Risk Technologies.

In 1972 Perrow published Complex Organizations: A Critical Essay, which was released in its third edition in 2014. The book is a masterful synthesis of the schools of thought that have emerged in organizational sociology since 1945. Perrow describes the human relations school, the neo-Weberian school, the institutional tradition, the technology [contingency] approach, the economic interpretation, and the “power” interpretation of organizations. The book therefore provides a valuable map of the geography of the field today, and of the intellectual origins of current research. But more than that, the book is an important and original presentation of how organizations work, in Perrow’s view. Perrow takes a “structural” view of organizations, which amounts fundamentally to the idea that the most important questions have to do with the internal processes of various organizations and the relationships the organization has to powerful external forces. (Perrow quotes March and Simon on organizational structure: “those aspects of the pattern of behavior in the organization that are relatively stable and that change only slowly” (124).) This contrasts with the “human relations” school, which holds that the important properties of organizations derive from features of behavior associated with the individuals who make them up, including leaders, managers, and workers.

An idea that emerges as particularly important in Perrow’s account is the idea of bounded rationality and the limits on rational planning and decision-making within an organization. This part of Perrow’s treatment depends heavily on the theories of Herbert Simon and James March (March and Simon, Organizations and Simon, Administrative Behavior).

Bounded rationality, however, is visited upon the elites as well. Their position is always insecure, for their information, understanding, and goals are never fully rational. This allows for occasional resistance and subtle changes by the controlled. In fact, bounded rationality, by elites or their subjects, creates a great deal of change, for it permits unexpected interactions, new discoveries, serendipities, and new goals and values. (123)

Perrow emphasizes the inherent diversity of goals and purposes that are operative within an organization at any given point. He describes the “garbage can” theory of organizational goal-setting and problem-setting (135). Executives, managers, and other decision-makers are portrayed as unavoidably opportunistic, in the sense that they address one set of problems rather than another without a compelling reason for thinking that this is the best path forward for the organization.

Goals may thus emerge in a rather fortuitous fashion, as when the organization seems to back into a new line of activity or into an external alliance in a fit of absentmindedness. (135)

Associated with this idea is the idea advanced by March and Simon that plans and goals are often adopted retrospectively rather than in advance of action.

No coherent, stable goal guided the total process, but after the fact a coherent stable goal was presumed to have been present. It would be unsettling to see it otherwise. (135)

This recognition of the multiplicity and sketchiness of organizational goals casts profound doubt on the functionalism that observers sometimes bring to organizations (the idea that organizations possess just the structures and goals they need in order to achieve their purposes optimally). Perrow specifically endorses these doubts:

For those doing case studies of organizations it is also indispensable, checking the tendency of social scientists to find reason, cause, and function in all behavior, and emphasizing instead the accidental, temporary, shifting, and fluid nature of all social life…. Garbage can theory provides the tools to examine the process and not be taken in by functional explanations. The decision process must be seen as involving a shifting set of actors with unpredictable entrances and exits from the “can” (or the decision mechanism), the often unrelated problems these actors have on their agendas, the solutions of some that are looking for problems they can apply them to, the accidental availability of external candidates that then bring new solutions and problems to the decision process, and finally the necessity of “explaining” the outcomes as rational and intended. (136, 137)

Typology and classification of organizations has been a preoccupation of organizational theory for a century. Perrow believes that we do not yet have a satisfactory basis for classifying organizations, but in his discussion of safety and disaster he provides a typology that has a lot going for it. The scheme sorts organizational tasks along two dimensions: the nature of interactions within the functioning of the organization (linear / complex) and the nature of the coupling of events and processes (loose / tight). His analysis of accidents finds that organizations involving high complexity and tight coupling are most vulnerable to disasters; so nuclear plants, the handling of nuclear weapons, the operations of aircraft, military early warning systems, chemical plants, and genetic research fall in the high-risk category. Motor-vehicle departments, community colleges, assembly-line factories, and post offices fall in the “linear, loose coupling” category and present the lowest risk. The intriguing question that arises here is whether there are organizational features best suited to safe and efficient functioning in each of the four quadrants.
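To make the two dimensions concrete, here is a minimal sketch of the scheme in Python (my encoding, with illustrative labels; the quadrant assignments follow the examples just given):

```python
from enum import Enum

class Interactions(Enum):
    LINEAR = "linear"
    COMPLEX = "complex"

class Coupling(Enum):
    LOOSE = "loose"
    TIGHT = "tight"

def accident_risk(interactions: Interactions, coupling: Coupling) -> str:
    """Place an organizational task in Perrow's 2x2 space. His claim:
    complex interactions plus tight coupling marks the quadrant most
    vulnerable to 'normal accidents'."""
    if interactions is Interactions.COMPLEX and coupling is Coupling.TIGHT:
        return "highest risk (nuclear plants, chemical plants, early warning systems)"
    if interactions is Interactions.LINEAR and coupling is Coupling.LOOSE:
        return "lowest risk (post offices, assembly lines, motor-vehicle departments)"
    return "intermediate risk (mixed quadrant)"

print(accident_risk(Interactions.COMPLEX, Coupling.TIGHT))
print(accident_risk(Interactions.LINEAR, Coupling.LOOSE))
```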

Also interesting is Perrow’s treatment of the institutionalist school, represented here by Philip Selznick’s Leadership in Administration: A Sociological Interpretation and Selznick’s study of the Tennessee Valley Authority. This approach is grounded in structuralist-functionalist sociological theory.

Perrow’s considered theory of organizations is offered in the final chapter of the book. He advocates for an interpretation of organizations as vehicles of power through which some individuals control the behavior and products of others.

In my scheme, power is the ability of persons or groups to extract for themselves valued outputs from a system in which other persons or groups either seek the same outputs for themselves or would prefer to expend their effort toward other outputs. Power is exercised to alter the initial distribution of outputs, to establish an unequal distribution, or to change the outputs. (259)

Two specific examples illustrate this approach. Corporations influence consumers’ palate for products, and they do this in ways that serve the interests of one group in society over another. And corporations and industrial bureaucracies have fundamentally shaped the practices and culture of “work” in ways that fundamentally serve the interests of one group over another. Both are examples of the “social construction” of important categories of social life; and corporations (business organizations) are actively involved in this process of social construction. (This is essentially the approach to the definition of “labor” and “work” offered by Bowles and Gintis in Schooling In Capitalist America: Educational Reform and the Contradictions of Economic Life.) This approach to organizations is mirrored in Perrow’s book about the emergence of the business corporation in the United States in the nineteenth century, Organizing America: Wealth, Power, and the Origins of Corporate Capitalism.

In short, Complex Organizations is an excellent overview of organizational theory today, and it provides many of the conceptual and theoretical tools that help to make sense of these extended and pervasive social constructions that so fundamentally shape our modern experience.

Organizations and dysfunction

A recurring theme in recent months in Understanding Society is organizational dysfunction and the organizational causes of technology failure. Helmut Anheier’s volume When Things Go Wrong: Organizational Failures and Breakdowns is highly relevant to this topic, and it makes for very interesting reading. The volume includes contributions by a number of leading scholars in the sociology of organizations.

And yet the volume seems to miss the mark in some important ways. For one thing, it is unduly focused on the question of “mortality” of firms and other organizations. Bankruptcy and organizational death are frequent synonyms for “failure” here. This frame is evident in the summary the introduction offers of existing approaches in the field: organizational aspects, political aspects, cognitive aspects, and structural aspects. All bring us back to the causes of extinction and bankruptcy in a business organization. Further, the approach highlights the importance of internal conflict within an organization as a source of eventual failure. But it gives no insight into the internal structure and workings of the organization itself, the ways in which behavior and internal structure function to systematically produce certain kinds of outcomes that we can identify as dysfunctional.

Significantly, however, dysfunction does not routinely lead to the death of a firm. (Seibel’s contribution in the volume raises this possibility, which Seibel refers to as “successful failures”.) This is a familiar observation from political science: what looks dysfunctional from the outside may be perfectly well tuned to a different set of interests (for example, in Robert Bates’s account of pricing boards in Africa in Markets and States in Tropical Africa: The Political Basis of Agricultural Policies). In their introduction to this volume Anheier and Moulton refer to this possibility as a direction for future research: “successful for whom, a failure for whom?” (14).

The volume tends to look at success and failure in terms of profitability and the satisfaction of stakeholders. But we can define dysfunction in a more granular way by linking characteristics of performance to the perceived “purposes and goals” of the organization. A regulatory agency exists in order to effectively protect the health and safety of the public. In this kind of case, failure is any outcome in which the agency flagrantly and avoidably fails to prevent a serious harm — release of radioactive material, contamination of food, a building fire resulting from defects that should have been detected by inspection. If the agency does this job less well than it might, then it is dysfunctional.

Why do dysfunctions persist in organizations? It is possible to identify several causes. The first is that a dysfunction from one point of view may well be a desirable feature from another point of view. The lack of an authoritative safety officer in a chemical plant may be thought dysfunctional if we take the safety of workers and the public to be a primary goal of the plant (link). But if profitability and cost-savings are the primary goals from the point of view of the stakeholders, then the cost-benefit analysis may favor the lack of the safety officer.

Second, there may be internal failures within an organization that are beyond the reach of any executive or manager who might want to correct them. The complexity and loose-coupling of large organizations militate against house cleaning on a large scale.

Third, there may be powerful factions within an organization for whom the “dysfunctional” feature is an important component of their own set of purposes and goals. Fligstein and McAdam argue for this kind of disaggregation with their theory of strategic action fields (link). By disaggregating purposes and goals to the various actors who figure in the life cycle of the organization – founders, stakeholders, executives, managers, experts, frontline workers, labor organizers – it is possible to see the organization as a whole as simply the aggregation of the multiple actions and purposes of the actors within and adjacent to the organization. This aggregation does not imply that the organization is carefully adjusted to serve the public good or to maximize efficiency or to protect the health and safety of the public. Rather, it suggests that the resultant organizational structure serves the interests of the various actors to the fullest extent each actor is able to manage.

Consider the account offered by Thomas Misa of the decline of the steel industry in the United States in the first part of the twentieth century in A Nation of Steel: The Making of Modern America, 1865-1925. Misa’s account seems to point to a massive dysfunction in the steel corporations of the inter-war period: a deliberate and sustained failure to invest in research on new steel technologies in metallurgy and production. Misa argues that the great steel corporations — US Steel in particular — failed to remain competitive in the early years of the twentieth century because management persistently pursued short-term profits and financial advantage through domination of the market, relying on that dominance rather than on research and development as the source of revenue and profits.

In short, U.S. Steel was big but not illegal. Its price leadership resulted from its complete dominance in the core markets for steel…. Indeed, many steelmakers had grown comfortable with U.S. Steel’s overriding policy of price and technical stability, which permitted them to create or develop markets where the combine chose not to compete, and they testified to the court in favor of the combine. The real price of stability … was the stifling of technological innovation. (255)

The result was that the modernized steel industries in Europe leap-frogged the previous US advantage, eventually leaving the United States with unviable production technology.

At the periphery of the newest and most promising alloy steels, dismissive of continuous-sheet rolling, actively hostile to new structural shapes, a price leader but not a technical leader: this was U.S. Steel. What was the company doing with technological innovation? (257)

Misa is interested in arriving at a better way of understanding the imperatives leading to technical change — better than neoclassical economics and labor history. His solution highlights the changing relationships that developed between industrial consumers and producers in the steel industry.

We now possess a series of powerful insights into the dynamics of technology and social change. Together, these insights offer the realistic promise of being better able, if we choose, to modulate the complex process of technical change. We can now locate the range of sites for technical decision making, including private companies, trade organizations, engineering societies, and government agencies. We can suggest a typology of user-producer interactions, including centralized, multicentered, decentralized, and direct-consumer interactions, that will enable certain kinds of actions while constraining others. We can even suggest a range of activities that are likely to effect technical change, including standards setting, building and zoning codes, and government procurement. Furthermore, we can also suggest a range of strategies by which citizens supposedly on the “outside” may be able to influence decisions supposedly made on the “inside” about technical change, including credibility pressure, forced technology choice, and regulatory issues. (277-278)

In fact Misa places the dynamics of the relationship between producers and large consumers at the center of the imperatives toward technological innovation:

In retrospect, what was wrong with U.S. Steel was not its size or even its market power but its policy of isolating itself from the new demands from users that might have spurred technical change. The resulting technological torpidity that doomed the industry was not primarily a matter of industrial concentration, outrageous behavior on the part of white- and blue-collar employees, or even dysfunctional relations among management, labor, and government. What went wrong was the industry’s relations with its consumers. (278)

This relative “callous treatment of consumers” was profoundly harmful when international competition gave large industrial users of steel a choice. When US Steel had market dominance, large industrial users had little choice; but this situation changed after WWII. “This favorable balance of trade eroded during the 1950s as German and Japanese steelmakers rebuilt their bombed-out plants with a new production technology, the basic oxygen furnace (BOF), which American steelmakers had dismissed as unproven and unworkable” (279). Misa quotes a president of a small steel producer: “The Big Steel companies tend to resist new technologies as long as they can … They only accept a new technology when they need it to survive” (280).

*****

An interesting table in Misa’s book sheds light on some of the economic and political history of the United States since the post-war period, leading right up to the populist politics of 2016 in the Midwest. The table provides mute testimony to the decline of the rustbelt industrial cities: Michigan, Illinois, Ohio, Pennsylvania, and western New York account for 83% of the steel production it records. When American producers lost the competitive battle for steel production in the 1980s, the Rustbelt suffered disproportionately, and eventually blue-collar workers lost their places in the affluent economy.

Nuclear power plant siting decisions

Readers may be skeptical about the practical importance of the topic of nuclear power plant siting decisions, since very few new nuclear plants have been proposed or approved in the United States for decades. However, the topic is one for which there is an extensive historical record, and it is a process that illuminates the challenge government faces in balancing risk and benefit, private gain and public cost. Moreover, siting inherently brings up issues that are of concern both to the public in general (throughout a state or region of the country) and to the citizens who live in close proximity to the recommended site. The NIMBY problem is unavoidable — it is someone’s backyard, and it is a worrisome neighbor. So this is a good case through which to think creatively about the responsibilities of government for ensuring the public good in the face of risky private activity, and about the detailed institutions of regulation and oversight that would make wise public outcomes more likely.

I’ve been thinking quite a bit recently about technology failure, government regulation, and risky technologies, and there is a lot to learn about these subjects by looking at the history of nuclear power in the United States. Two books in particular have been interesting to me. Neither is particularly recent, but both shed valuable light on the public-policy context of nuclear decision-making. The first is Joan Aron’s account of the processes that led to the cancellation of the Shoreham nuclear power plant on Long Island (Licensed To Kill?: The Nuclear Regulatory Commission and the Shoreham Power Plant), and the second is Donald Stever, Jr.’s account of the licensing process for the Seabrook nuclear power plant in Seabrook and The Nuclear Regulatory Commission: The Licensing of a Nuclear Power Plant. Both are fascinating books, well worthy of study as a window into government decision-making and regulation. Stever’s book is especially interesting because it is a highly capable analysis of the licensing process, both at the state level and at the level of the NRC, and because Stever himself was a participant: as an assistant attorney general in New Hampshire, he was assigned the role of Counsel for the Public throughout the licensing process.

Joan Aron’s 1997 book Licensed to Kill? is a detailed case study of the effort to establish the Shoreham nuclear power plant on Long Island in the 1980s. LILCO had proposed the plant to meet rising demand for electricity on Long Island as population and energy use grew. And Long Island is a long, narrow island on which traffic congestion at certain times of day is legendary; evacuation planning was both crucial and, in the end, perhaps impossible.

This is an intriguing story, because it led eventually to the cancellation of the operating license for the plant by the NRC after completion of the plant. And the cancellation resulted largely from the effectiveness of public opposition and interest-group political pressure. Aron provides a detailed account of the decisions made by the public utility company LILCO, the AEC and NRC, New York state and local authorities, and citizen activist groups that led to the costliest failed investment in the history of nuclear power in the United States.

In 1991 the NRC made the decision to rescind the operating license for the Shoreham plant, after completion at a cost of over $5 billion but before it had generated a single kilowatt-hour of electricity.

Aron’s basic finding is that the project collapsed in costly fiasco because of a loss of trust among the diverse stakeholders: LILCO, the Long Island public, state and local agencies and officials, scientific experts, and the Nuclear Regulatory Commission. The Long Island tabloid Newsday played a role as well, sensationalizing every step of the process and contributing to public distrust of it. Aron finds that the NRC and LILCO underestimated the need for full analysis of the safety and emergency-preparedness issues raised by the plant’s design, including the problem of evacuating a largely inaccessible island of two million people in the event of disaster. LILCO’s decision to increase the capacity of the plant in the middle of the process contributed to the failure as well. And the Three Mile Island disaster in 1979 gave new urgency to the concerns of citizens living within fifty miles of the Shoreham site about the risks of a nuclear plant.

As we have seen, Shoreham failed to operate because of intense public opposition, in which the governor played a key role, inspired in part by the utility’s management incompetence and distrust of the NRC. Inefficiencies in the NRC licensing process were largely irrelevant to the outcome. The public by and large ignored NRC’s findings and took the nonsafety of the plant for granted. (131)

The most influential issue was public safety: would it be possible to perform an orderly evacuation of the population near the plant in the event of a serious emergency? Clarke and Perrow (included in Helmut Anheier, ed., When Things Go Wrong: Organizational Failures and Breakdowns) provide an extensive analysis of the failures that occurred during tests of the emergency evacuation plan designed by LILCO. As they demonstrate, the errors that occurred during the evacuation test were both “normal” and potentially deadly.

One thing that comes out of both books is that the commissioning and regulatory processes are far from ideal examples of the rational development of sound public policy. Rather, business interests, institutional shortcomings, lack of procedural knowledge by committee chairs, and dozens of other factors lead to outcomes that appear to fall far short of what the public needs.

But in addition to ordinary intrusions into otherwise rational policy deliberations, there are other reasons to believe that decision-making is more complicated and less rational than a simple model of rational public policy formation would suggest. Every decision-maker brings a set of “framing assumptions” about the reality concerning which he or she is deliberating. These framing assumptions impose an unavoidable kind of cognitive bias on collective decision-making. A business executive brings a worldview to the question of regulation of risk that is quite different from that of an ecologist or an environmental activist. This is different from the point often made about self-interest; our framing assumptions do not feel like expressions of self-interest, but rather like secure convictions about how the world works and what is important in the world. This is one reason why the work of social scientists like Scott Page (The Difference: How the Power of Diversity Creates Better Groups, Firms, Schools, and Societies) on the value of diversity in problem-solving and decision-making is so important: by bringing multiple perspectives and cognitive frames to a problem, we are more likely to get a balanced decision that gives appropriate weight to the legitimate interests and concerns of all involved.

Here is an interesting concrete illustration of cognitive bias (with a generous measure of self-interest as well) in Stever’s discussion of siting decisions for nuclear power plants:

From the time a utility makes the critical in-house decision to choose a site, any further study of alternatives is necessarily negative in approach. Once sufficient corporate assets have been sunk into the chosen site to produce data adequate for state site review, the company’s management has a large enough stake in it to resist suggestions that a full study of site alternatives be undertaken as a part of the state (or for that matter as a part of the NEPA) review process. Hence, the company’s methodological approach to evaluating alternates to the chosen site will always be oriented toward the desired conclusion that the chosen site is superior. (Stever 1980: 30)

This is the bias of sunk costs, both inside the organization and in the cognitive frames of independent decision makers in state agencies.
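The arithmetic of the bias is worth making explicit (a stylized example with invented numbers): a forward-looking comparison excludes what has already been spent, but a decision process that counts sunk study costs against a switch of sites can reverse the ranking.

```python
def forward_value(remaining_cost: float, expected_benefit: float) -> float:
    # Rational, forward-looking comparison: sunk costs are excluded.
    return expected_benefit - remaining_cost

sunk_in_site_a = 40.0  # already spent studying site A; unrecoverable

# Site A has been heavily studied; site B is better on forward-looking terms.
site_a = forward_value(remaining_cost=60.0, expected_benefit=100.0)  # 40.0
site_b = forward_value(remaining_cost=50.0, expected_benefit=100.0)  # 50.0
print(site_b > site_a)  # True: B should win on the merits

# The sunk-cost framing treats the money already spent on A as a penalty
# for "abandoning" it, even though nothing forward-looking has changed.
biased_site_b = site_b - sunk_in_site_a  # 10.0
print(biased_site_b > site_a)  # False: the bias reverses the choice
```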

Stever’s central point here is a very important one: the pace of site selection favors the energy company’s choices over the concerns and preferences of affected groups, because the company is in a position to dedicate substantial resources to the development of its preferred site proposal. Likewise, scientific experts have a difficult time making their concerns about habitat or traffic flow heard in this context.

But here is a crucial thing to observe: the siting decision is only one of dozens in the development of a new power plant, which is itself only one of hundreds of government / business decisions made every year. What Stever describes is a structural bias in the regulatory process, not a one-off flaw. At bottom, this is the task that government faces when considering the creation of a new nuclear power plant: “to assess the various public and private costs and benefits of a site proposed by a utility” (32); and Stever’s analysis makes it doubtful that existing public processes do this in a consistent and effective way. Stever argues that government needs to have more of a role in site selection, not less (as pro-market advocates demand): “The kind of social and environmental cost accounting required for a balanced initial assessment of, and development of, alternative sites should be done by a public body acting not as a reviewer of private choices, but as an active planner” (32).

Notice how this scheme shifts the pace and process from the company to the relevant state agency. The preliminary site selection and screening is done by a state site planning agency, with input then invited from the utility companies and interest groups, followed by a formal environmental assessment. This places the power squarely in the hands of the government agency rather than the private owner of the plant — reflecting the overriding interest the public has in ensuring health, safety, and environmental controls.

Stever closes a chapter on regulatory issues with these cogent recommendations (38-39):

  1. Electric utility companies should not be responsible for decisions concerning early nuclear-site planning.
  2. Early site identification, evaluation, and inventorying is a public responsibility that should be undertaken by a public agency, with formal participation by utilities and interest groups, based upon criteria developed by the state legislature.
  3. Prior to the use of a particular site, the state should prepare a complete environmental assessment for it, and hold adjudicatory hearings on contested issues.
  4. Further effort should be made toward assessing the public risk of nuclear power plant sites.
  5. In areas like New England, characterized by geographically small states and high energy demand, serious efforts should be made to develop regional site planning and evaluation.
  6. Nuclear licensing reform should focus on the quality of decision-making.
  7. There should be a continued federal presence in nuclear site selection, and the resolution of environmental problems should not be delegated entirely to the states. 

(It is very interesting to me that I have not been able to locate a full organizational study of the Nuclear Regulatory Commission itself.)

Is the Xerox Corporation supervenient?

Supervenience is the view that the properties of some composite entity B are wholly fixed by the properties and relations of the items A of which it is composed (link, link). The transparency of glass supervenes upon the properties of the atoms of silicon and oxygen of which it is composed and their arrangement.

Can the same be said of a business firm like Xerox when we consider its constituents to be its employees, stakeholders, and other influential actors and their relations and actions? (Call that total field of factors S.) Or is it possible that exactly these actors at exactly the same time could have manifested a corporation with different characteristics?

Let’s say the organizational properties we are interested in include internal organizational structure, innovativeness, market adaptability, and level of internal trust among employees. And S consists of the specific individuals and their properties and relations that make up the corporation at a given time. Could this same S have manifested with different properties for Xerox?

One thing is clear. If a highly similar group of individuals had been involved in the creation and development of Xerox, it is entirely possible that the organization would be substantially different today. We could expect that contingent events and a high level of path dependency would have led to substantial differences in organization, functioning, and internal structure. So the company does not supervene upon a generic group of actors defined in terms of a certain set of beliefs, goals, and modes of decision making over the history of its founding and development. I have sometimes thought that this path dependency is itself enough to refute supervenience.

But the claim of supervenience is not a temporal or diachronic claim, but instead a synchronic claim: the current features of structure, causal powers, functioning, etc., of the higher-level entity today are thought to be entirely fixed by the supervenience base (in this case, the particular individuals and their relations and actions). Putting the idea in terms of possible-world theory, there is no possible world in which exactly similar individuals in exactly similar states of relationship and action would underlie a business firm Xerox* which had properties different from the current Xerox firm.
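Stated in the standard notation (a schematic rendering of the usual definition, with S as above and P standing for the organizational properties in question):

    % Synchronic supervenience of organizational properties P on base S:
    % no two possible worlds agree on S while differing on P.
    \forall w, w' : \; (w \sim_S w') \rightarrow (w \sim_P w')

Here w ~S w′ says that worlds w and w′ are exactly alike with respect to the supervenience base S (the individuals and their states, relations, and actions), and w ~P w′ says they are exactly alike with respect to the organizational properties P. The copper example below is an attempt to construct a pair of worlds that satisfies the antecedent while failing the consequent.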

One way in which this counterfactual might be true is if a property P of the corporation depended on the states of the agents plus something else — say, the conductivity of copper in its pure state. In the real world W copper is highly conductive, while in W* copper is non-conductive. And let’s suppose that in W Xerox has property P, while in W* the otherwise identical firm Xerox* lacks P. On this scenario Xerox does not supervene upon the states of the actors, since these states are identical in W and W*. This is because dependence on the conductivity of copper makes a difference that is not reflected in any difference in the states of the actors.

But this is a pretty hypothetical case. We would only be justified in thinking Xerox does not supervene on S if we had a credible candidate for another property that would make a difference, and I am hard pressed to identify one.

There is another possible line of response for the hardcore supervenience advocate in this case. I’ve assumed the conductivity of copper makes a difference to the corporation without making a difference for the actors. But I suppose it might be maintained that this is impossible: only the states of the actors affect the corporation, since they constitute the corporation; so the scenario I describe is impossible.

The upshot seems to be this: there is no way of resolving the question at the level of pure philosophy. The best we can do is concrete empirical work on the actual causal and organizational processes through which the properties of the whole are constituted by the actions and thoughts of the individuals who make it up.

But here is a deeper concern. What makes supervenience minimally plausible in the case of social entities is the insistence on synchronic dependence. But generally speaking, we are always interested in the diachronic behavior and evolution of a social entity. And here the idea of path dependence is more credible than the idea of moment-to-moment dependency on the “supervenience base”. We might say that the property of “innovativeness” displayed by the Xerox Corporation at some periods in its history supervenes moment-to-moment on the actions and thoughts of its constituent individuals; but we might also say that this fact does not explain the higher-level property of innovativeness. Instead, some set of events in the past set the corporation on a path that favored innovation; this corporate culture or climate influenced the selection and behavior of the individuals who make it up; and the day-to-day behavior reflects both the path-dependent history of its higher-level properties and the current configuration of its parts.
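A toy model makes the contrast vivid. The Polya urn is the standard illustration of path dependence (familiar from Brian Arthur’s work on increasing returns); here is a minimal sketch in Python, where the mapping of urn composition onto an “innovative” versus “conservative” hiring culture is purely an illustrative assumption:

    import random

    def polya_urn(steps, seed):
        """Polya urn: each draw adds another ball of the drawn color, so
        early chance events are amplified into a stable long-run mix."""
        random.seed(seed)
        innovative, conservative = 1, 1  # the founding composition
        for _ in range(steps):
            share = innovative / (innovative + conservative)
            if random.random() < share:
                innovative += 1  # an innovative hire attracts more like hires
            else:
                conservative += 1
        return innovative / (innovative + conservative)

    # Identical rules and identical initial state, but different histories:
    for seed in range(5):
        print(f"run {seed}: long-run innovative share = {polya_urn(10_000, seed):.2f}")

Each run settles on a stable mix, but which mix it settles on is fixed by the early contingent draws. The composition at step 10,000 supervenes on the balls now in the urn, while the explanation of that composition is irreducibly historical.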

(Thanks, Raphael van Riel, for your warm welcome to the Institute of Philosophy at the University of Duisburg-Essen, and for the many stimulating conversations we had on the topics of supervenience, generativity, and functionalism.)

Deficiencies of practical rationality in organizations

Suppose we are willing to take seriously the idea that organizations possess a kind of intentionality — beliefs, goals, and purposive actions — and suppose that we believe that the microfoundations of these quasi-intentional states depend on the workings of individual purposive actors within specific sets of relations, incentives, and practices. How does the resulting form of “bureaucratic intelligence” compare with human thought and action?

There is a major set of differences between organizational “intelligence” and human intelligence that turn on the unity of human action compared to the fundamental disunity of organizational action. An individual human being gathers a set of beliefs about a situation, reflects on a range of possible actions, and chooses a line of action designed to bring about his/her goals. An organization is disjointed in each of these activities. The belief-setting part of an organization usually consists of multiple separate processes culminating in an amalgamated set of beliefs or representations. And this amalgamation often reflects deep differences in perspective and method across various sub-departments. (Consider inputs into an international crisis incorporating assessments from intelligence, military, and trade specialists.)
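A trivial numerical sketch shows how amalgamation conceals exactly this kind of disagreement; the departments and numbers here are hypothetical, and the point is only that the single organizational “belief” reported upward carries no trace of the methodological split beneath it:

    import statistics

    # Hypothetical sub-departments' probability estimates that a crisis
    # escalates within 30 days, each produced by a different method.
    estimates = {"intelligence": 0.7, "military": 0.3, "trade": 0.5}

    amalgamated = statistics.mean(estimates.values())
    spread = max(estimates.values()) - min(estimates.values())

    print(f"organizational belief: {amalgamated:.2f}")          # 0.50
    print(f"disagreement hidden by the summary: {spread:.2f}")  # 0.40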

Second, individual intentionality possesses a substantial degree of practical autonomy. The individual assesses and adopts the set of beliefs that seems best to him or her in current circumstances. The organization, in its belief-acquisition, is subject to conflicting interests, both internal and external, that bias the belief set in one direction or another. (This is the central thrust of the work of experts on science policy like Naomi Oreskes.) The organization is not autonomous in its belief-formation processes.

Third, an individual’s actions have a reasonable level of consistency and coherence over time. The individual seeks to avoid being self-defeating by doing X and Y while knowing that X undercuts Y. An organization is entirely capable of pursuing a suite of actions which embody exactly this kind of inconsistency, precisely because the actions chosen are the result of multiple disagreeing sub-agencies and officers.

Fourth, we have some reason to expect a degree of stability in the goals and values that underlie actions by an individual. But organizations, exactly because their behavior is a joint product of sub-agents with conflicting plans and goals, are entirely capable of rapid changes of goals and values. Deepening this instability are the fluctuating powers and interests of external stakeholders, who apply pressure for different values and goals over time.

Finally, human thinkers are potentially epistemic agents — they are at least potentially capable of following disciplines of analysis, reasoning, and evidence in their practical engagement with the world. By contrast, because of the influence of interests, both internal and external, organizations are perpetually subject to the distortion of belief, intention, and implementation by actors who have an interest in the outcome of the project. And organizations have little ability to apply rational standards to their processes of belief formation, intention formation, and implementation. Organizational intentionality lacks overriding rational control.

Consider more briefly the topic of action. Human actors suffer various deficiencies of performance when it comes to purposive action, including weakness of will and self-deception. But organizations are altogether less capable of effectively mounting the steps needed to fully implement a plan or a complicated policy or action. This is because of the looseness of the linkages that exist between executive and agent within an organization, the perennial possibility of principal-agent problems, and the potential interference with performance created by interested parties outside the organization.

This line of thought suggests that organizations lack “unity of apperception and intention”. There are multiple levels and zones of intention formation, and much of this plurality persists throughout real processes of organizational thinking. And this disunity affects belief, intention, and action alike. Organizations are not univocal at any point. Belief formation, intention formation, and action remain fragmented and multivocal.

These observations are somewhat parallel to the paradoxes of social choice theory and the voting systems designed to implement a social choice function. Kenneth Arrow demonstrated that it is impossible to design a voting system that guarantees consistent collective choices from a group of individually consistent voters. The analogy here is the idea that there is no organizational design that guarantees a high degree of consistency and rationality in large organizational decision processes at any stage of quasi-intentionality, including belief acquisition, policy formulation, and policy implementation.
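The classic Condorcet cycle makes the analogy concrete: three sub-agencies, each with perfectly transitive preferences over three policies, produce an intransitive organizational “preference” under pairwise majority rule. A minimal sketch in Python (the departments and policies are hypothetical):

    # Each department ranks policies A, B, C transitively (best first).
    rankings = {
        "intelligence": ["A", "B", "C"],
        "military":     ["B", "C", "A"],
        "trade":        ["C", "A", "B"],
    }

    def majority_prefers(x, y):
        """True if a majority of departments rank x above y."""
        votes = sum(1 for r in rankings.values() if r.index(x) < r.index(y))
        return votes > len(rankings) / 2

    for x, y in [("A", "B"), ("B", "C"), ("C", "A")]:
        print(f"majority prefers {x} over {y}: {majority_prefers(x, y)}")
    # Prints True for all three pairs: A beats B, B beats C, and C beats A.
    # Every member is consistent; the collective preference is a cycle.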

The mind of government

We often speak of government as if it has intentions, beliefs, fears, plans, and phobias. This sounds a lot like a mind. But this impression is fundamentally misleading. “Government” is not a conscious entity with a unified apperception of the world and its own intentions. So it is worth teasing out the ways in which government nonetheless arrives at “beliefs”, “intentions”, and “decisions”.

Let’s first address the question of the mythical unity of government. In brief, government is not one unified thing. Rather, it is an extended network of offices, bureaus, departments, analysts, decision-makers, and authority structures, each of which has its own reticulated internal structure.

This has an important consequence. Instead of asking “what is the policy of the United States government towards Africa?”, we are driven to ask subordinate questions: what are the policies towards Africa of the State Department, the Department of Defense, the Department of Commerce, the Central Intelligence Agency, or the Agency for International Development? And for each of these departments we are forced to recognize that each is itself a large bureaucracy, with sub-units that have chosen or adapted their own working policy objectives and priorities. There are chief executives at a range of levels — President of the United States, Secretary of State, Secretary of Defense, Director of the CIA — and each often has the aspiration of directing his or her organization as a tightly unified and purposive unit. But it is perfectly plain that the behavior of functional units within agencies is only loosely controlled by the will of the executive. This does not mean that executives have no control over the activities and priorities of subordinate units. But it does reflect a simple and unavoidable fact about large organizations. An organization is more like a slime mold than it is like a control algorithm in a factory.

This said, organizational units at all levels arrive at something analogous to beliefs (assessments of fact and probable future outcomes), assessments of priorities and their interactions, plans, and decisions (actions to take in the near and intermediate future). And governments make decisions at the highest level (leave the EU, raise taxes on fuel, prohibit immigration from certain countries, …). How does the analytical and factual part of this process proceed? And how does the decision-making part unfold?

One factor is particularly evident in the current political environment in the United States. Sometimes the analysis and decision-making activities of government are short-circuited and taken by individual executives without an underlying organizational process. A president arrives at his view of the facts of global climate change based on his “gut instincts” rather than an objective and disinterested assessment of the scientific analysis available to him. An Administrator of the EPA acts to eliminate long-standing environmental protections based on his own particular ideological and personal interests. A Secretary of the Department of Energy takes leadership of the department without requesting a briefing on any of its current projects. These are instances of the dictator strategy (in the social-choice sense), where a single actor substitutes his will for the collective aggregation of beliefs and desires associated with both bureaucracy and democracy. In this instance the answer to our question is a simple one: in cases like these government has beliefs and intentions because particular actors have beliefs and intentions and those actors have the power and authority to impose their beliefs and intentions on government.

The more interesting cases involve situations where there is a genuine collective process through which analysis and assessment take place (of facts and priorities), and through which strategies are considered and ultimately adopted. Agencies usually make decisions through extended and formalized processes. There is generally an organized process of fact gathering and scientific assessment, followed by an assessment of various policy options with public exposure. Finally a policy is adopted (the moment of decision).

The decision by the EPA to ban DDT in 1972 is illustrative (link, link, link). This was a decision of government which thereby became the will of government. It was the result of several important sub-processes: citizen and NGO activism about the possible toxic harms created by DDT, non-governmental scientific research assessing the toxicity of DDT, an internal EPA process designed to assess the scientific conclusions about the environmental and human-health effects of DDT, an analysis of the competing priorities involved in this issue (farming, forestry, and malaria control versus public health), and a recommendation to the Administrator, ultimately adopted, that the priority of public health and environmental safety outweighed the economic interests served by the use of the pesticide.

Other examples of agency decision-making follow a similar pattern. The development of policy concerning science and technology is particularly interesting in this context. Consider, for example, Susan Wright (link) on the politics of regulation of recombinant DNA. This issue is explored more fully in her book Molecular Politics: Developing American and British Regulatory Policy for Genetic Engineering, 1972-1982. This is a good case study of “government making up its mind”. Another interesting case study is the development of US policy concerning ozone depletion; link.

These cases of science and technology policy illustrate two dimensions of the processes through which a government agency “makes up its mind” about a complex issue. There is an analytical component in which the scientific facts and the policy goals and priorities are gathered and assessed. And there is a decision-making component in which these analytical findings are crafted into a decision — a policy, a set of regulations, or a funding program, for example. It is routine in science and technology policy studies to observe that there is commonly a substantial degree of intertwining between factual judgments and political preferences and influences brought to bear by powerful outsiders. (Here is an earlier discussion of these processes; link.)

Ideally we would like to imagine a process of government decision-making that proceeds along these lines: careful gathering and assessment of the best available scientific evidence about an issue through expert specialist panels and sections; careful analysis of the consequences of available policy choices measured against a clear understanding of goals and priorities of the government; and selection of a policy or action that is best, all things considered, for forwarding the public interest and minimizing public harms. Unfortunately, as the experience of government policies concerning climate change in both the Bush administration and the Trump administration illustrates, ideology and private interest distort every phase of this idealized process.

(Philip Tetlock’s Superforecasting: The Art and Science of Prediction offers an interesting analysis of the process of expert factual assessment and prediction. Particularly interesting is his treatment of intelligence estimates.)

Patient safety

An issue which is of concern to anyone who receives treatment in a hospital is the topic of patient safety. How likely is it that there will be a serious mistake in treatment — wrong-site surgery, incorrect medication or radiation dose, exposure to a hospital-acquired infection? The current evidence is alarming. (Martin Makary et al estimate that over 250,000 deaths per year result from medical mistakes — making medical error now the third leading cause of mortality in the United States (link).) And when these events occur, where should we look for assigning responsibility — at the individual providers, at the systems that have been implemented for patient care, at the regulatory agencies responsible for overseeing patient safety?

Medical accidents commonly demonstrate a complex interaction of factors, from the individual provider to the technologies in use to failures of regulation and oversight. We can look at a hospital as a place where caring professionals do their best to improve the health of their patients while scrupulously avoiding errors. Or we can look at it as an intricate system involving the recording and dissemination of information about patients and the administration of procedures to them (surgery, medication, radiation therapy); in this sense a hospital is similar to a factory with multiple intersecting locations of activity. Finally, we can look at it as an organization — a system of division of labor, cooperation, and supervision by large numbers of staff whose joint efforts lead to health and accidents alike. Obviously each of these perspectives is partially correct. Doctors, nurses, and technicians are carefully and extensively trained to diagnose and treat their patients. The technology of the hospital — the digital patient record system, the devices that administer drugs, the surgical robots — can be designed better or worse from a safety point of view. And the social organization of the hospital can be effective and safe, or it can be dysfunctional and unsafe. So all three aspects are relevant both to safe operations and the possibility of chronic lack of safety.

So how should we analyze the phenomenon of patient safety? What factors distinguish high-safety hospitals from low-safety ones? What lessons can be learned from the study of the accidents and mistakes that cumulatively constitute a hospital’s patient safety record?

The view that primarily emphasizes expertise and training of individual practitioners is very common in the healthcare industry, and yet this approach is not particularly useful as a basis for improving the safety of healthcare systems. Skill and expertise are necessary conditions for effective medical treatment; but the other two zones of accident space are probably more important for reducing accidents — the design of treatment systems and the organizational features that coordinate the activities of the various individuals within the system.

Dr. James Bagian is a strong advocate for the perspective of treating healthcare institutions as systems. Bagian considers both the technical characteristics of processes and the organizational forms through which those processes are carried out and monitored. And he is very skilled at teasing out some of the ways in which features of both system and organization lead to avoidable accidents and failures. I recall his description of a safety walkthrough he had done in a major hospital. He said that during the tour he noticed a number of nurses’ stations covered with yellow sticky notes. He observed that this is both a symptom and a cause of an accident-prone organization: it means that individual caregivers were obliged to remind themselves of tasks and exceptions that needed to be observed. Far better would be a set of systems and protocols that made the sticky notes unnecessary. Here is the abstract from a short summary article by Bagian on the current state of patient safety:

Abstract

The traditional approach to patient safety in health care has ranged from reticence to outward denial of serious flaws. This undermines the otherwise remarkable advances in technology and information that have characterized the specialty of medical practice. In addition, lessons learned in industries outside health care, such as in aviation, provide opportunities for improvements that successfully reduce mishaps and errors while maintaining a standard of excellence. This is precisely the call in medicine prompted by the 1999 Institute of Medicine report “To Err Is Human: Building a Safer Health System.” However, to effect these changes, key components of a successful safety system must include: (1) communication, (2) a shift from a posture of reliance on human infallibility (hence “shame and blame”) to checklists that recognize the contribution of the system and account for human limitations, and (3) a cultivation of non-punitive open and/or de-identified/anonymous reporting of safety concerns, including close calls, in addition to adverse events.

(Here is the Institute of Medicine study to which Bagian refers; link.)

Nancy Leveson is an aeronautical and software engineer who has spent most of her career devoted to designing safe systems. Her book Engineering a Safer World: Systems Thinking Applied to Safety is a recent presentation of her theories of systems safety. She applies these approaches to problems of patient safety with several co-authors in “A Systems Approach to Analyzing and Preventing Hospital Adverse Events” (link). Here is the abstract and summary of findings for that article:

Objective:

This study aimed to demonstrate the use of a systems theory-based accident analysis technique in health care applications as a more powerful alternative to the chain-of-event accident models currently underpinning root cause analysis methods.

Method:

A new accident analysis technique, CAST [Causal Analysis based on Systems Theory], is described and illustrated on a set of adverse cardiovascular surgery events at a large medical center. The lessons that can be learned from the analysis are compared with those that can be derived from the typical root cause analysis techniques used today.

Results:

The analysis of the 30 cardiovascular surgery adverse events using CAST revealed the reasons behind unsafe individual behavior, which were related to the design of the system involved and not negligence or incompetence on the part of individuals. With the use of the system-theoretic analysis results, recommendations can be generated to change the context in which decisions are made and thus improve decision making and reduce the risk of an accident.

Conclusions:

The use of a systems-theoretic accident analysis technique can assist in identifying causal factors at all levels of the system without simply assigning blame to either the frontline clinicians or technicians involved. Identification of these causal factors in accidents will help health care systems learn from mistakes and design system-level changes to prevent them in the future.

Crucial in this article is this research group’s effort to identify causes “at all levels of the system without simply assigning blame to either the frontline clinicians or technicians involved”. The key result is this: “The analysis of the 30 cardiovascular surgery adverse events using CAST revealed the reasons behind unsafe individual behavior, which were related to the design of the system involved and not negligence or incompetence on the part of individuals.”

Bagian, Leveson, and others make a crucial point: in order to substantially increase the performance of hospitals and the healthcare system more generally when it comes to patient safety, it will be necessary to extend the focus of safety analysis from individual incidents and agents to the systems and organizations through which these accidents were possible. In other words, attention to systems and organizations is crucial if we are to significantly reduce the frequency of medical and hospital mistakes.

(The Makary et al estimate of 250,000 deaths caused by medical error has been questioned on methodological grounds. See Aaron Carroll’s thoughtful rebuttal (NYT 8/15/16; link).)

Nuclear accidents

(diagrams: Chernobyl reactor before and after)

Nuclear fission is one of the world-changing discoveries of the mid-twentieth century. The atomic bomb projects of the United States led to the atomic bombing of Japan in August 1945, and the hope for limitless electricity brought about the proliferation of a variety of nuclear reactors around the world in the decades following World War II. And, of course, nuclear weapons proliferated to other countries beyond the original circle of atomic powers.

Given the enormous energies associated with fission and the dangerous and toxic properties of radioactive components of fission processes, the possibility of a nuclear accident is a particularly frightening one for the modern public. The world has seen the results of several massive nuclear accidents — Chernobyl and Fukushima in particular — and the devastating results they have had on human populations and the social and economic wellbeing of the regions in which they occurred.

Safety is therefore a paramount priority in the nuclear industry, in research labs and in military and civilian applications alike. So what is the situation of safety in the nuclear sector? Jim Mahaffey’s Atomic Accidents: A History of Nuclear Meltdowns and Disasters: From the Ozark Mountains to Fukushima is a detailed and carefully researched attempt to answer this question. And the information he provides is not reassuring. Beyond the celebrated and well-known disasters at nuclear power plants (Three Mile Island, Chernobyl, Fukushima), Mahaffey describes hundreds of accidents involving reactors, research laboratories, weapons plants, and deployed nuclear weapons that have received far less public attention. These accidents resulted in a very low number of lives lost, but their frequency is alarming. They are indeed “normal accidents” (Perrow, Normal Accidents: Living with High-Risk Technologies). For example:

  • a Japanese fishing boat is contaminated by fallout from Castle Bravo test of hydrogen bomb; lots of radioactive fish at the markets in Japan (March 1, 1954) (kl 1706)
  • one MK-6 atomic bomb is dropped on Mars Bluff, South Carolina, after a crew member accidentally pulled the emergency bomb release handle (February 5, 1958) (kl 5774)
  • Fermi 1 liquid sodium plutonium breeder reactor experiences fuel meltdown during startup trials near Detroit (October 4, 1966) (kl 4127)

Mahaffey also provides detailed accounts of the most serious nuclear accidents and meltdowns of the past forty years: Three Mile Island, Chernobyl, and Fukushima.

The safety and control of nuclear weapons is of particular interest. Here is Mahaffey’s summary of “Broken Arrow” events — the loss of atomic and fusion weapons:

Did the Air Force ever lose an A-bomb, or did they just misplace a few of them for a short time? Did they ever drop anything that could be picked up by someone else and used against us? Is humanity going to perish because of poisonous plutonium spread that was snapped up by the wrong people after being somehow misplaced? Several examples will follow. You be the judge.

Chuck Hansen [U.S. Nuclear Weapons – The Secret History] was wrong about one thing. He counted thirty-two “Broken Arrow” accidents. There are now sixty-five documented incidents in which nuclear weapons owned by the United States were lost, destroyed, or damaged between 1945 and 1989. These bombs and warheads, which contain hundreds of pounds of high explosive, have been abused in a wide range of unfortunate events. They have been accidentally dropped from high altitude, dropped from low altitude, crashed through the bomb bay doors while standing on the runway, tumbled off a fork lift, escaped from a chain hoist, and rolled off an aircraft carrier into the ocean. Bombs have been abandoned at the bottom of a test shaft, left buried in a crater, and lost in the mud off the coast of Georgia. Nuclear devices have been pounded with artillery of a foreign nature, struck by lightning, smashed to pieces, scorched, toasted, and burned beyond recognition. Incredibly, in all this mayhem, not a single nuclear weapon has gone off accidentally, anywhere in the world. If it had, the public would know about it. That type of accident would be almost impossible to conceal. (kl 5527)

There are a few common threads in the stories of accident and malfunction that Mahaffey provides. First, there are failures of training and knowledge on the part of front-line workers. The physics of nuclear fission is often counter-intuitive, and the idea of critical mass does not fully capture the danger of a quantity of fissionable material. The geometry in which the material is stored makes a critical difference to whether it goes critical. Fissionable material is often transported and manipulated in liquid solution; and the shape and configuration of the vessel in which the solution is held makes a difference to the probability of exponential growth of neutron emission — leading to runaway fission of the material. Mahaffey documents accidents that occurred in nuclear materials processing plants when plant workers applied what they knew from industrial plumbing to their efforts to solve basic shop-floor problems. All too often the result was a flash of blue light and the release of a great deal of heat and radioactive material.
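The counter-intuitive physics can be stated compactly. In rough textbook terms (this schematic is mine, not Mahaffey’s formulation), an assembly is critical when the effective neutron multiplication factor reaches one:

    k_{\mathrm{eff}} = \frac{\text{rate of neutron production}}{\text{rate of absorption} + \text{rate of leakage}},
    \qquad k_{\mathrm{eff}} \ge 1 \;\Rightarrow\; \text{criticality}

Production scales with the volume of fissile solution while leakage scales with its surface area, so transferring the same liquid from a tall, narrow “favorable geometry” vessel into a compact, roughly spherical one reduces leakage and can push k_eff past one. That is why applying ordinary plumbing intuitions to a processing line could end in the blue flash Mahaffey describes.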

Second, there is a fault at the opposite end of the knowledge spectrum — the tendency of expert engineers and scientists to believe that they can solve complicated reactor problems on the fly. This turned out to be a critical problem at Chernobyl (kl 6859).

The most difficult problem to handle is that the reactor operator, highly trained and educated with an active and disciplined mind, is liable to think beyond the rote procedures and carefully scheduled tasks. The operator is not a computer, and he or she cannot think like a machine. When the operator at NRX saw some untidy valve handles in the basement, he stepped outside the procedures and straightened them out, so that they were all facing the same way. (kl 2057)

There are also clear examples of inappropriate supervision in the accounts shared by Mahaffey. Here is an example from Chernobyl.

[Deputy chief engineer] Dyatlov was enraged. He paced up and down the control panel, berating the operators, cursing, spitting, threatening, and waving his arms. He demanded that the power be brought back up to 1,500 megawatts, where it was supposed to be for the test. The operators, Toptunov and Akimov, refused on grounds that it was against the rules to do so, even if they were not sure why.

Dyatlov turned on Toptunov. “You lying idiot! If you don’t increase power, Tregub will!”

Tregub, the Shift Foreman from the previous shift, was officially off the clock, but he had stayed around just to see the test. He tried to stay out of it.

Toptunov, in fear of losing his job, started pulling rods. By the time he had wrestled it back to 200 megawatts, 205 of the 211 control rods were all the way out. In this unusual condition, there was danger of an emergency shutdown causing prompt supercriticality and a resulting steam explosion. At 1:22:30 a.m., a read-out from the operations computer advised that the reserve reactivity was too low for controlling the reactor, and it should be shut down immediately. Dyatlov was not worried. “Another two or three minutes, and it will be all over. Get moving, boys!” (kl 6887)

This was the turning point in the disaster.

A related fault is the intrusion of political and business interests into the design and conduct of high-risk nuclear actions. Leaders want a given outcome without understanding the technical details of the processes they are demanding; subordinates like Toptunov are eventually cajoled or coerced into taking the problematic actions. The persistence of advocates for liquid sodium breeder reactors represents a higher-level example of the same fault. Associated with this role of political and business interests is an impulse towards secrecy and concealment when accidents occur and deliberate understatement of the public dangers created by an accident — a fault amply demonstrated in the Fukushima disaster.

Atomic Accidents provides a fascinating history of events of which most of us are unaware. The book is not primarily intended to offer an account of the causes of these accidents, but rather of the ways in which they unfolded and the consequences they had for human welfare. (Generally speaking, his view is that nuclear accidents in North America and Western Europe have had remarkably few human casualties.) And many of the accidents he describes are exactly the sorts of failures that are common in all large-scale industrial and military processes.

(Large-scale technology failure has come up frequently here. See these posts for analysis of some of the organizational causes of technology failure (link, link, link).)
