Organizations and dysfunction

A recurring theme in recent months in Understanding Society is organizational dysfunction and the organizational causes of technology failure. Helmut Anheier’s volume When Things Go Wrong: Organizational Failures and Breakdowns is highly relevant to this topic, and it makes for very interesting reading. The volume includes contributions by a number of leading scholars in the sociology of organizations.

And yet the volume seems to miss the mark in some important ways. For one thing, it is unduly focused on the question of “mortality” of firms and other organizations. Bankruptcy and organizational death are frequent synonyms for “failure” here. This frame is evident in the summary the introduction offers of existing approaches in the field: organizational aspects, political aspects, cognitive aspects, and structural aspects. All bring us back to the causes of extinction and bankruptcy in a business organization. Further, the approach highlights the importance of internal conflict within an organization as a source of eventual failure. But it gives no insight into the internal structure and workings of the organization itself, the ways in which behavior and internal structure function to systematically produce certain kinds of outcomes that we can identify as dysfunctional.

Significantly, however, dysfunction does not routinely lead to the death of a firm. (Seibel’s contribution in the volume raises this possibility, which Seibel refers to as “successful failures”). This is a familiar observation from political science: what looks dysfunctional from the outside may be perfectly well tuned to a different set of interests (for example, in Robert Bates’s account of pricing boards in Africa in Markets and States in Tropical Africa: The Political Basis of Agricultural Policies). In their introduction to this volume Anheier and Moulton refer to this possibility as a direction for future research: “successful for whom, a failure for whom?” (14).

The volume tends to look at success and failure in terms of profitability and the satisfaction of stakeholders. But we can define dysfunction in a more granular way by linking characteristics of performance to the perceived “purposes and goals” of the organization. A regulatory agency exists in order to effectively protect the health and safety of the public. In this kind of case, failure is any outcome in which the agency flagrantly and avoidably fails to prevent a serious harm — release of radioactive material, contamination of food, a building fire resulting from defects that should have been detected by inspection. If the agency prevents such harms less well than it might, then it is dysfunctional.

Why do dysfunctions persist in organizations? It is possible to identify several possible causes. The first is that a dysfunction from one point of view may well be a desirable feature from another point of view. The lack of an authoritative safety officer in a chemical plant may be thought to be dysfunctional if we are thinking about the safety of workers and the public as a primary goal of the plant (link). But if profitability and cost-savings are the primary goals from the point of view of the stakeholders, then the cost-benefit analysis may favor the lack of the safety officer.

Second, there may be internal failures within an organization that are beyond the reach of any executive or manager who might want to correct them. The complexity and loose-coupling of large organizations militate against house cleaning on a large scale.

Third, there may be powerful factions within an organization for whom the “dysfunctional” feature is an important component of their own set of purposes and goals. Fligstein and McAdam argue for this kind of disaggregation with their theory of strategic action fields (link). By disaggregating purposes and goals to the various actors who figure in the life cycle of the organization – founders, stakeholders, executives, managers, experts, frontline workers, labor organizers – it is possible to see the organization as a whole as simply the aggregation of the multiple actions and purposes of the actors within and adjacent to the organization. This aggregation does not imply that the organization is carefully adjusted to serve the public good or to maximize efficiency or to protect the health and safety of the public. Rather, it suggests that the resultant organizational structure serves the interests of the various actors to the fullest extent each actor is able to manage.

Consider the account offered by Thomas Misa of the decline of the steel industry in the United States in the first part of the twentieth century in A Nation of Steel: The Making of Modern America, 1865-1925. Misa’s account seems to point to a massive dysfunction in the steel corporations of the inter-war period: a deliberate and sustained failure to invest in research on new steel technologies in metallurgy and production. Misa argues that the great steel corporations — US Steel in particular — failed to remain competitive in their industry in the early years of the twentieth century because management persistently pursued short-term profits and financial advantage through domination of the market, relying on that dominance rather than on research and development as the source of revenue and profits.

In short, U.S. Steel was big but not illegal. Its price leadership resulted from its complete dominance in the core markets for steel…. Indeed, many steelmakers had grown comfortable with U.S. Steel’s overriding policy of price and technical stability, which permitted them to create or develop markets where the combine chose not to compete, and they testified to the court in favor of the combine. The real price of stability … was the stifling of technological innovation. (255)

The result was that the modernized steel industries in Europe leap-frogged the previous US advantage, eventually leaving the United States with unviable production technology.

At the periphery of the newest and most promising alloy steels, dismissive of continuous-sheet rolling, actively hostile to new structural shapes, a price leader but not a technical leader: this was U.S. Steel. What was the company doing with technological innovation? (257)

Misa is interested in arriving at a better way of understanding the imperatives leading to technical change — better than neoclassical economics and labor history. His solution highlights the changing relationships that developed between industrial consumers and producers in the steel industry.

We now possess a series of powerful insights into the dynamics of technology and social change. Together, these insights offer the realistic promise of being better able, if we choose, to modulate the complex process of technical change. We can now locate the range of sites for technical decision making, including private companies, trade organizations, engineering societies, and government agencies. We can suggest a typology of user-producer interactions, including centralized, multicentered, decentralized, and direct-consumer interactions, that will enable certain kinds of actions while constraining others. We can even suggest a range of activities that are likely to effect technical change, including standards setting, building and zoning codes, and government procurement. Furthermore, we can also suggest a range of strategies by which citizens supposedly on the “outside” may be able to influence decisions supposedly made on the “inside” about technical change, including credibility pressure, forced technology choice, and regulatory issues. (277-278)

In fact Misa places the dynamic of the relationship between producers and large consumers at the center of the imperatives towards technological innovation:

In retrospect, what was wrong with U.S. Steel was not its size or even its market power but its policy of isolating itself from the new demands from users that might have spurred technical change. The resulting technological torpidity that doomed the industry was not primarily a matter of industrial concentration, outrageous behavior on the part of white- and blue-collar employees, or even dysfunctional relations among management, labor, and government. What went wrong was the industry’s relations with its consumers. (278)

This relative “callous treatment of consumers” was profoundly harmful when international competition gave large industrial users of steel a choice. When US Steel had market dominance, large industrial users had little choice; but this situation changed after WWII. “This favorable balance of trade eroded during the 1950s as German and Japanese steelmakers rebuilt their bombed-out plants with a new production technology, the basic oxygen furnace (BOF), which American steelmakers had dismissed as unproven and unworkable” (279). Misa quotes a president of a small steel producer: “The Big Steel companies tend to resist new technologies as long as they can … They only accept a new technology when they need it to survive” (280).

*****

Here is an interesting table from Misa’s book that sheds light on some of the economic and political history in the United States since the post-war period, leading right up to the populist politics of 2016 in the Midwest. This chart provides mute testimony to the decline of the rustbelt industrial cities. Michigan, Illinois, Ohio, Pennsylvania, and western New York account for 83% of the steel production on this table. When American producers lost the competitive battle for steel production in the 1980s, the Rustbelt suffered disproportionately, and eventually blue collar workers lost their places in the affluent economy.

Ethical disasters

Many examples of technical disasters have been provided in Understanding Society, along with efforts to understand the systemic dysfunctions that contributed to their occurrence. Frequently those dysfunctions fall within the business organizations that manage large, complex technology systems, and often enough those dysfunctions derive from the imperatives of profit-maximization and cost avoidance. Andrew Hopkins’ account of the business decisions contributing to the explosion of the ESSO gas plant in Longford, Australia illustrates this dynamic in Lessons from Longford: The ESSO Gas Plant Explosion. The withdrawal of engineering experts from the plant to a remote corporate headquarters was a cost-saving move that, according to Hopkins, contributed to the eventual disaster.

A topic we have not addressed in detail is the occurrence of ethical disasters — terrible outcomes that are the result of deliberate choices by decision-makers within an organization that are, upon inspection, clearly and profoundly unethical and immoral. The collapse of Enron is probably one such disaster; the Bernie Madoff scandal is another. But it seems increasingly likely that Purdue Pharma and the Sackler family’s business leadership of the corporation represent another major example. Recent reporting by ProPublica, the Atlantic, and the New York Times relies on documents collected in the course of litigation against Purdue Pharma and members of the Sackler family in Massachusetts and New York. (Here are the unredacted court documents on which much of this reporting depends; link.) These documents make it hard to avoid the ethical conclusion that the Sackler family actively participated in business strategies for their company Purdue Pharma that treated the OxyContin addiction epidemic as an expanding business opportunity. And this seems to be a huge ethical breach.

This set of issues has not yet been resolved, and it rests with the legal system to establish the facts and the questions of legal culpability. But as citizens we all have the ability to read the documents and make our own judgments about the ethical status of the decisions and strategies adopted by the family and the corporation over the course of this disaster. The point here is simply to ask these key questions: how should we think about the ethical status of decisions and strategies of owners and managers that lead to terrible harms, and harms that could reasonably have been anticipated? How should a company or a set of owners respond to a catastrophe in which several hundred thousand people have died, and which was facilitated in part by deliberate marketing efforts by the company and the owners? How should the company have adjusted its business when it became apparent that its product was creating addiction and widespread death?

First, a few details from the current reporting about the case. Here are a few paragraphs from the ProPublica story (January 30, 2019):

Not content with billions of dollars in profits from the potent painkiller OxyContin, its maker explored expanding into an “attractive market” fueled by the drug’s popularity — treatment of opioid addiction, according to previously secret passages in a court document filed by the state of Massachusetts.

In internal correspondence beginning in 2014, Purdue Pharma executives discussed how the sale of opioids and the treatment of opioid addiction are “naturally linked” and that the company should expand across “the pain and addiction spectrum,” according to redacted sections of the lawsuit by the Massachusetts attorney general. A member of the billionaire Sackler family, which founded and controls the privately held company, joined in those discussions and urged staff in an email to give “immediate attention” to this business opportunity, the complaint alleges. (ProPublica 1/30/2019; link)

The NYT story reproduces a diagram included in the New York court filings that illustrates the company’s business strategy of “Project Tango” — the idea that the company could make money both from sales of its pain medication and from sales of treatments for the addiction it caused.

Further, according to the reporting provided by the NYT and ProPublica, members of the Sackler family used their positions on the Purdue Pharma board to press for more aggressive business exploitation of the opportunities described here:

In 2009, two years after the federal guilty plea, Mortimer D.A. Sackler, a board member, demanded to know why the company wasn’t selling more opioids, email traffic cited by Massachusetts prosecutors showed. In 2011, as states looked for ways to curb opioid prescriptions, family members peppered the sales staff with questions about how to expand the market for the drugs…. The family’s statement said they were just acting as responsible board members, raising questions about “business issues that were highly relevant to doctors and patients.” (NYT 4/1/2019; link)

From the 1/30/2019 ProPublica story, and based on more court documents:

Citing extensive emails and internal company documents, the redacted sections allege that Purdue and the Sackler family went to extreme lengths to boost OxyContin sales and burnish the drug’s reputation in the face of increased regulation and growing public awareness of its addictive nature. Concerns about doctors improperly prescribing the drug, and patients becoming addicted, were swept aside in an aggressive effort to drive OxyContin sales ever higher, the complaint alleges. (link)

And ProPublica underlines the fact that prosecutors believe that family members have personal responsibility for the management of the corporation:

The redacted paragraphs leave little doubt about the dominant role of the Sackler family in Purdue’s management. The five Purdue directors who are not Sacklers always voted with the family, according to the complaint. The family-controlled board approves everything from the number of sales staff to be hired to details of their bonus incentives, which have been tied to sales volume, the complaint says. In May 2017, when longtime employee Craig Landau was seeking to become Purdue’s chief executive, he wrote that the board acted as “de-facto CEO.” He was named CEO a few weeks later. (link)

The courts will resolve the question of legal culpability. The question here is one of the ethical standards that should govern the actions and strategies of owners and managers. Here are several simple ethical observations that seem relevant to this case.

First, it is obvious that pain medication is a good thing when used appropriately under the supervision of expert and well-informed physicians. Pain management enhances quality of life for people experiencing pain.

Second, addiction is plainly a bad thing, and it is worse when it leads to predictable death or disability for its victims. A company has a duty of concern for the quality of life of human beings affected by its product, and this extends to a duty to take all possible precautions to minimize the likelihood that human beings will be harmed by the product.

Third, given the risks of addiction that were known about this product, the company has a moral obligation to treat its relations with physicians and other health providers as occasions for accurate and truthful education about the product, not opportunities for persuasion, inducement, and marketing. Rather than a sales force of representatives whose incomes are determined by the quantity of the product they sell, the company has a moral obligation to train and incentivize its representatives to function as honest educators providing full information about the risks as well as the benefits of the product. And, of course, it has an obligation not to immerse itself in the dynamics of “conflict of interest” discussed elsewhere (link) — this means there should be no incentives provided to the physicians who agree to prescribe the product.

Fourth, it might be argued that the profits generated by the business of a given pharmaceutical product should be used proportionally to ameliorate the unavoidable harms it creates. Rather than making billions in profits from the sale of the product, and then additional hundreds of millions on products that offset the addictions and illness created by dissemination of the product (this was the plan advanced as “Project Tango”), the company and its owners should hold themselves accountable for the harms created by their product. (That is, the social and human costs of addiction should not be treated as “externalities” or even additional sources of profit for the company.)

Finally, there is an important question at a more individual scale. How should we think about super-rich owners of a company who seem to lose sight entirely of the human tragedies created by their company’s product and simply demand more profits, more timely distribution of the profits, and more control of the management decisions of the company? These are individual human beings, and surely they have a responsibility to think rigorously about their own moral responsibilities. The documents released in these court proceedings seem to display an amazing blindness to moral responsibility on the part of some of these owners.

(There are other important cases illustrating the clash between moral responsibility, corporate profits, and corporate decision-making, having to do with the likelihood of collaboration between American companies, their German and Polish subsidiaries, and the Nazi regime during World War II. Edwin Black argues in IBM and the Holocaust: The Strategic Alliance Between Nazi Germany and America’s Most Powerful Corporation (Expanded Edition) that the US-based computer company provided important support for Germany’s extermination strategy. Here is a 2002 piece from the Guardian on the update of Black’s book providing more documentary evidence for this claim; link. And here is a piece from the Washington Post on American car companies in Nazi Germany; link.)

(Stephen Arbogast’s Resisting Corporate Corruption: Cases in Practical Ethics From Enron Through The Financial Crisis is an interesting source on corporate ethics.)

Social ontology of government

I am currently writing a book on the topic of the “social ontology of government”. My goal is to provide a short treatment of the social mechanisms and entities that constitute the workings of government. The book will ask some important basic questions: what kind of thing is “government”? (I suggest it is an agglomeration of organizations, social networks, and rules and practices, with no overriding unity.) What does government do? (I simplify and suggest that governments create the conditions of social order and formulate policies and rules aimed at bringing about various social priorities that have been selected through the governmental process.) How does government work — what do we know about the social and institutional processes that constitute its metabolism? (How do government entities make decisions, gather needed information, and enforce the policies they construct?)

In my treatment of the topic of the workings of government I treat the idea of “dysfunction” with the same seriousness as I do topics concerning the effective and functional aspects of governmental action. Examples of dysfunctions include principal-agent problems, conflict of interest, loose coupling of agencies, corruption, bribery, and the corrosive influence of powerful outsiders. It is interesting to me that this topic — the ontology of government — has unexpectedly crossed over with another of my interests, the organizational causes of large-scale accidents.

If there are guiding perspectives in my treatment, they are eclectic: Neil Fligstein and Doug McAdam, Manuel DeLanda, Nicos Poulantzas, Charles Perrow, Nancy Leveson, and Lyndon B. Johnson, for example.

In light of these interests, I find the front page of the New York Times on March 28, 2019 to be a truly fascinating amalgam of the social ontology of government, with a heavy dose of dysfunction. Every story on the front page highlights one feature or another of the workings and failures of government. Let’s briefly name these features. (The item numbers flow roughly from upper right to lower left.)

Item 1 is the latest installment of the Boeing 737 MAX story. Failures of regulation and a growing regime of “collaborative regulation” in which the FAA delegates much of the work of certification of aircraft safety to the manufacturer appear at this early stage to be a part of the explanation of this systems failure. This was the topic of a recent post (link).

Items 2 and 3 feature the processes and consequences of failed government — the social crisis in Venezuela created in part by the breakdown of legitimate government, and the fundamental and continuing inability of the British government and its prime minister to arrive at a rational and acceptable policy on an issue of the greatest importance for the country. Given that decision-making and effective administration of law are fundamental functions of government, these two examples are key contributions to the ontology of government. The Brexit story also highlights the dysfunctions that flow from the shameful self-dealing of politicians and leaders who privilege their own political interests over the public good. Boris Johnson, this one’s for you!

Item 4 turns us to the dynamics of presidential political competition. This item falls on the favorable side of the ledger, illustrating the important role that a strong independent press has in helping to inform the public about the past performance and behavior of candidates for high office. It is an important example of depth journalism and provides the public with accurate, nuanced information about an appealing candidate with a policy history as mayor that many may find unpalatable. The story also highlights the role that non-governmental organizations have in politics and government action, in this instance the ACLU.

Item 5 brings us inside the White House and gives the reader a look at the dynamics and mechanisms through which a small circle of presidential advisors are able to determine a particular approach to a policy issue that they favor. It displays the vulnerability the office of president shows to the privileged insiders’ advice concerning policies they personally favor. Whether it is Mick Mulvaney, acting chief of staff to the current president, or Robert McNamara’s advice to JFK and LBJ leading to escalation in Vietnam, the process permits ideologically committed insiders to wield extraordinary policy power.

Item 6 turns to the legislative process, this time in the New Jersey legislature, on the topic of the legalization of marijuana. This story too falls on the positive side of the “function-dysfunction” spectrum, in that it describes a fairly rational and publicly visible process of fact-gathering and policy assessment by a number of New Jersey legislators, leading to the withdrawal of the legislation.

Item 7 turns to the mechanisms of private influence on government, in a particularly unsavory but revealing way. The story reveals details of a high-end dinner “to pay tribute to the guest of honor, Gov. Andrew M. Cuomo.” The article notes that “Lobbyists told their clients that the event would be a good thing to go to”, at a minimum ticket price of $25,000 per couple. This story connects the dots between private interest and efforts to influence governmental policy. In this case the dots are not very far apart.

With a little effort all these items could be mapped onto the diagram of the interconnections within and across government and external social groups provided above.

What the boss wants to hear …

According to David Halberstam in his outstanding history of the war in Vietnam, The Best and the Brightest, a prime cause of disastrous decision-making by Presidents Kennedy and Johnson was an institutional imperative in the Defense Department to come up with a set of facts that conformed to what the President wanted to hear. Robert McNamara and McGeorge Bundy were among the highest-level miscreants in Halberstam’s account; they were determined to craft an assessment of the situation on the ground in Vietnam that conformed best with their strategic advice to the President.

Ironically, a very similar dynamic led to one of modern China’s greatest disasters, the Great Leap Forward famine in 1959. The Great Helmsman was certain that collective agriculture would be vastly more productive than private agriculture; and following the collectivization of agriculture, party officials in many provinces obliged this assumption by reporting inflated grain statistics throughout 1958 and 1959. The result was a famine that led to at least twenty million excess deaths during a two-year period as the central state shifted resources away from agriculture (Frank Dikötter, Mao’s Great Famine: The History of China’s Most Devastating Catastrophe, 1958-62).

More mundane examples are available as well. When information about possible sexual harassment in a given department is suppressed because “it won’t look good for the organization” and “the boss will be unhappy”, the organization is on a collision course with serious problems. When concerns about product safety or reliability are suppressed within the organization for similar reasons, the results can be equally damaging, to consumers and to the corporation itself. General Motors, Volkswagen, and Michigan State University all seem to have suffered from these deficiencies of organizational behavior. This is a serious cause of organizational mistakes and failures. It is impossible to make wise decisions — individual or collective — without accurate and truthful information from the field. And yet the knowledge of higher-level executives depends upon the truthful and full reporting of subordinates, who sometimes have career incentives that work against honesty.

So how can this unhappy situation be avoided? Part of the answer has to do with the behavior of the leaders themselves. It is important for leaders to explicitly and implicitly invite the truth — whether it is good news or bad news. Subordinates must be encouraged to be forthcoming and truthful; and bearers of bad news must not be subject to retaliation. Boards of directors, both private and public, need to make clear their own expectations on this score as well: that they expect leading executives to invite and welcome truthful reporting, and that they expect individuals throughout the organization to provide truthful reporting. A culture of honesty and transparency is a powerful antidote to the disease of fabrications to please the boss.

Anonymous hotlines and formal protection of whistle-blowers are other institutional arrangements that lead to greater honesty and transparency within an organization. These avenues have the advantage of being largely outside the control of the upper executives, and therefore can serve as a somewhat independent check on dishonest reporting.

A reliable practice of accountability is also a deterrent to dishonest or partial reporting within an organization. The truth eventually comes out — whether about sexual harassment, about hidden defects in a product, or about workplace safety failures. When boards of directors and organizational policies make it clear that there will be negative consequences for dishonest behavior, this gives individuals an ongoing prudential incentive to honor their duties of honesty within the organization.

This topic falls within the broader question of how individual behavior throughout an organization has the potential for giving rise to important failures that harm the public and harm the organization itself. 


How organizations adapt

Organizations do things; they depend upon the coordinated efforts of numerous individuals; and they exist in environments that affect their ongoing success or failure. Moreover, organizations are to some extent plastic: the practices and rules that make them up can change over time. Sometimes these changes happen as the result of deliberate design choices by individuals inside or outside the organization; so a manager may alter the rules through which decisions are made about hiring new staff in order to improve the quality of work. And sometimes they happen through gradual processes over time that no one is specifically aware of. The question arises, then: do organizations evolve toward higher functioning based on signals from the environments in which they live, or is organizational change stochastic, without a gradient of change towards more effective functioning? Do changes within an organization add up over time to improved functioning? What kinds of social mechanisms might bring about such an outcome?

One way of addressing this topic is to consider organizations as mid-level social entities that are potentially capable of adaptation and learning. An organization has identifiable internal processes of functioning as well as a delineated boundary of activity. It has a degree of control over its functioning. And it is situated in an environment that signals differential success/failure through a variety of means (profitability, success in gaining adherents, improvement in market share, number of patents issued, …). So the environment responds favorably or unfavorably, and change occurs.

Is there anything in this specification of the structure, composition, and environmental location of an organization that suggests the possibility or likelihood of adaptation over time in the direction of improvement of some measure of organizational success? Do institutions and organizations get better as a result of their interactions with their environments and their internal structure and actors?

There are a few possible social mechanisms that would support the possibility of adaptation towards higher functioning. One is the fact that purposive agents are involved in maintaining and changing institutional practices. Those agents are capable of perceiving inefficiencies and potential gains from innovation, and are sometimes in a position to introduce appropriate innovations. This is true at various levels within an organization, from the supervisor of a custodial staff to a vice president for marketing to a CEO. If the incentives presented to these agents are aligned with the important needs of the organization, then we can expect that they will introduce innovations that enhance functioning. So one mechanism through which we might expect that organizations will get better over time is the fact that some agents within an organization have the knowledge and power necessary to enact changes that will improve performance, and they sometimes have an interest in doing so. In other words, there is a degree of intelligent intentionality within an organization that might work in favor of enhancement.

This line of thought should not be over-emphasized, however, because there are competing forces and interests within most organizations. Previous posts have focused on current organizational theory based on the idea of a “strategic action field” of insiders and outsiders who determine the activities of the organization (Fligstein and McAdam, Crozier; link, link). This framework suggests that the structure and functioning of an organization is not wholly determined by a single intelligent actor (“the founder”), but is rather the temporally extended result of interactions among actors in the pursuit of diverse aims. This heterogeneity of purposive actions by actors within an institution means that the direction of change is indeterminate; it is possible that the coalitions that form will bring about positive change, but the reverse is possible as well.

And in fact, many authors and participants have pointed out that it is often enough not the case that the agents’ interests are aligned with the priorities and needs of the organization. Jack Knight offers a persuasive critique of the idea that organizations and institutions tend to increase in their ability to provide collective benefits in Institutions and Social Conflict. CEOs who have a financial interest in a rapid stock price increase may take steps that worsen functioning for short-term market gain; supervisors may avoid work-flow innovations because they don’t want the headache of an extended change process; vice presidents may deny information to other divisions in order to enhance appreciation of the efforts of their own division. Here is a short description from Knight’s book of the way that institutional adjustment occurs as a result of conflict among players of unequal powers:

Individual bargaining is resolved by the commitments of those who enjoy a relative advantage in substantive resources. Through a series of interactions with various members of the group, actors with similar resources establish a pattern of successful action in a particular type of interaction. As others recognize that they are interacting with one of the actors who possess these resources, they adjust their strategies to achieve their best outcome given the anticipated commitments of others. Over time rational actors continue to adjust their strategies until an equilibrium is reached. As this becomes recognized as the socially expected combination of equilibrium strategies, a self-enforcing social institution is established. (Knight, 143)

A very different possible mechanism is unit selection, where more successful innovations or firms survive and less successful innovations and firms fail. This is the premise of the evolutionary theory of the firm (Nelson and Winter, An Evolutionary Theory of Economic Change). In a competitive market, firms with low internal efficiency will have a difficult time competing on price with more efficient firms; so these low-efficiency firms will go out of business occasionally. Here the question of “units of selection” arises: is it firms over which selection operates, or is it lower-level innovations that are the object of selection?
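
To make the selection mechanism concrete, here is a minimal toy simulation in Python. It is a hedged sketch of the general idea rather than Nelson and Winter's actual model: the population size, exit fraction, and imitation noise are invented parameters, and each firm is reduced to a single efficiency score. The least efficient firms exit each period and are replaced by entrants that imperfectly imitate survivors, so average efficiency drifts upward through selection alone, even though no incumbent firm ever improves.

    import random

    random.seed(42)

    def simulate(n_firms=100, periods=50, exit_fraction=0.1, noise=0.05):
        # Each firm is represented only by an efficiency score in [0, 1].
        firms = [random.uniform(0.0, 1.0) for _ in range(n_firms)]
        for _ in range(periods):
            firms.sort()
            n_exit = int(exit_fraction * n_firms)
            survivors = firms[n_exit:]  # the least efficient firms go out of business
            # Entrants imitate a randomly chosen survivor, imperfectly.
            entrants = [min(1.0, max(0.0, random.choice(survivors) + random.gauss(0, noise)))
                        for _ in range(n_exit)]
            firms = survivors + entrants
        return sum(firms) / len(firms)

    print("mean efficiency after selection:", round(simulate(), 3))

Whether the "unit" being selected here is the firm or the routines it embodies is exactly the units-of-selection question raised above; the sketch treats the firm as the unit only for simplicity.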

Geoffrey Hodgson provides a thoughtful review of this set of theories here, part of what he calls “competence-based theories of the firm”. Here is Hodgson’s diagram of the relationships that exist among several different approaches to the study of the firm.

The market mechanism does not work very well as a selection mechanism for some important categories of organizations — government agencies, legislative systems, or non-profit organizations. This is so because the criterion of selection is “profitability / efficiency within a competitive market”; and government and non-profit organizations are not importantly subject to the workings of a market.

In short, the answer to the fundamental question here is mixed. There are factors that unquestionably work to enhance effectiveness in an organization. But these factors are weak and defeasible, and the countervailing factors (internal conflict, divided interests of actors, slackness of corporate marketplace) leave open the possibility that institutions change but they do not evolve in a consistent direction. And the glaring dysfunctions that have afflicted many organizations, both corporate and governmental, make this conclusion even more persuasive. Perhaps what demands explanation is the rare case where an organization achieves a high level of effectiveness and consistency in its actions, rather than the many cases that come to mind of dysfunctional organizational activity.

(The examples of organizational dysfunction that come to mind are many — the failures of nuclear regulation of the civilian nuclear industry (Perrow, The Next Catastrophe: Reducing Our Vulnerabilities to Natural, Industrial, and Terrorist Disasters); the failure of US anti-submarine warfare in World War II (Cohen, Military Misfortunes: The Anatomy of Failure in War); and the failure of chemical companies to ensure safe operations of their plants (Shrivastava, Bhopal: Anatomy of Crisis). Here is an earlier post that addresses some of these examples; link. And here are several earlier posts on the topic of institutional change and organizational behavior; link, link.)

System safety engineering and the Deepwater Horizon

The Deepwater Horizon oil rig explosion, fire, and uncontrolled release of oil into the Gulf is a disaster of unprecedented magnitude.  This disaster in the Gulf of Mexico appears to be more serious in objective terms than the Challenger space shuttle disaster in 1986 — in terms both of immediate loss of life and in terms of overall harm created. And sadly, it appears likely that the accident will reveal equally severe failures of management of enormously hazardous processes, defects in the associated safety engineering analysis, and inadequacies of the regulatory environment within which the activity took place.  The Challenger disaster fundamentally changed the ways that we thought about safety in the aerospace field.  It is likely that this disaster too will force radical new thinking and new procedures concerning how to deal with the inherently dangerous processes associated with deep-ocean drilling.

Nancy Leveson is an important expert in the area of systems safety engineering, and her book, Safeware: System Safety and Computers, is a genuinely important contribution.  Leveson led the investigation of the role that software design might have played in the Challenger disaster (link).  Here is a short, readable white paper of hers on system safety engineering (link) that is highly relevant to the discussions that will need to occur about deep-ocean drilling.  The paper does a great job of laying out how safety has been analyzed in several high-hazard industries, and presents a set of basic principles for systems safety design.  She discusses aviation, the nuclear industry, military aerospace, and the chemical industry; and she points out some important differences across industries when it comes to safety engineering.  Here is an instructive description of the safety situation in military aerospace in the 1950s and 1960s:

Within 18 months after the fleet of 71 Atlas F missiles became operational, four blew up in their silos during operational testing. The missiles also had an extremely low launch success rate.  An Air Force manual describes several of these accidents: 

     An ICBM silo was destroyed because the counterweights, used to balance the silo elevator on the way up and down in the silo, were designed with consideration only to raising a fueled missile to the surface for firing. There was no consideration that, when you were not firing in anger, you had to bring the fueled missile back down to defuel. 

     The first operation with a fueled missile was nearly successful. The drive mechanism held it for all but the last five feet when gravity took over and the missile dropped back. Very suddenly, the 40-foot diameter silo was altered to about 100-foot diameter. 

     During operational tests on another silo, the decision was made to continue a test against the safety engineer’s advice when all indications were that, because of high oxygen concentrations in the silo, a catastrophe was imminent. The resulting fire destroyed a missile and caused extensive silo damage. In another accident, five people were killed when a single-point failure in a hydraulic system caused a 120-ton door to fall. 

     Launch failures were caused by reversed gyros, reversed electrical plugs, bypass of procedural steps, and by management decisions to continue, in spite of contrary indications, because of schedule pressures. (from the Air Force System Safety Handbook for Acquisition Managers, Air Force Space Division, January 1984)

Leveson’s illustrations from the history of these industries are fascinating.  But even more valuable are the principles of safety engineering that she recapitulates.  These principles seem to have many implications for deep-ocean drilling and associated technologies and systems.  Here is her definition of systems safety:

System safety uses systems theory and systems engineering approaches to prevent foreseeable accidents and to minimize the result of unforeseen ones.  Losses in general, not just human death or injury, are considered. Such losses may include destruction of property, loss of mission, and environmental harm. The primary concern of system safety is the management of hazards: their identification, evaluation, elimination, and control through analysis, design and management procedures.

Here are several fundamental principles of designing safe systems that she discusses:
  • System safety emphasizes building in safety, not adding it on to a completed design.
  • System safety deals with systems as a whole rather than with subsystems or components.
  • System safety takes a larger view of hazards than just failures.
  • System safety emphasizes analysis rather than past experience and standards.
  • System safety emphasizes qualitative rather than quantitative approaches.
  • Recognition of tradeoffs and conflicts.
  • System safety is more than just system engineering.

And here is an important summary observation about the complexity of safe systems:

Safety is an emergent property that arises at the system level when components are operating together. The events leading to an accident may be a complex combination of equipment failure, faulty maintenance, instrumentation and control problems, human actions, and design errors. Reliability analysis considers only the possibility of accidents related to failures; it does not investigate potential damage that could result from successful operation of the individual components.

How do these principles apply to the engineering problem of deep-ocean drilling?  Perhaps the most important implications are these: a safe system needs to be based on careful and comprehensive analysis of the hazards that are inherently involved in the process; it needs to be designed with an eye to handling those hazards safely; and it can’t be done in a piecemeal, “fly-test-fly” fashion.

It would appear that deep-ocean drilling is characterized by too little analysis and too much confidence in the ability of engineers to “correct” inadvertent outcomes (“fly-fix-fly”). The accident that occurred in the Gulf last month can be analyzed into two parts. The first is the explosion and fire that destroyed the drilling rig and led to the tragic loss of life of 11 rig workers. The second is the uncalculated harm caused by the uncontrolled venting of perhaps a hundred thousand barrels of crude oil to date into the Gulf of Mexico, now threatening the coasts and ecologies of several states. Shockingly, there is now no high-reliability method for capping the well at a depth of over 5,000 feet; so the harm can continue to worsen for a very extended period of time.

The safety systems on the platform itself will need to be examined in detail. But the bottom line will probably look something like this: the platform is a complex system vulnerable to explosion and fire, and there was always a calculable (though presumably small) probability of catastrophic fire and loss of the ship. This is pretty analogous to the problem of safety in aircraft and other complex electro-mechanical systems. The loss of life in the incident is terrible but confined.  Planes crash and ships sink.

What elevates this accident to a globally important catastrophe is what happened next: destruction of the pipeline leading from the wellhead 5,000 feet below sea level to containers on the surface; and the failure of the shutoff valve system on the ocean floor. These two failures have resulted in unconstrained release of a massive and uncontrollable flow of crude oil into the Gulf and the likelihood of environmental harms that are likely to be greater than the Exxon Valdez.

Oil wells fail on the surface, and they are difficult to control. But there is a well-developed technology that teams of oil fire specialists like Red Adair employ to cap the flow and end the damage. We don’t have anything like this for wells drilled under water at the depth of this incident; this accident is less accessible than objects in space for corrective intervention. So surface well failures conform to a sort of epsilon-delta relationship: an epsilon accident leads to a limited delta harm. This deep-ocean well failure in the Gulf is catastrophically different: the relatively small incident on the surface is resulting in an unbounded and spiraling harm.

So was this a foreseeable hazard? Of course it was. There was always a finite probability of total loss of the platform, leading to destruction of the pipeline. There was also a finite probability of failure of the massive sea-floor emergency shutoff valve. And, critically, it was certainly known that there is no high-reliability fix in the event of failure of the shutoff valve. The effort to use the dome currently being tried by BP is untested and unproven at this great depth. The alternative of drilling a second well to relieve pressure may work; but it will take weeks or months. So essentially, when we reach the end of this failure pathway, we arrive at this conclusion: catastrophic, unbounded failure. If you reach this point in the fault tree, there is almost nothing to be done. And this is a totally irrational outcome to tolerate; how could any engineer or regulatory agency have accepted the circumstances of this activity, given that one possible failure pathway would lead predictably to unbounded harms?

There is one line of thought that might have led to the conclusion that deep ocean drilling is acceptably safe: engineers and policy makers might have optimistically overestimated the reliability of the critical components. If we estimate that the probability of failure of the platform is 1/1000, failure of the pipeline is 1/100, and failure of the emergency shutoff valve is 1/10,000 — then one might say that the probability of the nightmare scenario is vanishingly small: one in a billion. Perhaps one might reason that we can disregard scenarios with this level of likelihood. Reasoning very much like this was involved in the original safety designs of the shuttle (Safeware: System Safety and Computers). But two things are now clear: first, this disaster was not virtually impossible, since it actually occurred; and second, it seems likely enough that the estimates of component failure are badly understated.
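
The “one in a billion” figure follows from multiplying the three component probabilities, which is legitimate only if the failures are statistically independent. Here is a minimal sketch of that arithmetic in Python, using the illustrative numbers from the paragraph above, together with a simple sensitivity check on the possibility that the component estimates are understated:

    # Fault-tree arithmetic for the illustrative component probabilities above,
    # assuming the three failures are statistically independent (the optimistic assumption).

    p_platform = 1 / 1_000    # total loss of the platform
    p_pipeline = 1 / 100      # destruction of the pipeline to the surface
    p_shutoff = 1 / 10_000    # failure of the sea-floor emergency shutoff valve

    p_catastrophe = p_platform * p_pipeline * p_shutoff
    print(f"joint probability under independence: {p_catastrophe:.0e}")  # 1e-09, "one in a billion"

    # If each component estimate is understated by a factor of ten, the joint
    # probability rises by a factor of a thousand.
    p_pessimistic = (10 * p_platform) * (10 * p_pipeline) * (10 * p_shutoff)
    print(f"with each estimate off by a factor of ten: {p_pessimistic:.0e}")  # 1e-06

The arithmetic also ignores common-cause failures: a single event such as the explosion that destroys the platform may defeat several of these barriers at once, which makes the independence assumption itself optimistic.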

What does this imply about deep ocean drilling? It seems inescapable that the current state of technology does not permit us to take the risk of this kind of total systems failure. Until there is a reliable and reasonably quick technology for capping a deep-ocean well, the small probability of this kind of failure makes the use of the technology entirely unjustifiable. It makes no sense at all to play Russian roulette when the cost of failure is massive and unconstrained ecological damage.

There is another aspect of this disaster that needs to be called out, and that is the issue of regulation. Just as the nuclear industry requires close, rigorous regulation and inspection, so deep-ocean drilling must be rigorously regulated. The stakes are too high to allow the oil industry to regulate itself. And unfortunately there are clear indications of weak regulation in this industry (link).

(Here are links to a couple of earlier posts on safety and technology failure (link, link).)

Patient safety — Canada and France


Patient safety is a key issue in managing and assessing a regional or national health system. There are very sizable variations in patient safety statistics across hospitals, with significantly higher rates of infection and mortality in some institutions than others. Why is this? And what can be done in order to improve the safety performance of low-safety institutions, and to improve the overall safety performance of the hospital environment nationally?

Previous posts have made the point that safety is the net effect of a complex system within a hospital or chemical plant, including institutions, rules, practices, training, supervision, and day-to-day behavior by staff and supervisors (post, post). And experts on hospital safety agree that improvements in safety require careful analysis of patient processes in order to redesign processes so as to make infections, falls, improper medications, and unnecessary mortality less likely. Institutional design and workplace culture have to change if safety performance is to improve consistently and sustainably. (Here is a posting providing a bit more discussion of the institutions of a hospital; post.)

But here is an important question: what are the features of the social and legal environment that will make it most likely that hospital administrators will commit themselves to a thorough-going culture and management of safety? What incentives or constraints need to exist to offset the impulses of cost-cutting and status quo management that threaten to undermine patient safety? What will drive the institutional change in a health system that improving patient safety requires?

Several measures seem clear. One is state regulation of hospitals. This exists in every state; but the effectiveness of regulatory regimes varies widely across context. So understanding the dynamics of regulation and enforcement is a crucial step to improving hospital quality and patient safety. The oversight of rigorous hospital accreditation agencies is another important factor for improvement. For example, the Joint Commission accredits thousands of hospitals in the United States (web page) through dozens of accreditation and certification programs. Patient safety is the highest priority underlying Joint Commission standards of accreditation. So regulation and the formulation of standards are part of the answer. But a particularly important policy tool for improving safety performance is the mandatory collection and publication of safety statistics, so that potential patients can decide between hospitals on the basis of their safety performance. Publicity and transparency are crucial parts of good management behavior; and secrecy is a refuge of poor performance in areas of public concern such as safety, corruption, or rule-setting. (See an earlier post on the relationship between publicity and corruption.)

But here we have a bit of a conundrum: achieving mandatory publication of safety statistics is politically difficult, because hospitals have a business interest in keeping these data private. So there has been a lot of resistance to mandatory reporting of basic patient safety data in the US over the past twenty years. Fortunately, the public interest in having these data readily available has largely prevailed, and hospitals are now required to publish a broader and broader range of data on patient safety, including hospital-acquired infection rates, ventilator-induced pneumonias, patient falls, and mortality rates. Here is a useful tool from USA Today that lets the public and the patient gather information about their hospital options and how these compare with other hospitals regionally and nationally. This is an effective accountability mechanism that inevitably drives hospitals towards better performance.

Canada has been very active in this area. Here is a website published by the Ontario Ministry of Health and Long-Term Care. The province requires hospitals to report a number of factors that are good indicators of patient safety: several kinds of hospital-acquired infections; central-line primary bloodstream infection and ventilator-associated pneumonia; surgical-site infection prevention activity; and the hospital-standardized mortality ratio. The user can explore the site and find that there are in fact wide variations across hospitals in the province. This is likely to change patient choice; but it also serves as an instant guide for regulatory agencies and local hospital administrators as they attempt to focus attention on poor management practices and institutional arrangements. (It would be helpful for the purpose of comparison if the data could be easily downloaded into a spreadsheet.)
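
If the provincial indicators could indeed be downloaded into a spreadsheet, the comparison across hospitals would take only a few lines of work. Here is a hedged sketch in Python; the file name, column names, and units are hypothetical placeholders for whatever an actual export would contain, not the Ministry's real download format:

    import csv

    # Hypothetical spreadsheet export of hospital safety indicators. The file name
    # and column names ("hospital", "c_difficile_rate") are illustrative only.
    def rank_hospitals(path="ontario_safety_indicators.csv", indicator="c_difficile_rate"):
        with open(path, newline="") as f:
            rows = list(csv.DictReader(f))
        # Sort ascending: a lower infection rate is a better safety record.
        rows.sort(key=lambda r: float(r[indicator]))
        return [(r["hospital"], float(r[indicator])) for r in rows]

    for name, rate in rank_hospitals():
        print(f"{name}: {rate:.2f} cases per 1,000 patient-days")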

On first principles, it seems likely that any country that has a hospital system in which the safety performance of each hospital is kept secret will also show a wide distribution of patient safety outcomes across institutions, and will have an overall safety record that is much lower than it could be. This is because secrecy gives hospital administrators the ability to conceal the risks their institutions impose on patients through bad practices. So publicity and regular publication of patient safety information seems to be a necessary precondition to maintaining a high-safety hospital system.

But here is the crucial point: many countries continue to permit secrecy when it comes to hospital safety. In particular, this seems to be true in France. It seems that the French medical and hospital system continues to display a very high degree of secrecy and opacity when it comes to patient safety. In fact, anecdotal information about French hospitals suggests a wide range of levels of hospital-acquired infections in different hospitals. Hospital-acquired infections (infections nosocomiales) are an important and rising cause of patient morbidity and mortality. And there are well-known practices and technologies that substantially reduce the incidence of these infections. But the implementation of these practices requires strong commitment and dedication at the unit level; and this degree of commitment is unlikely to occur in an environment of secrecy.

In fact, I have not been able to discover any of the tools that are now available for measuring patient safety in hospitals in North America in application to hospitals in France. But without this regular reporting, there is no mechanism through which institutions with bad safety performance can be “ratcheted” up into better practices and better safety outcomes. The impression that is given in the French medical system is that the doctors and the medical authorities are sacrosanct; patients are not expected to question their judgment, and the state appears not to require institutions to report and publish fundamental safety information. Patients have very little power and the media so far seem to have paid little attention to the issues of patient safety in French hospitals. This 2007 article in Le Point seems to be a first for France in that it provides quantitative rankings of a large number of hospitals in their treatment of a number of diseases. But it does not provide the kinds of safety information — infections, falls, pneumonias — that are core measures of patient safety.

There is a French state agency, the Office National d’Indemnisation des Accidents Médicaux (ONIAM), that provides compensation to patients who can demonstrate that their injuries are the result of hospital-induced causes, including especially hospital-associated infections. But it appears that this agency is restricted to after-the-fact recognition of hospital errors rather than pro-active programs designed to reduce hospital errors. And here is a French government web site devoted to the issue of hospital infections. It announces a multi-pronged strategy for controlling the problem of infections nosocomiales, including the establishment of a national program of surveillance of the rates of these infections. So far, however, I have not been able to locate web resources that would provide hospital-level data about infection rates.

So I am offering a hypothesis that I would be very happy to see refuted: that the French medical establishment continues to be bureaucratically administered with very little public exposure of its actual performance when it comes to patient safety. And without this kind of publicity, it seems very likely that there are wide and tragic variations across French hospitals with regard to patient safety.

Are there French medical sociologists and public health researchers who are working on the issue of patient safety in French hospitals? Can good contemporary French sociologists like Céline Béraud, Baptiste Coulmont, and Philippe Masson offer some guidance on this topic (post)? If readers are aware of databases and patient safety research programs in France that are relevant to these topics, I would be very happy to hear about them.

Update: Baptiste Coulmont (blog) passes on this link to the Réseau d’alerte, d’investigation et de surveillance des infections nosocomiales (RAISIN) within the Institut de veille sanitaire. The site provides research reports and regional assessments of nosocomial infection incidence, but it does not appear to provide data at the level of specific hospitals and medical centers. Baptiste also refers to work by Jean Peneff, a French medical sociologist and author of La France malade de ses médecins. Here is a link to a subsequent research report by Peneff. Thanks, Baptiste.

Safety as a social effect


Some organizations pose large safety issues for the public because of the technologies and processes they encompass. Industrial factories, chemical and nuclear plants, farms, mines, and aviation all represent sectors where safety issues are critically important because of the inherent risks of the processes they involve. However, “safety” is not primarily a technological characteristic; instead, it is an aggregate outcome that depends as much on the social organization and management of the processes involved as it does on the technologies they employ. (See an earlier posting on technology failure.)

We can define safety by relating it to the concept of “harmful incident”. A harmful incident is an occurrence that leads to injury or death of one or more persons. Safety is a relative concept, in that it involves analysis and comparison of the frequencies of harmful incidents relative to some measure of the volume of activity. If the claim is made that interstate highways are safer than county roads, this amounts to the assertion that there are fewer accidents per vehicle-mile on the former than the latter. If it is held that commercial aviation is safer than automobile transportation, this amounts to the claim that there are fewer harms per passenger-mile in air travel than auto travel. And if it is observed that the computer assembly industry is safer than the mining industry, this can be understood to mean that there are fewer harms per person-day in the one sector than the other. (We might give a parallel analysis of the concept of a healthy workplace.)
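To make the arithmetic explicit, here is a minimal sketch in Python; the incident and exposure figures are invented purely for illustration, not actual transportation statistics:

    def incident_rate(incidents, exposure):
        """Harmful incidents per unit of activity (e.g., per vehicle-mile)."""
        return incidents / exposure

    # Invented figures, purely to illustrate the comparison:
    interstate = incident_rate(incidents=120, exposure=100_000_000)  # vehicle-miles
    county = incident_rate(incidents=95, exposure=25_000_000)        # vehicle-miles

    print(f"interstate:   {interstate:.2e} incidents per vehicle-mile")
    print(f"county roads: {county:.2e} incidents per vehicle-mile")
    print("interstates are safer" if interstate < county else "county roads are safer")

The point of the exercise is that the raw counts (120 versus 95 in this made-up case) tell us nothing by themselves; only the rates per unit of exposure support a claim about relative safety.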

This analysis highlights two dimensions of industrial safety: the inherent capacity for creating harms associated with the technology and processes in use (heavy machinery, blasting, and uncertain tunnel stability in mining, in contrast to a computer and a red pencil in the editorial offices of a newspaper), and the processes and systems that are in place to guard against harm. The first set of factors is roughly “technological,” while the second set is social and organizational.

Variations in safety records across industries and across sites within a given industry provide an excellent tool for analyzing the effects of various institutional arrangements. It is often possible to pinpoint a crucial difference in organization — supervision, training, internal procedures, inspection protocols, etc. — that can account for a high accident rate in one factory and a low rate in an otherwise similar factory in a different state.

One of the most important findings of safety engineering is that organization and culture play critical roles in enhancing the safety characteristics of a given activity — that is to say, safety is strongly influenced by social factors that define and organize the behaviors of workers, users, or managers. (See Charles Perrow, Normal Accidents: Living with High-Risk Technologies and Nancy Leveson, Safeware: System Safety and Computers, for a couple of excellent treatments of the sociological dimensions of safety.)

This isn’t to say that only social factors can influence safety performance within an activity or industry. In fact, a central effort by safety engineers involves modifying the technology or process so as to remove the source of harm completely — what we might call “passive” safety. So, for example, if it is possible to design a nuclear reactor in such a way that a loss of coolant leads automatically to shutdown of the fission reaction, then we have designed out of the system the possibility of catastrophic meltdown and escape of radioactive material. This might be called “design for soft landings”.

However, most safety experts agree that the social and organizational characteristics of the dangerous activity are the most common causes of bad safety performance. Poor supervision and inspection of maintenance operations leads to mechanical failures, potentially harming workers or the public. A workplace culture that discourages disclosure of unsafe conditions makes the likelihood of accidental harm much greater. A communications system that permits ambiguous or unclear messages to occur can lead to air crashes and wrong-site surgeries.

This brings us at last to the point of this posting: the observation that safety data in a variety of industries and locations permit us to probe organizational features and their effects with quite a bit of precision. This is a place where institutions and organizations make a big difference in observable outcomes; safety is a consequence of a specific combination of technology, behaviors, and organizational practices. This is a good opportunity for combining comparative and statistical research methods in support of causal inquiry, and it invites us to probe for the social mechanisms that underlie the patterns of high or low safety performance that we discover.

Consider one example. Suppose we are interested in discovering some of the determinants of safety records in deep mining operations. We might approach the question from several points of view.

  • We might select five mines with “best in class” safety records and compare them in detail with five “worst in class” mines. Are there organizational or technological features that distinguish the cases?
  • We might do the large-N version of this study: examine a sample of mines from “best in class” and “worst in class” and test whether there are observed features that explain the differences in safety records. (For example, we may find that 75% of the former group but only 10% of the latter group are subject to frequent unannounced safety inspection. This supports the notion that inspections enhance safety; a minimal sketch of this comparison follows the list.)
  • We might compare national records for mine safety in, say, Poland and Britain. We might then attempt to identify the general characteristics that describe mines in the two countries and attempt to explain observed differences in safety records on the basis of these characteristics. Possible candidates might include degree of regulatory authority, capital investment per mine, workers per mine, …
  • We might form a hypothesis about a factor that should be expected to enhance safety — a company-endorsed safety education program, let’s say — and then randomly assign a group of mines to “treated” and “untreated” groups and compare safety records. (This is a quasi-experiment; see an earlier posting for a discussion of this mode of reasoning.) If we find that the treated group differs significantly in average safety performance, this supports the claim that the treatment is causally relevant to the safety outcome.
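Here is a minimal sketch of the second, large-N comparison, assuming (purely for illustration) a sample of forty “best in class” and forty “worst in class” mines, each coded for whether it receives frequent unannounced inspections; the counts are invented to mirror the 75%/10% example above:

    from math import sqrt

    def two_proportion_z(x_a, n_a, x_b, n_b):
        """z statistic for the difference between two sample proportions."""
        p_a, p_b = x_a / n_a, x_b / n_b
        pooled = (x_a + x_b) / (n_a + n_b)
        se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
        return (p_a - p_b) / se

    # Hypothetical counts: 30 of 40 "best in class" mines are frequently
    # inspected, versus 4 of 40 "worst in class" mines.
    z = two_proportion_z(30, 40, 4, 40)
    print(f"z = {z:.2f}")  # |z| > 1.96 counts as significant at the 5% level

A z value this large would license the inference that inspection frequency and safety class are associated; establishing that inspections cause better safety records requires the further, mechanism-based reasoning discussed below.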

Investigations along these lines can establish an empirical basis for judging that one or more organizational features A, B, C have consequences for safety performance. In order to be confident in these judgments, however, we need to supplement the empirical analysis with a theory of the mechanisms through which features like A, B, C influence behavior in such a way as to make accidents more or less likely.

Safety, then, seems to be a good area of investigation for researchers within the general framework of the new institutionalism, because the effects of institutional and organizational differences emerge as observable differences in the rates of accidents in comparable industrial settings. (See Mary Brinton and Victor Nee, The New Institutionalism in Sociology, for a collection of essays on this approach.)

Explaining technology failure


Technology failure is often spectacular and devastating — witness Bhopal, Three Mile Island, Chernobyl, the Challenger disaster, and the DC-10 failures of the 1970s. But in addition to being a particularly important cause of human suffering, technology failures are often very complicated social outcomes that involve a number of different kinds of factors. And this makes them interesting topics for social science study.

It is fairly common to attribute spectacular failures to a small number of causes — for example, faulty design, operator error, or a conjunction of unfortunate but singly non-fatal accidents. What sociologists who have studied technology failures have been able to add is that the root causes of disastrous failures can often be traced back to deficiencies in the social organizations within which the technologies are designed, used, and controlled (Charles Perrow, Normal Accidents: Living with High-Risk Technologies). Technology failures are commonly the result of specific social-organizational defects; so technology failure is often or usually a social outcome, not simply a technical or mechanical misadventure. (Dietrich Dörner’s The Logic of Failure: Recognizing and Avoiding Error in Complex Situations is a fascinating treatment of a number of cases of failure; Eliot Cohen’s Military Misfortunes: The Anatomy of Failure in War provides an equally interesting treatment of military failures; for example, the American failure to suppress submarine attacks on merchant shipping off the US coast in the early part of World War II.)

First, a few examples. The Challenger space shuttle was destroyed as a result of O-rings in the rocket booster units that became brittle because of the low launch temperature — evidently an example of faulty design. But various observers have asked the more fundamental question: what features of the science-engineering-launch command process that was in place within NASA and between NASA and its aerospace suppliers led it to break down so profoundly (Diane Vaughan, The Challenger Launch Decision: Risky Technology, Culture, and Deviance at NASA)? What organizational defects made it possible for this extended group of talented scientists and engineers to come to the decision to launch over the specific warnings that were brought forward by the rocket provider’s team about the danger of a cold-temperature launch? Edward Tufte attributes the failure to poor scientific communication (Visual Explanations: Images and Quantities, Evidence and Narrative); Morton Thiokol engineer Roger Boisjoly attributes it to an excessively hierarchical and deferential relation between the engineers and the launch decision-makers. Either way, features of the NASA decision-making process — social-organizational features — played a critical role.

Bhopal represents another important case. Catastrophic failure of a Union Carbide pesticide plant in Bhopal, India in 1984 led to a release of a highly toxic gas. The toxic cloud passed into the densely populated city of Bhopal. Half a million people were affected, and between 16 and 30 thousand people died as a result. A chemical plant is a complex physical system. But even more, it is operated and maintained by a complex social organization, involving training, supervision, and operational assessment and oversight. In his careful case study of Bhopal, Paul Shrivastava maintains that this disaster was caused by a set of persistent and recurring organizational failures, especially in the areas of training and supervision of operators (Bhopal: Anatomy of Crisis).

Close studies of the nuclear disasters at Chernobyl and Three Mile Island have been equally fruitful in terms of shedding light on the characteristics of social, political, and business organization that have played a role in causing these great disasters. The stories are different in the two cases; but in each case, it turns out that social factors, including both organizational features internal to the nuclear plants and political features in the surrounding environment, played a role in the occurrence and eventual degree of destruction associated with the disasters.

These cases illustrate several important points. First, technology failures and disasters almost always involve a crucial social dimension — in the form of the organizations and systems through which the technology is developed, deployed, and maintained and the larger social environment within which the technology is situated. Technology systems are social systems. Second, technology failures therefore constitute an important subject matter for sociological and organizational research. Sociologists can shed light on the ways in which a complex technology might fail. And third, and most importantly, the design of safe systems — particularly systems that have the potential for creating great harms — needs to be an interdisciplinary effort. The perspectives of sociologists and organizational theorists need to be incorporated as deeply as those of industrial and systems engineers into the design of systems that will preserve a high degree of safety. This is an important realization for the high profile risky industries — aviation, chemicals, nuclear power. But it is also fundamental for other important social institutions, including especially hospitals and health systems. Safe technologies will only exist when they are embedded in safe, fault-tolerant organizations and institutions. And all of this means, in turn, that there is an urgent need for a sociology of safety.
