Regulatory failure and the 737 MAX disasters

The recent crashes of two Boeing 737 MAX aircraft raise questions about the safety certification process through which this modified airframe was certified for use by the FAA. Recent accounts of the design and manufacture of the aircraft demonstrate enormous pressure for speed and for cost reduction. Attention has focused on a software system, MCAS, a feature needed to compensate for the changed aerodynamics created by the repositioning of larger engines on the existing 737 body. The software was designed to intervene automatically to prevent a stall if a single angle-of-attack sensor in the nose indicated that the nose was pitched up too steeply. The crash investigations are not complete, but current suspicions are that the pilots in the two aircraft were unable to control or disable the nose-down response of the system in the roughly 40 seconds they had to recover control of the aircraft. (James Fallows provides a good and accessible account of the details of the development of the 737 MAX in a story in the Atlantic; link.)
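The fragility of a single-sensor design can be illustrated with a deliberately simplified sketch. All names, thresholds, and logic here are hypothetical, invented for the example; they do not represent Boeing's actual implementation. The point is structural: a system that acts on one sensor's reading alone has no way to detect that the sensor itself has failed, whereas a cross-check of redundant sensors can flag the disagreement instead of commanding nose-down trim.

```python
# Hypothetical illustration of single-sensor vs. cross-checked trigger logic.
# Names and threshold values are invented; they do not reflect MCAS itself.

AOA_LIMIT_DEG = 15.0  # illustrative angle-of-attack threshold

def single_sensor_trigger(aoa_reading_deg):
    """Command nose-down trim on one sensor's reading alone."""
    return aoa_reading_deg > AOA_LIMIT_DEG

def cross_checked_trigger(aoa_left_deg, aoa_right_deg, max_disagreement_deg=5.0):
    """Command trim only if redundant sensors agree; disagreement flags a fault."""
    if abs(aoa_left_deg - aoa_right_deg) > max_disagreement_deg:
        return None  # sensor fault detected: alert the crew instead of acting
    return max(aoa_left_deg, aoa_right_deg) > AOA_LIMIT_DEG

# A failed sensor reading 70 degrees while the healthy one reads 5:
print(single_sensor_trigger(70.0))       # True -> spurious nose-down command
print(cross_checked_trigger(70.0, 5.0))  # None -> fault flagged, no command
```

In the single-sensor version, one faulty vane is sufficient to drive repeated nose-down commands; in the cross-checked version, the same fault produces a warning rather than an intervention.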

The question here concerns the regulatory background of the aircraft: was the certification process through which the 737 MAX was certified to fly a sufficiently rigorous and independent one?

Thomas Kaplan details in a New York Times article the division of responsibility that has been created in the certification process over the past several decades between the FAA and the manufacturer (NYT 3/27/19). Under this program, the FAA delegates a substantial part of the work of certification evaluation to the manufacturer and its engineering staff. Kaplan writes:

In theory, delegating much of the day-to-day regulatory work to Boeing allows the FAA to focus its limited resources on the most critical safety work, taps into existing industry technical expertise at a time when airliners are becoming increasingly complex, and allows Boeing in particular to bring out new planes faster at a time of intense global competition with its European rival Airbus.

However, it is apparent to both outsiders and insiders that this creates the possibility of impairing the certification process by placing crucial parts of the evaluation in the hands of experts whose interests and careers lie in the hands of the corporation whose product they are evaluating. This is an inherent conflict of interest for the employee, and it is potentially a critical flaw in the process from the point of view of safety. (See an earlier post on the need for an independent and empowered safety officer within complex and risky processes; link.)

Senator Richard Blumenthal (Connecticut) highlighted this concern when he wrote to the inspector general last week: “The staff responsible for regulating aircraft safety are answerable to the manufacturers who profit from cutting corners, not the American people who may be put at risk.”

A 2011 audit report from the Transportation Department’s inspector general’s office highlighted exactly this kind of issue: “The report cited an instance where FAA engineers were concerned about the ‘integrity’ of an employee acting on the agency’s behalf at an unnamed manufacturer because the employee was ‘advocating a position that directly opposed FAA rules on an aircraft fuel system in favor of the manufacturer’.” The article makes the point that Congress has encouraged this program of delegation in order to constrain budget requirements for the federal agency.

Kaplan notes that there is also a worrisome degree of exchange of executive staff between the FAA and the airline industry, raising the possibility that the industry’s priorities about cost and efficiency may unduly influence the regulatory agency:

The part of the FAA under scrutiny, the Transport Airplane Directorate, was led at the time by an aerospace engineer named Ali Bahrami. The next year, he took a job at the Aerospace Industries Association, a trade group whose members include Boeing. In that position, he urged his former agency to allow manufacturers like Boeing to perform as much of the work of certifying new planes as possible. Mr. Bahrami is now back at the FAA as its top safety official.

This episode illustrates one of the key dysfunctions of organizations highlighted elsewhere here: the workings of conflicts of commitment and interest within an organization, and the ability of an organization's executives to impose behavior and judgments on their employees that are at odds with the responsibilities these individuals have to other important social goods, including airline safety. The episode has a lot in common with the sequence of events leading to the launch of the Space Shuttle Challenger (Vaughan, The Challenger Launch Decision: Risky Technology, Culture, and Deviance at NASA).

Charles Perrow has studied system failure extensively, beginning with his important book Normal Accidents: Living with High-Risk Technologies and extending through his 2011 book The Next Catastrophe: Reducing Our Vulnerabilities to Natural, Industrial, and Terrorist Disasters. In a 2015 article, “Cracks in the ‘regulatory state’” (link), he summarizes some of his concerns about the effectiveness of the regulatory enterprise. The abstract of the article shows its relevance to the current case:

Over the last 30 years, the U.S. state has retreated from its regulatory responsibility over private-sector economic activities. Over the same period, a celebratory literature, mostly in political science, has developed, characterizing the current period as the rise of the regulatory state or regulatory capitalism. The notion of regulation in this literature, however, is a perverse one—one in which regulators mostly advise rather than direct, and industry and firm self-regulation is the norm. As a result, new and potentially dangerous technologies such as fracking or mortgage backed derivatives are left unregulated, and older necessary regulations such as prohibitions are weakened. This article provides a joint criticism of the celebratory literature and the deregulation reality, and strongly advocates for a new sociology of regulation that both recognizes and documents these failures. (203)

The 2015 article highlights some of the precise sources of failure that seem to be evident in the 737 MAX case. “Government assumes a coordinating rather than a directive role, in this account, as regulators draw upon industry best practices, its standard-setting proclamations, and encourage self-monitoring” (203). This is precisely what current reporting demonstrates about the FAA relationship to the manufacturers.

One of the key flaws of self-monitoring is the lack of truly independent inspectors:

Part of the problem stems from the failure of firms to select truly independent inspectors. Firms can, in fact, select their own inspectors—for example, firemen or police from the local areas who are quite conscious of the economic power of the local chemical firm they are to inspect. (205)

Here again, the Boeing 737 MAX certification story seems to illustrate this defect as well. How serious are these “cracked regulatory institutions”? According to Perrow they are deadly serious. Here is Perrow’s summary assessment about the relationship between regulatory failure and catastrophe:

Almost every major industrial accident in recent times has involved either regulatory failure or the deregulation demanded by business and industry. For more examples, see Perrow (2011). It is hard to make the case that the industries involved have failed to innovate because of federal regulation; in particular, I know of no innovations in the safety area that were stifled by regulation. Instead, we have a deregulated state and deregulated capitalism, and rising environmental problems accompanied by growing income and wealth inequality. (210)

In short, we seem to be at the beginning of an important revelation of the costs of neoliberal efforts to minimize regulation and to shift the responsibility for safety substantially to the manufacturer.

(Baldwin, Cave, and Lodge provide a good introduction to current thinking about government regulation in Understanding Regulation: Theory, Strategy, and Practice, 2nd Edition. Their Oxford Handbook of Regulation also provides excellent resources on this topic.)

Philosophy of technology?

Is there such a thing as “philosophy of technology”? Is there a “philosophy of cooking” or a “philosophy of architecture”? All of these are practical activities – praxis – with large bodies of specialized knowledge and skill involved in their performance. But where does philosophy come in?

Most of us trained in analytic philosophy think of a philosophical topic as one that can be formulated in terms of a small number of familiar questions: what are the nature and limitations of knowledge in this area? What ethical or normative problems does this area raise? What kinds of conceptual issues need to be addressed before we can discuss problems in this area clearly and intelligently? Are there metaphysical issues raised by this area — special kinds of things that need special philosophical attention? Does “technology” support this kind of analytical approach?

We might choose to pursue a philosophy of technology in an especially minimalist (and somewhat Aristotelian) way, along these lines:

  • Human beings have needs and desires that require material objects for their satisfaction. 
  • Human beings engage in practical activity to satisfy their needs and desires.
  • Intelligent beings often seek to utilize and modify their environments so as to satisfy their needs and desires. 
  • Physical bodies are capable of rudimentary environment modification, which may permit adequate satisfaction of needs and desires in propitious environments (dolphins).
  • Intelligent beings often seek to develop “tools” to extend the powers of their bodies to engage in environment modification.
  • The use of tools produces benefits and harms for self and others, which raises ethical issues.

Now we can introduce the idea of the accumulation of knowledge (“science”):

  • Human beings have the capacity to learn how the world around them works, and they can learn the causal properties of materials and natural entities. 
  • Knowledge of causal properties permits intelligent intervention in the world.
  • Gaining scientific knowledge of the world creates the possibility of the invention of knowledge-based artifacts (instruments, tools, weapons).

And history suggests we need to add a few Hobbesian premises:

  • Human beings often find themselves in conflict with other agents for resources supporting the satisfaction of their needs and desires.
  • Intelligent beings seek to develop tools (weapons) to extend the powers of their bodies to engage in successful conflict with other agents.

Finally, history seems to make it clear that tools, machines, and weapons are not purely individual products; rather, social circumstances and social conflict influence the development of the specific kinds of tools, machines, and weapons that are created in a particular historical setting.

The idea of technology can now be fitted into the premises identified here. Technology is the sum of a set of tools, machines, and practical skills available at a given time in a given culture through which needs and interests are satisfied and the dialectic of power and conflict furthered.

This treatment suggests several leading questions for a philosophy of technology:

  1. How does technology relate to human nature and human needs?
  2. How does technology relate to intelligence and creativity?
  3. How does technology relate to scientific knowledge?
  4. How does technology fit into the logic of warfare?
  5. How does technology fit into the dialectic of social control among groups?
  6. How does technology relate to the social, historical, and cultural environment?
  7. Is the process of technology change determined by the technical characteristics of the technology?
  8. How does technology relate to issues of justice and morality?

Here are a few important contributions to several of these topics.

Lynn White’s Medieval Technology and Social Change illustrates almost all elements of this configuration. His classic book begins with the dynamics of medieval warfare (the impact of the development of the stirrup on mounted combat); proceeds to food production (the development and social impact of the heavy iron plough); and closes with medieval machines.

Charles Sabel’s treatment of industrialization and the creation of powered machinery in Work and Politics: The Division of Labour in Industry addresses topic 5. Sabel demonstrates that industrialization, and the specific character of the mechanization that ensued, was a process substantially guided by conflicts of interest between workers and owners, in which owners selected technologies that reduced workers’ powers of resistance. Sabel and Zeitlin make this argument in greater detail in World of Possibilities: Flexibility and Mass Production in Western Industrialization. One of their most basic arguments is that firms are strategic and adaptive as they deal with a current set of business challenges. Rather than an inevitable logic of new technologies and their organizational needs, we see a highly adaptive and selective process in which firms pick and choose among alternatives: they consider carefully a range of possible changes on the horizon and a set of possible strategic adaptations, and they frequently hedge their bets by investing in both the old and the new technology. “Economic agents, we found again and again in the course of the seminar’s work, do not maximize so much as they strategize” (5). (Here is a more extensive discussion of Sabel and Zeitlin; link.)

The logic underlying the idea of technological inevitability (topic 7) goes something like this: a new technology creates a set of reasonably accessible new possibilities for achieving new forms of value: new products, more productive farming techniques, or new ways of satisfying common human needs. Once the technology exists, agents or organizations in society will recognize those new opportunities and will attempt to take advantage of them by investing in the technology and developing it more fully. Some of these attempts will fail, but others will succeed. So over time, the inherent potential of the technology will be realized; the technology will be fully exploited and utilized. And, often enough, the technology will both require and force a new set of social institutions to permit its full utilization; here again, agents will recognize opportunities for gain in the creation of social innovations, and will work towards implementing these social changes.

This view of history doesn’t stand up to scrutiny, however. There are many examples of technologies that failed to come to full development (the water mill in the ancient world, the Betamax in the contemporary world). There is nothing inevitable, imposed perhaps by the underlying scientific realities, about the way in which a technology will develop; and there are numerous illustrations of a more complex back-and-forth between social conditions and the development of a technology. So technological determinism is not a credible historical theory.

Thomas Hughes addresses topic 6 in his book Human-Built World: How to Think about Technology and Culture.

Here Hughes considers how technology has affected our cultures in the past two centuries. The twentieth-century city, for example, could not have existed without the inventions of electricity, steel buildings, elevators, railroads, and modern waste-treatment technologies. So technology “created” the modern city. But it is also clear that life in the twentieth-century city was transformative for the several generations of rural people who migrated to it. And the literature, art, values, and social consciousness of people in the twentieth century have surely been affected by these new technology systems. Each part of this complex story involves processes that are highly contingent and highly intertwined with social, economic, and political relationships. And the ultimate shape of the technology is the result of decisions and pressures exerted throughout the web of relationships through which it took form. But here is an important point: there is no moment in this story where it is possible to put “technology” on one side and “social context” on the other. Instead, the technology and the society develop together.

Peter Galison’s treatment of the simultaneous discovery of the relativity of time measurement by Einstein and Poincaré in Einstein’s Clocks and Poincaré’s Maps: Empires of Time provides a valuable set of insights into topic 3. Galison shows that Einstein’s thinking was very much influenced by practical issues in the measurement of time by mechanical devices. This has an interesting corollary: the scientific imagination is sometimes stimulated by technology issues, just as technology solutions are created through imaginative use of new scientific theories.

Topic 8 has produced an entire field of research of its own. The morality of the use of autonomous drones in warfare; the ethical issues raised by CRISPR technology in human embryos; the issues of justice and opportunity created by the digital divide between affluent people and poor people; privacy issues created by ubiquitous facial recognition technology — all these topics raise important moral and social-justice issues. Here is an interesting thought piece by Michael Lynch in the Guardian on the topic of digital privacy (link). Lynch is the author of The Internet of Us: Knowing More and Understanding Less in the Age of Big Data.

So, yes, there is such a thing as the philosophy of technology. But to be a vibrant and intellectually creative field, it needs to be cross-disciplinary, and as interested in the social and historical context of technology as it is in the conceptual and normative issues raised by the field.

Conflicts of interest

The possibility or likelihood of conflict of interest is present in virtually all professions and occupations. We expect a researcher, a physician, or a legislator to perform her work according to the highest values and norms of the profession (searching for objective knowledge, providing the best care possible for the patient, drafting and supporting legislation that enhances the public good). But there is always the possibility that the individual has private financial interests that distort or bias the work she does, and there may be large companies with a financial interest in one set of actions rather than another.

Marc Rodwin’s Conflicts of Interest and the Future of Medicine: The United States, France, and Japan is a rigorous and fair treatment of this issue with respect to conflicts of interest in the field of medicine. Rodwin has published extensively on this topic, and the current book is an important exploration of how professional ethics, individual interest, and business and professional institutions intersect to influence practitioner behavior in this field. The institutional actors in this story include the pharmaceutical companies and medical device manufacturers, insurers, hospitals and physician partnerships, and legislators and regulators. Rodwin shows in detail how differences in insurance policies, physician reimbursement policies, and gifts and benefits from health-related businesses to physicians contribute to an institutional environment where the physician’s choices are all too easily influenced by considerations other than the best health outcomes of the patient. Rodwin finds that the institutional setting for health economics is different in the US, France, and Japan, and that these differences lead to differences in physician behavior.

Here is Rodwin’s clear statement of the material situation that creates the possibility or likelihood of conflicts of interest in medicine.

Physicians earn their living through their medical work and so may practice in ways that enhance their income rather than the interests of patients. Moreover, when physicians prescribe drugs, devices, and treatments and choose who supplies these or refer patients to other providers, they affect the fortunes of third parties. As a result, providers, suppliers, and insurers try to influence physicians’ clinical decisions for their own benefit. Thus, at the core of doctoring lies tension between self-interest and faithful service to patients and the public. The prevailing powerful medical ethos does influence physicians. Still, there is conflict between professional ethics and financial incentives. (kl 251)

Jerome Kassirer, a former editor-in-chief of the New England Journal of Medicine and an expert observer of the field, provided a foreword to the book. Kassirer describes the current situation in the medical economy in these terms, drawing on his own synthesis of recent research and journalism:

Professionalism had been steadily eroded by complex financial ties between practicing physicians and academic physicians on the one hand and the pharmaceutical, medical device, and biotechnology industries on the other. These financial ties were deep and wide: they threatened to bias the clinical research on which physicians relied to care for the sick, and they permeated nearly every aspect of medical care. Physicians were accepting gifts, taking free trips, serving on companies’ speakers’ bureaus, signing their names to articles written for them by industry-paid ghostwriters, and engaging in research that endangered patient care. (kl 73)

The fundamental problem posed by Rodwin’s book is this set of questions:

In what context can physicians be trusted to act in their patients’ interests? How can medical practice be organized to minimize physicians’ conflicts of interest? How can society promote what is best in medical professionalism? What roles should physicians and organized medicine play in the medical economy? What roles should insurers, the state, and markets play in medical care? (kl 267)

The book sheds light on dozens of institutional arrangements that create or reduce the likelihood of conflicted choices. One such arrangement is whether the physicians in a non-profit hospital are employed at a fixed salary or paid on a fee-for-service basis. The latter system gives the physician a very different set of financial interests, including the possibility of making clinical choices that increase revenues to the physician or his or her group practice.

Consider physicians employed as public servants in public hospitals. Typically, they receive a fixed salary set by rank, enjoy tenure, and have clinical discretion. As a result, they lack financial incentives that bias their choices and have clinical freedom. Such arrangements preclude employment conflicts of interest. But relax some of these conditions and employers can compromise medical practice…. Furthermore, employers can manage physicians to promote the organization’s goals. As a result, employed physicians might practice in ways that promote their employer’s over their patients’ interests. (kl 445)

And the disadvantages for the patient of the self-employed physician are also important:

Payment can encourage physicians to supply more, less, or different kinds of services, or to refer to particular providers. Each form of payment has some bias, but some compromise clinical decisions more than others do. (kl 445) 

Plainly, the circumstances and economic institutions described here are relevant to many other occupations as well. Scientists, policymakers, regulators, professors, and accountants all face similar circumstances — though the financial stakes in medicine are particularly high. (Here is an earlier post on corporate efforts to influence scientific research; link.)

This field of research makes an important contribution to a particularly challenging topic in contemporary healthcare. But Rodwin’s study also provides an important contribution to the new institutionalism, since it serves as a micro-level case study of the differences in behavior created by differences in institutional rules and practices.

Each country’s laws, insurance, and medical institutions shape medical practice; and within each country, different forms of practice affect clinical choices. (kl 218)

This feature of the book allows it to contribute to the kinds of arguments on the causal and historical importance of specific configurations of institutions offered by Kathleen Thelen (link) and Frank Dobbin (link).

The Morandi Bridge collapse and regulatory capture


A recurring topic in Understanding Society is the question of the organizational causes that lie in the background of major accidents and technological disasters. One such disaster is the catastrophic collapse of the Morandi Bridge in Genoa in August 2018, which resulted in the deaths of 43 people. Was this a technological failure, a design failure — or, more importantly, a failure in which private and public organizational features led to the disaster?

A major story in the New York Times on March 5, 2019 (link) makes it clear that social and organizational causes were central to this horrendous failure. (What could be more terrifying than having the highway bridge under your vehicle collapse to the earth 150 feet beneath you?) In this case it is evident from the Times coverage that a major cause of the disaster was the relationship between Autostrade per l’Italia, the private company that manages the bridge and derives enormous profit from it, and the regulatory ministries responsible for regulating and supervising safe operations of highways and bridges.

In a sign of the arrogance of wealth and power involved in the relationship, the Benetton family threatened a multimillion-dollar lawsuit against the economist Marco Ponti, who had served on an expert panel advising the government and had made strong statements about the one-sided relationship that existed. The threat was not acted upon, but the abuse of power is clear.

This appears to be a textbook case of “regulatory capture”, a situation in which the private owners of a risky enterprise or activity use their economic power to influence or intimidate the government regulatory agencies that nominally oversee their activities. “Autostrade reaped huge profits and acquired so much power that the state became a largely passive regulator” (NYT March 5, 2019). Moreover, independent governmental oversight was crippled by the fact that “the company effectively regulated itself — because Autostrade’s parent company owned the inspection company responsible for safety checks on the Morandi Bridge” (NYT). The Times quotes Carlo Scarpa, an economics professor at the University of Brescia:

Any investor would have been worried about bidding. The Benettons, though, knew the system and they understood that the Ministry of Infrastructure and Transport, which was supposed to supervise the whole thing, was weak. They were able to calculate the weight the company would have in the political arena. (NYT March 5, 2019)

And this seems to have worked out as the family expected:

Autostrade became a political powerhouse, acquiring clout that the Ministry of Infrastructure and Transport, perpetually underfunded and employing a small fraction of the staff, could not match. (NYT March 5, 2019)

The story notes that the private company made a great deal of money from this contract, but that the state also benefited financially. “Autostrade has poured billions of euros into state coffers, paying nearly 600 million euros a year in corporate taxes, V.A.T. and license fees.”

The story also surfaces other social factors that played a role in the disaster, including opposition by Genoa residents to the construction involved in creating a potential bypass to the bridge.

Here is what the Times story has to say about the inspections that occurred:

Beyond fixing blame for the bridge collapse, a central question of the Morandi tragedy is what happened to safety inspections. The answer is that the inspectors worked for Autostrade more than for the state. For decades, Spea Engineering, a Milan-based company, has performed inspections on the bridge. If nominally independent, Spea is owned by Autostrade’s parent company, Atlantia, and Autostrade is also Spea’s largest customer. Spea’s offices in Rome and elsewhere are housed inside Autostrade. One former bridge design engineer for Spea, Giulio Rambelli, described Autostrade’s control over Spea as “absolute.” (NYT March 5, 2019)

The story notes that this relationship raises the possibility of conflicts of interest that are prohibited in other countries. The story quotes Professor Giuliano Fonderico: “All this suggests a system failure.”

The failure appears to be first and foremost a failure of the state to fulfill its obligations of regulation and oversight of dangerous activities. By ceding any real and effective system of safety inspection to the business firms who are benefitting from the operations of the bridge, the state has essentially given up its responsibility of ensuring the safety of the public.

It is also worth underlining the point made in the article about the huge mismatch that exists between the capacities of the business firms in question and the agencies nominally charged to regulate and oversee them. This is a system-level failure at a higher level, since it highlights the fact of the power imbalance that almost always exists between large corporate wealth and the government agencies charged to oversee their activities.

Here is an editorial from the Guardian that makes some similar points; link. There don’t appear to be book-length treatments of the Morandi Bridge disaster available in English. Here is an Italian book on the subject by Eugenio Ceroni and Luca Cozzi, Ponte Morandi – Autopsia di una strage: I motivi tecnici, le colpe, gli errori. Quel che si poteva fare e non si è fatto (Italian Edition), which appears to be a technical civil-engineering analysis of the collapse. The Kindle translation option using Bing helps non-Italian readers get the thrust of this short book. In the engineering analysis, inadequate inspection and incomplete maintenance remediation emerge as key factors in the collapse.

The research university

Where do new ideas, new technologies, and new ways of thinking about the world come from in a modern society? Since World War II the answer to this question has largely been found in research universities. Research universities are doctoral institutions that employ professors who are advanced academic experts in a variety of fields and that expend significant amounts of external funds in support of ongoing research. Given the importance of innovation and new ideas in the knowledge economy of the twenty-first century, it is very important to understand the dynamics of research universities, and to understand factors that make them more or less productive in achieving new knowledge. And, crucially, we need to understand how public policy can enhance the effectiveness of the university research enterprise for the benefit of the whole of society.

Jason Owen-Smith’s recent Research Universities and the Public Good: Discovery for an Uncertain Future is a very welcome and insightful contribution to better understanding this topic. Owen-Smith is a sociology professor at the University of Michigan (itself a major research university with over 1.5 billion dollars in annual research funding), and he brings to his task some of the most insightful ideas currently transforming the field of organizational studies.

Owen-Smith analyzes research universities (RUs) in terms of three fundamental ideas. RUs serve as source, anchor, and hub for the generation of innovations and new ideas in a vast range of fields, from the humanities to basic science to engineering and medicine. And he believes that this triple function makes research universities virtually unique among American (or global) knowledge-producing organizations, including corporate and government laboratories (33).

The idea of the university as a source is fairly obvious: it is the idea that universities create and disseminate new knowledge in a very wide range of fields. Sometimes that knowledge is of interest to a hundred people worldwide; and sometimes it results in the creation of genuinely transformative technologies and methods. The idea of the university as “anchor” refers largely to the stability that research universities offer the knowledge enterprise. Another aspect of the idea of the university as an anchor is the fact that it helps to create a public infrastructure that encourages other kinds of innovation in the region that it serves — much as an anchor tenant helps to bring potential customers to smaller stores in a shopping mall. Unlike other knowledge-centered organizations like private research labs or federal laboratories, universities have a diverse portfolio of activity that confers a very high level of stability over time. This is a large asset for the country as a whole. It is also frequently an asset for the city or region in which it is located.

The idea of the university as a hub is perhaps the most innovative perspective offered here. The idea of a hub is a network concept. A hub is a node that links individuals and centers to each other in ways that transcend local organizational charts. And the power of a hub, and the networks that it joins, is that it facilitates the exchange of information and ideas and creates the possibility of new forms of cooperation and collaboration. Here the idea is that a research university is a place where researchers form working relationships, both on campus and in national networks of affiliation. And the density and configuration of these relationships serve to facilitate communication and diffusion of new ideas and approaches to a given problem, with the result that progress is more rapid. O-S makes use of Peter Galison’s treatment of the simultaneous discovery of the relativity of time measurement by Einstein and Poincaré in Einstein’s Clocks and Poincaré’s Maps: Empires of Time.  Galison shows that Einstein and Poincaré were both involved in extensive intellectual networks that were quite relevant to their discoveries; but that their innovations had substantially different effects because of differences in those networks. Owen-Smith believes that these differences are very relevant in the workings of modern RUs in the United States as well. (See also Galison’s Image and Logic: A Material Culture of Microphysics.)

Radical discoveries like the theory of special relativity are exceptionally rare, but the conditions that gave rise to them should also enable less radical insights. Imagining universities as organizational scaffolds for complex collaboration networks and focal points where flows of ideas, people, and problems come together offers a systematic way to assess the potential for innovation and novelty as well as for multiple discoveries. (p. 15)

Treating a complex and interdependent social process that occurs across relatively long time scales as if it had certain needs, short time frames, and clear returns is not just incorrect, it’s destructive. The kinds of simple rules I suggested earlier represent what organizational theorist James March called “superstitious learning.” They were akin to arguing that because many successful Silicon Valley firms were founded in garages, economic growth is a simple matter of building more garages. (25)

Rather, as O-S demonstrates in the case of the key discoveries that led to the establishment of Google, the pathway was long, complex, and heavily dependent on social networks of scientists, funders, entrepreneurs, graduate students, and federal agencies.

A key observation in O-S’s narrative at numerous points is the futility — perhaps even harmfulness — of attempting to harness university research to specific, quantifiable economic or political goals. The idea of selecting university research and teaching programs on the basis of their ROI relative to economic goals is, according to O-S, fundamentally misguided. The extended example he offers of the research that led to the establishment of Google as a company and a search engine illustrates this point very compellingly: much of the foundational research that made the search algorithms possible had the look of entirely non-pragmatic, non-utilitarian knowledge production at the time it was funded (chapter 1). (The development of the smartphone has a similar history; 63.) Philosophy, art history, and social theory can be as important to the overall success of the research enterprise as more intentionally directed areas of research (electrical engineering, genetic research, autonomous vehicle design). His discussion of Wisconsin Governor Scott Walker’s effort to revise the mission statement of the University of Wisconsin is exemplary (45 ff.).

Contra Governor Walker, the value of the university is found not in its ability to respond to immediate needs but in an expectation that joining systematic inquiry and education will result in people and ideas that reach beyond local, sometimes parochial, concerns. (46-47)

Also interesting is O-S’s discussion of the functionality of the extreme decentralization that is typical of most large research universities. In general O-S regards this decentralization as a positive thing, leading to greater independence for researchers and research teams and permitting higher levels of innovation and productive collaboration. In fact, O-S appears to believe that decentralization is a critical factor in the success of the research university as source, anchor, and hub in the creation of new knowledge.

The competition and collaboration enabled by decentralized organization, the pluralism and tension created when missions and fields collide, and the complex networks that emerge from knowledge work make universities sources by enabling them to produce new things on an ongoing basis. Their institutional and physical stability prevents them from succumbing to either internal strife or the kinds of ‘creative destruction’ that economist Joseph Schumpeter took to be a fundamental result of innovation under capitalism. (61)

O-S’s discussion of the micro-processes of discovery is particularly interesting (chapter 3). He makes a sustained attempt to dissect the interactive, networked ways in which multiple problems, methods, and perspectives occasionally come together to solve an important problem or develop a novel idea or technology. In O-S’s telling of the story, the existence of intellectual and scientific networks is crucial to the fecundity of these processes in and around research universities.

This is an important book and one that merits close reading. Nothing could be more critical to our future than the steady discovery of new ideas and solutions. Research universities have shown themselves to be uniquely powerful engines for discovery and dissemination of new knowledge. But the rapid decline of public appreciation of universities presents a serious risk to the continued vitality of the university-based knowledge sector. The most important contribution O-S has made here, in my reading, is the detailed work he has done to give exposition to the “micro-processes” of the research university — the collaborations, the networks, the unexpected contiguities of problems, and the high level of decentralization that American research universities embody. As O-S documents, these processes are difficult to present to the public in a compelling way, and the vitality of the research university itself is vulnerable to destructive interference in the current political environment. Providing a clear, well-documented account of how research universities work is a major and valuable contribution.

Eleven years of Understanding Society

This month marks the end of the eleventh year of publication of Understanding Society. Thanks to all the readers and visitors who have made the blog so rewarding. The audience continues to be international, with roughly half of visits coming from the United States and the rest from the UK, the Philippines, India, Australia, and other countries. There are a surprising number of visits from Ukraine.

Topics in the past year have been diverse. The most frequent topic is my current research interest, organizational dysfunction and technology failure. Also represented are topics in the philosophy of social science (causal mechanisms, computational modeling), philosophy of history, China, and the politics of hate and division. The post with the largest number of views was “Is history probabilistic?”, posted on December 30, and the least-read post was “The insights of biography”, posted on August 29. Not surprisingly, the content of the blog follows the topics which I’m currently thinking about, including most recently the issue of sexual harassment of women in university settings.

Writing the blog has been a good intellectual experience for me. Taking an hour or two to think intensively about a particular idea — large or small — and trying to figure out what I think about it is genuinely stimulating. It makes me think of the description that Richard Schacht gave, in an undergraduate course on nineteenth-century philosophy, of Hegel’s theory of creativity and labor. A sculptor begins with an indefinite idea of a physical form, a block of stone, and a hammer and chisel; through the interaction of hands, tools, and materials, he or she creates something new. The initial vision, inchoate as it is, is not enough, and the block of stone is mute. But the sculptor gives material expression to his or her vision through concrete interaction with the materials at hand. This is not a bad analogy for the process of thinking and writing itself. It is interesting that Marx’s conception of the creativity of labor derives from this Hegelian metaphor.

This is what I had hoped for when I began the blog in 2007. I wanted to have a challenging form of expression that would allow me to develop ideas about how society and the social sciences work, and I hoped that this activity would draw me into new ideas, new thinking, and new approaches to problems already of interest. This has certainly materialized for me — perhaps in the same way that a sculptor develops new capacities by contending with the resistance and contingency of the stone. There are issues, perspectives, and complexities that I have come to find very interesting that would not have come up in a more linear kind of academic writing.

It is also interesting for me to reflect on the role that “audience” plays for the writer. Since the first year of the blog I have felt that I understood the level of knowledge, questions, and interests that brought visitors to read a post or two, and sometimes to leave a comment. This is a smart, sophisticated audience. I have felt complete freedom in treating my subjects in the way that I think about them, without needing to simplify or reduce the problems I am considering to a more “public” level. This contrasts with the experience I had in blogging for the Huffington Post a number of years ago. Huff Post was a much more visible platform, but I never felt a connection with the audience, and I never felt the sense of intellectual comfort that I have in producing Understanding Society. As a result it was difficult to formulate my ideas in a way that seemed both authentic and original.

So thank you, to all the visitors and readers who have made the blog so satisfying for me over such a long time.

Sexual harassment in academic contexts

Sexual harassment of women in academic settings is regrettably common and pervasive, and its consequences are grave. At the same time, it is a remarkably difficult problem to solve. The “me-too” movement has shed welcome light on specific individual offenders and has generated more awareness of some aspects of the problem of sexual harassment and misconduct. But we have not yet come to a public awareness of the changes needed to create a genuinely inclusive and non-harassing environment for women across the spectrum of mistreatment that has been documented. The most common institutional response following an incident is to create a program of training and reporting, with a public commitment to investigating complaints and enforcing university or institutional policies rigorously and transparently. These efforts are often well intentioned, but by themselves they are insufficient. They do not address the underlying institutional and cultural features that make sexual harassment so prevalent.

The problem of sexual harassment in institutional contexts is a difficult one because it derives from multiple features of the organization. The ambient culture of the organization is often an important facilitator of harassing behavior — often enough a patriarchal culture that is deferential to the status of higher-powered individuals at the expense of lower-powered targets. Executive leadership in many institutions continues to be predominantly male, and these leaders bring with them a set of gendered assumptions that they often fail to recognize. The hierarchical nature of the power relations of an academic institution is conducive to mistreatment of many kinds, including sexual harassment. Bosses to administrative assistants, research directors to post-docs, thesis advisors to PhD candidates — these unequal relations of power create a conducive environment for sexual harassment in many varieties. In each case the superior actor has enormous power and influence over the career prospects and work lives of the women over whom they exercise power. And then there are the habits of behavior that individuals bring to the workplace and the learning environment — sometimes habits of masculine entitlement, sometimes disdainful attitudes towards female scholars or scientists, sometimes an underlying willingness to bully others that finds expression in an academic environment. (A recent issue of the Journal of Social Issues (link) devotes substantial research to the topic of toxic leadership in the tech sector and the “masculinity contest culture” that this group of researchers finds to be a root cause of the toxicity this sector displays for women professionals. Research by Jennifer Berdahl, Peter Glick, Natalya Alonso, and more than a dozen other scholars provides in-depth analysis of this common feature of work environments.)

The scope and urgency of the problem of sexual harassment in academic contexts is documented in excellent and expert detail in a recent study report by the National Academies of Sciences, Engineering, and Medicine (link). This report deserves prominent discussion at every university.

The study documents the frequency of sexual harassment in academic and scientific research contexts, and the data are sobering. Here are the results of two indicative studies at Penn State University System and the University of Texas System:

The Penn State survey indicates that 43.4% of undergraduates, 58.9% of graduate students, and 72.8% of medical students have experienced gender harassment, while 5.1% of undergraduates, 6.0% of graduate students, and 5.7% of medical students report having experienced unwanted sexual attention and sexual coercion. These are staggering results, both in terms of the absolute number of students who were affected and the negative effects that these experiences had on their ability to fulfill their educational potential. The University of Texas study shows a similar pattern, but also permits us to see meaningful differences across fields of study. Engineering and medicine provide significantly more harmful environments for female students than non-STEM and science disciplines. The authors make a particularly worrisome observation about medicine in this context:

The interviews conducted by RTI International revealed that unique settings such as medical residencies were described as breeding grounds for abusive behavior by superiors. Respondents expressed that this was largely because at this stage of the medical career, expectation of this behavior was widely accepted. The expectations of abusive, grueling conditions in training settings caused several respondents to view sexual harassment as a part of the continuum of what they were expected to endure. (63-64)

The report also does an excellent job of defining the scope of sexual harassment. Media discussion of sexual harassment and misconduct focuses primarily on egregious acts of sexual coercion. However, the authors of the NAS study note that experts currently encompass sexual coercion, unwanted sexual attention, and gender harassment under this category of harmful interpersonal behavior. The largest sub-category is gender harassment:

“a broad range of verbal and nonverbal behaviors not aimed at sexual cooperation but that convey insulting, hostile, and degrading attitudes about” members of one gender (Fitzgerald, Gelfand, and Drasgow 1995, 430). (25)

The “iceberg” diagram (p. 32) captures the range of behaviors encompassed by the concept of sexual harassment. (See Leskinen, Cortina, and Kabat 2011 for extensive discussion of the varieties of sexual harassment and the harms associated with gender harassment.)


The report emphasizes organizational features as a root cause of a harassment-friendly environment.

By far, the greatest predictors of the occurrence of sexual harassment are organizational. Individual-level factors (e.g., sexist attitudes, beliefs that rationalize or justify harassment, etc.) that might make someone decide to harass a work colleague, student, or peer are surely important. However, a person that has proclivities for sexual harassment will have those behaviors greatly inhibited when exposed to role models who behave in a professional way as compared with role models who behave in a harassing way, or when in an environment that does not support harassing behaviors and/or has strong consequences for these behaviors. Thus, this section considers some of the organizational and environmental variables that increase the risk of sexual harassment perpetration. (46)

Some of the organizational factors that they refer to include the extreme gender imbalance that exists in many professional work environments, the perceived absence of organizational sanctions for harassing behavior, work environments where sexist views and sexually harassing behavior are modeled, and power differentials (47-49). The authors make the point that gender harassment is chiefly aimed at indicating disrespect towards the target rather than sexual exploitation. This has an important implication for institutional change. An institution that creates a strong core set of values emphasizing civility and respect is less conducive to gender harassment. They summarize this analysis in the statement of findings as well:

Organizational climate is, by far, the greatest predictor of the occurrence of sexual harassment, and ameliorating it can prevent people from sexually harassing others. A person more likely to engage in harassing behaviors is significantly less likely to do so in an environment that does not support harassing behaviors and/or has strong, clear, transparent consequences for these behaviors. (50)

So what can a university or research institution do to reduce and eliminate the likelihood of sexual harassment for women within the institution? Several remedies seem fairly obvious, though difficult.

  • Establish a pervasive expectation of civility and respect in the workplace and the learning environment
  • Diffuse the concentrations of power that give potential harassers the opportunity to harass women within their domains
  • Ensure that the institution honors its values by refusing the “star culture” common in universities that makes high-prestige university members untouchable
  • Be vigilant and transparent about the processes of investigation and adjudication through which complaints are considered
  • Create effective processes that ensure that complainants do not suffer retaliation
  • Consider candidates’ receptivity to the values of a respectful, civil, and non-harassing environment during the hiring and appointment process (including research directors, department and program chairs, and other positions of authority)
  • Address the gender imbalance that may exist in leadership circles

As the authors put the point in the final chapter of the report:

Preventing and effectively addressing sexual harassment of women in colleges and universities is a significant challenge, but we are optimistic that academic institutions can meet that challenge–if they demonstrate the will to do so. This is because the research shows what will work to prevent sexual harassment and why it will work. A systemwide change to the culture and climate in our nation’s colleges and universities can stop the pattern of harassing behavior from impacting the next generation of women entering science, engineering, and medicine. (169)

Turing’s journey

A recent post comments on the value of biography as a source of insight into history and thought. Currently I am reading Andrew Hodges’ Alan Turing: The Enigma (1983), which I am finding fascinating both for its portrayal of the evolution of a brilliant and unconventional mathematician and for the honest efforts Hodges makes to describe Turing’s sexual evolution and the tragedy in which it eventuated. Hodges makes a serious effort to give the reader some understanding of Turing’s important contributions, including his enormously important “computable numbers” paper. (Here is a nice discussion of computability in the Stanford Encyclopedia of Philosophy; link.) The book also offers a reasonably technical account of the Enigma code-breaking process.

Hilbert’s mathematical imagination plays an important role in Turing’s development. Hilbert’s speculation that all mathematical statements would turn out to be derivable or disprovable turned out to be wrong, and Turing’s computable numbers paper (along with the work of Gödel and Church) demonstrated the incompleteness of formal mathematics and the undecidability of Hilbert’s Entscheidungsproblem. But it was Hilbert’s formulation of the idea that permitted the precise and conclusive refutations that came later. (Here is Richard Zach’s account in the Stanford Encyclopedia of Philosophy of Hilbert’s program; link.)

And then there were the machines. I had always thought of the Turing machine as a pure thought experiment designed to give specific meaning to the idea of computability. It has been eye-opening to learn of the innovative and path-breaking work that Turing did at Bletchley Park, Bell Labs, and other places in developing real computational machines. Turing’s development of real computing machines and his invention of the activity of “programming” (“construction of tables”) make his contributions to the development of digital computing machines much more advanced and technical than I had previously understood. His work late in the war on the difficult problem of encrypting speech for secure telephone conversation was also very interesting and innovative. Further, his understanding of the priority of creating a technology that would support “random access memory” was especially prescient. Here is Hodges’ summary of Turing’s view in 1947:

Considering the storage problem, he listed every form of discrete store that he and Don Bayley had thought of, including film, plugboards, wheels, relays, paper tape, punched cards, magnetic tape, and ‘cerebral cortex’, each with an estimate, in some cases obviously fanciful, of access time, and of the number of digits that could be stored per pound sterling. At one extreme, the storage could all be on electronic valves, giving access within a microsecond, but this would be prohibitively expensive. As he put it in his 1947 elaboration, ‘To store the content of an ordinary novel by such means would cost many millions of pounds.’ It was necessary to make a trade-off between cost and speed of access. He agreed with von Neumann, who in the EDVAC report had referred to the future possibility of developing a special ‘Iconoscope’ or television screen, for storing digits in the form of a pattern of spots. This he described as ‘much the most hopeful scheme, for economy combined with speed.’ (403)

These contributions are no doubt well known by experts on the history of computing. But for me it was eye-opening to learn how directly Turing was involved in the design and implementation of various automatic computing engines, including the British ACE machine itself at the National Physical Laboratory (link). Here is Turing’s description of the evolution of his thinking on this topic, extracted from a lecture in 1947:

Some years ago I was researching on what might now be described as an investigation of the theoretical possibilities and limitations of digital computing machines. I considered a type of machine which had a central mechanism and an infinite memory which was contained on an infinite tape. This type of machine appeared to be sufficiently general. One of my conclusions was that the idea of a ‘rule of thumb’ process and a ‘machine process’ were synonymous. The expression ‘machine process’ of course means one which could be carried out by the type of machine I was considering…. Machines such as the ACE may be regarded as practical versions of this same type of machine. There is at least a very close analogy. (399)
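Turing’s “type of machine” is easy to make concrete in modern terms. Here is a minimal simulator, an illustrative sketch rather than anything drawn from Hodges or from Turing’s own notation; the example transition table is invented. A finite table of rules drives a read/write head back and forth over a tape, which is exactly the “rule of thumb” process Turing describes.

```python
# A minimal Turing machine simulator. The transition table maps
# (state, symbol) -> (write_symbol, move, new_state). The example
# table below (hypothetical) flips every bit on the tape, then halts.

def run(tape, table, state='start', halt='halt', max_steps=1000):
    tape = dict(enumerate(tape))  # sparse tape; blank cells read as '_'
    pos = 0
    for _ in range(max_steps):
        if state == halt:
            break
        symbol = tape.get(pos, '_')
        write, move, state = table[(state, symbol)]
        tape[pos] = write
        pos += 1 if move == 'R' else -1
    # read off the tape in order, dropping surrounding blanks
    return ''.join(tape[i] for i in sorted(tape)).strip('_')

flip = {
    ('start', '0'): ('1', 'R', 'start'),
    ('start', '1'): ('0', 'R', 'start'),
    ('start', '_'): ('_', 'L', 'halt'),
}

print(run('0110', flip))  # prints '1001'
```

The point of the exercise is Turing’s own: all of the machine’s specific behavior lives in the table, not in the mechanism; change the table and the very same machine computes something else.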

At the same time his clear logical understanding of the implications of a universal computing machine was genuinely visionary. He was evangelical in his advocacy of the goal of creating a machine with a minimalist and simple architecture where all the complexity and specificity of the use of the machine derives from its instructions (programming), not its specialized hardware.

Also interesting is the fact that Turing had a literary impulse (not often exercised), and wrote at least one semi-autobiographical short story about a sexual encounter. Only a few pages survive. Here is a paragraph quoted by Hodges:

Alec had been working rather hard until two or three weeks before. It was about interplanetary travel. Alec had always been rather keen on such crackpot problems, but although he rather liked to let himself go rather wildly to newspapermen or on the Third Programme when he got the chance, when he wrote for technically trained readers, his work was quite sound, or had been when he was younger. This last paper was real good stuff, better than he’d done since his mid twenties when he had introduced the idea which is now becoming known as ‘Pryce’s buoy’. Alec always felt a glow of pride when this phrase was used. The rather obvious double-entendre rather pleased him too. He always liked to parade his homosexuality, and in suitable company Alec could pretend that the word was spelt without the ‘u’. It was quite some time now since he had ‘had’ anyone, in fact not since he had met that soldier in Paris last summer. Now that his paper was finished he might justifiably consider that he had earned another gay man, and he knew where he might find one who might be suitable. (564)

The passage is striking for several reasons; but most obviously, it brings together the two leading themes of his life, his scientific imagination and his sexuality.

This biography of Turing reinforces for me the value of the genre more generally. The reader gets a better understanding of the important developments in mathematics and computing that Turing achieved; a vivid view of the high stakes in the secret conflict in which Turing played a crucial part, using cryptographic advances to defeat the Nazi submarine threat; and personal insights into the singular individual who developed into such a world-changing logician, engineer, and scientist.

Social generativity and complexity

The idea of generativity in the realm of the social world expresses the notion that social phenomena are generated by the actions and thoughts of the individuals who constitute them, and nothing else (link, link). More specifically, the principle of generativity postulates that the properties and dynamic characteristics of social entities like structures, ideologies, knowledge systems, institutions, and economic systems are produced by the actions, thoughts, and dispositions of the set of individuals who make them up. There is no other kind of influence that contributes to the causal and dynamic properties of social entities. Begin with a population of individuals with such-and-so mental and behavioral characteristics; allow them to interact with each other over time; and the structures we observe emerge as a determinate consequence of these interactions.

This view of the social world lends great ontological support to the methods associated with agent-based models (link). Here is how Joshua Epstein puts the idea in Generative Social Science: Studies in Agent-Based Computational Modeling:

Agent-based models provide computational demonstrations that a given microspecification is in fact sufficient to generate a macrostructure of interest…. Rather, the generativist wants an account of the configuration’s attainment by a decentralized system of heterogeneous autonomous agents. Thus, the motto of generative social science, if you will, is: If you didn’t grow it, you didn’t explain its emergence. (42)
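Epstein’s motto can be illustrated with a toy model in a few lines of code (my example, not Epstein’s; all parameters here are invented): a one-dimensional Schelling-style model in which agents with only a mild preference for similar neighbors “grow” conspicuous macro-level clustering.

```python
import random

random.seed(42)

# A toy one-dimensional Schelling model: agents of two types ('A', 'B')
# plus empty cells ('.'). An agent is unhappy if fewer than 30% of its
# occupied neighbors (within distance 2) share its type; at each step one
# unhappy agent moves to a random empty cell.

N = 60
cells = ['A'] * 25 + ['B'] * 25 + ['.'] * 10
random.shuffle(cells)

def unhappy(i):
    me = cells[i]
    if me == '.':
        return False
    nbrs = [cells[j] for j in range(max(0, i - 2), min(N, i + 3))
            if j != i and cells[j] != '.']
    if not nbrs:
        return False
    return sum(n == me for n in nbrs) / len(nbrs) < 0.3

for step in range(200):
    movers = [i for i in range(N) if unhappy(i)]
    if not movers:
        break  # everyone is content; clustering has emerged
    i = random.choice(movers)
    j = random.choice([k for k in range(N) if cells[k] == '.'])
    cells[j], cells[i] = cells[i], '.'

print(''.join(cells))
```

Nothing in the microspecification mentions segregation; the macrostructure is generated, and in Epstein’s sense thereby explained, by the decentralized interactions of the agents themselves.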

Consider an analogy with cooking. The properties of the cake are generated by the properties of the ingredients, their chemical properties, and the sequence of steps that are applied to the assemblage of the mixture from the mixing bowl to the oven to the cooling board. The final characteristics of the cake are simply the consequence of the chemistry of the ingredients and the series of physical influences that were applied in a given sequence.

Now consider the concept of a complex system. A complex system is one in which there is a multiplicity of causal factors contributing to the dynamics of the system, in which there are causal interactions among the underlying causal factors, and in which causal interactions are often non-linear. Non-linearity is important here, because it implies that a small change in one or more factors may lead to very large changes in the outcome. We like to think of causal systems as consisting of causal factors whose effects are independent of each other and whose influence is linear and additive.

A gardener is justified in thinking of growing tomatoes in this way: a little more fertilizer, a little more water, and a little more sunlight each lead to a little more tomato growth. But imagine a garden in which the effect of fertilizer on tomato growth is dependent on the recent gradient of water provision, and the effects of both positive influencers depends substantially on the recent amount of sunlight available. Under these circumstances it is difficult to predict the aggregate size of the tomato given information about the quantities of the inputs.
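The gardener’s predicament can be made concrete with a toy simulation (purely illustrative; the growth function and all coefficients are invented): when the effect of fertilizer depends on the recent water gradient and both effects are amplified by sunlight, two schedules with identical total inputs produce different tomatoes.

```python
# Toy model of interaction effects in tomato growth. Daily growth depends
# on water, fertilizer, and sun, but fertilizer helps only when water has
# recently increased, and sunlight amplifies both effects multiplicatively.
# All coefficients are invented for illustration.

def grow(schedule):
    """schedule: list of (water, fertilizer, sun) tuples, one per day."""
    size = 1.0
    prev_water = 0.0
    for water, fert, sun in schedule:
        water_gradient = water - prev_water
        effect = (0.1 * water + 0.2 * fert * max(water_gradient, 0)) * sun
        size += effect
        prev_water = water
    return size

# Same totals of water, fertilizer, and sun over 4 days, different sequencing:
steady = [(1, 1, 1)] * 4
pulsed = [(0, 1, 1), (2, 1, 1), (0, 1, 1), (2, 1, 1)]

print(grow(steady), grow(pulsed))  # the two sizes differ
```

With a linear, additive growth function the two schedules would be indistinguishable; the interaction terms are what make the outcome sensitive to sequencing, and hence hard to predict from aggregate input totals alone.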

One of the key insights of complexity science is that generativity is fully compatible with a wicked level of complexity. The tomato’s size is generated by its history of growth, determined by the sequence of inputs over time. But for the reason just mentioned, the complexity of interactions between water, sunlight, and fertilizer in their effects on growth mean that the overall dynamics of tomato growth are difficult to reconstruct.

Now consider the idea of strong emergence — the idea that some aggregates possess properties that cannot in principle be explained by reference to the causal properties of the constituents of the aggregate. This means that the properties of the aggregate are not generated by the workings of the constituents; otherwise we would be able in principle to explain the properties of the aggregate by demonstrating how they derive from the (complex) pathways leading from the constituents to the aggregate. This version of the absolute autonomy of some higher-level properties is inherently mysterious. It implies that the aggregate does not supervene upon the properties of the constituents; there could be different aggregate properties with identical constituent properties. And this seems ontologically untenable.

The idea of ontological individualism captures this intuition in the setting of social phenomena: social entities are ultimately composed of and constituted by the properties of the individuals who make them up, and nothing else. This does not imply methodological individualism; for reasons of complexity or computational limitations it may be practically impossible to reconstruct the pathways through which the social entity is generated out of the properties of individuals. But ontological individualism places an ontological constraint on the way that we conceptualize the social world. And it gives a concrete meaning to the idea of the microfoundations for a social entity. The microfoundations of a social entity are the pathways and mechanisms, known or unknown, through which the social entity is generated by the actions and intentionality of the individuals who constitute it.

Empowering the safety officer?

How can industries involving processes that create large risks of harm for individuals or populations be modified so they are more capable of detecting and eliminating the precursors of harmful accidents? How can nuclear accidents, aviation crashes, chemical plant explosions, and medical errors be reduced, given that each of these activities involves large bureaucratic organizations conducting complex operations and with substantial inter-system linkages? How can organizations be reformed to enhance safety and to minimize the likelihood of harmful accidents?

One of the lessons learned from the Challenger space shuttle disaster is the importance of a strongly empowered safety officer in organizations that deal in high-risk activities. This means the creation of a position dedicated to ensuring safe operations that falls outside the normal chain of command. The idea is that the normal decision-making hierarchy of a large organization has a built-in tendency to maintain production schedules and avoid costly delays. In other words, there is a built-in incentive to treat safety issues with lower priority than most people would expect.

If there had been an empowered safety officer in the launch hierarchy for the Challenger launch in 1986, there is a good chance this officer would have listened more carefully to the Morton-Thiokol engineering team’s concerns about low-temperature damage to O-rings and would have ordered a halt to the launch sequence until temperatures in Florida rose to the critical value. The Rogers Commission faulted the decision-making process leading to the launch decision in its final report on the accident (The Report of the Presidential Commission on the Space Shuttle Challenger Accident – The Tragedy of Mission 51-L in 1986 – Volume One, Volume Two, Volume Three).

This approach is productive because empowering a safety officer creates a different set of interests in the management of a risky process. The safety officer’s interest is in safety, whereas other decision makers are concerned about revenues and costs, public relations, reputation, and other instrumental goods. So a dedicated safety officer is empowered to raise safety concerns that other officers might be hesitant to raise. Ordinary bureaucratic incentives may lead to underestimating risks or concealing faults; so lowering the accident rate requires giving some individuals the incentive and power to act effectively to reduce risks.

Similar findings have emerged in the study of medical and hospital errors. It has been recognized that high-risk activities are made less risky by empowering all members of the team to call a halt in an activity when they perceive a safety issue. When all members of the surgical team are empowered to halt a procedure when they note an apparent error, serious operating-room errors are reduced. (Here is a report from the American College of Obstetricians and Gynecologists on surgical patient safety; link. And here is a 1999 National Academy report on medical error; link.)

The effectiveness of a team-based approach to safety depends on one central fact: there is a high level of expertise embodied in the staff operating a surgical suite, an engineering laboratory, or a drug manufacturing facility. Empowering these individuals to stop a procedure when they judge an unrecognized error is in play greatly extends the amount of embodied knowledge brought to bear on the process. The surgeon, the commanding officer, or the lab director is no longer the sole expert whose judgments count.

But it also seems clear that these innovations don’t work equally well in all circumstances. Take nuclear power plant operations. In Atomic Accidents: A History of Nuclear Meltdowns and Disasters: From the Ozark Mountains to Fukushima, James Mahaffey documents multiple examples of nuclear accidents that resulted from the efforts of mid-level workers to address an emerging problem in an improvised way. In the case of nuclear power plant safety, it appears that the best prescription for safety is to insist on rigid adherence to pre-established protocols. In this case the function of a safety officer is to monitor operations to ensure protocol conformance — not to exercise independent judgment about the best way to respond to an unfavorable reactor event.

It is in fact an interesting exercise to try to identify the kinds of operations in which these innovations are likely to be effective.

Here is a fascinating interview in Slate with Jim Bagian, a former astronaut, one-time director of the Veterans Administration’s National Center for Patient Safety, and distinguished safety expert; link. Bagian emphasizes the importance of taking a system-based approach to safety. Rather than focusing on finding blame for specific individuals whose actions led to an accident, Bagian emphasizes the importance of tracing back to the institutional, organizational, or logistic background of the accident. What can be changed in the process — of delivering medications to patients, of fueling a rocket, or of moving nuclear solutions around in a laboratory — that would make the likelihood of an accident substantially lower?

The safety principles involved here seem fairly simple: cultivate a culture in which errors and near-misses are reported and investigated without blame; empower individuals within risky processes to halt the process if their expertise and experience indicates the possibility of a significant risky error; create positions within organizations whose occupants’ interests are defined in terms of the identification and resolution of unsafe practices and conditions; and share information about safety within the industry and with the public.
