Soviet nuclear disasters: Kyshtym

The 1986 meltdown of reactor number 4 at the Chernobyl Nuclear Power Plant was the greatest nuclear disaster the world has yet seen. Less well known is the Kyshtym disaster of 1957, a catastrophic underground explosion at a nuclear waste storage facility near the Mayak plutonium production complex in the Eastern Ural region of the USSR, which released a massive amount of radioactive material. Information about the disaster was tightly restricted by Soviet authorities, with predictably bad consequences.

Zhores Medvedev was one of the first qualified scientists to provide information and hypotheses about the Kyshtym disaster. His book Nuclear Disaster in the Urals, written while he was in exile in Great Britain, appeared in 1980. It is fascinating that his reasoning is based on his study of ecological, biological, and environmental research done by Soviet scientists between 1957 and 1980. Medvedev was able to piece together the extent of contamination and the general nature of the cause of the event from information about radioactive contamination in lakes and streams in the region that was included incidentally in scientific reports from the period.

Interestingly, scientists in the United States were quite skeptical about Medvedev’s assertions. W. Stratton et al. published a review analysis in Science in 1979 (link) that found Medvedev’s reasoning unpersuasive.

A steam explosion of one tank is not inconceivable but is most improbable, because the heat generation rate from a given amount of fission products is known precisely and is predictable. Means to dissipate this heat would be a part of the design and could be made highly reliable. (423)

They offer an alternative hypothesis about any possible radioactive contamination in the Kyshtym region — the handful of multimegaton nuclear weapons tests conducted by the USSR in the Novaya Zemlya area.

We suggest that the observed data can be satisfied by postulating localized fallout (perhaps with precipitation) from explosion of a large nuclear weapon, or even from more than one explosion, because we have no limits on the length of time that fallout continued. (425)

And they consider weather patterns during the relevant time period to argue that these tests could have been the source of the radiation contamination identified by Medvedev. Novaya Zemlya is over 1000 miles north of Kyshtym (20 degrees of latitude). So fallout from the nuclear tests is a possible alternative hypothesis, but a farfetched one. They conclude:

We can only conclude that, though a radiation release incident may well be supported by the available evidence, the magnitude of the incident may have been grossly exaggerated, the source chosen uncritically, and the dispersal mechanism ignored. Even so we find it hard to believe that an area of this magnitude could become contaminated and the event not discussed in detail or by more than one individual for more than 20 years. (425)

The heart of their skepticism rests on an entirely indefensible assumption: that Soviet science, engineering, and management were entirely capable of designing and implementing a safe system for nuclear waste storage. They were perhaps right about the scientific and engineering capabilities of the Soviet system; but the management systems in place were woefully inadequate. Their account rested on an assumption of straightforward application of engineering knowledge to the problem; they failed to take into account the defects of organization and oversight that were rampant within Soviet industrial systems. And in the end the core of Medvedev’s claims has been validated.

Another official report, compiled by Los Alamos scientists and released in 1982, concluded unambiguously that Medvedev was mistaken, and that the widespread ecological devastation in the region resulted from small and gradual processes of contamination rather than a massive explosion of waste materials (link). Here is the conclusion put forward by the study’s authors:

What then did happen at Kyshtym? A disastrous nuclear accident that killed hundreds, injured thousands, and contaminated thousands of square miles of land? Or, a series of relatively minor incidents, embellished by rumor, and severely compounded by a history of sloppy practices associated with the complex? The latter seems more highly probable.

So Medvedev is dismissed.

After the collapse of the USSR voluminous records about the Kyshtym disaster became available from secret Soviet files, and those records make it plain that US scientists badly misjudged the nature of the Kyshtym disaster. Medvedev was much closer to the truth than were Stratton and his colleagues or the authors of the Los Alamos report.

A scientific report based on Soviet-era documents that were released after the fall of the Soviet Union appeared in the Journal of Radiological Protection in 2017 (A V Akleyev et al 2017; link). Here is their brief description of the accident:

Starting in the earliest period of Mayak PA activities, large amounts of liquid high-level radioactive waste from the radiochemical facility were placed into long-term controlled storage in metal tanks installed in concrete vaults. Each full tank contained 70–80 tons of radioactive wastes, mainly in the form of nitrate compounds. The tanks were water-cooled and equipped with temperature and liquid-level measurement devices. In September 1957, as a result of a failure of the temperature-control system of tank #14, cooling-water delivery became insufficient and radioactive decay caused an increase in temperature followed by complete evaporation of the water, and the nitrate salt deposits were heated to 330 °C–350 °C. The thermal explosion of tank #14 occurred on 29 September 1957 at 4:20 pm local time. At the time of the explosion the activity of the wastes contained in the tank was about 740 PBq [5, 6]. About 90% of the total activity settled in the immediate vicinity of the explosion site (within distances less than 5 km), primarily in the form of coarse particles. The explosion gave rise to a radioactive plume which dispersed into the atmosphere. About 2 × 106 Ci (74PBq) was dispersed by the wind (north-northeast direction with wind velocity of 5–10 m s−1) and caused the radioactive trace along the path of the plume [5]. Table 1 presents the latest estimates of radionuclide composition of the release used for reconstruction of doses in the EURT area. The mixture corresponded to uranium fission products formed in a nuclear reactor after a decay time of about 1 year, with depletion in 137Cs due to a special treatment of the radioactive waste involving the extraction of 137Cs [6]. (R20-21)
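
A note on the units, mine rather than the authors’: 1 Ci is 3.7 × 10^10 Bq, so the activity dispersed by the wind works out to the figure given in the quotation,

\[
2\times 10^{6}\ \mathrm{Ci} \times 3.7\times 10^{10}\ \mathrm{Bq/Ci} = 7.4\times 10^{16}\ \mathrm{Bq} = 74\ \mathrm{PBq},
\]

roughly 10% of the tank’s total activity of 740 PBq, consistent with the statement that about 90% of the activity settled near the explosion site.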

Here is the region of radiation contamination (EURT) that Akleyev et al identify:

This region represents a large area encompassing 23,000 square kilometers (8,880 square miles). Plainly Akleyev et al describe a massive disaster including a very large explosion in an underground nuclear waste storage facility, large-scale dispersal of nuclear materials, and evacuation of population throughout a large region. This is very close to the description provided by Medvedev.

A somewhat surprising finding of the Akleyev study is that the exposed population did not show dramatically worse health outcomes and mortality relative to unexposed populations. For example, “Leukemia mortality rates over a 30-year period after the accident did not differ from those in the group of unexposed people” (R30). Their epidemiological study for cancers overall likewise indicates only a small effect of accidental radiation exposure on cancer incidence:

The attributable risk (AR) of solid cancer incidence in the EURTC, which gives the proportion of excess cancer cases out of the sum of excess and baseline cases, calculated according to the linear model, made up 1.9% over the whole follow-up period. Therefore, only 27 cancer cases out of 1426 could be associated with accidental radiation exposure of the EURT population. AR is highest in the highest dose groups (250–500 mGy and >500 mGy) and exceeds 17%.
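
As a quick back-of-the-envelope check on those figures (my own arithmetic, not part of the quoted passage), an attributable risk of 1.9% applied to the 1426 observed cancer cases does yield roughly the 27 excess cases the authors report:

\[
\mathrm{AR} = \frac{\text{excess cases}}{\text{excess cases} + \text{baseline cases}} \approx 0.019,
\qquad 0.019 \times 1426 \approx 27 .
\]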

So why did the explosion occur? James Mahaffey examines the case in detail in Atomic Accidents: A History of Nuclear Meltdowns and Disasters: From the Ozark Mountains to Fukushima. Here is his account:

In the crash program to produce fissile bomb material, a great deal of plutonium was wasted in the crude separation process. Production officials decided that instead of being dumped irretrievably into the river, the plutonium that had failed to precipitate out, remaining in the extraction solution, should be saved for future processing. A big underground tank farm was built in 1953 to hold processed fission waste. Round steel tanks were installed in banks of 20, sitting on one large concrete slab poured at the bottom of an excavation, 27 feet deep. Each bank was equipped with a heat exchanger, removing the heat buildup from fission-product decay using water pipes wrapped around the tanks. The tanks were then buried under a backfill of dirt. The tanks began immediately to fill with various waste solutions from the extraction plant, with no particular distinction among the vessels. The tanks contained all the undesirable fission products, including cobalt-60, strontium-90, and cesium-137, along with unseparated plutonium and uranium, with both acetate and nitrate solutions pumped into the same volume. One tank could hold probably 100 tons of waste product. 

In 1956, a cooling-water pipe broke leading to one of the tanks. It would be a lot of work to dig up the tank, find the leak, and replace the pipe, so instead of going to all that trouble, the engineers in charge just turned off the water and forgot about it. 

A year passed. Not having any coolant flow and being insulated from the harsh Siberian winter by the fill dirt, the tank retained heat from the fission-product decay. Temperature inside reached 660° Fahrenheit, hot enough to melt lead and cast bullets. Under this condition, the nitrate solutions degraded into ammonium nitrate, or fertilizer, mixed with acetates. The water all boiled away, and what was left was enough solidified ANFO explosive to blow up Sterling Hall several times, being heated to the detonation point and laced with dangerous nuclides. [189] 

Sometime before 11:00 P.M. on Sunday, September 29, 1957, the bomb went off, throwing a column of black smoke and debris reaching a kilometer into the sky, accented with larger fragments burning orange-red. The 160-ton concrete lid on the tank tumbled upward into the night like a badly thrown discus, and the ground thump was felt many miles away. Residents of Chelyabinsk rushed outside and looked at the lighted display to the northwest, as 20 million curies of radioactive dust spread out over everything sticking above ground. The high-level wind that night was blowing northeast, and a radioactive plume dusted the Earth in a tight line, about 300 kilometers long. This accident had not been a runaway explosion in an overworked Soviet production reactor. It was the world’s first “dirty bomb,” a powerful chemical explosive spreading radioactive nuclides having unusually high body burdens and guaranteed to cause havoc in the biosphere. The accidentally derived explosive in the tank was the equivalent of up to 100 tons of TNT, and there were probably 70 to 80 tons of radioactive waste thrown skyward. (KL 5295)

So what were the primary organizational and social causes of this disaster? One is the haste in nuclear design and construction created by Stalin’s insistence on moving the Soviet nuclear weapons program forward as rapidly as possible. As is evident in the Chernobyl case as well, the political pressures on engineers and managers that followed from these political priorities often led to disastrous decisions and actions. A second is the institutionalized system of secrecy that surrounded industry generally, the military specifically, and the nuclear industry most especially. A third is the casual attitude taken by Soviet officials towards the health and wellbeing of the population. And a final cause highlighted by Mahaffey’s account is the low level of attention given at the plant level to safety and maintenance of highly risky facilities. Stratton et al based their analysis on the fact that the heat-generating characteristics of nuclear waste were well understood and that effective means existed for controlling those risks. That may be so, but what they failed to anticipate is that these risks would be fundamentally disregarded on the ground and in the supervisory system above the Kyshtym complex.

(It is interesting to note that Mahaffey himself underestimates the amount of information that is now available about the effects of the disaster. He writes that “studies of the effects of this disaster are extremely difficult, as records do not exist, and previous residents are hard to track down” (kl 5330). But the Akleyev study mentioned above provides extensive health details about the affected population, made possible by data that were collected during the Soviet period but kept secret until after the collapse of the USSR.)

 

Safety and accident analysis: Longford

Andrew Hopkins has written a number of fascinating case studies of industrial accidents, usually in the field of petrochemicals. These books are crucial reading for anyone interested in arriving at a better understanding of technological safety in the context of complex systems involving high-energy and tightly-coupled processes. Especially interesting is his Lessons from Longford: The ESSO Gas Plant Explosion. The Longford refining plant suffered an explosion and fire in 1998 that killed two workers, badly injured others, and interrupted the supply of natural gas to the state of Victoria for two weeks. Hopkins is a sociologist, but has developed substantial expertise in the technical details of petrochemical refining plants. He served as an expert witness in the Royal Commission hearings that investigated the accident. The accounts he offers of these disasters are genuinely fascinating to read.

Hopkins makes the now-familiar point that companies often seek to lay responsibility for a major industrial accident on operator error or malfeasance. This was Esso’s defense concerning its corporate liability in the Longford disaster. But, as Hopkins points out, the larger causes of failure go far beyond the individual operators whose decisions and actions were proximate to the event. Training, operating plans, hazard analysis, availability of appropriate onsite technical expertise — these are all the responsibility of the owners and managers of the enterprise. And regulation and oversight of safety practices are the responsibility of state agencies. So it is critical to examine the operations of a complex and dangerous technology system at all these levels.

A crucial part of management’s responsibility is to engage in formal “hazard and operability” (HAZOP) analysis. “A HAZOP involves systematically imagining everything that might go wrong in a processing plant and developing procedures or engineering solutions to avoid these potential problems” (26). This kind of analysis is especially critical in high-risk industries including chemical plants, petrochemical refineries, and nuclear reactors. It emerged during the Longford accident investigation that HAZOP analyses had been conducted for some aspects of risk but not for all — even in areas where the parent company Exxon was itself already fully engaged in analysis of those risky scenarios. The risk of embrittlement of processing equipment when exposed to super-chilled conditions was one that Exxon had already drawn attention to at the corporate level because of prior incidents.
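
To make the idea concrete, here is a minimal sketch, entirely my own construction rather than anything drawn from Hopkins or from Esso’s actual procedures, of the kind of record a HAZOP study produces: for each process node and guide-word deviation, the analysts record causes, consequences, existing safeguards, and the procedures or design changes adopted in response. The field names and the example entry are hypothetical.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class HazopEntry:
    """One row of a HAZOP worksheet: an imagined deviation at a process node."""
    node: str                   # section of the plant under study
    parameter: str              # process parameter (flow, temperature, pressure, ...)
    guide_word: str             # HAZOP guide word (NO, MORE, LESS, REVERSE, ...)
    deviation: str              # the imagined departure from design intent
    causes: List[str]           # plausible initiating causes
    consequences: List[str]     # what could follow if the deviation occurs
    safeguards: List[str] = field(default_factory=list)   # existing protections
    actions: List[str] = field(default_factory=list)      # procedures or design changes adopted

# Hypothetical entry in the spirit of the Longford discussion: loss of warm oil
# flow chills a heat exchanger below the temperature at which its steel embrittles.
entry = HazopEntry(
    node="Heat exchanger (hypothetical unit HX-1)",
    parameter="temperature",
    guide_word="LESS",
    deviation="Metal temperature falls far below the design minimum",
    causes=["Loss of warm oil circulation"],
    consequences=["Embrittlement of the vessel; fracture if warm liquid is suddenly reintroduced"],
    safeguards=["Low-temperature alarm"],
    actions=["Written procedure: do not reintroduce warm oil without an engineering assessment"],
)

print(f"{entry.guide_word} {entry.parameter}: {entry.deviation}")
```

The point of such a worksheet is the one Hopkins emphasizes: a hazard like embrittlement only gets a procedure written for it if someone has systematically imagined the deviation in advance.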

A factor that Hopkins judges to be crucial to the occurrence of the Longford Esso disaster is management’s decision to relocate engineering staff from the plant to a central location where they could serve a larger number of facilities “more efficiently”.

A second relevant change was the relocation to Melbourne in 1992 of all the engineering staff who had previously worked at Longford, leaving the Longford operators without the engineering backup to which they were accustomed. Following their removal from Longford, engineers were expected to monitor the plant from a distance and operators were expected to telephone the engineers when they felt a need to. Perhaps predictably, these arrangements did not work effectively, and I shall argue in the next chapter that the absence of engineering expertise had certain long-term consequences which contributed to the accident. (34)

One result of this decision was that when the Longford incident began there were no engineering experts on site who could correctly identify the risks it created. Technicians therefore restarted the process by reintroducing warm oil into the super-chilled heat exchanger. The metal had become brittle as a result of the extremely low temperatures and cracked, leading to the release of fuel and the subsequent explosion and fire. As Hopkins points out, Exxon experts had long been aware of the hazards of embrittlement. However, it appears that the operating procedures developed by Esso at Longford ignored this risk, and operators and supervisors lacked the technical/scientific knowledge to recognize the hazard when it arose.

The topic of “tight coupling” (the tight interconnection across different parts of a complex technological system) comes up frequently in discussions of technology accidents. Hopkins shows that the Longford case gives a new spin to this idea. In the explosion and fire at Longford it turned out to be very important that plant 1 was interconnected by numerous plumbing connections to plants 2 and 3. This meant that fuel from plants 2 and 3 continued to flow into plant 1 and greatly extended the length of time it took to extinguish the fire. Plant 1 had to be fully isolated from plants 2 and 3 before the fire could be extinguished (or plants 2 and 3 restarted), and there were so many plumbing connections among them, poorly understood at the time of the fire, that the isolation took a great deal of time (32).

Hopkins addresses the issue of government regulation of high-risk industries in connection with the Longford disaster. Writing in 1999 or so, he recognizes the trend towards “self-regulation” in place of government rules stipulating how various industries must operate. He contrasts this approach with deregulation — the effort to allow the issue of safe operation to be governed by the market rather than by law.

Whereas the old-style legislation required employers to comply with precise, often quite technical rules, the new style imposes an overarching requirement on employers that they provide a safe and healthy workplace for their employees, as far as practicable. (92)

He notes that this approach does not necessarily reduce the need for government inspections; but the goal of regulatory inspection will be different. Inspectors will seek to satisfy themselves that the industry has done a responsible job of identifying hazards and planning accordingly, rather than looking for violations of specific rules. (This parallels to some extent his discussion of two different philosophies of audit, one of which is much more conducive to increasing the systems-safety of high-risk industries; chapter 7.) But his preferred regulatory approach is what he describes as “safety case regulation”. (Hopkins provides more detail about the workings of a safety case regime in Disastrous Decisions: The Human and Organisational Causes of the Gulf of Mexico Blowout, chapter 10.)

The essence of the new approach is that the operator of a major hazard installation is required to make a case or demonstrate to the relevant authority that safety is being or will be effectively managed at the installation. Whereas under the self-regulatory approach, the facility operator is normally left to its own devices in deciding how to manage safety, under the safety case approach it must lay out its procedures for examination by the regulatory authority. (96)

The preparation of a safety case would presumably include a comprehensive HAZOP analysis, along with procedures for preventing or responding to the occurrence of possible hazards. Hopkins reports that the safety case approach to regulation is being adopted by the EU, Australia, and the UK with respect to a number of high-risk industries. This discussion is highly relevant to the current debate over aircraft manufacturing safety and the role of the FAA in overseeing manufacturers.

It is interesting to realize that Hopkins is implicitly critical of another of my favorite authors on the topic of accidents and technology safety, Charles Perrow. Perrow’s central idea of “normal accidents” brings along with it a certain pessimism about the ability to increase safety in complex industrial and technological systems; accidents are inevitable and normal (Normal Accidents: Living with High-Risk Technologies). Hopkins takes a more pragmatic approach and argues that there are engineering and management methodologies that can significantly reduce the likelihood and harm of accidents like the Esso gas plant explosion. His central point is that we don’t need to be able to anticipate a long chain of unlikely events in order to identify the hazard in which such chains may eventuate — for example, loss of coolant in a nuclear reactor or loss of warm oil in a refinery process. These end-states of numerous different possible accident scenarios all require procedures in place that will guide the responses of engineers and technicians when “normal accidents” occur (33).

Hopkins highlights the challenge to safety created by the ongoing modification of a power plant or chemical plant; later modifications may create hazards not anticipated by the rigorous accident analysis performed on the original design.

Processing plants evolve and grow over time. A study of petroleum refineries in the US has shown that “the largest and most complex refineries in the sample are also the oldest … Their complexity emerged as a result of historical accretion. Processes were modified, added, linked, enhanced and replaced over a history that greatly exceeded the memories of those who worked in the refinery.” (33)

This is one of the chief reasons why Perrow believes technological accidents are inevitable. However, Hopkins draws a different conclusion:

However, those who are committed to accident prevention draw a different conclusion, namely, that it is important that every time physical changes are made to plant these changes be subjected to a systematic hazard identification process. …  Esso’s own management of change philosophy recognises this. It notes that “changes potentially invalidate prior risk assessments and can create new risks, if not managed diligently.” (33)

(I believe this recommendation conforms to Nancy Leveson’s theories of system safety engineering as well; link.)

Here is the causal diagram that Hopkins offers for the occurrence of the explosion at Longford (122).

The lowest level of the diagram represents the sequence of physical events and operator actions leading to the explosion, fatalities, and loss of gas supply. The next level represents the organizational factors identified in Hopkins’s analysis of the event and its background. Central among these factors are the decision to withdraw engineers from the plant; a safety philosophy that focused on lost-time injuries rather than system hazards and processes; failures in the incident reporting system; failure to perform a HAZOP for plant 1; poor maintenance practices; inadequate audit practices; inadequate training for operators and supervisors; and a failure to identify the hazard created by interconnections with plants 2 and 3. The next level identifies the causes of the management failures — Esso’s overriding focus on cost-cutting and a failure by Exxon as the parent company to adequately oversee safety planning and share information from accidents at other plants. The final two levels of causation concern governmental and societal factors that contributed to the corporate behavior leading to the accident.

(Here is a list of major industrial disasters; link.)

Herbert Simon’s theories of organizations

Image: detail from Family Portrait 2, 1965 (Creative Commons license, Richard Rappaport)
 

Herbert Simon made paradigm-changing contributions to the theory of rational behavior, including particularly his treatment of “satisficing” as an alternative to “maximizing” economic rationality (link). It is therefore worthwhile examining his views of organizations and organizational decision-making and action — especially given how relevant those theories are to my current research interest in organizational dysfunction. His highly successful book Administrative Behavior went through four editions between 1947 and 1997 — more than fifty years of thinking about organizations and organizational behavior. The more recent editions consist of the original text and “commentary” chapters that Simon wrote to incorporate more recent thinking about the content of each of the chapters.

Here I will pull out some of the highlights of Simon’s approach to organizations. There are many features of his analysis of organizational behavior that are worth noting. But my summary assessment is that the book is surprisingly positive about the rationality of organizations and the processes through which they collect information and reach decisions. In the contemporary environment where we have all too many examples of organizational failure in decision-making — from Boeing to Purdue Pharma to the Federal Emergency Management Agency — this confidence seems to be fundamentally misplaced. The theorist who invented the idea of imperfect rationality and satisficing at the individual level perhaps should have offered a somewhat more critical analysis of organizational thinking.

The first thing that the reader will observe is that Simon thinks about organizations as systems of decision-making and execution. His working definition of organization highlights this view:

In this book, the term organization refers to the pattern of communications and relations among a group of human beings, including the processes for making and implementing decisions. This pattern provides to organization members much of the information and many of the assumptions, goals, and attitudes that enter into their decisions, and provides also a set of stable and comprehensible expectations as to what the other members of the group are doing and how they will react to what one says and does. (18-19).

What is a scientifically relevant description of an organization? It is a description that, so far as possible, designates for each person in the organization what decisions that person makes, and the influences to which he is subject in making each of these decisions. (43)

The central theme around which the analysis has been developed is that organization behavior is a complex network of decisional processes, all pointed toward their influence upon the behaviors of the operatives — those who do the actual ‘physical’ work of the organization. (305)

The task of decision-making breaks down into the assimilation of relevant facts and values — a distinction that Simon attributes to logical positivism in the original text but makes more general in the commentary. Answering the question, “what should we do?”, requires a clear answer to two kinds of questions: what values are we attempting to achieve? And how does the world work such that interventions will bring about those values?

It is refreshing to see Simon’s skepticism about the “rules of administration” that various generations of organizational theorists have advanced — “specialization,” “unity of command,” “span of control,” and so forth. Simon describes these as proverbs rather than as useful empirical discoveries about effective administration. And he finds the idea of “schools of management theory” to be entirely unhelpful (26). Likewise, he is entirely skeptical about the value of the economic theory of the firm, which abstracts from all of the arrangements among participants that are crucial to the internal processes of the organization in Simon’s view. He recommends an approach to the study of organizations (and the design of organizations) that focuses on the specific arrangements needed to bring factual and value claims into a process of deliberation leading to decision — incorporating the kinds of specialization and control that make sense for a particular set of business and organizational tasks.

An organization has only two fundamental tasks: decision-making and “making things happen”. The decision-making process involves intelligently gathering facts and values and designing a plan. Simon generally approaches this process as a reasonably rational one. He identifies three kinds of limits on rational decision-making:

  • The individual is limited by those skills, habits, and reflexes which are no longer in the realm of the conscious…
  • The individual is limited by his values and those conceptions of purpose which influence him in making his decision…
  • The individual is limited by the extent of his knowledge of things relevant to his job. (46)

And he explicitly regards these points as being part of a theory of administrative rationality:

Perhaps this triangle of limits does not completely bound the area of rationality, and other sides need to be added to the figure. In any case, the enumeration will serve to indicate the kinds of considerations that must go into the construction of valid and noncontradictory principles of administration. (47)

The “making it happen” part is more complicated. This has to do with the problem the executive faces of bringing about the efficient, effective, and loyal performance of assigned tasks by operatives. Simon’s theory essentially comes down to training, loyalty, and authority.

If this is a correct description of the administrative process, then the construction of an efficient administrative organization is a problem in social psychology. It is a task of setting up an operative staff and superimposing on that staff a supervisory staff capable of influencing the operative group toward a pattern of coordinated and effective behavior. (2)

To understand how the behavior of the individual becomes a part of the system of behavior of the organization, it is necessary to study the relation between the personal motivation of the individual and the objectives toward which the activity of the organization is oriented. (13-14) 

Simon refers to three kinds of influence that executives and supervisors can have over “operatives”: formal authority (enforced by the power to hire and fire), organizational loyalty (cultivated through specific means within the organization), and training. Simon holds that a crucial role of administrative leadership is the task of motivating the employees of the organization to carry out the plan efficiently and effectively.

Later he refers to five “mechanisms of organization influence” (112): specialization and division of task; the creation of standard practices; transmission of decisions downwards through authority and influence; channels of communication in all directions; and training and indoctrination. Through these mechanisms the executive seeks to ensure a high level of conformance and efficient performance of tasks.

What about the actors within an organization? How do they behave as individual actors? Simon treats them as “boundedly rational”:

To anyone who has observed organizations, it seems obvious enough that human behavior in them is, if not wholly rational, at least in good part intendedly so. Much behavior in organizations is, or seems to be, task-oriented–and often efficacious in attaining its goals. (88)

But this description leaves out altogether the possibility and likelihood of mixed motives, conflicts of interest, and intra-organizational disagreement. When Simon considers the fact of multiple agents within an organization, he acknowledges that this poses a challenge for rationalistic organizational theory:

Complications are introduced into the picture if more than one individual is involved, for in this case the decisions of the other individuals will be included among the conditions which each individual must consider in reaching his decisions. (80)

This acknowledges the essential feature of organizations — the multiplicity of actors — but fails to treat it with the seriousness it demands. He attempts to resolve the issue by invoking cooperation and the language of strategic rationality: “administrative organizations are systems of cooperative behavior. The members of the organization are expected to orient their behavior with respect to certain goals that are taken as ‘organization objectives'” (81). But this simply presupposes the result we might want to occur, without providing a basis for expecting it to take place.

With the hindsight of half a century, I am inclined to think that Simon attributes too much rationality and hierarchical purpose to organizations.

The rational administrator is concerned with the selection of these effective means. For the construction of an administrative theory it is necessary to examine further the notion of rationality and, in particular, to achieve perfect clarity as to what is meant by “the selection of effective means.” (72)  

These sentences, and many others like them, present the task as one of defining the conditions of rationality of an organization or firm; this takes for granted the notion that the relations of communication, planning, and authority can result in a coherent implementation of a plan of action. His model of an organization involves high-level executives who pull together factual information (making use of specialized experts in this task) and integrate the purposes and goals of the organization (profits, maintaining the health and safety of the public, reducing poverty) into an actionable set of plans to be implemented by subordinates. He refers to a “hierarchy of decisions,” in which higher-level goals are broken down into intermediate-level goals and tasks, with a coherent relationship between intermediate and higher-level goals. “Behavior is purposive in so far as it is guided by general goals or objectives; it is rational in so far as it selects alternatives which are conducive to the achievement of the previously selected goals” (4). And the suggestion is that a well-designed organization succeeds in establishing this kind of coherence of decision and action.

 

It is true that he also asserts that decisions are “composite” —

It should be perfectly apparent that almost no decision made in an organization is the task of a single individual. Even though the final responsibility for taking a particular action rests with some definite person, we shall always find, in studying the manner in which this decision was reached, that its various components can be traced through the formal and informal channels of communication to many individuals … (305)

But even here he fails to consider the possibility that this compositional process may involve systematic dysfunctions that require study. Rather, he seems to presuppose that this composite process itself proceeds logically and coherently. In commenting on a case study by Oswyn Murray (1923) on the design of a post-WWI battleship, he writes: “The point which is so clearly illustrated here is that the planning procedure permits expertise of every kind to be drawn into the decision without any difficulties being imposed by the lines of authority in the organization” (314). This conclusion is strikingly at odds with most accounts of science-military relations during World War II in Britain — for example, the pernicious interference of Frederick Alexander Lindemann with Patrick Blackett over Blackett’s struggles to create an operations-research basis for anti-submarine warfare (Blackett’s War: The Men Who Defeated the Nazi U-Boats and Brought Science to the Art of Warfare). His comments about the processes of review that can be implemented within organizations (314 ff.) are similarly excessively optimistic — contrary to the literature on principal-agent problems in many areas of complex collaboration.

This is surprising, given Simon’s contributions to the theory of imperfect rationality in the case of individual decision-making. Against this confidence, the sources of organizational dysfunction that are now apparent in several literatures on organization make it more difficult to imagine that organizations can have a high success rate in rational decision-making. If we were seeking a Simon-like phrase for organizational thinking to parallel the idea of satisficing, we might come up with the notion of “bounded localistic organizational rationality”: “locally rational, frequently influenced by extraneous forces, incomplete information, incomplete communication across divisions, rarely coherent over the whole organization”.

Simon makes the point emphatically in the opening chapters of the book that administrative science is an incremental and evolving field. And in fact, it seems apparent that his own thinking continued to evolve. There are occasional threads of argument in Simon’s work that seem to point towards a more contingent view of organizational behavior and rationality, along the lines of Fligstein and McAdam’s theories of strategic action fields. For example, when discussing organizational loyalty Simon raises the kind of issue that is central to the strategic action field model of organizations: the conflicts of interest that can arise across units (11). And in the commentary on Chapter I he points forward to the theories of strategic action fields and complex adaptive systems:

The concepts of systems, multiple constituencies, power and politics, and organization culture all flow quite naturally from the concept of organizations as complex interactive structures held together by a balance of the inducements provided to various groups of participants and the contributions received from them. (27)

The book has been a foundational contribution to organizational studies. At the same time, if Herbert Simon were at the beginning of his career and were beginning his study of organizational decision-making today, I suspect he might have taken a different tack. He was plainly committed to empirical study of existing organizations and the mechanisms through which they worked. And he was receptive to the ideas surrounding the notion of imperfect rationality. The current literature on the sources of contention and dysfunction within organizations (Perrow, Fligstein, McAdam, Crozier, …) might well have led him to write a different book altogether, one that gave more attention to the sources of failures of rational decision-making and implementation alongside the occasional examples of organizations that seem to work at a very high level of rationality and effectiveness.

Asian Conference on the Philosophy of the Social Sciences

photo: Tianjin, China

A group of philosophers of social science convened in Tianjin, China, at Nankai University in June to consider some of the ways that the social sciences can move forward in the twenty-first century. This was the Asian Conference on the Philosophy of the Social Sciences, and there were participants from Asia, Europe, Australia, and the United States. (It was timely for Nankai University to host such a meeting, since it is celebrating the centennial of its founding in 1919 this year.) The conference was highly productive for all participants, and it seems to have the potential of contributing to fruitful future thinking about philosophy and the social sciences in Chinese universities as well.

Organized by Francesco Di Iorio and the School of Philosophy at Nankai University, the meeting was a highly productive international gathering of scholars with interests in all aspects of the philosophy of the social sciences. Topics that came in for discussion included the nature of individual agency, the status of “social kinds”, the ways in which organizations “think”, current thinking about methodological individualism, and the status of idealizations in the social sciences, among many other topics. It was apparent that participants from many countries gained insights from their colleagues from other countries and other regions when discussing social science theory and specific social challenges.

Along with many others, I believe that the philosophy of social science has the potential for being a high-impact discipline in philosophy. The contemporary world poses complex, messy problems with huge import for the whole of the global population, and virtually all of those challenges involve difficult situations of social and behavioral interaction (link). Migration, poverty, youth disaffection, the cost of higher education, the importance of rising economic and social inequalities, the rise of extremism, and the creation of vast urban centers like Shanghai and Rio de Janeiro all involve a mix of behavior, technology, and environment that will require the very best social-science research to navigate successfully. And if anyone ever thought that the social sciences were simpler or easier than the natural sciences, the perplexities we currently face of nationalism, racism, and rising inequalities should certainly set that thought to rest for good.

Philosophy can help social scientists gain better theoretical and analytical understanding of the social world in which we live. Philosophers can do this by thinking carefully about the nature of causal relationships in the social world (link); by considering the limitations of social-science inquiry that are inherent in the nature of the social world (link); and by assessing the implications of various discoveries in the logic of collective action for social life (link).

When we undertake large technology projects we make use of the theories and methods of analysis about forces and materials that are provided by the natural sciences. This is what gives us confidence that buildings will stand up to earthquakes and bridges will be able to sustain the stresses associated with traffic and wind. We turn to policy and legislation in an effort to solve social problems. Public policy is the counterpart to technology. However, it is clear that public policy is far less amenable to precise scientific and analytical guidance. Cause and effect relationships are more difficult to discern in the social world, contingency and conjunction are vastly more important, and the ability of social-science theories to measure and predict is substantially more limited than that of the natural sciences. So it is all the more important to have a clear and dynamic understanding of the challenges and resources that confront social scientists as they attempt to understand social processes and behavior.

These kinds of “wicked” social problems occur in every country, but they are especially pressing in Asia at present (link, link). As citizens and academics in Japan, Thailand, China, Russia, Serbia, or France consider their roles in the future of their countries, they will be empowered in their efforts by the best possible thinking about the scope and limits of the various disciplines of the social sciences.

This kind of international meeting organized around topics in the philosophy of the social sciences has the potential of stimulating new thinking and substantial progress in our understanding of society. The fact that philosophers in China, Thailand, Finland, Japan, France, and the United States bring very different national and cultural experiences to their philosophical theories creates the possibility of synergy and the challenging of presuppositions. One such example came up in a discussion with Finnish philosopher Uskali Maki over my use of principal-agent problems as a general source of organizational dysfunction. Maki argued that this claim reflects a specific cultural context, and that this kind of dysfunction is substantially less prevalent in Finnish organizations and government agencies. (Maki also argued that my philosophy of social science over-emphasizes plasticity and change, whereas in his view the fact of social order must be explained.) It was also interesting to consider with a Chinese philosopher whether there are aspects of traditional Chinese philosophy that might shed light on current social processes. Does Mencius provide a different way of thinking about the role and legitimacy of government than the social contract tradition within which European philosophers generally operate (link)?

So along with all the other participants, I would like to offer sincere appreciation to Francesco Di Iorio and his colleagues at the School of Philosophy for the superlative inspiration and coordination they provided for this international conference of philosophers.

Auditing FEMA

Crucial to improving an organization’s performance is being able to obtain honest and detailed assessments of its functioning, in normal times and in emergencies. FEMA has had a troubled reputation for faulty performance since the Katrina disaster in 2005, and its performance in response to subsequent disasters in Louisiana and Puerto Rico, including Hurricane Maria, was also criticized by observers and victims. So how can FEMA get better? The best avenue is careful, honest review of past performance, identifying specific areas of organizational failure and taking steps to improve in these areas.

It is therefore enormously disturbing to read an investigative report in the Washington Post (Lisa Rein and Kimberly Kindy, Washington Post, June 6, 2019; link) documenting that investigations and audits by the Inspector General of the Department of Homeland Security were watered down and sanitized at the direction of the audit bureau’s acting director, John V. Kelly.

Auditors in the Department of Homeland Security inspector general’s office confirmed problems with the Federal Emergency Management Agency’s performance in Louisiana — and in 11 other states hit over five years by hurricanes, mudslides and other disasters. 

But the auditors’ boss, John V. Kelly, instead directed them to produce what they called “feel-good reports” that airbrushed most problems and portrayed emergency responders as heroes overcoming vast challenges, according to interviews and a new internal review. 

Investigators determined that Kelly didn’t just direct his staff to remove negative findings. He potentially compromised their objectivity by praising FEMA’s work ethic to the auditors, telling them they would see “FEMA at her best” and instructing supervisors to emphasize what the agency had done right in its disaster response. (Washington Post, June 6, 2019)

“Feel-good” reports are not what quality improvement requires, and they are not what legislators and other public officials need as they consider the adequacy of some of our most important governmental institutions. It is absolutely crucial for the public and for government oversight that we should be able to rely on the honest, professional, and rigorous work of auditors and investigators without political interference in their findings. These are the mechanisms through which the integrity of regulatory agencies and other crucial governmental agencies is maintained.

Legislators and the public are already concerned about the effectiveness of the Federal Aviation Administration’s oversight of the certification process for the Boeing 737 MAX. The evidence brought forward by the Washington Post concerning interference with the work of the staff of the Inspector General of DHS simply amplifies that concern. The article correctly observes that independent and rigorous oversight is crucial for improving the functioning of government agencies, including DHS and FEMA:

Across the federal government, agencies depend on inspectors general to provide them with independent, fact-driven analysis of their performance, conducting audits and investigations to ensure that taxpayers’ money is spent wisely. 

Emergency management experts said that oversight, particularly from auditors on the ground as a disaster is unfolding, is crucial to improving the response, especially in ensuring that contracts are properly administered. (Washington Post, June 6, 2019)

Honest government simply requires independent and effective oversight processes. Every agency, public and private, has an incentive to conceal perceived areas of poor performance. Hospitals prefer to keep secret outbreaks of infection and other medical misadventures (link), the Department of Interior has shown an extensive pattern of conflict of interest by some of its senior officials (link), and the Pentagon Papers showed how the Department of Defense sought to conceal evidence of military failure in Vietnam (link). The only protection we have from these efforts at concealment, lies, and spin is vigorous governmental review and oversight, embodied by offices like the Inspectors General of various agencies, and an independent and vigorous press able to seek out these kinds of deception.

Regulatory failure and the 737 MAX disasters

The recent crashes of two Boeing 737 MAX aircraft raise questions about the safety certification process through which this modified airframe was certified for use by the FAA. Recent accounts of the design and manufacture of the aircraft demonstrate enormous pressure for speed and for cost reduction. Attention has focused on a software system, MCAS, a feature needed to compensate for the aerodynamic effects of repositioning larger engines on the existing 737 body. The software was designed to intervene automatically to prevent a stall if a single angle-of-attack sensor in the nose indicated that the aircraft was pitched up too steeply. The crash investigations are not complete, but current suspicions are that the pilots in the two aircraft were unable to control or disable the nose-down response of the system in the roughly 40 seconds they had to recover control of the aircraft. (James Fallows provides a good and accessible account of the details of the development of the 737 MAX in a story in the Atlantic; link.)

The question here concerns the regulatory background of the aircraft: was the certification process through which the 737 MAX was certified to fly a sufficiently rigorous and independent one?

Thomas Kaplan details in a New York Times article the division of responsibility that has been created in the certification process over the past several decades between the FAA and the manufacturer (NYT 3/27/19). Under this program, the FAA delegates a substantial part of the work of certification evaluation to the manufacturer and its engineering staff. Kaplan writes:

In theory, delegating much of the day-to-day regulatory work to Boeing allows the FAA to focus its limited resources on the most critical safety work, taps into existing industry technical expertise at a time when airliners are becoming increasingly complex, and allows Boeing in particular to bring out new planes faster at a time of intense global competition with its European rival Airbus.

However, it is apparent to both outsiders and insiders that this creates the possibility of impairing the certification process by placing crucial parts of the evaluation in the hands of experts whose interests and careers lie in the hands of the corporation whose product they are evaluating. This is an inherent conflict of interest for the employee, and it is potentially a critical flaw in the process from the point of view of safety. (See an earlier post on the need for an independent and empowered safety officer within complex and risky processes; link.)

Senator Richard Blumenthal (Connecticut) highlighted this concern when he wrote to the inspector general last week: “The staff responsible for regulating aircraft safety are answerable to the manufacturers who profit from cutting corners, not the American people who may be put at risk.”

A 2011 audit report from the Transportation Department’s inspector general’s office highlighted exactly this kind of issue: “The report cited an instance where FAA engineers were concerned about the ‘integrity’ of an employee acting on the agency’s behalf at an unnamed manufacturer because the employee was ‘advocating a position that directly opposed FAA rules on an aircraft fuel system in favor of the manufacturer’.” The article makes the point that Congress has encouraged this program of delegation in order to constrain budget requirements for the federal agency.

Kaplan notes that there is also a worrisome degree of exchange of executive staff between the FAA and the airline industry, raising the possibility that the industry’s priorities about cost and efficiency may unduly influence the regulatory agency:

The part of the FAA under scrutiny, the Transport Airplane Directorate, was led at the time by an aerospace engineer named Ali Bahrami. The next year, he took a job at the Aerospace Industries Association, a trade group whose members include Boeing. In that position, he urged his former agency to allow manufacturers like Boeing to perform as much of the work of certifying new planes as possible. Mr. Bahrami is now back at the FAA as its top safety official.

This episode illustrates one of the key dysfunctions of organizations that have been highlighted elsewhere here: the workings of conflict of commitment and interest within an organization, and the ability that the executives of an organization have to impose behavior and judgment on their employees that are at odds with the responsibilities these individuals have to other important social goods, including airline safety. The episode has a lot in common with the sequence of events leading to the launch of Space Shuttle Challenger (Vaughan, The Challenger Launch Decision: Risky Technology, Culture, and Deviance at NASA).

Charles Perrow has studied system failure extensively, from his important book Normal Accidents: Living with High-Risk Technologies through his 2011 book The Next Catastrophe: Reducing Our Vulnerabilities to Natural, Industrial, and Terrorist Disasters. In a 2015 article, “Cracks in the ‘regulatory state’” (link), he summarizes some of his concerns about the effectiveness of the regulatory enterprise. The abstract of the article shows its relevance to the current case:

Over the last 30 years, the U.S. state has retreated from its regulatory responsibility over private-sector economic activities. Over the same period, a celebratory literature, mostly in political science, has developed, characterizing the current period as the rise of the regulatory state or regulatory capitalism. The notion of regulation in this literature, however, is a perverse one—one in which regulators mostly advise rather than direct, and industry and firm self-regulation is the norm. As a result, new and potentially dangerous technologies such as fracking or mortgage backed derivatives are left unregulated, and older necessary regulations such as prohibitions are weakened. This article provides a joint criticism of the celebratory literature and the deregulation reality, and strongly advocates for a new sociology of regulation that both recognizes and documents these failures. (203)

The 2015 article highlights some of the precise sources of failure that seem to be evident in the 737 MAX case. “Government assumes a coordinating rather than a directive role, in this account, as regulators draw upon industry best practices, its standard-setting proclamations, and encourage self-monitoring” (203). This is precisely what current reporting demonstrates about the FAA relationship to the manufacturers.

One of the key flaws of self-monitoring is the lack of truly independent inspectors:

Part of the problem stems from the failure of firms to select truly independent inspectors. Firms can, in fact, select their own inspectors—for example, firemen or police from the local areas who are quite conscious of the economic power of the local chemical firm they are to inspect. (205)

Here again, the Boeing 737 MAX certification story seems to illustrate this defect. How serious are these “cracked regulatory institutions”? According to Perrow they are deadly serious. Here is Perrow’s summary assessment about the relationship between regulatory failure and catastrophe:

Almost every major industrial accident in recent times has involved either regulatory failure or the deregulation demanded by business and industry. For more examples, see Perrow (2011). It is hard to make the case that the industries involved have failed to innovate because of federal regulation; in particular, I know of no innovations in the safety area that were stifled by regulation. Instead, we have a deregulated state and deregulated capitalism, and rising environmental problems accompanied by growing income and wealth inequality. (210)

In short, we seem to be at the beginning of an important revelation of the costs of neoliberal efforts to minimize regulation and to shift much of the responsibility for safety onto the manufacturers.

(Baldwin, Cave, and Lodge provide a good introduction to current thinking about government regulation in Understanding Regulation: Theory, Strategy, and Practice, 2nd Edition. Their Oxford Handbook of Regulation also provides excellent resources on this topic.)

Philosophy of technology?

Is there such a thing as “philosophy of technology”? Is there a “philosophy of cooking” or a “philosophy of architecture”? All of these are practical activities – praxis – with large bodies of specialized knowledge and skill involved in their performance. But where does philosophy come in?

Most of us trained in analytic philosophy think of a philosophical topic as one that can be formulated in terms of a small number of familiar questions: what are the nature and limitations of knowledge in this area? What ethical or normative problems does this area raise? What kinds of conceptual issues need to be addressed before we can discuss problems in this area clearly and intelligently? Are there metaphysical issues raised by this area — special kinds of things that need special philosophical attention? Does “technology” support this kind of analytical approach?
We might choose to pursue a philosophy of technology in an especially minimalist (and somewhat Aristotelian) way, along these lines:

  • Human beings have needs and desires that require material objects for their satisfaction. 
  • Human beings engage in practical activity to satisfy their needs and desires.
  • Intelligent beings often seek to utilize and modify their environments so as to satisfy their needs and desires. 
  • Physical bodies are capable of rudimentary environment modification, which may permit adequate satisfaction of needs and desires in propitious environments (dolphins).
  • Intelligent beings often seek to develop “tools” to extend the powers of their bodies to engage in environment modification.
  • The use of tools produces benefits and harms for self and others, which raises ethical issues.

Now we can introduce the idea of the accumulation of knowledge (“science”):

  • Human beings have the capacity to learn how the world around them works, and they can learn the causal properties of materials and natural entities. 
  • Knowledge of causal properties permits intelligent intervention in the world.
  • Gaining scientific knowledge of the world creates the possibility of the invention of knowledge-based artifacts (instruments, tools, weapons).

And history suggests we need to add a few Hobbesian premises:

  • Human beings often find themselves in conflict with other agents for resources supporting the satisfaction of their needs and desires.
  • Intelligent beings seek to develop tools (weapons) to extend the powers of their bodies to engage in successful conflict with other agents.

Finally, history seems to make it clear that tools, machines, and weapons are not purely individual products; rather, social circumstances and social conflict influence the development of the specific kinds of tools, machines, and weapons that are created in a particular historical setting.

The idea of technology can now be fitted into the premises identified here. Technology is the sum of a set of tools, machines, and practical skills available at a given time in a given culture through which needs and interests are satisfied and the dialectic of power and conflict furthered.
This treatment suggests several leading questions for a philosophy of technology:

  1. How does technology relate to human nature and human needs?
  2. How does technology relate to intelligence and creativity?
  3. How does technology relate to scientific knowledge?
  4. How does technology fit into the logic of warfare?
  5. How does technology fit into the dialectic of social control among groups?
  6. How does technology relate to the social, historical, and cultural environment?
  7. Is the process of technology change determined by the technical characteristics of the technology?
  8. How does technology relate to issues of justice and morality?

Here are a few important contributions to several of these topics.

Lynn White’s Medieval Technology and Social Change illustrates almost all elements of this configuration. His classic book begins with the dynamics of medieval warfare (the impact of the development of the stirrup on mounted combat); proceeds to food production (the development and social impact of the heavy iron plough); and closes with medieval machines.

Charles Sabel’s treatment of industrialization and the creation of powered machinery in Work and Politics: The Division of Labour in Industry addresses topic 5; Sabel demonstrates that industrialization, and the specific character of the mechanization that ensued, was a process substantially guided by conflicts of interest between workers and owners, with owners selecting technologies that reduced workers’ powers of resistance. Sabel and Zeitlin make this argument in greater detail in World of Possibilities: Flexibility and Mass Production in Western Industrialization. One of their most basic arguments is that firms are strategic and adaptive as they deal with a current set of business challenges. Rather than an inevitable logic of new technologies and their organizational needs, we see a highly adaptive and selective process in which firms consider a range of possible changes on the horizon and a set of strategic adaptations that might be selected, and they frequently hedge their bets by investing in both the old and the new technology. “Economic agents, we found again and again in the course of the seminar’s work, do not maximize so much as they strategize” (5). (Here is a more extensive discussion of Sabel and Zeitlin; link.)

The logic underlying the idea of technological inevitability (topic 7) goes something like this: a new technology creates a set of reasonably accessible new possibilities for achieving new forms of value: new products, more productive farming techniques, or new ways of satisfying common human needs. Once the technology exists, agents or organizations in society will recognize those new opportunities and will attempt to take advantage of them by investing in the technology and developing it more fully. Some of these attempts will fail, but others will succeed. So over time, the inherent potential of the technology will be realized; the technology will be fully exploited and utilized. And, often enough, the technology will both require and force a new set of social institutions to permit its full utilization; here again, agents will recognize opportunities for gain in the creation of social innovations, and will work towards implementing these social changes.

This view of history doesn’t stand up to scrutiny, however. There are many examples of technologies that failed to come to full development (the water mill in the ancient world, the Betamax in the contemporary world). There is no single path of development inevitably imposed by the underlying scientific realities of a technology, and there are numerous illustrations of a more complex back-and-forth between social conditions and the development of a technology. So technological determinism is not a credible historical theory.
Thomas Hughes addresses topic 6 in his book Human-Built World: How to Think about Technology and Culture.

Here Hughes considers how technology has affected our cultures in the past two centuries. The twentieth-century city, for example, could not have existed without the inventions of electricity, steel buildings, elevators, railroads, and modern waste-treatment technologies. So technology “created” the modern city. But it is also clear that life in the twentieth-century city was transformative for the several generations of rural people who migrated there. And the literature, art, values, and social consciousness of people in the twentieth century have surely been affected by these new technology systems. Each part of this complex story involves processes that are highly contingent and highly intertwined with social, economic, and political relationships. And the ultimate shape of the technology is the result of decisions and pressures exerted throughout the web of relationships within which it developed. But here is an important point: there is no moment in this story where it is possible to put “technology” on one side and “social context” on the other. Instead, the technology and the society develop together.

Peter Galison’s treatment of the simultaneous discovery of the relativity of time measurement by Einstein and Poincaré in Einstein’s Clocks and Poincaré’s Maps: Empires of Time provides a valuable set of insights into topic 3. Galison shows that Einstein’s thinking was very much influenced by practical issues in the measurement of time by mechanical devices. This has an interesting corollary: the scientific imagination is sometimes stimulated by technology issues, just as technology solutions are created through imaginative use of new scientific theories.

Topic 8 has produced an entire field of research of its own. The morality of the use of autonomous drones in warfare; the ethical issues raised by CRISPR technology in human embryos; the issues of justice and opportunity created by the digital divide between affluent people and poor people; privacy issues created by ubiquitous facial recognition technology — all these topics raise important moral and social-justice issues. Here is an interesting thought piece by Michael Lynch in the Guardian on the topic of digital privacy (link). Lynch is the author of The Internet of Us: Knowing More and Understanding Less in the Age of Big Data.

So, yes, there is such a thing as the philosophy of technology. But to be a vibrant and intellectually creative field, it needs to be cross-disciplinary, and as interested in the social and historical context of technology as it is in the conceptual and normative issues raised by the field.

Conflicts of interest

The possibility or likelihood of conflict of interest is present in virtually all professions and occupations. We expect a researcher, a physician, or a legislator to perform her work according to the highest values and norms of the profession (searching for objective knowledge, providing the best possible care for the patient, drafting and supporting legislation that enhances the public good). But there is always the possibility that the individual has private financial interests that distort or bias her work, and there may be large companies with a financial interest in one set of actions rather than another.

Marc Rodwin’s Conflicts of Interest and the Future of Medicine: The United States, France, and Japan is a rigorous and fair treatment of this issue with respect to conflicts of interest in the field of medicine. Rodwin has published extensively on this topic, and the current book is an important exploration of how professional ethics, individual interest, and business and professional institutions intersect to influence practitioner behavior in this field. The institutional actors in this story include the pharmaceutical companies and medical device manufacturers, insurers, hospitals and physician partnerships, and legislators and regulators. Rodwin shows in detail how differences in insurance policies, physician reimbursement policies, and gifts and benefits from health-related businesses to physicians contribute to an institutional environment where the physician’s choices are all too easily influenced by considerations other than the best health outcomes of the patient. Rodwin finds that the institutional setting for health economics is different in the US, France, and Japan, and that these differences lead to differences in physician behavior.

Here is Rodwin’s clear statement of the material situation that creates the possibility or likelihood of conflicts of interest in medicine.

Physicians earn their living through their medical work and so may practice in ways that enhance their income rather than the interests of patients. Moreover, when physicians prescribe drugs, devices, and treatments and choose who supplies these or refer patients to other providers, they affect the fortunes of third parties. As a result, providers, suppliers, and insurers try to influence physicians’ clinical decisions for their own benefit. Thus, at the core of doctoring lies tension between self-interest and faithful service to patients and the public. The prevailing powerful medical ethos does influence physicians. Still, there is conflict between professional ethics and financial incentives. (kl 251)

Jerome Kassirer, a former editor-in-chief of the New England Journal of Medicine and an expert observer of the field, provided a foreword to the book. Kassirer describes the current situation in the medical economy in these terms, drawing on his own synthesis of recent research and journalism:

Professionalism had been steadily eroded by complex financial ties between practicing physicians and academic physicians on the one hand and the pharmaceutical, medical device, and biotechnology industries on the other. These financial ties were deep and wide: they threatened to bias the clinical research on which physicians relied to care for the sick, and they permeated nearly every aspect of medical care. Physicians were accepting gifts, taking free trips, serving on companies’ speakers’ bureaus, signing their names to articles written for them by industry-paid ghostwriters, and engaging in research that endangered patient care. (kl 73)

The fundamental problem posed by Rodwin’s book is this set of questions:

In what context can physicians be trusted to act in their patients’ interests? How can medical practice be organized to minimize physicians’ conflicts of interest? How can society promote what is best in medical professionalism? What roles should physicians and organized medicine play in the medical economy? What roles should insurers, the state, and markets play in medical care? (kl 267)

The book sheds light on dozens of institutional arrangements that create the likelihood of conflicted choices, or that reduce that likelihood. One of those arrangements is whether the physicians in a non-profit hospital are employed on a fixed salary or paid on a fee-for-service basis. The latter system gives the physician a very different set of financial interests, including the possibility of making clinical choices that increase revenues to the physician or his or her group practice.

Consider physicians employed as public servants in public hospitals. Typically, they receive a fixed salary set by rank, enjoy tenure, and have clinical discretion. As a result, they lack financial incentives that bias their choices and have clinical freedom. Such arrangements preclude employment conflicts of interest. But relax some of these conditions and employers can compromise medical practice…. Furthermore, employers can manage physicians to promote the organization’s goals. As a result, employed physicians might practice in ways that promote their employer’s over their patients’ interests. (kl 445)

And the disadvantages for the patient of the self-employed physician are also important:

Payment can encourage physicians to supply more, less, or different kinds of services, or to refer to particular providers. Each form of payment has some bias, but some compromise clinical decisions more than others do. (kl 445) 

Plainly, the circumstances and economic institutions described here are relevant to many other occupations as well. Scientists, policymakers, regulators, professors, and accountants all face similar circumstances — though the financial stakes in medicine are particularly high. (Here is an earlier post on corporate efforts to influence scientific research; link.)

This field of research makes an important contribution to a particularly challenging topic in contemporary healthcare. But Rodwin’s study also provides an important contribution to the new institutionalism, since it serves as a micro-level case study of the differences in behavior created by differences in institutional rules and practices.

Each country’s laws, insurance, and medical institutions shape medical practice; and within each country, different forms of practice affect clinical choices. (kl 218)

This feature of the book allows it to contribute to the kinds of arguments on the causal and historical importance of specific configurations of institutions offered by Kathleen Thelen (link) and Frank Dobbin (link).

The Morandi Bridge collapse and regulatory capture

A recurring topic in Understanding Society is the question of the organizational causes that lie in the background of major accidents and technological disasters. One such disaster is the catastrophic collapse of the Morandi Bridge in Genoa in August 2018, which resulted in the deaths of 43 people. Was this a technological failure, a design failure, or, more importantly, a failure in which private and public organizational features led to the disaster?

A major story in the New York Times on March 5, 2019 (link) makes it clear that social and organizational causes were central to this horrendous failure. (What could be more terrifying than having the highway bridge under your vehicle collapse to the earth 150 feet beneath you?) In this case it is evident from the Times coverage that a major cause of the disaster was the relationship between Autostrade per l’Italia, the private company that manages the bridge and derives enormous profit from it, and the government ministries responsible for regulating and supervising the safe operation of highways and bridges.

In a sign of the arrogance of wealth and power involved in the relationship, the Benetton family threatened a multimillion-dollar lawsuit against the economist Marco Ponti, who had served on an expert panel advising the government and had made strong statements about the one-sided relationship that existed. The threat was not acted upon, but the abuse of power is clear.

This appears to be a textbook case of “regulatory capture”, a situation in which the private owners of a risky enterprise or activity use their economic power to influence or intimidate the government regulatory agencies that nominally oversee their activities. “Autostrade reaped huge profits and acquired so much power that the state became a largely passive regulator” (NYT March 5, 2019). Moreover, independent governmental oversight was crippled by the fact that “the company effectively regulated itself– because Autostrade’s parent company owned the inspection company responsible for safety checks on the Morandi Bridge” (NYT). The Times quotes Carlo Scarpa, an economics professor at the University of Brescia:

Any investor would have been worried about bidding. The Benettons, though, knew the system and they understood that the Ministry of Infrastructure and Transport, which was supposed to supervise the whole thing, was weak. They were able to calculate the weight the company would have in the political arena. (NYT March 5, 2019)

And this seems to have worked out as the family expected:

Autostrade became a political powerhouse, acquiring clout that the Ministry of Infrastructure and Transport, perpetually underfunded and employing a small fraction of the staff, could not match. (NYT March 5, 2019)

The story notes that the private company made a great deal of money from this contract, but that the state also benefited financially. “Autostrade has poured billions of euros into state coffers, paying nearly 600 million euros a year in corporate taxes, V.A.T. and license fees.”

The story also surfaces other social factors that played a role in the disaster, including opposition by Genoa residents to the construction involved in creating a potential bypass to the bridge.

Here is what the Times story has to say about the inspections that occurred:

Beyond fixing blame for the bridge collapse, a central question of the Morandi tragedy is what happened to safety inspections. The answer is that the inspectors worked for Autostrade more than for the state. For decades, Spea Engineering, a Milan-based company, has performed inspections on the bridge. If nominally independent, Spea is owned by Autostrade’s parent company, Atlantia, and Autostrade is also Spea’s largest customer. Spea’s offices in Rome and elsewhere are housed inside Autostrade. One former bridge design engineer for Spea, Giulio Rambelli, described Autostrade’s control over Spea as “absolute.” (NYT March 5, 2019)

The story notes that this relationship raises the possibility of conflicts of interest that are prohibited in other countries. The story quotes Professor Giuliano Fonderico: “All this suggests a system failure.”

The failure appears to be first and foremost a failure of the state to fulfill its obligations of regulation and oversight of dangerous activities. By ceding any real and effective system of safety inspection to the business firms that benefit from the operation of the bridge, the state essentially gave up its responsibility for ensuring the safety of the public.

It is also worth underlining the point made in the article about the huge mismatch between the capacities of the business firms in question and the agencies nominally charged with regulating and overseeing them. This is a system failure at a still higher level, since it highlights the power imbalance that almost always exists between large corporate wealth and the government agencies charged with overseeing their activities.

Here is an editorial from the Guardian that makes some similar points; link. There don’t appear to be book-length treatments of the Morandi Bridge disaster available in English. Here is an Italian book on the subject by Eugenio Ceroni and Luca Cozzi, Ponte Morandi – Autopsia di una strage: I motivi tecnici, le colpe, gli errori. Quel che si poteva fare e non si è fatto (Italian Edition), which appears to be a technical civil-engineering analysis of the collapse. The Kindle translate option using Bing helps non-Italian readers get the thrust of this short book. In the engineering analysis, inadequate inspection and incomplete maintenance remediation are key factors in the collapse.

The research university

Where do new ideas, new technologies, and new ways of thinking about the world come from in a modern society? Since World War II the answer to this question has largely been found in research universities. Research universities are doctoral institutions that employ professors who are advanced academic experts in a variety of fields and that expend significant amounts of external funds in support of ongoing research. Given the importance of innovation and new ideas in the knowledge economy of the twenty-first century, it is very important to understand the dynamics of research universities, and to understand factors that make them more or less productive in achieving new knowledge. And, crucially, we need to understand how public policy can enhance the effectiveness of the university research enterprise for the benefit of the whole of society.

Jason Owen-Smith’s recent Research Universities and the Public Good: Discovery for an Uncertain Future is a very welcome and insightful contribution to better understanding this topic. Owen-Smith is a sociology professor at the University of Michigan (itself a major research university with over 1.5 billion dollars in annual research funding), and he brings to his task some of the most insightful ideas currently transforming the field of organizational studies.

Owen-Smith analyzes research universities (RUs) in terms of three fundamental ideas. RUs serve as source, anchor, and hub for the generation of innovations and new ideas in a vast range of fields, from the humanities to basic science to engineering and medicine. And he believes that this triple function makes research universities virtually unique among American (or global) knowledge-producing organizations, including corporate and government laboratories (33).

The idea of the university as a source is fairly obvious: universities create and disseminate new knowledge in a very wide range of fields. Sometimes that knowledge is of interest to a hundred people worldwide; and sometimes it results in the creation of genuinely transformative technologies and methods. The idea of the university as “anchor” refers largely to the stability that research universities offer the knowledge enterprise. Another aspect of the idea of the university as an anchor is the fact that it helps to create a public infrastructure that encourages other kinds of innovation in the region that it serves — much as an anchor tenant helps to bring potential customers to smaller stores in a shopping mall. Unlike other knowledge-centered organizations such as private research labs or federal laboratories, universities have a diverse portfolio of activity that confers a very high level of stability over time. This is a large asset for the country as a whole. It is also frequently an asset for the city or region in which it is located.

The idea of the university as a hub is perhaps the most innovative perspective offered here. The idea of a hub is a network concept. A hub is a node that links individuals and centers to each other in ways that transcend local organizational charts. And the power of a hub, and the networks that it joins, is that it facilitates the exchange of information and ideas and creates the possibility of new forms of cooperation and collaboration. Here the idea is that a research university is a place where researchers form working relationships, both on campus and in national networks of affiliation. And the density and configuration of these relationships serve to facilitate communication and diffusion of new ideas and approaches to a given problem, with the result that progress is more rapid. O-S makes use of Peter Galison’s treatment of the simultaneous discovery of the relativity of time measurement by Einstein and Poincaré in Einstein’s Clocks and Poincaré’s Maps: Empires of Time.  Galison shows that Einstein and Poincaré were both involved in extensive intellectual networks that were quite relevant to their discoveries; but that their innovations had substantially different effects because of differences in those networks. Owen-Smith believes that these differences are very relevant in the workings of modern RUs in the United States as well. (See also Galison’s Image and Logic: A Material Culture of Microphysics.)

Radical discoveries like the theory of special relativity are exceptionally rare, but the conditions that gave rise to them should also enable less radical insights. Imagining universities as organizational scaffolds for complex collaboration networks and focal points where flows of ideas, people, and problems come together offers a systematic way to assess the potential for innovation and novelty as well as for multiple discoveries. (p. 15)
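
To make the network sense of “hub” a bit more concrete, here is a minimal illustrative sketch (my own, not Owen-Smith’s, with entirely hypothetical node names) of how hub-like positions can be identified in a small collaboration graph by computing betweenness centrality with Python’s networkx library:

```python
# Illustrative only: a toy collaboration network with hypothetical labs and centers.
# Betweenness centrality measures how often a node lies on shortest paths between
# other nodes; a high score marks the kind of bridging "hub" position described above.
import networkx as nx

G = nx.Graph()
G.add_edges_from([
    ("lab_A", "lab_B"), ("lab_A", "lab_C"), ("lab_B", "lab_C"),   # one research cluster
    ("center_X", "center_Y"), ("center_X", "center_Z"),           # a second cluster
    ("lab_A", "university_hub"), ("center_X", "university_hub"),  # the hub bridges the two
])

betweenness = nx.betweenness_centrality(G)
for node, score in sorted(betweenness.items(), key=lambda kv: -kv[1]):
    print(f"{node}: {score:.2f}")

# "university_hub" receives the highest score: it lies on every shortest path between
# the two clusters, which is the structural sense in which a hub facilitates the
# exchange of ideas across otherwise separate groups.
```

Nothing here depends on the particular library or measure; the point is simply that “hub” names a measurable structural position in a network of working relationships, not just a metaphor.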

Treating a complex and interdependent social process that occurs across relatively long time scales as if it had certain needs, short time frames, and clear returns is not just incorrect, it’s destructive. The kinds of simple rules I suggested earlier represent what organizational theorist James March called “superstitious learning.” They were akin to arguing that because many successful Silicon Valley firms were founded in garages, economic growth is a simple matter of building more garages. (25)

Rather, O-S demonstrates that, in the case of the key discoveries that led to the establishment of Google, the pathway was long, complex, and heavily dependent on social networks of scientists, funders, entrepreneurs, graduate students, and federal agencies.

A key observation in O-S’s narrative at numerous points is the futility — perhaps even harmfulness — of attempting to harness university research to specific, quantifiable economic or political goals. The idea of selecting university research and teaching programs on the basis of their ROI relative to economic goals is, according to O-S, futile. The extended example he offers of the research that led to the establishment of Google as a company and a search engine illustrates this point very compellingly: much of the foundational research that made the search algorithms possible looked entirely non-pragmatic and non-utilitarian at the time it was funded (chapter 1). (The development of the smart phone has a similar history; 63.) Philosophy, art history, and social theory can be as important to the overall success of the research enterprise as more intentionally directed areas of research (electrical engineering, genetic research, autonomous vehicle design). His discussion of Wisconsin Governor Scott Walker’s effort to revise the mission statement of the University of Wisconsin is exemplary (45 ff.).

Contra Governor Walker, the value of the university is found not in its ability to respond to immediate needs but in an expectation that joining systematic inquiry and education will result in people and ideas that reach beyond local, sometimes parochial, concerns. (46-47)

Also interesting is O-S’s discussion of the functionality of the extreme decentralization that is typical of most large research universities. In general O-S regards this decentralization as a positive thing, leading to greater independence for researchers and research teams and permitting higher levels of innovation and productive collaboration. In fact, O-S appears to believe that decentralization is a critical factor in the success of the research university as source, anchor, and hub in the creation of new knowledge.

The competition and collaboration enabled by decentralized organization, the pluralism and tension created when missions and fields collide, and the complex networks that emerge from knowledge work make universities sources by enabling them to produce new things on an ongoing basis. Their institutional and physical stability prevents them from succumbing to either internal strife or the kinds of ‘creative destruction’ that economist Joseph Schumpeter took to be a fundamental result of innovation under capitalism. (61)

O-S’s discussion of the micro-processes of discovery is particularly interesting (chapter 3). He makes a sustained attempt to dissect the interactive, networked ways in which multiple problems, methods, and perspectives occasionally come together to solve an important problem or develop a novel idea or technology. In O-S’s telling of the story, the existence of intellectual and scientific networks is crucial to the fecundity of these processes in and around research universities.

This is an important book and one that merits close reading. Nothing could be more critical to our future than the steady discovery of new ideas and solutions. Research universities have shown themselves to be uniquely powerful engines for the discovery and dissemination of new knowledge. But the rapid decline of public appreciation of universities presents a serious risk to the continued vitality of the university-based knowledge sector. The most important contribution O-S has made here, in my reading, is the detailed work he has done to lay out the “micro-processes” of the research university — the collaborations, the networks, the unexpected contiguities of problems, and the high level of decentralization that American research universities embody. As O-S documents, these processes are difficult to present to the public in a compelling way, and the vitality of the research university itself is vulnerable to destructive interference in the current political environment. Providing a clear, well-documented account of how research universities work is a major and valuable contribution.
