Testing the NRC

Serious nuclear accidents are rare but potentially devastating to people, land, and agriculture. (It appears that minor to moderate nuclear accidents are not nearly so rare, as James Mahaffey shows in Atomic Accidents: A History of Nuclear Meltdowns and Disasters: From the Ozark Mountains to Fukushima.) Three Mile Island, Chernobyl, and Fukushima are disasters that have given the public a better idea of how nuclear power reactors can go wrong, with serious and long-lasting effects. Reactors are also among the most complex industrial systems around, and accidents are common in complex, tightly coupled industrial systems. So how can we have reasonable confidence in the safety of nuclear reactors?

One possible answer is that we cannot have reasonable confidence at all. However, there are hundreds of large nuclear reactors in the world, and 98 active nuclear reactors in the United States alone. So it is critical to have highly effective safety regulation and oversight of the nuclear power industry. In the United States that regulatory authority rests with the Nuclear Regulatory Commission. So we need to ask the question: how good is the NRC at regulating, inspecting, and overseeing the safety of nuclear reactors in our country?

One would suppose that there would be excellent and detailed studies within the public administration literature that attempt to answer this question, and we might expect researchers in the field of science and technology studies to have addressed it as well. However, this seems not to be the case. I have yet to find a full-length study of the NRC as a regulatory agency, and the NRC is mentioned only twice in the 600-plus page Oxford Handbook of Regulation. We can, however, get an oblique view of the workings of the NRC through other sources. One set of observers in a position to evaluate the strengths and weaknesses of the NRC are nuclear experts who are independent of the nuclear industry. For example, publications from the Bulletin of the Atomic Scientists include many detailed reports on the operations and malfunctions of nuclear power plants that permit a degree of assessment of the quality of oversight provided by the NRC (link). And a detailed (and scathing) report by the General Accounting Office on the near-disaster at the Davis-Besse nuclear power plant is another expert assessment of NRC functioning (link).

David Lochbaum, Edwin Lyman, and Susan Stranahan fit the description of highly qualified independent scientists and observers, and their detailed case history of the Fukushima disaster provides a degree of insight into the workings of the NRC as well as the Japanese nuclear safety agency. Their book, Fukushima: The Story of a Nuclear Disaster, was written jointly under the auspices of the Union of Concerned Scientists, one of the best-informed networks of nuclear experts in the United States. Lochbaum is director of the UCS Nuclear Safety Project and author of Nuclear Waste Disposal Crisis. The book provides a careful and scientific treatment of the unfolding of the Fukushima disaster hour by hour, and highlights the background errors made by regulators and owners in the design and operation of the Fukushima plant. It makes numerous comparisons to the current workings of the NRC, which permit a degree of assessment of the US regulatory agency.

In brief, Lochbaum and his co-authors appear to have a reasonably high opinion of the technical staff, scientists, and advisors who prepare recommendations for NRC consideration, but a low opinion of the willingness of the five commissioners to adopt costly recommendations that are strongly opposed by the nuclear industry. The authors express frustration that the nuclear safety agencies in both countries appear to have failed to learn important lessons from the Fukushima disaster:

“The [Japanese] government simply seems in denial about the very real potential for another catastrophic accident…. In the United States, the NRC has also continued operating in denial mode. It turned down a petition requesting that it expand emergency evacuation planning to twenty-five miles from nuclear reactors despite the evidence at Fukushima that dangerous levels of radiation can extend at least that far if a meltdown occurs. It decided to do nothing about the risk of fire at over-stuffed spent fuel pools. And it rejected the main recommendation of its own Near-Term Task Force to revise its regulatory framework. The NRC and the industry instead are relying on the flawed FLEX program as a panacea for any and all safety vulnerabilities that go beyond the “design basis.” (kl 117)

They believe that the NRC is excessively vulnerable to influence by the nuclear power industry and to elected officials who favor economic growth over hypothetical safety concerns, with the result that it tends to err in favor of the economic interests of the industry.

Like many regulatory agencies, the NRC occupies uneasy ground between the need to guard public safety and the pressure from the industry it regulates to get off its back. When push comes to shove in that balancing act, the nuclear industry knows it can count on a sympathetic hearing in Congress; with millions of customers, the nation’s nuclear utilities are an influential lobbying group. (36)

They note that the NRC has consistently declined to undertake more substantial reform of its approach to safety, as recommended by its own panel of experts. The key recommendation of the Near-Term Task Force (NTTF) was that the regulatory framework should be anchored in a more strenuous standard of accident prevention, requiring plant owners to address “beyond-design-basis accidents”. The Fukushima earthquake and tsunami events were “beyond-design-basis”; nonetheless, they occurred, and the NTTF recommended that safety planning should incorporate consideration of these unlikely but possible events.

The task force members believed that once the first proposal was implemented, establishing a well-defined framework for decision making, their other recommendations would fall neatly into place. Absent that implementation, each recommendation would become bogged down as equipment quality specifications, maintenance requirements, and training protocols got hashed out on a case-by-case basis. But when the majority of the commissioners directed the staff in 2011 to postpone addressing the first recommendation and focus on the remaining recommendations, the game was lost even before the opening kickoff. The NTTF’s Recommendation 1 was akin to the severe accident rulemaking effort scuttled nearly three decades earlier, when the NRC considered expanding the scope of its regulations to address beyond-design accidents. Then, as now, the perceived need for regulatory “discipline,” as well as industry opposition to an expansion of the NRC’s enforcement powers, limited the scope of reform. The commission seemed to be ignoring a major lesson of Fukushima Daiichi: namely, that the “fighting the last war” approach taken after Three Mile Island was simply not good enough. (kl 253)

As a result, “regulatory discipline” (essentially the pro-business ideology that holds that regulation should be kept to a minimum) prevailed, and the primary recommendation was tabled. The issue was of great importance, in that it involved setting the standard of risk and accident severity for which the owner needed to plan. By staying with the lower standard, the NRC left the door open to the most severe kinds of accidents.

The NTTF also addressed the issue of “delegated regulation,” in which the agency defers to the industry on many issues of certification and risk assessment. (Here is the FAA’s definition of delegated regulation; link.)

The task force also wanted the NRC to reduce its reliance on industry voluntary initiatives, which were largely outside of regulatory control, and instead develop its own “strong program for dealing with the unexpected, including severe accidents.” (252)

Other, more detail-oriented recommendations were rejected as well: for example, a requirement to install reliable hardened containment vents in boiling water reactors, incorporating filters to remove radioactive gas before venting.

But what might seem a simple, logical decision—install a $15 million filter to reduce the chance of tens of billions of dollars’ worth of land contamination as well as harm to the public—got complicated. The nuclear industry launched a campaign to persuade the NRC commissioners that filters weren’t necessary. A key part of the industry’s argument was that plant owners could reduce radioactive releases more effectively by using FLEX equipment…. In March 2013, they voted 3–2 to delay a requirement that filters be installed, and recommended that the staff consider other alternatives to prevent the release of radiation during an accident. (254)

The NRC voted against requiring filters on containment vents, a decision based on industry arguments that the filters were unnecessary and their cost excessive.

The authors argue that the NRC needs to significantly rethink its standards of safety and foreseeable risk.

What is needed is a new, commonsense approach to safety, one that realistically weighs risks and counterbalances them with proven, not theoretical, safety requirements. The NRC must protect against severe accidents, not merely pretend they cannot occur. (257)

Their recommendation is to make use of an existing and rigorous plan for reactor safety incorporating the results of “severe accident mitigation alternatives” (SAMA) analysis already performed — but largely disregarded.

However, they are not optimistic that the NRC will be willing to undertake these substantial changes that would significantly enhance safety and make a Fukushima-scale disaster less likely. Reporting on a post-Fukushima conference sponsored by the NRC, they write:

But by now it was apparent that little sentiment existed within the NRC for major changes, including those urged by the commission’s own Near-Term Task Force to expand the realm of “adequate protection.”

Lochbaum and his co-authors also make an intriguing series of points about the use of modeling and simulation in the effort to evaluate safety in nuclear plants. They agree that simulation methods are an essential part of the toolkit for nuclear engineers seeking to evaluate accident scenarios; but they argue that the simulation tools currently available (or perhaps ever available) fall far short of the precision sometimes attributed to them. So simulation tools sometimes give a false sense of confidence in the existing safety arrangements in a particular setting.

Even so, the computer simulations could not reproduce numerous important aspects of the accidents. And in many cases, different computer codes gave different results. Sometimes the same code gave different results depending on who was using it. The inability of these state-of-the-art modeling codes to explain even some of the basic elements of the accident revealed their inherent weaknesses—and the hazards of putting too much faith in them. (263)

In addition to specific observations about the functioning of the NRC, the authors identify chronic failures in the Japanese nuclear power system that should be of concern in the United States as well. Conflict of interest, falsification of records, and punishment of whistleblowers were part of the culture of nuclear power and nuclear regulation in Japan, and these problems can arise in the United States too. Here are examples of the problems they identify in the Japanese system; it is a valuable exercise to ask whether the same issues arise in the US regulatory environment.

Non-compliance and falsification of records in Japan

Headlines scattered over the decades built a disturbing picture. Reactor owners falsified reports. Regulators failed to scrutinize safety claims. Nuclear boosters dominated safety panels. Rules were buried for years in endless committee reviews. “Independent” experts were financially beholden to the nuclear industry for jobs or research funding. “Public” meetings were padded with industry shills posing as ordinary citizens. Between 2005 and 2009, as local officials sponsored a series of meetings to gauge constituents’ views on nuclear power development in their communities, NISA encouraged the operators of five nuclear plants to send employees to the sessions, posing as members of the public, to sing the praises of nuclear technology. (46)

The authors do not provide evidence about similar practices in the United States, though the history of the Davis-Besse nuclear plant in Ohio suggests that similar things happen in the US industry. Charles Perrow treats the Davis-Besse near-disaster in a fair amount of detail; link. Descriptions of the Davis-Besse nuclear incident can be found here, here, here, and here.

Conflict of interest

Shortly after the Fukushima accident, Japan’s Yomiuri Shimbun reported that thirteen former officials of government agencies that regulate energy companies were currently working for TEPCO or other power firms. Another practice, known as amaagari, “ascent to heaven,” spins the revolving door in the opposite direction. Here, the nuclear industry sends retired nuclear utility officials to government agencies overseeing the nuclear industry. Again, ferreting out safety problems is not a high priority.

Punishment of whistle-blowers

In 2000, Kei Sugaoka, a nuclear inspector working for GE at Fukushima Daiichi, noticed a crack in a reactor’s steam dryer, which extracts excess moisture to prevent harm to the turbine. TEPCO directed Sugaoka to cover up the evidence. Eventually, Sugaoka notified government regulators of the problem. They ordered TEPCO to handle the matter on its own. Sugaoka was fired. (47)

There is a similar story in the Davis-Besse plant history.

Factors that interfere with effective regulation

In summary: there appear to be several structural factors that make nuclear regulation less effective than it needs to be.

First is the political power and influence of the nuclear industry itself. This was a major factor in the background of the Chernobyl disaster as well, where generals and party officials pushed incessantly for rapid completion of reactors; Serhii Plokhy, Chernobyl: The History of a Nuclear Catastrophe. Lochbaum and his collaborators demonstrate the power that TEPCO had in shaping the regulations under which it built the Fukushima complex, including the assumptions that were incorporated about earthquake and tsunami risk. Charles Perrow demonstrates a comparable ability of the nuclear industry in the United States to shape the rules and procedures that govern its use of nuclear power (link). This power permits the owners of nuclear power plants to influence the content of regulation as well as the systems of inspection and oversight that the agency adopts.

A related factor is the set of influences and lobbying points that come from the needs of the economy and the production pressures of the energy industry. (Interestingly enough, this was also a major influence on Soviet decision-making in choosing the graphite-moderated, water-cooled reactor for use at Chernobyl and numerous other plants in the 1960s; Serhii Plokhy, Chernobyl: The History of a Nuclear Catastrophe.)

Third is the fact emphasized by Charles Perrow that the NRC is primarily governed by Congress, and legislators are themselves vulnerable to the pressures and blandishments of the industry and demands for a low-regulation business environment. This makes it difficult for the NRC to carry out its role as independent guarantor of the health and safety of the public. Here is Perrow’s description of the problem in The Next Catastrophe: Reducing Our Vulnerabilities to Natural, Industrial, and Terrorist Disasters (quoting Lochbaum from a 2004 Union of Concerned Scientists report):

With utilities profits falling when the NRC got tough after the Time story, the industry not only argued that excessive regulation was the problem, it did something about what it perceived as harassment. The industry used the Senate subcommittee that controls the agency’s budget, headed by a pro-nuclear Republican senator from New Mexico, Pete Domenici. Using the committee’s funds, he commissioned a special study by a consulting group that was used by the nuclear industry. It recommended cutting back on the agency’s budget and size. Using the consultant’s report, Domenici “declared that the NRC could get by just fine with a $90 million budget cut, 700 fewer employees, and a greatly reduced inspection effort.” (italics supplied) The beefed-up inspections ended soon after the threat of budget cuts for the agency. (Mangels 2003) And the possibility for public comment was also curtailed, just for good measure. Public participation in safety issues once was responsible for several important changes in NRC regulations, says David Lochbaum, a nuclear safety engineer with the Union of Concerned Scientists, but in 2004, the NRC bowed to industry pressure and virtually eliminated public participation. (Lochbaum 2004) As Lochbaum told reporter Mangels, “The NRC is as good a regulator as Congress permits it to be. Right now, Congress doesn’t want a good regulator.” (The Next Catastrophe, kl 2799)

A fourth important factor is a pervasive complacency within the professional nuclear community about the inherent safety of nuclear power. This is a factor mentioned by Lochbaum:

Although the accident involved a failure of technology, even more worrisome was the role of the worldwide nuclear establishment: the close-knit culture that has championed nuclear energy—politically, economically, socially—while refusing to acknowledge and reduce the risks that accompany its operation. Time and again, warning signs were ignored and near misses with calamity written off. (kl 87)

This is what we might call an ideological or cultural factor, in that it describes a mental framework for thinking about the technology and the public. It is a very real factor in decision-making, both within the industry and in the regulatory world. Senior nuclear engineering experts at major research universities seem to share the view that the public “fear” of nuclear power is entirely misplaced, given the safety record of the industry. They believe the technical problems of nuclear power generation have been solved, and that a rational society would embrace nuclear power without anxiety. For a rebuttal of this complacency, see Rose and Sweeting’s report in the Bulletin of the Atomic Scientists, “How safe is nuclear power? A statistical study suggests less than expected” (link). Here is the abstract of their paper:

After the Fukushima disaster, the authors analyzed all past core-melt accidents and estimated a failure rate of 1 per 3704 reactor years. This rate indicates that more than one such accident could occur somewhere in the world within the next decade. The authors also analyzed the role that learning from past accidents can play over time. This analysis showed few or no learning effects occurring, depending on the database used. Because the International Atomic Energy Agency (IAEA) has no publicly available list of nuclear accidents, the authors used data compiled by the Guardian newspaper and the energy researcher Benjamin Sovacool. The results suggest that there are likely to be more severe nuclear accidents than have been expected and support Charles Perrow’s “normal accidents” theory that nuclear power reactors cannot be operated without major accidents. However, a more detailed analysis of nuclear accident probabilities needs more transparency from the IAEA. Public support for nuclear power cannot currently be based on full knowledge simply because important information is not available.
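The headline numbers in the abstract are easy to sanity-check. Here is a minimal back-of-the-envelope sketch in Python, treating core melts as a Poisson process; the Poisson assumption and the figure of roughly 440 operating reactors worldwide are mine for illustration, not Rose and Sweeting’s:

```python
from math import exp

# Rose and Sweeting's estimated core-melt rate: 1 accident per 3704 reactor-years.
rate = 1 / 3704  # accidents per reactor-year

# Assumed exposure over the next decade: ~440 reactors x 10 years.
# (The reactor count is an illustrative assumption, not the paper's figure.)
reactor_years = 440 * 10

lam = rate * reactor_years  # expected accidents under a Poisson model, ~1.19

p_none = exp(-lam)
p_at_least_one = 1 - p_none                  # ~0.69
p_more_than_one = 1 - p_none - lam * p_none  # ~0.33

print(f"expected accidents in a decade: {lam:.2f}")
print(f"P(at least one):  {p_at_least_one:.2f}")
print(f"P(more than one): {p_more_than_one:.2f}")
```

On these assumptions the expected number of core-melt accidents over a decade comes to about 1.2, which is the sense in which the authors can say that more than one such accident could occur somewhere in the world within the next decade.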

Lee Clarke’s book on planning for disaster on the basis of unrealistic models and simulations is relevant here. In Mission Improbable: Using Fantasy Documents to Tame Disaster, Clarke argues that much of the planning currently in place for large-scale disasters depends upon models, simulations, and scenario-building tools in which we should have very little confidence.

The complacency about nuclear safety mentioned here makes safety regulation more difficult and, paradoxically, makes the safe use of nuclear power less likely. Only when the risks are confronted with complete transparency and honesty will it be possible to design regulatory systems that do an acceptable job of ensuring the safety and health of the public.

In short, Lochbaum and his co-authors seem to provide evidence for the conclusion that the NRC is not in a position to perform its primary function: to establish a rational and scientifically well grounded set of standards for safe reactor design and operation. Further, its ability to enforce through inspection seems impaired as well by the power and influence the nuclear industry can deploy through Congress to resist its regulatory efforts. Good expert knowledge is canvassed through the NRC’s processes; but the policy recommendations that flow from this scientific analysis are all too often short-circuited by the ability of the industry to fend off new regulatory requirements. Lochbaum’s comment quoted by Perrow above seems all too true: “The NRC is as good a regulator as Congress permits it to be. Right now, Congress doesn’t want a good regulator.” 

It is very interesting to read the transcript of a 2014 hearing of the Senate Committee on Environment and Public Works titled “NRC’S IMPLEMENTATION OF THE FUKUSHIMA NEAR-TERM TASK FORCE RECOMMENDATIONS AND OTHER ACTIONS TO ENHANCE AND MAINTAIN NUCLEAR SAFETY” (link). Senator Barbara Boxer, California Democrat and chair of the committee, opened the meeting with these words:

Although Chairman Macfarlane said, when she announced her resignation, she had assured that ‘‘the agency implemented lessons learned from the tragic accident at Fukushima.’’ She said, ‘‘the American people can be confident that such an accident will never take place here.’’

I say the reality is not a single one of the 12 key safety recommendations made by the Fukushima Near-Term Task Force has been implemented. Some reactor operators are still not in compliance with the safety requirements that were in place before the Fukushima disaster. The NRC has only completed its own action on 4 of the 12 task force recommendations.

This is an alarming assessment, and one that is entirely in accord with the observations made by Lochbaum above.

The Morandi Bridge collapse and regulatory capture

[Image: Eugenio Ceroni and Luca Cozzi, Ponte Morandi – Autopsia di una strage]

A recurring topic in Understanding Society is the question of the organizational causes that lie in the background of major accidents and technological disasters. One such disaster is the catastrophic collapse of the Morandi Bridge in Genoa in August 2018, which resulted in the deaths of 43 people. Was this a technological failure, a design failure, or, more fundamentally, a failure in which private and public organizational features led to the disaster?

A major story in the New York Times on March 5, 2019 (link) makes it clear that social and organizational causes were central to this horrendous failure. (What could be more terrifying than having the highway bridge under your vehicle collapse to the earth 150 feet beneath you?) It is evident from the Times coverage that a major cause of the disaster was the relationship between Autostrade per l’Italia, the private company that manages the bridge and derives enormous profit from it, and the government ministries responsible for regulating and supervising the safe operation of highways and bridges.

In a sign of the arrogance of wealth and power involved in the relationship, the Benetton family threatened a multimillion-dollar lawsuit against the economist Marco Ponti, who had served on an expert panel advising the government and had made strong statements about the one-sided relationship that existed. The threat was not acted upon, but the abuse of power is clear.

This appears to be a textbook case of “regulatory capture,” a situation in which the private owners of a risky enterprise or activity use their economic power to influence or intimidate the government regulatory agencies that nominally oversee their activities. “Autostrade reaped huge profits and acquired so much power that the state became a largely passive regulator” (NYT March 5, 2019). Moreover, independent governmental oversight was crippled by the fact that “the company effectively regulated itself – because Autostrade’s parent company owned the inspection company responsible for safety checks on the Morandi Bridge” (NYT). The Times quotes Carlo Scarpa, an economics professor at the University of Brescia:

Any investor would have been worried about bidding. The Benettons, though, knew the system and they understood that the Ministry of Infrastructure and Transport, which was supposed to supervise the whole thing, was weak. They were able to calculate the weight the company would have in the political arena. (NYT March 5, 2019)

And this seems to have worked out as the family expected:

Autostrade became a political powerhouse, acquiring clout that the Ministry of Infrastructure and Transport, perpetually underfunded and employing a small fraction of the staff, could not match. (NYT March 5, 2019)

The story notes that the private company made a great deal of money from this contract, but that the state also benefited financially. “Autostrade has poured billions of euros into state coffers, paying nearly 600 million euros a year in corporate taxes, V.A.T. and license fees.”

The story also surfaces other social factors that played a role in the disaster, including opposition by Genoa residents to the construction involved in creating a potential bypass to the bridge.

Here is what the Times story has to say about the inspections that occurred:

Beyond fixing blame for the bridge collapse, a central question of the Morandi tragedy is what happened to safety inspections. The answer is that the inspectors worked for Autostrade more than for the state. For decades, Spea Engineering, a Milan-based company, has performed inspections on the bridge. If nominally independent, Spea is owned by Autostrade’s parent company, Atlantia, and Autostrade is also Spea’s largest customer. Spea’s offices in Rome and elsewhere are housed inside Autostrade. One former bridge design engineer for Spea, Giulio Rambelli, described Autostrade’s control over Spea as “absolute.” (NYT March 5, 2019)

The story notes that this relationship raises the possibility of conflicts of interest that are prohibited in other countries. The story quotes Professor Giuliano Fonderico: “All this suggests a system failure.”

The failure appears to be first and foremost a failure of the state to fulfill its obligations of regulation and oversight of dangerous activities. By ceding any real and effective system of safety inspection to the business firms that benefit from the operation of the bridge, the state essentially gave up its responsibility for ensuring the safety of the public.

It is also worth underlining the point made in the article about the huge mismatch between the capacities of the business firms in question and the agencies nominally charged with regulating them. This is a failure at a higher, system level, since it highlights the power imbalance that almost always exists between large corporate wealth and the government agencies charged with overseeing corporate activities.

Here is an editorial from the Guardian that makes some similar points; link. There do not appear to be book-length treatments of the Morandi Bridge disaster available in English. Here is an Italian book on the subject by Eugenio Ceroni and Luca Cozzi, Ponte Morandi – Autopsia di una strage: I motivi tecnici, le colpe, gli errori. Quel che si poteva fare e non si è fatto, which appears to be a technical civil-engineering analysis of the collapse. The Kindle translate option using Bing is helpful for non-Italian readers who want to get the thrust of this short book. In the engineering analysis, inadequate inspection and incomplete maintenance remediation emerge as key factors in the collapse.

Regulatory failure

When we think of the issues of health and safety that exist in a modern complex economy, it is impossible to imagine that these social goods will be produced in sufficient quantity and quality by market forces alone. Safety and health hazards are typically regarded as “externalities” by private companies — if they can be “dumped” on the public without cost, this is good for the profitability of the company. And state regulation is the appropriate remedy for this tendency of a market-based economy to chronically produce hazards and harms, whether in the form of environmental pollution, unsafe foods and drugs, or unsafe industrial processes. David Moss and John Cisternino’s New Perspectives on Regulation provides some genuinely important perspectives on the role and effectiveness of government regulation in an epoch which has been shaped by virulent efforts to reduce or eliminate regulations on private activity. This volume is a report from the Tobin Project.

It is poignant to read the optimism that the editors and contributors have — in 2009 — about the resurgence of support for government regulation. The financial crisis of 2008 had stimulated a vigorous round of regulation of financial institutions, and most of the contributors took this as a harbinger of a fresh public support for regulation more generally. Of course events have shown this confidence to be sadly mistaken; the dismantling of Federal regulatory regimes by the Trump administration threatens to take the country back to the period described by Upton Sinclair in the early part of the prior century. But what this demonstrates is the great importance of the Tobin Project. We need to build a public understanding and consensus around the unavoidable necessity of effective and pervasive regulatory regimes in environment, health, product safety, and industrial safety.

Here is how Mitchell Weiss, Executive Director of the Tobin Project, describes the project culminating in this volume:

To this end, in the fall of 2008 the Tobin Project approached leading scholars in the social sciences with an unusual request: we asked them to think about the topic of economic regulation and share key insights from their fields in a manner that would be accessible to both policymakers and the public. Because we were concerned that a conventional literature survey might obscure as much as it revealed, we asked instead that the writers provide a broad sketch of the most promising research in their fields pertaining to regulation; that they identify guiding principles for policymakers wherever possible; that they animate these principles with concrete policy proposals; and, in general, that they keep academic language and footnotes to a minimum. (5)

The lead essay is provided by Joseph Stiglitz, who looks more closely at the real consequences of market failure than economists of previous decades had done. Stiglitz puts the point about market failure very crisply:

Only under certain ideal circumstances may individuals, acting on their own, obtain “pareto efficient” outcomes, that is, situations in which no one can be made better off without making another worse off. These individuals involved must be rational and well informed, and must operate in competitive marketplaces that encompass a full range of insurance and credit markets. In the absence of these ideal circumstances, there exist government interventions that can potentially increase societal efficiency and/or equity. (11)

And regulation is unpopular — with the businesses, landowners, and other powerful agents whose actions are constrained.

By its nature, a regulation restricts an individual or firm from doing what it otherwise would have done. Those whose behavior is so restricted may complain about, say, their loss of profits and potential adverse effects on innovation. But the purpose of government intervention is to address potential consequences that go beyond the parties directly involved, in situations in which private profit is not a good measure of social impact. Appropriate regulation may even advance welfare-enhancing innovations. (13)

Stiglitz pays attention to the pervasive problem of “regulatory capture”:

The current system has made regulatory capture too easy. The voices of those who have benefited from lax regulation are strong; the perspectives of the investment community have been well represented. Among those whose perspectives need to be better represented are the laborers whose jobs would be lost by macro-mismanagement, and the pension holders whose pension funds would be eviscerated by excessive risk taking.

One of the arguments for a financial products safety commission, which would assess the efficacy and risks of new products and ascertain appropriate usage, is that it would have a clear mandate, and be staffed by people whose only concern would be protecting the safety and efficacy of the products being sold. It would be focused on the interests of the ordinary consumer and investors, not the interests of the financial institutions selling the products. (18)

It is very interesting to read Stiglitz’s essay with attention to the economic focus he offers. His examples all come from the financial industry — the risk at hand in 2008-2009. But the arguments apply equally profoundly to manufacturing, the pharmaceutical and food industries, energy industries, farming and ranching, and the for-profit education sector. At the same time the institutional details are different, and an essay on this subject with a focus on nuclear or chemical plants would probably identify a different set of institutional barriers to effective regulation.

Also particularly interesting is the contribution by Michael Barr, Eldar Shafir, and Sendhil Mullainathan on how behavioral perspectives on “rational action” can lead to more effective regulatory regimes. This essay pays close attention to the findings of experimental economics and behavioral economics, and the deviations from “pure economic rationality” that are pervasive in ordinary economic decision making. These features of decision-making are likely to be relevant to the effectiveness of a regulatory regime as well. Further, it suggests important areas of consumer behavior that are particularly subject to exploitative practices by financial companies — creating a new need for regulation of these kinds of practices. Here is how they summarize their approach:

We propose a different approach to regulation. Whereas the classical perspective assumes that people generally know what is important and knowable, plan with insight and patience, and carry out their plans with wisdom and self-control, the central gist of the behavioral perspective is that people often fail to know and understand things that matter; that they misperceive, misallocate, and fail to carry out their intended plans; and that the context in which people function has great impact on their behavior, and, consequently, merits careful attention and constructive work. In our framework, successful regulation requires integrating this richer view of human behavior with our understanding of markets. Firms will operate on the contour defined by this psychology and will respond strategically to regulations. As we describe above, because firms have a great deal of latitude in issue framing, product design, and so on, they have the capacity to affect behavior and circumvent or pervert regulatory constraints. Ironically, firms’ capacity to do so is enhanced by their interaction with “behavioral” consumers (as opposed to the hypothetically rational actors of neoclassical economic theory), since so many of the things a regulator would find very hard to control (for example, frames, design, complexity, etc.) can greatly influence consumers’ behavior. The challenge of behaviorally informed regulation, therefore, is to be well designed and insightful both about human behavior and about the behaviors that firms are likely to exhibit in response to both consumer behavior and regulation. (55)

The contributions to this volume are very suggestive with regard to the issues of product safety, manufacturing safety, food and drug safety, and the like which constitute the larger core of the need for regulatory regimes. And the challenges faced in the areas of financial regulation discussed here are likely to be found to be illuminating in other sectors as well.

 

Patient safety — Canada and France


Patient safety is a key issue in managing and assessing a regional or national health system. There are very sizable variations in patient safety statistics across hospitals, with significantly higher rates of infection and mortality in some institutions than others. Why is this? And what can be done in order to improve the safety performance of low-safety institutions, and to improve the overall safety performance of the hospital environment nationally?

Previous posts have made the point that safety is the net effect of a complex system within a hospital or chemical plant, including institutions, rules, practices, training, supervision, and day-to-day behavior by staff and supervisors (post, post). And experts on hospital safety agree that improvements in safety require careful analysis of patient processes in order to redesign processes so as to make infections, falls, improper medications, and unnecessary mortality less likely. Institutional design and workplace culture have to change if safety performance is to improve consistently and sustainably. (Here is a posting providing a bit more discussion of the institutions of a hospital; post.)

But here is an important question: what are the features of the social and legal environment that will make it most likely that hospital administrators will commit themselves to a thorough-going culture and management of safety? What incentives or constraints need to exist to offset the impulses of cost-cutting and status quo management that threaten to undermine patient safety? What will drive the institutional change in a health system that improving patient safety requires?

Several measures seem clear. One is state regulation of hospitals. This exists in every state, but the effectiveness of regulatory regimes varies widely across contexts. So understanding the dynamics of regulation and enforcement is a crucial step toward improving hospital quality and patient safety. The oversight of rigorous hospital accreditation agencies is another important factor for improvement. For example, the Joint Commission accredits thousands of hospitals in the United States (web page) through dozens of accreditation and certification programs. Patient safety is the highest priority underlying Joint Commission standards of accreditation. So regulation and the formulation of standards are part of the answer. But a particularly important policy tool for improving safety performance is the mandatory collection and publication of safety statistics, so that potential patients can choose between hospitals on the basis of their safety performance. Publicity and transparency are crucial parts of good management behavior, and secrecy is a refuge of poor performance in areas of public concern such as safety, corruption, or rule-setting. (See an earlier post on the relationship between publicity and corruption.)

But here we have a bit of a conundrum: achieving mandatory publication of safety statistics is politically difficult, because hospitals have a business interest in keeping these data private. So there has been a great deal of resistance to mandatory reporting of basic patient safety data in the US over the past twenty years. Fortunately, the public interest in having these data readily available has largely prevailed, and hospitals are now required to publish a broader and broader range of data on patient safety, including hospital-acquired infection rates, ventilator-associated pneumonias, patient falls, and mortality rates. Here is a useful tool from USA Today that lets patients gather information about their hospital options and how these compare with other hospitals regionally and nationally. This is an effective accountability mechanism that inevitably drives hospitals toward better performance.

Canada has been very active in this area. Here is a website published by the Ontario Ministry of Health and Long-Term Care. The province requires hospitals to report a number of indicators of patient safety: several kinds of hospital-acquired infections, including central-line primary bloodstream infection and ventilator-associated pneumonia; surgical-site infection prevention activity; and the hospital-standardized mortality ratio. The user can explore the site and find that there are in fact wide variations across hospitals in the province. This is likely to change patient choice, but it also serves as an instant guide for regulatory agencies and local hospital administrators as they attempt to focus attention on poor management practices and institutional arrangements. (It would be helpful for the purpose of comparison if the data could be easily downloaded into a spreadsheet.)

On first principles, it seems likely that any country that has a hospital system in which the safety performance of each hospital is kept secret will also show a wide distribution of patient safety outcomes across institutions, and will have an overall safety record that is much lower than it could be. This is because secrecy gives hospital administrators the ability to conceal the risks their institutions impose on patients through bad practices. So publicity and regular publication of patient safety information seems to be a necessary precondition to maintaining a high-safety hospital system.
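This first-principles claim can be made concrete with a toy simulation. The sketch below is mine, and every number in it (initial rates, improvement step, drift) is invented for illustration: when rates are published, hospitals above the median face pressure to improve, while under secrecy nothing pushes the distribution toward better performance.

```python
import random

random.seed(1)

def simulate(transparent: bool, hospitals: int = 50, years: int = 10) -> list[float]:
    """Toy model: each hospital starts with a random infection rate
    (per 100 admissions). Under transparency, above-median hospitals
    cut their rate by 10% a year; under secrecy, rates merely drift."""
    rates = [random.uniform(2.0, 12.0) for _ in range(hospitals)]
    for _ in range(years):
        median = sorted(rates)[hospitals // 2]
        for i, rate in enumerate(rates):
            if transparent and rate > median:
                rate *= 0.9  # published data pressures laggards to improve
            rates[i] = max(0.5, rate + random.gauss(0.0, 0.1))  # small noise
    return rates

for label, flag in [("secrecy", False), ("transparency", True)]:
    results = simulate(flag)
    print(f"{label:12s} mean={sum(results) / len(results):5.2f} "
          f"spread={max(results) - min(results):5.2f}")
```

The specific numbers mean nothing; the mechanism is the point. Public reporting creates the feedback loop that both narrows the spread across institutions and lowers the average rate.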

But here is the crucial point: many countries continue to permit secrecy when it comes to hospital safety. In particular, this seems to be true in France, where the medical and hospital system continues to display a very high degree of secrecy and opacity about patient safety. Anecdotal information about French hospitals suggests a wide range of levels of hospital-acquired infections in different hospitals. Hospital-acquired infections (infections nosocomiales) are an important and rising cause of patient morbidity. And there are well-known practices and technologies that substantially reduce the incidence of these infections. But the implementation of these practices requires strong commitment and dedication at the unit level, and this degree of commitment is unlikely to occur in an environment of secrecy.

In fact, I have not been able to discover French counterparts of the tools now available for measuring patient safety in North American hospitals. But without this regular reporting, there is no mechanism through which institutions with bad safety records can be “ratcheted” up into better practices and better safety outcomes. The impression given by the French medical system is that the doctors and the medical authorities are sacrosanct; patients are not expected to question their judgment, and the state appears not to require institutions to report and publish fundamental safety information. Patients have very little power, and the media so far seem to have paid little attention to the issues of patient safety in French hospitals. This 2007 article in Le Point seems to be a first for France in that it provides quantitative rankings of a large number of hospitals in their treatment of a number of diseases. But it does not provide the kinds of safety information (infections, falls, pneumonias) that are core measures of patient safety.

There is a French state agency, the Office National d’Indemnisation des Accidents Médicaux (ONIAM), that provides compensation to patients who can demonstrate that their injuries are the result of hospital-induced causes, especially hospital-associated infections. But it appears that this agency is restricted to after-the-fact recognition of hospital errors rather than pro-active programs designed to reduce them. And here is a French government web site devoted to the issue of hospital infections. It announces a multi-pronged strategy for controlling the problem of infections nosocomiales, including the establishment of a national program of surveillance of the rates of these infections. So far, however, I have not been able to locate web resources that would provide hospital-level data about infection rates.

So I am offering a hypothesis that I would be very happy to see refuted: that the French medical establishment continues to be bureaucratically administered with very little public exposure of actual performance when it comes to patient safety. And without this system of publicity, it seems very likely that there are wide and tragic variations across French hospitals with regard to patient safety.

Are there French medical sociologists and public health researchers who are working on the issue of patient safety in French hospitals? Can good contemporary French sociologists like Céline Béraud, Baptiste Coulmont, and Philippe Masson offer some guidance on this topic (post)? If readers are aware of databases and patient safety research programs in France that are relevant to these topics, I would be very happy to hear about them.

Update: Baptiste Coulmont (blog) passes on this link to the Réseau d’alerte, d’investigation et de surveillance des infections nosocomiales (RAISIN) within the Institut de veille sanitaire. The site provides research reports and regional assessments of nosocomial infection incidence. It does not appear to provide data at the level of specific hospitals and medical centers. Baptiste refers also to work by Jean Peneff, a French medical sociologist and author of La France malade de ses médecins. Here is a link to a subsequent research report by Peneff. Thanks, Baptiste.

Trust and corruption

The collapse of a major skyscraper crane in New York City last month led to a surprising result: the arrest of the city’s chief crane inspector on charges of bribery. (See the New York Times story here.) (The story indicates that the facts surrounding the charges are unrelated to this particular crane collapse.) Several weeks earlier, a Congressional committee heard testimony from three F.A.A. inspectors to the effect that the agency had permitted Southwest Airlines to fly uninspected planes (story), and some attributed this lapse to too cozy a relationship between the F.A.A. and the airline industry:

The F.A.A.’s watchdog role, to many Democrats in Congress who now oversee airline regulators, grew toothless. “We had drifted a little bit too much toward the over-closeness and coziness between regulator and regulated,” said H. Clayton Foushee Jr., a former F.A.A. official who led a recent inquiry by Mr. Oberstar’s committee. (story)

The basic systems of a complex society depend upon the good-faith commitment of providers to give top priority to safety, health, and quality, but they also depend upon regulation, inspection, and certification. Caveat emptor doesn’t work when it comes to airline travel or working in a skyscraper; we simply have to trust that the airliner or the building is built and maintained to a high level of safety standards. The food we eat, the restaurants we patronize, the airlines and railroads we travel on, and the buildings we live and work in (and send our children to) provide complex products for our use that we can’t independently evaluate. Instead, we are obliged to trust the providers — the builders, the airline companies and their pilots and mechanics, the restaurant operators — and the regulatory and inspection regimes that are intended to provide an assurance of quality, safety, and health.

And yet there are two imperatives that work against public health and safety in most modern societies: the private incentive that the provider has to cut corners, and the perennial temptation of corruption that is inherent within a regulatory process. On the providers’ side, there is a constant business incentive to lower costs by substituting inferior ingredients or materials, to tolerate less-than-sanitary conditions in the back-of-restaurant areas, or to skimp on necessary maintenance of inherently dangerous systems. And on the regulatory side, there is the omnipresent possibility of collusion between inspectors and providers. Inspectors have it in their power to impose costs or savings on providers; so the provider has an economic interest in making payments to inspectors to save themselves these costs. (See Robert Klitgaard’s fascinating book, Controlling Corruption, for a political scientist’s analysis of this problem.)
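The provider’s side of this temptation reduces to a one-line expected-value comparison. Here is a minimal sketch with invented numbers (compliance cost, bribe, detection probability, fine), offered in the spirit of Klitgaard’s analysis rather than taken from it:

```python
# All figures are invented for illustration.
compliance_cost = 500_000  # cost of performing the required maintenance
bribe = 20_000             # side payment to the inspector
p_caught = 0.05            # probability the collusion is detected
fine = 2_000_000           # penalty if it is

# Expected cost of the corrupt path: the bribe plus the expected fine.
expected_cost_of_bribery = bribe + p_caught * fine  # 20,000 + 100,000 = 120,000

# Corruption "pays" whenever its expected cost undercuts compliance.
print(expected_cost_of_bribery < compliance_cost)  # True: 120,000 < 500,000
```

On these numbers the corrupt path is cheaper by a wide margin. The policy lever is the product of detection probability and penalty, which enforcement must push above the savings from non-compliance.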

In a purely laissez-faire environment we would expect there to be recurring instances of health and safety disasters in food production, building construction, transportation, and the healthcare system; this seems to be the logical result of a purely cost- and profit-driven system of production. (This seems to be what lies at the heart of the Chinese pet food and toy product scandals of several months ago, and it was at the heart of the food industries chronicled by Upton Sinclair a century ago in this country.)

But an inadequate system of regulation and enforcement seems equally likely to lead to health and safety crises for society, whether because inspection regimes are badly run or because inspectors are corrupt. The two stories about inspection mentioned above point to different ways in which a regulatory system can go wrong: individual inspectors can be corrupted, or honest inspectors can be improperly managed by their regulatory organization. And, of course, there is a third possibility: the regulatory system may be fully honest and well managed but wholly insufficient, in resources and personnel, to the task presented to it.

These two tendencies appear to be producing major social problems in China today. The Chinese public has little confidence in building standards, even in the major civil engineering projects the country has undertaken in the past ten years (CNN story, BBC story); there is widespread concern about corruption in many aspects of ordinary life; and there is growing concern among consumers about the safety of the system of food production, public water sources, and pharmaceuticals (story). (The anger and anguish expressed by parents whose children were lost in collapsed schools in Sichuan appear to derive from these kinds of mistrust.) So one of China’s major challenges for the coming years is to create credible, effective, and trusted regulatory regimes for the areas of public life that most directly affect health and safety.

But the stories mentioned above don’t have to do with China, or India, or Brazil; they have to do with the United States. We have lived through a period of determined deregulation since 1980, and have been subjected to a political ideology that minimized and demeaned the role of government in protecting the health and safety of the public, in banking no less than in air safety. It seems very pressing for us now to ask: how effective are the systems of regulation and inspection that we have in our key industries (food, pharmaceuticals, hospitals, transportation, and construction)? How much confidence can we have in the basic health and safety features of these fundamental social goods? And what sorts of institutional reforms do we need to undertake?
