O-rings and production pressure

Allan McDonald’s Truth, Lies, and O-Rings: Inside the Space Shuttle Challenger Disaster (2009) has given me a somewhat different understanding of the Challenger launch disaster than I had gained from other sources, including Diane Vaughan’s excellent book The Challenger Launch Decision: Risky Technology, Culture, and Deviance at NASA. McDonald is a Morton Thiokol (MTI) insider who was present through virtually all aspects of the evolving solid rocket program at NASA in the two years leading up to the explosion in January 1986. He was director of the Space Shuttle Solid Rocket Motor Project during part of this time, he represented MTI at the formal Flight Readiness Review (FRR) panels for several shuttle launches, and he was MTI’s senior management representative for the launch of STS-51L Challenger. His account gives a great deal of engineering detail about the Morton Thiokol engineering group’s ongoing concerns about the O-rings in the months preceding the Challenger disaster. This serves as a backdrop for a detailed analysis of the dysfunctions of decision-making, in both NASA and Morton Thiokol, that led to an insufficient priority being given to safety assessments.

It is worth noting that O-rings were a key part of other large solid-fuel rockets, including the Titan rocket. So there was a large base of engineering and test experience with the performance of the O-rings when exposed to the high temperatures and pressures of ignition and firing.

The biggest surprise to me is the level of informed, rigorous, and evidence-based concern that MTI engineers had about the reliability of the joint seal afforded by the primary and secondary O-rings on the solid rocket motors of the Shuttle system. These specialists had a very good and precise understanding of the mechanics of the problem. Further, there was a good engineering understanding of the expected (and required) time-sequence performance of the O-rings during ignition and firing: if the sealing action were delayed by even a few hundredths of a second, hot gas would be able to penetrate past the seal. These were not hypothetical worries; they were based on data from earlier launches demonstrating O-ring erosion, as well as soot between the primary and secondary rings showing that super-hot gases had penetrated the primary seal. The worst damage and evidence of blowby had occurred one year earlier on flight STS-51C (January 24, 1985), the lowest-temperature launch yet attempted, which took place at 53 degrees.

Launch temperatures for the rescheduled January 28 launch were projected to be extremely cold: on January 27, the forecast called for 22 to 26 degrees, roughly 30 degrees colder than the previous January launch. The projected temperatures immediately raised alarm about the potential effects on the O-rings, both among the Utah-based engineering team and with McDonald himself. A teleconference was scheduled for January 27 to receive a recommendation from the Utah-based Morton Thiokol engineers who had been focused on the O-ring problem concerning the minimum acceptable temperature for launch (95).

I tried to reach Larry Mulloy at his hotel but failed, so I called Cecil Houston, the NASA/MSFC Resident Manager at KSC. I alerted him of our concerns about the sealing capability of the field-joint O-rings at the predicted cold temperatures and asked him to set up the teleconference. (96)

The teleconference began at 8:30 pm on the evening before the launch. McDonald, present at Cape Canaveral for the Flight Readiness Review panel, participated in the teleconference in which MTI engineering presented its analysis, leading to a recommendation against launching in the expected cold weather conditions.

Thiokol’s engineering presentation consisted of about a dozen charts summarizing the history of the performance of the field-joints, some engineering analysis on the operation of the joints, and some laboratory and full-scale static test data relative to the performance of the O-rings at various temperatures. About half the charts had been prepared by Roger Boisjoly, our chief seal expert on the O-ring Seal Task Force and staff engineer to Jack Kapp, Manager of Applied Mechanics. The remainder were presented by Arnie Thompson, the supervisor of our Structures Section under Jack Kapp, and by Brian Russell, a program manager working for Bob Ebeling. (97)

Boisjoly’s next chart showed how cold temperature would reduce all the factors that helped maintain a good seal in the joint: lower O-ring squeeze due to thermal shrinkage of the O-ring; thicker and more viscous grease around the O-ring, making it slower to move across the O-ring groove; and higher O-ring hardness due to low temperature, making it more difficult for the O-ring to extrude dynamically into the gap for proper sealing. All of these things increased the dynamic actuation time, or timing function, of the O-ring, when at the very same time the O-ring could be eroding, creating a situation where the secondary seal might not be able to seal the motor, not if the primary O-ring was sufficiently eroded to prevent sealing in the joint. (99)
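The mechanism Boisjoly describes amounts to a race between seating and erosion: the O-ring must extrude into the joint gap before hot gas erodes it past the point of sealing, and cold slows the seating while doing nothing to slow the erosion. Here is a minimal toy model of that race; every parameter is hypothetical, chosen only to exhibit the threshold structure of the argument, not drawn from actual SRM engineering data.

    # A toy model of the race Boisjoly describes: the O-ring must seat before
    # hot gas erodes it past the point of sealing. All parameters below are
    # hypothetical illustrations, not actual SRM engineering values.
    def seating_time_ms(temp_f, base_ms=20.0, ref_f=75.0, slowdown_per_deg=0.8):
        """Hypothetical seating time: slower as temperature drops, reflecting
        reduced resilience, stiffer grease, and harder rubber."""
        return base_ms + slowdown_per_deg * max(0.0, ref_f - temp_f)

    EROSION_WINDOW_MS = 50.0  # hypothetical time before erosion defeats the seal

    for temp in (75, 53, 29):
        t = seating_time_ms(temp)
        verdict = "seals in time" if t < EROSION_WINDOW_MS else "fails to seal in time"
        print(f"{temp} deg F: seating time ~{t:.0f} ms -> {verdict}")

On these made-up numbers the joint seals comfortably at 75 and 53 degrees but not at 29; the point is only that a linear slowdown against a fixed erosion window produces exactly the cliff-edge behavior the engineers feared, which is why extrapolating from a 53-degree success to a far colder launch was so dangerous.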

Based on their concerns about temperature and the effectiveness of the seals in the critical half-second of ignition, MTI engineering staff prepared the foundation for a recommendation not to launch at temperatures lower than 53 degrees. Their conclusion, as presented at the January 27 teleconference, was unequivocally against launch under these temperature conditions:

The final chart included the recommendations, which resulted in several strong comments and many very surprising reactions from the NASA participants in the teleconference. The first statement on the “Recommendations” chart stated that the O-ring temperature must be equal to or greater than 53° at launch, and this was primarily based upon the fact that SRM-15, which was the best simulation of this condition, worked at 53°. The chart ended with a statement that we should project the ambient conditions (temperature and wind) to determine the launch time. (102)

NASA lead Larry Mulloy contested the analysis and evidence in the slides, expressed great concern about the negative launch recommendation, and asserted that the data were “inconclusive” in establishing a relationship between temperature and O-ring failure.

Mulloy immediately said he could not accept the rationale that was used in arriving at that recommendation. Stan Reinartz then asked George Hardy, Deputy Director of Science and Engineering at NASA/MSFC, for his opinion. Hardy said he was “appalled” that we could make such a recommendation, but that he wouldn’t fly without Morton Thiokol’s concurrence. Hardy also stated that we had only addressed the primary O-ring, and did not address the secondary O-ring, which was in a better position to seal because of the leak-check. Mulloy then shouted, “My God, Thiokol, when do you want me to launch, next April?” He also stated that “the eve of a launch is a helluva time to be generating new launch commit criteria!” Stan Reinartz entered the conversation by saying that he was under the impression that the solid rocket motors were qualified from 40° to 90° and that the 53° recommendation certainly was not consistent with that. (103)
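Mulloy’s claim that the data were “inconclusive” deserves a closer look. Post-accident statistical analyses (notably Dalal, Fowlkes, and Hoadley’s 1989 study) found that when all flights are considered, including the many warm-weather flights with no O-ring distress, the temperature relationship is strong; the “inconclusive” impression came from looking only at the flights that had problems. Here is a minimal sketch of the kind of analysis involved, using illustrative stand-in numbers rather than the actual flight record:

    # A sketch of a logistic-regression check on the temperature/O-ring question.
    # The temperatures and distress flags below are illustrative stand-ins shaped
    # like the pre-Challenger history, NOT the actual flight record.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    temps = np.array([53, 57, 58, 63, 66, 67, 67, 68, 69, 70, 70, 70,
                      70, 72, 73, 75, 75, 76, 76, 78, 79, 81]).reshape(-1, 1)
    distress = np.array([1, 1, 1, 1, 0, 0, 0, 0, 0, 1, 1, 0,
                         0, 0, 0, 0, 1, 0, 0, 0, 0, 0])

    model = LogisticRegression().fit(temps, distress)

    # Extrapolating far below the observed range is itself risky, but the fitted
    # trend makes the cold-launch danger hard to call "inconclusive."
    for t in (31, 53, 70):
        p = model.predict_proba([[t]])[0, 1]
        print(f"{t} deg F: estimated P(distress) = {p:.2f}")

The methodological moral, visible even in this toy version, is the one the Rogers Commission drew: excluding the no-distress flights discards exactly the information that reveals the temperature effect.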

Joe Kilminster, VP of Space Booster Programs at MTI, then requested a short caucus so that the engineering team in Utah could reevaluate the data and consider its response to the skepticism voiced by NASA officials. McDonald did not participate in the caucus, but his reconstruction based on the memories of persons present paints a clear picture. The engineering experts did not change their assessment, and they were overridden by MTI executives Cal Wiggins (VP and General Manager of the Space Division) and Jerry Mason (Senior VP of Wasatch Operations). In opening the caucus discussion, Mason is quoted as saying “we need to make a management decision”. Engineers Boisjoly and Thompson reiterated their technical concerns about the functionality of the O-ring seals at low temperature, with no response from the senior executives. No member of the engineering team spoke up to support a decision to launch. Mason polled the senior executives, including Bob Lund (VP of Engineering), and said to Lund, “It’s time for you, Bob, to take off your engineering hat and put on your management hat.” (111) A positive launch recommendation was then conveyed to NASA, and the process in Florida resumed towards launch.

McDonald spends considerable time describing the business pressure that MTI was subject to from its largest customer, NASA. NASA was considering creating a second-source option that would allow competing companies to supply solid fuel motors, and it had also delayed signing a large contract (the Buy-III fixed-cost bid) for the next batch of motors. The collective impact of these actions could have cost MTI over a billion dollars. So MTI management appears to have been under great pressure to accommodate NASA managers’ preferences concerning the launch decision. And it is hard to avoid the conclusion that their decision placed business interests first and the professional judgments of their safety engineers second. In doing so they placed the lives of seven astronauts at risk, with tragic consequences.

And what about NASA? Here the pressures are somewhat less fully developed than in Vaughan’s account, but the driving commitment to achieve a schedule of 24 launches per year seems to have been a primary motivation. Delayed launches significantly undermined this goal, threatening the prestige of NASA, the hope of significant commercial revenue for the program, and the assurance of continuing funding from Congress.

McDonald was not a participant in the caucus conference call, but, as noted above, his reconstruction from the accounts of participants is that the engineers continued to defend their recommendation based on very concrete concerns about the effectiveness of the O-rings in extreme cold, that senior managers indicated their lack of support for this engineering judgment, and that in the end Jerry Mason declared it would need to be a management decision. The FRR team was then informed that MTI had reconsidered its negative recommendation concerning launch. McDonald refused to sign the launch recommendation document, which was signed instead by his boss Joe Kilminster and faxed to the FRR team.

In hindsight it seems clear that both MTI executives and NASA executives deferred to the business pressures of their respective organizations in the face of well-supported doubts about the safety of the launch. Is this a case of 20-20 vision after the fact? It distinctly appears not to be. The depth of knowledge, analysis, and rational concern present in the engineering group for at least a year prior to the Challenger disaster gave very specific and evidence-based reasons to abort this launch. This was not some intuitive, unspecific set of worries; it was an ongoing research problem that greatly concerned the engineers who were directly involved. And it appears there was no significant disagreement or uncertainty among them.

So it is hard to avoid a rather terrible conclusion: the Challenger disaster was avoidable and should have been prevented. And the culpability lies with senior NASA and MTI executives who placed production pressures and business interests ahead of normal safety assessment procedures, and ahead of safety itself.

It is worth noting that Diane Vaughan’s assessment is directly at odds with this conclusion. She writes:

We now return to the eve of the launch. Accounts emphasizing valiant attempts by Thiokol engineers to stop the launch, actions of a few powerful managers who overruled a unanimous engineering position, and managerial failure to pass information about the teleconference to senior NASA administrators, coupled with news of economic strain and production pressure at NASA, led many to suspect that NASA managers had acted as amoral calculators, knowingly violating rules and taking extraordinary risk with human lives in order to keep the shuttle on schedule. However, like the history of decision making, I found that events on the eve of the launch were vastly more complex than the published accounts and media representations of it. From the profusion of information available after the accident, some actions, comments, and actors were brought repeatedly to public attention, finding their way into recorded history. Others, receiving less attention or none, were omitted. The omissions became, for me, details of social context essential for explanation. (kl 6215)

Concluding this list of puzzles and contradictions, I found that no one accused any of the NASA managers associated with the launch decision of being an amoral calculator. Although the Presidential Commission report extensively documented and decried the production pressures under which the Shuttle Program operated, no individuals were confirmed or even alleged to have placed economic interests over safety in the decision to launch the Space Shuttle Challenger. For the Commission to acknowledge production pressures and simultaneously fail to connect economic interests and individual actions is, prima facie, extremely suspect. But NASA’s most outspoken critics—Astronaut John Young, Morton Thiokol engineers Al McDonald and Roger Boisjoly, NASA Resource Analyst Richard Cook, and Presidential Commissioner Richard Feynman, who frequently aired their opinions to the media—did not accuse anyone of knowingly violating safety rules, risking lives on the night of January 27 and morning of January 28 to meet a schedule commitment. (kl 1627)

Vaughan’s account includes many of the pivot points of McDonald’s narrative, but she assigns a different significance to many of them, preferring her “normalization of deviance” explanation over the “amoral calculator” explanation.

(The Rogers Commission report and supporting documents are available online. Here is a portion of the hearings transcript in which senior NASA officials provide testimony; link. This segment is critical to the issues raised in McDonald’s account, since it addresses the January 27, 1986 teleconference in which a recommendation against launch was put forward by MTI engineering and challenged by NASA senior administrators.)

Regulatory delegation at the FAA

Earlier posts have focused on the role of inadequate regulatory oversight in the tragedy of the Boeing 737 MAX (link, link). (Also of interest is an earlier discussion of the “quiet power” through which business achieves its goals in legislation and agency rules (link).) Reporting in the New York Times this week by Natalie Kitroeff and David Gelles provides a smoking gun for the idea that industry has captured the regulatory agency established to ensure its safe operation (link). The article quotes a former attorney in the FAA office of chief counsel:

“The reauthorization act mandated regulatory capture,” said Doug Anderson, a former attorney in the agency’s office of chief counsel who reviewed the legislation. “It set the F.A.A. up for being totally deferential to the industry.”

Based on exhaustive investigative journalism, Kitroeff and Gelles provide a detailed account of the lobbying strategy and efforts by Boeing and the aircraft manufacturing industry group that led to the incorporation of industry-favored language into the FAA Reauthorization Act of 2018. It is a profoundly discouraging account for anyone interested in the idea that the public good should drive legislation. The new paragraphs introduced into the final legislation stipulate full implementation of the philosophy of regulatory delegation and establish an industry-centered group empowered to oversee the agency’s performance and to make recommendations about FAA employees’ compensation. “Now, the agency, at the outset of the development process, has to hand over responsibility for certifying almost every aspect of new planes.” Under the new legislation the FAA is forbidden from taking back control of the certification process for a new aircraft without a full investigation or inspection justifying such an action.

As the article notes, the 737 MAX was certified under the old rules. The new rules give the FAA even weaker oversight powers and responsibilities for the certification of new aircraft and major redesigns of existing aircraft. And the fact that the MCAS system was never fully reviewed by the FAA, based on assurances of its safety from Boeing, further reduces our confidence in the effectiveness of the FAA process. From the article:

The F.A.A. never fully analyzed the automated system known as MCAS, while Boeing played down its risks. Late in the plane’s development, Boeing made the system more aggressive, changes that were not submitted in a safety assessment to the agency.

Boeing, the Aerospace Industries Association, and the General Aviation Manufacturers Association exercised influence on the 2018 legislation through a variety of mechanisms. Legislators and lobbyists alike were guided by a report on regulation authored by Boeing itself. Executives and lobbyists exercised their ability to influence powerful senators and members of Congress through person-to-person interactions. And elected representatives from both parties favored “less regulation” as a way of supporting the economic interests of businesses in their states. For example:

They also helped persuade Senator Maria Cantwell, Democrat of Washington State, where Boeing has its manufacturing hub, to introduce language that requires the F.A.A. to relinquish control of many parts of the certification process.

And, of course, it is important not to forget the “revolving door” from industry to government to lobbying firm. Ali Bahrami was an FAA official who subsequently became a lobbyist for the aerospace industry; Stephen Dickson is a former executive of Delta Airlines who now serves as Administrator of the FAA; and in 2007 former FAA Administrator Marion Blakey became CEO of the Aerospace Industries Association, the industry’s chief advocacy and lobbying group (link). It is hard to expect neutral, objective judgment about public safety from such appointments.

Boeing and its allies found a receptive audience in the head of the House transportation committee, Bill Shuster, a Pennsylvania Republican staunchly in favor of deregulation, and his aide working on the legislation, Holly Woodruff Lyons.

These kinds of influence on legislation and agency action provide crystal-clear illustrations of the mechanisms cited by Pepper Culpepper in Quiet Politics and Business Power: Corporate Control in Europe and Japan to explain the political influence of business. Here is my description of his views in an earlier post:

Culpepper unpacks the political advantage residing with business elites and managers in terms of acknowledged expertise about the intricacies of corporate organization, an ability to frame the issues for policy makers and journalists, and ready access to rule-writing committees and task forces. These factors give elite business managers positional advantage, from which they can exert a great deal of influence on how an issue is formulated when it comes into the forum of public policy formation.

It seems abundantly clear that the “regulatory delegation” movement and its underlying effort to reduce the regulatory burden on industry have gone too far in the case of aviation; and the same seems true in other industries such as nuclear power. The much harder question is organizational: what form of regulatory oversight would permit a regulatory agency to genuinely enhance the safety of the regulated industry and protect the public from unnecessary hazards? Even if we could take the anti-regulation ideology that has governed much public discourse since the Reagan years out of the picture, the continuing issues of expertise, funding, and industry’s power of resistance make effective regulation a huge challenge.

The US Chemical Safety Board

The Federal agency responsible for investigating chemical and petrochemical accidents in the United States is the Chemical Safety Board (CSB) (link). The mission of the Board is described in these terms:

The CSB is an independent federal agency charged with investigating industrial chemical accidents. Headquartered in Washington, DC, the agency’s board members are appointed by the President and confirmed by the Senate.

The CSB’s mission is to “drive chemical safety change through independent investigation to protect people and the environment.” The CSB’s vision is “a nation safe from chemical disasters.” The CSB conducts root cause investigations of chemical accidents at fixed industrial facilities. Root causes are usually deficiencies in safety management systems, but can be any factor that would have prevented the accident if that factor had not occurred. Other accident causes often involve equipment failures, human errors, unforeseen chemical reactions or other hazards. The agency does not issue fines or citations, but does make recommendations to plants, regulatory agencies such as the Occupational Safety and Health Administration (OSHA) and the Environmental Protection Agency (EPA), industry organizations, and labor groups. Congress designed the CSB to be non-regulatory and independent of other agencies so that its investigations might, where appropriate, review the effectiveness of regulations and regulatory enforcement.

The CSB was legislatively conceived on the model of the National Transportation Safety Board, and its responsibility is to conduct investigations of major chemical accidents in the United States and report its findings to the public. It is not subordinate to OSHA or the EPA, but it collaborates with those (and other) Federal agencies as appropriate (link). It has no enforcement powers; its function is solely to investigate, report, and recommend when serious chemical or petrochemical accidents have occurred.

One of its most important investigations concerned the explosion at BP’s Texas City refinery on March 23, 2005, which killed 15 workers, injured more than 170, and destroyed a substantial part of the refinery infrastructure. The CSB conducted an extensive investigation into the “root causes” of the accident and assigned substantial responsibility to BP’s corporate management of the facility. Here is the final report of that investigation (link), and here is a video prepared by CSB summarizing its main findings (link).

The key findings of the CSB report focus on the responsibility of BP management for the accident. Here is a summary of the CSB assessment of root causes:

The BP Texas City tragedy is an accident with organizational causes embedded in the refinery’s culture. The CSB investigation found that organizational causes linked the numerous safety system failures that extended beyond the ISOM unit. The organizational causes of the March 23, 2005, ISOM explosion are

  • BP Texas City lacked a reporting and learning culture. Reporting bad news was not encouraged, and often Texas City managers did not effectively investigate incidents or take appropriate corrective action.
  • BP Group lacked focus on controlling major hazard risk. BP management paid attention to, measured, and rewarded personal safety rather than process safety.
  • BP Group and Texas City managers provided ineffective leadership and oversight. BP management did not implement adequate safety oversight, provide needed human and economic resources, or consistently model adherence to safety rules and procedures.
  • BP Group and Texas City did not effectively evaluate the safety implications of major organizational, personnel, and policy changes.

Underlying almost all of these failures to manage this complex process with a priority on “process safety,” rather than simply personal safety, was a corporate mandate for cost reduction:

In late 2004, BP Group refining leadership ordered a 25 percent budget reduction “challenge” for 2005. The Texas City Business Unit Leader asked for more funds based on the conditions of the Texas City plant, but the Group refining managers did not, at first, agree to his request. Initial budget documents for 2005 reflect a proposed 25 percent cutback in capital expenditures, including on compliance, HSE, and capital expenditures needed to maintain safe plant operations.[208] The Texas City Business Unit Leader told the Group refining executives that the 25 percent cut was too deep, and argued for restoration of the HSE and maintenance-related capital to sustain existing assets in the 2005 budget. The Business Unit Leader was able to negotiate a restoration of less than half the 25 percent cut; however, he indicated that the news of the budget cut negatively affected workforce morale and the belief that the BP Group and Texas City managers were sincere about culture change. (176)

And what about corporate accountability? What did BP have to pay in recompense for its faulty management of the Texas City refinery and the resulting damages to workers and local residents? The answer is: remarkably little. OSHA assessed a fine of $50.6 million for BP’s violations of safety regulations (link, link), and BP committed to spend at least $500 million on corrective steps within the plant to protect the safety of workers. This was a record fine at the time; and yet it might very well be seen by BP corporate executives as a modest cost of doing business in this industry. It does not seem to be of the magnitude that would lead to fundamental change of culture, action, and management within the company.

Following release of the CSB report, BP commissioned a major review of safety in all five of its US-based refineries. This study became the Baker Panel’s Report of the BP U.S. Refineries Independent Safety Review Panel (January 2007) (link). The Baker Panel consisted of fully qualified experts on industrial and technological safety, well positioned to assess BP’s safety management and culture in its refinery operations. The Panel was specifically directed to refrain from analyzing responsibility for the Texas City disaster and to focus instead on the safety culture and management direction currently to be found in BP’s five refineries. Here are some central findings:

  • Based on its review, the Panel believes that BP has not provided effective process safety leadership and has not adequately established process safety as a core value across all its five U.S. refineries.
  • BP has not always ensured that it identified and provided the resources required for strong process safety performance at its U.S. refineries. Despite having numerous staff at different levels of the organization that support process safety, BP does not have a designated, high-ranking leader for process safety dedicated to its refining business.
  • The Panel also found that BP did not effectively incorporate process safety into management decision-making. BP tended to have a short-term focus, and its decentralized management system and entrepreneurial culture have delegated substantial discretion to U.S. refinery plant managers without clearly defining process safety expectations, responsibilities, or accountabilities.
  • BP has not instilled a common, unifying process safety culture among its U.S. refineries.
  • While all of BP’s U.S. refineries have active programs to analyze process hazards, the system as a whole does not ensure adequate identification and rigorous analysis of those hazards.
  • The Panel’s technical consultants and the Panel observed that BP does have internal standards and programs for managing process risks. However, the Panel’s examination found that BP’s corporate safety management system does not ensure timely compliance with internal process safety standards and programs at BP’s five U.S. refineries.
  • The Panel also found that BP’s corporate safety management system does not ensure timely implementation of external good engineering practices that support and could improve process safety performance at BP’s five U.S. refineries. (Summary of findings, xii-xiii)

These findings largely validate and support the critical assessment of BP’s safety management practices in the CSB report.

It seems clear that an important part of the substantial improvement in aviation safety over the past fifty years is the effective investigation and reporting provided by the NTSB, an authoritative and respected bureau of experts whom the public trusts when it comes to discovering the causes of aviation disasters. The CSB has a much shorter institutional history (it was authorized in 1990 and began operations in 1998), but we need to ask a parallel question here as well: does the CSB provide a strong lever for improving safety practices in the chemical and petrochemical industries through its accident investigations, or are industry actors largely free to continue their poor management practices indefinitely, safe in the knowledge that large chemical accidents are rare and the costs of occasional liability judgments are manageable?

Pervasive organizational and regulatory failures

It is intriguing to observe how pervasive organizational and regulatory failures are in our collective lives. Once you are sensitized to these factors, you see them everywhere. A good example is in the business section of today’s print version of the New York Times, August 1, 2019. There are at least five stories in this section that reflect the consequences of organizational and regulatory failure.

The first and most obvious story is one that has received frequent mention in Understanding Society: the Boeing 737 MAX disaster. In a story titled “FAA oversight of Boeing scrutinized,” the reporters describe a Senate hearing on FAA oversight earlier this week, in which members of the Senate Appropriations Committee questioned the process of certification of new aircraft currently used by the FAA.

Citing the Times story, Ms. Collins raised concerns over “instances in which FAA managers appeared to be more concerned with Boeing’s production timeline, rather than the safety recommendations of its own engineers.”

Senator Jack Reed referred to the need for a culture change to rebalance the relationship between regulator and industry. Agency officials continued to defend the certification process, which delegates 96% of the work of certification to the manufacturer.

This story highlights two common sources of organizational and regulatory failure. First is the fact of “production pressure” coming from the owner of a risky process, involving timing, supply of product, and profitability. This pressure leads the owner to push the organization hard in an effort to achieve its goals, often leading to safety and design failures. The second factor is the structural imbalance between powerful companies running complex and costly processes and the safety agencies tasked with overseeing and regulating their behavior. The regulatory agency, in this case the FAA, is under-resourced and lacks the expert staff needed to carry out a serious process of technical oversight in depth. The article does not identify a third factor, noted in prior posts on the Boeing disaster: the influence Boeing exerts on legislators, government officials, and the executive branch.

A second relevant story (on the same page as the Boeing story) concerns charges filed in Germany against the former CEO of Audi for his role in the vehicle emissions scandal, part of the long-standing deliberate effort by Volkswagen to deceive regulators about the emissions characteristics of its diesel engine and exhaust systems. The charges involve ordering the development of software designed to cheat diesel emissions testing. This ongoing story is primarily one of corporate dysfunction, in which corporate leaders engaged in unethical and dishonest activities on behalf of the company. Regulatory failure is not a prominent part of this story, because the efforts at deception were so carefully calculated that it is difficult to see how normal standards of regulatory testing could have detected them. Here the pressing problem is to understand how professional, experienced executives could have been led to undertake such actions, and how the corporation was vulnerable to this kind of improper behavior at multiple levels. Presumably there were staff throughout these automobile companies who were aware of improper behavior; the story quotes a mid-level staff person who writes in an email that “we won’t make it without a few dirty tricks.” So the difficult question for these corporations is how their internal systems failed to take note of dangerously improper behavior. The costs to Volkswagen and Audi in liability judgments and government penalties are truly vast, exceeding $22 billion in the United States alone, and surely outweigh the possible gains of the deception.

A similar story, this time from the tech industry, concerns Cisco Systems’ agreement to settle civil claims “that it sold video surveillance technology that it knew had a significant security flaw to federal, state and local government agencies.” Here again we find a case of corporate dishonesty concerning some of a company’s central products, leading to a public finding of malfeasance. The hard question is: what systems are in place at companies like Cisco to ensure ethical and honest presentation of the characteristics and potential defects of the products they sell? The imperative of always working to maximize profits and reduce costs leads to many kinds of dysfunctions within organizations, but this is a well-understood hazard. So profit-based companies need to have active and effective programs in place that encourage and enforce honest and safe practices by managers, executives, and frontline workers. Plainly those programs broke down at Cisco, Volkswagen, and Audi. (One of the very useful features of Tom Beauchamp’s Case Studies in Business, Society, and Ethics is the light it sheds on the genesis of unethical and dishonest behavior within a corporate setting.)

Now we go on to Christopher Flavelle’s story about home-building in flood zones. From a social point of view, it makes no sense to continue to build homes, hotels, and resorts in flood zones. The increasing destructiveness of violent storms and extreme weather events has been evident at least since the devastation of Hurricane Katrina. Flavelle writes:

There is overwhelming scientific consensus that rising temperatures will increase the frequency and severity of coastal flooding caused by hurricanes, storm surges, heavy rain and tidal floods. At the same time there is the long-term threat of rising seas pushing the high-tide line inexorably inland.

However, Flavelle reports research by Climate Central showing that in eight states the rate of home-building in flood zones since 2010 has exceeded the rate of home-building in non-flood zones. So what are the institutional and behavioral factors that produce this perverse outcome? The article points to the incentives of local municipalities to generate property-tax revenues, and of potential homeowners drawn by urban sprawl and the appeal of second homes on the water. Here is a tragically short-sighted development official in Galveston who finds that “the city has been able to deal with the encroaching water, through the installation of pumps and other infrastructure upgrades”: “You can build around it, at least for the circumstances today. It’s really not affected the vitality of things here on the island at all.” The factor not emphasized in this article is the role played by the National Flood Insurance Program in coastal (and riverine) development. If flood insurance rates were calculated in terms of the true riskiness of the proposed residence, hotel, or resort, the development would no longer be economically attractive. But, as the article makes clear, local officials do not like that answer because it interferes with “development” and property-tax growth. ProPublica has an excellent 2013 story on the perverse incentives created by the National Flood Insurance Program and its inequitable benefits to wealthier home-owners and developers (link). And here is an article by Christine Klein and Sandra Zellmer in the SMU Law Review on the dysfunctions of Federal flood policy (link):

Taken together, the stories reveal important lessons, including the inadequacy of engineered flood control structures such as levees and dams, the perverse incentives created by the national flood insurance program, and the need to reform federal leadership over flood hazard control, particularly as delegated to the Army Corps of Engineers.
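The underlying arithmetic of the perverse incentive that Klein and Zellmer describe is simple. Here is a back-of-the-envelope sketch; the home value, flood probability, damage fraction, and premiums are all hypothetical numbers invented for illustration:

    # Back-of-the-envelope: how a subsidized flood premium changes the calculus.
    # Every number here is hypothetical, chosen only to illustrate the incentive.
    home_value = 400_000          # hypothetical coastal home
    p_flood_per_year = 1 / 25     # suppose a 4% chance of flooding in any year
    avg_damage_fraction = 0.5     # suppose a flood destroys half the home's value

    fair_premium = p_flood_per_year * avg_damage_fraction * home_value
    subsidized_premium = 1_500    # hypothetical NFIP-style flat rate

    print(f"Actuarially fair premium: ${fair_premium:,.0f}/yr")       # $8,000/yr
    print(f"Subsidized premium:       ${subsidized_premium:,.0f}/yr")
    print(f"Implicit annual subsidy:  ${fair_premium - subsidized_premium:,.0f}/yr")

On these made-up numbers, a risk-priced policy would add $8,000 a year to the cost of carrying the house, while the subsidized policy hides $6,500 of that; it is precisely the hidden portion that keeps flood-zone building “economically attractive.”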

Here is a final story from the business section of the New York Times illustrating organizational and regulatory dysfunctions, this time from the interface between the health industry and big tech. The story describes an effort by DeepMind researchers to use artificial intelligence techniques to provide early diagnosis of otherwise hard-to-anticipate medical conditions like “acute kidney injury” (AKI). The approach proceeds by analyzing large numbers of patient medical records and attempting to identify precursor conditions that predict the occurrence of AKI. The primary analytical tool mentioned in the article is the set of algorithms associated with neural networks. In this instance the organizational and regulatory dysfunction is latent rather than explicit, and it has to do with patient privacy. DeepMind is a business unit within Alphabet, Google’s parent company, and DeepMind researchers gained access to large volumes of patient data from the UK National Health Service. There is now regulatory concern in the UK and the US about the privacy of patients whose data may wind up in the DeepMind analysis, and ultimately in Google’s direct control. “Some critics question whether corporate labs like DeepMind are the right organization to handle the development of technology with such broad implications for the public.” Here the issue is complicated. It is of course a good thing to be able to diagnose disorders like AKI in time to correct them. But the misuse and careless custody of user data by numerous big tech companies, especially Facebook, suggest that sensitive personal data like medical files need to be carefully secured by effective legislation and regulation. So far the regulatory system appears inadequate for the protection of individual privacy in a world of massive databases and large-scale computing capabilities. The recent FTC $5 billion settlement imposed on Facebook, large as it is, may not suffice to change Facebook’s business practices (link).
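For concreteness, here is a schematic sketch of the kind of model the article gestures at: a small neural network trained on patient-record features to flag elevated AKI risk. The data and feature meanings are synthetic and the model is deliberately tiny; by DeepMind’s published account, its actual system is far more elaborate (a sequential model over full electronic health records).

    # A schematic sketch, not DeepMind's system: a small neural classifier on
    # synthetic "patient record" features. All feature meanings are invented.
    import numpy as np
    from sklearn.neural_network import MLPClassifier
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n = 2000
    X = rng.normal(size=(n, 4))   # e.g., creatinine trend, age, BP, med count
    # Synthetic ground truth: risk driven by the first two features plus noise.
    y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=n) > 1.0).astype(int)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=0)
    clf.fit(X_tr, y_tr)
    print(f"Held-out accuracy: {clf.score(X_te, y_te):.2f}")

The privacy issue arises precisely because the feature matrix must be assembled from real patients’ records; the model itself is ordinary, but the data pipeline is where the regulatory questions live.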

(I didn’t find anything in the sports section today that illustrates organizational and regulatory dysfunction, but of course these kinds of failures occur in professional and college sports as well. Think of doping scandals in baseball, cycling, and track and field, sexual abuse scandals in gymnastics and swimming, and efforts by top college football programs to evade NCAA regulations on practice time and academic performance.)
