Why do regulatory organizations fail?

Why is Charles Perrow a pessimist about government regulation?

Perrow is a leading researcher in the sociology of organizations, and he is a singular expert on accidents and failures. Several of his books are classics in their field — Normal Accidents: Living with High-Risk Technologies, The Next Catastrophe: Reducing Our Vulnerabilities to Natural, Industrial, and Terrorist Disasters, Organizing America: Wealth, Power, and the Origins of Corporate Capitalism. So why is he so gloomy about the ability of governmental organizations to protect the public from large failures and disasters of various kinds — hurricanes, floods, chemical plant fires, software failures, terrorism? He is not a relentless critic of organizations such as the EPA, the Department of Justice, or the Food and Drug Administration, but his assessment of their capacity for success is dismal.

We should not expect too much of organizations, but the DHS is extreme in its dysfunctions. As with all organizations, the DHS has been used by its masters and outsiders for purposes that are beyond its mandate, and the usage of the DHS has been extreme. One major user of the DHS is Congress. While Congress is the arm of the government that is closest to the people, it is also the one that is most influenced by corporations and local interest groups that do not have the interests of the larger community in mind. (The Next Catastrophe, kl 205)

I don’t think that Perrow’s views derive from the general skeptical view that organizations never succeed in accomplishing the functions we assign to them — hospitals, police departments, labor unions, universities, public health departments. And in fact his important book Complex Organizations: A Critical Essay provided a constructive description of the field of organizational studies when it appeared in 1972 and was updated in 2014 (link).

Instead, there seem to be particular reasons why large governmental organizations designed to protect the public are likely to fail, in Perrow’s assessment. It is organizations that are designed to regulate risky activities, and those charged with creating prudent long-term plans for the future, that seem particularly vulnerable, in his account. So what are those reasons for failure in these kinds of organizations?

FEMA is faulted, for example, for its failure to plan adequately for Hurricane Katrina and to provide emergency relief to the people of New Orleans and other parts of the Gulf region. Poor planning, incompetent executives at the top, politicized directions coming from the White House, poor coordination across sub-units, and poor internal controls eventually resulted in a historic failure. These are fairly routine organizational failures that could happen within the United Parcel Service corporate headquarters as easily as in Washington.

The Nuclear Regulatory Commission is faulted for its oversight of safety in nuclear plants, including Three Mile Island, Davis-Besse, and Shoreham. Key organizational faults include regulatory capture by owners and the nuclear industry, excessive dependence on specific key legislators, commissioners who are politically beholden, and insufficient personnel to carry out intensive inspection regimes.

Perrow’s key ideas about failures in the industrial systems themselves seem not to be central in his negative assessment of government regulatory organizations. The features of “complex systems” and “tightly coupled processes” that are so central to his theory of normal accidents in industrial systems like nuclear power plants play only incidental roles in his analysis of regulatory failure. Agencies are neither complex nor tightly coupled in the way a petroleum processing plant is. In fact, an outside observer might hypothesize that a somewhat more tightly coupled system in the NRC or the EPA (a more direct connection among the scientists, engineering experts, inspectors, and commissioners) might actually improve performance.

Instead, his analysis of regulatory failure depends on a different set of axes: interests, influence, and power. Regulatory agencies fail, in Perrow’s accounts, when their top administrators have bureaucratic interests and dependencies that diverge from the mission of safety, when powerful outsiders and owners have the capacity to influence rules, policies, and implementation, and when political and economic power is deployed to protect the interests of powerful actors. (All these defects are apparent in Trump administration appointments to federal agencies with regulatory responsibilities.)

Interestingly, these factors have also played a central role in Perrow’s sociological thinking about the emergence of the twentieth-century corporation; he views corporations as vehicles for the concentration of power:

Our economic organizations — business and industry — concentrate wealth and power; socialize employees and customers alike to meet their needs; and pass off to the rest of society the cost of their pollution, crowding, accidents, and encouragement of destructive life styles. In the vaunted “free market” economy of the United States, regulation of business and industry to prevent or mitigate this market failure is relatively ineffective, as compared to that enacted by other industrialized countries. (Organizing America, 1-2)

So the primary foundation of Perrow’s assessment of the likelihood of organizational failure when it comes to government regulation derives from the role that economic and political power plays in deforming the operations of major government organizations to serve the interests of the powerful. Regulatory agencies are “captured” by the powerful industries they are supposed to oversee, whether through influence on the executive branch or through merciless lobbying of the legislative branch. Commissioners are often very sympathetic to the business needs of the sector they regulate, and strive to avoid “undue regulatory burden”.

This leads us to a fascinating question: is there a powerful constituency for safety that could be a counterweight to corporate power and a bulwark for honest, scientifically guided regulatory regimes? Is a more level playing field between economic interests and the public’s interests in effective safety regulation possible?

We may want to invoke the public at large, and it is true that public opinion sometimes effectively demands government intervention for safety. But the public is generally limited in several important ways. Only a small set of issues manage to become salient for the public. Further, issues only remain salient for a limited period of time. And the salience of an issue is often geographically and demographically bounded. There was intense opposition to the Shoreham nuclear plant siting decision on Long Island, but the public in Chicago and Dallas did not mobilize around the issue. Sometimes vocal public opinion prevails, but much more common is the scenario where public interest wanes and profit-motivated corporate interests persist. (Pepper Culpepper lays out the logic of salience and unequal power between a diffuse public and a concentrated corporate interest in Quiet Politics and Business Power: Corporate Control in Europe and Japan.)

Other pertinent voices for safety are public interest organizations — the Union of Concerned Scientists, Friends of the Earth, the Bulletin of the Atomic Scientists. Organizations like these have succeeded in creating a national base of support, they have drawn resources in support of their efforts, and they have a greater organizational capacity to persist over an extended period of time. (In another field of advocacy, organizations like the Anti-Defamation League and the Southern Poverty Law Center have succeeded in maintaining organizational focus on the dangers of hate-based movements.) So public interest organizations sometimes have the capacity and staying power to advocate for stronger regulation.

Investigative journalism and a free press are also highly relevant in exposing regulatory failures and enhancing performance of safety organizations. The New York Times and Washington Post coverage of the FAA’s role in certification of the 737 Max will almost certainly lead to improvements in this area of aircraft safety. (Significantly, when I made this statement concerning the link between industrial safety in China and a free press, I was told that “this is a sensitive subject in China.”)

(These examples are drawn from the national level of government. Sometimes local governments — e.g. police departments and zoning boards — are captured as well, when organized crime “firms” and land developers are able to distort regulations and enforcement in their favor. But it may be that organizations at this level of government are a bit more visible to their publics, and therefore somewhat less likely to bend to the dictates of powerful local interests. Jessica Trounstine addresses these kinds of issues in Political Monopolies in American Cities: The Rise and Fall of Bosses and Reformers (link).)

The second primitive accumulation

One of the more memorable parts of Capital is Marx’s description of the “so-called primitive accumulation of capital” — the historical process where rural people were dispossessed of access to land and forced into industrial employment in cities like Birmingham and Manchester (link). It seems as though we’ve seen another kind of primitive accumulation in the past thirty years — the ruin of well-paid manufacturing jobs based on unionized labor, the disappearance of local retail stores, the extinction of bookstores and locally owned hardware stores, all of which offered a large number of satisfying jobs. We’ve seen a new set of bad choices for displaced workers — McDonald’s servers, Walmart greeters, and Amazon fulfillment workers. And this structural economic change threatens to create a permanent under-class of workers earning just enough to get by.

So what is the future of work and class in advanced economies? Scott Shane’s major investigative story in the New York Times describing Amazon’s operations in Baltimore (link) makes for sobering reading on this question. The story documents the intensity, pressure, and stress created for workers in an Amazon fulfillment center in Baltimore by Amazon’s system of work control. This system depends on real-time monitoring of worker performance, with automatic firing of workers who fall short on speed and accuracy after two warnings. Other outlets have highlighted the health and safety problems created by the Amazon system, including this piece on worker safety in the Atlantic by Will Evans; link. It is a nightmarish description of a work environment, and hundreds of thousands of workers are employed under these conditions.

Imagine the difference you would experience as a worker in the hardware store mentioned in the New York Times story (driven out of business by online competition) and as a worker in an Amazon fulfillment center. In the hardware store you provide value to the business and the customers; you have social interaction with your fellow workers, your boss, and the customers; you work in a human-scale enterprise that actually cares whether you live or die, whether you are sick or well; and you have a reasonable degree of self-direction in your work. Your expertise in home improvement, tools, and materials is valuable to the customers, which brings them back for the next project, and it is valuable to you as well. You have the satisfaction of having knowledge and skills that make a difference in other people’s lives. In the fulfillment center your every move is digitally monitored over the course of your 10-hour shift, and if you fall short in productivity or quality after two warnings, you are fired. You have no meaningful relationships with fellow workers — how can you, with the digital quotas you must fulfill every minute, every hour, every day? And you have no — literally no — satisfaction and fulfillment as a human being in your work. The only value of the work is the $15 per hour that you are paid, and even that (about $30,000 per year) is not enough to support you or your family. As technology writer Amy Webb of the Future Today Institute is quoted in the Times article, it is not that we may be replaced by robots; “it’s that we’ve been relegated to robot status.”

What kind of company is that? It is hard to avoid the idea that it is the purest expression that we have ever seen of the ideal type of a capitalist enterprise: devoted to growth, cost avoidance, process efficiency, use of technology, labor control, rational management, and strategic and tactical reasoning based solely on business growth and profit-maximizing calculations. It is a Leviathan that neither Hobbes nor Marx could really have visualized. And social wellbeing — of workers, of communities, of country, of the global future — appears to have no role whatsoever in these calculations. The only affirmative values expressed by the company are “serving the consumer” and being a super-efficient business entity.

What is most worrisome about the Amazon employment philosophy is its single-minded focus on “worker efficiency” at every level, using strict monitoring techniques and quotas to enforce efficient work. And the ability to monitor is increased enormously by the use of technology — sensors, cameras, and software that track the worker’s every movement. It is the apotheosis of F.W. Taylor’s theories from the 1900s of “scientific management” and time-motion studies. Fundamentally Taylor regarded the worker as a machine-like component of the manufacturing process, whose motions needed to be specified and monitored so as to bring about the most efficient possible process. And, as commentators of many ideological stripes have observed, this is a fundamentally dehumanizing view of labor and the worker. This seems to be precisely the model adopted by Amazon, not only in its fulfillment centers but also among its delivery drivers, its professional staff, and every other segment of the workforce Amazon can capture.

Business and technology historian David Hounshell presciently noticed the resurgence of Taylorism in a 1988 Harvard Business Review article on “modern manufacturing”; link. (This was well before the advent of online business and technology-based mega-companies.) Here are a few relevant paragraphs from his piece:

Rather than seeing workers as assets to be nurtured and developed, manufacturing companies have often viewed them as objects to be manipulated or as burdens to be borne. And the science of manufacturing has taken its toll. Where workers were not deskilled through extreme divisions of labor, they were often displaced by machinery. For many companies, the ideal factory has been — and continues to be — a totally automated, workerless facility. 

Now in the wake of the eroding competitive position of U.S. manufacturing companies, is it time for an end to Taylor’s management tradition? The books answer in the affirmative, calling for the institution of a less mechanistic, less authoritarian, less functionally divided approach to manufacturing. Dynamic Manufacturing focuses explicitly on repudiating Taylorism, which it takes to be a system of “command and control.” American Business: A Two-Minute Warning is written in a more popular vein, but characterizes U.S. manufacturing methods and the underlying mind-set of manufacturing managers in unmistakably similar ways. Taylorism is the villain and the anachronism. 

Predictably, both books arrive at their diagnoses and prescriptions through their respective evaluations of the “Japanese miracle.” Whereas U.S. manufacturing is rigid and hierarchical, Japanese manufacturing is flexible, agile, organic, and holistic. In the new competitive environment — which favors the company that can continually generate new, high-quality products — the Japanese are more responsive. They will continue to dominate until U.S. manufacturers develop manufacturing units that are, in Hayes, Wheelwright, and Clark’s words, “dynamic learning organizations.” Their book is intended as a primer. (link)

Plainly the more constructive ideas associated with human resources theory about worker motivation, knowledge, and creativity play no role in Amazon’s thinking about the workplace. And this implies a grim future for work — not only in this company, but in the many others that emulate the workplace model pioneered by Amazon.

The abuses of the first fifty years of industrial capitalism eventually came to an end through a powerful union movement. Workers in railroads, textiles, steel, and the automobile industry succeeded in creating union organizations that were able to effectively represent their interests in the workplace. So where is the Amazon worker’s ability to resist? The New York Times story (link) makes it clear that individual workers have almost no ability to influence Amazon’s practices. They can choose not to work for Amazon, but they cannot join a union, because Amazon has effectively resisted unionization. And in places like Baltimore and other cities where Amazon is hiring, the other job choices are even worse (even lower paid, if they exist at all). Amazon makes a great deal of money on their work, and it builds its signature initiatives (one-day delivery) on the Chaplin-esque speed with which that work is completed. But workers have very little ability to push the workplace towards a more human scale, one in which their positive human capacities find fulfillment. An Amazon fulfillment center is anything but that when it comes to the lives of the workers who make it run.

Is there a better philosophy that Amazon might adopt for its work environments? Yes. It is a framework that places worker wellbeing at the same level as efficiency, “1-day delivery”, and profitability. It is an approach that gives greater flexibility to shop-floor-level workers, and relaxes to some degree the ever-rising quotas for piece work per minute. It is an approach that sets workplace expectations in a way that fully considers the safety, stress, and health of the workers. It is an approach that embodies genuine respect and concern for its workers — not as a public relations initiative, but as a guiding philosophy of the workplace.

There is a hard question and a harder question posed by this idea, however. Is there any reason to think that Amazon will ever evolve in this more humane direction? And harder, is there any reason to think that any large modern corporation can embody these values? Based on the current behavior of Amazon as a company, from top to bottom, the answer to the first question is “no, not unless workers gain real power in the workplace through unionization or some other form of representation in production decisions.” And to the second question, a qualified yes: “yes, a more humane workplace is possible, if there is broad involvement in business decisions by workers as well as shareholders and top executives.” But this too requires a resurgence of some form of organized labor — which our politics of the past 20 years have discouraged at every turn.

Or to quote Oliver Goldsmith in The Deserted Village (1770):

Ill fares the land, to hastening ills a prey,
Where wealth accumulates, and men decay.
Princes and lords may flourish, or may fade;
A breath can make them, as a breath has made:
But a bold peasantry, their country’s pride,
When once destroy’d, can never be supplied.

So where did the dispossessed wind up in nineteenth century Britain? Here is how Engels described the social consequences of this “primitive accumulation” for the working people of Britain in his book, The Condition of the Working Class in England:

It is only when [the observer] has visited the slums of this great city that it dawns upon him that the inhabitants of modern London have had to sacrifice so much that is best in human nature in order to create those wonders of civilisation with which their city teems. The vast majority of Londoners have had to let so many of their potential creative faculties lie dormant, stunted and unused in order that a small, closely-knit group of their fellow citizens could develop to the full the qualities with which nature has endowed them. (30)

This passage, written in 1845, could with minor changes of detail describe the situation of Amazon workers today. “The vast majority … have had to let so many of their potential creative faculties lie dormant, stunted and unused in order that a small, closely-knit group of their fellow citizens could develop to the full the qualities with which nature has endowed them.”

And what about income and standard of living? The graph of median US income by quintile above, in constant 2018 dollars, tells a very stark story. Since 1967 only the top quintile of household income has shown significant growth (over a timeframe of more than fifty years), and the top 5% of households show the greatest increase of any group. 80% of US households are barely better off today than they were in 1967, whereas the top 5% of households have increased their incomes by almost 250% in real terms. This has a very clear, unmistakable implication: working people, including service workers, industrial workers, and most professionals, have received a declining share of the economic product of the nation. Amazon warehouse workers fall in the second-lowest quintile (poorest 21-40%). (It would be very interesting to have a time series of Amazon’s wage bill for blue-collar and white-collar wages, excluding top management, as a fraction of company revenues and net revenues since 2005.)

Here is a relevant post on the possibilities created for a more fair industrial society by the institution of worker-owned enterprises (link), and here is a post on the European system of workers councils (link), a system that gives workers greater input into decisions about operations and work conditions on the shop floor.

Organizations as open systems

Key to understanding the “ontology of government” is the empirical and theoretical challenge of understanding how organizations work. The activities of government encompass organizations across a wide range of scales, from the local office of the Department of Motor Vehicles (40 employees) to the Department of Defense (861,000 civilian employees). Having the best understanding possible of how organizations work and fail is crucial to understanding the workings of government.

I have given substantial attention to the theory of strategic action fields as a basis for understanding organizations in previous posts (link, link). The basic idea in that approach is that organizations are a bit like social movements, with active coalition-building, conflicting goals, and strategic jockeying making up much of the substantive behavior of the organization. It is significant that organizational theory as a field has moved in this direction in the past fifteen years or so as well. A good example is Scott and Davis, Organizations and Organizing: Rational, Natural and Open System Perspectives (2007). Their book is intended as a “state of the art” textbook in the field of organizational studies. And the title expresses some of the shifts that have taken place in the field since the work of March, Simon, and Perrow (link, link). The word “organizing” in the title signals the idea that organizations are no longer looked at as static structures within which actors carry out well-defined roles, but are instead dynamic processes in which active efforts by leaders, managers, and employees define goals and strategies and work to carry them out. And the “open system” phrase highlights the point that organizations always exist and function within a broader environment — political constraints, economic forces, public opinion, technological innovation, other organizations, and, today, climate change and environmental disaster.

Organizations themselves exist only as a complex set of social processes, some of which reproduce existing modes of behavior and others that serve to challenge, undermine, contradict, and transform current routines. Individual actors are constrained by, make use of, and modify existing structures. (20)

Most analysts have conceived of organizations as social structures created by individuals to support the collaborative pursuit of specified goals. Given this conception, all organizations confront a number of common problems: all must define (and redefine) their objectives; all must induce participants to contribute services; all must control and coordinate these contributions; resources must be garnered from the environment and products or services dispensed; participants must be selected, trained, and replaced; and some sort of working accommodation with the neighbors must be achieved. (23)

Scott and Davis analyze the field of organizational studies in several dimensions: sector (for-profit, public, non-profit), levels of analysis (social psychological level, organizational level, ecological level), and theoretical perspective. They emphasize several key “ontological” elements that any theory of organizations needs to address: the environment in which an organization functions; the strategy and goals of the organization and its powerful actors; the features of work and technology chosen by the organization; the features of formal organization that have been codified (human resources, job design, organizational structure); the elements of “informal organization” that exist in the entity (culture, social networks); and the people of the organization.

They describe three theoretical frameworks through which organizational theories have attempted to approach the empirical analysis of organizations. First, the rational framework:

Organizations are collectivities oriented to the pursuit of relatively specific goals. They are “purposeful” in the sense that the activities and interactions of participants are coordinated to achieve specified goals…. Organizations are collectivities that exhibit a relatively high degree of formalization. The cooperation among participants is “conscious” and “deliberate”; the structure of relations is made explicit. (38)

From the rational system perspective, organizations are instruments designed to attain specified goals. How blunt or fine an instrument they are depends on many factors that are summarized by the concept of rationality of structure. The term rationality in this context is used in the narrow sense of technical or functional rationality (Mannheim, 1950 trans.: 53) and refers to the extent to which a series of actions is organized in such a way as to lead to predetermined goals with maximum efficiency. (45)

Here is a description of the natural-systems framework:

Organizations are collectivities whose participants are pursuing multiple interests, both disparate and common, but who recognize the value of perpetuating the organization as an important resource. The natural system view emphasizes the common attributes that organizations share with all social collectivities. (39)

Organizational goals and their relation to the behavior of participants are much more problematic for the natural than the rational system theorist. This is largely because natural system analysts pay more attention to behavior and hence worry more about the complex interconnections between the normative and the behavioral structures of organizations. Two general themes characterize their views of organizational goals. First, there is frequently a disparity between the stated and the “real” goals pursued by organizations—between the professed or official goals that are announced and the actual or operative goals that can be observed to govern the activities of participants. Second, natural system analysts emphasize that even when the stated goals are actually being pursued, they are never the only goals governing participants’ behavior. They point out that all organizations must pursue support or “maintenance” goals in addition to their output goals (Gross, 1968; Perrow, 1970:135). No organization can devote its full resources to producing products or services; each must expend energies maintaining itself. (67)

And the “open-system” definition:

From the open system perspective, environments shape, support, and infiltrate organizations. Connections with “external” elements can be more critical than those among “internal” components; indeed, for many functions the distinction between organization and environment is revealed to be shifting, ambiguous, and arbitrary…. Organizations are congeries of interdependent flows and activities linking shifting coalitions of participants embedded in wider material-resource and institutional environments.  (40)

(Note that the natural-system and “open-system” definitions are very consistent with the strategic-action-field approach.)

Here is a useful table provided by Scott and Davis to illustrate the three approaches to organizational studies:

An important characteristic of recent organizational theory has to do with the way that theorists think about the actors within organizations. Instead of looking at individual behavior within an organization as being fundamentally rational and goal-directed, primarily responsive to incentives and punishments, organizational theorists have come to pay more attention to the non-rational components of organizational behavior — values, cultural affinities, cognitive frameworks and expectations.

This attention to culture and mental frameworks leads to another important shift in next-generation ideas about organizations: a focus on the informal practices, norms, and behaviors that exist within organizations. Rather than looking at an organization as a rational structure implementing mission and strategy, contemporary organization theory affirms the idea that informal practices, norms, and cultural expectations are ineliminable parts of organizational behavior. Here is a good description of the concept of culture provided by Scott and Davis in the context of organizations:

Culture describes the pattern of values, beliefs, and expectations more or less shared by the organization’s members. Schein (1992) analyzes culture in terms of underlying assumptions about the organization’s relationship to its environment (that is, what business are we in, and why); the nature of reality and truth (how do we decide which interpretations of information and events are correct, and how do we make decisions); the nature of human nature (are people basically lazy or industrious, fixed or malleable); the nature of human activity (what are the “right” things to do, and what is the best way to influence human action); and the nature of human relationships (should people relate as competitors or cooperators, individualists or collaborators). These components hang together as a more-or-less coherent theory that guides the organization’s more formalized policies and strategies. Of course, the extent to which these elements are “shared” or even coherent within a culture is likely to be highly contentious (see Martin, 2002)—there can be subcultures and even countercultures within an organization. (33)

Also of interest is Scott’s earlier book Institutions and Organizations: Ideas, Interests, and Identities, which first appeared in 1995 and is now in its 4th edition (2014). Scott looks at organizations as a particular kind of institution, with differentiating characteristics but commonalities as well. The IBM Corporation is an organization; the practice of youth soccer in the United States is an institution; but both have features in common. In some contexts, however, he appears to distinguish between institutions and organizations, with institutions constituting the larger normative, regulative, and opportunity-creating environment within which organizations emerge.

Scott opens with a series of crucial questions about organizations — questions for which we need answers if we want to know how organizations work, what confers stability upon them, and why and how they change. Out of a long list of questions, these seem particularly important for our purposes here: “How are we to regard behavior in organizational settings? Does it reflect the pursuit of rational interests and the exercise of conscious choice, or is it primarily shaped by conventions, routines, and habits?” “Why do individuals and organizations conform to institutions? Is it because they are rewarded for doing so, because they believe they are morally obligated to obey, or because they can conceive of no other way of behaving?” “Why is the behavior of organizational participants often observed to depart from the formal rules and stated goals of the organization?” “Do control systems function only when they are associated with incentives … or are other processes sometimes at work?” “How do differences in cultural beliefs shape the nature and operation of organizations?” (Introduction).

Scott and Davis’s work is of particular interest here because it supports analysis of a key question I’ve pursued over the past year: how does government work, and what ontological assumptions do we need to make in order to better understand the successes and failures of government action? What I have called organizational dysfunction in earlier posts (link, link) finds a very comfortable home in the theoretical spaces created by the intellectual frameworks of organizational studies described by Scott and Davis.

Personalized power at the local level

How does government work? We often understand this question as one involving the institutions and actors within the Federal government. But there is a different zone of government and politics that is also very important in public life in the United States: the practical politics and exercise of power at the state and local levels.

Here is an earlier post that addresses some of these issues as well; link. There I present three scenarios for how our democracy works: the ideal case, the “not-so-ideal” case, and the “nightmare” case:

The Nightmare Scenario: Elected officials have no sincere adherence to the public good; they pursue their own private and political interests through all the powers available to them. Elected officials are sometimes overtly corruptible, accepting significant gifts in exchange for official performance. Elected officials are intimidated by the power of private interests (corporations) to fund electoral opposition to their re-election. Regulatory agencies are dominated by the industries they regulate; independent commissioners are forced out of office; and regulations are toothless when it comes to environmental protection, wilderness protection, health and safety in the workplace, and food safety. Lobbyists for special interests and corporations have almost unrestricted access to legislators and regulators, and are generally able to achieve their goals.

This is the nightmare scenario if one cares about democracy, because it implies that the apparatus of government is essentially controlled by private interests rather than the common good and the broad interests of society as a whole. It isn’t “pluralism”, because there are many important social interests not represented in this system in any meaningful way: poor people, non-unionized workers, people without health insurance, inner-city youth, the environment, people exposed to toxic waste, …

If anything, personal networks of power and influence appear to be of even greater importance at this level of government than at the Federal level.

So how does personal power work at the local level? Power within a democracy is gained and wielded through a variety of means: holding office within an important institution; marshaling support from a political party; possessing a network of powerful supporters in business, labor, and advocacy groups; securing access to significant sources of political funding; and other mechanisms besides. Mayors, governors, and county executives have powers of appointment to reward or punish their supporters and competitors; they have the ability to influence purchasing and other economic levers of the municipality; and they have favors to trade with legislators.

Essentially the question to consider here is how power is acquired, exercised, and maintained by a few powerful leaders in state, county, and city government, and what are the barely-visible lines through which these power relations are implemented and maintained. This used to be called “machine politics,” but as Jessica Trounstine demonstrates in Political Monopolies in American Cities: The Rise and Fall of Bosses and Reformers, the phenomenon is broader than Tammany Hall and the mayor-boss politics of the nineteenth century through Mayor Daley’s reign in Chicago. The term Trounstine prefers is “political monopolies”:

I argue that it is not whether a government is machine or reform that determines its propensity to represent the people, but rather its success at stacking the deck in its favor. When political coalitions successfully limit the probability that they will be defeated over the long term — when they eliminate effective competition — they achieve a political monopoly. In these circumstances the governing coalition gains the freedom to be responsive to a narrow segment of the electorate at the expense of the broader community. (KL 140)

What are the levers of influence available to a politician in state and local government that permit some executives to achieve monopoly power? How do mayors, county executives, and political party leaders exercise power over the decisions that are to be made? Once they have executive power they are able to reward friends and punish enemies through appointments to desirable jobs, through favorable access to government contracts (corrupt behavior!), through the power of their Rolodexes (their networks of relationships with other powerful people), through their influence on political party decision-making, through the power of some of their allies (labor unions, business associations, corporations), and through their ability to influence the flow of campaign funding. They have favors to dispense and they have punishments they can dole out.

Consider Southeast Michigan as an interesting example. Michigan’s largest counties have a history of long-term “monopoly” leadership. Wayne County was led for 15 years by Ed McNamara and Oakland County by L. Brooks Patterson, and both men wielded a great deal of power during their tenures. Neither was seriously challenged by strong competing candidates, and Patterson died in office at the age of 80. Some of the levers of power in Wayne County came to light during a corruption investigation in 2011. Below are links to several 2011 stories in MLive on the details of this controversy involving the Wayne County Executive and the Airport Authority Board.

Labor unions have a great deal of influence on the internal politics of the Democratic Party in Michigan. Dudley Buffa’s Union Power and American Democracy: The UAW and the Democratic Party, 1972-83 describes this set of political realities through the 1980s. Buffa shows that the UAW had extraordinary influence in the Democratic Party into the 1980s, and even with the decline in the size and influence of organized labor, it still has a virtual veto power over important Democratic Party decisions today.

As noted in many places in Understanding Society, corporations have a great deal of power in political decision-making in the United States. Corporate influence is wielded through effective lobbying, political and political action committee contributions, and the “social capital” of networks of powerful individuals. (Just consider the influence of Boeing on the actions of the FAA or the influence of the nuclear industry on the actions of the NRC.) G. William Domhoff (Who Rules America? Challenges to Corporate and Class Dominance) provided a classic treatment of the influence of corporate and business elites in the sphere of political power in the United States. He has also created a very useful website dedicated to helping other researchers discover the networks of power in other settings (link). Senator Sheldon Whitehouse and Melanie Wachtell Stinnett provide a more contemporary overview of the power that businesses have in American politics in Captured: The Corporate Infiltration of American Democracy.

When I speak of corporate power in politics, let me be very clear: I do not mean just the activities of the incorporated entities themselves. The billionaire owners of corporations are often actively engaged in battle to expand the influence of the corporations that give them their power and their wealth. Front groups and lobbying groups are often the ground troops when corporate powers don’t want to get their own hands dirty or when they want to institutionalize their influence. So-called philanthropic foundations are often the proxies for billionaire families who want influence and who launch these tools. (kl 214)

Contributors to Corporations and American Democracy provide extensive understanding of the legal and political history through which corporations came to have such extensive legal rights in the United States.

Business executives too have a great deal of influence on the Michigan legislature. Here is a Crain’s Detroit Business assessment of the top influencers in Lansing, “Michigan’s top power players as Lansing insiders see them — and how they wield that influence” (link). Top influencers in the business community, according to the Crain’s article, include Dan Gilbert, chairman of Quicken Loans Inc., Daniel Loepp, president and CEO of Blue Cross Blue Shield of Michigan, Rich Studley, CEO of Michigan Chamber of Commerce, Patti Poppe, CEO of Consumers Energy Co., and Mary Barra, CEO of General Motors Co. Most of these individuals are members of the state’s leading business organization, Business Leaders for Michigan (link). Collectively and individually these business leaders have a great deal of influence on the elected officials of the state.

Finally, elected officials themselves sometimes act in direct self-interest, either electoral or financial, and corruption is a recurring issue in local and state government in many states. Detroit’s mayor Kwame Kilpatrick, a string of Illinois governors, and other elected officials throughout the country were all convicted of corrupt actions leading to personal gain (link).

These kinds of influence and actions underline the extensive and anti-democratic role that a range of political actors play within the decision-making and rule-setting of local government: monopoly-holding political executives, political party officials, big business and propertied interests, labor unions, and special advocacy groups. It would be interesting to put together a scorecard of issues of interest to business, labor, and environmental groups, and see how often each constituency prevails. It is suggestive about the relative power of these various actors that the two issues of greatest interest to the business community in Michigan in recent years, repeal of the Michigan Business Tax and passage of “Freedom to Work” legislation, were both successful. (Here is an earlier post on the business tax reform issue in Michigan; link.)

Data for a case study of networks of influence in SE Michigan

Jeff Wattrick. November 2, 2011. “This didn’t start with Turkia Mullin: The inter-connected web of Wayne County politics from Ed McNamara to Renee Axt”, MLive (link)

___________. November 4, 2011. “Wayne County Executive Bob Ficano replaces top officials, vows to end ‘business as usual’”, MLive (link)

___________. November 7, 2011. “Renee Axt resigns as Chair of Wayne County Airport Authority”, MLive (link)

___________. November 8, 2011. “Almost half of Wayne County voters say Executive Bob Ficano should resign”, MLive (link)

Jim Schaefer and John Wisely. November 15, 2011. “Wayne Co. lawyer who quit is back”, Detroit Free Press (link)

David Sands. November 15, 2011. “Wayne County Corruption Probe Gathers Speed, Turkia Mullin To Testify”, Huffington Post (link)

Detroit had its own nationally visible political corruption scandal when Mayor Kwame Kilpatrick was charged with multiple counts of racketeering and corruption, for which he was eventually convicted. Stephen Yaccino, October 10, 2013. “Kwame M. Kilpatrick, Former Detroit Mayor, Sentenced to 28 Years in Corruption Case”, New York Times (link).

The internal machinations of Michigan’s political parties with respect to choosing candidates for office reflect the power of major “influencers”. Here is a piece about the choice of candidate for the office of secretary of state in the Democratic Party in 2002: Jack Lessenberry. March 30, 2002. “Austin has uphill fight in Michigan secretary of state race”, Toledo Blade (link).

Electronic Health Records and medical mistakes

Electronic Health Record systems (EHRs) have been broadly implemented by hospitals and health systems around the country as a way of increasing the accuracy, availability, and timeliness of patient health status and treatment information. (These systems are also sometimes called “Digital Medical Records” (DMRs).) They are generally regarded as an important forward step in improving the quality of healthcare. Here is a description of the advantages of Electronic Health Record systems, according to Athena Health:

The advantages of electronic health records in the clinical setting are numerous and important. In the 2012 edition of the Physician Sentiment Index™, published by athenahealth and Sermo, 81% of physicians said they believe EHRs improve access to clinical data. More than two-thirds said an EHR can actually improve patient care.

The use of an electronic health records system offers these clinical advantages:

  • No bulky paper records to store, manage and retrieve
  • Easier access to clinical data
  • The ability to establish and maintain effective clinical workflows
  • Fewer medical errors, improved patient safety and stronger support for clinical decision-making
  • Easier participation in Meaningful Use, Patient-Centered Medical Home (PCMH) and other quality programs, with electronic prompts ensuring that required data is recorded at the point of care
  • The ability to gather and analyze patient data that enables outreach to discreet populations
  • The opportunity to interact seamlessly with affiliated hospitals, clinics, labs and pharmacies

Considering all the advantages of electronic health records, and the rapidly growing electronic interconnectedness of the health care world, even if EHRs had not been mandated by health care reform, their development and eventual ubiquity in the health care industry was inevitable.

And yet, like any software system, EHR systems are capable of creating new errors; and some of those errors can be harmful to patients.

Nancy Leveson is an important expert on software system safety who has written extensively on the challenges of writing highly reliable software in safety-critical applications. Here are a few apt observations from her book Safeware: System Safety and Computers (1995).

Although it might seem that automation would decrease the risk of operator error, the truth is that automation does not remove people from systems — it merely moves them to maintenance and repair functions and to higher-level supervisory control and decision making. The effects of human decisions and actions can then be extremely serious. At the same time, the increased system complexity makes the decision-making process more difficult. (10)

The increased pace of change lessens opportunity to learn from experience. Small-scale and relatively nonhazardous systems can evolve gradually by trial and error. But learning by trial and error is not possible for many modern products and processes because the pace of change is too fast and the penalties of failure are too great. Design and operating procedures must be right the first time when there is potential for a major fire, explosion, or release of toxic materials. (12)

(To the last statement we might add “or harm to hospital patients through incorrect prescriptions or failed transmission of lab results”.)

The safety implications of computers exercising direct control over potentially dangerous processes are obvious. Less obvious are the dangers when … software generated data is used to make safety-critical decisions, … software is used in design analysis, … safety-critical data (such as blood bank data) is stored in computer databases. The FDA has received reports of software errors in medical instruments that led to mixing up patient names and data, as well as reports of incorrect outputs from laboratory and diagnostic instruments (such as patient monitors, electrocardiogram analyzers, and imaging devices). (23)

Automatic control systems [like aircraft autopilots] are designed to cope with the immediate effects of a deviation in the process — they are feedback loops that attempt to maintain a constant system state, and as such, they mask the occurrence of a problem in its early stages. An operator will be aware of such problems only if adequate information to detect them is provided. That such information is often not provided may be the result of the different mental models of the designers and experienced operators, or it may merely reflect financial pressures on designers due to the cost of providing operators with independent information. (117)
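
Leveson’s point about feedback loops masking early-stage problems can be made concrete with a toy simulation. The sketch below is purely illustrative; the scenario, the function name, and all of the parameters are invented for this post rather than drawn from Safeware. A controller holds a process variable near its setpoint by compensating for a slowly growing fault, so the value the operator watches looks healthy right up until the compensation limit is reached.

```python
# Illustrative only: a controller compensates for a slowly growing fault,
# masking it from the operator until the correction saturates.

def simulate(steps=200, setpoint=100.0, max_correction=20.0, fault_per_step=0.25):
    observed, corrections = [], []
    for t in range(steps):
        fault = fault_per_step * t               # slowly growing disturbance
        correction = min(fault, max_correction)  # controller compensates, up to its limit
        observed.append(setpoint - fault + correction)
        corrections.append(correction)
    return observed, corrections

observed, corrections = simulate()
# An operator watching only `observed` sees a flat line at the setpoint for the
# first 80 steps, even though the fault has been growing the whole time; the
# problem becomes visible only after the controller saturates.
print(observed[50], observed[150])  # 100.0, then 82.5
```

Independent information about the disturbance itself (the `corrections` series in this toy example) is exactly the kind of information Leveson says designers often fail to provide to operators.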

One of the cases examined in detail in Safeware is the Therac-25 radiation-therapy device, which, due to a software flaw in the treatment-plan entry module, began seriously injuring patients with excessive doses of radiation in 1985-87 (515 ff.). It had operated without incident thousands of times before the first accident.

So Leveson gives ample reason to be cautious about the safety implications of DMRs and the “fault pathways” through which their normal functioning might harm patients. What has been the experience so far, now that the healthcare industry has witnessed widespread adoption of DMR systems?

Two specific issues involving EHR errors affecting patient care have been identified in the past several years. The first is in the area of errors in the administration of prescription drugs, and the second is in the area of the handling and routing of medical test results. Both types of error have the potential to harm patients.

Jennifer Bresnick (link) summarizes the results of a report by the Pennsylvania Patient Safety Authority concerning medication errors caused by DMR systems. Medication errors (wrong medication, wrong dose, wrong patient, wrong frequency) can occur at several stages of the clinical process, including prescribing, transcribing, dispensing, and administration. The digital medical record is intended to dramatically reduce all these sources of error, but the Pennsylvania study shows that the DMR can also contribute to errors at each of these stages.

While EHRs and other technologies are intended to reduce errors and improve the safe, standardized, and well-documented delivery of care, some stakeholders believe that digital tools can simply serve to swap one set of mistakes for another. Poor implementation and lackluster user training can leave patients just as vulnerable to medication errors as they were when providers used paper charts, commented Staley Lawes, PharmD, BCPS, Patient Safety Analyst, and Matthew Grissinger, RPh, FISMP, FASCP, Manager of Medication Safety Analysis in the brief. (link)

Part of the blame, according to the Pennsylvania report, belongs to the design of the user interface:

For this reason, it is important to design a system with an intuitive user interface to minimize the risk for human error. Users should be able to easily enter and retrieve data and share information with other healthcare professionals.  When systems are designed without these considerations in mind, patients are subject to undue risk. (link)

The report contains several specific design standards that would improve the safety of the DMR system:

“The interaction between clinician and software is a key component that is to be taken into consideration when trying to improve the safety of health IT,” the report says. “Incident reports can provide valuable information about the types of HIT-related issues that can cause patient harm, and ongoing HIT system surveillance can help in developing medication safety interventions. (link)

It is clear that ongoing health IT system surveillance and remedial interventions are needed. Efforts to improve health IT safety should include attention to software interoperability, usability, and workflow. The relationship between clinician and software includes complex interactions that must be considered to optimize health IT’s contribution to medication safety.
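
One concrete way to picture what safer clinician-software interaction might mean at the prescribing stage is a guard at order entry. The sketch below is a hypothetical illustration in Python; the drug names, dose limits, and function are invented for this post and are not taken from the Pennsylvania report or from any real EHR. The idea is simply that an out-of-range dose triggers an explicit override step instead of being silently accepted.

```python
# Hypothetical order-entry safeguard; the drug table and dose limits are
# invented for illustration, not taken from any real formulary or EHR product.
DOSE_LIMITS_MG = {
    "methotrexate_weekly": (2.5, 25.0),
    "warfarin_daily": (1.0, 10.0),
}

def check_order(drug: str, dose_mg: float) -> str:
    """Return 'ok', 'needs_override', or 'unknown_drug' for an entered dose."""
    limits = DOSE_LIMITS_MG.get(drug)
    if limits is None:
        return "unknown_drug"        # route to human review rather than guessing
    low, high = limits
    if low <= dose_mg <= high:
        return "ok"
    return "needs_override"          # out-of-range dose requires explicit confirmation

# A wrong-frequency error of the kind the report describes (a weekly dose
# entered as though it were a daily one) surfaces as an out-of-range value:
print(check_order("methotrexate_weekly", 175.0))  # -> needs_override
```

The design choice that matters here is not the arithmetic but the interaction: the system asks the clinician to confirm an unusual entry rather than either blocking it outright or accepting it without comment.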

Yackel and Embi (link) treat the problem of test result management errors in “Unintended errors with EHR-based result management: a case series”. Here is their abstract:

Test result management is an integral aspect of quality clinical care and a crucial part of the ambulatory medicine workflow. Correct and timely communication of results to a provider is the necessary first step in ambulatory result management and has been identified as a weakness in many paper-based systems. While electronic health records (EHRs) hold promise for improving the reliability of result management, the complexities involved make this a challenging task. Experience with test result management is reported, four new categories of result management errors identified are outlined, and solutions developed during a 2-year deployment of a commercial EHR are described. Recommendations for improving test result management with EHRs are then given.

They identify test management errors at four stages of the clinical process:

  • results not correctly communicated to provider;
  • results communicated but never received or reviewed by the provider;
  • results reviewed, but appropriate action not recommended by provider;
  • appropriate recommendation made by provider, but action not carried out.

They make several key recommendations for improving the performance of DMR systems in managing test results: Develop fault-tolerant systems that automatically report delivery failures; use robust testing to find rare errors that occur both within and between systems; implement tracking mechanisms for critical tests, such as cancer screening and diagnostics; and deliver results directly to patients.
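
To make those recommendations a little more concrete, here is a minimal sketch of what a result-tracking mechanism might look like. The class and method names are invented and the logic is greatly simplified; this is not drawn from Yackel and Embi’s paper or from any real EHR product. The point is only to show results being tracked after they are sent, unacknowledged results being surfaced rather than silently lost, and overdue critical tests being escalated.

```python
# Minimal, invented sketch of result tracking: record every delivery, surface
# results that were never acknowledged, and escalate overdue critical tests.
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class ResultDelivery:
    test_id: str
    provider: str
    critical: bool
    sent_at: datetime
    acknowledged: bool = False

class ResultTracker:
    def __init__(self, ack_deadline: timedelta = timedelta(hours=72)):
        self.ack_deadline = ack_deadline
        self.deliveries = {}

    def record_sent(self, test_id, provider, critical):
        # A real system would also confirm transport-level delivery and report
        # failures automatically instead of assuming the message arrived.
        self.deliveries[test_id] = ResultDelivery(test_id, provider, critical, datetime.now())

    def record_acknowledged(self, test_id):
        self.deliveries[test_id].acknowledged = True

    def overdue(self, now=None):
        # Results sent but never reviewed by the provider: the second failure
        # mode in the list above.
        now = now or datetime.now()
        return [d for d in self.deliveries.values()
                if not d.acknowledged and now - d.sent_at > self.ack_deadline]

    def escalate_critical(self, now=None):
        # Critical tests (e.g., cancer screening and diagnostics) that remain
        # unacknowledged past the deadline get flagged for follow-up.
        return [d for d in self.overdue(now) if d.critical]
```

Even a toy version like this makes the failure modes explicit: a result is either sent, acknowledged, or overdue, and nothing can silently fall out of the workflow.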

These are just two types of errors that can arise in digital medical record management systems. It is evident that the designers and implementers of DMRs need to take the systems-safety approach described by Nancy Leveson and implement comprehensive safety failure analysis, both in terms of “safety case analysis” (discovery of failure scenarios) and after-event investigation to identify the source of the failure in the software and its human interface.

These examples are not intended to suggest that DMRs are hazardous and should be avoided. On the contrary, the consolidation and convenient presentation of patient information for the provider is clearly an important step forward. But it is crucial that designers and implementers keep safety at the center of their attention, and to have a healthy respect for the ways in which automated systems can incorporate incorrect assumptions, can produce unintended interactions among components, and can be presented in such a confusing way to the human provider that patient care is harmed.

(Here is a case in which several different errors conveyed through the digital medical record system, including biopsy and test results attached to the wrong patient, led to the wrong treatment for the patient. It is interesting to read because it reflects some of the complexity identified by Leveson in other system failures.)

Twelve years of Understanding Society

Understanding Society has now reached its twelfth anniversary of continuous publication. This represents 1,271 posts, and over 1.3 million words. According to Google Blogspot statistics, the blog has gained over 11 million pageviews since 2010. Just over half of visitors came from the United States, Great Britain, and Canada, with the remainder spread out over the rest of the world. The most popular posts are “Lukes on power” (134K) and “What is a social structure?” (124K).

I’ve continued to find writing the blog to be a great way of keeping several different lines of thought and research going. My current interest in “organizational causes of technology failures” has had a large presence in the blog in the past year, with just under half of the posts in 2019 on this topic. Likewise, a lot of the thinking I’ve done on the topic of “a new ontology of government” has unfolded in the blog. Other topic areas include the philosophy of social science, philosophy of technology, and theories of social ontology. A theme that was prominent in 2018 that is not represented in the current year is “Democracy and the politics of hate”, but I’m sure I’ll return to this topic in the coming months because I’ll be teaching a course on this subject in the spring.

I continue to look at academic blogging as a powerful medium for academic communication, creativity, and testing out new ideas. I began in 2007 by describing the blog as “open-source philosophy”, and it still has that character for me. And I continue to believe that my best thinking finds expression in Understanding Society. Every post that I begin starts with an idea or a question that is of interest to me on that particular day, and it almost always leads me to learning something new along the way.

I’ve also looked at the blog as a kind of experiment in the use of social media for serious academic purposes. Can blogging platforms and social media platforms like Twitter or Facebook contribute to academic progress? So it is worth examining the reach of the blog over time, and the population of readers whom it has touched. The graph of pageviews over time is interesting in this respect.

Traffic to the blog increased in a fairly linear way from the beginning date of the data collection in 2010 through about 2017, and then declined more steeply from 2017 through to the present. (The data points are pageviews per month.) At its peak the blog received about 150K pageviews per month, and it seems to be stabilizing now at about 100K pageviews per month. My impression is that a lot of the variation has to do with unobserved changes in search engine page ranking algorithms, resulting in falling numbers of referrals. The Twitter feed associated with the blog has just over 2,100 followers (@dlittle30), and the Facebook page for the blog registers 12,800 followers. The Facebook page is not a very efficient way of disseminating new posts from the blog, though, because Facebook’s algorithm for placing an item into the feed of a “follower” is extremely selective and opaque. A typical item may be fed into 200-400 of the feeds of the almost 13,000 individuals who have expressed interest in the page.

A surprising statistic is that about 75% of pageviews on the blog came through desktop requests rather than mobile requests (phone and tablet). We tend to think that most web viewing is occurring on mobile devices now, but that does not seem to be the case. Also interesting is that the content of the blog is mirrored to a WordPress platform (www.undsoc.org), and the traffic there is a small fraction of the traffic on the Blogspot platform (1,500 pageviews versus 80,000 pageviews).

So thanks to the readers who keep coming back for more, and thanks as well to those other visitors who come because of an interest in a very specific topic. It’s genuinely rewarding and enjoyable to be connected to an international network of people, young and old, who share an interest in how the social world works.

O-rings and production pressure

Allan McDonald’s Truth, Lies, and O-Rings: Inside the Space Shuttle Challenger Disaster (2009) has given me a somewhat different understanding of the Challenger launch disaster than I’ve gained from other sources, including Diane Vaughan’s excellent book The Challenger Launch Decision: Risky Technology, Culture, and Deviance at NASA. McDonald is a Morton Thiokol (MTI) insider who was present through virtually all aspects of the evolving solid rocket program at NASA in the two years leading up to the explosion in January 1986. He was director of the Space Shuttle Solid Rocket Motor Project during part of this time and he represented MTI at the formal Launch Readiness Review panels (LRRs) for several shuttle launches, including the fateful Challenger launch. He was senior management representative for MTI for the launch of STS-51L Challenger. His account gives a great deal of engineering detail about the Morton Thiokol engineering group’s ongoing concerns about the O-rings in the months preceding the Challenger disaster. This serves as a backdrop for a detailed analysis of the dysfunctions in decision-making in both NASA and Morton Thiokol that led to an insufficient priority being given to safety assessments.

It is worth noting that O-rings were a key part of other large solid-fuel rockets, including the Titan rocket. So there was a large base of engineering and test experience with the performance of the O-rings when exposed to the high temperatures and pressures of ignition and firing.

The biggest surprise to me is the level of informed, rigorous, and evidence-based concern that MTI engineers had about the reliability of the joint seal afforded by the primary and secondary seals on the solid rocket motors on the Shuttle system. These specialists had a very good and precise understanding of the mechanics of the problem. Further, there was a good engineering understanding of the expected (and required) time-sequence performance of the O-rings during ignition and firing. If the sealing action were delayed by even a few hundredths of a second, hot gas would be able to penetrate past the seal. These were not hypothetical worries, but instead were based on data from earlier launches demonstrating O-ring erosion and soot between the primary and secondary rings showing that super-hot gases had penetrated the primary seal. The worst damage and evidence of blowby had occurred one year earlier on flight STS-51C (January 25, 1985), the lowest-temperature launch yet attempted; that launch took place when the temperature was 53 degrees.

Launch temperatures for the rescheduled January 28 launch were projected to be extremely cold — 22-26 degrees was forecast on January 27, roughly 30 degrees colder than the previous January launch. The projected temperatures immediately raised alarm among the Utah-based engineering team and McDonald himself about the potential effects on the O-rings. A teleconference was scheduled for January 27 to receive the recommendation of the Utah-based Morton Thiokol engineers who had been focused on the O-ring problem about the minimum acceptable temperature for launch (95).

I tried to reach Larry Mulloy at his hotel but failed, so I called Cecil Houston, the NASA/MSFC Resident Manager at KSC. I alerted him of our concerns about the sealing capability of the field-joint O-rings at the predicted cold temperatures and asked him to set up the teleconference. (96)

The teleconference began at 8:30 pm on the evening before the launch. McDonald was present at Cape Canaveral for the Flight Readiness Review panel and participated in the teleconference, in which MTI engineering presented its analysis and recommendations, leading to a recommendation against launching in the expected cold weather conditions.

Thiokol’s engineering presentation consisted of about a dozen charts summarizing the history of the performance of the field-joints, some engineering analysis on the operation of the joints, and some laboratory and full-scale static test data relative to the performance of the O-rings at various temperatures. About half the charts had been prepared by Roger Boisjoly, our chief seal expert on the O-ring Seal Task Force and staff engineer to Jack Kapp, Manager of Applied Mechanics. The remainder were presented by Arnie Thompson, the supervisor of our Structures Section under Jack Kapp, and by Brian Russell, a program manager working for Bob Ebeling. (97)

Boisjoly’s next chart showed how cold temperature would reduce all the factors that helped maintain a good seal in the joint: lower O-ring squeeze due to thermal shrinkage of the O-ring; thicker and more viscous grease around the O-ring, making it slower to move across the O-ring groove; and higher O-ring hardness due to low temperature, making it more difficult for the O-ring to extrude dynamically into the gap for proper sealing. All of these things increased the dynamic actuation time, or timing function, of the O-ring, when at the very same time the O-ring could be eroding, creating a situation where the secondary seal might not be able to seal the motor if the primary O-ring was sufficiently eroded to prevent sealing in the joint. (99)

Based on their concerns about temperature and the effectiveness of the seals in the critical half-second of ignition, MTI engineering staff prepared the foundation for a recommendation not to launch at temperatures lower than 53 degrees. Their conclusion, as presented at the January 27 teleconference, was unequivocally against launch under these temperature conditions:

The final chart included the recommendations, which resulted in several strong comments and many very surprising reactions from the NASA participants in the teleconference. The first statement on the “Recommendations” chart stated that the O-ring temperature must be equal to or greater than 53° at launch, and this was primarily based upon the fact that SRM-15, which was the best simulation of this condition, worked at 53°. The chart ended with a statement that we should project the ambient conditions (temperature and wind) to determine the launch time. (102)

NASA lead Larry Mulloy contested the analysis and evidence in the slides and expressed great concern about the negative launch recommendation, and he asserted that the data were “inconclusive” in establishing a relationship between temperature and O-ring failure.

Mulloy immediately said he could not accept the rationale that was used in arriving at that recommendation. Stan Reinartz then asked George Hardy, Deputy Director of Science and Engineering at NASA/MSFC, for his opinion. Hardy said he was “appalled” that we could make such a recommendation, but that he wouldn’t fly without Morton Thiokol’s concurrence. Hardy also stated that we had only addressed the primary O-ring, and did not address the secondary O-ring, which was in a better position to seal because of the leak-check. Mulloy then shouted, “My God, Thiokol, when do you want me to launch, next April?” He also stated that “the eve of a launch is a helluva time to be generating new launch commit criteria!” Stan Reinartz entered the conversation by saying that he was under the impression that the solid rocket motors were qualified from 40° to 90° and that the 53° recommendation certainly was not consistent with that. (103)

Joe Kilminster, VP of Space Booster Programs at MTI, then requested a short caucus for the engineering team in Utah to reevaluate the data and consider their response to the skepticism voiced by NASA officials. McDonald did not participate in the caucus, but his reconstruction based on the memories of persons present paints a clear picture. The engineering experts did not change their assessment, and they were overridden by MTI executives Cal Wiggins (VP and General Manager of the Space Division) and Jerry Mason (Senior VP of Wasatch Operations). In opening the caucus discussion, Mason is quoted as saying “we need to make a management decision”. Engineers Boisjoly and Thompson reiterated their technical concerns about the functionality of the O-ring seals at low temperature, with no response from the senior executives. No members of the engineering team spoke up to support a decision to launch. Mason polled the senior executives, including Bob Lund (VP of Engineering), and said to Lund, “It’s time for you, Bob, to take off your engineering hat and put on your management hat.” (111) A positive launch recommendation was then conveyed to NASA, and the process in Florida resumed towards launch.

McDonald spends considerable time describing the business pressure that MTI was subject to from its largest customer, NASA. NASA was considering opening a second-source option that would allow competing companies to supply the solid fuel motors then provided by MTI, and it had also delayed signing a large contract (the Buy-III fixed cost bid) for the next batch of motors. The collective impact of these actions by NASA could cost MTI over a billion dollars. So MTI management appears to have been under great pressure to accommodate NASA managers’ preferences concerning the launch decision. And it is hard to avoid the conclusion that their decision placed business interests first and the professional judgments of their safety engineers second. In doing so they placed the lives of seven astronauts at risk, with tragic consequences.

And what about NASA? Here the pressures are somewhat less fully developed than in Vaughan’s account, but the driving commitment to achieve a 24-launch-per-year schedule seems to have been a primary motivation. Delayed launches significantly undermined this goal, which threatened the prestige of NASA, the hope of significant commercial revenue for the program, and the assurance of continued funding from Congress.

McDonald was not a participant in the caucus conference call, but he provides a reconstruction based on information provided by participants. In his understanding the engineers continued to defend their recommendation based on very concrete concerns about the effectiveness of the O-rings in extreme cold. Senior managers made clear their lack of support for this engineering judgment, and in the end Jerry Mason indicated that this would need to be a management decision. The FRR team was then informed that MTI had reconsidered its negative recommendation concerning launch. McDonald refused to sign the launch recommendation document, which was signed by his boss Joe Kilminster and faxed to the FRR team.

In hindsight it seems clear that both MTI executives and NASA executives deferred to business pressures of their respective organizations in the face of well-supported doubts about the safety of the launch. Is this a case of 20-20 vision after the fact? It distinctly appears not to be. The depth of knowledge, analysis, and rational concern that was present in the engineering group for at least a year prior to the Challenger disaster gave very specific and evidence-based reasons to abort this launch. This was not some intuitive, unspecific set of worries; it was an ongoing research problem that greatly concerned the engineers who were directly involved. And it appears there was no significant disagreement or uncertainty among them.

So it is hard to avoid a rather terrible conclusion, that the Challenger disaster was avoidable and should have been prevented. And the culpability lies with senior NASA and MTI executives who placed production pressures and business interests ahead of normal safety assessment procedures, and ahead of safety itself.

It is worth noting that Diane Vaughan’s assessment is directly at odds with this conclusion. She writes:

We now return to the eve of the launch. Accounts emphasizing valiant attempts by Thiokol engineers to stop the launch, actions of a few powerful managers who overruled a unanimous engineering position, and managerial failure to pass information about the teleconference to senior NASA administrators, coupled with news of economic strain and production pressure at NASA, led many to suspect that NASA managers had acted as amoral calculators, knowingly violating rules and taking extraordinary risk with human lives in order to keep the shuttle on schedule. However, like the history of decision making, I found that events on the eve of the launch were vastly more complex than the published accounts and media representations of it. From the profusion of information available after the accident, some actions, comments, and actors were brought repeatedly to public attention, finding their way into recorded history. Others, receiving less attention or none, were omitted. The omissions became, for me, details of social context essential for explanation. (LC 6215)

Young, Cook, Boisjoly, and Feynman. Concluding this list of puzzles and contradictions, I found that no one accused any of the NASA managers associated with the launch decision of being an amoral calculator. Although the Presidential Commission report extensively documented and decried the production pressures under which the Shuttle Program operated, no individuals were confirmed or even alleged to have placed economic interests over safety in the decision to launch the Space Shuttle Challenger. For the Commission to acknowledge production pressures and simultaneously fail to connect economic interests and individual actions is, prima facie, extremely suspect. But NASA’s most outspoken critics—Astronaut John Young, Morton Thiokol engineers Al McDonald and Roger Boisjoly, NASA Resource Analyst Richard Cook, and Presidential Commissioner Richard Feynman, who frequently aired their opinions to the media—did not accuse anyone of knowingly violating safety rules, risking lives on the night of January 27 and morning of January 28 to meet a schedule commitment. (kl 1627)

Vaughan’s account includes many of the pivot-points of McDonald’s narrative, but she assigns a different significance to many of them. She prefers her “normalization of deviance” explanation over the “amoral calculator” explanation.

(The Rogers Commission report and supporting documents are available online. Here is a portion of the hearings transcripts in which senior NASA officials provide testimony; link. This segment is critical to the issues raised in McDonald’s account, since it addresses the January 27, 1986 teleconference FRR session in which a recommendation against launch was put forward by MTI engineering and was challenged by NASA senior administrators.)

Ethical principles for assessing new technologies

Technologies and technology systems have deep and pervasive effects on the human beings who live within their reach. How do normative principles and principles of social and political justice apply to technology? Is there such a thing as “the ethics of technology”?

There is a reasonably active literature on questions that sound a lot like these. (See, for example, the contributions included in Winston and Edelbach, eds., Society, Ethics, and Technology.) But all too often the focus narrows too quickly to ethical issues raised by a particular example of contemporary technology — genetic engineering, human cloning, encryption, surveillance, and privacy, artificial intelligence, autonomous vehicles, and so forth. These are important questions; but it is also possible to ask more general questions as well, about the normative space within which technology, private activity, government action, and the public live together. What principles allow us to judge the overall justice, fairness, and legitimacy of a given technology or technology system?

There is an overriding fact about technology that needs to be considered in every discussion of the ethics of technology. It is a basic principle of liberal democracy that individual freedom and liberty should be respected. Individuals should have the right to act and create as they choose, subject to something like Mill’s harm principle. The harm principle holds that liberty should be restricted only when the activity in question imposes harm on other individuals. Applied to the topic of technology innovation, we can derive a strong principle of “liberty of innovation and creation” — individuals (and their organizations, such as business firms) should have a presumptive right to create new technologies constrained only by something like the harm principle.

Often we want to go beyond this basic principle of liberty to ask what the good and bad of technology might be. Why is technological innovation a good thing, all things considered? And what considerations should we keep in mind as we consider legitimate regulations or limitations on technology?

Consider three large principles that have emerged in other areas of social and political ethics as a basis for judging the legitimacy and fairness of a given set of social arrangements:

A. Technologies should contribute to some form of human good, some activity or outcome that is desired by human beings — health, education, enjoyment, pleasure, sociality, friendship, fitness, spirituality, …

B. Technologies ought to be consistent with the fullest development of the human capabilities and freedoms of the individuals whom they affect. [Or stronger: “promote the fullest development …”]

C. Technologies ought to have population effects that are fair, equal, and just.

The first principle attempts to address the question, “What is technology good for? What is the substantive moral good that is served by technology development?” The basic idea is that human beings have wants and needs, and contributing to their ability to fulfill these wants is itself a good thing (if in so doing other greater harms are not created as well). This principle captures what is right about utilitarianism and hedonism — the inherent value of human happiness and satisfaction. This means that entertainment and enjoyment are legitimate goals of technology development.

The second principle links technology to the “highest good” of human wellbeing — the full development of human capabilities and freedoms. As is evident, the principle offered here derives from Amartya Sen’s theory of capabilities and functionings, expressed in Development as Freedom. This principle recalls Mill’s distinction between higher and lower pleasures:

Mill always insisted that the ultimate test of his own doctrine was utility, but for him the idea of the greatest happiness of the greatest number included qualitative judgements about different levels or kinds of human happiness. Pushpin was not as good as poetry; only Pushkin was…. Cultivation of one’s own individuality should be the goal of human existence. (J.S. McClelland, A History of Western Political Thought: 454)

The third principle addresses the question of fairness and equity. Thinking about justice has evolved a great deal in the past fifty years, and one thing that emerges clearly is the intimate connection between injustice and invidious discrimination — even if unintended. Social institutions that arbitrarily assign significantly different opportunities and life outcomes to individuals based on characteristics such as race, gender, income, neighborhood, or religion are unfair and unjust, and need to be reformed. This approach derives as much from current discussions of racial health disparities as it does from philosophical theories along the lines of Rawls and Sen.

On these principles a given technology can be criticized, first, if it makes no positive contribution to the things that make people happy or satisfied; second, if it has the effect of stunting the development of human capabilities and freedoms; and third, if it has discriminatory effects on quality of life across the population it affects.

One important puzzle facing the ethics of technology is a question about the intended audience of such a discussion. We are compelled to ask, to whom is a philosophical discussion of the normative principles that ought to govern our thinking about technology aimed? Whose choices, actions, and norms are we attempting to influence? There appear to be several possible answers to this question.

Corporate ethics. Entrepreneurs and corporate boards and executives have an ethical responsibility to consider the impact of the technologies that they introduce into the market. If we believe that codes of corporate ethics have any real effect on corporate decision-making, then we need to have a basis in normative philosophy for a relevant set of principles that should guide business decision-making about the creation and implementation of new technologies by businesses. A current example is the use of facial recognition for the purpose of marketing or store security; does a company have a moral obligation to consider the negative social effects it may be promoting by adopting such a technology?

Governments and regulators. Government has an overriding responsibility of preserving and enhancing the public good and minimizing harmful effects of private activities. This is the fundamental justification for government regulation of industry. Since various technologies have the potential of creating harms for some segments of the public, it is legitimate for government to enact regulatory systems to prevent reckless or unreasonable levels of risk. Government also has a responsibility for ensuring a fair and just environment for all citizens, and enacting policies that serve to eliminate inequalities based on discriminatory social institutions. So here too governments have a role in regulating technologies, and a careful study of the normative principles that should govern our thinking about the fairness and justice of technologies is relevant to this process of government decision-making as well.

Public interest advocacy groups. One way in which important social issues can be debated and sometimes resolved is through the work of well-organized advocacy groups such as the Union of Concerned Scientists, the Sierra Club, or Greenpeace. Organizations like these are in a position to argue in favor of or against a variety of social changes, and raising concerns about specific kinds of technologies certainly falls within this scope. There are only a small number of grounds for this kind of advocacy: the innovation will harm the public, the innovation will create unacceptable hidden costs, or the innovation raises unacceptable risks of unjust treatment of various groups. In order to make the latter kind of argument, the advocacy group needs to be able to articulate a clear and justified argument for its position about “unjust treatment”.

The public. Citizens themselves have an interest in being able to make normative judgments about new technologies as they arise. “This technology looks as though it will improve life for everyone and should be favored; that technology looks as though it will create invidious and discriminatory sets of winners and losers and should be carefully regulated.” But for citizens to have a basis for making judgments like these, they need to have a normative framework within which to think and reason about the social role of technology. Public discussion of the ethical principles underlying the legitimacy and justice of technology innovations will deepen and refine these normative frameworks.

As proposed here, the topic of “the ethics of technology” is part of a broader theory of social and political philosophy. It invokes some of our best reasoning about what constitutes the human good (fulfillment of capabilities and freedoms) and about what constitutes a fair social system (elimination of invidious discrimination in the effects of social institutions on segments of the population). Only when we have settled these foundational questions are we able to turn to the more specific issues often discussed under the rubric of the ethics of technology.

Regulatory delegation at the FAA

Earlier posts have focused on the role of inadequate regulatory oversight as part of the tragedy of the Boeing 737 MAX (link, link). (Also of interest is an earlier discussion of the “quiet power” through which business achieves its goals in legislation and agency rules (link).) Reporting in the New York Times this week by Natalie Kitroeff and David Gelles provides a smoking gun for the claim that the industry has captured the regulatory agency established to ensure its safe operation (link). The article quotes a former attorney in the FAA office of chief counsel:

“The reauthorization act mandated regulatory capture,” said Doug Anderson, a former attorney in the agency’s office of chief counsel who reviewed the legislation. “It set the F.A.A. up for being totally deferential to the industry.”

Based on exhaustive investigative journalism, Kitroeff and Gelles provide a detailed account of the lobbying strategy and efforts by Boeing and the aircraft manufacturing industry group that led to the incorporation of industry-favored language into the FAA Reauthorization Act of 2018, and it is a profoundly discouraging account for anyone interested in the idea that the public good should drive legislation. The new paragraphs introduced into the final legislation stipulate full implementation of the philosophy of regulatory delegation and establish an industry-centered group empowered to oversee the agency’s performance and to make recommendations about FAA employees’ compensation. “Now, the agency, at the outset of the development process, has to hand over responsibility for certifying almost every aspect of new planes.” Under the new legislation the FAA is forbidden from taking back control of the certification process for a new aircraft without a full investigation or inspection justifying such an action.

As the article notes, the 737 MAX was certified under the old rules. The new rules give the FAA even less oversight powers and responsibilities for the certification of new aircraft and major redesigns of existing aircraft. And the fact that the MCAS system was never fully reviewed by the FAA, based on assurances of its safety from Boeing, reduces even further our confidence in the effectiveness of the FAA process. From the article:

The F.A.A. never fully analyzed the automated system known as MCAS, while Boeing played down its risks. Late in the plane’s development, Boeing made the system more aggressive, changes that were not submitted in a safety assessment to the agency.

Boeing, the Aerospace Industries Association, and the General Aviation Manufacturers Association exercised influence on the 2018 legislation through a variety of mechanisms. Legislators and lobbyists alike were guided by a report on regulation authored by Boeing itself. Executives and lobbyists exercised their ability to influence powerful senators and members of Congress through person-to-person interactions. And elected representatives from both parties favored “less regulation” as a way of supporting the economic interests of businesses in their states. For example:

They also helped persuade Senator Maria Cantwell, Democrat of Washington State, where Boeing has its manufacturing hub, to introduce language that requires the F.A.A. to relinquish control of many parts of the certification process.

And, of course, it is important not to forget about the “revolving door” from industry to government to lobbying firm. Ali Bahrami was an FAA official who subsequently became a lobbyist for the aerospace industry; Stephen Dickson is a former executive of Delta Airlines who now serves as Administrator of the FAA; and in 2007 former FAA Administrator Marion Blakey became CEO of the Aerospace Industries Association, the industry’s chief advocacy and lobbying group (link). It is hard to expect neutral, objective judgment about the safety of the public from such appointments.

Boeing and its allies found a receptive audience in the head of the House transportation committee, Bill Shuster, a Pennsylvania Republican staunchly in favor of deregulation, and his aide working on the legislation, Holly Woodruff Lyons.

These kinds of influence on legislation and agency action provide crystal-clear illustrations of the mechanisms cited by Pepper Culpepper in Quiet Politics and Business Power: Corporate Control in Europe and Japan explaining the political influence of business. Here is my description of his views in an earlier post:

Culpepper unpacks the political advantage residing with business elites and managers in terms of acknowledged expertise about the intricacies of corporate organization, an ability to frame the issues for policy makers and journalists, and ready access to rule-writing committees and task forces. These factors give elite business managers positional advantage, from which they can exert a great deal of influence on how an issue is formulated when it comes into the forum of public policy formation.

It seems abundantly clear that the “regulatory delegation” movement and its underlying effort to reduce the regulatory burden on industry have gone too far in the case of aviation; and the same seems true in other industries such as the nuclear industry. The much harder question is organizational: what form of regulatory oversight would permit a regulatory agency to genuinely enhance the safety of the regulated industry and protect the public from unnecessary hazards? Even if we could take the anti-regulation ideology that has governed much public discourse since the Reagan years out of the picture, there are the continuing issues of expertise, funding, and the industry’s power of resistance that make effective regulation a huge challenge.

The tempos of capitalism

I’ve been interested in the economic history of capitalism since the 1970s, and there are a few titles that stand out in my memory. There were the Marxist and neo-Marxist economic historians (Marx’s Capital, E.P. Thompson, Eric Hobsbawm, Rodney Hilton, Robert Brenner, Charles Sabel); the debate over the nature of the industrial revolution (Deane and Cole, NFR Crafts, RM Hartwell, EL Jones); and volumes of the Cambridge Economic History of Europe. The history of British capitalism poses important questions for social theory: is there such a thing as “capitalism”, or are there many capitalisms? What are the features of the capitalist social order that are most fundamental to its functioning and dynamics of development? Is Marx’s intellectual construction of the “capitalist mode of production” a useful one? And does capitalism have a logic or tendency of development, as Marx believed, or is its history fundamentally contingent and path-dependent? Putting the point in concrete terms, was there a probable path of development from the “so-called primitive accumulation” to the establishment of factory production and urbanization to the extension of capitalist property relations throughout much of the world?
 
Part of the interest of detailed research in economic history in different places — England, Sweden, Japan, the United States, China — is the light that economic historians have been able to shed on the particulars of modern economic organization and development, and the range of institutions and “life histories” they have identified for these different historically embodied social-economic systems. For this reason I have found it especially interesting to read and learn about the ways in which the early modern Chinese economy developed, and different theories of why China and Europe diverged in this period. Kenneth Pomeranz, Philip Huang, William Skinner, Mark Elvin, Bozhong Li, James Lee, and Joseph Needham all shed light on different aspects of this set of questions, and once again the Cambridge Economic History of China was a deep and valuable resource.
 
A new title that recently caught my eye is Pierre Dockès’ Le Capitalisme Et Ses Rythmes, quatre siècles en perspective: Tome I Sous Le Regard Des Géants. Intriguing features include the long sweep of the book (400 years, over 950 pages, with volume II to come) and the question of whether there is something new to say about this topic. After reading large parts of the book, I think the answer to the last question is “yes”.
 
Dockès is interested in both the history of capitalism as an economic system and the history of economic science and political economy during the past four centuries. And he is particularly interested in discovering what we can learn about our current economic challenges from both these stories.
 
He specifically distances himself from “mainstream” economic theory and couches his own analysis in a less orthodox and more eclectic set of ideas. He defines mainstream economics in terms of five ideas: first, its strong commitment to mathematization and formalization of economic ideas; second, its disciplinary tendency towards hyper-specialization; third, its tendency to take the standpoint of the capitalist and the free market in its analyses; fourth, the propensity to extend these neoliberal biases to the process of selection and hiring of academics; and fifth, an underlying “scientism” and positivism that leads its practitioners to devalue the history of the discipline and the historical conditions through which modern institutions came to be (9-12).
 
Dockès holds that the history of economic facts and the history of the ideas researchers have had about those facts go hand in hand; economic history and the history of economics need to be studied together. Moreover, Dockès believes that mainstream economics has lost sight of insights from the innovators in the history of economics which still have value — Ricardo, Smith, Keynes, Walras, Sismondi, Hobbes. The exclusive focus of mainstream economics over the past forty years on formal, mathematical representations of a market economy prevents its practitioners from “seeing” the economic world through the conceptual lenses of these gifted predecessors. They are trapped in a paradigm or an “epistemological framework” from which they cannot escape. (These ideas are explored in the introduction to the volume.)
 
The substantive foundation of the book is Dockès’ idea that capitalism has long-term rhythms punctuated by crises, and that these fluctuations themselves are amenable to historical-causal and institutional analysis.

En un mot, croissance et crise sont inséparables et inhérents au processus de développement capitaliste laissé à lui-même.

[In a word, growth and crisis are inseparable and inherent in the process of capitalist development left to itself.] (13)

The fluctuations of capitalism over the long term form a single system of causation: growth, depression, financial crisis, and renewed growth are linked. Therefore, Dockès believes, it should be possible to discover the systemic causes of the development of various capitalist economies by uncovering the dynamics of crisis. Further, he underlines the serious social and political consequences that have ensued from economic crises in the past, including the rise of the Nazi regime out of the global economic crisis of the 1930s.

Etudier ces rythmes impose une analyse des logiques de fonctionnement du capitalism.

[Studying these rhythms requires an analysis of the logic by which capitalism functions.] (12)

Dockès is explicit in saying that economic history does not “repeat” itself, and the crises of capitalism are not replicas of each other over the decades or centuries. Historicity of the time and place is fundamental, and he underlines the path dependency of economic development in some of its aspects as well. But he argues that there are important similarities across various kinds of economic crises, and it is worthwhile discovering these similarities. He takes debt crises as an example: there are great differences among several centuries of experience of debt crisis. But there is something in common as well:

Permanence aussi dans les relations de pouvoir et dans les intérêts des uns (les créanciers partisans de la déflation, des taux élevés) et des autres (les débiteurs inflationnistes), dans les jeux de l’état entre ces deux groupes de pression. On peut tirer deux conséquences des homologies entre le passé et le présent.

[Permanence also in the relations of power and in the interests of some (creditors who favor deflation and high rates) and others (inflationary debtors), and in the games the state plays between these two pressure groups. We can draw two consequences from the homologies between the past and the present.] (20)

And failing to consider carefully and critically the economies and crises of the past is a mistake that may lead contemporary economic experts and advisors into ever-deeper economic crises in the future.

L’oubli est dommageable, celui des catastrophes, celui des enseignements qu’elles ont rendu possible, celui des corpus théoriques du passé. Ouvrir la perspective par l’économie historique peut aider à une meilleure compréhension du présent, voire à préparer l’avenir. (21)

[Forgetting is harmful: forgetting past catastrophes, forgetting the lessons they have made possible, forgetting the theoretical corpus of the past. Opening up the perspective through historical economics can help lead to a better understanding of the present, or even help prepare for the future.] (21)

The scope and content of the book are evident in the list of the book’s chapters:
  1. Crises et rythmes économiques
  2. Périodisation, mutations et rythmes longs
  3. Le capitalisme d’Ancien Régime, ses crises
  4. Le “Haut Capitalisme”, ses crises et leur théorisation (1800-1870)
  5. Karl Marx et les crises
  6. Capitalisme “Monopoliste” et grande industrie (1870-1914)
  7. Interlude
  8. À l’âge de l’acier, les rythmes de l’investissement et de l’innovation
  9. Impulsion monétaire et effets réels
  10. La monnaie hégémonique
  11. “Le chien dans la mangeoire”
  12. La grande crise des années trente
  13. Keynes et la “Théorie Générale”; la “Haute Théorie”, la dynamique, le cycle (1926-1946)
  14. En guise de conclusion d’étape
As the chapter titles make evident, Dockès delivers on his promise of treating both the episodes, trends, and facts of economic history and the history of the theories through which economists have sought to understand those facts and their dynamics.
 