Alternative social systems and individual wellbeing

Communism … or Capitalism?

A joke from Poland in the 1970s: “In capitalism it is a question of man’s exploitation by man. In communism it is the reverse.”

A modern social system is an environment in which millions of people find opportunities, develop their talents, express their beliefs, and earn their livelihoods within a set of economic and political institutions. Specific institutions of property, power, ideology, education, and healthcare create an environment within which citizens of all circumstances pursue their life interests. Individuals exercise their freedoms through the institutions within which they live, and those institutions determine the quality and depth of the social resources available to the individual, and thereby the degree to which he or she is able to develop talents and skills and gain access to opportunities. Institutions create constraints on freedom of choice that range from the nearly invisible to the intensely coercive. Institutions create inequalities of opportunity and outcome for different groups of citizens: rural people may have more limited access to higher education; immigrant and minority communities may experience discrimination in health and employment; and so on. Further, social systems differ in the balance they strike between “soft” constraints (markets, education, regional differences) and “hard” constraints (police, regulations governing behavior, extra-legal uses of violence).

To describe a set of institutions and a population of individual actors as a system has a number of implications, some of which are unjustified. The idea of “system” suggests a kind of functional interconnection across its components, with needs in one subsystem eliciting adjustments in the activities of other subsystems to satisfy those needs. This mental model is misleading, however. Better is the ontology of “assemblage” discussed frequently here (link, link, link). This is the idea that a complex social thing is the unintended and largely undesigned accumulation of multiple independent components. One set of processes leads to the development of the logistics infrastructure of a society; another set of processes leads to the development of the institutions of government; and yet other path-dependent and contingent processes contribute to the system of labor education, management, and discipline that exists in a society. These various institutional ensembles are overlaid with each other; sometimes there are painful inconsistencies among them that are resolved by entrepreneurs or officials; and the result is a heterogeneous and largely unplanned agglomeration of social arrangements and practices that add up to “the social system”.

A central premise of some classics of social theory, including Marx’s, is that the institutions through which social interactions take place form large and relatively stable configurations that fall into fairly distinct groups — feudalism, capitalism, communism, socialism, social democracy, authoritarianism. And different configurations of institutions do better or worse in terms of the degree to which they allow their members to satisfy their needs and live satisfying lives. The ontology associated with the theory of assemblage, however, is anti-essentialist in a very important sense: it denies that there are “essentially similar configurations” of institutions that play crucial roles in history, or that there is a tendency towards convergence around “typical” ensembles of institutions. In particular, it suggests that we reject the idea that there are only a few historically possible configurations of institutions — capitalism, socialism, authoritarianism, democracy — and rather analyze each social order as a fairly unique configuration — assemblage — of specific institutional arrangements.

This perspective casts doubt on the value of singling out “capitalism,” “liberal democracy,” “religious autocracy,” “apartheid society,” “military dictatorship,” or “one-party dictatorship” as schemes for understanding distinctive and sociologically important patterns of un-freedom. Rather than considering these different “ideal types” of social-political systems as structures with distinctive dynamics, perhaps it would be more satisfactory to consider the problem from the point of view of the citizens of various societies and the degree to which existing social, political, and economic institutions serve their development as full and free human beings.

Amartya Sen’s framework for understanding human wellbeing in Development as Freedom is valuable in this context (link). Sen understands wellbeing in terms of the individual’s ability to realize his or her capabilities fully and to live within an environment that enables as much freedom of action as possible. Sen’s framework gives a powerful basis for paying close attention to inequalities within society; a society in which one-third of the population have exceptional freedom and opportunities for development, one-third have indifferent attainments in these crucial dimensions, and one-third have extremely limited freedoms and opportunities is plainly a less just and desirable society than one in which everyone has the same freedoms and a relatively high level of opportunities for development — even if the average attainment for the population is the same in the two scenarios.

This prism permits us to understand the structural characteristics of a society — political, cultural, religious, economic, or civic — in terms of the effects that those institutional arrangements have on the freedoms and capacity for development of the population. This is the underlying rationale for the Human Development Index, but the HDI is primarily focused on development rather than freedom.

We might try to evaluate the workings of a given ensemble of social and political institutions by devising an index of human wellbeing and freedom that can be applied to each society. Examples of indexes along these lines include the Human Development Index, the Opportunity Index, and the Cato Institute Freedom Index. Every index is selective. It is interesting and important to observe, for example, that political freedom plays no role in the Human Development Index, while the Cato Institute index pays no attention to the prerequisites of freedom: access to education, access to health care, freedom from racial or ethnic discrimination.

So each of these indices is limited as a scheme for evaluating the overall success a particular institutional configuration has in creating a free and enabling environment for its citizens. But suppose we had a composite index that reflected both freedom (broadly construed) and wellbeing? Such an index might look something like this:

  • Rule of law 
  • Security and safety 
  • Movement 
  • Religion 
  • Association, assembly, and civil society 
  • Expression and information
  • Identity and relationships 
  • Access to quality education
  • Access to quality healthcare
  • Freedom from racial, ethnic, and gender discrimination in employment, education, housing, and other social goods
  • Equality of opportunity

(It is interesting to observe that these characteristics align fairly well with the contents of the Universal Declaration of Human Rights.)
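To make the idea of a composite index concrete, here is a minimal sketch in Python of how such an aggregation might work. The indicator names, the 0–1 scoring scale, and the equal default weighting are all hypothetical illustrative choices, not drawn from the HDI, the Opportunity Index, or the Cato index.

```python
# Illustrative sketch of a composite freedom-and-wellbeing index.
# All indicator names, scales, and weights are hypothetical choices
# made for this example, not taken from any published index.

INDICATORS = [
    "rule_of_law", "security", "movement", "religion", "association",
    "expression", "identity", "education", "healthcare",
    "non_discrimination", "equality_of_opportunity",
]

def composite_index(scores, weights=None):
    """Combine per-indicator scores (each on a 0-1 scale) into one number.

    scores: dict mapping indicator name -> float in [0, 1]
    weights: optional dict over the same keys; defaults to equal weights.
    """
    if weights is None:
        weights = {k: 1.0 for k in INDICATORS}
    total_weight = sum(weights[k] for k in INDICATORS)
    return sum(weights[k] * scores[k] for k in INDICATORS) / total_weight

# A hypothetical country: strong on most dimensions, but with
# sharply limited equality of opportunity.
example = {k: 0.8 for k in INDICATORS}
example["equality_of_opportunity"] = 0.3

print(round(composite_index(example), 3))  # prints 0.755
```

Note that any such weighted average inherits the limitation Sen’s framework warns about: it reports a mean and therefore masks the distribution of freedoms across the population, so a distribution-sensitive variant (for instance, penalizing inequality across groups) would be truer to the capability approach.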

Using an index like this, we could then ask important comparative questions: How do ordinary citizens fare under the institutions of contemporary Finland, Spain, China, Russia, Nigeria, Brazil, Romania, France, and the United States? An assessment along these lines would put us in a position to give a normative evaluation of the various social systems mentioned above: what social, political, and economic arrangements do the best job of securing each of these freedoms and opportunities for all citizens? Under what kinds of institutions — economic, political, social, and cultural — are citizens most free and most enabled to fully develop their capacities as human beings?

It seems evident that the answer to this question is not very esoteric or difficult. Freedom requires the rule of law, respect for equal rights, and democratic institutions. Real freedom requires access to the social resources that permit an individual to fully develop his or her talents. A decent life requires a secure and adequate material standard of living. These obvious truths point towards a social system that embodies the protections of a constitutional liberal democracy; extensive public support for the social resources necessary for full human development (education, healthcare, nutrition, housing); and an extensive social welfare net that ensures that all members of society can thrive. There is a name for this set of institutions; it is called social democracy. (Here are several earlier posts that reach a similar conclusion from different starting points; link, link.)

And where are the social democracies in the world today? They are largely the Nordic countries: Finland, Denmark, Norway, Sweden, and Iceland. Significantly, these five countries consistently rank among the top ten countries in the World Happiness Report (link), a rigorous and well-funded attempt to measure citizen satisfaction with the same care that we devote to measuring GDP or national health statistics. The editors of the 2020 report describe the particular success of Nordic societies in supporting citizen satisfaction in these terms:

From 2013 until today, every time the World Happiness Report (WHR) has published its annual ranking of countries, the five Nordic countries – Finland, Denmark, Norway, Sweden, and Iceland – have all been in the top ten, with Nordic countries occupying the top three spots in 2017, 2018, and 2019. Clearly, when it comes to the level of average life evaluations, the Nordic states are doing something right, but Nordic exceptionalism isn’t confined to citizens’ happiness. No matter whether we look at the state of democracy and political rights, lack of corruption, trust between citizens, felt safety, social cohesion, gender equality, equal distribution of incomes, Human Development Index, or many other global comparisons, one tends to find the Nordic countries in the global top spots.

And here is their considered judgment about the circumstances that have led to this high level of satisfaction in the Nordic countries:

We find that the most prominent explanations include factors related to the quality of institutions, such as reliable and extensive welfare benefits, low corruption, and well-functioning democracy and state institutions. Furthermore, Nordic citizens experience a high sense of autonomy and freedom, as well as high levels of social trust towards each other, which play an important role in determining life satisfaction. (131)

The key factors mentioned here are worth calling out for the light they shed on the current dysfunctions of politics in the United States: effective institutions, extensive welfare benefits, low corruption, well-functioning democracy, a limited range of economic inequalities, a strong sense of autonomy and freedom, and high levels of social trust and social cohesion. It is evident that American society is being tested in each of these areas by the current administration, none more so than trust and social cohesion. The current administration actively strives to undermine both trust and social cohesion, and goes out of its way to undermine confidence in government. These are very disturbing signs about what the future may bring. Severe inequalities of income, wealth, and social resources (including especially healthcare) have become painfully evident through the effects of the Covid-19 pandemic. And the weakness of the social safety net in the United States has left millions of adults and children in dire circumstances of unemployment and hunger. The United States today is not a happy place for many of its citizens.

Organizations as open systems

Key to understanding the “ontology of government” is the empirical and theoretical challenge of understanding how organizations work. The activities of government encompass organizations across a wide range of scales, from the local office of the Department of Motor Vehicles (40 employees) to the Department of Defense (861,000 civilian employees). Having the best understanding possible of how organizations work and fail is crucial to understanding the workings of government.

I have given substantial attention to the theory of strategic action fields as a basis for understanding organizations in previous posts (link, link). The basic idea in that approach is that organizations are a bit like social movements, with active coalition-building, conflicting goals, and strategic jockeying making up much of the substantive behavior of the organization. It is significant that organizational theory as a field has moved in this direction in the past fifteen years or so as well. A good example is Scott and Davis, Organizations and Organizing: Rational, Natural and Open System Perspectives (2007). Their book is intended as a “state of the art” textbook in the field of organizational studies. And the title expresses some of the shifts that have taken place in the field since the work of March, Simon, and Perrow (link, link). The word “organizing” in the title signals the idea that organizations are no longer looked at as static structures within which actors carry out well-defined roles, but are instead dynamic processes in which active efforts by leaders, managers, and employees define goals and strategies and work to carry them out. And the “open system” phrase highlights the point that organizations always exist and function within a broader environment — political constraints, economic forces, public opinion, technological innovation, other organizations, and today climate change and environmental disaster.

Organizations themselves exist only as a complex set of social processes, some of which reproduce existing modes of behavior and others that serve to challenge, undermine, contradict, and transform current routines. Individual actors are constrained by, make use of, and modify existing structures. (20)

Most analysts have conceived of organizations as social structures created by individuals to support the collaborative pursuit of specified goals. Given this conception, all organizations confront a number of common problems: all must define (and redefine) their objectives; all must induce participants to contribute services; all must control and coordinate these contributions; resources must be garnered from the environment and products or services dispensed; participants must be selected, trained, and replaced; and some sort of working accommodation with the neighbors must be achieved. (23)

Scott and Davis analyze the field of organizational studies in several dimensions: sector (for-profit, public, non-profit), levels of analysis (social psychological level, organizational level, ecological level), and theoretical perspective. They emphasize several key “ontological” elements that any theory of organizations needs to address: the environment in which an organization functions; the strategy and goals of the organization and its powerful actors; the features of work and technology chosen by the organization; the features of formal organization that have been codified (human resources, job design, organizational structure); the elements of “informal organization” that exist in the entity (culture, social networks); and the people of the organization.

They describe three theoretical frameworks through which organizational theories have attempted to approach the empirical analysis of organizations. First, the rational framework:

Organizations are collectivities oriented to the pursuit of relatively specific goals. They are “purposeful” in the sense that the activities and interactions of participants are coordinated to achieve specified goals…. Organizations are collectivities that exhibit a relatively high degree of formalization. The cooperation among participants is “conscious” and “deliberate”; the structure of relations is made explicit. (38)

From the rational system perspective, organizations are instruments designed to attain specified goals. How blunt or fine an instrument they are depends on many factors that are summarized by the concept of rationality of structure. The term rationality in this context is used in the narrow sense of technical or functional rationality (Mannheim, 1950 trans.: 53) and refers to the extent to which a series of actions is organized in such a way as to lead to predetermined goals with maximum efficiency. (45)

Here is a description of the natural-systems framework:

Organizations are collectivities whose participants are pursuing multiple interests, both disparate and common, but who recognize the value of perpetuating the organization as an important resource. The natural system view emphasizes the common attributes that organizations share with all social collectivities. (39)

Organizational goals and their relation to the behavior of participants are much more problematic for the natural than the rational system theorist. This is largely because natural system analysts pay more attention to behavior and hence worry more about the complex interconnections between the normative and the behavioral structures of organizations. Two general themes characterize their views of organizational goals. First, there is frequently a disparity between the stated and the “real” goals pursued by organizations—between the professed or official goals that are announced and the actual or operative goals that can be observed to govern the activities of participants. Second, natural system analysts emphasize that even when the stated goals are actually being pursued, they are never the only goals governing participants’ behavior. They point out that all organizations must pursue support or “maintenance” goals in addition to their output goals (Gross, 1968; Perrow, 1970:135). No organization can devote its full resources to producing products or services; each must expend energies maintaining itself. (67)

And the “open-system” definition:

From the open system perspective, environments shape, support, and infiltrate organizations. Connections with “external” elements can be more critical than those among “internal” components; indeed, for many functions the distinction between organization and environment is revealed to be shifting, ambiguous, and arbitrary…. Organizations are congeries of interdependent flows and activities linking shifting coalitions of participants embedded in wider material-resource and institutional environments.  (40)

(Note that the natural-system and “open-system” definitions are very consistent with the strategic-action-field approach.)

Here is a useful table provided by Scott and Davis to illustrate the three approaches to organizational studies:

An important characteristic of recent organizational theory has to do with the way that theorists think about the actors within organizations. Instead of looking at individual behavior within an organization as being fundamentally rational and goal-directed, primarily responsive to incentives and punishments, organizational theorists have come to pay more attention to the non-rational components of organizational behavior — values, cultural affinities, cognitive frameworks and expectations.

This emphasis on culture and mental frameworks leads to another important shift of emphasis in next-generation ideas about organizations, involving an emphasis on informal practices, norms, and behaviors that exist within organizations. Rather than looking at an organization as a rational structure implementing mission and strategy, contemporary organization theory emphasizes that informal practices, norms, and cultural expectations are ineliminable parts of organizational behavior. Here is a good description of the concept of culture provided by Scott and Davis in the context of organizations:

Culture describes the pattern of values, beliefs, and expectations more or less shared by the organization’s members. Schein (1992) analyzes culture in terms of underlying assumptions about the organization’s relationship to its environment (that is, what business are we in, and why); the nature of reality and truth (how do we decide which interpretations of information and events are correct, and how do we make decisions); the nature of human nature (are people basically lazy or industrious, fixed or malleable); the nature of human activity (what are the “right” things to do, and what is the best way to influence human action); and the nature of human relationships (should people relate as competitors or cooperators, individualists or collaborators). These components hang together as a more-or-less coherent theory that guides the organization’s more formalized policies and strategies. Of course, the extent to which these elements are “shared” or even coherent within a culture is likely to be highly contentious (see Martin, 2002)—there can be subcultures and even countercultures within an organization. (33)

Also of interest is Scott’s earlier book Institutions and Organizations: Ideas, Interests, and Identities, which first appeared in 1995 and is now in its 4th edition (2014). Scott looks at organizations as a particular kind of institution, with differentiating characteristics but commonalities as well. The IBM Corporation is an organization; the practice of youth soccer in the United States is an institution; but both have features in common. In some contexts, however, he appears to distinguish between institutions and organizations, with institutions constituting the larger normative, regulative, and opportunity-creating environment within which organizations emerge.

Scott opens with a series of crucial questions about organizations — questions for which we need answers if we want to know how organizations work, what confers stability upon them, and why and how they change. Out of a long list of questions, these seem particularly important for our purposes here: “How are we to regard behavior in organizational settings? Does it reflect the pursuit of rational interests and the exercise of conscious choice, or is it primarily shaped by conventions, routines, and habits?” “Why do individuals and organizations conform to institutions? Is it because they are rewarded for doing so, because they believe they are morally obligated to obey, or because they can conceive of no other way of behaving?” “Why is the behavior of organizational participants often observed to depart from the formal rules and stated goals of the organization?” “Do control systems function only when they are associated with incentives … or are other processes sometimes at work?” “How do differences in cultural beliefs shape the nature and operation of organizations?” (Introduction).

Scott and Davis’s work is of particular interest here because it supports analysis of a key question I’ve pursued over the past year: how does government work, and what ontological assumptions do we need to make in order to better understand the successes and failures of government action? What I have called organizational dysfunction in earlier posts (link, link) finds a very comfortable home in the theoretical spaces created by the intellectual frameworks of organizational studies described by Scott and Davis.

Organizational culture

It is of both intellectual and practical interest to understand how organizations function and how the actors within them choose the actions that they pursue. A common answer to these questions is to refer to the rules and incentives of the organization, and then to attempt to understand the actor’s choices through the lens of rational preference theory. However, it is now increasingly clear that organizations embody distinctive “cultures” that significantly affect the actions of the individuals who operate within their scope. Edgar Schein is a leading expert on the topic of organizational culture. Here is how he defines the concept in Organizational Culture and Leadership. Organizational culture, according to Schein, consists of a set of “basic assumptions about the correct way to perceive, think, feel, and behave, driven by (implicit and explicit) values, norms, and ideals” (Schein, 1990).

Culture is both a dynamic phenomenon that surrounds us at all times, being constantly enacted and created by our interactions with others and shaped by leadership behavior, and a set of structures, routines, rules, and norms that guide and constrain behavior. When one brings culture to the level of the organization and even down to groups within the organization, one can see clearly how culture is created, embedded, evolved, and ultimately manipulated, and, at the same time, how culture constrains, stabilizes, and provides structure and meaning to the group members. These dynamic processes of culture creation and management are the essence of leadership and make one realize that leadership and culture are two sides of the same coin. (3rd edition, p. 1)

According to Schein, there is a cognitive and affective component of action within an organization that has little to do with rational calculation of interests and more to do with how the actors frame their choices. The values and expectations of the organization help to shape the actions of the participants. And one crucial aspect of leaders, according to Schein, is the role they play in helping to shape the culture of the organizations they lead.

It is intriguing that several pressing organizational problems have been found to revolve around the culture of the organization within which behavior takes place. The prevalence of sexual and gender harassment appears to depend a great deal on the culture of respect and civility that an organization has embodied — or has failed to embody. The ways in which accidents occur in large industrial systems seems to depend in part on the culture of safety that has been established within the organization. And the incidence of corrupt and dishonest practices within businesses seems to be influenced by the culture of integrity that the organization has managed to create. In each instance experience seems to demonstrate that “good” culture leads to less socially harmful behavior, while “bad” culture leads to more such behavior.

Consider first the prominence that the idea of safety culture has attained in the nuclear industry after Three Mile Island and Chernobyl. Here are a few passages from a review document authored by the Advisory Committee on Reactor Safeguards (link).

There also seems to be a general agreement in the nuclear community on the elements of safety culture. Elements commonly included at the organization level are senior management commitment to safety, organizational effectiveness, effective communications, organizational learning, and a working environment that rewards identifying safety issues. Elements commonly identified at the individual level include personal accountability, questioning attitude, and procedural adherence. Financial health of the organization and the impact of regulatory bodies are occasionally identified as external factors potentially affecting safety culture. 

The working paper goes on to consider two issues: has research validated the causal relationship between safety culture and safe performance? And should the NRC create regulatory requirements aimed at observing and enhancing the safety culture in a nuclear plant? They note that current safety statistics do not permit measurement of the association between safety culture and safe performance, but that experience in the industry suggests that the answers to both questions are probably affirmative:

On the other hand, even at the current level of industry maturity, we are confronted with events such as the recent reactor vessel head corrosion identified so belatedly at the Davis-Besse Nuclear Power Plant. Problems subsequently identified in other programmatic areas suggest that these may not be isolated events, but the result of a generally degraded plant safety culture. The head degradation was so severe that a major accident could have resulted and was possibly imminent. If, indeed, the true cause of such an event proves to be degradation of the facility’s safety culture, is it acceptable that the reactor oversight program has to wait for an event of such significance to occur before its true root cause, degraded culture, is identified? This event seems to make the case for the need to better understand the issues driving the culture of nuclear power plants and to strive to identify effective performance indicators of resulting latent conditions that would provide leading, rather than lagging, indications of future plant problems. (7-8)

Researchers in the area of sexual harassment have devoted quite a bit of attention to the topic of workplace culture as well. This theme is emphasized in the National Academy study on sexual and gender harassment (link); the authors make the point that gender harassment is chiefly aimed at expressing disrespect towards the target rather than sexual exploitation. This has an important implication for institutional change. An institution that creates a strong core set of values emphasizing civility and respect is less conducive to gender harassment. They summarize this analysis in the statement of findings as well:

Organizational climate is, by far, the greatest predictor of the occurrence of sexual harassment, and ameliorating it can prevent people from sexually harassing others. A person more likely to engage in harassing behaviors is significantly less likely to do so in an environment that does not support harassing behaviors and/or has strong, clear, transparent consequences for these behaviors. (50)

Ben Walsh is representative of this approach. Here is the abstract of a research article by Walsh, Lee, Jensen, McGonagle, and Samnani on workplace incivility (link):

Scholars have called for research on the antecedents of mistreatment in organizations such as workplace incivility, as well as the theoretical mechanisms that explain their linkage. To address this call, the present study draws upon social information processing and social cognitive theories to investigate the relationship between positive leader behaviors—those associated with charismatic leadership and ethical leadership—and workers’ experiences of workplace incivility through their perceptions of norms for respect. Relationships were separately examined in two field studies using multi-source data (employees and coworkers in study 1, employees and supervisors in study 2). Results suggest that charismatic leadership (study 1) and ethical leadership (study 2) are negatively related to employee experiences of workplace incivility through employee perceptions of norms for respect. Norms for respect appear to operate as a mediating mechanism through which positive forms of leadership may negatively relate to workplace incivility. The paper concludes with a discussion of implications for organizations regarding leader behaviors that foster norms for respect and curb uncivil behaviors at work.

David Hess, an expert on corporate corruption, takes a similar approach to the problem of corruption and bribery by officials of multinational corporations (link). Hess argues that bribery often has to do with organizational culture and individual behavior, and that effective steps to reduce the incidence of bribery must proceed on the basis of an adequate analysis of both culture and behavior. And he links this issue to fundamental problems in the area of corporate social responsibility.

Corporations must combat corruption. By allowing their employees to pay bribes they are contributing to a system that prevents the realization of basic human rights in many countries. Ensuring that employees do not pay bribes is not accomplished by simply adopting a compliance and ethics program, however. This essay provided a brief overview of why otherwise good employees pay bribes in the wrong organizational environment, and what corporations must focus on to prevent those situations from arising. In short, preventing bribe payments must be treated as an ethical issue, not just a legal compliance issue, and the corporation must actively manage its corporate culture to ensure it supports the ethical behavior of employees.

As this passage emphasizes, Hess believes that controlling corrupt practices requires changing incentives within the corporation while equally changing the ethical culture of the corporation; he believes that the ethical culture of a company can have effects on the degree to which employees engage in bribery and other corrupt practices.

What these examples have in common (and other examples are available as well) is that intangible features of the work environment are likely to influence the behavior of the actors in that environment, and thereby affect the favorable and unfavorable outcomes of the organization’s functioning. Moreover, if we take the lead offered by Schein and work on the assumption that leaders can influence culture through their advocacy for the values that the organization embodies, then leadership has a core responsibility to foster a work culture that promotes these favorable outcomes. Work culture can be cultivated to encourage safety and to discourage bad outcomes like sexual harassment and corruption.

Testing the NRC

Serious nuclear accidents are rare but potentially devastating to people, land, and agriculture. (It appears that minor to moderate nuclear accidents are not nearly so rare, as James Mahaffey shows in Atomic Accidents: A History of Nuclear Meltdowns and Disasters: From the Ozark Mountains to Fukushima.) Three Mile Island, Chernobyl, and Fukushima are disasters that have given the public a better idea of how nuclear power reactors can go wrong, with serious and long-lasting effects. Reactors are also among the most complex industrial systems around, and accidents are common in complex, tightly coupled industrial systems. So how can we have reasonable confidence in the safety of nuclear reactors?

One possible answer is that we cannot have reasonable confidence at all. However, there are hundreds of large nuclear reactors in the world, and 98 active nuclear reactors in the United States alone. So it is critical to have highly effective safety regulation and oversight of the nuclear power industry. In the United States that regulatory authority rests with the Nuclear Regulatory Commission. So we need to ask the question: how good is the NRC at regulating, inspecting, and overseeing the safety of nuclear reactors in our country?

One would suppose that there would be excellent and detailed studies within the public administration literature that attempt to answer this question, and we might expect that researchers within the field of science and technology studies might have addressed it as well. However, this seems not to be the case. I have yet to find a full-length study of the NRC as a regulatory agency, and the NRC is mentioned only twice in the 600-plus page Oxford Handbook of Regulation. However, we can get an oblique view of the workings of the NRC through other sources. One set of observers who are in a position to evaluate the strengths and weaknesses of the NRC are nuclear experts who are independent of the nuclear industry. For example, publications from the Bulletin of the Atomic Scientists include many detailed reports on the operations and malfunctions of nuclear power plants that permit a degree of assessment of the quality of oversight provided by the NRC (link). And a detailed (and scathing) report by the General Accounting Office on the near-disaster at the Davis-Besse nuclear power plant is another expert assessment of NRC functioning (link).

David Lochbaum, Edwin Lyman, and Susan Stranahan fit the description of highly qualified independent scientists and observers, and their detailed case history of the Fukushima disaster provides a degree of insight into the workings of the NRC as well as the Japanese nuclear safety agency. Their book, Fukushima: The Story of a Nuclear Disaster, is jointly written by the authors under the auspices of the Union of Concerned Scientists, one of the best informed networks of nuclear experts we have in the United States. Lochbaum is director of the UCS Nuclear Safety Project and author of Nuclear Waste Disposal Crisis. The book provides a careful and scientific treatment of the unfolding of the Fukushima disaster hour by hour, and highlights the background errors that were made by regulators and owners in the design and operation of the Fukushima plant as well. The book makes numerous comparisons to the current workings of the NRC which permit a degree of assessment of the US regulatory agency.

In brief, Lochbaum and his co-authors appear to have a reasonably high opinion of the technical staff, scientists, and advisors who prepare recommendations for NRC consideration, but a low opinion of the willingness of the five commissioners to adopt costly recommendations that are strongly opposed by the nuclear industry. The authors express frustration that the nuclear safety agencies in both countries appear to have failed to learn important lessons from the Fukushima disaster:

“The [Japanese] government simply seems in denial about the very real potential for another catastrophic accident…. In the United States, the NRC has also continued operating in denial mode. It turned down a petition requesting that it expand emergency evacuation planning to twenty-five miles from nuclear reactors despite the evidence at Fukushima that dangerous levels of radiation can extend at least that far if a meltdown occurs. It decided to do nothing about the risk of fire at over-stuffed spent fuel pools. And it rejected the main recommendation of its own Near-Term Task Force to revise its regulatory framework. The NRC and the industry instead are relying on the flawed FLEX program as a panacea for any and all safety vulnerabilities that go beyond the “design basis.” (kl 117)

They believe that the NRC is excessively vulnerable to influence by the nuclear power industry and to elected officials who favor economic growth over hypothetical safety concerns, with the result that it tends to err in favor of the economic interests of the industry.

Like many regulatory agencies, the NRC occupies uneasy ground between the need to guard public safety and the pressure from the industry it regulates to get off its back. When push comes to shove in that balancing act, the nuclear industry knows it can count on a sympathetic hearing in Congress; with millions of customers, the nation’s nuclear utilities are an influential lobbying group. (36)

They note that the NRC has consistently declined to undertake more substantial reform of its approach to safety, as recommended by its own panel of experts. The key recommendation of the Near-Term Task Force (NTTF) was that the regulatory framework should be anchored in a more strenuous standard of accident prevention, requiring plant owners to address “beyond-design-basis accidents”. The Fukushima earthquake and tsunami events were “beyond-design-basis”; nonetheless, they occurred, and the NTTF recommended that safety planning should incorporate consideration of these unlikely but possible events.

The task force members believed that once the first proposal was implemented, establishing a well-defined framework for decision making, their other recommendations would fall neatly into place. Absent that implementation, each recommendation would become bogged down as equipment quality specifications, maintenance requirements, and training protocols got hashed out on a case-by-case basis. But when the majority of the commissioners directed the staff in 2011 to postpone addressing the first recommendation and focus on the remaining recommendations, the game was lost even before the opening kickoff. The NTTF’s Recommendation 1 was akin to the severe accident rulemaking effort scuttled nearly three decades earlier, when the NRC considered expanding the scope of its regulations to address beyond-design accidents. Then, as now, the perceived need for regulatory “discipline,” as well as industry opposition to an expansion of the NRC’s enforcement powers, limited the scope of reform. The commission seemed to be ignoring a major lesson of Fukushima Daiichi: namely, that the “fighting the last war” approach taken after Three Mile Island was simply not good enough. (kl 253)

As a result, “regulatory discipline” (essentially the pro-business ideology that holds that regulation should be kept to a minimum) prevailed, and the primary recommendation was tabled. The issue was of great importance, in that it involved setting the standard of risk and accident severity for which the owner needed to plan. By staying with the lower standard, the NRC left the door open to the most severe kinds of accidents.

The NTTF also addressed the issue of “delegated regulation,” in which the agency defers to the industry on many issues of certification and risk assessment. (Here is the FAA’s definition of delegated regulation; link.)

The task force also wanted the NRC to reduce its reliance on industry voluntary initiatives, which were largely outside of regulatory control, and instead develop its own “strong program for dealing with the unexpected, including severe accidents.” (252)

Other, more detail-oriented recommendations were rejected as well: for example, a requirement to install reliable hardened containment vents in boiling water reactors, with filters to remove radioactive gas before venting.

But what might seem a simple, logical decision—install a $15 million filter to reduce the chance of tens of billions of dollars’ worth of land contamination as well as harm to the public—got complicated. The nuclear industry launched a campaign to persuade the NRC commissioners that filters weren’t necessary. A key part of the industry’s argument was that plant owners could reduce radioactive releases more effectively by using FLEX equipment…. In March 2013, they voted 3–2 to delay a requirement that filters be installed, and recommended that the staff consider other alternatives to prevent the release of radiation during an accident. (254)

The NRC voted against requiring filters on containment vents, a decision based on industry arguments that the filters were unnecessary and their cost excessive.

The authors argue that the NRC needs to significantly rethink its standards of safety and foreseeable risk.

What is needed is a new, commonsense approach to safety, one that realistically weighs risks and counterbalances them with proven, not theoretical, safety requirements. The NRC must protect against severe accidents, not merely pretend they cannot occur. (257)

Their recommendation is to make use of an existing and rigorous plan for reactor safety incorporating the results of “severe accident mitigation alternatives” (SAMA) analysis already performed — but largely disregarded.

However, they are not optimistic that the NRC will be willing to undertake these substantial changes that would significantly enhance safety and make a Fukushima-scale disaster less likely. Reporting on a post-Fukushima conference sponsored by the NRC, they write:

But by now it was apparent that little sentiment existed within the NRC for major changes, including those urged by the commission’s own Near-Term Task Force to expand the realm of “adequate protection.”

Lochbaum and his co-authors also make an intriguing series of points about the use of modeling and simulation in the effort to evaluate safety in nuclear plants. They agree that simulation methods are an essential part of the toolkit for nuclear engineers seeking to evaluate accident scenarios; but they argue that the simulation tools currently available (or perhaps ever available) fall far short of the precision sometimes attributed to them. So simulation tools sometimes give a false sense of confidence in the existing safety arrangements in a particular setting.

Even so, the computer simulations could not reproduce numerous important aspects of the accidents. And in many cases, different computer codes gave different results. Sometimes the same code gave different results depending on who was using it. The inability of these state-of-the-art modeling codes to explain even some of the basic elements of the accident revealed their inherent weaknesses—and the hazards of putting too much faith in them. (263)

In addition to specific observations about the functioning of the NRC, the authors identify chronic failures in the nuclear power system in Japan that should be of concern in the United States as well. Conflict of interest, falsification of records, and punishment of whistleblowers were part of the culture of nuclear power and nuclear regulation in Japan, and these problems can arise in the United States too. Here are examples of the problems they identify in the Japanese nuclear power system; it is a valuable exercise to consider whether the same issues arise in the US regulatory environment.

Non-compliance and falsification of records in Japan

Headlines scattered over the decades built a disturbing picture. Reactor owners falsified reports. Regulators failed to scrutinize safety claims. Nuclear boosters dominated safety panels. Rules were buried for years in endless committee reviews. “Independent” experts were financially beholden to the nuclear industry for jobs or research funding. “Public” meetings were padded with industry shills posing as ordinary citizens. Between 2005 and 2009, as local officials sponsored a series of meetings to gauge constituents’ views on nuclear power development in their communities, NISA encouraged the operators of five nuclear plants to send employees to the sessions, posing as members of the public, to sing the praises of nuclear technology. (46)

The authors do not provide evidence about similar practices in the United States, though the history of the Davis-Besse nuclear plant in Ohio suggests that similar things happen in the US industry. Charles Perrow treats the Davis-Besse near-disaster in a fair amount of detail; link. Descriptions of the Davis-Besse nuclear incident can be found here, here, here, and here.

Conflict of interest

Shortly after the Fukushima accident, Japan’s Yomiuri Shimbun reported that thirteen former officials of government agencies that regulate energy companies were currently working for TEPCO or other power firms. Another practice, known as amaagari, “ascent to heaven,” spins the revolving door in the opposite direction. Here, the nuclear industry sends retired nuclear utility officials to government agencies overseeing the nuclear industry. Again, ferreting out safety problems is not a high priority.

Punishment of whistle-blowers

In 2000, Kei Sugaoka, a nuclear inspector working for GE at Fukushima Daiichi, noticed a crack in a reactor’s steam dryer, which extracts excess moisture to prevent harm to the turbine. TEPCO directed Sugaoka to cover up the evidence. Eventually, Sugaoka notified government regulators of the problem. They ordered TEPCO to handle the matter on its own. Sugaoka was fired. (47)

There is a similar story in the Davis-Besse plant history.

Factors that interfere with effective regulation

In summary: there appear to be several structural factors that make nuclear regulation less effective than it needs to be.

First is the fact of the political power and influence of the nuclear industry itself. This was a major factor in the background of the Chernobyl disaster as well, where generals and party officials pushed incessantly for rapid completion of reactors; Serhii Plokhy, Chernobyl: The History of a Nuclear Catastrophe. Lochbaum and his collaborators demonstrate the power that TEPCO had in shaping the regulations under which it built the Fukushima complex, including the assumptions that were incorporated about earthquake risk and tsunami risk. Charles Perrow demonstrates a comparable ability by the nuclear industry in the United States to influence the rules and procedures that govern its use of nuclear power (link). This influence permits the owners of nuclear power plants to shape the content of regulation as well as the systems of inspection and oversight that the agency adopts.

A related factor is the set of influences and lobbying points that come from the needs of the economy and the production pressures of the energy industry. (Interestingly enough, this was also a major influence on Soviet decision-making in choosing the graphite-moderated light water reactor for use at Chernobyl and numerous other plants in the 1960s; Serhii Plokhy, Chernobyl: The History of a Nuclear Catastrophe.)

Third is the fact emphasized by Charles Perrow that the NRC is primarily governed by Congress, and legislators are themselves vulnerable to the pressures and blandishments of the industry and demands for a low-regulation business environment. This makes it difficult for the NRC to carry out its role as independent guarantor of the health and safety of the public. Here is Perrow’s description of the problem in The Next Catastrophe: Reducing Our Vulnerabilities to Natural, Industrial, and Terrorist Disasters (quoting Lochbaum from a 2004 Union of Concerned Scientists report):

With utilities profits falling when the NRC got tough after the Time story, the industry not only argued that excessive regulation was the problem, it did something about what it perceived as harassment. The industry used the Senate subcommittee that controls the agency’s budget, headed by a pro-nuclear Republican senator from New Mexico, Pete Domenici. Using the committee’s funds, he commissioned a special study by a consulting group that was used by the nuclear industry. It recommended cutting back on the agency’s budget and size. Using the consultant’s report, Domenici “declared that the NRC could get by just fine with a $90 million budget cut, 700 fewer employees, and a greatly reduced inspection effort.” (italics supplied) The beefed-up inspections ended soon after the threat of budget cuts for the agency. (Mangels 2003) And the possibility for public comment was also curtailed, just for good measure. Public participation in safety issues once was responsible for several important changes in NRC regulations, says David Lochbaum, a nuclear safety engineer with the Union of Concerned Scientists, but in 2004, the NRC bowed to industry pressure and virtually eliminated public participation. (Lochbaum 2004) As Lochbaum told reporter Mangels, “The NRC is as good a regulator as Congress permits it to be. Right now, Congress doesn’t want a good regulator.” (The Next Catastrophe, kl 2799)

A fourth important factor is a pervasive complacency within the professional nuclear community about the inherent safety of nuclear power. This is a factor mentioned by Lochbaum:

Although the accident involved a failure of technology, even more worrisome was the role of the worldwide nuclear establishment: the close-knit culture that has championed nuclear energy—politically, economically, socially—while refusing to acknowledge and reduce the risks that accompany its operation. Time and again, warning signs were ignored and near misses with calamity written off. (kl 87)

This is what we might call an ideological or cultural factor, in that it describes a mental framework for thinking about the technology and the public. It is a very real factor in decision-making, both within the industry and in the regulatory world. Senior nuclear engineering experts at major research universities seem to share the view that the public “fear” of nuclear power is entirely misplaced, given the safety record of the industry. They believe the technical problems of nuclear power generation have been solved, and that a rational society would embrace nuclear power without anxiety. For a rebuttal to this complacency, see Rose and Sweeting’s report in the Bulletin of the Atomic Scientists, “How safe is nuclear power? A statistical study suggests less than expected” (link). Here is the abstract to their paper:

After the Fukushima disaster, the authors analyzed all past core-melt accidents and estimated a failure rate of 1 per 3704 reactor years. This rate indicates that more than one such accident could occur somewhere in the world within the next decade. The authors also analyzed the role that learning from past accidents can play over time. This analysis showed few or no learning effects occurring, depending on the database used. Because the International Atomic Energy Agency (IAEA) has no publicly available list of nuclear accidents, the authors used data compiled by the Guardian newspaper and the energy researcher Benjamin Sovacool. The results suggest that there are likely to be more severe nuclear accidents than have been expected and support Charles Perrow’s “normal accidents” theory that nuclear power reactors cannot be operated without major accidents. However, a more detailed analysis of nuclear accident probabilities needs more transparency from the IAEA. Public support for nuclear power cannot currently be based on full knowledge simply because important information is not available.
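The implication of the Rose and Sweeting estimate can be made concrete with a little arithmetic. If core melts occur at a roughly constant rate per reactor-year, the number of accidents over a given exposure period follows a Poisson model. The sketch below is a hedged illustration, not the authors’ own calculation; the figure of roughly 440 operating reactors worldwide is an assumption supplied for the example.

```python
import math

# Assumed inputs: the Rose/Sweeting rate of 1 core melt per 3704
# reactor-years, and an assumed fleet of ~440 reactors worldwide.
rate_per_reactor_year = 1 / 3704
reactors = 440
years = 10

# Total exposure in reactor-years over the coming decade.
exposure = reactors * years

# Expected number of core-melt accidents (Poisson mean), and the
# probability of at least one such accident in the decade.
expected_accidents = rate_per_reactor_year * exposure
p_at_least_one = 1 - math.exp(-expected_accidents)

print(f"expected core-melt accidents over {years} years: {expected_accidents:.2f}")
print(f"probability of at least one accident: {p_at_least_one:.2f}")
```

Under these assumptions the expected count comes out slightly above one accident per decade, which is consistent with the abstract’s claim that “more than one such accident could occur somewhere in the world within the next decade.”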

Lee Clarke’s book on planning for disaster on the basis of unrealistic models and simulations is relevant here. In Mission Improbable: Using Fantasy Documents to Tame Disaster, Clarke argues that much of the planning currently in place for large-scale disasters depends upon models, simulations, and scenario-building tools in which we should have very little confidence.

The complacency about nuclear safety mentioned here makes safety regulation more difficult and, paradoxically, makes the safe use of nuclear power more unlikely. Only when the risks are confronted with complete transparency and honesty will it be possible to design regulatory systems that do an acceptable job of ensuring the safety and health of the public.

In short, Lochbaum and his co-authors seem to provide evidence for the conclusion that the NRC is not in a position to perform its primary function: to establish a rational and scientifically well grounded set of standards for safe reactor design and operation. Further, its ability to enforce through inspection seems impaired as well by the power and influence the nuclear industry can deploy through Congress to resist its regulatory efforts. Good expert knowledge is canvassed through the NRC’s processes; but the policy recommendations that flow from this scientific analysis are all too often short-circuited by the ability of the industry to fend off new regulatory requirements. Lochbaum’s comment quoted by Perrow above seems all too true: “The NRC is as good a regulator as Congress permits it to be. Right now, Congress doesn’t want a good regulator.” 

It is very interesting to read the transcript of a 2014 hearing of the Senate Committee on Environment and Public Works titled “NRC’S IMPLEMENTATION OF THE FUKUSHIMA NEAR-TERM TASK FORCE RECOMMENDATIONS AND OTHER ACTIONS TO ENHANCE AND MAINTAIN NUCLEAR SAFETY” (link). Senator Barbara Boxer, California Democrat and chair of the committee, opened the meeting with these words:

Although Chairman Macfarlane said, when she announced her resignation, she had assured that ‘‘the agency implemented lessons learned from the tragic accident at Fukushima.’’ She said, ‘‘the American people can be confident that such an accident will never take place here.’’

I say the reality is not a single one of the 12 key safety recommendations made by the Fukushima Near-Term Task Force has been implemented. Some reactor operators are still not in compliance with the safety requirements that were in place before the Fukushima disaster. The NRC has only completed its own action on 4 of the 12 task force recommendations.

This is an alarming assessment, and one that is entirely in accord with the observations made by Lochbaum above.

The 737 MAX disaster as an organizational failure

The topic of the organizational causes of technology failure comes up frequently in Understanding Society. The tragic crashes of two Boeing 737 MAX aircraft in the past year present an important case to study. Is this an instance of pilot error (as has occasionally been suggested)? Is it a case of engineering and design failures? Or are there important corporate and regulatory failures that created the environment in which the accidents occurred, as the public record seems to suggest?

The formal accident investigations are not yet complete, and the FAA and other air safety agencies around the world have not yet approved the aircraft for flight following the suspension of certification following the second crash. There will certainly be a detailed and expert case study of this case at some point in the future, and I will be eager to read the resulting book. In the meantime, though, it is useful to bring the perspectives of Charles Perrow, Diane Vaughan, and Andrew Hopkins to bear on what we can learn about this case from the public media sources that are available. The preliminary sketch of a case study offered below is a first effort and is intended simply to help us learn more about the social and organizational processes that govern the complex technologies upon which we depend. Many of the dysfunctions identified in the safety literature appear to have had a role in this disaster.

I have made every effort to offer an accurate summary based on publicly available sources, but readers should bear in mind that it is a preliminary effort.

The key conclusions I’ve been led to include these:

The updated flight control system of the aircraft (MCAS) created the conditions for crashes in rare flight conditions and instrument failures.

  • Faults in the AOA sensor and the MCAS flight control system persisted through the design process
  • Pilot training and information about changes in the flight control system were likely inadequate to permit pilots to override the control system when necessary

There were fairly clear signs of organizational dysfunction in the development and design process for the aircraft:

  • Disempowered mid-level experts (engineers, designers, software experts)
  • Inadequate organizational embodiment of safety oversight
  • Business priorities placing cost savings, timeliness, profits over safety
  • Executives with divided incentives
  • Breakdown of internal management controls leading to faulty manufacturing processes

Cost-containment and speed trumped safety. It is hard to avoid the conclusion that the corporation put cost-cutting and speed ahead of the professional advice and judgment of the engineers. Management pushed the design and certification process aggressively, leading to implementation of a control system that could fail in foreseeable flight conditions.

The regulatory system seems to have been at fault as well, with the FAA taking a deferential attitude towards the company’s assertions of expertise throughout the certification process. The regulatory process was “outsourced” to a company that already has inordinate political clout in Congress and the agencies.

  • Inadequate government regulation
  • The FAA lacked direct expertise and oversight sufficient to detect design failures
  • Too much influence by the company over regulators and legislators

Here is a video presentation of the case as I currently understand it (link).

See also this earlier discussion of regulatory failure in the 737 MAX case (link). The sections that follow take up the work of several experts on organizational failure whose research is especially relevant to the current case.

Organizations and dysfunction

Ford Rouge Plant

A recurring theme in recent months in Understanding Society is organizational dysfunction and the organizational causes of technology failure. Helmut Anheier’s volume When Things Go Wrong: Organizational Failures and Breakdowns is highly relevant to this topic, and it makes for very interesting reading. The volume includes contributions by a number of leading scholars in the sociology of organizations.

And yet the volume seems to miss the mark in some important ways. For one thing, it is unduly focused on the question of “mortality” of firms and other organizations. Bankruptcy and organizational death are frequent synonyms for “failure” here. This frame is evident in the summary the introduction offers of existing approaches in the field: organizational aspects, political aspects, cognitive aspects, and structural aspects. All bring us back to the causes of extinction and bankruptcy in a business organization. Further, the approach highlights the importance of internal conflict within an organization as a source of eventual failure. But it gives no insight into the internal structure and workings of the organization itself, the ways in which behavior and internal structure function to systematically produce certain kinds of outcomes that we can identify as dysfunctional.

Significantly, however, dysfunction does not routinely lead to death of a firm. (Seibel’s contribution in the volume raises this possibility, which Seibel refers to as “successful failures”). This is a familiar observation from political science: what looks dysfunctional from the outside may be perfectly well tuned to a different set of interests (for example, in Robert Bates’s account of pricing boards in Africa in Markets and States in Tropical Africa: The Political Basis of Agricultural Policies). In their introduction to this volume Anheier and Moulton refer to this possibility as a direction for future research: “successful for whom, a failure for whom?” (14).

The volume tends to look at success and failure in terms of profitability and the satisfaction of stakeholders. But we can define dysfunction in a more granular way by linking characteristics of performance to the perceived “purposes and goals” of the organization. A regulatory agency exists in order to effectively protect the health and safety of the public. In this kind of case, failure is any outcome in which the agency flagrantly and avoidably fails to prevent a serious harm — release of radioactive material, contamination of food, a building fire resulting from defects that should have been detected by inspection. If it fails to do so as well as it might, then it is dysfunctional.

Why do dysfunctions persist in organizations? It is possible to identify several possible causes. The first is that a dysfunction from one point of view may well be a desirable feature from another point of view. The lack of an authoritative safety officer in a chemical plant may be thought to be dysfunctional if we are thinking about the safety of workers and the public as a primary goal of the plant (link). But if profitability and cost-savings are the primary goals from the point of view of the stakeholders, then the cost-benefit analysis may favor the lack of the safety officer.

Second, there may be internal failures within an organization that are beyond the reach of any executive or manager who might want to correct them. The complexity and loose-coupling of large organizations militate against house cleaning on a large scale.

Third, there may be powerful factions within an organization for whom the “dysfunctional” feature is an important component of their own set of purposes and goals. Fligstein and McAdam argue for this kind of disaggregation with their theory of strategic action fields (link). By disaggregating purposes and goals to the various actors who figure in the life cycle of the organization – founders, stakeholders, executives, managers, experts, frontline workers, labor organizers – it is possible to see the organization as a whole as simply the aggregation of the multiple actions and purposes of the actors within and adjacent to the organization. This aggregation does not imply that the organization is carefully adjusted to serve the public good or to maximize efficiency or to protect the health and safety of the public. Rather, it suggests that the resultant organizational structure serves the interests of the various actors to the fullest extent each actor is able to manage.

Consider the account offered by Thomas Misa of the decline of the steel industry in the United States in the first part of the twentieth century in A Nation of Steel: The Making of Modern America, 1865-1925. Misa’s account seems to point to a massive dysfunction in the steel corporations of the inter-war period, a deliberate and sustained failure to invest in research on new steel technologies in metallurgy and production. Misa argues that the great steel corporations — US Steel in particular — failed to remain competitive in their industry in the early years of the twentieth century because management persistently pursued short-term profits and financial advantage for the company through domination of the market, relying on market dominance rather than research and development as its source of revenue and profits.

In short, U.S. Steel was big but not illegal. Its price leadership resulted from its complete dominance in the core markets for steel…. Indeed, many steelmakers had grown comfortable with U.S. Steel’s overriding policy of price and technical stability, which permitted them to create or develop markets where the combine chose not to compete, and they testified to the court in favor of the combine. The real price of stability … was the stifling of technological innovation. (255)

The result was that the modernized steel industries in Europe leap-frogged the previous US advantage, eventually leaving production technology in the United States unviable.

At the periphery of the newest and most promising alloy steels, dismissive of continuous-sheet rolling, actively hostile to new structural shapes, a price leader but not a technical leader: this was U.S. Steel. What was the company doing with technological innovation? (257)

Misa is interested in arriving at a better way of understanding the imperatives leading to technical change — better than neoclassical economics and labor history. His solution highlights the changing relationships that developed between industrial consumers and producers in the steel industry.

We now possess a series of powerful insights into the dynamics of technology and social change. Together, these insights offer the realistic promise of being better able, if we choose, to modulate the complex process of technical change. We can now locate the range of sites for technical decision making, including private companies, trade organizations, engineering societies, and government agencies. We can suggest a typology of user-producer interactions, including centralized, multicentered, decentralized, and direct-consumer interactions, that will enable certain kinds of actions while constraining others. We can even suggest a range of activities that are likely to effect technical change, including standards setting, building and zoning codes, and government procurement. Furthermore, we can also suggest a range of strategies by which citizens supposedly on the “outside” may be able to influence decisions supposedly made on the “inside” about technical change, including credibility pressure, forced technology choice, and regulatory issues. (277-278)

In fact Misa places the dynamic of relationship between producer and large consumer at the center of the imperatives towards technological innovation:

In retrospect, what was wrong with U.S. Steel was not its size or even its market power but its policy of isolating itself from the new demands from users that might have spurred technical change. The resulting technological torpidity that doomed the industry was not primarily a matter of industrial concentration, outrageous behavior on the part of white- and blue-collar employees, or even dysfunctional relations among management, labor, and government. What went wrong was the industry’s relations with its consumers. (278)

This “callous treatment of consumers” proved profoundly harmful when international competition gave large industrial users of steel a choice. When US Steel had market dominance, large industrial users had little choice; but this situation changed after WWII. “This favorable balance of trade eroded during the 1950s as German and Japanese steelmakers rebuilt their bombed-out plants with a new production technology, the basic oxygen furnace (BOF), which American steelmakers had dismissed as unproven and unworkable” (279). Misa quotes a president of a small steel producer: “The Big Steel companies tend to resist new technologies as long as they can … They only accept a new technology when they need it to survive” (280).

*****

Here is an interesting table from Misa’s book that sheds light on some of the economic and political history in the United States since the post-war period, leading right up to the populist politics of 2016 in the Midwest. This chart provides mute testimony to the decline of the rustbelt industrial cities. Michigan, Illinois, Ohio, Pennsylvania, and western New York account for 83% of the steel production on this table. When American producers lost the competitive battle for steel production in the 1980s, the Rustbelt suffered disproportionately, and eventually blue collar workers lost their places in the affluent economy.

Social ontology of government

I am currently writing a book on the topic of the “social ontology of government”. My goal is to provide a short treatment of the social mechanisms and entities that constitute the workings of government. The book will ask some important basic questions: what kind of thing is “government”? (I suggest it is an agglomeration of organizations, social networks, and rules and practices, with no overriding unity.) What does government do? (I simplify and suggest that governments create the conditions of social order and formulate policies and rules aimed at bringing about various social priorities that have been selected through the governmental process.) How does government work — what do we know about the social and institutional processes that constitute its metabolism? (How do government entities make decisions, gather needed information, and enforce the policies they construct?)

In my treatment of the topic of the workings of government I treat the idea of “dysfunction” with the same seriousness as I do topics concerning the effective and functional aspects of governmental action. Examples of dysfunctions include principal-agent problems, conflict of interest, loose coupling of agencies, corruption, bribery, and the corrosive influence of powerful outsiders. It is interesting to me that this topic — ontology of government — has unexpectedly crossed over with another of my interests, the organizational causes of large-scale accidents.

If there are guiding perspectives in my treatment, they are eclectic: Neil Fligstein and Doug McAdam, Manuel DeLanda, Nicos Poulantzas, Charles Perrow, Nancy Leveson, and Lyndon B. Johnson, for example.

In light of these interests, I find the front page of the New York Times on March 28, 2019 to be a truly fascinating amalgam of the social ontology of government, with a heavy dose of dysfunction. Every story on the front page highlights one feature or another of the workings and failures of government. Let’s briefly name these features. (The item numbers flow roughly from upper right to lower left.)

Item 1 is the latest installment of the Boeing 737 MAX story. Failures of regulation and a growing regime of “collaborative regulation” in which the FAA delegates much of the work of certification of aircraft safety to the manufacturer appear at this early stage to be a part of the explanation of this systems failure. This was the topic of a recent post (link).

Items 2 and 3 feature the processes and consequences of failed government — the social crisis in Venezuela created in part by the breakdown of legitimate government, and the fundamental and continuing inability of the British government and its prime minister to arrive at a rational and acceptable policy on an issue of the greatest importance for the country. Given that decision-making and effective administration of law are fundamental functions of government, these two examples are key contributions to the ontology of government. The Brexit story also highlights the dysfunctions that flow from the shameful self-dealing of politicians and leaders who privilege their own political interests over the public good. Boris Johnson, this one’s for you!

Item 4 turns us to the dynamics of presidential political competition. This item falls on the favorable side of the ledger, illustrating the important role that a strong independent press plays in helping to inform the public about the past performance and behavior of candidates for high office. It is an important example of in-depth journalism and provides the public with accurate, nuanced information about an appealing candidate with a policy history as mayor that many may find unpalatable. The story also highlights the role that non-governmental organizations play in politics and government action, in this instance the ACLU.

Item 5 brings us inside the White House and gives the reader a look at the dynamics and mechanisms through which a small circle of presidential advisors are able to determine a particular approach to a policy issue that they favor. It displays the vulnerability the office of president shows to the privileged insiders’ advice concerning policies they personally favor. Whether it is Mick Mulvaney, acting chief of staff to the current president, or Robert McNamara’s advice to JFK and LBJ leading to escalation in Vietnam, the process permits ideologically committed insiders to wield extraordinary policy power.

Item 6 turns to the legislative process, this time in the New Jersey legislature, on the topic of the legalization of marijuana. This story too falls on the positive side of the “function-dysfunction” spectrum, in that it describes a fairly rational and publicly visible process of fact-gathering and policy assessment by a number of New Jersey legislators, leading to the withdrawal of the legislation.

Item 7 turns to the mechanisms of private influence on government, in a particularly unsavory but revealing way. The story reveals details of a high-end dinner “to pay tribute to the guest of honor, Gov. Andrew M. Cuomo.” The article reports, “Lobbyists told their clients that the event would be a good thing to go to”, at a minimum ticket price of $25,000 per couple. This story connects the dots between private interest and efforts to influence governmental policy. In this case the dots are not very far apart.

With a little effort all these items could be mapped onto the diagram of the interconnections within and across government and external social groups provided above.

Nuclear power plant siting decisions

Readers may be skeptical about the practical importance of the topic of nuclear power plant siting decisions, since very few new nuclear plants have been proposed or approved in the United States for decades. However, the topic is one for which there is an extensive historical record, and it is a process that illuminates the challenge for government to balance risk and benefit, private gain and public cost. Moreover, siting inherently brings up issues that are of concern both to the public in general (throughout a state or region of the country) and to the citizens who live in close proximity to the recommended site. The NIMBY problem is unavoidable — it is someone’s backyard, and it is a worrisome neighbor. So this is a good case for thinking creatively about the responsibilities of government for ensuring the public good in the face of risky private activity, and about the detailed institutions of regulation and oversight that would make wise public outcomes more likely.

I’ve been thinking quite a bit recently about technology failure, government regulation, and risky technologies, and there is a lot to learn about these subjects by looking at the history of nuclear power in the United States. Two books in particular have been interesting to me. Neither is particularly recent, but both shed valuable light on the public-policy context of nuclear decision-making. The first is Joan Aron’s account of the processes that led to the eventual cancellation of the Shoreham nuclear power plant on Long Island (Licensed To Kill?: The Nuclear Regulatory Commission and the Shoreham Power Plant) and the second is Donald Stever, Jr.’s account of the licensing process for the Seabrook nuclear power plant in Seabrook and The Nuclear Regulatory Commission: The Licensing of a Nuclear Power Plant. Both are fascinating books and well worthy of study as a window into government decision-making and regulation. Stever’s book is especially interesting because it is a highly capable analysis of the licensing process, both at the state level and at the level of the NRC, and because Stever himself was a participant. As an assistant attorney general in New Hampshire he was assigned the role of Counsel for the Public throughout the process in New Hampshire.

Joan Aron’s 1997 book Licensed to Kill? is a detailed case study of the effort to establish the Shoreham nuclear power plant on Long Island in the 1980s. LILCO had proposed the plant to respond to rising demand for electricity on Long Island as population and energy use rose. And Long Island is a long, narrow island on which traffic congestion at certain times of day is legendary. Evacuation planning was both crucial and, in the end, perhaps impossible.

This is an intriguing story, because it led eventually to the cancellation of the operating license for the plant by the NRC after completion of the plant. And the cancellation resulted largely from the effectiveness of public opposition and interest-group political pressure. Aron provides a detailed account of the decisions made by the public utility company LILCO, the AEC and NRC, New York state and local authorities, and citizen activist groups that led to the costliest failed investment in the history of nuclear power in the United States.

In 1991 the NRC made the decision to rescind the operating license for the Shoreham plant, after completion at a cost of over $5 billion but before it had generated a kilowatt of electricity.

Aron’s basic finding is that the project collapsed in costly fiasco because of a loss of trust among the diverse stakeholders: LILCO, the Long Island public, state and local agencies and officials, scientific experts, and the Nuclear Regulatory Commission. The Long Island tabloid Newsday played a role as well, sensationalizing every step of the process and contributing to public distrust of the process. Aron finds that the NRC and LILCO underestimated the need for full analysis of safety and emergency preparedness issues raised by the plant’s design, including the issue of evacuation, in the event of disaster, from a largely inaccessible island that is home to two million people. LILCO’s decision to upscale the capacity of the plant in the middle of the process contributed to the failure as well. And the occurrence of the Three Mile Island disaster in 1979 gave new urgency to the concerns of citizens living within fifty miles of the Shoreham site about the risks of a nuclear plant.

As we have seen, Shoreham failed to operate because of intense public opposition, in which the governor played a key role, inspired in part by the utility’s management incompetence and distrust of the NRC. Inefficiencies in the NRC licensing process were largely irrelevant to the outcome. The public by and large ignored NRC’s findings and took the nonsafety of the plant for granted. (131)

The most influential issue was public safety: would it be possible to perform an orderly evacuation of the population near the plant in the event of a serious emergency? Clarke and Perrow (included in Helmut Anheier, ed., When Things Go Wrong: Organizational Failures and Breakdowns) provide an extensive analysis of the failures that occurred during tests of the emergency evacuation plan designed by LILCO. As they demonstrate, the errors that occurred during the evacuation test were both “normal” and potentially deadly.

One thing that comes out of both books is the fact that the commissioning and regulatory processes are far from ideal examples of the rational development of sound public policy. Rather, business interests, institutional shortcomings, lack of procedural knowledge by committee chairs, and dozens of other factors lead to outcomes that appear to fall far short of what the public needs. But in addition to ordinary intrusions into otherwise rational policy deliberations, there are other reasons to believe that decision-making is more complicated and less rational than a simple model of rational public policy formation would suggest. Every decision-maker brings a set of “framing assumptions” about the reality concerning which he or she is deliberating. These framing assumptions impose an unavoidable kind of cognitive bias into collective decision-making. A business executive brings a worldview to the question of regulation of risk that is quite different from that of an ecologist or an environmental activist. This is different from the point often made about self-interest; our framing assumptions do not feel like expressions of self-interest, but rather simply secure convictions about how the world works and what is important in the world. This is one reason why the work of social scientists like Scott Page (The Difference: How the Power of Diversity Creates Better Groups, Firms, Schools, and Societies) on the value of diversity in problem-solving and decision-making is so important: by bringing multiple perspectives and cognitive frames to a problem, we are more likely to get a balanced decision that gives appropriate weight to the legitimate interests and concerns of all involved.
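Page's point about diverse perspectives can be illustrated with a toy hill-climbing model (a minimal sketch in the spirit of his argument; the random landscape, the step-size "heuristics", and all numbers are invented for illustration and are not Page's own formal model). Two searchers with different tool kits each get stuck at different local optima; a team that pools both tool kits retraces the first searcher's path and can only stop somewhere at least as good:

```python
import random

random.seed(4)
N = 200
# A circular "problem landscape": positions with random solution qualities.
landscape = [random.random() for _ in range(N)]

def climb(start, heuristics):
    """Greedy search: repeatedly apply the first step size that improves
    the current value; stop at a local optimum relative to this searcher's
    own set of step sizes (its cognitive 'tool kit')."""
    pos = start
    improved = True
    while improved:
        improved = False
        for step in heuristics:
            nxt = (pos + step) % N
            if landscape[nxt] > landscape[pos]:
                pos, improved = nxt, True
                break
    return landscape[pos]

start = 0
alice = climb(start, [1, 2, 3])               # small-step heuristics
bob = climb(start, [11, 13, 17])              # a different tool kit
team = climb(start, [1, 2, 3, 11, 13, 17])    # diverse, pooled heuristics
```

Because the pooled team tries the small steps first, it follows alice's path exactly until she stalls, then the extra heuristics may carry it past her local optimum; so the diverse group never does worse than the homogeneous one in this toy setup, and often does strictly better.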

Here is an interesting concrete illustration of cognitive bias (with a generous measure of self-interest as well) in Stever’s discussion of siting decisions for nuclear power plants:

From the time a utility makes the critical in-house decision to choose a site, any further study of alternatives is necessarily negative in approach. Once sufficient corporate assets have been sunk into the chosen site to produce data adequate for state site review, the company’s management has a large enough stake in it to resist suggestions that a full study of site alternatives be undertaken as a part of the state (or for that matter as a part of the NEPA) review process. Hence, the company’s methodological approach to evaluating alternates to the chosen site will always be oriented toward the desired conclusion that the chosen site is superior. (Stever 1980: 30)

This is the bias of sunk costs, both inside the organization and in the cognitive frames of independent decision makers in state agencies.

Stever’s central point here is a very important one: the pace of site selection favors the energy company’s choices over the concerns and preferences of affected groups, because the company is in a position to have dedicated substantial resources to development of the preferred site proposal. Likewise, scientific experts have a difficult time making their concerns about habitat or traffic flow heard in this context.

But here is a crucial thing to observe: the siting decision is only one of dozens in the development of a new power plant, which is itself only one of hundreds of government / business decisions made every year. What Stever describes is a structural bias in the regulatory process, not a one-off flaw. At bottom, this is the task that government faces when considering the creation of a new nuclear power plant: “to assess the various public and private costs and benefits of a site proposed by a utility” (32); and Stever’s analysis makes it doubtful that existing public processes do this in a consistent and effective way. Stever argues that government needs to have more of a role in site selection, not less, as pro-market advocates demand: “The kind of social and environmental cost accounting required for a balanced initial assessment of, and development of, alternative sites should be done by a public body acting not as a reviewer of private choices, but as an active planner” (32).

Notice how this scheme shifts the pace and process from the company to the relevant state agency. The preliminary site selection and screening is done by a state site planning agency, with input then invited from the utilities companies, interest groups, and a formal environmental assessment. This places the power squarely in the hands of the government agency rather than the private owner of the plant — reflecting the overriding interest the public has in ensuring health, safety, and environmental controls.

Stever closes a chapter on regulatory issues with these cogent recommendations (38-39):

  1. Electric utility companies should not be responsible for decisions concerning early nuclear-site planning.
  2. Early site identification, evaluation, and inventorying is a public responsibility that should be undertaken by a public agency, with formal participation by utilities and interest groups, based upon criteria developed by the state legislature.
  3. Prior to the use of a particular site, the state should prepare a complete environmental assessment for it, and hold adjudicatory hearings on contested issues.
  4. Further effort should be made toward assessing the public risk of nuclear power plant sites.
  5. In areas like New England, characterized by geographically small states and high energy demand, serious efforts should be made to develop regional site planning and evaluation.
  6. Nuclear licensing reform should focus on the quality of decision-making.
  7. There should be a continued federal presence in nuclear site selection, and the resolution of environmental problems should not be delegated entirely to the states. 

(It is very interesting to me that I have not been able to locate a full organizational study of the Nuclear Regulatory Commission itself.)

Deficiencies of practical rationality in organizations

Suppose we are willing to take seriously the idea that organizations possess a kind of intentionality — beliefs, goals, and purposive actions — and suppose that we believe that the microfoundations of these quasi-intentional states depend on the workings of individual purposive actors within specific sets of relations, incentives, and practices. How does the resulting form of “bureaucratic intelligence” compare with human thought and action?

There is a major set of differences between organizational “intelligence” and human intelligence that turn on the unity of human action compared to the fundamental disunity of organizational action. An individual human being gathers a set of beliefs about a situation, reflects on a range of possible actions, and chooses a line of action designed to bring about his/her goals. An organization is disjointed in each of these activities. The belief-setting part of an organization usually consists of multiple separate processes culminating in an amalgamated set of beliefs or representations. And this amalgamation often reflects deep differences in perspective and method across various sub-departments. (Consider inputs into an international crisis incorporating assessments from intelligence, military, and trade specialists.)

Second, individual intentionality possesses a substantial degree of practical autonomy. The individual assesses and adopts the set of beliefs that seem best to him or her in current circumstances. The organization in its belief-acquisition is subject to conflicting interests, both internal and external, that bias the belief set in one direction or the other. (This is the central thrust of experts on science policy like Naomi Oreskes.) The organization is not autonomous in its belief formation processes.

Third, an individual’s actions have a reasonable level of consistency and coherence over time. The individual seeks to avoid being self-defeating by doing X and Y while knowing that X undercuts Y. An organization is entirely capable of pursuing a suite of actions which embody exactly this kind of inconsistency, precisely because the actions chosen are the result of multiple disagreeing sub-agencies and officers.

Fourth, we have some reason to expect a degree of stability in the goals and values that underlie actions by an individual. But organizations, exactly because their behavior is a joint product of sub-agents with conflicting plans and goals, are entirely capable of rapid change of goals and values. Deepening this instability are the fluctuating powers and interests of external stakeholders who apply pressure for different values and goals over time.

Finally, human thinkers are potentially epistemic thinkers — they are at least potentially capable of following disciplines of analysis, reasoning, and evidence in their practical engagement with the world. By contrast, because of the influence of interests, both internal and external, organizations are perpetually subject to the distortion of belief, intention, and implementation by actors who have an interest in the outcome of the project. And organizations have little ability to apply rational standards to their processes of belief, intention, and implementation formation. Organizational intentionality lacks overriding rational control.

Consider more briefly the topic of action. Human actors suffer various deficiencies of performance when it comes to purposive action, including weakness of the will and self-deception. But organizations are altogether less capable of effectively mounting the steps needed to fully implement a plan or a complicated policy or action. This is because of the looseness of the linkages that exist between executive and agent within an organization, the perennial possibility of principal-agent problems, and the potential interference with performance created by interested parties outside the organization.

This line of thought suggests that organizations lack “unity of apperception and intention”. There are multiple levels and zones of intention formation, and much of this plurality persists throughout real processes of organizational thinking. And this disunity affects belief, intention, and action alike. Organizations are not univocal at any point. Belief formation, intention formation, and action remain fragmented and multivocal.

These observations are somewhat parallel to the paradoxes of social choice and various voting systems governing a social choice function. Kenneth Arrow demonstrated that it is impossible to design a voting system that guarantees consistency of choice by a group of individually consistent voters. The analogy here is the idea that there is no organizational design possible that guarantees a high degree of consistency and rationality in large organizational decision processes at any stage of quasi-intentionality, including belief acquisition, policy formulation, and policy implementation.
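The social-choice analogy can be made concrete with the classic Condorcet cycle, the textbook example behind Arrow's result: three perfectly consistent voters whose pairwise majority votes produce a cyclic group "preference".

```python
# The classic Condorcet paradox: each voter ranks options A, B, C
# consistently, yet majority rule over pairs yields a cycle.
ballots = [
    ["A", "B", "C"],   # voter 1: A > B > C
    ["B", "C", "A"],   # voter 2: B > C > A
    ["C", "A", "B"],   # voter 3: C > A > B
]

def majority_prefers(x, y, ballots):
    """True if a strict majority of ballots rank x above y."""
    wins = sum(1 for b in ballots if b.index(x) < b.index(y))
    return wins > len(ballots) / 2

for x, y in [("A", "B"), ("B", "C"), ("C", "A")]:
    print(f"majority prefers {x} over {y}: {majority_prefers(x, y, ballots)}")
# A beats B, B beats C, and C beats A: every individual is consistent,
# but the group's aggregated "preference" is cyclic.
```

The organizational analogue is the same in structure: consistent sub-agents, aggregated through an organizational process, need not yield a consistent corporate "mind".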

Is corruption a social thing?

When we discuss the ontology of various aspects of the social world, we are often thinking of such things as institutions, organizations, social networks, value systems, and the like. These examples pick out features of the world that are relatively stable and functional. Where does an imperfection or dysfunction of social life like corruption fit into our social ontology?

We might say that “corruption” is a descriptive category that is aimed at capturing a particular range of behavior, like stealing, gossiping, or asceticism. This makes corruption a kind of individual behavior, or even a characteristic of some individuals. “Mayor X is corrupt.”

This initial effort does not seem satisfactory, however. The idea of corruption is tied to institutions, roles, and rules in a very direct way, and therefore we cannot really present the concept accurately without articulating these institutional features of the concept of corruption. Corruption might be paraphrased in these terms:

  • Individual X plays a role Y in institution Z; role Y prescribes honest and impersonal performance of duties; individual X accepts private benefits to take actions that are contrary to the prescriptions of Y. In virtue of these facts X behaves corruptly.

Corruption, then, involves actions taken by officials that deviate from the rules governing their role, in order to receive private benefits from the subjects of those actions. Absent the rules and role, corruption cannot exist. So corruption is a feature that presupposes certain social facts about institutions. (Perhaps there is a link to Searle’s social ontology here; link.)

We might consider that corruption is analogous to friction in physical systems. Friction is a factor that affects the performance of virtually all mechanical systems, but that is a second-order factor within classical mechanics. And it is possible to give mechanical explanations of the ubiquity of friction, in terms of the geometry of adjoining physical surfaces, the strength of inter-molecular attractions, and the like. Analogously, we can offer theories of the frequency with which corruption occurs in organizations, public and private, in terms of the interests and decision-making frameworks of variously situated actors (e.g. real estate developers, land value assessors, tax assessors, zoning authorities …). Developers have a business interest in favorable rulings from assessors and zoning authorities; some officials have an interest in accepting gifts and favors to increase personal income and wealth; each makes an estimate of the likelihood of detection and punishment; and a certain rate of corrupt exchanges is the result.
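The friction analogy can be pushed one step further with a toy expected-utility calculation (the decision rule and all numbers here are illustrative assumptions, not estimates from the literature): if risk-neutral officials accept a bribe only when it exceeds the expected penalty, then the resulting rate of corrupt exchanges falls as detection and punishment become more weighty.

```python
def accepts_bribe(bribe, penalty, p_detect):
    """A risk-neutral official accepts iff the expected gain is positive."""
    return bribe - p_detect * penalty > 0

def incidence(bribe, penalty, detection_probs):
    """Fraction of officials who accept, given heterogeneous beliefs about
    the probability of detection: the resulting 'rate of corrupt exchanges'."""
    accepters = sum(accepts_bribe(bribe, penalty, p) for p in detection_probs)
    return accepters / len(detection_probs)

# Officials differ in how likely they believe detection is (0.00 .. 0.99).
beliefs = [i / 100 for i in range(100)]

weak = incidence(bribe=10_000, penalty=50_000, detection_probs=beliefs)
strong = incidence(bribe=10_000, penalty=500_000, detection_probs=beliefs)
# Raising the penalty tenfold cuts the acceptance rate from 20% to 2%.
```

This is of course only the actor-level half of the story; the organizational features discussed next (auditing, whistle-blowing, supervision) work precisely by shifting the detection probabilities and penalties that enter such calculations.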

This line of thought once again makes corruption a feature of the actors and their calculations. But it is important to note that organizations themselves have features that make corrupt exchanges either more likely or less likely (link, link). Some organizations are corruption-resistant in ways in which others are corruption-neutral or corruption-enhancing. These features include internal accounting and auditing procedures; whistle-blowing practices; executive and supervisor vigilance; and other organizational features. Further, governments and systems of law can make arrangements that discourage corruption; the incidence of corruption is influenced by public policy. For example, legal requirements on transparency in financial practices by firms, investment in investigatory resources in oversight agencies, and weighty penalties for companies found guilty of corrupt practices can affect the incidence of corruption. (Robert Klitgaard’s treatment of corruption is relevant here; he provides careful analysis of some of the institutional and governmental measures that can be taken to discourage corrupt practices; link, link.) And there are cross-country indices of corruption (e.g. Transparency International) that suggest the effectiveness of anti-corruption measures at the state level; Finland, Norway, and Switzerland rank well on the Transparency International index.

So — is corruption a thing? Does corruption need to be included in a social ontology? Does a realist ontology of government and business organization have a place for corruption? Yes, yes, and yes. Corruption is a real property of individual actors’ behavior, observable in social life. It is a consequence of strategic rationality by various actors. Corruption is a social practice with its own supporting or inhibiting culture. Some organizations effectively espouse a core set of values of honesty and correct performance that make corruption less frequent. And corruption is a feature of the design of an organization or bureau, analogous to “mean-time-between-failure” as a feature of a mechanical design. Organizations can adopt institutional protections and cultural commitments that minimize corrupt behavior, while other organizations fail to do so and thereby encourage corrupt behavior. So “corruption-vulnerability” is a real feature of organizations and corruption has a social reality.
