Declining industries

Why is it so difficult for leaders in various industries and sectors to seriously address the existential threats that sometimes arise? Planning for marginal changes in the business environment is fairly simple; problems can be solved, costs can be cut, and the firm can stay in the black. But what about more radical, more distant threats? What about the grocery sector when confronted by Amazon’s radical steps in food selling? What about Polaroid or Kodak when confronted by the rise of digital photography in the 1990s? What about the US steel industry in the 1960s when confronted with rising Asian competition and aging manufacturing facilities?

From the outside these companies and sectors seem like dodos incapable of confronting the threats that imperil them. They seem to be ignoring oncoming train wrecks simply because these catastrophes are still in the distant future. And yet the leaders of these companies were, generally speaking, talented, motivated men and women. So what are the organizational or cognitive barriers that make it so difficult for leaders to successfully confront the biggest threats they face?

Part of the answer seems to be that distant hazards look smaller than the immediate, near-term challenges an organization must face, so there is a systematic bias towards myopic decision-making. This sounds like a Kahneman-Tversky style of cognitive shortcoming.
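One standard way to make the bias concrete is a discounting calculation (a textbook illustration, not an argument the original discussion makes explicitly): a leader who implicitly discounts the future at 15% per year values a $10 billion catastrophe twenty years away at roughly $10B / (1.15)^20 ≈ $610 million in present-value terms, a smaller problem, on those implicit numbers, than a $1 billion loss landing this quarter. On that arithmetic the distant existential threat is "rationally" deprioritized.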

A second possible explanation is that it is easy enough to persuade oneself that distant threats will resolve themselves organically, or that the organization will discover novel solutions in the future. This seems to be part of the reason that climate-change foot-draggers take the position they do: “things will sort out”; “new technologies will help solve the problems in the future.” This sounds like a classic example of weakness of the will — an unwillingness to rationally confront hard truths about the future that ought to influence choices today but often don’t.

Then there is the timeframe of accountability that is in place in government, business, and non-profit organizations alike. Leaders are rewarded and punished for short-term successes and failures, not for prudent long-term planning and preparation. This is clearly true for term-limited elected officials, but it is equally true for executives whose stakeholders evaluate performance based on quarterly profits rather than long-term objectives and threats.

We judge harshly those leaders who allow their firms or organizations to perish because of a chronic failure to plan for substantial change in the environments in which they will need to operate in the future. Nero is not remembered kindly for his dedication to his fiddle. And yet at any given time, many industries are in precisely that situation. What kind of discipline and commitment can protect organizations against this risk?

This is an interesting question in the abstract. But it is also a challenging question for people who care about the long-term viability of colleges and universities. Are there forces at work today that will bring about an existential crisis for universities in twenty years (enrollments, tuition pressure, technology change)? Are there technological or organizational choices that should be made today that would help to avert those crises in the future? And are university leaders taking the right steps to prepare their institutions for the futures they will face in several decades?

Gaining compliance

Organizations always involve numerous staff members whose behavior has the potential to create significant risk for individuals and for the organization but who are only loosely supervised. This situation unavoidably raises principal-agent problems. Let’s assume that the great majority of staff members are motivated by good intentions and ethical standards. That means there is a small number of individuals whose behavior is neither well-intentioned nor ethical. What arrangements can an organization put in place to prevent bad behavior and protect individuals and the integrity of the organization?

For certain kinds of bad behavior there are well-understood institutional arrangements that work well to detect and deter the wrong actions. This is especially true for business transactions, purchasing, control of cash, expense reporting and reimbursement, and other financial processes within the organization. The audit and accounting functions within almost every sophisticated organization permit a reasonably high level of confidence in the likelihood of detection of fraud, theft, and misreporting. This doesn’t mean that corrupt financial behavior does not occur; but audits make it much more difficult to sustain persistent dishonest behavior. So an organization with an effective audit function is likely to have a reasonably high level of compliance in the areas where standard audits can be effectively conducted.

A second kind of compliance effort has to do with the culture and practice of observer reporting of misbehavior. Compliance hotlines allow individuals who have observed (or suspected) bad behavior to report that behavior to responsible agents who are obligated to investigate these allegations. Policies that require reporting of certain kinds of bad behavior to responsible officers of the organization — sexual harassment, racial discrimination, or fraudulent actions, for example — should have the effect of revealing some kinds of misbehavior, and deterring others from engaging in bad behavior. So a culture and expectation of reporting is helpful in controlling bad behavior.

A third approach that some organizations take to compliance is to place a great deal of emphasis on the moral culture of the organization — shared values, professional duty, and role responsibilities. Leaders can support and facilitate a culture of voluntary adherence to the values and policies of the organization, so that virtually all members of the organization fall in the “well-intentioned” category. The thrust of this approach is to make a large effort at eliciting voluntary good behavior. Business professor David Hess has done a substantial amount of research on these final two topics (link, link).

Each of these organizational mechanisms has some efficacy. But unfortunately they do not suffice to create an environment where we can be highly confident that serious forms of misconduct do not occur. In particular, reporting and culture are only partially efficacious when it comes to private and covert behavior like sexual assault, bullying, and discriminatory speech and behavior in the workplace. This leads to an important question: are there more intrusive mechanisms of supervision and observation that would permit organizations to discover patterns of misconduct even if they remain unreported by observers and victims? Are there better ways for an organization to ensure that no one is subject to the harmful actions of a predator or harasser?

A more active strategy for an organization committed to eliminating sexual assault is to attempt to predict the environments where inappropriate interpersonal behavior is possible and to redesign the setting so the behavior is substantially less likely. For example, a hospital may require that any physical examinations of minors must be conducted in the presence of a chaperone or other health professional. A school of music or art may require that after-hours private lessons are conducted in semi-public locations. These rules would deprive a potential predator of the seclusion needed for the bad behavior. And the practitioner who is observed violating the rule would then be suspect and subject to further investigation and disciplinary action.

Here is a perhaps far-fetched idea: a “behavior audit” that is periodically performed in settings where inappropriate covert behavior is possible. Here we might imagine a process in which a random set of people who might have been in a position to be subject to inappropriate behavior is periodically selected for interview. These individuals would then be interviewed with an eye to helping to surface possible negative or harmful experiences they have had. This process might be carried out for groups of patients, students, athletes, performers, or auditioners in the entertainment industry. And the goal would be to uncover traces of the kinds of sexual harassment and assault that are at the heart of recent revelations in a myriad of industries and organizations. The results of such an audit would occasionally reveal a pattern of previously unknown behavior requiring additional investigation, while the more frequent results would be negative. This process would lead to a higher level of confidence that the organization has reasonably good knowledge of the frequency and scope of bad behavior, and a better basis for putting in place a plan of remediation.
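A minimal sketch of how the sampling step of such a behavior audit might be implemented, in Python. The roster, the sample size, and the rule excluding recent interviewees are illustrative assumptions, not a tested protocol:

```python
import random

def select_audit_sample(roster, k, recently_interviewed, seed=None):
    """Randomly pick k individuals for a periodic behavior-audit interview.

    roster: everyone who was in a position to be subject to covert
    misbehavior this period (patients, students, athletes, auditioners).
    recently_interviewed: excluded this cycle so coverage spreads over time.
    """
    rng = random.Random(seed)
    eligible = [p for p in roster if p not in recently_interviewed]
    return rng.sample(eligible, min(k, len(eligible)))

# Example: interview 5 of this quarter's 200 patients, skipping last cycle's sample.
print(select_audit_sample([f"patient-{i}" for i in range(200)], 5, {"patient-3"}))
```

Random selection matters here for the same reason it matters in financial audits: because no one can predict which encounters will be probed, the audit deters as well as detects.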

All of these organizational strategies serve fundamentally as attempts to solve principal-agent problems within the organization. The principals of the organization have expectations about the norms that ought to govern behavior within the organization. These mechanisms are intended to increase the likelihood that there is conformance between the principal’s expectations and the agent’s behavior. And, when they fail, several of these mechanisms are intended to make it more likely that bad behavior is identified and corrected.

(Here is an earlier post treating scientific misconduct as a principal-agent problem; link.)

Trust and organizational effectiveness

It is fairly well agreed that organizations require a degree of trust among the participants in order for the organization to function at all. But what does this mean? How much trust is needed? How is trust cultivated among participants? And what are the mechanisms through which trust enhances organizational effectiveness?

The minimal requirements of cooperation presuppose a certain level of trust. As A plans and undertakes a sequence of actions designed to bring about Y, his or her efforts must rely upon the coordination promised by other actors. If A does not have a sufficiently high level of confidence in B’s assurances and compliance, then A will be rationally compelled to choose another series of actions. If Larry Bird hadn’t trusted his teammate Dennis Johnson, the famous steal would not have happened.

First, what do we mean by trust in the current context? Each actor in an organization or group has intentions, engages in behavior, and communicates with other actors. Part of communication is often in the form of sharing information and agreeing upon a plan of coordinated action. Agreeing upon a plan in turn often requires statements and commitments from various actors about the future actions they will take. Trust is the circumstance that permits others to rely upon those statements and commitments. We might say, then, that A trusts B just in case —

  • A believes that when B asserts P, this is an honest expression of B’s beliefs.
  • A believes that when B says he/she will do X, this is an honest commitment on B’s part and B will carry it out (absent extraordinary reasons to the contrary).
  • A believes that when B asserts that his/her actions will be guided by his/her best understanding of the purposes and goals of the organization, this is a truthful expression.
  • A believes that B’s future actions, observed and unobserved, will be consistent with his/her avowals of intentions, values, and commitments.
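Schematically, this treats trust as the conjunction of four belief conditions. A toy rendering in Python (the field names are my own labels for the bullets above, not a measurement scheme):

```python
from dataclasses import dataclass

@dataclass
class BeliefsAboutB:
    honest_assertions: bool      # B's assertions express B's actual beliefs
    honest_commitments: bool     # B's stated commitments will be carried out
    guided_by_org_goals: bool    # B's avowal of organizational purposes is truthful
    consistent_unobserved: bool  # B's unobserved actions match B's avowals

def a_trusts_b(b: BeliefsAboutB) -> bool:
    # Trust as the conjunction of the four belief conditions above
    return (b.honest_assertions and b.honest_commitments
            and b.guided_by_org_goals and b.consistent_unobserved)
```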

So what are some reasons why mistrust might rear its ugly head between actors in an organization? Why might A fail to trust B?

  • A may believe that B’s private interests are driving B’s actions (rather than adherence to prior commitments and values).
  • A may believe that B suffers from weakness of the will, an inability to carry out his honest intentions.
  • A may believe that B manipulates his statements of fact to suit his private interests.
  • Or less dramatically: A may not have high confidence in these features of B’s behavior.
  • B may have no real interest or intention in behaving in a truthful way.

And what features of organizational life and practice might be expected to either enhance inter-personal trust or to undermine it?

Trust is enhanced by individuals having the opportunity to get acquainted with their collaborators in a more personal way — to see, in non-organizational contexts, that they are generally well-intentioned; that they make serious efforts to live up to their stated intentions and commitments; and that they are generally honest. So perhaps there is a rationale for the bonding exercises that many companies undertake for their workers.

Likewise, trust is enhanced by the presence of a shared and practiced commitment to the value of trustworthiness. An organization itself can enhance trust among its participants by performing the actions that its participants expect the organization to perform. For example, an organization that abruptly and without consultation ends an important employee benefit undermines employees’ trust that the organization has their best interests at heart. This abrogation of prior obligations may in turn lead individuals to behave in a less trustworthy way, and lead others to have lower levels of trust in each other.

How does enhancing trust promise to bring about higher levels of organizational effectiveness? Fundamentally this comes down to the value of teamwork and the burden of unnecessary transaction costs. If every expense report requires investigation, the amount of resources spent on accountants will be much greater than in a situation where only the outlying reports are questioned. If each vice president needs to defend himself or herself against the possibility that another vice president is conspiring against him or her, then less time and energy are available to do the work of the organization. If the CEO doesn’t have high confidence that her executive team will work wholeheartedly to bring about a successful implementation of a risky investment, then the CEO will choose less risky investments.
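The transaction-cost arithmetic in the expense-report example can be sketched in a few lines. A toy illustration, assuming amounts are roughly normally distributed and that a simple z-score screen is an acceptable flag (both idealizations; a robust, median-based screen would be better in practice):

```python
import statistics

def reports_to_investigate(amounts, z_threshold=2.0):
    """High-trust regime: investigate only outlying expense reports, so audit
    effort scales with the number of anomalies, not the number of reports."""
    mean = statistics.mean(amounts)
    stdev = statistics.stdev(amounts)  # needs at least two reports
    if stdev == 0:
        return []
    return [i for i, amt in enumerate(amounts)
            if abs(amt - mean) / stdev > z_threshold]

reports = [120, 95, 140, 110, 105, 9800, 130, 115]
print(reports_to_investigate(reports))  # -> [5]: one investigation, not eight
```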

In other words, trust is crucial for collaboration and teamwork. And an organization that manages to cultivate a high level of trust among its participants is likely to perform better than one that depends primarily on supervision and enforcement.

Varieties of organizational dysfunction

Several earlier posts have made the point that important technology failures often include organizational faults in their causal background.

It is certainly true that most important accidents have multiple causes, and it is crucial to have as good an understanding as possible of the range of causal pathways that have led to air crashes, chemical plant explosions, or drug contamination incidents. But in the background we almost always find organizations and practices through which complex technical activities are designed, implemented, and regulated. Human actors, organized into patterns of cooperation, collaboration, competition, and command, are as crucial to technical processes as are power lines, cooling towers, and control systems in computers. So it is imperative that we follow the lead of researchers like Charles Perrow (The Next Catastrophe: Reducing Our Vulnerabilities to Natural, Industrial, and Terrorist Disasters), Kathleen Tierney (The Social Roots of Risk: Producing Disasters, Promoting Resilience), or Diane Vaughan (The Challenger Launch Decision: Risky Technology, Culture, and Deviance at NASA) and give close attention to the social- and organization-level failures that sometimes lead to massive technological failures.

It is useful to have a few examples in mind as we undertake to probe this question more deeply. Here are a number of important accidents and failures that have been carefully studied.

  • Three Mile Island, Chernobyl nuclear disasters
  • Challenger and Columbia space shuttle disasters
  • Failure of United States anti-submarine warfare in 1942-43
  • Flawed policy and decision-making in US leading to escalation of Vietnam War
  • Flawed policy and decision-making in France leading to Dien Bien Phu defeat
  • Failure of Nuclear Regulatory Commission to ensure reactor safety
  • DC-10 design process
  • Osprey design process
  • Failure of federal flood insurance to appropriately guide rational land use
  • FEMA failure in Katrina aftermath
  • Design and manufacture of the Edsel sedan
  • High rates of hospital-acquired infections in some hospitals

Examples like these allow us to begin to create an inventory of organizational flaws that sometimes lead to failures and accidents:

  • siloed decision-making (design division, marketing division, manufacturing division all have different priorities and interests)
  • lax implementation of formal processes
  • strategic bureaucratic manipulation of outcomes 
    • information withholding, lying
    • corrupt practices, conflicts of interest and commitment
  • short-term calculation of costs and benefits
  • indifference to public goods
  • poor evaluation of data; misinterpretation of data
  • lack of high-level officials responsible for compliance and safety

These deficiencies may be analyzed in terms of a more abstract list of organizational failures:

  • poor decisions given existing priorities and facts
    • poor priority-setting processes
    • poor information-gathering and analysis
  • failure to learn and adapt from changing circumstances
  • internal capture of decision-making; corruption, conflict of interest
  • vulnerability of decision-making to external pressures (external capture)
  • faulty or ineffective implementation of policies, procedures, and regulations

Nancy Leveson is a leading authority on the systems-level causes of accidents and failures. A recent white paper can be found here. Here is the abstract for that paper:

New technology is making fundamental changes in the etiology of accidents and is creating a need for changes in the explanatory mechanisms used. We need better and less subjective understanding of why accidents occur and how to prevent future ones. The most effective models will go beyond assigning blame and instead help engineers to learn as much as possible about all the factors involved, including those related to social and organizational structures. This paper presents a new accident model founded on basic systems theory concepts. The use of such a model provides a theoretical foundation for the introduction of unique new types of accident analysis, hazard analysis, accident prevention strategies including new approaches to designing for safety, risk assessment techniques, and approaches to designing performance monitoring and safety metrics. (1)

Here is what Leveson has to say about the social and organizational causes of accidents:

2.1 Social and Organizational Factors

Event-based models are poor at representing systemic accident factors such as structural deficiencies in the organization, management deficiencies, and flaws in the safety culture of the company or industry. An accident model should encourage a broad view of accident mechanisms that expands the investigation beyond the proximate events.

Ralph Miles Jr., in describing the basic concepts of systems theory, noted that:

Underlying every technology is at least one basic science, although the technology may be well developed long before the science emerges. Overlying every technical or civil system is a social system that provides purpose, goals, and decision criteria (Miles, 1973, p. 1).

Effectively preventing accidents in complex systems requires using accident models that include that social system as well as the technology and its underlying science. Without understanding the purpose, goals, and decision criteria used to construct and operate systems, it is not possible to completely understand and most effectively prevent accidents. (6)

Collapse of Eastern European communisms

An earlier post commented on Tony Judt’s magnificent book Postwar: A History of Europe Since 1945. There I focused on the story he tells of the brutality of the creation of Communist Party dictatorships across Eastern Europe (link). Equally fascinating is his narrative of the abrupt collapse of those states in 1989. In short order the world witnessed the collapse of communism in Poland (June 1989), East Germany (November 1989), Czechoslovakia (November 1989), Bulgaria (November 1989), Romania (December 1989), Hungary (March 1990), and the USSR (December 1991). Most of this narrative occurs in chapter 19.

The sudden collapse of multiple Communist states in a period of roughly a year requires explanation. These were not sham states; they had formidable forces of repression and control; and there were few avenues of public protest available to opponents of the regimes. So their collapse is worthy of careful assessment.

There seem to be several crucial ingredients in the sudden collapse of these dictatorships. One is the persistence of an intellectual and practical opposition to Communism and single-party rule in almost all these countries. The brutality of violent repression in Poland, Hungary, Czechoslovakia, and other countries did not succeed in permanently suppressing opposition based on demands for greater freedom and greater self-determination through political participation. And this was true in the fields of the arts and literature as much as it was in the disciplines of law and politics. Individuals and organizations reemerged at various important junctures to advocate again for political and legal reforms, in Poland, Czechoslovakia, Hungary, and even the USSR.

Second was the chronic inability of these states to achieve economic success and rising standards of living for their populations. Price riots in Poland in the 1970s and elsewhere signaled a fundamental discontent by consumers and workers who were aware of the living conditions of people living in other parts of non-Communist Europe. Material discontent was a powerful factor in the repeated periods of organized protest that occurred in several of these states prior to 1989. (Remember the joke from Poland in the 1970s — “If they pretend to pay us, we pretend to work.”)

And third was the position taken by Mikhail Gorbachev on the use of force to maintain Communist regimes in satellite countries. The use of violence and armed force had sufficed to quell popular movements in Hungary, Czechoslovakia, and Poland in years past. But when Gorbachev made it credible and irreversible that the USSR would no longer use tanks to reinforce the satellite regimes — for example, in his speech to the United Nations in December 1988 — local parties were suddenly exposed to new realities. Domestic repression was still possible, but it was no longer obvious that it would succeed.

And the results were dramatic. In a period of months the world witnessed the sudden collapse of Communist rule in country after country; and in most instances the transitions were relatively free of large-scale violence. (The public executions of Romania’s Nicolae and Elena Ceaușescu on Christmas Day, 1989 were a highly visible exception.)

There seem to be many historical lessons to learn from this short period of history. Particularly sharp are the implications for other single-party dictatorships. So let’s reflect on the behavior of the single-party state in China since the mid-1980s. The Chinese party-state has had several consistent action plans since the 1980s. First, it has focused great effort on economic reform, rising incomes, and improving standards of living for the bulk of its population. In these efforts it has been largely successful — in strong contrast to the USSR and its satellite states. Second, the Chinese government has intensified its ability to control ideology and debate, culminating in the current consolidation of power under President Xi. And third, it used brutal force against the one movement that emerged in 1989 with substantial and broad public involvement, the Democracy Movement. The use of force against demonstrations in Tiananmen Square and other cities in China demonstrated the party’s determination to prevent large-scale public mobilization with force if needed.

It is difficult to avoid the conclusion that China’s leaders have reflected very carefully on the collapse of single-party states in 1989, culminating in the collapse of the Soviet Union itself. They appear to have settled on a long-term coordinated strategy aimed at preventing the emergence of the particular factors that led to those political catastrophes. They are committed to fulfilling the expectations of the public that the economy will continue to grow and support rising standards of living for the mass of the population. So economic growth has remained a very high priority. Second, they are vigilant in monitoring ideological correctness, suppressing individuals and groups who continue to advocate for universal human rights, democracy, and individual freedoms. And they are unstinting in providing the resources needed by the state organizations through which censorship, political repression, and ideological correctness are maintained. And finally, they appear to be willing to use overwhelming force if necessary to prevent large-scale public protests. The regime seems very confident that a pathway of future development that continues to support material improvement for the population while tightly controlling ideas and public discussions of political issues will be successful. And it is hard to see that this calculation is fundamentally incorrect.

Corruption and institutional design

Robert Klitgaard is an insightful expert on the institutional causes of corruption in various social arrangements. His 1988 book, Controlling Corruption, laid out several case studies in detail, demonstrating specific features of institutional design that either encouraged or discouraged corrupt behavior by social and political actors.

More recently Klitgaard prepared a major report for the OECD on the topic of corruption and development assistance (2015; link). This working paper is worth reading in detail for anyone interested in understanding the dysfunctional origins of corruption as an institutional fact. Here is an early statement of the kinds of institutional facts that lead to higher levels of corruption:

Corruption is a crime of calculation. Information and incentives alter patterns of corruption. Processes with strong monopoly power, wide discretion for officials and weak accountability are prone to corruption. (7)

Corruption can go beyond bribery to include nepotism, neglect of duty and favouritism. Corrupt acts can involve third parties outside the organisation (in transactions with clients and citizens, such as extortion and bribery) or be internal to an organisation (theft, embezzlement, some types of fraud). Corruption can occur in government, business, civil society organisations and international agencies. Each of these varieties has the dimension of scale, from episodic to systemic. (18)

Here is an early definition of corruption that Klitgaard offers:

Corruption is a term of many meanings, but at the broadest level, corruption is the misuse of office for unofficial ends. Office is a position of duty, or should be; the office-holder is supposed to put the interests of the institution and the people first. In its most pernicious forms, systemic corruption creates the shells of modern institutions, full of official ranks and rules but “institutions” in inverted commas only. V.S. Naipaul, the Trinidad-born Nobel Prize winner, once noted that underdevelopment is characterised by a duplicitous emphasis on honorific titles and simultaneously the abuse of those titles: judges who love to be called “your honour” even as they accept bribes, civil servants who are uncivil and serve themselves. (18)

The bulk of Klitgaard’s report is devoted to outlining mechanisms through which governments, international agencies, and donor agencies can attempt to initiate effective reform processes leading to lower levels of corruption. There are two theoretical foundations underlying the recommendations, one having to do with the internal factors that enhance or reduce corruption and the other having to do with a theory of effective institutional change. The internal theory is couched as a piece of algebra: corruption is the result of monopoly power plus official discretion minus accountability (37). So effective interventions should be designed around reducing monopoly power and official discretion while increasing accountability.
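Written out as the equation Klitgaard has popularized elsewhere (the symbols are mnemonic labels rather than measured quantities):

C = M + D - A

where C is corruption, M is monopoly power, D is official discretion, and A is accountability. The report's recommended interventions all operate on the right-hand side: reduce M and D, or increase A.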

The premise about reform processes that Klitgaard favors involves what he refers to as “convening” — assembling working groups of influential and knowledgeable stakeholders in a given country and setting them the task of addressing corruption in the country. Examples and case studies include the Philippines, Colombia, Georgia, and Indonesia. Here is a high-level description of what he has in mind:

The recommended process – referred to in this paper as convening – invites development assistance providers to share international data, case studies and theory, and invites national leaders from recipient countries to provide local knowledge and creative problem-solving skills. (5)

Klitgaard spends a fair amount of time on the problem of measuring corruption at the national level. He refers to several international indices that are relevant: Transparency International’s Corruption Perceptions Index, the World Economic Forum’s Global Competitiveness Index, the Global Integrity index, and the International Finance Corporation’s ranking of nations in terms of “ease of doing business” (11).

What this report does not attempt to do is to address specific institutional arrangements in order to discover the propensities for corrupt behavior that they create. This is the strength of Klitgaard’s earlier book, where he looks at alternative forms of social or political arrangements for policing or collecting taxes. In this report there is none of that micro detail. What specific institutional arrangements can be designed that have the effect of reducing official monopoly power and discretion, or the effect of increasing official accountability? Implicitly Klitgaard suggests that these are questions best posed to the experts who participate in the national convening on corruption, because they have the best local knowledge of government and business practices. But here are a few mechanisms that Klitgaard specifically highlights: punish major offenders; pick visible, low-hanging fruit; bring in new leaders and reformers; coordinate government institutions; involve officials; and mobilize citizens and the business community (chapter 5).

A more micro perspective on international corruption is provided by a recent study by David Hess, “Combating Corruption in International Business: The Big Questions” (link). Hess focuses on the Foreign Corrupt Practices Act in the United States, and he asks: why do large corporations pay bribes when this is clearly illegal under the FCPA? Moreover, given that FCPA violations can draw very large fines against the corporations involved, how can violation be a rational strategy? Hess considers the case of Siemens, which was fined over $1.5 billion in 2008 for repeated acts of bribery in the pursuit of contracts (3). He considers two theories of corporate bribing: a cost-benefit analysis showing that the practice of bribing leads to higher returns, and the “rogue employee” view, according to which the corporation is unable to control the actions of its occasionally unscrupulous employees. On the latter view, bribery is essentially a principal-agent problem.

Hess takes the position that bribery often has to do with organizational culture and individual behavior, and that effective steps to reduce the incidence of bribery must proceed on the basis of an adequate analysis of both culture and behavior. And he links this issue to fundamental problems in the area of corporate social responsibility.

Corporations must combat corruption. By allowing their employees to pay bribes they are contributing to a system that prevents the realization of basic human rights in many countries. Ensuring that employees do not pay bribes is not accomplished by simply adopting a compliance and ethics program, however. This essay provided a brief overview of why otherwise good employees pay bribes in the wrong organizational environment, and what corporations must focus on to prevent those situations from arising. In short, preventing bribe payments must be treated as an ethical issue, not just a legal compliance issue, and the corporation must actively manage its corporate culture to ensure it supports the ethical behavior of employees.

As this passage emphasizes, Hess believes that controlling corrupt practices requires changing incentives within the corporation while equally changing the ethical culture of the corporation; he believes that the ethical culture of a company can have effects on the degree to which employees engage in bribery and other corrupt practices.

The study of corruption is an ideal case for the general topic of institutional dysfunction. And, as many countries have demonstrated, it is remarkably difficult to alter the pattern of corrupt behavior in a large, complex society.

A new model of organization?

In Team of Teams: New Rules of Engagement for a Complex World General Stanley McChrystal (with Tantum Collins, David Silverman, and Chris Fussell) describes a new, 21st-century conception of organization for large, complex activities involving thousands of individuals and hundreds of major sub-tasks. His concept is grounded in his experience in counter-insurgency warfare in Iraq. Rather than structuring such activities as centrally organized, bureaucratic, hierarchical processes with commanders and scripted agents, McChrystal argues that modern counter-terrorism requires a more decentralized and flexible system of action, which he refers to as “teams of teams”. Information is shared freely, local commanders have ready access to resources and knowledge from other experts, and they make decisions in a more flexible way. The model hopes to capture the benefits of improvisation, flexibility, and a much higher level of trust and communication than is characteristic of typical military and corporate organizations.

One place where the “team of teams” structure is plausible is in the context of a focused technology startup, where the whole group of participants needs to be in regular and frequent collaboration. Indeed, Paul Rabinow’s 1996 ethnography of the Cetus Corporation in its pursuit of PCR (polymerase chain reaction), Making PCR: A Story of Biotechnology, reflects a very similar topology of information flows and collaboration links across and within working subgroups (link). But the vision does not fit the organizational and operational needs of a large hospital, a railroad company, or a research university very well. It seems plausible that the challenges the US military faced in fighting Al-Qaeda and ISIL are not really analogous to those faced by less dramatic organizations like hospitals, universities, and corporations. The decentralized and improvisational circumstances of urban warfare against loosely organized terrorists may be sui generis.

McChrystal proposes an organizational structure that is more decentralized, more open to local decision-making, and more flexible and resilient. These are unmistakable virtues in some circumstances; but not in all circumstances and all organizations. And arguably such a structure would have been impossible in the planning and execution of the French defense of Dien Bien Phu or the US decision to wage war against the Vietnamese insurgency ten years later. These were situations where central decisions needed to be made, and the decisions needed to be implemented through well organized bureaucracies. The problem in both instances is that the wrong decisions were made, based on the wrong information and assessments. What was needed, it would appear, was better executive leadership and decision-making — not a fundamentally decentralized pattern of response and counter-response.

One thing that deserves comment in the context of McChrystal’s book is the history of bad organization, bad intelligence, and bad decision-making the world has witnessed in the military experiences of the past century. The radical miscalculations and failures of planning involved in the first months of the Korean War, the painful and tragic misjudgments made by the French military in preparing for Dien Bien Phu, the equally bad thinking and planning done by Robert McNamara and the whiz kids leading to the Vietnam War — these examples stand out as sentinel illustrations of the failures of large organizations that have been tasked to carry out large, complex activities involving numerous operational units. The military and the national security establishments were good at some tasks, and disastrously bad at others. And the things they were bad at were both systemic and devastating. Bernard Fall illustrates these failures in Hell in a Very Small Place: The Siege of Dien Bien Phu, and David Halberstam does so for the decision-making that led to the war in Vietnam in The Best and the Brightest.

So devising new ideas about command, planning, intelligence gathering and analysis, and priority-setting that are more effective would be a big contribution to humanity. But the deficiencies in Dien Bien Phu, Korea, or Vietnam seem different from those McChrystal identifies in Iraq. What was needed in these portentous moments of policy choice was clear-eyed establishment of appropriate priorities and goals, honest collection of intelligence and sources of information, and disinterested implementation of policies and plans that served the highest interests of the country. The “team of teams” approach doesn’t seem to be a general solution to the wide range of military and political challenges nations face.

What one would have wanted to see in the French military or the US national security apparatus is something different from the kind of teamwork described by McChrystal: greater honesty on all parts, a commitment to taking seriously the assessments of experts and participants in the field, an openness to questioning strongly held assumptions, and a greater capacity for institutional wisdom in arriving at decisions of this magnitude. We would have wanted to see a process that was not dominated by large egos, self-interest, and fixed ideas. We would have wanted French generals and their civilian masters to soberly assess the military function that a fortress camp at Dien Bien Phu could serve; to assess realistically the military requirements that would need to be satisfied in order to defend the location; and to make an honest effort to solicit the very best information and judgment from experienced commanders and officials about what a Viet-Minh siege might look like. Instead, the French military was guided by complacent assumptions about French military superiority, which led to a genuine catastrophe for the soldiers assigned to the task and for French society more broadly.

There are valid insights contained in McChrystal’s book about the urgency of breaking down obstacles to communication and action within sprawling organizations as they confront a changing environment. But it doesn’t add up to a model that is well designed for most contexts in which large organizations actually function.

How organizations adapt

Organizations do things; they depend upon the coordinated efforts of numerous individuals; and they exist in environments that affect their ongoing success or failure. Moreover, organizations are to some extent plastic: the practices and rules that make them up can change over time. Sometimes these changes happen as the result of deliberate design choices by individuals inside or outside the organization; so a manager may alter the rules through which decisions are made about hiring new staff in order to improve the quality of work. And sometimes they happen through gradual processes over time that no one is specifically aware of. The question arises, then, whether organizations evolve toward higher functioning based on the signals from the environments in which they live, or whether, on the contrary, organizational change is stochastic, with no gradient toward more effective functioning. Do changes within an organization add up over time to improved functioning? What kinds of social mechanisms might bring about such an outcome?

One way of addressing this topic is to consider organizations as mid-level social entities that are potentially capable of adaptation and learning. An organization has identifiable internal processes of functioning as well as a delineated boundary of activity. It has a degree of control over its functioning. And it is situated in an environment that signals differential success/failure through a variety of means (profitability, success in gaining adherents, improvement in market share, number of patents issued, …). So the environment responds favorably or unfavorably, and change occurs.

Is there anything in this specification of the structure, composition, and environmental location of an organization that suggests the possibility or likelihood of adaptation over time in the direction of improvement of some measure of organizational success? Do institutions and organizations get better as a result of their interactions with their environments and their internal structure and actors?

There are a few possible social mechanisms that would support the possibility of adaptation towards higher functioning. One is the fact that purposive agents are involved in maintaining and changing institutional practices. Those agents are capable of perceiving inefficiencies and potential gains from innovation, and are sometimes in a position to introduce appropriate innovations. This is true at various levels within an organization, from the supervisor of a custodial staff to a vice president for marketing to a CEO. If the incentives presented to these agents are aligned with the important needs of the organization, then we can expect that they will introduce innovations that enhance functioning. So one mechanism through which we might expect that organizations will get better over time is the fact that some agents within an organization have the knowledge and power necessary to enact changes that will improve performance, and they sometimes have an interest in doing so. In other words, there is a degree of intelligent intentionality within an organization that might work in favor of enhancement.

This line of thought should not be over-emphasized, however, because there are competing forces and interests within most organizations. Previous posts have focused on current organizational theory based on the idea of a “strategic action field” of insiders and outsiders who determine the activities of the organization (Fligstein and McAdam, Crozier; link, link). This framework suggests that the structure and functioning of an organization is not wholly determined by a single intelligent actor (“the founder”), but is rather the temporally extended result of interactions among actors in the pursuit of diverse aims. This heterogeneity of purposive actions by actors within an institution means that the direction of change is indeterminate; it is possible that the coalitions that form will bring about positive change, but the reverse is possible as well.

And in fact, many authors and participants have pointed out that agents’ interests are often enough not aligned with the priorities and needs of the organization. Jack Knight offers a persuasive critique of the idea that organizations and institutions tend to increase in their ability to provide collective benefits in Institutions and Social Conflict. CEOs who have a financial interest in a rapid stock price increase may take steps that worsen functioning for short-term market gain; supervisors may avoid work-flow innovations because they don’t want the headache of an extended change process; vice presidents may deny information to other divisions in order to enhance appreciation of the efforts of their own division. Here is a short description from Knight’s book of the way that institutional adjustment occurs as a result of conflict among players of unequal powers:

Individual bargaining is resolved by the commitments of those who enjoy a relative advantage in substantive resources. Through a series of interactions with various members of the group, actors with similar resources establish a pattern of successful action in a particular type of interaction. As others recognize that they are interacting with one of the actors who possess these resources, they adjust their strategies to achieve their best outcome given the anticipated commitments of others. Over time rational actors continue to adjust their strategies until an equilibrium is reached. As this becomes recognized as the socially expected combination of equilibrium strategies, a self-enforcing social institution is established. (Knight, 143)

A very different possible mechanism is unit selection, where more successful innovations or firms survive and less successful innovations and firms fail. This is the premise of the evolutionary theory of the firm (Nelson and Winter, An Evolutionary Theory of Economic Change). In a competitive market, firms with low internal efficiency will have a difficult time competing on price with more efficient firms; so low-efficiency firms will occasionally go out of business. Here the question of “units of selection” arises: is it firms over which selection operates, or is it lower-level innovations that are the object of selection?
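A toy simulation can convey how the selection mechanism works without any firm learning anything (a sketch in the spirit of Nelson and Winter, with invented parameters; it is not their model):

```python
import random

def simulate_selection(n_firms=20, periods=200, seed=1):
    """Toy selection dynamic: firms with lower unit costs earn positive
    margins and expand; high-cost firms shrink and eventually exit.
    Parameters are invented for illustration."""
    rng = random.Random(seed)
    firms = [{"cost": rng.uniform(0.5, 1.5), "capacity": 1.0}
             for _ in range(n_firms)]
    price = 1.0  # fixed market price, a deliberate simplification
    for _ in range(periods):
        for firm in firms:
            firm["capacity"] *= 1 + 0.1 * (price - firm["cost"])  # growth tracks margin
        firms = [firm for firm in firms if firm["capacity"] > 0.05]  # exit rule
    return firms

survivors = simulate_selection()
print(len(survivors), "survivors; mean unit cost:",
      round(sum(f["cost"] for f in survivors) / len(survivors), 2))
# The surviving population is more efficient than the initial one,
# although no individual firm changed its behavior at all.
```

The units-of-selection question corresponds to what the exit rule ranges over: here it is whole firms, but one could instead let individual routines or innovations replicate and die across firms.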

Geoffrey Hodgson provides a thoughtful review of this set of theories here, as part of what he calls “competence-based theories of the firm”. Hodgson’s diagram in that essay maps the relationships that exist among several different approaches to the study of the firm.

The market mechanism does not work very well as a selection mechanism for some important categories of organizations — government agencies, legislative systems, or non-profit organizations. This is so because the criterion of selection is “profitability / efficiency within a competitive market”; and government and non-profit organizations are not importantly subject to the workings of a market.

In short, the answer to the fundamental question here is mixed. There are factors that unquestionably work to enhance effectiveness in an organization. But these factors are weak and defeasible, and the countervailing factors (internal conflict, divided interests of actors, slackness of the corporate marketplace) leave open the possibility that institutions change without evolving in a consistent direction. And the glaring dysfunctions that have afflicted many organizations, both corporate and governmental, make this conclusion even more persuasive. Perhaps what demands explanation is the rare case where an organization achieves a high level of effectiveness and consistency in its actions, rather than the many cases that come to mind of dysfunctional organizational activity.

(The examples of organizational dysfunction that come to mind are many — the failure of regulation in the civilian nuclear industry (Perrow, The Next Catastrophe: Reducing Our Vulnerabilities to Natural, Industrial, and Terrorist Disasters); the failure of US anti-submarine warfare in World War II (Cohen, Military Misfortunes: The Anatomy of Failure in War); and the failure of chemical companies to ensure safe operations of their plants (Shrivastava, Bhopal: Anatomy of Crisis). Here is an earlier post that addresses some of these examples; link. And here are several earlier posts on the topic of institutional change and organizational behavior; link, link.)

Rational choice institutionalism

Where do institutions come from? And what kinds of social forces are at work to stabilize them once they are up and running? These are questions that historical institutionalists like Kathleen Thelen have considered in substantial depth (link, link, link). But the rational-choice paradigm has offered some answers to these questions as well. The basic idea presented by the RCT paradigm is that institutions are the result of purposive agents coping with existential problems, forming alliances, and pursuing their interests in a rational way. James Coleman is one of the exponents of this approach in Foundations of Social Theory, where he treats institutions and norms as coordinated and mutually reinforcing patterns of individual behavior (link).

An actor-centered theory of institutions requires a substantial amount of boot-strapping: we need to have an account of how a set of rules and practices could have emerged from the purposive but often conflictual activities of individuals, and we need a similar account of how those rules are stabilized and enforced by individuals who have no inherent interest in the stability of the rules within which they act. Further, we need to take account of well-known conflicts between private and public benefits, short-term and long-term benefits, and intended and unintended benefits. Rational-choice theorists since Mancur Olson in The Logic of Collective Action: Public Goods and the Theory of Groups have made it clear that we cannot explain social outcomes on the basis of the collective benefits that they provide; rather, we need to show how those arrangements result from relatively myopic, relatively self-interested actors with bounded ability to foresee consequences.

Ken Shepsle is a leading advocate for a rational-choice theory of institutions within political science. He offers an exposition of his thinking in his contribution to The Oxford Handbook of Political Institutions (link). He distinguishes between institutions as exogenous and institutions as endogenous. The first conception takes the rules and practices of an institution as fixed and external to the individuals who operate within them, while the second looks at the rules and practices as being the net result of the intentions and actions of those individuals themselves. On the second view, it is open to the individuals within an activity to attempt to change the rules; and one set of rules will perhaps have better results for one set of interests than another. So the choice of rules in an activity is not a matter of indifference to the participants. (For example, untenured faculty might undertake a campaign to change the way the university evaluates teaching in the context of the tenure process, or to change the relative weights assigned to teaching and research.) Shepsle also distinguishes between structured and unstructured institutions — a distinction that other authors characterize as “formal/informal”. The distinction has to do with the degree to which the rules of the activity are codified and reinforced by strong external pressures. Shepsle encompasses various informal solutions to collective action problems under the rubric of unstructured institutions — fluid solutions to a transient problem.

This description of institutions begins to frame the problem, but it doesn’t go very far. In particular, it doesn’t provide much insight into the dynamics of conflict over rule-setting among parties with different interests in a process. Other scholars have pushed the analysis further.

French sociologists Crozier and Friedberg address this problem in Actors and Systems: The Politics of Collective Action (1980 [1977]). Their premise is that actors within organizations have substantially more agency and freedom than they are generally afforded by orthodox organization theory, and we can best understand the workings and evolution of the organization as (partially) the result of the strategic actions of the participants (instead of understanding the conduct of the participants as a function of the rules of the organization). They look at institutions as solutions to collective action problems — tasks or performances that allow attainment of a goal that is of interest to a broad public, but for which there are no antecedent private incentives for cooperation. Organized solutions to collective problems — of which organizations are key examples — do not emerge spontaneously; instead, “they consist of nothing other than solutions, always specific, that relatively autonomous actors have created, invented, established, with their particular resources and capacities, to solve these challenges for collective action” (15). And Crozier and Friedberg emphasize the inherent contingency of these particular solutions; there are always alternative solutions, neither better nor worse. This is a rational-choice analysis, though couched in sociological terms rather than economists’ terms. (Here is a more extensive discussion of Crozier and Friedberg; link.)

Jack Knight brings conflict and power into the rational-choice analysis of the emergence of institutions in Institutions and Social Conflict.

I argue that the emphasis on collective benefits in theories of social institutions fails to capture crucial features of institutional development and change. I further argue that our explanations should invoke the distributional effects of such institutions and the conflict inherent in those effects. This requires an investigation of those factors that determine how these distributional conflicts are resolved. (13-14)

Institutions are not created to constrain groups or societies in an effort to avoid suboptimal outcomes but, rather, are the by-product of substantive conflicts over the distributions inherent in social outcomes. (40)

Knight believes that we need to have microfoundations for the ways in which institutions emerge and behave (14), and he sees those mechanisms in the workings of rational choices by the participants in the field of interaction from which the institution emerges.

Actors choose their strategies under various circumstances. In some situations individuals regard the rest of their environment, including the actions of others, as given. They calculate their optimal strategy within the constraints of fixed parameters…. But actors are often confronted by situations characterized by an interdependence between other actors and themselves…. Under these circumstances individuals must choose strategically by incorporating the expectations of the actions of others into their own decision making. (17)

This implies, in particular, that we should not expect socially optimal or efficient outcomes in the emergence of institutions; rather, we should expect institutions that differentially favor the interests of some groups and disfavor those of other groups — even if the social total is lower than it would be under a more egalitarian arrangement.

I conclude that social efficiency cannot provide the substantive content of institutional rules. Rational self-interested actors will not be the initiators of such rules if they diminish their own utility. Therefore rational-choice explanations of social institutions based on gains in social efficiency fail as long as they are grounded in the intentions of social actors. (34)

Knight’s work explicitly refutes the Panglossian (or Smithian) assumption sometimes associated with rational choice theory and micro-economics: the idea that individually rational action leads to a collectively efficient outcome (the invisible hand). This may be true in the context of certain kinds of markets; but it is not generally true in the social and political world. And Knight shows in detail how the assumption fails in the case of institutional emergence and ongoing workings.
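Knight's distributional thesis can be put in toy-model form. A sketch with invented payoffs (Knight's own argument is verbal; the commitment assumption below paraphrases the bargaining passage quoted earlier): two groups repeatedly play a coordination game with two candidate conventions, where convention A favors the resource-advantaged group and convention B yields the higher social total.

```python
def knight_convention(rounds=50):
    """Best-response dynamics in an asymmetric coordination game.
    Payoffs (strong, weak): convention A favors the strong group;
    convention B yields the higher social total (5 vs 6). Miscoordination
    pays both players 0. Illustrative numbers, not Knight's."""
    payoff = {("A", "A"): (4, 1), ("B", "B"): (2, 4)}
    belief_A = 0.5  # weak group's belief that the strong group plays A
    for _ in range(rounds):
        # Resource advantage lets the strong group commit to its preferred
        # convention (Knight: bargaining "is resolved by the commitments of
        # those who enjoy a relative advantage in substantive resources").
        strong_move = "A"
        # The weak group best-responds to its belief about the strong group.
        expected_A = belief_A * payoff[("A", "A")][1]        # coordinate on A -> 1
        expected_B = (1 - belief_A) * payoff[("B", "B")][1]  # coordinate on B -> 4
        weak_move = "A" if expected_A > expected_B else "B"
        # Beliefs drift toward observed play.
        belief_A = 0.9 * belief_A + 0.1 * (1.0 if strong_move == "A" else 0.0)
    return weak_move, round(belief_A, 3)

print(knight_convention())
# -> ('A', 0.997): the convention favoring the advantaged group becomes
# self-enforcing, even though convention B would yield a larger social total.
```

Best-response play locks in the inefficient convention A: an equilibrium that is self-enforcing in Knight's sense, but not socially efficient.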

Rational choice theory is one particular and specialized version of actor-centered social science (link). It differs from other approaches in the very narrow assumptions it makes about the actor’s particular form of agency; it assumes narrow economic rationality rather than a broader conception of agency or practical rationality (link). What seems clear to me is that we need to take an actor-centered approach if we want to understand institutions — either their emergence or their continuing functioning and change. So the approach taken by rational-choice theorists is ontologically correct. If RCT fails to provide an adequate analysis of institutions, it is because the underlying theory of agency is fundamentally unrealistic about human actors.

Accident analysis and systems thinking

Complex socio-technical systems fail; that is, accidents occur. And it is enormously important for engineers and policy makers to have a better way of thinking about accidents than the protocols currently followed after an air crash, a chemical plant fire, or the release of a contaminated drug. We need a better understanding of the systemic and organizational causes of an accident; even more importantly, we need a basis for improving the safe functioning of complex socio-technical systems by identifying better processes and better warning indicators of impending failure.

A long-term leader in the field of systems-safety thinking is Nancy Leveson, a professor of aeronautics and astronautics at MIT and the author of Safeware: System Safety and Computers (1995) and Engineering a Safer World: Systems Thinking Applied to Safety (2012). Leveson has been a particular advocate for two insights: looking at safety as a systems characteristic, and looking for the organizational and social components of safety and accidents as well as the technical event histories that are more often the focus of accident analysis. Her approach to safety and accidents involves looking at a technology system in terms of the set of controls and constraints that have been designed into the process to prevent accidents. “Accidents are seen as resulting from inadequate control or enforcement of constraints on safety-related behavior at each level of the system development and system operations control structures.” (25)

The abstract for her essay “A New Accident Model for Engineering Safety” (link) captures both points.

New technology is making fundamental changes in the etiology of accidents and is creating a need for changes in the explanatory mechanisms used. We need better and less subjective understanding of why accidents occur and how to prevent future ones. The most effective models will go beyond assigning blame and instead help engineers to learn as much as possible about all the factors involved, including those related to social and organizational structures. This paper presents a new accident model founded on basic systems theory concepts. The use of such a model provides a theoretical foundation for the introduction of unique new types of accident analysis, hazard analysis, accident prevention strategies including new approaches to designing for safety, risk assessment techniques, and approaches to designing performance monitoring and safety metrics.

The accident model she describes in this article and elsewhere is STAMP (Systems-Theoretic Accident Model and Processes). Here is a short description of the approach.

In STAMP, systems are viewed as interrelated components that are kept in a state of dynamic equilibrium by feedback loops of information and control. A system in this conceptualization is not a static design—it is a dynamic process that is continually adapting to achieve its ends and to react to changes in itself and its environment. The original design must not only enforce appropriate constraints on behavior to ensure safe operation, but the system must continue to operate safely as changes occur. The process leading up to an accident (loss event) can be described in terms of an adaptive feedback function that fails to maintain safety as performance changes over time to meet a complex set of goals and values…. 

The basic concepts in STAMP are constraints, control loops and process models, and levels of control. (12)

The other point of emphasis in Leveson’s treatment of safety is her consistent effort to include the social and organizational forms of control that are a part of the safe functioning of a complex technological system.

Event-based models are poor at representing systemic accident factors such as structural deficiencies in the organization, management deficiencies, and flaws in the safety culture of the company or industry. An accident model should encourage a broad view of accident mechanisms that expands the investigation beyond the proximate events. (6)

She treats the organizational backdrop of the technology process in question as being a crucial component of the safe functioning of the process.

Social and organizational factors, such as structural deficiencies in the organization, flaws in the safety culture, and inadequate management decision making and control are directly represented in the model and treated as complex processes rather than simply modeling their reflection in an event chain. (26)

And she treats organizational features as another form of control system (along the lines of Jay Forrester’s early definitions of systems in Industrial Dynamics).

Modeling complex organizations or industries using system theory involves dividing them into hierarchical levels with control processes operating at the interfaces between levels (Rasmussen, 1997). Figure 4 shows a generic socio-technical control model. Each system, of course, must be modeled to reflect its specific features, but all will have a structure that is a variant on this one. (17)

[Figure 4 (not reproduced here): a generic socio-technical control model]

The approach embodied in the STAMP framework is that safety is a systems effect, dynamically influenced by the control systems embodied in the total process in question.

She elaborates on the hierarchical structure through which these safety constraints are enforced:

In systems theory, systems are viewed as hierarchical structures where each level imposes constraints on the activity of the level beneath it—that is, constraints or lack of constraints at a higher level allow or control lower-level behavior (Checkland, 1981). Control laws are constraints on the relationships between the values of system variables. Safety-related control laws or constraints therefore specify those relationships between system variables that constitute the nonhazardous system states, for example, the power must never be on when the access door is open. The control processes (including the physical design) that enforce these constraints will limit system behavior to safe changes and adaptations. (17)
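The quoted example (the power must never be on when the access door is open) can be rendered as a small control-loop sketch. The following Python is my own illustration, with invented names and structure, not code from Leveson or any STAMP tool; it shows how the constraint holds only as long as the controller’s process model matches the actual system state.

```python
# A minimal sketch of a STAMP-style safety constraint enforced by a control
# loop. The class and function names are invented for illustration; the
# constraint is the one quoted above.

from dataclasses import dataclass

@dataclass
class ProcessState:
    door_open: bool   # true state of the access door
    power_on: bool    # true state of the power supply

def safety_constraint(state: ProcessState) -> bool:
    # The hazardous state is excluded: never (door open AND power on).
    return not (state.door_open and state.power_on)

def controller(state: ProcessState, sensed_door_open: bool) -> ProcessState:
    # The controller acts on its process model (the sensed value), which may
    # diverge from the true state; STAMP treats that divergence as a typical
    # source of inadequate control.
    if sensed_door_open:
        state.power_on = False  # enforce the constraint
    return state

# Accurate feedback: the constraint is maintained.
s = controller(ProcessState(door_open=True, power_on=True), sensed_door_open=True)
print(safety_constraint(s))   # -> True

# Stale sensor reports the door closed: the hazardous state arises even
# though no individual component has "failed" in the event-chain sense.
s = controller(ProcessState(door_open=True, power_on=True), sensed_door_open=False)
print(safety_constraint(s))   # -> False
```

The point of the second run is Leveson’s: the accident arises from inadequate control, a mismatch between the process model and the world, not from a broken component.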

Leveson’s understanding of systems theory brings along with it a strong conception of “emergence”. She argues that higher levels of systems possess properties that cannot be reduced to the properties of the components, and that safety is one such property:

In systems theory, complex systems are modeled as a hierarchy of levels of organization, each more complex than the one below, where a level is characterized by having emergent or irreducible properties. Hierarchy theory deals with the fundamental differences between one level of complexity and another. Its ultimate aim is to explain the relationships between different levels: what generates the levels, what separates them, and what links them. Emergent properties associated with a set of components at one level in a hierarchy are related to constraints upon the degree of freedom of those components. (11)

But her understanding of “irreducible” seems to be different from that commonly used in the philosophy of science. She does in fact believe that these higher-level properties can be explained by the system of properties at the lower levels — for example, in this passage she asks “… what generates the levels” and how the emergent properties are “related to constraints” imposed on the lower levels. In other words, her position seems to be similar to that advanced by Dave Elder-Vass (link): emergent properties are properties at a higher level that are not possessed by the components, but which depend upon the interactions and composition of the lower-level components.

The domain of safety engineering and accident analysis seems like a particularly suitable place for Bayesian analysis. It seems unavoidable that accident analysis involves both frequency-based probabilities (e.g. the frequency of pump failure) and expert-based estimates of the likelihood of a particular kind of failure (e.g. the likelihood that a train operator will slacken attention to track warnings in response to company pressure on the timetable). Bayesian techniques are well suited to combining these various kinds of estimates of risk into a unified calculation.
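As a hedged sketch of what such a combination might look like (the events, counts, and prior parameters below are all invented for illustration), the pump-failure probability can come from frequency data while the operator-lapse probability enters as an expert-elicited Beta prior updated against audit observations:

```python
# A minimal sketch of combining frequency data with expert judgment via
# Bayesian updating in accident analysis. All numbers and event names are
# illustrative assumptions, not real data.

from scipy import stats

# 1. Frequency-based estimate: pump failures in operational records.
pump_failures, pump_demands = 3, 1000          # hypothetical counts
p_pump = pump_failures / pump_demands          # ~0.003 per demand

# 2. Expert-based estimate: probability that an operator misses a track
#    warning under schedule pressure, encoded as a Beta prior whose mean
#    (2 / 40 = 0.05) reflects elicited expert opinion.
prior = stats.beta(a=2, b=38)

# 3. Update the prior with observed incident data: say audits found 4
#    missed warnings in 50 observed high-pressure episodes.
missed, episodes = 4, 50
posterior = stats.beta(a=2 + missed, b=38 + episodes - missed)
p_operator = posterior.mean()                  # ~0.067

# 4. Combine in a simple fault-tree AND-gate: the accident path requires
#    both the pump failure and the missed warning (assumed independent).
p_accident = p_pump * p_operator
print(f"posterior P(missed warning) = {p_operator:.3f}")
print(f"P(accident path) = {p_accident:.2e}")
```

The independence assumption in the final AND-gate is the crudest part of this sketch; Leveson’s critique of event-chain models is precisely that such failures are often not independent, which is one motivation for modeling the organizational control structure explicitly.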

The topic of safety and accidents is particularly relevant to Understanding Society because it expresses very clearly the causal complexity of the social world in which we live. And rather than simply ignoring that complexity, the systematic study of accidents gives us an avenue for arriving at better ways of representing, modeling, and intervening in parts of that complex world.

 