Is the Xerox Corporation supervenient?

Supervenience is the view that the properties of some composite entity B are wholly fixed by the properties and relations of the items A of which it is composed (link, link). The transparency of glass supervenes upon the properties of the atoms of silicon and oxygen of which it is composed and their arrangement.

Can the same be said of a business firm like Xerox when we consider its constituents to be its employees, stakeholders, and other influential actors and their relations and actions? (Call that total field of factors S.) Or is it possible that exactly these actors at exactly the same time could have manifested a corporation with different characteristics?

 
Let’s say the organizational properties we are interested in include internal organizational structure, innovativeness, market adaptability, and level of internal trust among employees. And S consists of the specific individuals and their properties and relations that make up the corporation at a given time. Could this same S have manifested with different properties for Xerox?

One thing is clear. If a highly similar group of individuals had been involved in the creation and development of Xerox, it is entirely possible that the organization would have been substantially different today. We could expect that contingent events and a high level of path dependency would have led to substantial differences in organization, functioning, and internal structure. So the company does not supervene upon a generic group of actors defined in terms of a certain set of beliefs, goals, and modes of decision making over the history of its founding and development. I have sometimes thought this path dependency is itself enough to refute supervenience.

But the claim of supervenience is not a temporal or diachronic claim, but instead a synchronic claim: the current features of structure, causal powers, functioning, etc., of the higher-level entity today are thought to be entirely fixed by the supervenience base (in this case, the particular individuals and their relations and actions). Putting the idea in terms of possible-world theory, there is no possible world in which exactly similar individuals in exactly similar states of relationship and action would underlie a business firm Xerox* which had properties different from the current Xerox firm.
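Put schematically (this is the standard formulation of strong supervenience from the philosophical literature, not notation taken from anything above), the synchronic claim is:

```latex
% B-properties (the organizational features) supervene on A-properties
% (the states and relations of the individuals in S) just in case:
\forall x\, \forall y\;
  \Big[\, \forall F \in A\, (Fx \leftrightarrow Fy)
      \;\rightarrow\;
      \forall G \in B\, (Gx \leftrightarrow Gy) \,\Big]
% In words: no two possible entities (or worlds) agree in every A-respect
% while differing in some B-respect.
```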

 
One way in which this counterfactual might be true is if a property P of the corporation depended on the states of the agents plus something else — say, the conductivity of copper in its pure state. In the real world W copper is highly conductive, while in W* copper is non-conductive. And in W*, let’s suppose, Xerox has property P* rather than P. On this scenario Xerox does not supervene upon the states of the actors, since these states are identical in W and W*. This is because dependence on the conductivity of copper makes a difference that is not reflected in a difference in the states of the actors.
 
But this is a pretty hypothetical case. We would only be justified in thinking that Xerox does not supervene on S if we had a credible candidate for another property that would make a difference, and I’m hard pressed to identify one.
 
There is another possible line of response for the hardcore supervenience advocate in this case. I’ve assumed the conductivity of copper makes a difference to the corporation without making a difference for the actors. But I suppose it might be maintained that this is impossible: only the states of the actors affect the corporation, since they constitute the corporation; so the scenario I describe is impossible. 
 
The upshot seems to be this: there is no way of resolving the question at the level of pure philosophy. The best we can do is concrete empirical work on the actual causal and organizational processes through which the properties of the whole are constituted through the actions and thoughts of the individuals who make it up.

But here is a deeper concern. What makes supervenience minimally plausible in the case of social entities is the insistence on synchronic dependence. But generally speaking, we are always interested in the diachronic behavior and evolution of a social entity. And here the idea of path dependence is more credible than the idea of moment-to-moment dependency on the “supervenience base”. We might say that the property of “innovativeness” displayed by the Xerox Corporation at some periods in its history supervenes moment-to-moment on the actions and thoughts of its constituent individuals; but we might also say that this fact does not explain the higher-level property of innovativeness. Instead, some set of events in the past set the corporation on a path that favored innovation; this corporate culture or climate influenced the selection and behavior of the individuals who make it up; and the day-to-day behavior reflects both the path-dependent history of its higher-level properties and the current configuration of its parts.
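To see how path dependence can coexist with moment-to-moment dependence on the parts, here is a minimal toy model, a Pólya urn, offered purely as an illustration (the urn and its interpretation are my gloss, not part of the argument above). At every step the next outcome depends only on the current composition of the "organization"; yet early chance events lock in very different long-run characteristics across otherwise identical starting points.

```python
import random

def polya_urn(steps=10_000, seed=None):
    """Toy path-dependence model: start with one 'innovative' and one
    'conservative' member; each new hire copies the trait of a randomly
    drawn current member. Each step depends only on the current composition
    (the synchronic state), but early draws dominate the long-run mix."""
    rng = random.Random(seed)
    innovative, conservative = 1, 1
    for _ in range(steps):
        if rng.random() < innovative / (innovative + conservative):
            innovative += 1
        else:
            conservative += 1
    return innovative / (innovative + conservative)

# Identical starting conditions, different contingent histories:
print([round(polya_urn(seed=s), 3) for s in range(5)])
```

Runs with different seeds settle at very different "cultures" of innovativeness, which is the sense in which the present profile of the organization reflects the path taken and not merely a generic description of its current members.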

Deficiencies of practical rationality in organizations

Suppose we are willing to take seriously the idea that organizations possess a kind of intentionality — beliefs, goals, and purposive actions — and suppose that we believe that the microfoundations of these quasi-intentional states depend on the workings of individual purposive actors within specific sets of relations, incentives, and practices. How does the resulting form of “bureaucratic intelligence” compare with human thought and action?

There is a major set of differences between organizational “intelligence” and human intelligence that turns on the unity of human action compared to the fundamental disunity of organizational action. An individual human being gathers a set of beliefs about a situation, reflects on a range of possible actions, and chooses a line of action designed to bring about his/her goals. An organization is disjointed in each of these activities. The belief-forming part of an organization usually consists of multiple separate processes culminating in an amalgamated set of beliefs or representations. And this amalgamation often reflects deep differences in perspective and method across various sub-departments. (Consider inputs into an international crisis incorporating assessments from intelligence, military, and trade specialists.)

Second, individual intentionality possesses a substantial degree of practical autonomy. The individual assesses and adopts the set of beliefs that seem best to him or her in current circumstances. The organization, in its belief acquisition, is subject to conflicting interests, both internal and external, that bias the belief set in one direction or the other. (This is a central thrust of the work of science-policy scholars like Naomi Oreskes.) The organization is not autonomous in its belief formation processes.

Third, an individual’s actions have a reasonable level of consistency and coherence over time. The individual seeks to avoid being self-defeating by doing X and Y while knowing that X undercuts Y. An organization is entirely capable of pursuing a suite of actions which embody exactly this kind of inconsistency, precisely because the actions chosen are the result of multiple disagreeing sub-agencies and officers.

Fourth, we have some reason to expect a degree of stability in the goals and values that underlie actions by an individual. But organizations, exactly because their behavior is a joint product of sub-agents with conflicting plans and goals, are entirely capable of rapid change of goals and values. Deepening this instability are the fluctuating powers and interests of external stakeholders who apply pressure for different values and goals over time.

Finally, human thinkers are epistemic agents — they are at least potentially capable of following disciplines of analysis, reasoning, and evidence in their practical engagement with the world. By contrast, because of the influence of interests, both internal and external, organizations are perpetually subject to the distortion of belief, intention, and implementation by actors who have an interest in the outcome of the project. And organizations have little ability to apply rational standards to their processes of belief formation, intention formation, and implementation. Organizational intentionality lacks overriding rational control.

Consider more briefly the topic of action. Human actors suffer various deficiencies of performance when it comes to purposive action, including weakness of the will and self-deception. But organizations are altogether less capable of effectively mounting the steps needed to fully implement a plan or a complicated policy or action. This is because of the looseness of the linkages that exist between executive and agent within an organization, the perennial possibility of principal-agent problems, and the potential interference with performance created by interested parties outside the organization.

This line of thought suggests that organizations lack “unity of apperception and intention”. There are multiple levels and zones of intention formation, and much of this plurality persists throughout real processes of organizational thinking. And this disunity affects belief, intention, and action alike. Organizations are not univocal at any point. Belief formation, intention formation, and action remain fragmented and multivocal.

These observations are somewhat parallel to the paradoxes of social choice and of the voting systems that implement a social choice function. Kenneth Arrow demonstrated that it is impossible to design a voting system that guarantees consistent collective choices by a group of individually consistent voters. The analogy here is the idea that there is no organizational design possible that guarantees a high degree of consistency and rationality in large organizational decision processes at any stage of quasi-intentionality, including belief acquisition, policy formulation, and policy implementation.
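The flavor of Arrow’s result can be seen in the simplest case of pairwise majority voting, a Condorcet cycle. The three sub-units and their rankings below are invented for illustration: each unit is internally consistent, yet the aggregate “organizational” preference is intransitive.

```python
from itertools import combinations

# Hypothetical rankings (best to worst) held by three internally consistent sub-units.
rankings = {
    "intelligence": ["A", "B", "C"],
    "military":     ["B", "C", "A"],
    "trade":        ["C", "A", "B"],
}

def majority_prefers(x, y):
    """True if a majority of sub-units rank option x above option y."""
    votes = sum(1 for r in rankings.values() if r.index(x) < r.index(y))
    return votes > len(rankings) / 2

for x, y in combinations(["A", "B", "C"], 2):
    winner, loser = (x, y) if majority_prefers(x, y) else (y, x)
    print(f"majority prefers {winner} over {loser}")

# The three majorities form a cycle (A over B, B over C, C over A), so the
# "organization" has no consistent collective ranking even though every
# sub-unit's ranking is perfectly consistent.
```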

The mind of government

We often speak of government as if it has intentions, beliefs, fears, plans, and phobias. This sounds a lot like a mind. But this impression is fundamentally misleading. “Government” is not a conscious entity with a unified apperception of the world and its own intentions. So it is worth teasing out the ways in which government nonetheless arrives at “beliefs”, “intentions”, and “decisions”.

Let’s first address the question of the mythical unity of government. In brief, government is not one unified thing. Rather, it is an extended network of offices, bureaus, departments, analysts, decision-makers, and authority structures, each of which has its own reticulated internal structure.

This has an important consequence. Instead of asking “what is the policy of the United States government towards Africa?”, we are driven to ask subordinate questions: what are the policies towards Africa of the State Department, the Department of Defense, the Department of Commerce, the Central Intelligence Agency, or the Agency for International Development? And for each of these departments we are forced to recognize that each is itself a large bureaucracy, with sub-units that have chosen or adapted their own working policy objectives and priorities. There are chief executives at a range of levels — President of the United States, Secretary of State, Secretary of Defense, Director of the CIA — and each often has the aspiration of directing his or her organization as a tightly unified and purposive unit. But it is perfectly plain that the behavior of functional units within agencies is only loosely controlled by the will of the executive. This does not mean that executives have no control over the activities and priorities of subordinate units. But it does reflect a simple and unavoidable fact about large organizations. An organization is more like a slime mold than it is like a control algorithm in a factory.

This said, organizational units at all levels arrive at something analogous to beliefs (assessments of fact and probable future outcomes), assessments of priorities and their interactions, plans, and decisions (actions to take in the near and intermediate future). And governments make decisions at the highest level (leave the EU, raise taxes on fuel, prohibit immigration from certain countries, …). How does the analytical and factual part of this process proceed? And how does the decision-making part unfold?

One factor is particularly evident in the current political environment in the United States. Sometimes the analysis and decision-making activities of government are short-circuited and taken by individual executives without an underlying organizational process. A president arrives at his view of the facts of global climate change based on his “gut instincts” rather than an objective and disinterested assessment of the scientific analysis available to him. An Administrator of the EPA acts to eliminate long-standing environmental protections based on his own particular ideological and personal interests. A Secretary of the Department of Energy takes leadership of the department without requesting a briefing on any of its current projects. These are instances of the dictator strategy (in the social-choice sense), where a single actor substitutes his will for the collective aggregation of beliefs and desires associated with both bureaucracy and democracy. In this instance the answer to our question is a simple one: in cases like these government has beliefs and intentions because particular actors have beliefs and intentions and those actors have the power and authority to impose their beliefs and intentions on government.

The more interesting cases involve situations where there is a genuine collective process through which analysis and assessment take place (of facts and priorities), and through which strategies are considered and ultimately adopted. Agencies usually make decisions through extended and formalized processes. There is generally an organized process of fact gathering and scientific assessment, followed by an assessment of various policy options with public exposure. Finally a policy is adopted (the moment of decision).

The decision by the EPA to ban DDT in 1972 is illustrative (link, link, link). This was a decision of government which thereby became the will of government. It was the result of several important sub-processes: citizen and NGO activism about the possible toxic harms created by DDT, non-governmental scientific research assessing the toxicity of DDT, an internal EPA process designed to assess the scientific conclusions about the environmental and human-health effects of DDT, an analysis of the competing priorities involved in this issue (farming, forestry, and malaria control versus public health), and a decision recommended to the Administrator and adopted that concluded that the priority of public health and environmental safety was weightier than the economic interests served by the use of the pesticide.

Other examples of agency decision-making follow a similar pattern. The development of policy concerning science and technology is particularly interesting in this context. Consider, for example, Susan Wright (link) on the politics of regulation of recombinant DNA. This issue is explored more fully in her book Molecular Politics: Developing American and British Regulatory Policy for Genetic Engineering, 1972-1982. This is a good case study of “government making up its mind”. Another interesting case study is the development of US policy concerning ozone depletion; link.

These cases of science and technology policy illustrate two dimensions of the processes through which a government agency “makes up its mind” about a complex issue. There is an analytical component in which the scientific facts and the policy goals and priorities are gathered and assessed. And there is a decision-making component in which these analytical findings are crafted into a decision — a policy, a set of regulations, or a funding program, for example. It is routine in science and technology policy studies to observe that there is commonly a substantial degree of intertwining between factual judgments and political preferences and influences brought to bear by powerful outsiders. (Here is an earlier discussion of these processes; link.)

Ideally we would like to imagine a process of government decision-making that proceeds along these lines: careful gathering and assessment of the best available scientific evidence about an issue through expert specialist panels and sections; careful analysis of the consequences of available policy choices measured against a clear understanding of goals and priorities of the government; and selection of a policy or action that is best, all things considered, for forwarding the public interest and minimizing public harms. Unfortunately, as the experience of government policies concerning climate change in both the Bush administration and the Trump administration illustrates, ideology and private interest distort every phase of this idealized process.

(Philip Tetlock’s Superforecasting: The Art and Science of Prediction offers an interesting analysis of the process of expert factual assessment and prediction. Particularly interesting is his treatment of intelligence estimates.)

Patient safety

An issue which is of concern to anyone who receives treatment in a hospital is the topic of patient safety. How likely is it that there will be a serious mistake in treatment — wrong-site surgery, incorrect medication or radiation dose, exposure to a hospital-acquired infection? The current evidence is alarming. (Martin Makary et al estimate that over 250,000 deaths per year result from medical mistakes — making medical error now the third leading cause of mortality in the United States (link).) And when these events occur, where should we look for assigning responsibility — at the individual providers, at the systems that have been implemented for patient care, at the regulatory agencies responsible for overseeing patient safety?

Medical accidents commonly demonstrate a complex interaction of factors, from the individual provider to the technologies in use to failures of regulation and oversight. We can look at a hospital as a place where caring professionals do their best to improve the health of their patients while scrupulously avoiding errors. Or we can look at it as an intricate system involving the recording and dissemination of information about patients and the administration of procedures to them (surgery, medication, radiation therapy). In this sense a hospital is similar to a factory with multiple intersecting locations of activity. Finally, we can look at it as an organization — a system of division of labor, cooperation, and supervision by large numbers of staff whose joint efforts lead to health and accidents alike. Obviously each of these perspectives is partially correct. Doctors, nurses, and technicians are carefully and extensively trained to diagnose and treat their patients. The technology of the hospital — the digital patient record system, the devices that administer drugs, the surgical robots — can be designed better or worse from a safety point of view. And the social organization of the hospital can be effective and safe, or it can be dysfunctional and unsafe. So all three aspects are relevant both to safe operations and the possibility of chronic lack of safety.

So how should we analyze the phenomenon of patient safety? What factors can be identified that distinguish high-safety hospitals from low-safety ones? What lessons can be learned from the study of accidents and mistakes that cumulatively lead to a hospital’s patient safety record?

The view that primarily emphasizes expertise and training of individual practitioners is very common in the healthcare industry, and yet this approach is not particularly useful as a basis for improving the safety of healthcare systems. Skill and expertise are necessary conditions for effective medical treatment; but the other two zones of accident space are probably more important for reducing accidents — the design of treatment systems and the organizational features that coordinate the activities of the various individuals within the system.

Dr. James Bagian is a strong advocate for the perspective of treating healthcare institutions as systems. Bagian considers both technical systems characteristics of processes and the organizational forms through which these processes are carried out and monitored. And he is very skilled at teasing out some of the ways in which features of both system and organization lead to avoidable accidents and failures. I recall his description of a safety walkthrough he had done in a major hospital. He said that during the tour he noticed a number of nurses’ stations which were covered with yellow sticky notes. He observed that this is both a symptom and a cause of an accident-prone organization. It means that individual caregivers were obligated to remind themselves of tasks and exceptions that needed to be observed. Far better was to have a set of systems and protocols that made sticky notes unnecessary. Here is the abstract from a short summary article by Bagian on the current state of patient safety:

Abstract

The traditional approach to patient safety in health care has ranged from reticence to outward denial of serious flaws. This undermines the otherwise remarkable advances in technology and information that have characterized the specialty of medical practice. In addition, lessons learned in industries outside health care, such as in aviation, provide opportunities for improvements that successfully reduce mishaps and errors while maintaining a standard of excellence. This is precisely the call in medicine prompted by the 1999 Institute of Medicine report “To Err Is Human: Building a Safer Health System.” However, to effect these changes, key components of a successful safety system must include: (1) communication, (2) a shift from a posture of reliance on human infallibility (hence “shame and blame”) to checklists that recognize the contribution of the system and account for human limitations, and (3) a cultivation of non-punitive open and/or de-identified/anonymous reporting of safety concerns, including close calls, in addition to adverse events.

(Here is the Institute of Medicine study to which Bagian refers; link.)

Nancy Leveson is an aeronautical and software engineer who has spent most of her career devoted to designing safe systems. Her book Engineering a Safer World: Systems Thinking Applied to Safety is a recent presentation of her theories of systems safety. She applies these approaches to problems of patient safety with several co-authors in “A Systems Approach to Analyzing and Preventing Hospital Adverse Events” (link). Here is the abstract and summary of findings for that article:

Objective:

This study aimed to demonstrate the use of a systems theory-based accident analysis technique in health care applications as a more powerful alternative to the chain-of-event accident models currently underpinning root cause analysis methods.

Method:

A new accident analysis technique, CAST [Causal Analysis based on Systems Theory], is described and illustrated on a set of adverse cardiovascular surgery events at a large medical center. The lessons that can be learned from the analysis are compared with those that can be derived from the typical root cause analysis techniques used today.

Results:

The analysis of the 30 cardiovascular surgery adverse events using CAST revealed the reasons behind unsafe individual behavior, which were related to the design of the system involved and not negligence or incompetence on the part of individuals. With the use of the system-theoretic analysis results, recommendations can be generated to change the context in which decisions are made and thus improve decision making and reduce the risk of an accident.

Conclusions:

The use of a systems-theoretic accident analysis technique can assist in identifying causal factors at all levels of the system without simply assigning blame to either the frontline clinicians or technicians involved. Identification of these causal factors in accidents will help health care systems learn from mistakes and design system-level changes to prevent them in the future.

Crucial in this article is this research group’s effort to identify causes “at all levels of the system without simply assigning blame to either the frontline clinicians or technicians involved”. The key result is this: “The analysis of the 30 cardiovascular surgery adverse events using CAST revealed the reasons behind unsafe individual behavior, which were related to the design of the system involved and not negligence or incompetence on the part of individuals.”

Bagian, Leveson, and others make a crucial point: in order to substantially increase the performance of hospitals and the healthcare system more generally when it comes to patient safety, it will be necessary to extend the focus of safety analysis from individual incidents and agents to the systems and organizations through which these accidents were possible. In other words, attention to systems and organizations is crucial if we are to significantly reduce the frequency of medical and hospital mistakes.

(The Makary et al estimate of 250,000 deaths caused by medical error has been questioned on methodological grounds. See Aaron Carroll’s thoughtful rebuttal (NYT 8/15/16; link).)

Nuclear accidents

 
[diagrams: Chernobyl reactor before and after]

Nuclear fission is one of the world-changing discoveries of the mid-twentieth century. The atomic bomb projects of the United States led to the atomic bombing of Japan in August 1945, and the hope for limitless electricity brought about the proliferation of a variety of nuclear reactors around the world in the decades following World War II. And, of course, nuclear weapons proliferated to other countries beyond the original circle of atomic powers.

Given the enormous energies associated with fission and the dangerous and toxic properties of radioactive components of fission processes, the possibility of a nuclear accident is a particularly frightening one for the modern public. The world has seen the results of several massive nuclear accidents — Chernobyl and Fukushima in particular — and the devastating results they have had on human populations and the social and economic wellbeing of the regions in which they occurred.

Safety is therefore a paramount priority in the nuclear industry, in research labs and in military and civilian applications alike. So what is the situation of safety in the nuclear sector? Jim Mahaffey’s Atomic Accidents: A History of Nuclear Meltdowns and Disasters: From the Ozark Mountains to Fukushima is a detailed and carefully researched attempt to answer this question. And the information he provides is not reassuring. Beyond the celebrated and well-known disasters at nuclear power plants (Three Mile Island, Chernobyl, Fukushima), Mahaffey refers to hundreds of accidents involving reactors, research laboratories, weapons plants, and deployed nuclear weapons that have had less public awareness. These accidents resulted in a very low number of lives lost, but their frequency is alarming. They are indeed “normal accidents” (Perrow, Normal Accidents: Living with High-Risk Technologies). For example:

  • a Japanese fishing boat is contaminated by fallout from Castle Bravo test of hydrogen bomb; lots of radioactive fish at the markets in Japan (March 1, 1954) (kl 1706)
  • one MK-6 atomic bomb is dropped on Mars Bluff, South Carolina, after a crew member accidentally pulled the emergency bomb release handle (February 5, 1958) (kl 5774)
  • Fermi 1 liquid sodium plutonium breeder reactor experiences fuel meltdown during startup trials near Detroit (October 4, 1966) (kl 4127)

Mahaffey also provides detailed accounts of the most serious nuclear accidents and meltdowns of the past forty years: Three Mile Island, Chernobyl, and Fukushima.

The safety and control of nuclear weapons is of particular interest. Here is Mahaffey’s summary of “Broken Arrow” events — the loss of atomic and fusion weapons:

Did the Air Force ever lose an A-bomb, or did they just misplace a few of them for a short time? Did they ever drop anything that could be picked up by someone else and used against us? Is humanity going to perish because of poisonous plutonium spread that was snapped up by the wrong people after being somehow misplaced? Several examples will follow. You be the judge. 

Chuck Hansen [U.S. Nuclear Weapons – The Secret History] was wrong about one thing. He counted thirty-two “Broken Arrow” accidents. There are now sixty-five documented incidents in which nuclear weapons owned by the United States were lost, destroyed, or damaged between 1945 and 1989. These bombs and warheads, which contain hundreds of pounds of high explosive, have been abused in a wide range of unfortunate events. They have been accidentally dropped from high altitude, dropped from low altitude, crashed through the bomb bay doors while standing on the runway, tumbled off a fork lift, escaped from a chain hoist, and rolled off an aircraft carrier into the ocean. Bombs have been abandoned at the bottom of a test shaft, left buried in a crater, and lost in the mud off the coast of Georgia. Nuclear devices have been pounded with artillery of a foreign nature, struck by lightning, smashed to pieces, scorched, toasted, and burned beyond recognition. Incredibly, in all this mayhem, not a single nuclear weapon has gone off accidentally, anywhere in the world. If it had, the public would know about it. That type of accident would be almost impossible to conceal. (kl 5527)

There are a few common threads in the stories of accident and malfunction that Mahaffey provides. First, there are failures of training and knowledge on the part of front-line workers. The physics of nuclear fission are often counter-intuitive, and the idea of critical mass does not fully capture the danger of a quantity of fissionable material. The geometry of the storage of the material makes a critical difference in going critical. Fissionable material is often transported and manipulated in liquid solution; and the shape and configuration of the vessel in which the solution is held makes a difference to the probability of exponential growth of neutron emission — leading to runaway fission of the material. Mahaffey documents accidents that occurred in nuclear materials processing plants that resulted from plant workers applying what they knew from industrial plumbing to their efforts to solve basic shop-floor problems. All too often the result was a flash of blue light and the release of a great deal of heat and radioactive material.
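The underlying arithmetic helps to explain why plumbing intuitions mislead here. The schematic relation below is standard reactor physics rather than anything specific to Mahaffey’s account: the neutron population grows or dies out geometrically with the effective multiplication factor, and that factor depends on the geometry of the vessel (how many neutrons escape through its surface) as well as on the quantity and concentration of fissile material.

```latex
% N_n: neutrons in generation n;  k_eff: effective multiplication factor
N_{n+1} = k_{\mathrm{eff}}\, N_n
  \qquad\Longrightarrow\qquad
N_n = k_{\mathrm{eff}}^{\,n}\, N_0
% k_eff < 1: the chain reaction dies out;  k_eff > 1: runaway growth.
% Pouring the same solution from a tall, narrow vessel (high neutron leakage)
% into a compact one (low leakage) can push k_eff past 1 without adding
% a single gram of material.
```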

Second, there is a fault at the opposite end of the knowledge spectrum — the tendency of expert engineers and scientists to believe that they can solve complicated reactor problems on the fly. This turned out to be a critical problem at Chernobyl (kl 6859).

The most difficult problem to handle is that the reactor operator, highly trained and educated with an active and disciplined mind, is liable to think beyond the rote procedures and carefully scheduled tasks. The operator is not a computer, and he or she cannot think like a machine. When the operator at NRX saw some untidy valve handles in the basement, he stepped outside the procedures and straightened them out, so that they were all facing the same way. (kl 2057)

There are also clear examples of inappropriate supervision in the accounts shared by Mahaffey. Here is an example from Chernobyl.

[Deputy chief engineer] Dyatlov was enraged. He paced up and down the control panel, berating the operators, cursing, spitting, threatening, and waving his arms. He demanded that the power be brought back up to 1,500 megawatts, where it was supposed to be for the test. The operators, Toptunov and Akimov, refused on grounds that it was against the rules to do so, even if they were not sure why. 

Dyatlov turned on Toptunov. “You lying idiot! If you don’t increase power, Tregub will!”  

Tregub, the Shift Foreman from the previous shift, was officially off the clock, but he had stayed around just to see the test. He tried to stay out of it. 

Toptunov, in fear of losing his job, started pulling rods. By the time he had wrestled it back to 200 megawatts, 205 of the 211 control rods were all the way out. In this unusual condition, there was danger of an emergency shutdown causing prompt supercriticality and a resulting steam explosion. At 1:22:30 a.m., a read-out from the operations computer advised that the reserve reactivity was too low for controlling the reactor, and it should be shut down immediately. Dyatlov was not worried. “Another two or three minutes, and it will be all over. Get moving, boys!” (kl 6887)

This was the turning point in the disaster.

A related fault is the intrusion of political and business interests into the design and conduct of high-risk nuclear actions. Leaders want a given outcome without understanding the technical details of the processes they are demanding; subordinates like Toptunov are eventually cajoled or coerced into taking the problematic actions. The persistence of advocates for liquid sodium breeder reactors represents a higher-level example of the same fault. Associated with this role of political and business interests is an impulse towards secrecy and concealment when accidents occur and deliberate understatement of the public dangers created by an accident — a fault amply demonstrated in the Fukushima disaster.

Atomic Accidents provides a fascinating history of events of which most of us are unaware. The book is not primarily intended to offer an account of the causes of these accidents, but rather of the ways in which they unfolded and the consequences they had for human welfare. (Generally speaking, his view is that nuclear accidents in North America and Western Europe have had remarkably few human casualties.) And many of the accidents he describes are exactly the sorts of failures that are common in all large-scale industrial and military processes.

(Large-scale technology failure has come up frequently here. See these posts for analysis of some of the organizational causes of technology failure (link, link, link).)

Trust and organizational effectiveness

It is fairly well agreed that organizations require a degree of trust among the participants in order for the organization to function at all. But what does this mean? How much trust is needed? How is trust cultivated among participants? And what are the mechanisms through which trust enhances organizational effectiveness?

The minimal requirements of cooperation presuppose a certain level of trust. As A plans and undertakes a sequence of actions designed to bring about Y, his or her efforts must rely upon the coordination promised by other actors. If A does not have a sufficiently high level of confidence in B’s assurances and compliance, then he will be rationally compelled to choose another series of actions. If Larry Bird didn’t have trust in his teammate Dennis Johnson, the famous steal would not have happened.

 

First, what do we mean by trust in the current context? Each actor in an organization or group has intentions, engages in behavior, and communicates with other actors. Part of communication is often in the form of sharing information and agreeing upon a plan of coordinated action. Agreeing upon a plan in turn often requires statements and commitments from various actors about the future actions they will take. Trust is the circumstance that permits others to rely upon those statements and commitments. We might say, then, that A trusts B just in case —

  • A believes that when B asserts P, this is an honest expression of B’s beliefs.
  • A believes that when B says he/she will do X, this is an honest commitment on B’s part and B will carry it out (absent extraordinary reasons to the contrary).
  • A believes that when B asserts that his/her actions will be guided by his/her best understanding of the purposes and goals of the organization, this is a truthful expression.
  • A believes that B’s future actions, observed and unobserved, will be consistent with his/her avowals of intentions, values, and commitments.

So what are some reasons why mistrust might rear its ugly head between actors in an organization? Why might A fail to trust B?

  • A may believe that B’s private interests are driving B’s actions (rather than adherence to prior commitments and values).
  • A may believe that B suffers from weakness of the will, an inability to carry out his honest intentions.
  • A may believe that B manipulates his statements of fact to suit his private interests.
  • Or less dramatically: A may not have high confidence in these features of B’s behavior.
  • B may have no real interest or intention in behaving in a truthful way.

And what features of organizational life and practice might be expected to either enhance inter-personal trust or to undermine it?

Trust is enhanced by individuals having the opportunity to get acquainted with their collaborators in a more personal way — to see from non-organizational contexts that they are generally well intentioned; that they make serious efforts to live up to their stated intentions and commitments; and that they are generally honest. So perhaps there is a rationale for the bonding exercises that many companies undertake for their workers.

Likewise, trust is enhanced by the presence of a shared and practiced commitment to the value of trustworthiness. An organization itself can enhance trust in its participants by performing the actions that its participants expect the organization to perform. For example, an organization that abruptly and without consultation ends an important employee benefit undermines the employees’ trust that the organization has their best interests at heart. This abrogation of prior obligations may in turn lead individuals to behave in a less trustworthy way, and lead others to have lower levels of trust in each other.

How does enhancing trust have the promise of bringing about higher levels of organizational effectiveness? Fundamentally this comes down to the question of the value of teamwork and the burden of unnecessary transaction costs. If every expense report requires investigation, the amount of resources spent on accountants will be much greater than in a situation where only the outlying reports are questioned. If each vice president needs to defend him or herself against the possibility that another vice president is conspiring against him, then less time and energy are available to do the work of the organization. If the CEO doesn’t have high confidence that her executive team will work wholeheartedly to bring about a successful implementation of a risky investment, then the CEO will choose less risky investments.
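The expense-report example can be made roughly quantitative. The sketch below is purely illustrative: every number is invented, and a real selective-audit regime would target suspicious reports rather than a random fraction, which would make it look even better than this crude calculation suggests. The point is simply that review labour dominates total cost when nothing is taken on trust.

```python
def control_regime_cost(n_reports, review_cost, audit_share, fraud_rate, avg_loss):
    """Crude expected cost of an expense-control regime: reviewer labour plus
    fraudulent claims that go unreviewed. All parameters are hypothetical."""
    review_labour = n_reports * audit_share * review_cost
    missed_fraud = n_reports * fraud_rate * avg_loss * (1 - audit_share)
    return review_labour + missed_fraud

# High-trust regime: audit only the ~5% of reports flagged as outliers.
trusting = control_regime_cost(10_000, review_cost=40, audit_share=0.05,
                               fraud_rate=0.01, avg_loss=500)
# Low-trust regime: investigate every report.
suspicious = control_regime_cost(10_000, review_cost=40, audit_share=1.0,
                                 fraud_rate=0.01, avg_loss=500)
print(trusting, suspicious)  # 67500.0 vs 400000.0 with these made-up numbers
```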

In other words, trust is crucial for collaboration and teamwork. And an organization that manages to cultivate a high level of trust among its participants is likely to perform better than one that depends primarily on supervision and enforcement.

The culture of an organization

It is often held that the behavior of a particular organization is affected by its culture. Two banks may have very similar organizational structures but show rather different patterns of behavior, and those differences are ascribed to differences in culture. What does this mean? Clifford Geertz is one of the most articulate theorists of culture — especially in his earlier works. Here is a statement couched in terms of religion as a cultural system from The Interpretation Of Cultures. A religion is …

(1) a system of symbols which act to (2) establish powerful, pervasive, and long-lasting moods and motivations in men by (3) formulating conceptions of a general order of existence and (4) clothing these conceptions with such an aura of factuality that (5) the moods and motivations seem uniquely realistic. (90)

And again:

The concept of culture I espouse, and whose utility the essays below attempt to demonstrate, is essentially a semiotic one. Believing, with Max Weber, that man is an animal suspended in webs of significance he himself has spun, I take culture to be those webs, and the analysis of it to be therefore not an experimental science in search of law but an interpretive one in search of meaning. (5)

On its face this idea seems fairly simple. We might stipulate that “culture” refers to a set of beliefs, values, and practices that are shared by a number of individuals within the group, including leaders, managers, and staff members. But, as we have seen repeatedly in other posts, we need to think of these statements in terms of a distribution across a population rather than as a uniform set of values.

Consider this hypothetical comparison of two organizations with respect to employees’ attitudes towards working with colleagues of a different religion. (This example is fictitious.) Suppose that the employees of two organizations have been surveyed on the topic of their comfort level at working with other people of different religious beliefs, on a scale of 0-21. Low values indicate a lower level of comfort.

The blue organization shows a distribution of individuals who are on average more accepting of religious diversity than the gold organization. The weighted score for the blue population is about 10.4, in comparison to a weighted score of 9.9 for the gold population. This is a relatively small difference between the two populations; but it may be enough to generate meaningful differences in behavior and performance. If, for example, the attitude measured here leads to an increased likelihood for individuals to make disparaging comments about a co-worker’s religion, then we might predict that the gold group will have a somewhat higher level of incidents of religious intolerance. And if we further hypothesize that a disparaging work environment has some effect on work productivity, then we might predict that the blue group will have somewhat higher productivity.
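For concreteness, the "weighted score" here is just the mean survey response weighted by the number of respondents at each level. The counts below are invented placeholders chosen only so that the results land near the scores quoted above; they are not the data behind the original comparison.

```python
def weighted_score(counts):
    """Mean comfort score for one organization, given a mapping from
    survey response (on the 0-21 scale) to number of respondents."""
    total = sum(counts.values())
    return sum(value * n for value, n in counts.items()) / total

# Hypothetical respondent counts, for illustration only.
blue = {8: 8, 10: 20, 11: 22, 12: 10}
gold = {8: 15, 10: 25, 11: 15, 12: 5}
print(round(weighted_score(blue), 1), round(weighted_score(gold), 1))  # 10.4 9.9
```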

Current discussions of sexual harassment in the workplace are often couched in terms of organizational culture. It appears that sexual harassment is more frequent and flagrant in some organizations than others. Women are particularly likely to be harassed in a work culture in which men believe and act as though they are at liberty to impose sexual language and action on female co-workers and in which the formal processes of reporting of harassment are weak or disregarded. The first is a cultural fact and the second is a structural or institutional fact.

We can ask several causal questions about this interpretation of organizational culture. What are the factors that lead to the establishment and currency of a given profile of beliefs, values, and practices within an organization? And what factors exist that either reproduce those beliefs or undermine them? Finally we can ask what the consequences of a given culture profile are in the internal and external performance of the organization.

There seem to be two large causal mechanisms responsible for establishment and maintenance of a particular cultural constellation within an organization. First is recruitment. One organization may make a specific effort to screen candidates so as to select in favor of a particular set of values and attitudes — acceptance, collaboration, trustworthiness, openness to others. And another may favor attitudes and values that are thought to be more directly related to profitability or employee malleability. These selection mechanisms can lead to significant differences in the overall culture of the organization. And the decision to orient recruitment in one way rather than another is itself an expression of values.

The second large mechanism is the internal socialization and leadership processes of the organization. We can hypothesize that an organization whose leaders and supervisors both articulate the values of equality and respect in the workplace and who demonstrate that commitment in their own actions will be one in which more people in the organization will adopt those values. And we can likewise hypothesize that the training and evaluation processes of an organization can be effective in cultivating the values of the organization. In other words, it seems evident that leadership and training are particularly relevant to the establishment of a particular organizational culture.

The other large causal question is how and to what extent cultural differences across organizations have effects on the performance and behavior of those organizations. We can hypothesize that differences in organizational values and culture lead to differences in behavior within the organization — more or less collaboration, more or less harassment, more or less bad behavior of various kinds. These differences are themselves highly important. But we can also hypothesize that differences like these can lead to differences in organizational effectiveness. This is the central idea of the field of positive organizational studies. Scholars like Kim Cameron and others argue, on the basis of empirical studies across organizational settings, that organizations that embody the values of mutual acceptance, equality, and a positive orientation towards each others’ contributions are in fact more productive organizations as well (Competing Values Leadership: Second Edition; link).

A new model of organization?

In Team of Teams: New Rules of Engagement for a Complex World General Stanley McChrystal (with Tantum Collins, David Silverman, and Chris Fussell) describes a new, 21st-century conception of organization for large, complex activities involving thousands of individuals and hundreds of major sub-tasks. His concept is grounded in his experience in counter-insurgency warfare in Iraq. Rather than being constructed as centrally organized, bureaucratic, hierarchical processes with commanders and scripted agents, McChrystal argues that modern counter-terrorism requires a more decentralized and flexible system of action, which he refers to as “teams of teams”. Information is shared freely, local commanders have ready access to resources and knowledge from other experts, and they make decisions in a more flexible way. The model hopes to capture the benefits of improvisation, flexibility, and a much higher level of trust and communication than is characteristic of typical military and corporate organizations.

 

One place where the “team of teams” structure is plausible is in the context of a focused technology startup company, where the whole group of participants need to be in regular and frequent collaboration with each other. Indeed, Paul Rabinow’s 1996 ethnography of the Cetus Corporation in its pursuit of PCR (polymerase chain reaction), Making PCR: A Story of Biotechnology, reflects a very similar topology of information flows and collaboration links across and within working subgroups (link). But the vision does not fit very well the organizational and operational needs of a large hospital, a railroad company, or a research university. It seems plausible that the challenges the US military faced in fighting Al-Qaeda and ISIL are not really analogous to those faced by less dramatic organizations like hospitals, universities, and corporations. The decentralized and improvisational circumstances of urban warfare against loosely organized terrorists may be sui generis.

McChrystal proposes an organizational structure that is more decentralized, more open to local decision-making, and more flexible and resilient. These are unmistakeable virtues in some circumstances; but not in all circumstances and all organizations. And arguably such a structure would have been impossible in the planning and execution of the French defense of Dien Bien Phu or the US decision to wage war against the Vietnamese insurgency ten years later. These were situations where central decisions needed to be made, and the decisions needed to be implemented through well organized bureaucracies. The problem in both instances is that the wrong decisions were made, based on the wrong information and assessments. What was needed, it would appear, was better executive leadership and decision-making — not a fundamentally decentralized pattern of response and counter-response.

One thing that deserves comment in the context of McChrystal’s book is the history of bad organization, bad intelligence, and bad decision-making the world has witnessed in the military experiences of the past century. The radical miscalculations and failures of planning involved in the first months of the Korean War, the painful and tragic misjudgments made by the French military in preparing for Dien Bien Phu, the equally bad thinking and planning done by Robert McNamara and the whiz kids leading to the Vietnam War — these examples stand out as sentinel illustrations of the failures of large organizations that have been tasked to carry out large, complex activities involving numerous operational units. The military and the national security establishments were good at some tasks, and disastrously bad at others. And the things they were bad at were both systemic and devastating. Bernard Fall illustrates these failures in Hell In A Very Small Place: The Siege Of Dien Bien Phu, and David Halberstam does so for the decision-making that led to the war in Vietnam in The Best and the Brightest.

So devising new ideas about command, planning, intelligence gathering and analysis, and priority-setting that are more effective would be a big contribution to humanity. But the deficiencies in Dien Bien Phu, Korea, or Vietnam seem different from those McChrystal identifies in Iraq. What was needed in these portentous moments of policy choice was clear-eyed establishment of appropriate priorities and goals, honest collection of intelligence and sources of information, and disinterested implementation of policies and plans that served the highest interests of the country. The “team of teams” approach doesn’t seem to be a general solution to the wide range of military and political challenges nations face.

What one would have wanted to see in the French military or the US national security apparatus is something different from the kind of teamwork described by McChrystal: greater honesty on all parts, a commitment to taking seriously the assessments of experts and participants in the field, an openness to questioning strongly held assumptions, and a greater capacity for institutional wisdom in arriving at decisions of this magnitude. We would have wanted to see a process that was not dominated by large egos, self-interest, and fixed ideas. We would have wanted French generals and their civilian masters to soberly assess the military function that a fortress camp at Dien Bien Phu could satisfy; the realistic military requirements that would need to be satisfied in order to defend the location; and an honest effort to solicit the very best information and judgment from experienced commanders and officials about what a Viet-Minh siege might look like. Instead, the French military was guided by complacent assumptions about French military superiority, which led to a genuine catastrophe for the soldiers assigned to the task and to French society more broadly.

There are valid insights contained in McChrystal’s book about the urgency of breaking down obstacles to communication and action within sprawling organizations as they confront a changing environment. But it doesn’t add up to a model that is well designed for most contexts in which large organizations actually function.

How organizations adapt

Organizations do things; they depend upon the coordinated efforts of numerous individuals; and they exist in environments that affect their ongoing success or failure. Moreover, organizations are to some extent plastic: the practices and rules that make them up can change over time. Sometimes these changes happen as the result of deliberate design choices by individuals inside or outside the organization; so a manager may alter the rules through which decisions are made about hiring new staff in order to improve the quality of work. And sometimes they happen through gradual processes over time that no one is specifically aware of. The question arises, then, whether organizations evolve toward higher functioning based on the signals from the environments in which they live, or whether, on the contrary, organizational change is stochastic, without a gradient of change towards more effective functioning. Do changes within an organization add up over time to improved functioning? What kinds of social mechanisms might bring about such an outcome?

One way of addressing this topic is to consider organizations as mid-level social entities that are potentially capable of adaptation and learning. An organization has identifiable internal processes of functioning as well as a delineated boundary of activity. It has a degree of control over its functioning. And it is situated in an environment that signals differential success/failure through a variety of means (profitability, success in gaining adherents, improvement in market share, number of patents issued, …). So the environment responds favorably or unfavorably, and change occurs.

Is there anything in this specification of the structure, composition, and environmental location of an organization that suggests the possibility or likelihood of adaptation over time in the direction of improvement of some measure of organizational success? Do institutions and organizations get better as a result of their interactions with their environments and their internal structure and actors?

There are a few possible social mechanisms that would support the possibility of adaptation towards higher functioning. One is the fact that purposive agents are involved in maintaining and changing institutional practices. Those agents are capable of perceiving inefficiencies and potential gains from innovation, and are sometimes in a position to introduce appropriate innovations. This is true at various levels within an organization, from the supervisor of a custodial staff to a vice president for marketing to a CEO. If the incentives presented to these agents are aligned with the important needs of the organization, then we can expect that they will introduce innovations that enhance functioning. So one mechanism through which we might expect that organizations will get better over time is the fact that some agents within an organization have the knowledge and power necessary to enact changes that will improve performance, and they sometimes have an interest in doing so. In other words, there is a degree of intelligent intentionality within an organization that might work in favor of enhancement.

This line of thought should not be over-emphasized, however, because there are competing forces and interests within most organizations. Previous posts have focused on current organizational theory based on the idea of a “strategic action field” of insiders and outsiders who determine the activities of the organization (Fligstein and McAdam, Crozier; link, link). This framework suggests that the structure and functioning of an organization is not wholly determined by a single intelligent actor (“the founder”), but is rather the temporally extended result of interactions among actors in the pursuit of diverse aims. This heterogeneity of purposive actions by actors within an institution means that the direction of change is indeterminate; it is possible that the coalitions that form will bring about positive change, but the reverse is possible as well.

And in fact, many authors and participants have pointed out that it is often not the case that the agents’ interests are aligned with the priorities and needs of the organization. Jack Knight offers a persuasive critique of the idea that organizations and institutions tend to increase in their ability to provide collective benefits in Institutions and Social Conflict. CEOs who have a financial interest in a rapid stock price increase may take steps that worsen functioning for short-term market gain; supervisors may avoid work-flow innovations because they don’t want the headache of an extended change process; vice presidents may deny information to other divisions in order to enhance appreciation of the efforts of their own division. Here is a short description from Knight’s book of the way that institutional adjustment occurs as a result of conflict among players of unequal powers:

Individual bargaining is resolved by the commitments of those who enjoy a relative advantage in substantive resources. Through a series of interactions with various members of the group, actors with similar resources establish a pattern of successful action in a particular type of interaction. As others recognize that they are interacting with one of the actors who possess these resources, they adjust their strategies to achieve their best outcome given the anticipated commitments of others. Over time rational actors continue to adjust their strategies until an equilibrium is reached. As this becomes recognized as the socially expected combination of equilibrium strategies, a self-enforcing social institution is established. (Knight, 143)
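One loose way to picture Knight's point (a toy model of my own, not Knight's own formulation) is a repeated division of a surplus in which each side adjusts its demand over time, and the actor with the weaker outside option concedes faster because breakdown is costlier for it. The function and parameter names below are invented for the illustration.

```python
def knight_bargaining(rounds=3000, outside_strong=0.4, outside_weak=0.1):
    """Toy model: two actors repeatedly divide a surplus of 1. When their
    demands are compatible, each edges its demand upward; when they are not,
    bargaining breaks down and each concedes in proportion to what it stands
    to lose relative to its outside option, so the resource-poor actor
    concedes faster. The stable division that emerges plays the role of
    the 'self-enforcing social institution'."""
    demand_strong, demand_weak = 0.5, 0.5
    up, concession = 0.001, 0.05
    for _ in range(rounds):
        if demand_strong + demand_weak <= 1.0:
            demand_strong += up
            demand_weak += up
        else:
            demand_strong -= concession * (demand_strong - outside_strong)
            demand_weak -= concession * (demand_weak - outside_weak)
    return round(demand_strong, 2), round(demand_weak, 2)

# The split settles in favor of the actor with the better outside option.
print(knight_bargaining())
```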

A very different possible mechanism is unit selection, where more successful innovations or firms survive and less successful innovations and firms fail. This is the premise of the evolutionary theory of the firm (Nelson and Winter, An Evolutionary Theory of Economic Change). In a competitive market, firms with low internal efficiency will have a difficult time competing on price with more efficient firms; so these low-efficiency firms will sometimes be driven out of business. Here the question of “units of selection” arises: is it firms over which selection operates, or is it lower-level innovations that are the object of selection?
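A minimal sketch of the selection mechanism (my own simplification, not Nelson and Winter's model) takes firms as the units of selection: a population of firms with differing efficiency, periodic exit of the least efficient, and entry by imitators who copy surviving routines imperfectly. Function and parameter names are invented for the example.

```python
import random

def selection_on_firms(n_firms=100, periods=50, seed=2):
    """Toy selection dynamic: each firm has an 'efficiency' level; each
    period the least efficient tenth of firms exits the market and is
    replaced by entrants who imitate surviving firms with some copying
    error. Average efficiency in the population drifts upward."""
    rng = random.Random(seed)
    efficiencies = [rng.uniform(0.0, 1.0) for _ in range(n_firms)]
    for _ in range(periods):
        efficiencies.sort()
        survivors = efficiencies[n_firms // 10:]   # bottom 10% are selected out
        entrants = [min(1.0, max(0.0, rng.choice(survivors) + rng.gauss(0, 0.05)))
                    for _ in range(n_firms - len(survivors))]
        efficiencies = survivors + entrants
    return sum(efficiencies) / n_firms

print(selection_on_firms(periods=1))    # after one round of selection
print(selection_on_firms(periods=50))   # substantially higher after fifty rounds
```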

Geoffrey Hodgson provides a thoughtful review of this set of theories here, as part of what he calls “competence-based theories of the firm”. Here is Hodgson’s diagram of the relationships that exist among several different approaches to the study of the firm.

The market mechanism does not work very well as a selection mechanism for some important categories of organizations — government agencies, legislative systems, or non-profit organizations. This is so because the criterion of selection is “profitability / efficiency within a competitive market”, and government and non-profit organizations are not significantly subject to the workings of a market.

In short, the answer to the fundamental question here is mixed. There are factors that unquestionably work to enhance effectiveness in an organization. But these factors are weak and defeasible, and the countervailing factors (internal conflict, divided interests of actors, slackness of the corporate marketplace) leave open the possibility that institutions change without evolving in any consistent direction. And the glaring dysfunctions that have afflicted many organizations, both corporate and governmental, make this conclusion even more persuasive. Perhaps what demands explanation is the rare case where an organization achieves a high level of effectiveness and consistency in its actions, rather than the many cases that come to mind of dysfunctional organizational activity.

(The examples of organizational dysfunction that come to mind are many — the failures of regulation of the civilian nuclear industry (Perrow, The Next Catastrophe: Reducing Our Vulnerabilities to Natural, Industrial, and Terrorist Disasters); the failure of US anti-submarine warfare in World War II (Cohen and Gooch, Military Misfortunes: The Anatomy of Failure in War); and the failure of chemical companies to ensure safe operations of their plants (Shrivastava, Bhopal: Anatomy of Crisis). Here is an earlier post that addresses some of these examples; link. And here are several earlier posts on the topic of institutional change and organizational behavior; link, link.)

Errors in organizations

Organizations do things — process tax returns, deploy armies, send spacecraft to Mars. And in order to do these various things, organizations have people with job descriptions; organization charts; internal rules and procedures; information flows and pathways; leaders, supervisors, and frontline staff; training and professional development programs; and other particular characteristics that make up the decision-making and action implementation of the organization. These individuals and sub-units take on tasks, communicate with each other, and give rise to action steps.

And often enough organizations make mistakes — sometimes small mistakes (a tax return is sent to the wrong person, a hospital patient is administered two aspirins rather than one) and sometimes large mistakes (the space shuttle Challenger is cleared for launch on January 28, 1986, a Union Carbide plant accidentally releases toxic gases over a large population in Bhopal, FEMA bungles its response to Hurricane Katrina). What can we say about the causes of organizational mistakes? And how can organizations and their processes be improved so mistakes are less common and less harmful?

Charles Perrow has devoted much of his career to studying these questions. Two books in particular have shed a great deal of light on the organizational causes of industrial and technological accidents, Normal Accidents: Living with High-Risk Technologies and The Next Catastrophe: Reducing Our Vulnerabilities to Natural, Industrial, and Terrorist Disasters. (Perrow’s work has been discussed in several earlier posts; link, link, link.) The first book emphasizes that errors and accidents are unavoidable; they are the random noise of the workings of a complex organization. So the key challenge is to have processes that detect errors and that are resilient to the ones that make it through. One of Perrow’s central findings in The Next Catastrophe is the importance of achieving a higher level of system resilience by decentralizing risk and potential damage. Don’t route tanker cars of chlorine through dense urban populations; don’t place nuclear power plants adjacent to cities; don’t create an Internet or a power grid with a very small number of critical nodes. Kathleen Tierney’s The Social Roots of Risk: Producing Disasters, Promoting Resilience (High Reliability and Crisis Management) emphasizes the need for system resilience as well (link).
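To make the point about critical nodes concrete, here is a small illustration of my own (not drawn from Perrow or Tierney): compare a hub-and-spoke network, in which everything routes through one node, with a ring, in which no node is critical, by removing the most-connected node and measuring how much of the network still hangs together. The network shapes and function names are invented for the example.

```python
from collections import deque

def largest_component_after_failure(adjacency, n_failures=1):
    """Remove the most-connected node(s) and return the fraction of the
    remaining nodes that sit in the largest connected component, a crude
    measure of how well the system survives the loss of its hubs."""
    by_degree = sorted(adjacency, key=lambda n: len(adjacency[n]), reverse=True)
    removed = set(by_degree[:n_failures])
    remaining = [n for n in adjacency if n not in removed]
    seen, largest = set(), 0
    for start in remaining:
        if start in seen:
            continue
        seen.add(start)
        queue, size = deque([start]), 0
        while queue:
            node = queue.popleft()
            size += 1
            for nbr in adjacency[node]:
                if nbr not in removed and nbr not in seen:
                    seen.add(nbr)
                    queue.append(nbr)
        largest = max(largest, size)
    return largest / len(remaining)

# Hub-and-spoke: every node communicates only through node 0.
hub = {0: list(range(1, 11))}
hub.update({i: [0] for i in range(1, 11)})

# Ring: each node has two neighbours; no single node is critical.
ring = {i: [(i - 1) % 11, (i + 1) % 11] for i in range(11)}

print(largest_component_after_failure(hub))   # 0.1  (the network shatters)
print(largest_component_after_failure(ring))  # 1.0  (the rest stays connected)
```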

Is it possible to arrive at a more granular understanding of organizational errors and their sources? A good place to begin is with the theory of organizations as “strategic action fields” in the sense advocated by Fligstein and McAdam in A Theory of Fields. This approach imposes an important discipline on us — it discourages the mental mistake of reification when we think about organizations. Organizations are not unitary decision and action bodies; instead, they are networks of people linked in a variety of forms of dependency and cooperation. Various sub-entities consider tasks, gather information, and arrive at decisions for action, and each of these steps is vulnerable to errors and shortfalls. The activities of individuals and sub-groups are stimulated and conveyed through these networks of association; and, as in any network of control or communication, there is always the possibility of a broken link or a faulty action step within this extended set of relationships.

Errors can derive from individual mistakes; they can derive from miscommunication across individuals and sub-units within the organization; they can derive from more intentional sources, including self-interested or corrupt behavior on the part of internal participants. And they can derive from conflicts of interest between units within an organization (the manufacturing unit has an interest in maximizing throughput, the quality control unit has an interest in minimizing faulty products).

Errors are likely in every part of an organization’s life. Errors occur in the data-gathering and analysis functions of an organization. A sloppy market study is incorporated into a planning process, leading to a substantial overestimate of demand for a product; a survey of suppliers makes use of ambiguous questions that lead to misinterpretation of the results; a vice president underestimates the risk posed by a competitor’s advertising campaign. For an organization to pursue its mission effectively, it needs to have accurate information about the external circumstances that are most relevant to its goals. But “relevance” is a matter of judgment; and it is possible for an organization to devote its intelligence-gathering resources to the collection of data that are only tangentially helpful for the task of designing actions to carry out the mission of the institution.

Errors occur in implementation as well. The action initiatives that emerge from an organization’s processes — from committees, from CEOs, from intermediate-level leaders, from informal groups of staff — are also vulnerable to errors of implementation. The facilities team formulates a plan for re-surfacing a group of parking lots; this plan depends upon closing these lots several days in advance; but the safety department delays in implementing the closure and the lots have hundreds of cars in them when the resurfacing equipment arrives. An error of implementation.

One way of describing these kinds of errors is to recognize that organizations are “loosely connected” when it comes to internal processes of information gathering, decision making, and action. The CFO stipulates that the internal audit function should be based on best practices nationally; the chief of internal audit interprets this as an expectation that processes should be designed based on the example of top-tier companies in the same industry; and the subordinate operationalizes this expectation by doing a survey of business-school case studies of internal audit functions at 10 companies. But the data collection that actually occurs has only a loose relationship to the higher-level expectation formulated by the CFO. Similar disconnects — or loose connections — occur on the side of implementation of action steps as well. Presumably top FEMA officials did not intend that FEMA’s actions in response to Hurricane Katrina would be as ineffective and sporadic as they turned out to be.
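A crude way to see how loose coupling compounds (a toy calculation of my own, not part of the fields literature): suppose each handoff in a chain of interpretation preserves the original intention with some fixed probability; then the chance that the action finally taken still matches the initial directive falls off geometrically with the number of handoffs. The function and parameter names are invented for the illustration.

```python
import random

def drift_through_handoffs(n_handoffs=4, fidelity=0.8, trials=10000, seed=3):
    """Toy calculation: a directive passes through a chain of handoffs, and
    each handoff preserves the original intention only with probability
    `fidelity`. Returns the share of trials in which the action finally
    taken still matches the original intention."""
    rng = random.Random(seed)
    intact = sum(
        all(rng.random() < fidelity for _ in range(n_handoffs))
        for _ in range(trials)
    )
    return intact / trials

# Even fairly reliable individual handoffs compound into substantial drift.
print(drift_through_handoffs(n_handoffs=1))  # roughly 0.8
print(drift_through_handoffs(n_handoffs=4))  # roughly 0.8 ** 4, about 0.41
```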

Organizations also have a tendency towards acting on the basis of collective habits and traditions of behavior. It is easier for a university’s admissions department to continue the same programs of recruitment and enrollment year after year than it is to rethink the approach to recruitment in a fundamental way. And yet it may be that the circumstances of the external environment have changed so dramatically that the habitual practices will no longer achieve similar results. A good example is the emergence of social media marketing in admissions; in a very short period of time the 17- and 18-year-olds whom admissions departments want to influence went from willing recipients of glossy admissions publications in the mail to “Facebook-only” readers. Yesterday’s correct solution to an organizational problem may become tomorrow’s serious error, because the environment has changed.

In a way the problem of organizational errors is analogous to the problem of software bugs in large, complex computer systems. Software experts recognize that bugs are inevitable; and some of these coding errors or design errors may have catastrophic consequences in unusual settings. (Nancy Leveson’s Safeware: System Safety and Computers provides an excellent review of these possibilities.) So the task for software engineers and organizational designers and leaders is similar: designing fallible systems that do a pretty good job almost all of the time, and are likely to fail gracefully when errors inevitably occur.
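In that spirit, here is a schematic example (mine, not Leveson's) of what “failing gracefully” can look like at the level of a single software component: the fallible call is wrapped so that an error is logged and a conservative fallback is used rather than letting the failure propagate to the caller. The function names and the fallback value are invented for the illustration.

```python
import logging

logging.basicConfig(level=logging.WARNING)

def fetch_exchange_rate() -> float:
    """Stand-in for a fallible subsystem, e.g. a flaky network call."""
    raise TimeoutError("upstream service did not respond")

def price_in_euros(price_usd: float, default_rate: float = 0.9) -> float:
    """Fail gracefully: if the fallible component errors out, log the
    problem and fall back to a conservative default rather than letting
    the failure propagate to the caller."""
    try:
        rate = fetch_exchange_rate()
    except Exception as exc:
        logging.warning("exchange-rate lookup failed (%s); using default", exc)
        rate = default_rate
    return price_usd * rate

print(price_in_euros(100.0))  # still returns a usable answer: 90.0
```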
