How organizations adapt

Organizations do things; they depend upon the coordinated efforts of numerous individuals; and they exist in environments that affect their ongoing success or failure. Moreover, organizations are to some extent plastic: the practices and rules that make them up can change over time. Sometimes these changes result from deliberate design choices by individuals inside or outside the organization; a manager, for example, may alter the rules through which decisions are made about hiring new staff in order to improve the quality of work. And sometimes they happen through gradual processes that no one is specifically aware of. The question arises, then, whether organizations evolve toward higher functioning based on the signals from the environments in which they live, or whether, on the contrary, organizational change is stochastic, without a gradient towards more effective functioning. Do changes within an organization add up over time to improved functioning? What kinds of social mechanisms might bring about such an outcome?

One way of addressing this topic is to consider organizations as mid-level social entities that are potentially capable of adaptation and learning. An organization has identifiable internal processes of functioning as well as a delineated boundary of activity. It has a degree of control over its functioning. And it is situated in an environment that signals differential success/failure through a variety of means (profitability, success in gaining adherents, improvement in market share, number of patents issued, …). So the environment responds favorably or unfavorably, and change occurs.

Is there anything in this specification of the structure, composition, and environmental location of an organization that suggests the possibility or likelihood of adaptation over time in the direction of improvement of some measure of organizational success? Do institutions and organizations get better as a result of their interactions with their environments and their internal structure and actors?

There are a few possible social mechanisms that would support the possibility of adaptation towards higher functioning. One is the fact that purposive agents are involved in maintaining and changing institutional practices. Those agents are capable of perceiving inefficiencies and potential gains from innovation, and are sometimes in a position to introduce appropriate innovations. This is true at various levels within an organization, from the supervisor of a custodial staff to a vice president for marketing to a CEO. If the incentives presented to these agents are aligned with the important needs of the organization, then we can expect that they will introduce innovations that enhance functioning. So one mechanism through which we might expect that organizations will get better over time is the fact that some agents within an organization have the knowledge and power necessary to enact changes that will improve performance, and they sometimes have an interest in doing so. In other words, there is a degree of intelligent intentionality within an organization that might work in favor of enhancement.

This line of thought should not be over-emphasized, however, because there are competing forces and interests within most organizations. Previous posts have focused on current organizational theory based on the idea of a “strategic action field” of insiders and outsiders who determine the activities of the organization (Fligstein and McAdam, Crozier; link, link). This framework suggests that the structure and functioning of an organization is not wholly determined by a single intelligent actor (“the founder”), but is rather the temporally extended result of interactions among actors in the pursuit of diverse aims. This heterogeneity of purposive actions by actors within an institution means that the direction of change is indeterminate; it is possible that the coalitions that form will bring about positive change, but the reverse is possible as well.

And in fact, many authors and participants have pointed out that agents’ interests are often not aligned with the priorities and needs of the organization. In Institutions and Social Conflict, Jack Knight offers a persuasive critique of the idea that organizations and institutions tend to increase in their ability to provide collective benefits. CEOs who have a financial interest in a rapid stock price increase may take steps that worsen functioning for short-term market gain; supervisors may avoid work-flow innovations because they don’t want the headache of an extended change process; vice presidents may deny information to other divisions in order to enhance appreciation of the efforts of their own division. Here is a short description from Knight’s book of the way that institutional adjustment occurs as a result of conflict among players of unequal powers:

Individual bargaining is resolved by the commitments of those who enjoy a relative advantage in substantive resources. Through a series of interactions with various members of the group, actors with similar resources establish a pattern of successful action in a particular type of interaction. As others recognize that they are interacting with one of the actors who possess these resources, they adjust their strategies to achieve their best outcome given the anticipated commitments of others. Over time rational actors continue to adjust their strategies until an equilibrium is reached. As this becomes recognized as the socially expected combination of equilibrium strategies, a self-enforcing social institution is established. (Knight, 143)

A very different possible mechanism is unit selection, in which more successful innovations or firms survive and less successful ones fail. This is the premise of the evolutionary theory of the firm (Nelson and Winter, An Evolutionary Theory of Economic Change). In a competitive market, firms with low internal efficiency will have a difficult time competing on price with more efficient firms, so these low-efficiency firms will tend to go out of business over time. Here the question of “units of selection” arises: is it firms over which selection operates, or is it lower-level innovations that are the objects of selection?
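The selection intuition can be sketched in a toy simulation (every name and parameter here is invented for illustration, not drawn from Nelson and Winter’s models): firms whose efficiency falls below a noisy market cutoff exit, and entrants imperfectly imitate survivors, so mean efficiency drifts upward even though no individual firm deliberately improves.

```python
import random

def simulate_selection(n_firms=100, n_rounds=50, exit_threshold=0.3, seed=0):
    """Toy selection dynamics: firms below a noisy market cutoff exit;
    entrants imperfectly imitate a surviving firm."""
    rng = random.Random(seed)
    firms = [rng.random() for _ in range(n_firms)]  # efficiency in [0, 1]
    for _ in range(n_rounds):
        cutoff = exit_threshold + rng.uniform(-0.05, 0.05)  # noisy selection
        survivors = [e for e in firms if e > cutoff]
        while len(survivors) < n_firms:
            parent = rng.choice(survivors)  # entrant copies a survivor,
            survivors.append(min(1.0, max(0.0, parent + rng.gauss(0, 0.05))))
        firms = survivors
    return sum(firms) / len(firms)  # mean efficiency after selection
```

Run with the defaults, the population’s mean efficiency ends well above the initial expectation of 0.5 — the Nelson-Winter point in miniature: selection over firms, not foresight within them, drives the aggregate improvement.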

Geoffrey Hodgson provides a thoughtful review of this set of theories here, part of what he calls “competence-based theories of the firm”. Here is Hodgson’s diagram of the relationships that exist among several different approaches to the study of the firm.

The market mechanism does not work very well as a selection mechanism for some important categories of organizations — government agencies, legislative systems, and non-profit organizations. This is because the criterion of selection is “profitability / efficiency within a competitive market,” and government and non-profit organizations are largely insulated from the workings of a market.

In short, the answer to the fundamental question here is mixed. There are factors that unquestionably work to enhance effectiveness in an organization. But these factors are weak and defeasible, and the countervailing factors (internal conflict, divided interests of actors, slackness of the corporate marketplace) leave open the possibility that institutions change without evolving in a consistent direction. And the glaring dysfunctions that have afflicted many organizations, both corporate and governmental, make this conclusion even more persuasive. Perhaps what demands explanation is the rare case where an organization achieves a high level of effectiveness and consistency in its actions, rather than the many cases that come to mind of dysfunctional organizational activity.

(The examples of organizational dysfunction that come to mind are many — the failures of regulation of the civilian nuclear industry (Perrow, The Next Catastrophe: Reducing Our Vulnerabilities to Natural, Industrial, and Terrorist Disasters); the failure of US anti-submarine warfare in World War II (Cohen and Gooch, Military Misfortunes: The Anatomy of Failure in War); and the failure of chemical companies to ensure safe operations of their plants (Shrivastava, Bhopal: Anatomy of Crisis). Here is an earlier post that addresses some of these examples; link. And here are several earlier posts on the topic of institutional change and organizational behavior; link, link.)

Rational choice institutionalism

Where do institutions come from? And what kinds of social forces are at work to stabilize them once they are up and running? These are questions that historical institutionalists like Kathleen Thelen have considered in substantial depth (link, link, link). But the rational-choice paradigm has also offered answers to these questions. The basic idea presented by the RCT paradigm is that institutions are the result of purposive agents coping with existential problems, forming alliances, and pursuing their interests in a rational way. James Coleman is one of the exponents of this approach in Foundations of Social Theory, where he treats institutions and norms as coordinated and mutually reinforcing patterns of individual behavior (link).

An actor-centered theory of institutions requires a substantial amount of boot-strapping: we need to have an account of how a set of rules and practices could have emerged from the purposive but often conflictual activities of individuals, and we need a similar account of how those rules are stabilized and enforced by individuals who have no inherent interest in the stability of the rules within which they act. Further, we need to take account of well-known conflicts between private and public benefits, short-term and long-term benefits, and intended and unintended benefits. Rational-choice theorists since Mancur Olson in The Logic of Collective Action: Public Goods and the Theory of Groups have made it clear that we cannot explain social outcomes on the basis of the collective benefits that they provide; rather, we need to show how those arrangements result from relatively myopic, relatively self-interested actors with bounded ability to foresee consequences.

Ken Shepsle is a leading advocate for a rational-choice theory of institutions within political science. He offers an exposition of his thinking in his contribution to The Oxford Handbook of Political Institutions (link). He distinguishes between institutions as exogenous and institutions as endogenous. The first conception takes the rules and practices of an institution as fixed and external to the individuals who operate within them, while the second looks at the rules and practices as being the net result of the intentions and actions of those individuals themselves. On the second view, it is open to the individuals within an activity to attempt to change the rules; and one set of rules will perhaps have better results for one set of interests than another. So the choice of rules in an activity is not a matter of indifference to the participants. (For example, untenured faculty might undertake a campaign to change the way the university evaluates teaching in the context of the tenure process, or to change the relative weights assigned to teaching and research.) Shepsle also distinguishes between structured and unstructured institutions — a distinction that other authors characterize as “formal/informal”. The distinction has to do with the degree to which the rules of the activity are codified and reinforced by strong external pressures. Shepsle encompasses various informal solutions to collective action problems under the rubric of unstructured institutions — fluid solutions to a transient problem.

This description of institutions begins to frame the problem, but it doesn’t go very far. In particular, it doesn’t provide much insight into the dynamics of conflict over rule-setting among parties with different interests in a process. Other scholars have pushed the analysis further.

French sociologists Crozier and Friedberg address this problem in Actors and Systems: The Politics of Collective Action (1980 [1977]). Their premise is that actors within organizations have substantially more agency and freedom than they are generally afforded by orthodox organization theory, and we can best understand the workings and evolution of the organization as (partially) the result of the strategic actions of the participants (instead of understanding the conduct of the participants as a function of the rules of the organization). They look at institutions as solutions to collective action problems — tasks or performances that allow attainment of a goal that is of interest to a broad public, but for which there are no antecedent private incentives for cooperation. Organized solutions to collective problems — of which organizations are key examples — do not emerge spontaneously; instead, “they consist of nothing other than solutions, always specific, that relatively autonomous actors have created, invented, established, with their particular resources and capacities, to solve these challenges for collective action” (15). And Crozier and Friedberg emphasize the inherent contingency of these particular solutions; there are always alternative solutions, neither better nor worse. This is a rational-choice analysis, though couched in sociological terms rather than economists’ terms. (Here is a more extensive discussion of Crozier and Friedberg; link.)

Jack Knight brings conflict and power into the rational-choice analysis of the emergence of institutions in Institutions and Social Conflict.

I argue that the emphasis on collective benefits in theories of social institutions fails to capture crucial features of institutional development and change. I further argue that our explanations should invoke the distributional effects of such institutions and the conflict inherent in those effects. This requires an investigation of those factors that determine how these distributional conflicts are resolved. (13-14)

Institutions are not created to constrain groups or societies in an effort to avoid suboptimal outcomes but, rather, are the by-product of substantive conflicts over the distributions inherent in social outcomes. (40)

Knight believes that we need to have microfoundations for the ways in which institutions emerge and behave (14), and he finds those mechanisms in the workings of rational choices by participants in the field of interaction from which the institution emerges.

Actors choose their strategies under various circumstances. In some situations individuals regard the rest of their environment, including the actions of others, as given. They calculate their optimal strategy within the constraints of fixed parameters…. But actors are often confronted by situations characterized by an interdependence between other actors and themselves…. Under these circumstances individuals must choose strategically by incorporating the expectations of the actions of others into their own decision making. (17)

This implies, in particular, that we should not expect socially optimal or efficient outcomes in the emergence of institutions; rather, we should expect institutions that differentially favor the interests of some groups and disfavor those of other groups — even if the social total is lower than it would be under a more egalitarian arrangement.

I conclude that social efficiency cannot provide the substantive content of institutional rules. Rational self-interested actors will not be the initiators of such rules if they diminish their own utility. Therefore rational-choice explanations of social institutions based on gains in social efficiency fail as long as they are grounded in the intentions of social actors. (34)

Knight’s work explicitly refutes the Panglossian (or Smithian) assumption sometimes associated with rational choice theory and micro-economics: the idea that individually rational action leads to a collectively efficient outcome (the invisible hand). This may be true in the context of certain kinds of markets, but it is not generally true in the social and political world. And Knight shows in detail how the assumption fails in the case of institutional emergence and ongoing workings.
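Knight’s distributional claim can be illustrated with a toy coordination game (the payoffs and rule names here are invented for illustration, not taken from Knight): both institutional “rules” are stable equilibria, but the actor with greater resources can commit to the one that favors her, even though it yields a lower social total.

```python
# payoffs[(row, col)] = (row_payoff, col_payoff); values invented for illustration
payoffs = {
    ("rule_A", "rule_A"): (5, 1),  # favors the resource-rich row player
    ("rule_B", "rule_B"): (3, 4),  # higher joint benefit
    ("rule_A", "rule_B"): (0, 0),  # coordination failure
    ("rule_B", "rule_A"): (0, 0),
}
strategies = ["rule_A", "rule_B"]

def pure_nash(payoffs, strategies):
    """Return the pure-strategy Nash equilibria of a two-player game."""
    equilibria = []
    for r in strategies:
        for c in strategies:
            pr, pc = payoffs[(r, c)]
            row_best = all(payoffs[(r2, c)][0] <= pr for r2 in strategies)
            col_best = all(payoffs[(r, c2)][1] <= pc for c2 in strategies)
            if row_best and col_best:
                equilibria.append((r, c))
    return equilibria

equilibria = pure_nash(payoffs, strategies)
# The resource-rich row player can commit to rule_A, selecting the
# equilibrium that maximizes her payoff but not the social total.
committed = max(equilibria, key=lambda e: payoffs[e][0])
```

Both coordination outcomes survive as equilibria, which is why the commitment power of the stronger party, rather than social efficiency, does the selecting — the game-theoretic core of Knight’s argument.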

Rational choice theory is one particular and specialized version of actor-centered social science (link). It differs from other approaches in the very narrow assumptions it makes about the actor’s particular form of agency; it assumes narrow economic rationality rather than a broader conception of agency or practical rationality (link). What seems clear to me is that we need to take an actor-centered approach if we want to understand institutions — either their emergence or their continuing functioning and change. So the approach taken by rational-choice theorists is ontologically correct. If RCT fails to provide an adequate analysis of institutions, it is because the underlying theory of agency is fundamentally unrealistic about human actors.

Accident analysis and systems thinking

Complex socio-technical systems fail; that is, accidents occur. And it is enormously important for engineers and policy makers to have a better way of thinking about accidents than is the current protocol following an air crash, a chemical plant fire, or the release of a contaminated drug. We need to understand better what the systems and organizational causes of an accident are; even more importantly, we need to have a basis for improving the safe functioning of complex socio-technical systems by identifying better processes and better warning indicators of impending failure.

A long-term leader in the field of systems-safety thinking is Nancy Leveson, a professor of aeronautics and astronautics at MIT and the author of Safeware: System Safety and Computers (1995) and Engineering a Safer World: Systems Thinking Applied to Safety (2012). Leveson has been a particular advocate for two insights: looking at safety as a systems characteristic, and looking for the organizational and social components of safety and accidents as well as the technical event histories that are more often the focus of accident analysis. Her approach to safety and accidents involves looking at a technology system in terms of the set of controls and constraints that have been designed into the process to prevent accidents. “Accidents are seen as resulting from inadequate control or enforcement of constraints on safety-related behavior at each level of the system development and system operations control structures.” (25)

The abstract for her essay “A New Accident Model for Engineering Safety” (link) captures both points.

New technology is making fundamental changes in the etiology of accidents and is creating a need for changes in the explanatory mechanisms used. We need better and less subjective understanding of why accidents occur and how to prevent future ones. The most effective models will go beyond assigning blame and instead help engineers to learn as much as possible about all the factors involved, including those related to social and organizational structures. This paper presents a new accident model founded on basic systems theory concepts. The use of such a model provides a theoretical foundation for the introduction of unique new types of accident analysis, hazard analysis, accident prevention strategies including new approaches to designing for safety, risk assessment techniques, and approaches to designing performance monitoring and safety metrics.

The accident model she describes in this article and elsewhere is STAMP (Systems-Theoretic Accident Model and Processes). Here is a short description of the approach.

In STAMP, systems are viewed as interrelated components that are kept in a state of dynamic equilibrium by feedback loops of information and control. A system in this conceptualization is not a static design—it is a dynamic process that is continually adapting to achieve its ends and to react to changes in itself and its environment. The original design must not only enforce appropriate constraints on behavior to ensure safe operation, but the system must continue to operate safely as changes occur. The process leading up to an accident (loss event) can be described in terms of an adaptive feedback function that fails to maintain safety as performance changes over time to meet a complex set of goals and values…. 

The basic concepts in STAMP are constraints, control loops and process models, and levels of control. (12)

The other point of emphasis in Leveson’s treatment of safety is her consistent effort to include the social and organizational forms of control that are a part of the safe functioning of a complex technological system.

Event-based models are poor at representing systemic accident factors such as structural deficiencies in the organization, management deficiencies, and flaws in the safety culture of the company or industry. An accident model should encourage a broad view of accident mechanisms that expands the investigation from beyond the proximate events. (6)

She treats the organizational backdrop of the technology process in question as being a crucial component of the safe functioning of the process.

Social and organizational factors, such as structural deficiencies in the organization, flaws in the safety culture, and inadequate management decision making and control are directly represented in the model and treated as complex processes rather than simply modeling their reflection in an event chain. (26)

And she treats organizational features as another form of control system (along the lines of Jay Forrester’s early definitions of systems in Industrial Dynamics).

Modeling complex organizations or industries using system theory involves dividing them into hierarchical levels with control processes operating at the interfaces between levels (Rasmussen, 1997). Figure 4 shows a generic socio-technical control model. Each system, of course, must be modeled to reflect its specific features, but all will have a structure that is a variant on this one. (17)

Here is figure 4:

The approach embodied in the STAMP framework is that safety is a systems effect, dynamically influenced by the control systems embodied in the total process in question.

In systems theory, systems are viewed as hierarchical structures where each level imposes constraints on the activity of the level beneath it—that is, constraints or lack of constraints at a higher level allow or control lower-level behavior (Checkland, 1981). Control laws are constraints on the relationships between the values of system variables. Safety-related control laws or constraints therefore specify those relationships between system variables that constitute the nonhazardous system states, for example, the power must never be on when the access door is open. The control processes (including the physical design) that enforce these constraints will limit system behavior to safe changes and adaptations. (17)
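The control law in the quoted example — “the power must never be on when the access door is open” — can be rendered as a toy interlock. This is a hedged sketch of the idea of a safety constraint enforced by a control process, not code drawn from Leveson’s own models; all class and method names are invented.

```python
from dataclasses import dataclass

@dataclass
class PlantState:
    door_open: bool = False
    power_on: bool = False

class SafetyController:
    """Toy interlock enforcing one control law: power must never be on
    while the access door is open."""

    def __init__(self, state: PlantState):
        self.state = state

    def request_power_on(self) -> bool:
        if self.state.door_open:
            return False  # the constraint blocks the hazardous transition
        self.state.power_on = True
        return True

    def open_door(self) -> None:
        self.state.power_on = False  # feedback action: cut power first
        self.state.door_open = True

    def invariant_holds(self) -> bool:
        return not (self.state.power_on and self.state.door_open)
```

The point of the sketch is Leveson’s: safety lives not in any single component but in the controller that constrains which state transitions the system is allowed to make.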

Leveson’s understanding of systems theory brings along with it a strong conception of “emergence”. She argues that higher levels of systems possess properties that cannot be reduced to the properties of the components, and that safety is one such property:

In systems theory, complex systems are modeled as a hierarchy of levels of organization, each more complex than the one below, where a level is characterized by having emergent or irreducible properties. Hierarchy theory deals with the fundamental differences between one level of complexity and another. Its ultimate aim is to explain the relationships between different levels: what generates the levels, what separates them, and what links them. Emergent properties associated with a set of components at one level in a hierarchy are related to constraints upon the degree of freedom of those components. (11)

But her understanding of “irreducible” seems to be different from that commonly used in the philosophy of science. She does in fact believe that these higher-level properties can be explained by the system of properties at the lower levels — for example, in this passage she asks “… what generates the levels” and how the emergent properties are “related to constraints” imposed on the lower levels. In other words, her position seems to be similar to that advanced by Dave Elder-Vass (link): emergent properties are properties at a higher level that are not possessed by the components, but which depend upon the interactions and composition of the lower-level components.

The domain of safety engineering and accident analysis seems like a particularly suitable place for Bayesian analysis. It seems unavoidable that accident analysis involves both frequency-based probabilities (e.g. the frequency of pump failure) and expert-based estimates of the likelihood of a particular kind of failure (e.g. the likelihood that a train operator will slacken attention to track warnings in response to company pressure on the timetable). Bayesian techniques are suitable for the task of combining these various kinds of estimates of risk into a unified calculation.
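As a minimal illustration of how such a combination might work (the prior and the data here are invented for illustration), an expert judgment can be encoded as a Beta prior over a failure probability and updated with observed frequency data by conjugate Bayesian updating:

```python
def beta_posterior(prior_alpha, prior_beta, failures, trials):
    """Combine an expert prior (encoded as a Beta distribution) with
    frequency data via conjugate Bayesian updating."""
    a = prior_alpha + failures
    b = prior_beta + (trials - failures)
    return a, b, a / (a + b)  # posterior parameters and posterior mean

# Expert judgment puts the failure probability near 2%: Beta(2, 98).
# Field data: 5 failures observed in 100 demands.
a, b, post_mean = beta_posterior(2, 98, 5, 100)  # post_mean = 0.035
```

The posterior mean (3.5%) sits between the expert’s 2% and the observed 5%, weighted by the effective sample sizes of prior and data — exactly the kind of unified risk estimate the paragraph describes.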

The topic of safety and accidents is particularly relevant to Understanding Society because it expresses very clearly the causal complexity of the social world in which we live. And rather than simply ignoring that complexity, the systematic study of accidents gives us an avenue for arriving at better ways of representing, modeling, and intervening in parts of that complex world.


Errors in organizations

Organizations do things — process tax returns, deploy armies, send spacecraft to Mars. And in order to do these various things, organizations have people with job descriptions; organization charts; internal rules and procedures; information flows and pathways; leaders, supervisors, and frontline staff; training and professional development programs; and other particular characteristics that make up the decision-making and action implementation of the organization. These individuals and sub-units take on tasks, communicate with each other, and give rise to action steps.

And often enough organizations make mistakes — sometimes small mistakes (a tax return is sent to the wrong person, a hospital patient is administered two aspirins rather than one) and sometimes large mistakes (the space shuttle Challenger is cleared for launch on January 28, 1986, a Union Carbide plant accidentally releases toxic gases over a large population in Bhopal, FEMA bungles its response to Hurricane Katrina). What can we say about the causes of organizational mistakes? And how can organizations and their processes be improved so mistakes are less common and less harmful?

Charles Perrow has devoted much of his career to studying these questions. Two books in particular have shed a great deal of light on the organizational causes of industrial and technological accidents, Normal Accidents: Living with High-Risk Technologies and The Next Catastrophe: Reducing Our Vulnerabilities to Natural, Industrial, and Terrorist Disasters. (Perrow’s work has been discussed in several earlier posts; link, link, link.) The first book emphasizes that errors and accidents are unavoidable; they are the random noise of the workings of a complex organization. So the key challenge is to have processes that detect errors and that are resilient to the ones that make it through. One of Perrow’s central findings in The Next Catastrophe is the importance of achieving a higher level of system resilience by decentralizing risk and potential damage. Don’t route tanker cars of chlorine through dense urban populations; don’t place nuclear power plants adjacent to cities; don’t create an Internet or a power grid with a very small number of critical nodes. Kathleen Tierney’s The Social Roots of Risk: Producing Disasters, Promoting Resilience (High Reliability and Crisis Management) emphasizes the need for system resilience as well (link).

Is it possible to arrive at a more granular understanding of organizational errors and their sources? A good place to begin is with the theory of organizations as “strategic action fields” in the sense advocated by Fligstein and McAdam in A Theory of Fields. This approach imposes an important discipline on us — it discourages the mental mistake of reification when we think about organizations. Organizations are not unitary decision and action bodies; instead, they are networks of people linked in a variety of forms of dependency and cooperation. Various sub-entities consider tasks, gather information, and arrive at decisions for action, and each of these steps is vulnerable to errors and shortfalls. The activities of individuals and sub-groups are stimulated and conveyed through these networks of association; and, like any network of control or communication, there is always the possibility of a broken link or a faulty action step within the extended set of relationships that exist.

Errors can derive from individual mistakes; they can derive from miscommunication across individuals and sub-units within the organization; they can derive from more intentional sources, including self-interested or corrupt behavior on the part of internal participants. And they can derive from conflicts of interest between units within an organization (the manufacturing unit has an interest in maximizing throughput, the quality control unit has an interest in minimizing faulty products).

Errors are likely in every part of an organization’s life. Errors occur in the data-gathering and analysis functions of an organization. A sloppy market study is incorporated into a planning process leading to a substantial over-estimate of demand for a product; a survey of suppliers makes use of ambiguous questions that lead to misinterpretation of the results; a vice president under-estimates the risk posed by a competitor’s advertising campaign. For an organization to pursue its mission effectively, it needs to have accurate information about the external circumstances that are most relevant to its goals. But “relevance” is a judgment issue; and it is possible for an organization to devote its intelligence-gathering resources to the collection of data that are only tangentially helpful for the task of designing actions to carry out the mission of the institution.

Errors occur in implementation as well. The action initiatives that emerge from an organization’s processes — from committees, from CEOs, from intermediate-level leaders, from informal groups of staff — are also vulnerable to errors of implementation. The facilities team formulates a plan for re-surfacing a group of parking lots; this plan depends upon closing these lots several days in advance; but the safety department delays in implementing the closure and the lots have hundreds of cars in them when the resurfacing equipment arrives. An error of implementation.

One way of describing these kinds of errors is to recognize that organizations are “loosely connected” when it comes to internal processes of information gathering, decision making, and action. The CFO stipulates that the internal audit function should be based on best practices nationally; the chief of internal audit interprets this as an expectation that processes should be designed based on the example of top-tier companies in the same industry; and the subordinate operationalizes this expectation by doing a survey of business-school case studies of internal audit functions at 10 companies. But the data collection that occurs now has only a loose relationship to the higher-level expectation formulated by the CFO. Similar disconnects — or loose connections — occur on the side of implementation of action steps as well. Presumably top FEMA officials did not intend that FEMA’s actions in response to Hurricane Katrina would be as ineffective and sporadic as they turned out to be.

Organizations also have a tendency towards acting on the basis of collective habits and traditions of behavior. It is easier for a university’s admissions department to continue the same programs of recruitment and enrollment year after year than it is to rethink the approach to recruitment in a fundamental way. And yet it may be that the circumstances of the external environment have changed so dramatically that the habitual practices will no longer achieve similar results. A good example is the emergence of social media marketing in admissions; in a very short period of time the 17- and 18-year-olds whom admissions departments want to influence went from willing recipients of glossy admissions publications in the mail to “Facebook-only” readers. Yesterday’s correct solution to an organizational problem may become tomorrow’s serious error, because the environment has changed.

In a way the problem of organizational errors is analogous to the problem of software bugs in large, complex computer systems. It is recognized by software experts that bugs are inevitable; and some of these coding errors or design errors may have catastrophic consequences in unusual settings. (Nancy Leveson’s Safeware: System Safety and Computers provides an excellent review of these possibilities.) So the task for software engineers and organizational designers and leaders is similar: designing fallible systems that do a pretty good job almost all of the time, and are likely to fail gracefully when errors inevitably occur.

Capitalism 2.0?


Capitalism is one particular configuration of the economic institutions that define production and consumption in a society. It involves private ownership of firms and resources, and a system of wage labor through which individuals compete for jobs within the context of a labor market. By its nature it creates positions of substantial power for owners of capital, and generally little power for owners of labor power — workers. In theory capitalism can be joined with both democratic and authoritarian systems of government — for example, France (democratic) and Argentina in the 1970s (military dictatorship). (Here is an earlier post on alternative capitalisms; link.)

As Marx himself noted, capitalism brought a number of powerful and emancipatory changes into the world. But it is plain that there are substantial deficiencies in our contemporary political economy, from the point of view of the great majority of society. For example:

  • Rising inequalities of income and wealth
  • Disproportionate power of corporations in political and economic life
  • Persistence of racial and ethnic segregation and discrimination 
  • Slow rates of social mobility
  • Pervasive inequalities of opportunity
  • Overwhelming influence of money in electoral politics
  • Inability to address the causes of climate change
  • Inability of the state to effectively regulate products and processes to ensure health and safety
  • Manipulation of culture and values for the sake of profit

What kinds of institutional changes might we imagine for our current political economy that do a better job of satisfying the demands of justice and human wellbeing?

A number of philosophers, political scientists, and economists have addressed the question of how to envision a more just form of capitalism. Kathleen Thelen considers the prospects for an “egalitarian capitalism” (Varieties of Liberalization and the New Politics of Social Solidarity; link); Jon Elster had an important contribution to make on the question of alternatives to capitalism (Alternatives to Capitalism; link); and John Rawls put forward a view of a preferable alternative to capitalism, which he referred to as a property-owning democracy (O’Neill and Williamson, Property-Owning Democracy: Rawls and Beyond; link).

So what might capitalism 2.0 look like if we want a genuinely fair and progressive society in the 21st century? Several features seem clear.

  • Something like decentralized markets in labor and capital seems unavoidable in a large modern society. So the 21st-century economy will be a market economy.
  • Rawls is right that extreme inequalities of property ownership lead to unacceptable inequalities of political participation and human capability fulfillment. So the 21st century will need to find effective ways of distributing wealth and income more broadly.
  • Market mechanisms generally leave some disadvantaged sub-populations behind. A key goal of the 21st century state must be to find effective ways of improving the prerequisites of opportunity for disadvantaged groups. This means that a substantial equality of availability and access to education, nutrition, housing, and other components of quality of life need to be secured by the state.
  • Existing market institutions do not automatically guarantee fair equality of opportunity. So the political economy of capitalism 2.0 will need to use public resources and authority to ensure equality of opportunity for all citizens.

What kinds of political and economic institutions would serve to advance these social goals?

One approach that is gaining international attention is the idea of a universal basic income for all citizens. Belgian philosopher Philippe van Parijs makes a powerful case for the need for universal basic income (link) in the world economy we now face. Here is his definition in the Boston Review article:

By universal basic income I mean an income paid by a government, at a uniform level and at regular intervals, to each adult member of society. The grant is paid, and its level is fixed, irrespective of whether the person is rich or poor, lives alone or with others, is willing to work or not. In most versions–certainly in mine–it is granted not only to citizens, but to all permanent residents. 

The UBI is called “basic” because it is something on which a person can safely count, a material foundation on which a life can firmly rest. Any other income–whether in cash or in kind, from work or savings, from the market or the state–can lawfully be added to it. On the other hand, nothing in the definition of UBI, as it is here understood, connects it to some notion of “basic needs.” A UBI, as defined, can fall short of or exceed what is regarded as necessary to a decent existence. (link)

Swiss voters decisively rejected such a proposal for Switzerland this spring (link), but serious debates continue. 

Another approach results from politically effective demands for real equality of opportunity. Equality of opportunity requires high-quality public education for everyone. So capitalism 2.0 needs to embody educational institutions that are substantially better and more egalitarian than those we now have — ranging from pre-school to K-12 to universities. Consider this fascinating county-level map of the United States combining per capita income, high school graduation rate, and college graduation rate (link):


The map makes clear the strong association between county income and educational attainment, which implies in turn that children born into the wrong zip code have substantially lower likelihood of attaining high-quality educational success. A more just society would show little variation with respect to educational attainment, even when it also shows substantial variation in per-capita incomes across counties. Achieving comparable levels of educational attainment across rich and poor counties requires a substantial public investment in schools, teachers, and educational resources.

Another determinant of equality of opportunity is universal access to quality healthcare. Poor health affects both current quality of life and future productivity; so when poor people are in circumstances in which they cannot afford or gain access to high-quality healthcare, their current and future life prospects are at risk.

All of these ideas about a more just capitalism require resources; and those resources can only come from public finance, or taxation. The wealth of a society is a joint product which the market allocates privately. Taxation is the mechanism through which the benefits of social cooperation extend more fully to all members of society. It is through taxation that a capitalist society has the potential for creating an environment with high levels of equality of opportunity for its citizens and high levels of quality of life for its population. The resulting political economy promises to be the foundation of a more equitable and productive society. (Here is a post on the moral basis for the extensive democratic state; link.)

Positive organizational behavior

source: Rob Cross, Wayne Baker, Andrew Parker, “What creates energy in organizations?” (link)

Organizations merit study for several important reasons. One is their ubiquity in modern life — almost nothing that we need in daily life is created by solo producers. Rather, activity among numerous individuals is coordinated and directed through organizations that produce the goods and services we need — grocery chains, trucking companies, police departments, universities, small groups of cooperating chimpanzees.

A second reason for studying organizations is that existing theories of human behavior don’t do a very good job of explaining organizational behavior. The theory of rational self-interest — the premise of the market — doesn’t work very well as a sole theory of behavior within an organization. But neither does the normative theory of a Durkheim or a Weber. We need better theories of the forms of agency at work within organizations — the motives individuals have, the ways in which the rules and incentives of the organization affect behavior, the ways the culture of the workplace influences behavior, and the role that local-level practices play in shaping individual behavior in ways that matter for the functioning of the organization.

Here are a few complications from current work in sociology and economics.

Economist Amartya Sen observes that the premises of market rationality make social cooperation all but impossible. This is Sen’s central conclusion in “Rational Fools” (link), and it is surely correct: “The purely rational economic man is indeed close to being a social moron”. Sen’s work demonstrates that social behavior — even conceding the point that it derives from the thought processes of individuals — is substantially more layered and multi-factored than neoclassical economics postulates. Sen’s own addition to the mix is his theory of commitments — the idea that individuals have priorities that don’t map conveniently onto utility schemes — and that lots of ordinary collective behavior depends on these behavioral characteristics.

Sociologist Michele Lamont argues that a major difference between upper-middle class French and American men is their attitudes towards their own work in the office or factory. In Money, Morals, and Manners: The Culture of the French and the American Upper-Middle Class she finds that professional-class French men express a certain amount of contempt for their hard-working American counterparts. Her findings suggest substantial differences in the “culture of work and profession” in different national and regional settings. (Here is an earlier post on Lamont’s work; link.)

Experimental economist Ernst Fehr finds that workplaces create substantial behavioral predispositions that are triggered by the frame of the workplace (link). In unpublished work he finds that individuals in the banking industry are slightly more honest than the general population when they think in the frame of their personal lives, but that they are substantially less honest when they think in the frame of the banking office. Fehr and his colleagues demonstrate the power of cultural cues in the workplace (and presumably other well-defined social environments) in influencing the way that individuals make decisions in that environment.

Fehr has also made a major contribution through his research in experimental economics on the subject of altruism. He finds — across contexts — that decision makers are generally not rationally self-interested maximizers. And drawing on results from neuroscience, he argues that there is a biological basis for this “pro-social” element of behavior. Here is an example of Fehr’s approach:

If we randomly pick two human strangers from a modern society and give them the chance to engage in repeated anonymous exchanges in a laboratory experiment, reciprocally altruistic behaviour emerges spontaneously with a high probability…. However, human altruism even extends far beyond reciprocal altruism and reputation-based cooperation taking the form of strong reciprocity. (Fehr and Fischbacher 2005: 7)

(Here is an article by Jon Elster on Fehr’s experimental research on altruism; link.)

So what can we discover about common features of behavior that can be observed in different kinds of organizations? There is a degree of convergence between the theoretical and experimental results that have come out of this research in sociology and economics and the organizational theories of what is now referred to as positive organizational studies. Here is a brilliant collection of research in this area edited by Kim Cameron and Gretchen Spreitzer, The Oxford Handbook of Positive Organizational Scholarship. Cameron and Spreitzer define the field in their introduction in these terms:

Positive organizational scholarship is an umbrella concept used to unify a variety of approaches in organizational studies, each of which incorporates the notion of ‘the positive.’ … “organizational research occurring at the micro, meso, and macro levels which points to unanswered questions about what processes, states, and conditions are important in explaining individual and collective flourishing. Flourishing refers to being in an optimal range of human functioning” [quoting Jane Dutton] (2).

The POS research community places a great deal of importance on the impact that positive social behavior has on the effectiveness of an organization. And these scholars believe that specific institutional arrangements and actions by leaders can increase the levels of positive social behavior in a work environment.

Studies have shown that organizations in several industries (including financial services, health care, manufacturing, and government) that implemented and improved their positive practices over time also increased their performance in desired outcomes such as profitability, productivity, quality, customer satisfaction, and employee retention. That is, positive practices that were institutionalized in organizations, including providing compassionate support for employees, forgiving mistakes and avoiding blame, fostering the meaningfulness of work, expressing frequent gratitude, showing kindness, and caring for colleagues, led organizations to perform at significantly higher levels on desired outcomes. (6)

In a sense they point to the possibility of high-level and low-level equilibria within roughly the same set of rules. And organizations that succeed in promoting positive behavioral motivations will be more successful in achieving their goals. Adam Grant and Justin Berg analyze these positive motives in their contribution to the Handbook, “Prosocial Motivation at Work”.

What motivates employees to care about making a positive difference in the lives of others, and what actions and experiences does this motivation fuel? (29)

It is both a theoretical premise of the POS research community and an empirical subject of inquiry for these researchers that it is possible to elicit “prosocial” motivations through suitable institutional arrangements and leadership. Interestingly, this seems to be an implication of the work by Ernst Fehr mentioned above as well.

Positive organizational scholarship is a timely contribution to the social sciences because it stands on the cusp between the need for better theories of the actor and the imperative to improve the performance of organizations. Hospitals, manufacturing companies, universities, and non-profit organizations all want to improve their performance in a variety of ways: improve patient safety, reduce costs, improve product quality, improve student retention, improve the delivery of effective social services, and the like. And POS is an empirically grounded approach to arriving at a better understanding of the range of social behaviors that can potentially motivate participants and lead to better collective performance. And the category of “prosocial motivation” that underlies the POS approach is an important dimension of behavior for further research and investigation.

Coleman’s house-of-cards theory of structures

image: Henri Bonaventure Monnier, Crowded Restaurant 1860

image: James Coleman, Foundations of Social Theory, p. 245

James Coleman offers a skeptical position on the question of the reality of social structures in his landmark book, Foundations of Social Theory (1990). Coleman advocates for a view of research and theory in sociology that emphasizes the actions of situated purposive individuals, and he deliberately avoids the idea of persistent social structures within which actors make choices. His focus is on the relations among actors and the higher-level patterns that arise from these relations.

The social environment can be viewed as consisting of two parts. One is the “natural” social environment, growing autonomously as simple social relations develop and expand the structure. A second portion is what may be described as the built, or constructed, social environment, organizations composed of complex social relations. The constructed social environment does not grow naturally through the interests of actors who are parties to relations. Each relation must be constructed by an outsider, and each relation is viable only through its connections to other relations that are part of the same organization…. The structure is like a house of cards, with extensive interdependence among the different relations of which it is composed. (43-44)

This is a fascinating formulation. Essentially Coleman is offering a sketch of how we might conceive of a social ontology that suffices without reference to structures as independent entities. We are advised to think of social structures and norms as no more than coordinated and mutually reinforcing patterns of individual behavior. The emphasis is on individual behavior within the context of the actions of others. As he puts the point later in the book, “The elementary actor is the wellspring of action, no matter how complex are the structures through which action takes place” (503). Essentially there is no place for structures in Coleman’s boat (link).

Coleman takes a similar approach to the topic of social norms, one of the engines through which social structures are generally thought to wield influence on action:

Much sociological theory takes social norms as given and proceeds to examine individual behavior or the behavior of social systems when norms exist. Yet to do this without raising at some point the question of why and how norms come into existence is to forsake the more important sociological problem in order to address the less important. (241)

Coleman offers an example of the house-of-cards interdependence in question here in his discussion of problems arising within bureaucracies as a result of the cost of oversight and policing:

Many kinds of behavior in bureaucracies derive from this fundamental defect: stealing from an employer, loafing on the job, featherbedding (in which two persons do the work of one), padding of expense accounts, use of organizational resources for personal ends, and waste. (79)

These kinds of behavior will swamp the organization, unless there are other actors within the organization who will undertake the costly activity of observing and punishing bad behavior. This might come about because of a formal incentive — people are paid to be auditors. Or it might come about from internalized but informal motives acting in other persons — envy, a sense of fairness, or loyalty to the organization.
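The logic of Coleman’s “fundamental defect” can be put in simple expected-payoff terms. The sketch below is purely illustrative: the payoff numbers and detection probabilities are my own assumptions, not drawn from any empirical study.

```python
def shirking_payoff(gain=50.0, penalty=200.0, p_detect=0.1):
    """Expected payoff of shirking for one worker: the private gain,
    minus the penalty discounted by the chance an auditor notices.
    All numbers are illustrative assumptions."""
    return gain - p_detect * penalty

# With little monitoring, shirking pays and can be expected to spread:
assert shirking_payoff(p_detect=0.05) > 0
# Sufficient (costly) monitoring flips the expected payoff negative:
assert shirking_payoff(p_detect=0.5) < 0
# The break-even level of monitoring is gain / penalty:
assert shirking_payoff(p_detect=50.0 / 200.0) == 0
```

On this toy account, the organization’s problem is that maintaining detection at or above the break-even level is itself costly, which is exactly why other actors with formal incentives or informal motives are needed to sustain it.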

The best illustration I can think of in this context is the category of conventional practices of behavior. Let’s say that a study finds that Americans over-tip in small local restaurants. Here is a possible explanation. There is no rule or enforcement mechanism that punishes poor tippers. But because the restaurant is local, the client knows he or she will be returning; and because it is small, he or she knows that today’s behavior will be noted and remembered. Further, the server recognizes the dynamic and reinforces it by providing small non-obligatory extras to the client — a free dessert on a birthday, a good table for a special occasion, a larger pour from the wine bottle. This is an example of social behavior that fits Coleman’s description of a “house of cards” pattern of interdependency between client and server. If the server stops playing his or her role, the client is less inclined to over-tip the next time; and if the supererogatory tip is not forthcoming, the server is less likely to be generous with service at the next visit. The pattern is stable and it can be explained fully in Coleman-like terms. Each party has an interest in continuing the practice, and the pattern is reinforced. (David Lewis does a great job of showing how conventions emerge from intentional behavior at the individual level; Convention: A Philosophical Study.)
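The interdependence in the tipping example can be sketched as a tiny repeated game in the spirit of Lewis’s account of convention. The reciprocation rule below — each party stays generous only if the other reciprocated on the previous visit — is an illustrative assumption, not a claim about actual restaurant behavior.

```python
def tipping_convention(rounds=10, client_defects_at=None):
    """Iterate the client/server pattern described above. On each visit the
    client over-tips if the last service included extras, and the server
    adds extras if the last tip was generous (a tit-for-tat convention)."""
    client_generous = True   # did the client over-tip last time?
    server_generous = True   # did the server add extras last time?
    history = []
    for visit in range(rounds):
        client_tips = server_generous
        if client_defects_at is not None and visit == client_defects_at:
            client_tips = False  # a one-off lapse by the client
        server_extras = client_generous
        history.append((client_tips, server_extras))
        client_generous, server_generous = client_tips, server_extras
    return history

# Left undisturbed, the convention is self-reinforcing:
assert all(tip and extra for tip, extra in tipping_convention())
# A single lapse destabilizes it, house-of-cards style:
print(tipping_convention(rounds=6, client_defects_at=2))
```

A one-off defection echoes through subsequent visits rather than being repaired, which is the fragility Coleman’s house-of-cards image captures; adding a forgiveness rule to the sketch would make the pattern more robust.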

Anyone who accepts that social entities and forces rest upon microfoundations must agree that something like Coleman’s recursive story of self-reinforcing patterns of behavior must be correct. But this does not imply that higher-level social structures do not possess stable causal properties nonetheless. The “house-of-cards” pattern of interdependency between auditor and worker, or between server and client, helps to explain how the stable patterns of the organization are maintained; but it does not render superfluous the idea that the structure itself has causal properties or powers. The microfoundations thesis does not entail reductionism (link). (I offered a similar argument in response to John Levi Martin’s parallel arguments in a previous post; link.)

Self-selection and “liberal” professions

Neil Gross stirred up quite a storm a few years ago when he released a body of research findings on the political complexion of university professors. Conservative organizations and pundits have made hay by denouncing the supposed liberal bias of universities. Gross opens his most recent book, Why Are Professors Liberal and Why Do Conservatives Care?, by confirming that multiple measures demonstrate that faculty members are substantially more likely to be liberal than the general population. But he believes this is both indisputable and uninteresting. What is most interesting for a sociologist is the “why” — how can we explain the skewed distribution of political identities across this professional group?

Gross positions his work as falling within a tradition of research that includes two major survey-based studies of the past 75 years: Lazarsfeld and Thielens (The Academic Mind: Social Scientists in a Time of Crisis) and Everett Carll Ladd and Seymour Martin Lipset (The Divided Academy: Professors and Politics). He also finds creative ways of incorporating the GSS surveys. In addition, Gross and Solon Simmons carried out their own substantial survey, the Politics of the American Professoriate survey (PAP).

It will be noted that this is a problem that calls out for a social-mechanisms explanation, and a fairly simple one at that. Suppose marbles of two colors in equal numbers are raining down in a thoroughly mixed stream on a pair of urns. Occasionally a marble falls into one urn or the other. When we count the marbles in both urns we find that 65% of the marbles in the urn on the left are green, whereas the larger urn on the right holds 50% of each color. Why is there a higher percentage of green marbles in the urn on the left? There are only a few possibilities, each corresponding to a different mechanism. (i) The marbles have a degree of choice about which urn they enter, and green marbles have a preference for the left urn. (Or a variant: red marbles have a preference for avoiding the left urn.) We could call this “selection bias by chooser”. This is different from two other possible mechanisms: (ii) there is a filter on the left urn, letting green marbles in and bumping red marbles out (“selection bias by receiver”); or (iii) marbles have a slight tendency to shift color from red to green when they enter the left urn (“environment transformation”).
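The three mechanisms can be made concrete with a short simulation; the particular probabilities (0.65, 0.35, 0.3) are arbitrary illustrative choices, and the sketch tracks only the composition of the left urn.

```python
import random

def simulate(n=100_000, mechanism="chooser", seed=0):
    """Drop n marbles (half green, half red) toward two urns and
    return the fraction of green marbles in the left urn."""
    rng = random.Random(seed)
    left = []
    for _ in range(n):
        color = rng.choice(["green", "red"])
        if mechanism == "chooser":
            # (i) green marbles prefer the left urn
            goes_left = rng.random() < (0.65 if color == "green" else 0.5)
        elif mechanism == "receiver":
            # (ii) the left urn filters: green admitted more readily than red
            goes_left = rng.random() < (0.65 if color == "green" else 0.35)
        else:  # "transformation"
            # (iii) entry is unbiased, but some red marbles turn green inside
            goes_left = rng.random() < 0.5
            if goes_left and color == "red" and rng.random() < 0.3:
                color = "green"
        if goes_left:
            left.append(color)
    return left.count("green") / len(left)

for m in ("chooser", "receiver", "transformation"):
    print(m, round(simulate(mechanism=m), 3))
```

All three settings yield a left urn that is disproportionately green, so the observed end-state distribution alone cannot tell us which mechanism is at work — which is why Gross needs additional evidence to argue for self-selection over filtering or transformation.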

Fundamentally the question is analogous to the “nature-nurture” conundrum in the study of personality. Are universities more liberal than average occupations because they cultivate liberal thinking (nurture)? Or are they more liberal because of some sort of selection mechanism, drawing liberal members more frequently than chance (nature), and liberal-tending new faculty members bring their political values with them? Gross makes a powerful empirical case for the latter possibility.

I develop an alternative account: for historical reasons the professoriate has developed such a strong reputation for liberalism that smart young liberals today are apt to think of academic work as something that might be appropriate and suitable for them to pursue, whereas smart young conservatives see academe as foreign territory and embark on other career paths. (p. 105)

It might be speculated that this distribution exists because the faculty selection process is biased — conservative candidates are turned away. Gross’s explanation is different. He draws on a strong literature on gendered occupations which finds that the reputation of a profession has a powerful influence on girls and women as they make educational and career choices. Gross extends this reasoning to liberals and conservatives contemplating an academic career: the profession is “typed” as a particularly good career for more liberal young people, and young people make their career choices in light of that reputation. Essentially he explains the political composition of university faculties as the consequence of a distinctly skewed process of self-selection: universities are publicly perceived as hospitable to people on the left, and liberal-leaning young people are drawn to the career because of that reputation.

The question of political bias within universities is treated using an interesting experiment that Gross and colleagues Joseph Ma and Ethan Fosse conducted to test whether conservative students have a harder time gaining entrance to graduate programs (164). The project involved sending fictitious letters of inquiry to directors of graduate studies in leading departments in a wide range of disciplines. The letters indicated the same level of preparation for the field. One batch indicated no political information about the student, while the other two batches included the phrases “Worked for the McCain campaign” or “Worked for the Kerry campaign.” Responses were rated according to the degree of encouragement or discouragement they expressed. The experiment is ingenious but it indicates “no result”. There is no statistically significant evidence of bias against applicants who self-identify as conservative. (Gross does report a strong negative response from some of the academics whose potentially discriminatory behavior was tested.)

The study should count as reasonably strong evidence that most social scientists and humanists in leading departments work hard to keep their political feelings and opinions from interfering with their evaluations of academic personnel. (165)

I find Gross’s treatment of this topic to be an exemplary use of quantitative survey data in theoretically informed ways. The PAP survey that Gross initiated (along with colleagues and research assistants) provides substantial new information about the political attitudes and social backgrounds of faculty in the United States. Gross makes deft use of this data source (as well as several others) to evaluate hypotheses about what causes the distribution of political profiles among faculty. Gross’s question is about both groups and individuals, and the survey data helps to evaluate answers to both. And, incidentally, this appears to be a sterling example of the kind of theoretical work that John Levi Martin calls for (link): careful stipulation of various explanatory theories, accompanied by a rigorous effort to evaluate them using appropriate empirical data.

For anyone who cares about universities as places of learning for undergraduate students, Gross’s book is an encouraging one. He provides a clear and convincing explanation of the mechanisms through which a non-random distribution of political attitudes winds up in the population of university and college professors, and he provides strong evidence against the idea that universities and professors exercise discriminatory bias against newcomers who have different political identities. And finally, Gross’s analysis and my own experience suggest that professors generally conform to Weber’s ethic when it comes to proselytizing for one’s own convictions in the classroom: the function and duty of the professor is to help students think for themselves (Max Weber, “The Meaning of Ethical Neutrality,” The Methodology of the Social Sciences).

There is an interesting set of replies to Gross in Society here.

Modeling organizational performance

Organizations do things that we care about. They are generally at least partially designed in order to bring about certain kinds of outcomes, and managers continue to tinker with them to improve them. And we have very good reasons for wanting to be able to measure their performance, to introduce innovations that improve performance, and to measure the improvements that result. These points are true whether we have in mind examples drawn from business, government, or contentious politics.

We might offer a highly abstract description of an organization as an ensemble of —

  • actors and motives
  • rules of action for the actors
  • authority relations
  • activities
  • inputs
  • outputs

Abstractly we can define the quality of the organization in terms of the efficiency and effectiveness with which it brings about its intended outcomes. Consider an organization designed to recruit teenagers into the Peace Corps. The organization requires a certain level of input (money and staff time) to produce a given level of output (let’s say 100 adequately qualified recruits). If two organizations are intended to perform this same function but one requires twice the labor time and twice the inputs of the second, we can say that the first organization is inferior to the second on grounds of efficiency. If one organization gives rise to recruits who are more likely to persist through training, we can say this organization is superior on grounds of effectiveness.
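The efficiency/effectiveness distinction can be expressed in a few lines of code; the organizations and all the numbers below are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class RecruitingOrg:
    """Inputs and outputs of a hypothetical recruiting organization."""
    budget: float            # money consumed per recruiting cycle
    staff_hours: float       # labor time consumed per cycle
    recruits: int            # adequately qualified recruits produced
    persistence_rate: float  # fraction of recruits who persist through training

    @property
    def cost_per_recruit(self) -> float:
        return self.budget / self.recruits

# Two organizations performing the same function (illustrative numbers):
org_a = RecruitingOrg(budget=200_000, staff_hours=4_000, recruits=100, persistence_rate=0.60)
org_b = RecruitingOrg(budget=100_000, staff_hours=2_000, recruits=100, persistence_rate=0.75)

# Efficiency: the same output from half the inputs
assert org_b.cost_per_recruit < org_a.cost_per_recruit
# Effectiveness: B's recruits are more likely to persist through training
assert org_b.persistence_rate > org_a.persistence_rate
```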

We might represent the activities, inputs, and outputs of the organization in a diagram analogous to a diagram of an industrial process. Here we would represent the functional components within the structure in terms of their inputs and outputs, and we would represent the workings of the organization as a flow of “products” from one component to another. Consider, for example, this diagram of the production process of a Chinese electrical device factory.

We can imagine a highly analogous diagram for the flow of patients through the various service areas of a hospital.

We can readily introduce evaluative characteristics for such a process: for example, efficiency, productivity, quality of product, level of work satisfaction, and profitability. And now we can give a specific definition to the idea of process improvement: it is an innovation to the process that reduces costs at the same level of output (quality and quantity) or improves output at a constant level of cost.

We are now in a position to ask the question of possible “improvements” in the process: are there innovations in the process that will reduce waste, reduce costs, increase quality of output, improve job satisfaction for workers, or increase profitability? Can we rearrange linkages within the process that reduce costs or increase quality? Can we redesign component processes to save energy, time, or inputs? Can we identify factors that lead to worker dissatisfaction and ameliorate them?

An industrial process like this one can be represented with off-the-shelf simulation software (for example, SIMUL8). Each component process is assigned a set of technical characteristics (raw material needs, time of assembly, energy requirements, labor time), and we can run the simulation to measure inputs (raw materials, energy, labor), outputs, and wastage. We can then experiment with various innovations in the process by tweaking the linkages among the components and modifying the components in ways that affect their technical characteristics. These simulation systems are widely used in manufacturing industry, and they have been shown to contribute to rapid design and re-design of complex manufacturing processes so as to create workable industrial solutions. (Here is an Autodesk simulation video of a production process simulation.)
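A minimal sketch of this kind of "what if" experiment can be written in a few lines of Python. This is not SIMUL8's API; the component names, times, and costs are invented for illustration:

```python
# Toy process-flow experiment in the spirit of industrial simulation
# packages: assign each component technical characteristics, run the line,
# then compare a baseline design against a proposed innovation.

from dataclasses import dataclass

@dataclass
class Component:
    name: str
    minutes_per_unit: float
    cost_per_unit: float

def run_line(components, units):
    """Push `units` through the components in series; return totals."""
    minutes = sum(c.minutes_per_unit for c in components) * units
    cost = sum(c.cost_per_unit for c in components) * units
    return {"minutes": minutes, "cost": cost}

baseline = [Component("stamping", 2.0, 0.50),
            Component("assembly", 5.0, 1.20),
            Component("testing", 1.5, 0.30)]

# Innovation: a redesigned assembly step, faster but slightly costlier.
revised = [Component("stamping", 2.0, 0.50),
           Component("assembly", 3.5, 1.30),
           Component("testing", 1.5, 0.30)]

before = run_line(baseline, 100)
after = run_line(revised, 100)
print(before, after)  # compare total minutes and cost of the two designs
```

The experiment shows the trade-off directly: the revised line saves 150 minutes per 100 units at an extra cost of about $10, and the same comparison can be rerun for any tweak to a component's characteristics.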

Is it possible to treat social organizations in an analogous way? Can a hospital, a labor union, a tax bureaucracy, or a university be represented as a flow of activities and transformations? There are classes of organizations where this approach seems to work well. It would seem that any organization that serves primarily to process information and transactions can be represented in this way. So a hospital fits this framework fairly well: the patient arrives in the ER reception area; information is collected; patient is moved to an ER examining room; doctor evaluates condition and assigns diagnosis; patient receives urgent treatment; patient is assigned to in-patient room; and so forth. By charting out this set of transactions it is possible for an industrial engineer to suggest changes in process to the hospital administrator that will save time, reduce costs, reduce accidents, or improve quality of treatment for the patient.


This description operates at the level of the functional and technical characteristics of the functional components of the system. But it is often important to approach organizations in a more granular way, by examining the behaviors of the individuals whose activities make up the technical characteristics of the component processes. Let's suppose that nursing units 1 and 2 have identical duties, but Unit 1 has a higher rate of hospital-acquired infection than Unit 2. What accounts for the difference? One possibility is that Unit 1 has a lower level of morale among the nurses, leading to a somewhat more careless attitude towards patient treatment. And to understand variations in morale, we need to gather more information about the influences on the working environment as experienced by the two groups of nurses.

Now let us suppose that we are interested in improving the quality of care (reducing hospital-acquired infections) in Unit 1. We need to have a hypothesis about what factors are contributing to the behaviors leading to sub-par care. Using this hypothesis we can design an intervention. For example, we might note that Unit 1 has not yet been renovated and is painted a drab green color, whereas Unit 2 is painted with bright, cheerful colors. If we believe that the color of paint influences mood, we might innovate by repainting Unit 1 and monitoring results. If the infection rate remains high, then we have disconfirmed the paint hypothesis; if it falls, we have provided some (weak) support for the paint hypothesis.

This more micro-level perspective on the performance of organizations suggests a different kind of modeling. Here it seems that it would be possible to construct an agent-based model of the individuals who make up an interconnected space within a complex institution like a hospital. If we represent the actors’ behavioral characteristics in such a way as to bring “concentration on task” into the simulation, we may be able to demonstrate the effects of low morale on patient safety, based on the interactive behaviors of high-morale or low-morale staff.
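A toy version of such an agent-based model might look like the following. The link between morale, concentration on task, and lapse probability is an assumed illustrative function, not an empirically estimated one, and the unit sizes and shift counts are invented:

```python
# Toy agent-based sketch (not a validated hospital model): each nurse has
# a morale level in [0, 1]; morale sets "concentration on task", which in
# turn sets the per-shift probability of a care lapse.

import random

def simulate_unit(n_nurses, morale, shifts, rng):
    """Count lapse events for a unit over many shifts."""
    lapse_prob = 0.06 - 0.05 * morale  # assumed functional form
    lapses = 0
    for _ in range(shifts):
        for _ in range(n_nurses):
            if rng.random() < lapse_prob:
                lapses += 1
    return lapses

rng = random.Random(42)
unit1 = simulate_unit(n_nurses=10, morale=0.3, shifts=1000, rng=rng)  # low morale
unit2 = simulate_unit(n_nurses=10, morale=0.9, shifts=1000, rng=rng)  # high morale
print(unit1, unit2)  # the low-morale unit accumulates far more lapses
```

Even this crude model makes the qualitative point: if concentration on task varies with morale, the difference shows up as a persistent gap in patient-safety outcomes between otherwise identical units.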

Another granular approach is available through the use of general-purpose simulation engines like SimCity to represent the flows and operations of the components of a system. Here is an introduction to the use of SimCity as a way of evaluating the likely consequences of various policy changes; link. Here are several simulations of the economic and demographic effects of mining in Ontario coming from the Social Innovations Simulation project; link.

Finally, there are applied simulation systems based on “discrete event simulation” (DES). Here is a good survey article published in Medical Decision Making describing the application of DES to hospitals; link. The authors describe the approach in these terms:

Discrete event simulation (DES) is a form of computer-based modeling that provides an intuitive and flexible approach to representing complex systems. It has been used in a wide range of health care applications. Most early applications involved analyses of systems with constrained resources, where the general aim was to improve the organization of delivered services. More recently, DES has increasingly been applied to evaluate specific technologies in the context of health technology assessment. The aim of this article is to provide consensus-based guidelines on the application of DES in a health care setting, covering the range of issues to which DES can be applied. The article works through the different stages of the modeling process: structural development, parameter estimation, model implementation, model analysis, and representation and reporting. For each stage, a brief description is provided, followed by consideration of issues that are of particular relevance to the application of DES in a health care setting. Each section contains a number of best practice recommendations that were iterated among the authors, as well as the wider modeling task force.

These simulation methodologies provide one important capability for the institutional designer: they permit the development of "experiments" in which we evaluate the expected consequences of a given innovation or policy change. And they are most applicable in situations where there are queues of users and flows of products. How will the functioning of the emergency room organization in a large hospital change if the registration process — and therefore throughput — is improved? The simulations mentioned here are intended to keep track of the spreading consequences of changes introduced in one or more parts of the system; and, as systems scientists often discover, those consequences are sometimes highly unexpected.
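The queue-and-throughput logic behind such an experiment can be sketched with a toy single-server simulation. This is plain stdlib Python, not a clinical DES package, and the arrival and service times are invented for illustration:

```python
# Toy queueing experiment: patients arrive at the ER and queue for a single
# registration desk; we compare average waits under the current and an
# improved (faster) registration process.

def simulate_queue(arrival_times, service_minutes):
    """Single-server FIFO queue; returns the mean wait before service."""
    free_at = 0.0
    waits = []
    for t in arrival_times:
        start = max(t, free_at)   # wait if the desk is still busy
        waits.append(start - t)
        free_at = start + service_minutes
    return sum(waits) / len(waits)

arrivals = [i * 4.0 for i in range(100)]  # a patient every 4 minutes

slow = simulate_queue(arrivals, service_minutes=5.0)  # desk can't keep up
fast = simulate_queue(arrivals, service_minutes=3.0)  # improved registration

print(round(slow, 1), round(fast, 1))  # waits fall sharply with faster service
```

The experiment illustrates the characteristic nonlinearity of queues: when service time exceeds the arrival interval, waits grow without bound as the queue builds, while a modest improvement that brings service below the arrival rate eliminates waiting entirely.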

These kinds of approaches have been applied to a range of service organizations — banks, restaurants, hospitals. Essentially this is the application of the tools of industrial and systems engineering to certain kinds of social organizations, and the experience in these applications has been positive. A more difficult question is whether these simulation techniques can aid in the effort to assess the functioning of more comprehensive and multi-purpose institutions like universities, police departments, or legislatures.

Kathleen Tierney on disaster and resilience

The fact of large-scale technology failure has come up fairly often in Understanding Society (link, link, link). There are a couple of reasons for this. One is that our society is highly technology-dependent, relying on more and more densely interlinked and concentrated systems of production and delivery that are subject to unexpected but damaging forms of failure. So it is a pressingly important problem for us to have a better understanding of technology failure than we do today. The other reason that examples of technology failure are frequent here is that it seems pretty clear that failures of this kind are generally, at least in part, social and organizational failures, not simply technological failures. So the study of technology failure is a good way of examining the weaknesses and strengths of various organizational forms — from the firm or plant to the vast regulatory agency. I have highlighted the work of Charles Perrow as being especially useful in this context, especially Normal Accidents: Living with High-Risk Technologies and The Next Catastrophe: Reducing Our Vulnerabilities to Natural, Industrial, and Terrorist Disasters.

Kathleen Tierney has studied disasters very extensively, and her recent The Social Roots of Risk: Producing Disasters, Promoting Resilience is an important contribution. Tierney is both an academic and a practitioner; she is an expert on earthquake science and preparedness and serves as director of the Natural Hazards Center at the University of Colorado. The topics of disaster and technology failure are linked; natural disasters (earthquakes, tsunami, hurricanes) are often the cause of ensuing technology failures of enormous magnitude. Here is Tierney's overriding framework of analysis:

The general answer is that disasters of all types occur as a consequence of common sets of social activities and processes that are well understood on the basis of both social science theory and empirical data. Put simply, the organizing idea for this book is that disasters and their impacts are socially produced, and that the forces driving the production of disaster are embedded in the social order itself. As the case studies and research findings discussed throughout the book will show, this is equally true whether the culprit in question is a hurricane, flood, earthquake, or a bursting speculative bubble. The origins of disaster lie not in nature, and not in technology, but rather in the ordinary everyday workings of society itself. (4-5)

This is one of Tierney's key premises — that disasters are socially produced and socially constituted. Her other major theme is the notion of resilience — the idea that certain social characteristics make one set of social arrangements more resilient to harm than another in the face of natural catastrophe. Features of resilience involve —

preexisting, planned, and naturally emerging activities that make societies and communities better able to cope, adapt, and sustain themselves when disasters occur, and also to develop ways of recovering following such events. (5)

Tierney is often drawn to the alliteration of "risk and resilience". "Risk" is the possibility of serious disturbance to the integrity of a system: a compound of the likelihood of a given type of disturbance and the damage that eventuality would create. Here is Tierney's capsule definition:

Risk is commonly conceptualized as the answer to three questions: What can go wrong? How likely is it? And what are the consequences? (11)

"Resilience", by contrast, is a feature of the system in response to such a disturbance. So the concepts of risk and resilience do not operate on the same level. A more apt opposition is fragility and resilience. (Tierney sometimes refers to brittle institutions.) Some institutional arrangements are like glass — a sharp tap and they fall into a mound of shards. Others are more like a starfish — able to recover form and function following even very damaging encounters with the world. Both kinds of systems are subject to risk, and the probability of a given disturbance may be the same in the two instances. The difference between them is how well they recover from the realization of risk: the damage that results from the same disturbance is much greater in a fragile system than in a resilient one. And Tierney makes a crucial point for all of us in the twenty-first century: we need to be exerting ourselves to create social systems and communities that are substantially more resilient than they currently are.

A very important example of non-resilient trends in twenty-first century life is the spread of ultra-tall buildings in global cities. There are a variety of reasons why developers and urban leaders like ultra-tall structures — reasons that largely have to do with prestige. But Tierney points out in expert detail the degree to which these buildings are unreasonably fragile in the face of disaster: they shed vast quantities of glass, they concentrate people and business in a way that invites terrorist attack, and they depend on vulnerable systems of electricity and water that are crucial to their hour-to-hour functioning. A major earthquake in San Francisco has the potential to leave the buildings standing but the populations living within them stranded without light or elevators, and the emergency responders one hundred flights of stairs away from the emergencies they need to confront (63ff.).

The most fundamental and intractable source of hazard for our society that Tierney highlights is the likelihood of failure of government regulatory and safety organizations to carry out their stated missions of protecting the safety and health of the public. Like Perrow in The Next Catastrophe, she finds instance after instance where the public's interest would be best served by a regulation or prohibition of a certain kind of risky activity (residential and commercial development in flood or earthquake zones, for example) but where powerful economic interests (corporations, local developers) have the overwhelming ability to block sensible and prudent regulations in this space. "Economic power on this scale is easily translated into political power, with important consequences for risk buildup" (91). Tierney offers the case of the Japanese nuclear industry as an example of a concentrated and powerful set of organizations that succeeded in creating siting decisions and safety regulations that served their interests rather than the interests of the general public.

As nuclear power emerged as a major source of energy in Japan, communities were essentially bribed into accepting nuclear plants, with the promise of jobs for young workers and support for schools and community projects; also, extensive propaganda efforts were launched…. Then, once government and industry succeeded in getting communities to accept the presence of nuclear plants, the natural tendency was to locate multiple reactors at nuclear sites to achieve economies of scale and to avoid having to repeat costly charm offensives in large numbers of communities. (92)

In Tierney’s view, the problem of regulatory capture by the economically powerful is perhaps the largest obstacle to our ability to create a rational and prudent plan for managing risks in the future (94). (Here is an earlier post on the quiet use of economic power; link.)

The Social Roots of Risk is rich in detail and deeply insightful into the sociology of risk in a large democratic corporation-centered society. The hazards she identifies concerning the failure of our institutions to devise genuinely prudent policies around foreseeable risks (earthquake, hurricane, flood, terrorism, nuclear or chemical plant malfunction, train disaster, …) are deeply alarming. The public and our governments need to absorb these lessons and design for more resilient societies and communities, exactly as Tierney and Perrow argue.
