The second primitive accumulation

One of the more memorable parts of Capital is Marx’s description of the “so-called primitive accumulation of capital” — the historical process where rural people were dispossessed of access to land and forced into industrial employment in cities like Birmingham and Manchester (link). It seems as though we’ve seen another kind of primitive accumulation in the past thirty years — the ruin of well-paid manufacturing jobs based on unionized labor, the disappearance of local retail stores, the extinction of bookstores and locally owned hardware stores, all of which offered a large number of satisfying jobs. We’ve seen a new set of bad choices for displaced workers — McDonald’s servers, Walmart greeters, and Amazon fulfillment workers. And this structural economic change threatens to create a permanent under-class of workers earning just enough to get by.

So what is the future of work and class in advanced economies? Scott Shane’s major investigative story in the New York Times on Amazon’s operations in Baltimore (link) makes for sobering reading on this question. The story documents the intensity, pressure, and stress created for workers in a Baltimore fulfillment center by Amazon’s system of work control. This system depends on real-time monitoring of worker performance, with automatic firing for workers who fall short on speed and accuracy after two warnings. Other outlets have highlighted the health and safety problems created by the Amazon system, including this piece on worker safety in the Atlantic by Will Evans; link. It is a nightmarish description of a work environment, and hundreds of thousands of workers are employed under these conditions.

Imagine the difference you would experience as a worker in the hardware store mentioned in the New York Times story (driven out of business by online competition) and as a worker in an Amazon fulfillment center. In the hardware store you provide value to the business and the customers; you have social interaction with your fellow workers, your boss, and the customers; you work in a human-scale enterprise that actually cares whether you live or die, whether you are sick or well; and you have a reasonable degree of self-direction in your work. Your expertise in home improvement, tools, and materials is valuable to the customers, which brings them back for the next project, and it is valuable to you as well. You have the satisfaction of having knowledge and skills that make a difference in other people’s lives. In the fulfillment center your every move is digitally monitored over the course of your 10-hour shift, and if you fall short in productivity or quality after two warnings, you are fired. You have no meaningful relationships with fellow workers — how can you, with the digital quotas you must fulfill every minute, every hour, every day? And you have no — literally no — satisfaction and fulfillment as a human being in your work. The only value of the work is the $15 per hour that you are paid; and yet it is not enough to support you or your family (about $30,000 per year). As technology writer Amy Webb of the Future Today Institute puts it in the Times article, it’s not that we may be replaced by robots; “it’s that we’ve been relegated to robot status.”

What kind of company is that? It is hard to avoid the idea that it is the purest expression that we have ever seen of the ideal type of a capitalist enterprise: devoted to growth, cost avoidance, process efficiency, use of technology, labor control, rational management, and strategic and tactical reasoning based solely on business growth and profit-maximizing calculations. It is a Leviathan that neither Hobbes nor Marx could really have visualized. And social wellbeing — of workers, of communities, of country, of the global future — appears to have no role whatsoever in these calculations. The only affirmative values expressed by the company are “serving the consumer” and being a super-efficient business entity.

What is most worrisome about the Amazon employment philosophy is its single-minded focus on “worker efficiency” at every level, using strict monitoring techniques and quotas to enforce efficient work. And the ability to monitor is increased enormously by the use of technology — sensors, cameras, and software that track the worker’s every movement. It is the apotheosis of F.W. Taylor’s theories of “scientific management” and time-motion studies from the 1900s. Fundamentally Taylor regarded the worker as a machine-like component of the manufacturing process, whose motions needed to be specified and monitored so as to bring about the most efficient possible process. And, as commentators of many ideological stripes have observed, this is a fundamentally dehumanizing view of labor and the worker. This seems to be precisely the ideal model adopted by Amazon, not only in its fulfillment centers but also for its delivery drivers, its professional staff, and every other segment of the workforce Amazon can capture.

Business and technology historian David Hounshell presciently noticed the resurgence of Taylorism in a 1988 Harvard Business Review article on “modern manufacturing”; link. (This was well before the advent of online business and technology-based mega-companies.) Here are a few relevant paragraphs from his piece:

Rather than seeing workers as assets to be nurtured and developed, manufacturing companies have often viewed them as objects to be manipulated or as burdens to be borne. And the science of manufacturing has taken its toll. Where workers were not deskilled through extreme divisions of labor, they were often displaced by machinery. For many companies, the ideal factory has been — and continues to be — a totally automated, workerless facility. 

Now in the wake of the eroding competitive position of U.S. manufacturing companies, is it time for an end to Taylor’s management tradition? The books answer in the affirmative, calling for the institution of a less mechanistic, less authoritarian, less functionally divided approach to manufacturing. Dynamic Manufacturing focuses explicitly on repudiating Taylorism, which it takes to be a system of “command and control.” American Business: A Two-Minute Warning is written in a more popular vein, but characterizes U.S. manufacturing methods and the underlying mind-set of manufacturing managers in unmistakably similar ways. Taylorism is the villain and the anachronism. 

Predictably, both books arrive at their diagnoses and prescriptions through their respective evaluations of the “Japanese miracle.” Whereas U.S. manufacturing is rigid and hierarchical, Japanese manufacturing is flexible, agile, organic, and holistic. In the new competitive environment — which favors the company that can continually generate new, high-quality products — the Japanese are more responsive. They will continue to dominate until U.S. manufacturers develop manufacturing units that are, in Hayes, Wheelwright, and Clark’s words, “dynamic learning organizations.” Their book is intended as a primer. (link)

Plainly the more constructive ideas from human resources theory about worker motivation, knowledge, and creativity play no role in Amazon’s thinking about the workplace. And this implies a grim future for work — not only in this company, but in the many other companies that emulate the workplace model pioneered by Amazon.

The abuses of the first fifty years of industrial capitalism eventually came to an end through a powerful union movement. Workers in railroads, textiles, steel, and the automobile industry eventually succeeded in creating union organizations that were able to effectively represent their interests in the workplace. So where is the Amazon worker’s ability to resist? The New York Times story (link) makes it clear that individual workers have almost no ability to influence Amazon’s practices. They can choose not to work for Amazon, but they can’t join a union, because Amazon has effectively resisted unionization. And in places like Baltimore and other cities where Amazon is hiring, the other job choices are even worse (even lower paid, if they exist at all). Amazon makes a great deal of money on their work, and it drives its signature initiatives (one-day delivery) at a Chaplin-esque pace. But workers have very little ability to push the workplace towards a more human scale, towards a workplace where their positive human capacities find fulfillment. An Amazon fulfillment center is anything but that when it comes to the lives of the workers who make it run.

Is there a better philosophy that Amazon might adopt for its work environments? Yes. It is a framework that places worker wellbeing at the same level as efficiency, “1-day delivery,” and profitability. It is an approach that gives greater flexibility to shop-floor-level workers, and relaxes to some degree the ever-rising quotas for piece work per minute. It is an approach that sets workplace expectations in a way that fully considers the safety, stress, and health of the workers. It is an approach that embodies genuine respect and concern for its workers — not as a public relations initiative, but as a guiding philosophy of the workplace.

There is a hard question and a harder question posed by this idea, however. Is there any reason to think that Amazon will ever evolve in this more humane direction? And harder, is there any reason to think that any large modern corporation can embody these values? Based on the current behavior of Amazon as a company, from top to bottom, the answer to the first question is “no, not unless workers gain real power in the workplace through unionization or some other form of representation in production decisions.” And to the second question, a qualified yes: “yes, a more humane workplace is possible, if there is broad involvement in business decisions by workers as well as shareholders and top executives.” But this too requires a resurgence of some form of organized labor — which our politics of the past 20 years have discouraged at every turn.

Or to quote Oliver Goldsmith in The Deserted Village (1770):

Ill fares the land, to hastening ills a prey,
Where wealth accumulates, and men decay.
Princes and lords may flourish, or may fade;
A breath can make them, as a breath has made:
But a bold peasantry, their country’s pride,
When once destroy’d, can never be supplied.

So where did the dispossessed wind up in nineteenth century Britain? Here is how Engels described the social consequences of this “primitive accumulation” for the working people of Britain in his book, The Condition of the Working Class in England:

It is only when [the observer] has visited the slums of this great city that it dawns upon him that the inhabitants of modern London have had to sacrifice so much that is best in human nature in order to create those wonders of civilisation with which their city teems. The vast majority of Londoners have had to let so many of their potential creative faculties lie dormant, stunted and unused in order that a small, closely-knit group of their fellow citizens could develop to the full the qualities with which nature has endowed them. (30)

This passage, written in 1845, could with minor changes of detail describe the situation of Amazon workers today. “The vast majority … have had to let so many of their potential creative faculties lie dormant, stunted and unused in order that a small, closely-knit group of their fellow citizens could develop to the full the qualities with which nature has endowed them.”

And what about income and standard of living? The graph of median US income by quintile, in constant 2018 dollars, tells a very stark story. Since 1967 only the top quintile of household income has shown significant growth over this timeframe of more than fifty years, and the top 5% of households shows the greatest increase of any group. 80% of US households are barely better off today than they were in 1967, whereas the top 5% of households have increased their incomes by almost 250% in real terms. This has a very clear, unmistakable implication: working people, including service workers, industrial workers, and most professionals, have received a declining share of the economic product of the nation. Amazon warehouse workers fall in the second-lowest quintile (the poorest 21-40%). (It would be very interesting to have a time series of Amazon’s wage bill for blue-collar and white-collar wages, excluding top management, as a fraction of company revenues and net revenues since 2005.)
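For readers who want to reproduce this kind of comparison, here is a minimal sketch of the calculation. The figures in the demonstration are made-up placeholders chosen only to echo the claims above; substituting the actual constant-dollar series behind the graph would give the real percentages.

```python
def real_growth_pct(income_start: float, income_end: float) -> float:
    """Percentage change between two incomes expressed in the same constant dollars."""
    return 100 * (income_end / income_start - 1)

# Placeholder values for illustration only -- not the actual published figures.
examples = [
    ("second quintile (illustrative)", 30_000, 34_000),
    ("top 5 percent (illustrative)", 120_000, 420_000),
]
for group, start_1967, end_2018 in examples:
    print(f"{group}: {real_growth_pct(start_1967, end_2018):.0f}% real growth, 1967-2018")
```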

Here is a relevant post on the possibilities created for a more fair industrial society by the institution of worker-owned enterprises (link), and here is a post on the European system of workers councils (link), a system that gives workers greater input into decisions about operations and work conditions on the shop floor.

Organizations as open systems

Key to understanding the “ontology of government” is the empirical and theoretical challenge of understanding how organizations work. The activities of government encompass organizations across a wide range of scales, from the local office of the Department of Motor Vehicles (40 employees) to the Department of Defense (861,000 civilian employees). Having the best understanding possible of how organizations work and fail is crucial to understanding the workings of government.

I have given substantial attention to the theory of strategic action fields as a basis for understanding organizations in previous posts (link, link). The basic idea in that approach is that organizations are a bit like social movements, with active coalition-building, conflicting goals, and strategic jockeying making up much of the substantive behavior of the organization. It is significant that organizational theory as a field has moved in this direction in the past fifteen years or so as well. A good example is Scott and Davis, Organizations and Organizing: Rational, Natural and Open System Perspectives (2007). Their book is intended as a “state of the art” textbook in the field of organizational studies. And the title expresses some of the shifts that have taken place in the field since the work of March, Simon, and Perrow (link, link). The word “organizing” in the title signals the idea that organizations are no longer looked at as static structures within which actors carry out well-defined roles, but are instead seen as dynamic processes in which active efforts by leaders, managers, and employees define goals and strategies and work to carry them out. And the “open system” phrase highlights the point that organizations always exist and function within a broader environment — political constraints, economic forces, public opinion, technological innovation, other organizations, and, today, climate change and environmental disaster.

Organizations themselves exist only as a complex set of social processes, some of which reproduce existing modes of behavior and others that serve to challenge, undermine, contradict, and transform current routines. Individual actors are constrained by, make use of, and modify existing structures. (20)

Most analysts have conceived of organizations as social structures created by individuals to support the collaborative pursuit of specified goals. Given this conception, all organizations confront a number of common problems: all must define (and redefine) their objectives; all must induce participants to contribute services; all must control and coordinate these contributions; resources must be garnered from the environment and products or services dispensed; participants must be selected, trained, and replaced; and some sort of working accommodation with the neighbors must be achieved. (23)

Scott and Davis analyze the field of organizational studies in several dimensions: sector (for-profit, public, non-profit), levels of analysis (social psychological level, organizational level, ecological level), and theoretical perspective. They emphasize several key “ontological” elements that any theory of organizations needs to address: the environment in which an organization functions; the strategy and goals of the organization and its powerful actors; the features of work and technology chosen by the organization; the features of formal organization that have been codified (human resources, job design, organizational structure); the elements of “informal organization” that exist in the entity (culture, social networks); and the people of the organization.

They describe three theoretical frameworks through which organizational theories have attempted to approach the empirical analysis of organizations. First, the rational framework:

Organizations are collectivities oriented to the pursuit of relatively specific goals. They are “purposeful” in the sense that the activities and interactions of participants are coordinated to achieve specified goals….. Organizations are collectivities that exhibit a relatively high degree of formalization. The cooperation among participants is “conscious” and “deliberate”; the structure of relations is made explicit. (38)

From the rational system perspective, organizations are instruments designed to attain specified goals. How blunt or fine an instrument they are depends on many factors that are summarized by the concept of rationality of structure. The term rationality in this context is used in the narrow sense of technical or functional rationality (Mannheim, 1950 trans.: 53) and refers to the extent to which a series of actions is organized in such a way as to lead to predetermined goals with maximum efficiency. (45)

Here is a description of the natural-systems framework:

Organizations are collectivities whose participants are pursuing multiple interests, both disparate and common, but who recognize the value of perpetuating the organization as an important resource. The natural system view emphasizes the common attributes that organizations share with all social collectivities. (39)

Organizational goals and their relation to the behavior of participants are much more problematic for the natural than the rational system theorist. This is largely because natural system analysts pay more attention to behavior and hence worry more about the complex interconnections between the normative and the behavioral structures of organizations. Two general themes characterize their views of organizational goals. First, there is frequently a disparity between the stated and the “real” goals pursued by organizations—between the professed or official goals that are announced and the actual or operative goals that can be observed to govern the activities of participants. Second, natural system analysts emphasize that even when the stated goals are actually being pursued, they are never the only goals governing participants’ behavior. They point out that all organizations must pursue support or “maintenance” goals in addition to their output goals (Gross, 1968; Perrow, 1970:135). No organization can devote its full resources to producing products or services; each must expend energies maintaining itself. (67)

And the “open-system” definition:

From the open system perspective, environments shape, support, and infiltrate organizations. Connections with “external” elements can be more critical than those among “internal” components; indeed, for many functions the distinction between organization and environment is revealed to be shifting, ambiguous, and arbitrary…. Organizations are congeries of interdependent flows and activities linking shifting coalitions of participants embedded in wider material-resource and institutional environments.  (40)

(Note that the natural-system and “open-system” definitions are very consistent with the strategic-action-field approach.)

Scott and Davis also provide a useful table summarizing the three approaches to organizational studies.

An important characteristic of recent organizational theory has to do with the way that theorists think about the actors within organizations. Instead of looking at individual behavior within an organization as being fundamentally rational and goal-directed, primarily responsive to incentives and punishments, organizational theorists have come to pay more attention to the non-rational components of organizational behavior — values, cultural affinities, cognitive frameworks and expectations.

This emphasis on culture and mental frameworks leads to another important shift in next-generation ideas about organizations: attention to the informal practices, norms, and behaviors that exist within organizations. Rather than looking at an organization as a rational structure implementing mission and strategy, contemporary organization theory emphasizes that informal practices, norms, and cultural expectations are ineliminable parts of organizational behavior. Here is a good description of the concept of culture provided by Scott and Davis in the context of organizations:

Culture describes the pattern of values, beliefs, and expectations more or less shared by the organization’s members. Schein (1992) analyzes culture in terms of underlying assumptions about the organization’s relationship to its environment (that is, what business are we in, and why); the nature of reality and truth (how do we decide which interpretations of information and events are correct, and how do we make decisions); the nature of human nature (are people basically lazy or industrious, fixed or malleable); the nature of human activity (what are the “right” things to do, and what is the best way to influence human action); and the nature of human relationships (should people relate as competitors or cooperators, individualists or collaborators). These components hang together as a more-or-less coherent theory that guides the organization’s more formalized policies and strategies. Of course, the extent to which these elements are “shared” or even coherent within a culture is likely to be highly contentious (see Martin, 2002)—there can be subcultures and even countercultures within an organization. (33)

Also of interest is Scott’s earlier book Institutions and Organizations: Ideas, Interests, and Identities, which first appeared in 1995 and is now in its 4th edition (2014). Scott looks at organizations as a particular kind of institution, with differentiating characteristics but commonalities as well. The IBM Corporation is an organization; the practice of youth soccer in the United States is an institution; but both have features in common. In some contexts, however, he appears to distinguish between institutions and organizations, with institutions constituting the larger normative, regulative, and opportunity-creating environment within which organizations emerge.

Scott opens with a series of crucial questions about organizations — questions for which we need answers if we want to know how organizations work, what confers stability upon them, and why and how they change. Out of a long list of questions, these seem particularly important for our purposes here: “How are we to regard behavior in organizational settings? Does it reflect the pursuit of rational interests and the exercise of conscious choice, or is it primarily shaped by conventions, routines, and habits?” “Why do individuals and organizations conform to institutions? Is it because they are rewarded for doing so, because they believe they are morally obligated to obey, or because they can conceive of no other way of behaving?” “Why is the behavior of organizational participants often observed to depart from the formal rules and stated goals of the organization?” “Do control systems function only when they are associated with incentives … or are other processes sometimes at work?” “How do differences in cultural beliefs shape the nature and operation of organizations?” (Introduction).

Scott and Davis’s work is of particular interest here because it supports analysis of a key question I’ve pursued over the past year: how does government work, and what ontological assumptions do we need to make in order to better understand the successes and failures of government action? What I have called organizational dysfunction in earlier posts (link, link) finds a very comfortable home in the theoretical spaces created by the intellectual frameworks of organizational studies described by Scott and Davis.

Personalized power at the local level

How does government work? We often understand this question as one involving the institutions and actors within the Federal government. But there is a different zone of government and politics that is also very important in public life in the United States: the practical politics and exercise of power at the state and local levels.

Here is an earlier post that addresses some of these issues as well; link. There I present three scenarios for how our democracy works: the ideal case, the “not-so-ideal” case, and the “nightmare” case:

The Nightmare Scenario: Elected officials have no sincere adherence to the public good; they pursue their own private and political interests through all the powers available to them. Elected officials are sometimes overtly corruptible, accepting significant gifts in exchange for official performance. Elected officials are intimidated by the power of private interests (corporations) to fund electoral opposition to their re-election. Regulatory agencies are dominated by the industries they regulate; independent commissioners are forced out of office; and regulations are toothless when it comes to environmental protection, wilderness protection, health and safety in the workplace, and food safety. Lobbyists for special interests and corporations have almost unrestricted access to legislators and regulators, and are generally able to achieve their goals.

This is the nightmare scenario if one cares about democracy, because it implies that the apparatus of government is essentially controlled by private interests rather than the common good and the broad interests of society as a whole. It isn’t “pluralism”, because there are many important social interests not represented in this system in any meaningful way: poor people, non-unionized workers, people without health insurance, inner-city youth, the environment, people exposed to toxic waste, …

 If anything, personal networks of power and influence appear to be of even greater importance at this level of government than at the Federal level.

So how does personal power work at the local level? Power within a democracy is gained and wielded through a variety of means: holding office within an important institution; marshaling support from a political party; possessing a network of powerful supporters in business, labor, and advocacy groups; securing access to significant sources of political funding; and other mechanisms besides. Mayors, governors, and county executives have powers of appointment to reward or punish their supporters and competitors; they have the ability to influence purchasing and other economic levers of the municipality; and they have favors to trade with legislators.

Essentially the question to consider here is how power is acquired, exercised, and maintained by a few powerful leaders in state, county, and city, and what are the barely-visible lines through which these power relations are implemented and maintained. This used to be called “machine politics,” but as Jessica Trounstine demonstrates in Political Monopolies in American Cities: The Rise and Fall of Bosses and Reformers, the phenomenon is broader than Tammany Hall and the mayor-boss politics of the nineteenth century through Mayor Daley’s reign in Chicago. The term Trounstine prefers is “political monopolies”:

I argue that it is not whether a government is machine or reform that determines its propensity to represent the people, but rather its success at stacking the deck in its favor. When political coalitions successfully limit the probability that they will be defeated over the long term — when they eliminate effective competition — they achieve a political monopoly. In these circumstances the governing coalition gains the freedom to be responsive to a narrow segment of the electorate at the expense of the broader community. (KL 140)

What are the levers of influence available to a politician in state and local government that permit some executives to achieve monopoly power? How do mayors, county executives, and political party leaders exercise power over the decisions that are to be made? Once they have executive power they are able to reward friends and punish enemies through appointments to desirable jobs, through favorable access to government contracts (corrupt behavior!), through the power of their Rolodexes (their networks of relationships with other powerful people), through their influence on political party decision-making, through the power of some of their allies (labor unions, business associations, corporations), and through their ability to influence the flow of campaign funding. They have favors to dispense and they have punishments they can dole out.

Consider Southeast Michigan as an interesting example. Michigan’s largest counties have a history of long-term “monopoly” leadership. Wayne County was led for 15 years by Ed McNamara, and Oakland County was led by L. Brooks Patterson; both men wielded a great deal of power in their offices during their tenure. Neither was seriously challenged by strong competing candidates, and Patterson died in office at the age of 80. Some of the levers of power in Wayne County came to light during a corruption investigation in 2011. Below are links to several 2011 stories in MLive on the details of this controversy involving the Wayne County Executive and the Airport Authority Board.

Labor unions have a great deal of influence on the internal politics of the Democratic Party in Michigan. Dudley Buffa’s Union Power and American Democracy: The UAW and the Democratic Party, 1972-83 describes this set of political realities through the 1980s. Buffa shows that the UAW had extraordinary influence in the Democratic Party into the 1980s, and even with the decline in the size and influence of organized labor, it still has virtual veto power over important Democratic Party decisions today.

As noted in many places in Understanding Society, corporations have a great deal of power in political decision-making in the United States. Corporate influence is wielded through effective lobbying, political and political-action-committee contributions, and the “social capital” of networks of powerful individuals. (Just consider the influence of Boeing on the actions of the FAA or the influence of the nuclear industry on the actions of the NRC.) G. William Domhoff (Who Rules America? Challenges to Corporate and Class Dominance) provided a classic treatment of the influence of corporate and business elites in the sphere of political power in the United States. He has also created a very useful website dedicated to helping other researchers discover the networks of power in other settings (link). Senator Sheldon Whitehouse and Melanie Wachtell Stinnett provide a more contemporary overview of the power that businesses have in American politics in Captured: The Corporate Infiltration of American Democracy. Here is a passage from the book:

When I speak of corporate power in politics, let me be very clear: I do not mean just the activities of the incorporated entities themselves. The billionaire owners of corporations are often actively engaged in battle to expand the influence of the corporations that give them their power and their wealth. Front groups and lobbying groups are often the ground troops when corporate powers don’t want to get their own hands dirty or when they want to institutionalize their influence. So-called philanthropic foundations are often the proxies for billionaire families who want influence and who launch these tools. (kl 214)

Contributors to Corporations and American Democracy provide a detailed account of the legal and political history through which corporations came to have such extensive legal rights in the United States.

Business executives too have a great deal of influence on the Michigan legislature. Here is a Crain’s Detroit Business assessment of the top influencers in Lansing, “Michigan’s top power players as Lansing insiders see them — and how they wield that influence” (link). Top influencers in the business community, according to the Crain’s article, include Dan Gilbert, chairman of Quicken Loans Inc., Daniel Loepp, president and CEO of Blue Cross Blue Shield of Michigan, Rich Studley, CEO of Michigan Chamber of Commerce, Patti Poppe, CEO of Consumers Energy Co., and Mary Barra, CEO of General Motors Co. Most of these individuals are members of the state’s leading business organization, Business Leaders for Michigan (link). Collectively and individually these business leaders have a great deal of influence on the elected officials of the state.

Finally, elected officials themselves sometimes act in direct self-interest, either electoral or financial, and corruption is a recurring issue in local and state government in many states. Detroit’s mayor Kwame Kilpatrick, a string of Illinois governors, and other elected officials throughout the country were all convicted of corrupt actions leading to personal gain (link).

These kinds of influence and actions underline the extensive and anti-democratic role that a range of political actors play within the decision-making and rule-setting of local government: monopoly-holding political executives, political party officials, big business and propertied interests, labor unions, and special advocacy groups. It would be interesting to put together a scorecard of issues of interest to business, labor unions, and environmental groups, and see how often each constituency prevails. It is suggestive about the relative power of these various actors that the two issues of the greatest interest to the business community in Michigan in recent years, repeal of the Michigan Business Tax and passage of “Freedom to Work” legislation, were both successful. (Here is an earlier post on the business tax reform issue in Michigan; link.)

Data for a case study of networks of influence in SE Michigan

Jeff Wattrick, November 2, 2011. “This didn’t start with Turkia Mullin: The inter-connected web of Wayne County politics from Ed McNamara to Renee Axt”, MLive (link)

___________, November 4, 2011: “Wayne County Executive Bob Ficano replaces top officials, vows to end ‘business as usual'”, MLive (link)

___________, November 7, 2011: “Renee Axt resigns as Chair of Wayne County Airport Authority”, MLive (link)

___________, November 8, 2011: “Almost half of Wayne County voters say Executive Bob Ficano should resign”, MLive (link)

Jim Schaefer and John Wisely. November 15, 2011. “Wayne Co. lawyer who quit is back”. Detroit Free Press. (link)

David Sands. November 15, 2011. “Wayne County Corruption Probe Gathers Speed, Turkia Mullin To Testify”, Huffington Post (link)

Detroit had its own nationally visible political corruption scandal when Mayor Kwame Kilpatrick was charged with multiple counts of racketeering and corruption, for which he was eventually convicted. Stephen Yaccino, October 10, 2013. “Kwame M. Kilpatrick, Former Detroit Mayor, Sentenced to 28 Years in Corruption Case”, New York Times (link).

The internal machinations of Michigan’s political parties with respect to choosing candidates for office reflect the power of major “influencers”. Here is a piece about the choice of candidate for the office of secretary of state in the Democratic Party in 2002: Jack Lessenberry. March 30, 2002. “Austin has uphill fight in Michigan secretary of state race”, Toledo Blade (link).

Electronic Health Records and medical mistakes

Electronic Health Record systems (EHRs) have been broadly implemented by hospitals and health systems around the country as a way of increasing the accuracy, availability, and timeliness of patient health status and treatment information. (These systems are also sometimes called “Digital Medical Records” (DMRs).) They are generally regarded as an important forward step in improving the quality of healthcare. Here is a description of the advantages of Electronic Health Record systems, according to Athena Health:

The advantages of electronic health records in the clinical setting are numerous and important. In the 2012 edition of the Physician Sentiment Index™, published by athenahealth and Sermo, 81% of physicians said they believe EHRs improve access to clinical data. More than two-thirds said an EHR can actually improve patient care.

The use of an electronic health records system offers these clinical advantages:

  • No bulky paper records to store, manage and retrieve
  • Easier access to clinical data
  • The ability to establish and maintain effective clinical workflows
  • Fewer medical errors, improved patient safety and stronger support for clinical decision-making
  • Easier participation in Meaningful Use, Patient-Centered Medical Home (PCMH) and other quality programs, with electronic prompts ensuring that required data is recorded at the point of care
  • The ability to gather and analyze patient data that enables outreach to discrete populations
  • The opportunity to interact seamlessly with affiliated hospitals, clinics, labs and pharmacies

Considering all the advantages of electronic health records, and the rapidly growing electronic interconnectedness of the health care world, even if EHRs had not been mandated by health care reform, their development and eventual ubiquity in the health care industry was inevitable.

And yet, like any software system, EHR systems are capable of creating new errors; and some of those errors can be harmful to patients.

Nancy Leveson is an important expert on software system safety who has written extensively on the challenges of writing highly reliable software in safety-critical applications. Here are a few apt observations from her book Safeware: System Safety and Computers (1995).

Although it might seem that automation would decrease the risk of operator error, the truth is that automation does not remove people from systems — it merely moves them to maintenance and repair functions and to higher-level supervisory control and decision making. The effects of human decisions and actions can then be extremely serious. At the same time, the increased system complexity makes the decision-making process more difficult. (10)

The increased pace of change lessens opportunity to learn from experience. Small-scale and relatively nonhazardous systems can evolve gradually by trial and error. But learning by trial and error is not possible for many modern products and processes because the pace of change is too fast and the penalties of failure are too great. Design and operating procedures must be right the first time when there is potential for a major fire, explosion, or release of toxic materials. (12)

(To the last statement we might add “or harm to hospital patients through incorrect prescriptions or failed transmission of lab results”.)

The safety implications of computers exercising direct control over potentially dangerous processes are obvious. Less obvious are the dangers when … software generated data is used to make safety-critical decisions, … software is used in design analysis, … safety-critical data (such as blood bank data) is stored in computer databases. The FDA has received reports of software errors in medical instruments that led to mixing up patient names and data, as well as reports of incorrect outputs from laboratory and diagnostic instruments (such as patient monitors, electrocardiogram analyzers, and imaging devices). (23)

Automatic control systems [like aircraft autopilots] are designed to cope with the immediate effects of a deviation in the process — they are feedback loops that attempt to maintain a constant system state, and as such, they mask the occurrence of a problem in its early stages. An operator will be aware of such problems only if adequate information to detect them is provided. That such information is often not provided may be the result of the different mental models of the designers and experienced operators, or it may merely reflect financial pressures on designers due to the cost of providing operators with independent information. (117)

One of the cases examined in detail in Safeware is the Therac-25 radiation-therapy device, which, due to a subtle flaw in the software handling treatment data entry, began seriously injuring patients with excessive doses of radiation in 1985-87 (515 ff.). It had operated without incident thousands of times before the first accident.
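The core hazard was a race condition: a safety-critical action keyed to a “data entry complete” signal that could be falsified by concurrent editing. Here is a deliberately simplified sketch of that pattern (not the actual Therac-25 code; all names are invented for illustration).

```python
import threading
import time

# Simplified illustration of a Therac-25-style race condition: the controller
# acts as soon as a "data entry complete" flag is set, while the operator's
# late correction changes the prescription underneath it.

treatment = {"mode": "electron", "dose": 100}   # shared mutable state
entry_complete = threading.Event()

def operator_console():
    treatment["dose"] = 100        # operator enters the prescribed dose
    entry_complete.set()           # flag raised before editing is really finished
    time.sleep(0.01)
    treatment["mode"] = "x-ray"    # late correction; requires a different machine setup

def beam_controller():
    entry_complete.wait()              # sees the flag, not the late edit
    configured = dict(treatment)       # snapshot of possibly stale settings
    time.sleep(0.05)                   # stands in for hardware set-up time
    print("beam fired with", configured)

if __name__ == "__main__":
    c = threading.Thread(target=beam_controller)
    o = threading.Thread(target=operator_console)
    c.start(); o.start()
    o.join(); c.join()
    print("final prescription was", treatment)
```

Run it and the beam is typically “fired” with the stale electron-mode settings while the final prescription says x-ray, the same kind of mismatch between operator intent and machine configuration that Leveson analyzes at length.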

So Leveson gives ample reason to be cautious about the safety implications of DMRs and the “fault pathways” through which their normal functioning might harm patients. What has been the experience so far, now that the healthcare industry has witnessed widespread adoption of DMR systems?

Two specific issues involving EHR errors affecting patient care have been identified in the past several years. The first is in the area of medication errors in the prescribing and administration of drugs, and the second is in the area of the handling and routing of medical test results. Both kinds of error have the potential to harm patients.

Jennifer Bresnick (link) summarizes the results of a report by the Pennsylvania Patient Safety Authority concerning medication errors caused by DMR systems. Medication errors (wrong medication, wrong dose, wrong patient, wrong frequency) can occur at several stages of the clinical process, including prescribing, transcribing, dispensing, and administration. The digital medical record is intended to dramatically reduce all these sources of error, but the Pennsylvania study shows that the DMR can also contribute to errors at each of these stages.

While EHRs and other technologies are intended to reduce errors and improve the safe, standardized, and well-documented delivery of care, some stakeholders believe that digital tools can simply serve to swap one set of mistakes for another. Poor implementation and lackluster user training can leave patients just as vulnerable to medication errors as they were when providers used paper charts, commented Staley Lawes, PharmD, BCPS, Patient Safety Analyst, and Matthew Grissinger, RPh, FISMP, FASCP, Manager of Medication Safety Analysis in the brief. (link)

Part of the blame, according to the Pennsylvania report, belongs to the design of the user interface:

For this reason, it is important to design a system with an intuitive user interface to minimize the risk for human error. Users should be able to easily enter and retrieve data and share information with other healthcare professionals.  When systems are designed without these considerations in mind, patients are subject to undue risk. (link)

The report contains several specific design standards that would improve the safety of the DMR system:

“The interaction between clinician and software is a key component that is to be taken into consideration when trying to improve the safety of health IT,” the report says. “Incident reports can provide valuable information about the types of HIT-related issues that can cause patient harm, and ongoing HIT system surveillance can help in developing medication safety interventions.” (link)

It is clear that ongoing health IT system surveillance and remedial interventions are needed. Efforts to improve health IT safety should include attention to software interoperability, usability, and workflow. The relationship between clinician and software includes complex interactions that must be considered to optimize health IT’s contribution to medication safety.
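To make “medication safety interventions” concrete, here is a minimal sketch of the kind of prescribing-stage check an EHR can run automatically. It is invented for illustration, not drawn from the Pennsylvania report or any vendor’s system, and the dose caps and allergy logic are placeholders, not clinical guidance.

```python
from dataclasses import dataclass

# Hypothetical per-dose caps, for illustration only -- not clinical guidance.
MAX_SINGLE_DOSE_MG = {"drug_a": 25, "drug_b": 10}

@dataclass
class Order:
    patient_id: str
    drug: str
    dose_mg: float
    allergies: tuple = ()

def check_order(order: Order) -> list:
    """Return warnings for a proposed order; an empty list means no issues detected."""
    warnings = []
    cap = MAX_SINGLE_DOSE_MG.get(order.drug)
    if cap is not None and order.dose_mg > cap:
        warnings.append(f"{order.drug}: dose {order.dose_mg} mg exceeds cap of {cap} mg")
    if order.drug in order.allergies:
        warnings.append(f"patient {order.patient_id} has a recorded allergy to {order.drug}")
    return warnings

# A usability point from the report applies here too: how these warnings are
# surfaced to the clinician matters as much as whether they are generated.
print(check_order(Order("p-001", "drug_a", dose_mg=75.0)))
```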

Yackel and Embi (link) treat the problem of test result management errors in “Unintended errors with EHR-based result management: a case series”. Here is their abstract:

Test result management is an integral aspect of quality clinical care and a crucial part of the ambulatory medicine workflow. Correct and timely communication of results to a provider is the necessary first step in ambulatory result management and has been identified as a weakness in many paper-based systems. While electronic health records (EHRs) hold promise for improving the reliability of result management, the complexities involved make this a challenging task. Experience with test result management is reported, four new categories of result management errors identified are outlined, and solutions developed during a 2-year deployment of a commercial EHR are described. Recommendations for improving test result management with EHRs are then given.

They identify test management errors at four stages of the clinical process:

  • results not correctly communicated to provider;
  • results communicated but never received or reviewed by the provider;
  • results reviewed, but appropriate action not recommended by provider;
  • appropriate recommendation made by provider, but action not carried out.

They make several key recommendations for improving the performance of DMR systems in managing test results: develop fault-tolerant systems that automatically report delivery failures; use robust testing to find rare errors that occur both within and between systems; implement tracking mechanisms for critical tests, such as cancer screening and diagnostics; and deliver results directly to patients.
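As a concrete illustration of the first and third of these recommendations, here is a minimal sketch of a result-tracking mechanism that reports delivery failures. It is not drawn from Yackel and Embi or from any actual EHR product; every name and the 48-hour deadline are invented for illustration.

```python
import uuid
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Optional

@dataclass
class ResultDelivery:
    result_id: str
    patient_id: str
    provider_id: str
    sent_at: datetime
    ack_deadline: timedelta
    acknowledged_at: Optional[datetime] = None

class ResultTracker:
    def __init__(self) -> None:
        self.deliveries = {}   # result_id -> ResultDelivery

    def send_result(self, patient_id: str, provider_id: str,
                    ack_deadline_hours: int = 48) -> str:
        """Record that a result was routed to a provider and must be acknowledged."""
        result_id = str(uuid.uuid4())
        self.deliveries[result_id] = ResultDelivery(
            result_id, patient_id, provider_id,
            sent_at=datetime.now(),
            ack_deadline=timedelta(hours=ack_deadline_hours))
        return result_id

    def acknowledge(self, result_id: str) -> None:
        """Provider confirms having reviewed the result."""
        self.deliveries[result_id].acknowledged_at = datetime.now()

    def overdue(self, now: Optional[datetime] = None) -> list:
        """Results never acknowledged before their deadline -- the 'delivery
        failures' that should be reported and escalated automatically."""
        now = now or datetime.now()
        return [d for d in self.deliveries.values()
                if d.acknowledged_at is None and now > d.sent_at + d.ack_deadline]
```

A nightly surveillance job could call overdue() and escalate anything it returns, with shorter deadlines for critical tests such as cancer diagnostics, and a separate channel could release results directly to patients.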

These are just two types of errors that can arise in digital medical record management systems. It is evident that the designers and implementers of DMRs need to take the systems-safety approach described by Nancy Leveson and implement comprehensive safety failure analysis, both in terms of “safety case analysis” (discovery of failure scenarios) and after-event investigation to identify the source of the failure in the software and its human interface.

These examples are not intended to suggest that DMRs are hazardous and should be avoided. On the contrary, the consolidation and convenient presentation of patient information for the provider is clearly an important step forward. But it is crucial that designers and implementers keep safety at the center of their attention, and have a healthy respect for the ways in which automated systems can incorporate incorrect assumptions, can produce unintended interactions among components, and can present information to the human provider in such a confusing way that patient care is harmed.

(Here is a case of treatment involving several different errors conveyed through the digital medical record system that involved attaching biopsy and test results to the wrong patient, leading to the wrong treatment for the patient. It is interesting to read because it reflects some of the complexity identified by Leveson in other system failures.) 

Twelve years of Understanding Society

Understanding Society has now reached its twelfth anniversary of continuous publication. This represents 1,271 posts, and over 1.3 million words. According to Google Blogspot statistics, the blog has gained over 11 million pageviews since 2010. Just over half of visitors came from the United States, Great Britain, and Canada, with the remainder spread out over the rest of the world. The most popular posts are “Lukes on power” (134K) and “What is a social structure?” (124K).

I’ve continued to find writing the blog to be a great way of keeping several different lines of thought and research going. My current interest in “organizational causes of technology failures” has had a large presence in the blog in the past year, with just under half of the posts in 2019 on this topic. Likewise, a lot of the thinking I’ve done on the topic of “a new ontology of government” has unfolded in the blog. Other topic areas include the philosophy of social science, philosophy of technology, and theories of social ontology. A theme that was prominent in 2018 that is not represented in the current year is “Democracy and the politics of hate”, but I’m sure I’ll return to this topic in the coming months because I’ll be teaching a course on this subject in the spring.

I continue to look at academic blogging as a powerful medium for academic communication, creativity, and testing out new ideas. I began in 2007 by describing the blog as “open-source philosophy”, and it still has that character for me. And I continue to believe that my best thinking finds expression in Understanding Society. Every post that I begin starts with an idea or a question that is of interest to me on that particular day, and it almost always leads me to learning something new along the way.

I’ve also looked at the blog as a kind of experiment in exploration of social media for serious academic purposes. Can blogging platforms and social media platforms like Twitter or Facebook contribute to academic progress? So it is worth examining the reach of the blog over time, and the population of readers whom it has touched. The graph of pageviews over time is interesting in this respect.

Traffic to the blog increased in a fairly linear way from the beginning date of the data collection in 2010 through about 2017, and then declined more steeply from 2017 through to the present. (The data points are pageviews per month.) At its peak the blog received about 150K pageviews per month, and it seems to be stabilizing now at about 100K pageviews per month. My impression is that a lot of the variation has to do with unobserved changes in search engine page ranking algorithms, resulting in falling numbers of referrals. The Twitter feed associated with the blog has just over 2,100 followers (@dlittle30), and the Facebook page for the blog registers 12,800 followers. The Facebook page is not a very efficient way of disseminating new posts from the blog, though, because Facebook’s algorithm for placing an item into the feed of a “follower” is extremely selective and opaque. A typical item may be fed into 200-400 of the feeds of the almost 13,000 individuals who have expressed interest in the page.

A surprising statistic is that about 75% of pageviews on the blog came through desktop requests rather than mobile requests (phone and tablet). We tend to think that most web viewing is occurring on mobile devices now, but that does not seem to be the case. Also interesting is that the content of the blog is mirrored to a WordPress platform (www.undsoc.org), and the traffic there is a small fraction of the traffic on the Blogspot platform (1,500 pageviews versus 80,000 pageviews).

So thanks to the readers who keep coming back for more, and thanks as well to those other visitors who come because of an interest in a very specific topic. It’s genuinely rewarding and enjoyable to be connected to an international network of people, young and old, who share an interest in how the social world works.

Soviet nuclear disasters: Kyshtym

The 1986 meltdown of reactor number 4 at the Chernobyl Nuclear Power Plant was the greatest nuclear disaster the world has yet seen. Less well known is the Kyshtym disaster of 1957, which resulted in a massive release of radioactive material in the Eastern Ural region of the Soviet Union. It was a catastrophic underground explosion at a nuclear waste storage facility at the Mayak plutonium production complex. Information about the disaster was tightly restricted by Soviet authorities, with predictably bad consequences.

Zhores Medvedev was one of the first qualified scientists to provide information and hypotheses about the Kyshtym disaster. His book Nuclear Disaster in the Urals was written while he was in exile in Great Britain and appeared in 1980. It is fascinating to learn that his reasoning is based on his study of ecological, biological, and environmental research done by Soviet scientists between 1957 and 1980. Medvedev was able to piece together the extent of contamination and the general nature of the cause of the event from basic information about radioactive contamination in lakes and streams in the region, included incidentally in scientific reports from the period.

It is very interesting to find that scientists in the United States were surprisingly skeptical about Medvedev’s assertions. W. Stratton et al. published a review analysis in Science in 1979 (link) that found Medvedev’s reasoning unpersuasive.

A steam explosion of one tank is not inconceivable but is most improbable, because the heat generation rate from a given amount of fission products is known precisely and is predictable. Means to dissipate this heat would be a part of the design and could be made highly reliable. (423)

They offer an alternative hypothesis about any possible radioactive contamination in the Kyshtym region — the handful of multimegaton nuclear weapons tests conducted by the USSR in the Novaya Zemlya area.

We suggest that the observed data can be satisfied by postulating localized fallout (perhaps with precipitation) from explosion of a large nuclear weapon, or even from more than one explosion, because we have no limits on the length of time that fallout continued. (425)

And they consider weather patterns during the relevant time period to argue that these tests could have been the source of the radiation contamination identified by Medvedev. Novaya Zemlya is over 1000 miles north of Kyshtym (20 degrees of latitude). So fallout from the nuclear tests is a possible alternative hypothesis, but it is farfetched. They conclude:

We can only conclude that, though a radiation release incident may well be supported by the available evidence, the magnitude of the incident may have been grossly exaggerated, the source chosen uncritically, and the dispersal mechanism ignored. Even so we find it hard to believe that an area of this magnitude could become contaminated and the event not discussed in detail or by more than one individual for more than 20 years. (425)

The heart of their skepticism rests on an entirely indefensible assumption: that Soviet science, engineering, and management were entirely capable of designing and implementing a safe system for nuclear waste storage. They were perhaps right about the scientific and engineering capabilities of the Soviet system; but the management systems in place were woefully inadequate. Their account rested on an assumption of straightforward application of engineering knowledge to the problem, and they failed to take into account the defects of organization and oversight that were rampant within Soviet industrial systems. And in the end the core of Medvedev’s claims has been validated.

Another report, compiled by Los Alamos scientists and released in 1982, concluded unambiguously that Medvedev was mistaken, and that the widespread ecological devastation in the region resulted from small and gradual processes of contamination rather than a massive explosion of waste materials (link). Here is the conclusion put forward by the study’s authors:

What then did happen at Kyshtym? A disastrous nuclear accident that killed hundreds, injured thousands, and contaminated thousands of square miles of land? Or, a series of relatively minor incidents, embellished by rumor, and severely compounded by a history of sloppy practices associated with the complex? The latter seems more highly probable.

So Medvedev is dismissed.

After the collapse of the USSR voluminous records about the Kyshtym disaster became available from secret Soviet files, and those records make it plain that US scientists badly misjudged the nature of the Kyshtym disaster. Medvedev was much closer to the truth than were Stratton and his colleagues or the authors of the Los Alamos report.

A scientific report based on Soviet-era documents that were released after the fall of the Soviet Union appeared in the Journal of Radiological Protection in 2017 (A. V. Akleyev et al. 2017; link). Here is their brief description of the accident:

Starting in the earliest period of Mayak PA activities, large amounts of liquid high-level radioactive waste from the radiochemical facility were placed into long-term controlled storage in metal tanks installed in concrete vaults. Each full tank contained 70–80 tons of radioactive wastes, mainly in the form of nitrate compounds. The tanks were water-cooled and equipped with temperature and liquid-level measurement devices. In September 1957, as a result of a failure of the temperature-control system of tank #14, cooling-water delivery became insufficient and radioactive decay caused an increase in temperature followed by complete evaporation of the water, and the nitrate salt deposits were heated to 330 °C–350 °C. The thermal explosion of tank #14 occurred on 29 September 1957 at 4:20 pm local time. At the time of the explosion the activity of the wastes contained in the tank was about 740 PBq [5, 6]. About 90% of the total activity settled in the immediate vicinity of the explosion site (within distances less than 5 km), primarily in the form of coarse particles. The explosion gave rise to a radioactive plume which dispersed into the atmosphere. About 2 × 10⁶ Ci (74 PBq) was dispersed by the wind (north-northeast direction with wind velocity of 5–10 m s⁻¹) and caused the radioactive trace along the path of the plume [5]. Table 1 presents the latest estimates of radionuclide composition of the release used for reconstruction of doses in the EURT area. The mixture corresponded to uranium fission products formed in a nuclear reactor after a decay time of about 1 year, with depletion in ¹³⁷Cs due to a special treatment of the radioactive waste involving the extraction of ¹³⁷Cs [6]. (R20-21)
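(A check on the units in that passage, my arithmetic rather than the report’s: one curie is 3.7 × 10¹⁰ becquerels, so 2 × 10⁶ Ci = 7.4 × 10¹⁶ Bq = 74 PBq, which matches the figure given.)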

Here is the region of radiation contamination (EURT) that Akleyev et al identify:

This region represents a large area encompassing 23,000 square kilometers (8,880 square miles). Plainly Akleyev et al describe a massive disaster including a very large explosion in an underground nuclear waste storage facility, large-scale dispersal of nuclear materials, and evacuation of population throughout a large region. This is very close to the description provided by Medvedev.

A somewhat surprising finding of the Akleyev study is that the exposed population did not show dramatically worse health outcomes and mortality relative to unexposed populations. For example, “Leukemia mortality rates over a 30-year period after the accident did not differ from those in the group of unexposed people” (R30). Their epidemiological study for cancers overall likewise indicates only a small effect of accidental radiation exposure on cancer incidence:

The attributable risk (AR) of solid cancer incidence in the EURTC, which gives the proportion of excess cancer cases out of the sum of excess and baseline cases, calculated according to the linear model, made up 1.9% over the whole follow-up period. Therefore, only 27 cancer cases out of 1426 could be associated with accidental radiation exposure of the EURT population. AR is highest in the highest dose groups (250–500 mGy and >500 mGy) and exceeds 17%.
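The attributable-risk arithmetic in this passage can be reproduced directly; the following is my own back-of-the-envelope check, not the authors’ calculation:

    # Back-of-the-envelope check of the attributable-risk figures quoted above.
    total_cases = 1426         # solid cancer cases observed in the EURT cohort
    attributable_risk = 0.019  # excess cases / (excess + baseline cases), per the linear model

    excess_cases = attributable_risk * total_cases
    print(f"estimated excess cases: {excess_cases:.0f}")  # ~27, matching the quoted figure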

So why did the explosion occur? James Mahaffey examines the case in detail in Atomic Accidents: A History of Nuclear Meltdowns and Disasters: From the Ozark Mountains to Fukushima. Here is his account:

In the crash program to produce fissile bomb material, a great deal of plutonium was wasted in the crude separation process. Production officials decided that instead of being dumped irretrievably into the river, the plutonium that had failed to precipitate out, remaining in the extraction solution, should be saved for future processing. A big underground tank farm was built in 1953 to hold processed fission waste. Round steel tanks were installed in banks of 20, sitting on one large concrete slab poured at the bottom of an excavation, 27 feet deep. Each bank was equipped with a heat exchanger, removing the heat buildup from fission-product decay using water pipes wrapped around the tanks. The tanks were then buried under a backfill of dirt. The tanks began immediately to fill with various waste solutions from the extraction plant, with no particular distinction among the vessels. The tanks contained all the undesirable fission products, including cobalt-60, strontium-90, and cesium-137, along with unseparated plutonium and uranium, with both acetate and nitrate solutions pumped into the same volume. One tank could hold probably 100 tons of waste product. 

In 1956, a cooling-water pipe broke leading to one of the tanks. It would be a lot of work to dig up the tank, find the leak, and replace the pipe, so instead of going to all that trouble, the engineers in charge just turned off the water and forgot about it. 

A year passed. Not having any coolant flow and being insulated from the harsh Siberian winter by the fill dirt, the tank retained heat from the fission-product decay. Temperature inside reached 660 ° Fahrenheit, hot enough to melt lead and cast bullets. Under this condition, the nitrate solutions degraded into ammonium nitrate, or fertilizer, mixed with acetates. The water all boiled away, and what was left was enough solidified ANFO explosive to blow up Sterling Hall several times, being heated to the detonation point and laced with dangerous nuclides. [189] 

Sometime before 11: 00 P.M. on Sunday, September 29, 1957, the bomb went off, throwing a column of black smoke and debris reaching a kilometer into the sky, accented with larger fragments burning orange-red. The 160-ton concrete lid on the tank tumbled upward into the night like a badly thrown discus, and the ground thump was felt many miles away. Residents of Chelyabinsk rushed outside and looked at the lighted display to the northwest, as 20 million curies of radioactive dust spread out over everything sticking above ground. The high-level wind that night was blowing northeast, and a radioactive plume dusted the Earth in a tight line, about 300 kilometers long. This accident had not been a runaway explosion in an overworked Soviet production reactor. It was the world’s first “dirty bomb,” a powerful chemical explosive spreading radioactive nuclides having unusually high body burdens and guaranteed to cause havoc in the biosphere. The accidentally derived explosive in the tank was the equivalent of up to 100 tons of TNT, and there were probably 70 to 80 tons of radioactive waste thrown skyward. (KL 5295)
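Mahaffey’s numbers line up closely with the Soviet-era figures reported by Akleyev et al. The short sketch below is my own conversion, assuming only the standard curie and Fahrenheit definitions:

    # Reconciling Mahaffey's figures with those of Akleyev et al. (my own conversions).
    CI_TO_BQ = 3.7e10                     # 1 curie = 3.7e10 becquerels

    dust_ci = 20e6                        # "20 million curies of radioactive dust" (Mahaffey)
    dust_pbq = dust_ci * CI_TO_BQ / 1e15
    print(f"{dust_pbq:.0f} PBq")          # 740 PBq: the tank activity reported by Akleyev et al.

    temp_f = 660                          # tank temperature reported by Mahaffey (Fahrenheit)
    temp_c = (temp_f - 32) * 5 / 9
    print(f"{temp_c:.0f} C")              # ~349 C: within Akleyev et al.'s 330-350 C range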

So what were the primary organizational and social causes of this disaster? One is the haste in nuclear design and construction created by Stalin’s insistence on moving the Soviet nuclear weapons program forward as rapidly as possible. As is evident in the Chernobyl case as well, the political pressures placed on engineers and managers by these priorities often led to disastrous decisions and actions. A second is the institutionalized system of secrecy that surrounded industry generally, the military specifically, and the nuclear industry most especially. A third is the casual attitude taken by Soviet officials towards the health and wellbeing of the population. And a final cause highlighted by Mahaffey’s account is the low level of attention given at the plant level to safety and maintenance of highly risky facilities. Stratton et al based their analysis on the premise that the heat-generating characteristics of nuclear waste were well understood and that effective means existed for controlling those risks. That may be, but what they failed to anticipate is that these risks would be fundamentally disregarded on the ground and in the supervisory system above the Kyshtym reactor complex.

(It is interesting to note that Mahaffey himself underestimates the amount of information that is now available about the effects of the disaster. He writes that “studies of the effects of this disaster are extremely difficult, as records do not exist, and previous residents are hard to track down” (kl 5330). But the Akleyev study mentioned above provides extensive health data on the affected population, made possible by records that were collected during the Soviet period and kept secret until after the USSR’s collapse.)

Safety and accident analysis: Longford

Andrew Hopkins has written a number of fascinating case studies of industrial accidents, usually in the field of petrochemicals. These books are crucial reading for anyone interested in arriving at a better understanding of technological safety in the context of complex systems involving high-energy and tightly-coupled processes. Especially interesting is his Lessons from Longford: The ESSO Gas Plant Explosion. The Longford refining plant suffered an explosion and fire in 1998 that killed two workers, badly injured others, and interrupted the supply of natural gas to the state of Victoria for two weeks. Hopkins is a sociologist, but has developed substantial expertise in the technical details of petrochemical refining plants. He served as an expert witness in the Royal Commission hearings that investigated the accident. The accounts he offers of these disasters are genuinely fascinating to read.

Hopkins makes the now-familiar point that companies often seek to lay responsibility for a major industrial accident on operator error or malfeasance. This was Esso’s defense concerning its corporate liability in the Longford disaster. But, as Hopkins points out, the larger causes of failure go far beyond the individual operators whose decisions and actions were proximate to the event. Training, operating plans, hazard analysis, availability of appropriate onsite technical expertise — these are all the responsibility of the owners and managers of the enterprise. And regulation and oversight of safety practices are the responsibility of state agencies. So it is critical to examine the operations of a complex and dangerous technological system at all of these levels.

A crucial part of management’s responsibility is to engage in formal “hazard and operability” (HAZOP) analysis. “A HAZOP involves systematically imagining everything that might go wrong in a processing plant and developing procedures or engineering solutions to avoid these potential problems” (26). This kind of analysis is especially critical in high-risk industries including chemical plants, petrochemical refineries, and nuclear reactors. It emerged during the Longford accident investigation that HAZOP analyses had been conducted for some aspects of risk but not for all — even in areas where the parent company Exxon was itself already fully engaged in analysis of those risky scenarios. The risk of embrittlement of processing equipment when exposed to super-chilled conditions was one that Exxon had already drawn attention to at the corporate level because of prior incidents.
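To make the method concrete: a HAZOP study typically applies a set of guide words (“no”, “more”, “less”, and so on) to process parameters, and records causes, consequences, and safeguards for each resulting deviation. The record below is a minimal, hypothetical sketch of that structure; the equipment tag and the entries are invented for illustration and loosely modeled on the embrittlement scenario discussed here, not drawn from Esso’s or Exxon’s actual analyses.

    # Minimal, hypothetical sketch of a HAZOP-style deviation record (illustration only).
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Deviation:
        node: str                    # section of the plant under review (hypothetical tag below)
        parameter: str               # process parameter, e.g. temperature, flow, pressure
        guide_word: str              # HAZOP guide word, e.g. "no", "more", "less"
        causes: List[str] = field(default_factory=list)
        consequences: List[str] = field(default_factory=list)
        safeguards: List[str] = field(default_factory=list)

    # Invented entry, loosely modeled on the embrittlement scenario discussed above;
    # not an actual line from any Esso or Exxon HAZOP.
    example = Deviation(
        node="heat exchanger HX-1",
        parameter="temperature",
        guide_word="less",
        causes=["loss of warm oil flow"],
        consequences=["metal embrittlement; fracture if warm oil is reintroduced"],
        safeguards=["low-temperature alarm", "written procedure prohibiting warm-oil restart"],
    )
    print(example.guide_word, example.parameter, "->", example.consequences[0])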

A factor that Hopkins judges to be crucial to the Longford disaster is management’s decision to remove the engineering staff from the plant to a central location where they could serve a larger number of facilities “more efficiently”.

A second relevant change was the relocation to Melbourne in 1992 of all the engineering staff who had previously worked at Longford, leaving the Longford operators without the engineering backup to which they were accustomed. Following their removal from Longford, engineers were expected to monitor the plant from a distance and operators were expected to telephone the engineers when they felt a need to. Perhaps predictably, these arrangements did not work effectively, and I shall argue in the next chapter that the absence of engineering expertise had certain long-term consequences which contributed to the accident. (34)

One result of this decision was that when the Longford incident began, there were no engineering experts on site who could correctly identify the risks it created. Technicians therefore restarted the process by reintroducing warm oil into the super-chilled heat exchanger. The metal had become brittle as a result of the extremely low temperatures and cracked, leading to the release of fuel and the subsequent explosion and fire. As Hopkins points out, Exxon experts had long been aware of the hazards of embrittlement. However, it appears that the operating procedures developed by Esso at Longford ignored this risk, and operators and supervisors lacked the technical and scientific knowledge to recognize the hazard when it arose.

The topic of “tight coupling” (the tight interconnection across different parts of a complex technological system) comes up frequently in discussions of technology accidents. Hopkins shows that the Longford case gives a new spin to this idea. In the explosion and fire at Longford it turned out to be very important that plant 1 was interconnected by numerous plumbing connections to plants 2 and 3. This meant that fuel from plants 2 and 3 continued to flow into plant 1 and greatly extended the time it took to extinguish the fire. Plant 1 had to be fully isolated from plants 2 and 3 before the fire could be extinguished (or plants 2 and 3 restarted), and the many connections among them, poorly understood at the time of the fire, took a great deal of time to disconnect (32).

Hopkins addresses the issue of government regulation of high-risk industries in connection with the Longford disaster. Writing in 1999 or so, he recognizes the trend towards “self-regulation” in place of government rules stipulating how various industries must operate. He contrasts this approach with deregulation — the effort to allow safe operation to be governed by the market rather than by law.

Whereas the old-style legislation required employers to comply with precise, often quite technical rules, the new style imposes an overarching requirement on employers that they provide a safe and healthy workplace for their employees, as far as practicable. (92)

He notes that this approach does not necessarily reduce the need for government inspections; but the goal of regulatory inspection will be different. Inspectors will seek to satisfy themselves that the industry has done a responsible job of identifying hazards and planning accordingly, rather than looking for violations of specific rules. (This parallels to some extent his discussion of two different philosophies of audit, one of which is much more conducive to increasing the systems-safety of high-risk industries; chapter 7.) But his preferred regulatory approach is what he describes as “safety case regulation”. (Hopkins provides more detail about the workings of a safety case regime in Disastrous Decisions: The Human and Organisational Causes of the Gulf of Mexico Blowout, chapter 10.)

The essence of the new approach is that the operator of a major hazard installation is required to make a case or demonstrate to the relevant authority that safety is being or will be effectively managed at the installation. Whereas under the self-regulatory approach, the facility operator is normally left to its own devices in deciding how to manage safety, under the safety case approach it must lay out its procedures for examination by the regulatory authority. (96)

The preparation of a safety case would presumably include a comprehensive HAZOP analysis, along with procedures for preventing or responding to the occurrence of possible hazards. Hopkins reports that the safety case approach to regulation is being adopted by the EU, Australia, and the UK with respect to a number of high-risk industries. This discussion is highly relevant to the current debate over aircraft manufacturing safety and the role of the FAA in overseeing manufacturers.

It is interesting to realize that Hopkins is implicitly critical of another of my favorite authors on the topic of accidents and technology safety, Charles Perrow. Perrow’s central idea of “normal accidents” brings with it a certain pessimism about the ability to increase safety in complex industrial and technological systems; accidents are inevitable and normal (Normal Accidents: Living with High-Risk Technologies). Hopkins takes a more pragmatic approach and argues that there are engineering and management methodologies that can significantly reduce the likelihood and harm of accidents like the Esso gas plant explosion. His central point is that we do not need to anticipate every long chain of unlikely events in order to identify the hazards in which those chains may eventuate — for example, loss of coolant in a nuclear reactor or loss of warm oil in a refinery process. These end-state events, common to numerous possible accident scenarios, all require procedures in place that will guide the responses of engineers and technicians when “normal accidents” occur (33).

Hopkins highlights the challenge to safety created by the ongoing modification of a power plant or chemical plant; later modifications may create hazards not anticipated by the rigorous accident analysis performed on the original design.

Processing plants evolve and grow over time. A study of petroleum refineries in the US has shown that “the largest and most complex refineries in the sample are also the oldest … Their complexity emerged as a result of historical accretion. Processes were modified, added, linked, enhanced and replaced over a history that greatly exceeded the memories of those who worked in the refinery.” (33)

This is one of the chief reasons why Perrow believes technological accidents are inevitable. However, Hopkins draws a different conclusion:

However, those who are committed to accident prevention draw a different conclusion, namely, that it is important that every time physical changes are made to plant these changes be subjected to a systematic hazard identification process. …  Esso’s own management of change philosophy recognises this. It notes that “changes potentially invalidate prior risk assessments and can create new risks, if not managed diligently.” (33)

(I believe this recommendation conforms to Nancy Leveson’s theories of system safety engineering as well; link.)

Here is the causal diagram that Hopkins offers for the occurrence of the explosion at Longford (122).

The lowest level of the diagram represents the sequence of physical events and operator actions leading to the explosion, fatalities, and loss of gas supply. The next level represents the organizational factors identified in Hopkins’s analysis of the event and its background. Central among these factors are the decision to withdraw engineers from the plant; a safety philosophy that focused on lost-time injuries rather than system hazards and processes; failures in the incident reporting system; failure to perform a HAZOP for plant 1; poor maintenance practices; inadequate audit practices; inadequate training for operators and supervisors; and a failure to identify the hazard created by interconnections with plants 2 and 3. The next level identifies the causes of these management failures — Esso’s overriding focus on cost-cutting and a failure by Exxon, as the parent company, to adequately oversee safety planning and to share information from accidents at other plants. The final two levels of causation concern governmental and societal factors that contributed to the corporate behavior leading to the accident.
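Since the diagram itself is not reproduced here, the causal levels Hopkins describes can be summarized schematically; the following is my own reconstruction from the paragraph above (with the governmental and societal levels collapsed into a single entry), not Hopkins’s figure.

    # Schematic reconstruction of the causal levels described above; not Hopkins's own figure.
    causal_levels = {
        "governmental and societal factors": [
            "conditions that shaped the corporate behavior leading to the accident",
        ],
        "corporate-level causes": [
            "Esso's overriding focus on cost-cutting",
            "inadequate oversight and information-sharing by the parent company Exxon",
        ],
        "organizational factors": [
            "withdrawal of engineers from the plant",
            "safety philosophy focused on lost-time injuries rather than system hazards",
            "failures in the incident reporting system",
            "no HAZOP performed for plant 1",
            "poor maintenance and audit practices",
            "inadequate training for operators and supervisors",
            "unidentified hazard from interconnections with plants 2 and 3",
        ],
        "physical events and operator actions": [
            "sequence leading to the explosion, fatalities, and loss of gas supply",
        ],
    }

    for level, factors in causal_levels.items():
        print(level)
        for factor in factors:
            print("  -", factor)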

(Here is a list of major industrial disasters; link.)

Herbert Simon’s theories of organizations

Image: detail from Family Portrait 2 1965 
(Creative Commons license, Richard Rappaport)

Herbert Simon made paradigm-changing contributions to the theory of rational behavior, including particularly his treatment of “satisficing” as an alternative to “maximizing” economic rationality (link). It is therefore worthwhile examining his views of organizations and organizational decision-making and action — especially given how relevant those theories are to my current research interest in organizational dysfunction. His highly successful book Administrative Behavior went through four editions between 1947 and 1997 — more than fifty years of thinking about organizations and organizational behavior. The later editions consist of the original text and “commentary” chapters that Simon wrote to incorporate subsequent thinking about the content of each chapter.

Here I will pull out some of the highlights of Simon’s approach to organizations. There are many features of his analysis of organizational behavior that are worth noting. But my summary assessment is that the book is surprisingly positive about the rationality of organizations and the processes through which they collect information and reach decisions. In the contemporary environment where we have all too many examples of organizational failure in decision-making — from Boeing to Purdue Pharma to the Federal Emergency Management Agency — this confidence seems to be fundamentally misplaced. The theorist who invented the idea of imperfect rationality and satisficing at the individual level perhaps should have offered a somewhat more critical analysis of organizational thinking.

The first thing that the reader will observe is that Simon thinks about organizations as systems of decision-making and execution. His working definition of organization highlights this view:

In this book, the term organization refers to the pattern of communications and relations among a group of human beings, including the processes for making and implementing decisions. This pattern provides to organization members much of the information and many of the assumptions, goals, and attitudes that enter into their decisions, and provides also a set of stable and comprehensible expectations as to what the other members of the group are doing and how they will react to what one says and does. (18-19).

What is a scientifically relevant description of an organization? It is a description that, so far as possible, designates for each person in the organization what decisions that person makes, and the influences to which he is subject in making each of these decisions. (43)

The central theme around which the analysis has been developed is that organization behavior is a complex network of decisional processes, all pointed toward their influence upon the behaviors of the operatives — those who do the actual ‘physical’ work of the organization. (305)

The task of decision-making breaks down into the assimilation of relevant facts and values — a distinction that Simon attributes to logical positivism in the original text but makes more general in the commentary. Answering the question, “what should we do?”, requires a clear answer to two kinds of questions: what values are we attempting to achieve? And how does the world work such that interventions will bring about those values?

It is refreshing to see Simon’s skepticism about the “rules of administration” that various generations of organizational theorists have advanced — “specialization,” “unity of command,” “span of control,” and so forth. Simon describes these as proverbs rather than as useful empirical discoveries about effective administration. And he finds the idea of “schools of management theory” to be entirely unhelpful (26). Likewise, he is skeptical about the value of the economic theory of the firm, which, in his view, abstracts from all of the arrangements among participants that are crucial to the internal processes of the organization. He recommends an approach to the study of organizations (and the design of organizations) that focuses on the specific arrangements needed to bring factual and value claims into a process of deliberation leading to decision — incorporating the kinds of specialization and control that make sense for a particular set of business and organizational tasks.

An organization has only two fundamental tasks: decision-making and “making things happen”. The decision-making process involves intelligently gathering facts and values and designing a plan. Simon generally approaches this process as a reasonably rational one. He identifies three kinds of limits on rational decision-making:

  • The individual is limited by those skills, habits, and reflexes which are no longer in the realm of the conscious…
  • The individual is limited by his values and those conceptions of purpose which influence him in making his decision…
  • The individual is limited by the extent of his knowledge of things relevant to his job. (46)

And he explicitly regards these points as being part of a theory of administrative rationality:

Perhaps this triangle of limits does not completely bound the area of rationality, and other sides need to be added to the figure. In any case, the enumeration will serve to indicate the kinds of considerations that must go into the construction of valid and noncontradictory principles of administration. (47)

The “making it happen” part is more complicated. This has to do with the problem the executive faces of bringing about the efficient, effective, and loyal performance of assigned tasks by operatives. Simon’s theory essentially comes down to training, loyalty, and authority.

If this is a correct description of the administrative process, then the construction of an efficient administrative organization is a problem in social psychology. It is a task of setting up an operative staff and superimposing on that staff a supervisory staff capable of influencing the operative group toward a pattern of coordinated and effective behavior. (2)

To understand how the behavior of the individual becomes a part of the system of behavior of the organization, it is necessary to study the relation between the personal motivation of the individual and the objectives toward which the activity of the organization is oriented. (13-14)

Simon refers to three kinds of influence that executives and supervisors can have over “operatives”: formal authority (enforced by the power to hire and fire), organizational loyalty (cultivated through specific means within the organization), and training. Simon holds that a crucial role of administrative leadership is the task of motivating the employees of the organization to carry out the plan efficiently and effectively.

Later he refers to five “mechanisms of organization influence” (112): specialization and division of task; the creation of standard practices; transmission of decisions downwards through authority and influence; channels of communication in all directions; and training and indoctrination. Through these mechanisms the executive seeks to ensure a high level of conformance and efficient performance of tasks.

What about the actors within an organization? How do they behave as individual actors? Simon treats them as “boundedly rational”:

To anyone who has observed organizations, it seems obvious enough that human behavior in them is, if not wholly rational, at least in good part intendedly so. Much behavior in organizations is, or seems to be, task-oriented–and often efficacious in attaining its goals. (88)

But this description leaves out altogether the possibility and likelihood of mixed motives, conflicts of interest, and intra-organizational disagreement. When Simon considers the fact of multiple agents within an organization, he acknowledges that this poses a challenge for rationalistic organizational theory:

Complications are introduced into the picture if more than one individual is involved, for in this case the decisions of the other individuals will be included among the conditions which each individual must consider in reaching his decisions. (80)

This acknowledges the essential feature of organizations — the multiplicity of actors — but fails to treat it with the seriousness it demands. He attempts to resolve the issue by invoking cooperation and the language of strategic rationality: “administrative organizations are systems of cooperative behavior. The members of the organization are expected to orient their behavior with respect to certain goals that are taken as ‘organization objectives’” (81). But this simply presupposes the result we might want to occur, without providing a basis for expecting it to take place.

With the hindsight of half a century, I am inclined to think that Simon attributes too much rationality and hierarchical purpose to organizations.

The rational administrator is concerned with the selection of these effective means. For the construction of an administrative theory it is necessary to examine further the notion of rationality and, in particular, to achieve perfect clarity as to what is meant by “the selection of effective means.” (72)

These sentences, and many others like them, present the task as one of defining the conditions of rationality of an organization or firm; this takes for granted the notion that the relations of communication, planning, and authority can result in a coherent implementation of a plan of action. His model of an organization involves high-level executives who pull together factual information (making use of specialized experts in this task) and integrate the purposes and goals of the organization (profits, maintaining the health and safety of the public, reducing poverty) into an actionable set of plans to be implemented by subordinates. He refers to a “hierarchy of decisions,” in which higher-level goals are broken down into intermediate-level goals and tasks, with a coherent relationship between intermediate and higher-level goals. “Behavior is purposive in so far as it is guided by general goals or objectives; it is rational in so far as it selects alternatives which are conducive to the achievement of the previously selected goals” (4). And the suggestion is that a well-designed organization succeeds in establishing this kind of coherence of decision and action.

It is true that he also asserts that decisions are “composite” —

It should be perfectly apparent that almost no decision made in an organization is the task of a single individual. Even though the final responsibility for taking a particular action rests with some definite person, we shall always find, in studying the manner in which this decision was reached, that its various components can be traced through the formal and informal channels of communication to many individuals … (305)

But even here he fails to consider the possibility that this compositional process may involve systematic dysfunctions that require study. Rather, he seems to presuppose that this composite process itself proceeds logically and coherently. In commenting on a case study by Oswyn Murray (1923) on the design of a post-WWI battleship, he writes: “The point which is so clearly illustrated here is that the planning procedure permits expertise of every kind to be drawn into the decision without any difficulties being imposed by the lines of authority in the organization” (314). This conclusion is strikingly at odds with most accounts of science-military relations during World War II in Britain — for example, the pernicious interference of Frederick Alexander Lindemann with Patrick Blackett over Blackett’s struggles to create an operations-research basis for anti-submarine warfare (Blackett’s War: The Men Who Defeated the Nazi U-Boats and Brought Science to the Art of Warfare). His comments about the processes of review that can be implemented within organizations (314 ff.) are similarly excessively optimistic — contrary to the literature on principal-agent problems in many areas of complex collaboration.

This is surprising, given Simon’s contributions to the theory of imperfect rationality in individual decision-making. Against this confidence, the sources of organizational dysfunction that are now apparent in several literatures on organizations make it difficult to imagine that organizations can have a high success rate in rational decision-making. If we were seeking a Simon-like phrase for organizational thinking to parallel the idea of satisficing, we might come up with the notion of “bounded localistic organizational rationality”: “locally rational, frequently influenced by extraneous forces, incomplete information, incomplete communication across divisions, rarely coherent over the whole organization”.

Simon makes the point emphatically in the opening chapters of the book that administrative science is an incremental and evolving field. And in fact, it seems apparent that his own thinking continued to evolve. There are occasional threads of argument in Simon’s work that seem to point towards a more contingent view of organizational behavior and rationality, along the lines of Fligstein and McAdam’s theories of strategic action fields. For example, when discussing organizational loyalty Simon raises the kind of issue that is central to the strategic action field model of organizations: the conflicts of interest that can arise across units (11). And in the commentary on Chapter I he points forward to the theories of strategic action fields and complex adaptive systems:

The concepts of systems, multiple constituencies, power and politics, and organization culture all flow quite naturally from the concept of organizations as complex interactive structures held together by a balance of the inducements provided to various groups of participants and the contributions received from them. (27)

The book has been a foundational contribution to organizational studies. At the same time, if Herbert Simon were at the beginning of his career and were beginning his study of organizational decision-making today, I suspect he might have taken a different tack. He was plainly committed to empirical study of existing organizations and the mechanisms through which they worked. And he was receptive to the ideas surrounding the notion of imperfect rationality. The current literature on the sources of contention and dysfunction within organizations (Perrow, Fligstein, McAdam, Crozier, …) might well have led him to write a different book altogether, one that gave more attention to the sources of failures of rational decision-making and implementation alongside the occasional examples of organizations that seem to work at a very high level of rationality and effectiveness.

Asian Conference on the Philosophy of the Social Sciences

photo: Tianjin, China

A group of philosophers of social science convened in Tianjin, China, at Nankai University in June to consider some of the ways that the social sciences can move forward in the twenty-first century. This was the Asian Conference on the Philosophy of the Social Sciences, and there were participants from Asia, Europe, Australia, and the United States. (It was timely for Nankai University to host such a meeting, since it is celebrating the centennial of its founding in 1919 this year.) The conference was highly productive for all participants, and it seems to have the potential of contributing to fruitful future thinking about philosophy and the social sciences in Chinese universities as well.

Organized by Francesco Di Iorio and the School of Philosophy at Nankai University, the meeting was a highly productive international gathering of scholars with interests in all aspects of the philosophy of the social sciences. Topics that came in for discussion included the nature of individual agency, the status of “social kinds”, the ways in which organizations “think”, current thinking about methodological individualism, and the status of idealizations in the social sciences, among many others. It was apparent that participants from many countries gained insights from their colleagues from other countries and regions when discussing social science theory and specific social challenges.

Along with many others, I believe that the philosophy of social science has the potential to be a high-impact discipline in philosophy. The contemporary world poses complex, messy problems with huge import for the whole of the global population, and virtually all of those challenges involve difficult situations of social and behavioral interaction (link). Migration, poverty, youth disaffection, the cost of higher education, rising economic and social inequalities, the rise of extremism, and the creation of vast urban centers like Shanghai and Rio de Janeiro all involve a mix of behavior, technology, and environment that will require the very best social-science research to navigate successfully. And if anyone ever thought that the social sciences were simpler or easier than the natural sciences, the perplexities we currently face of nationalism, racism, and rising inequalities should certainly set that thought to rest for good.

Philosophy can help social scientists gain better theoretical and analytical understanding of the social world in which we live. Philosophers can do this by thinking carefully about the nature of causal relationships in the social world (link); by considering the limitations of social-science inquiry that are inherent in the nature of the social world (link); and by assessing the implications of various discoveries in the logic of collective action for social life (link).

When we undertake large technology projects we make use of the theories and methods of analysis of forces and materials that are provided by the natural sciences. This is what gives us confidence that buildings will stand up to earthquakes and that bridges will be able to sustain the stresses associated with traffic and wind. We turn to policy and legislation in an effort to solve social problems; public policy is the counterpart to technology. However, it is clear that public policy is far less amenable to precise scientific and analytical guidance. Cause-and-effect relationships are more difficult to discern in the social world, contingency and conjunction are vastly more important, and the ability of social-science theories to measure and predict is substantially more limited than that of the natural sciences. So it is all the more important to have a clear and dynamic understanding of the challenges and resources that confront social scientists as they attempt to understand social processes and behavior.

These kinds of “wicked” social problems occur in every country, but they are especially pressing in Asia at present (link, link). As citizens and academics in Japan, Thailand, China, Russia, Serbia, or France consider their roles in the future of their countries, they will be empowered in their efforts by the best possible thinking about the scope and limits of the various disciplines of the social sciences.

This kind of international meeting organized around topics in the philosophy of the social sciences has the potential to stimulate new thinking and substantial progress in our understanding of society. The fact that philosophers in China, Thailand, Finland, Japan, France, and the United States bring very different national and cultural experiences to their philosophical theories creates the possibility of synergy and the challenging of presuppositions. One such example came up in a discussion with Finnish philosopher Uskali Maki over my use of principal-agent problems as a general source of organizational dysfunction. Maki argued that this claim reflects a specific cultural context, and that this kind of dysfunction is substantially less prevalent in Finnish organizations and government agencies. (He also argued that my philosophy of social science over-emphasizes plasticity and change, whereas in his view it is the fact of social order that must be explained.) It was also interesting to consider with a Chinese philosopher whether there are aspects of traditional Chinese philosophy that might shed light on current social processes. Does Mencius provide a different way of thinking about the role and legitimacy of government than the social contract tradition within which European philosophers generally operate (link)?

So along with all the other participants, I would like to offer sincere appreciation to Francesco Di Iorio and his colleagues at the School of Philosophy for the superlative inspiration and coordination they provided for this international conference of philosophers.

Auditing FEMA

Crucial to improving an organization’s performance is the ability to obtain honest and detailed assessments of its functioning, in normal times and in emergencies. FEMA has had a troubled reputation for faulty performance since the Katrina disaster in 2005, and its responses to more recent disasters, including Hurricane Maria in Puerto Rico and disasters in Louisiana and other states, have also been criticized by observers and victims. So how can FEMA get better? The best avenue is careful, honest review of past performance, identifying specific areas of organizational failure and taking steps to improve in those areas.

It is therefore enormously disturbing to read an investigative report in the Washington Post (Lisa Rein and Kimberly Kindy, Washington Post, June 6, 2019; link) documenting that investigations and audits by the Inspector General of the Department of Homeland Security were watered down and sanitized at the direction of the audit bureau’s acting director, John V. Kelly.

Auditors in the Department of Homeland Security inspector general’s office confirmed problems with the Federal Emergency Management Agency’s performance in Louisiana — and in 11 other states hit over five years by hurricanes, mudslides and other disasters. 

But the auditors’ boss, John V. Kelly, instead directed them to produce what they called “feel-good reports” that airbrushed most problems and portrayed emergency responders as heroes overcoming vast challenges, according to interviews and a new internal review. 

Investigators determined that Kelly didn’t just direct his staff to remove negative findings. He potentially compromised their objectivity by praising FEMA’s work ethic to the auditors, telling them they would see “FEMA at her best” and instructing supervisors to emphasize what the agency had done right in its disaster response. (Washington Post, June 6, 2019)

“Feel-good” reports are not what quality improvement requires, and they are not what legislators and other public officials need as they consider the adequacy of some of our most important governmental institutions. It is absolutely crucial for the public and for government oversight that we should be able to rely on the honest, professional, and rigorous work of auditors and investigators without political interference in their findings. These are the mechanisms through which the integrity of regulatory agencies and other crucial governmental agencies is maintained.

Legislators and the public are already concerned about the effectiveness of the Federal Aviation Administration’s oversight of the certification process for the Boeing 737 MAX. The evidence brought forward by the Washington Post concerning interference with the work of the staff of the Inspector General of DHS simply amplifies that concern. The article correctly observes that independent and rigorous oversight is crucial for improving the functioning of government agencies, including DHS and FEMA:

Across the federal government, agencies depend on inspectors general to provide them with independent, fact-driven analysis of their performance, conducting audits and investigations to ensure that taxpayers’ money is spent wisely. 

Emergency management experts said that oversight, particularly from auditors on the ground as a disaster is unfolding, is crucial to improving the response, especially in ensuring that contracts are properly administered. (Washington Post, June 6, 2019)

Honest government simply requires independent and effective oversight processes. Every agency, public and private, has an incentive to conceal perceived areas of poor performance. Hospitals prefer to keep secret outbreaks of infection and other medical misadventures (link), the Department of Interior has shown an extensive pattern of conflict of interest by some of its senior officials (link), and the Pentagon Papers showed how the Department of Defense sought to conceal evidence of military failure in Vietnam (link). The only protection we have from these efforts at concealment, lies, and spin is vigorous governmental review and oversight, embodied by offices like the Inspectors General of various agencies, and an independent and vigorous press able to seek out these kinds of deception.
