The research university

Where do new ideas, new technologies, and new ways of thinking about the world come from in a modern society? Since World War II the answer to this question has largely been found in research universities. Research universities are doctoral institutions that employ professors who are advanced academic experts in a variety of fields and that expend significant amounts of external funds in support of ongoing research. Given the importance of innovation and new ideas in the knowledge economy of the twenty-first century, it is vital to understand the dynamics of research universities and the factors that make them more or less productive in generating new knowledge. And, crucially, we need to understand how public policy can enhance the effectiveness of the university research enterprise for the benefit of the whole of society.

Jason Owen-Smith’s recent Research Universities and the Public Good: Discovery for an Uncertain Future is a welcome and insightful contribution to our understanding of this topic. Owen-Smith is a sociology professor at the University of Michigan (itself a major research university with over 1.5 billion dollars in annual research funding), and he brings to his task some of the most important ideas currently transforming the field of organizational studies.

Owen-Smith analyzes research universities (RUs) in terms of three fundamental ideas: RUs serve as source, anchor, and hub for the generation of innovations and new ideas in a vast range of fields, from the humanities to basic science to engineering and medicine. And he believes that this triple function makes research universities virtually unique among American (or global) knowledge-producing organizations, including corporate and government laboratories (33).

The idea of the university as a source is fairly obvious: it is the idea that universities create and disseminate new knowledge in a very wide range of fields. Sometimes that knowledge is of interest to a hundred people worldwide; and sometimes it results in the creation of genuinely transformative technologies and methods. The idea of the university as “anchor” refers largely to the stability that research universities offer the knowledge enterprise. Another aspect of the idea of the university as an anchor is the fact that it helps to create a public infrastructure that encourages other kinds of innovation in the region that it serves — much as an anchor tenant helps to bring potential customers to smaller stores in a shopping mall. Unlike other knowledge-centered organizations like private research labs or federal laboratories, universities have a diverse portfolio of activity that confers a very high level of stability over time. This is a large asset for the country as a whole. It is also frequently an asset for the city or region in which it is located.

The idea of the university as a hub is perhaps the most innovative perspective offered here. The idea of a hub is a network concept. A hub is a node that links individuals and centers to each other in ways that transcend local organizational charts. And the power of a hub, and the networks that it joins, is that it facilitates the exchange of information and ideas and creates the possibility of new forms of cooperation and collaboration. Here the idea is that a research university is a place where researchers form working relationships, both on campus and in national networks of affiliation. And the density and configuration of these relationships serve to facilitate communication and diffusion of new ideas and approaches to a given problem, with the result that progress is more rapid. O-S makes use of Peter Galison’s treatment of the simultaneous discovery of the relativity of time measurement by Einstein and Poincaré in Einstein’s Clocks and Poincaré’s Maps: Empires of Time.  Galison shows that Einstein and Poincaré were both involved in extensive intellectual networks that were quite relevant to their discoveries; but that their innovations had substantially different effects because of differences in those networks. Owen-Smith believes that these differences are very relevant in the workings of modern RUs in the United States as well. (See also Galison’s Image and Logic: A Material Culture of Microphysics.)

Radical discoveries like the theory of special relativity are exceptionally rare, but the conditions that gave rise to them should also enable less radical insights. Imagining universities as organizational scaffolds for complex collaboration networks and focal points where flows of ideas, people, and problems come together offers a systematic way to assess the potential for innovation and novelty as well as for multiple discoveries. (15)
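The network idea at work here can be made concrete with a toy computation. The sketch below is purely illustrative (the node names and collaboration data are invented, and real analyses would use richer centrality measures): it identifies a “hub” as the most-connected node in a small collaboration graph, the node best placed to broker flows of ideas between otherwise unconnected neighbors.

```python
# Toy collaboration network: each key is a research unit, each value
# the set of units it is directly linked to. The data are invented
# purely for illustration.
collaborations = {
    "lab_A": {"lab_B", "lab_C", "lab_D", "university_X"},
    "lab_B": {"lab_A", "university_X"},
    "lab_C": {"lab_A", "university_X"},
    "lab_D": {"lab_A"},
    "university_X": {"lab_A", "lab_B", "lab_C"},
}

def degree(node):
    """Number of direct ties a node has."""
    return len(collaborations[node])

# The hub is the node with the most direct ties: it links individuals
# and centers in ways that transcend the local organizational chart.
hub = max(collaborations, key=degree)
print(hub, degree(hub))  # lab_A, with 4 ties
```

In a real study the graph would be built from co-authorship or grant data, and degree would be only one of several centrality measures; the point here is just that “hub” has a precise network meaning.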

Treating a complex and interdependent social process that occurs across relatively long time scales as if it had certain needs, short time frames, and clear returns is not just incorrect, it’s destructive. The kinds of simple rules I suggested earlier represent what organizational theorist James March called “superstitious learning.” They were akin to arguing that because many successful Silicon Valley firms were founded in garages, economic growth is a simple matter of building more garages. (25)

Rather, as O-S demonstrates in the case of the key discoveries that led to the establishment of Google, the pathway was long, complex, and heavily dependent on social networks of scientists, funders, entrepreneurs, graduate students, and federal agencies.

A key observation in O-S’s narrative at numerous points is the futility — perhaps even harmfulness — of attempting to harness university research to specific, quantifiable economic or political goals. The idea of selecting university research and teaching programs on the basis of their ROI relative to economic goals is, according to O-S, deeply misguided. The extended example he offers of the research that led to the establishment of Google as a company and a search engine illustrates this point very compellingly: much of the foundational research that made the search algorithms possible had the look of entirely non-pragmatic, non-utilitarian knowledge production at the time it was funded (chapter 1). (The development of the smart phone has a similar history; 63.) Philosophy, art history, and social theory can be as important to the overall success of the research enterprise as more intentionally directed areas of research (electrical engineering, genetic research, autonomous vehicle design). His discussion of Wisconsin Governor Scott Walker’s effort to revise the mission statement of the University of Wisconsin is exemplary (45 ff.).

Contra Governor Walker, the value of the university is found not in its ability to respond to immediate needs but in an expectation that joining systematic inquiry and education will result in people and ideas that reach beyond local, sometimes parochial, concerns. (46-47)

Also interesting is O-S’s discussion of the functionality of the extreme decentralization that is typical of most large research universities. In general O-S regards this decentralization as a positive thing, leading to greater independence for researchers and research teams and permitting higher levels of innovation and productive collaboration. In fact, O-S appears to believe that decentralization is a critical factor in the success of the research university as source, anchor, and hub in the creation of new knowledge.

The competition and collaboration enabled by decentralized organization, the pluralism and tension created when missions and fields collide, and the complex networks that emerge from knowledge work make universities sources by enabling them to produce new things on an ongoing basis. Their institutional and physical stability prevents them from succumbing to either internal strife or the kinds of ‘creative destruction’ that economist Joseph Schumpeter took to be a fundamental result of innovation under capitalism. (61)

O-S’s discussion of the micro-processes of discovery is particularly interesting (chapter 3). He makes a sustained attempt to dissect the interactive, networked ways in which multiple problems, methods, and perspectives occasionally come together to solve an important problem or develop a novel idea or technology. In O-S’s telling of the story, the existence of intellectual and scientific networks is crucial to the fecundity of these processes in and around research universities.

This is an important book and one that merits close reading. Nothing could be more critical to our future than the steady discovery of new ideas and solutions. Research universities have shown themselves to be uniquely powerful engines for discovery and dissemination of new knowledge. But the rapid decline of public appreciation of universities presents a serious risk to the continued vitality of the university-based knowledge sector. The most important contribution O-S has made here, in my reading, is the detailed work he has done to give exposition to the “micro-processes” of the research university — the collaborations, the networks, the unexpected contiguities of problems, and the high level of decentralization that American research universities embody. As O-S documents, these processes are difficult to present to the public in a compelling way, and the vitality of the research university itself is vulnerable to destructive interference in the current political environment. Providing a clear, well-documented account of how research universities work is a major and valuable contribution.

Eleven years of Understanding Society

This month marks the end of the eleventh year of publication of Understanding Society. Thanks to all the readers and visitors who have made the blog so rewarding. The audience continues to be international, with roughly half of visits coming from the United States and the rest from the UK, the Philippines, India, Australia, and various European countries. There are a surprising number of visits from Ukraine.

Topics in the past year have been diverse. The most frequent topic is my current research interest, organizational dysfunction and technology failure. Also represented are topics in the philosophy of social science (causal mechanisms, computational modeling), philosophy of history, China, and the politics of hate and division. The post with the largest number of views was “Is history probabilistic?”, posted on December 30, and the least-read post was “The insights of biography”, posted on August 29. Not surprisingly, the content of the blog follows the topics which I’m currently thinking about, including most recently the issue of sexual harassment of women in university settings.

Writing the blog has been a good intellectual experience for me. Taking an hour or two to think intensively about a particular idea — large or small — and trying to figure out what I think about it is genuinely stimulating for me. It makes me think of the description that Richard Schacht gave in an undergraduate course on nineteenth-century philosophy of Hegel’s theory of creativity and labor. A sculptor begins with an indefinite idea of a physical form, a block of stone, and a hammer and chisel, and through the interaction of hands, tools, and materials he or she creates something new. The initial vision, inchoate as it is, is not enough, and the block of stone is mute. But the sculptor gives material expression to his or her visions through concrete interaction with the materials at hand. This is not a bad analogy for the process of thinking and writing itself. It is interesting that Marx’s conception of the creativity of labor derives from this Hegelian metaphor.

This is what I had hoped for when I began the blog in 2007. I wanted to have a challenging form of expression that would allow me to develop ideas about how society and the social sciences work, and I hoped that this activity would draw me into new ideas, new thinking, and new approaches to problems already of interest. This has certainly materialized for me — perhaps in the same way that a sculptor develops new capacities by contending with the resistance and contingency of the stone. There are issues, perspectives, and complexities that I have come to find very interesting that would not have come up in a more linear kind of academic writing.

It is also interesting for me to reflect on the role that “audience” plays for the writer. Since the first year of the blog I have felt that I understood the level of knowledge, questions, and interests that brought visitors to read a post or two, and sometimes to leave a comment. This is a smart, sophisticated audience. I have felt complete freedom in treating my subjects in the way that I think about them, without needing to simplify or reduce the problems I am considering to a more “public” level. This contrasts with the experience I had in blogging for the Huffington Post a number of years ago. Huff Post was a much more visible platform, but I never felt a connection with the audience, and I never felt the sense of intellectual comfort that I have in producing Understanding Society. As a result it was difficult to formulate my ideas in a way that seemed both authentic and original.

So thank you, to all the visitors and readers who have made the blog so satisfying for me over such a long time.

Sexual harassment in academic contexts

Sexual harassment of women in academic settings is regrettably common and pervasive, and its consequences are grave. At the same time, it is a remarkably difficult problem to solve. The “me-too” movement has shed welcome light on specific individual offenders and has generated more awareness of some aspects of the problem of sexual harassment and misconduct. But we have not yet come to a public awareness of the changes needed to create a genuinely inclusive and non-harassing environment for women across the spectrum of mistreatment that has been documented. The most common institutional response following an incident is to create a program of training and reporting, with a public commitment to investigating complaints and enforcing university or institutional policies rigorously and transparently. These efforts are often well intentioned, but by themselves they are insufficient. They do not address the underlying institutional and cultural features that make sexual harassment so prevalent.

The problem of sexual harassment in institutional contexts is a difficult one because it derives from multiple features of the organization. The ambient culture of the organization is often an important facilitator of harassing behavior — often enough a patriarchal culture that is deferential to the status of higher-powered individuals at the expense of lower-powered targets. Executive leadership in many institutions continues to be predominantly male, and these leaders often bring with them a set of gendered assumptions that they fail to recognize. The hierarchical nature of the power relations of an academic institution is conducive to mistreatment of many kinds, including sexual harassment. Bosses to administrative assistants, research directors to post-docs, thesis advisors to PhD candidates — these unequal relations of power create an environment in which sexual harassment in many varieties can flourish. In each case the superior actor has enormous power and influence over the career prospects and work lives of the women subject to his or her authority. And then there are the habits of behavior that individuals bring to the workplace and the learning environment — sometimes habits of masculine entitlement, sometimes disdainful attitudes towards female scholars or scientists, sometimes an underlying willingness to bully others that finds expression in an academic environment. (A recent issue of the Journal of Social Issues (link) devotes substantial attention to the topic of toxic leadership in the tech sector and the “masculinity contest culture” that this group of researchers finds to be a root cause of the toxicity this sector displays for women professionals. Research by Jennifer Berdahl, Peter Glick, Natalya Alonso, and more than a dozen other scholars provides in-depth analysis of this common feature of work environments.)

The scope and urgency of the problem of sexual harassment in academic contexts is documented in excellent and expert detail in a recent study report by the National Academies of Sciences, Engineering, and Medicine (link). This report deserves prominent discussion at every university.

The study documents the frequency of sexual harassment in academic and scientific research contexts, and the data are sobering. Here are the results of two indicative studies at the Penn State University System and the University of Texas System:

The Penn State survey indicates that 43.4% of undergraduates, 58.9% of graduate students, and 72.8% of medical students have experienced gender harassment, while 5.1% of undergraduates, 6.0% of graduate students, and 5.7% of medical students report having experienced unwanted sexual attention and sexual coercion. These are staggering results, both in terms of the absolute number of students who were affected and the negative effects that these experiences had on their ability to fulfill their educational potential. The University of Texas study shows a similar pattern, but also permits us to see meaningful differences across fields of study. Engineering and medicine provide significantly more harmful environments for female students than non-STEM and science disciplines. The authors make a particularly worrisome observation about medicine in this context:

The interviews conducted by RTI International revealed that unique settings such as medical residencies were described as breeding grounds for abusive behavior by superiors. Respondents expressed that this was largely because at this stage of the medical career, expectation of this behavior was widely accepted. The expectations of abusive, grueling conditions in training settings caused several respondents to view sexual harassment as a part of the continuum of what they were expected to endure. (63-64)

The report also does an excellent job of defining the scope of sexual harassment. Media discussion of sexual harassment and misconduct focuses primarily on egregious acts of sexual coercion. However, the authors of the NAS study note that experts now include sexual coercion, unwanted sexual attention, and gender harassment under this category of harmful interpersonal behavior. The largest sub-category is gender harassment:

“a broad range of verbal and nonverbal behaviors not aimed at sexual cooperation but that convey insulting, hostile, and degrading attitudes about” members of one gender (Fitzgerald, Gelfand, and Drasgow 1995, 430). (25)

The “iceberg” diagram (p. 32) captures the range of behaviors encompassed by the concept of sexual harassment. (See Leskinen, Cortina, and Kabat 2011 for extensive discussion of the varieties of sexual harassment and the harms associated with gender harassment.)

The report emphasizes organizational features as a root cause of a harassment-friendly environment.

By far, the greatest predictors of the occurrence of sexual harassment are organizational. Individual-level factors (e.g., sexist attitudes, beliefs that rationalize or justify harassment, etc.) that might make someone decide to harass a work colleague, student, or peer are surely important. However, a person that has proclivities for sexual harassment will have those behaviors greatly inhibited when exposed to role models who behave in a professional way as compared with role models who behave in a harassing way, or when in an environment that does not support harassing behaviors and/or has strong consequences for these behaviors. Thus, this section considers some of the organizational and environmental variables that increase the risk of sexual harassment perpetration. (46)

Some of the organizational factors that they refer to include the extreme gender imbalance that exists in many professional work environments, the perceived absence of organizational sanctions for harassing behavior, work environments where sexist views and sexually harassing behavior are modeled, and power differentials (47-49). The authors make the point that gender harassment is chiefly aimed at indicating disrespect towards the target rather than sexual exploitation. This has an important implication for institutional change. An institution that creates a strong core set of values emphasizing civility and respect is less conducive to gender harassment. They summarize this analysis in the statement of findings as well:

Organizational climate is, by far, the greatest predictor of the occurrence of sexual harassment, and ameliorating it can prevent people from sexually harassing others. A person more likely to engage in harassing behaviors is significantly less likely to do so in an environment that does not support harassing behaviors and/or has strong, clear, transparent consequences for these behaviors. (50)

So what can a university or research institution do to reduce and eliminate the likelihood of sexual harassment for women within the institution? Several remedies seem fairly obvious, though difficult.

  • Establish a pervasive expectation of civility and respect in the workplace and the learning environment
  • Diffuse the concentrations of power that give potential harassers the opportunity to harass women within their domains
  • Ensure that the institution honors its values by rejecting the “star culture” common in universities that makes high-prestige university members untouchable
  • Be vigilant and transparent about the processes of investigation and adjudication through which complaints are considered
  • Create effective processes that ensure that complainants do not suffer retaliation
  • Consider candidates’ receptivity to the values of a respectful, civil, and non-harassing environment during the hiring and appointment process (including research directors, department and program chairs, and other positions of authority)
  • Address the gender imbalance that may exist in leadership circles

As the authors put the point in the final chapter of the report:

Preventing and effectively addressing sexual harassment of women in colleges and universities is a significant challenge, but we are optimistic that academic institutions can meet that challenge–if they demonstrate the will to do so. This is because the research shows what will work to prevent sexual harassment and why it will work. A systemwide change to the culture and climate in our nation’s colleges and universities can stop the pattern of harassing behavior from impacting the next generation of women entering science, engineering, and medicine. (169)

Turing’s journey

A recent post comments on the value of biography as a source of insight into history and thought. Currently I am reading Andrew Hodges’ Alan Turing: The Enigma (1983), which I am finding fascinating, both for its portrayal of the evolution of a brilliant and unconventional mathematician and for its honest treatment of Turing’s sexual evolution and the tragedy in which it eventuated. Hodges makes a serious effort to give the reader some understanding of Turing’s important contributions, including his enormously important “computable numbers” paper. (Here is a nice discussion of computability in the Stanford Encyclopedia of Philosophy; link.) The book also offers a reasonably technical account of the Enigma code-breaking process.

Hilbert’s mathematical imagination plays an important role in Turing’s development. Hilbert’s conjecture that every mathematical statement would turn out to be derivable or disprovable turned out to be wrong, and Turing’s computable numbers paper, together with the results of Gödel and Church, showed that formal mathematics is incomplete and that the Entscheidungsproblem is undecidable. But it was Hilbert’s precise formulation of the question that permitted the conclusive refutations that came later. (Here is Richard Zach’s account of Hilbert’s program in the Stanford Encyclopedia of Philosophy; link.)

And then there were the machines. I had always thought of the Turing machine as a pure thought experiment designed to give specific meaning to the idea of computability. It has been eye-opening to learn of the innovative and path-breaking work that Turing did at Bletchley Park, Bell Labs, and elsewhere on real computational machines. His development of actual computing machinery and his invention of the activity of “programming” (“construction of tables”) show his contributions to digital computing to be far more concrete and technical than I had previously understood. His work late in the war on the difficult problem of encrypting speech for secure telephone conversation was also very interesting and innovative. Further, his understanding of the priority of creating a technology that would support “random access memory” was especially prescient. Here is Hodges’ summary of Turing’s view in 1947:

Considering the storage problem, he listed every form of discrete store that he and Don Bayley had thought of, including film, plugboards, wheels, relays, paper tape, punched cards, magnetic tape, and ‘cerebral cortex’, each with an estimate, in some cases obviously fanciful, of access time, and of the number of digits that could be stored per pound sterling. At one extreme, the storage could all be on electronic valves, giving access within a microsecond, but this would be prohibitively expensive. As he put it in his 1947 elaboration, ‘To store the content of an ordinary novel by such means would cost many millions of pounds.’ It was necessary to make a trade-off between cost and speed of access. He agreed with von Neumann, who in the EDVAC report had referred to the future possibility of developing a special ‘Iconoscope’ or television screen, for storing digits in the form of a pattern of spots. This he described as ‘much the most hopeful scheme, for economy combined with speed.’ (403)

These contributions are no doubt well known by experts on the history of computing. But for me it was eye-opening to learn how directly Turing was involved in the design and implementation of various automatic computing engines, including the British ACE machine itself at the National Physical Laboratory (link). Here is Turing’s description of the evolution of his thinking on this topic, extracted from a lecture in 1947:

Some years ago I was researching on what might now be described as an investigation of the theoretical possibilities and limitations of digital computing machines. I considered a type of machine which had a central mechanism and an infinite memory which was contained on an infinite tape. This type of machine appeared to be sufficiently general. One of my conclusions was that the idea of a ‘rule of thumb’ process and a ‘machine process’ were synonymous. The expression ‘machine process’ of course means one which could be carried out by the type of machine I was considering…. Machines such as the ACE may be regarded as practical versions of this same type of machine. There is at least a very close analogy. (399)

At the same time his clear logical understanding of the implications of a universal computing machine was genuinely visionary. He was evangelical in his advocacy of the goal of creating a machine with a minimalist and simple architecture where all the complexity and specificity of the use of the machine derives from its instructions (programming), not its specialized hardware.
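The table-driven “machine process” Turing describes is simple enough to sketch in a few lines of code. The following minimal simulator is my own illustration, not anything Turing wrote: a generic machine whose entire behavior is given by an instruction table (the “construction of tables,” in his phrase). Here the table makes it increment a binary number, but swapping in a different table yields a different machine — all the specificity lives in the instructions, not the hardware.

```python
# A minimal Turing machine simulator. The state names, tape encoding,
# and transition table are invented for this sketch.
def run(tape_str, rules, state="right", blank="_"):
    tape = dict(enumerate(tape_str))  # sparse tape, indexed by cell
    head = 0
    while state != "halt":
        symbol = tape.get(head, blank)
        write, move, state = rules[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape)).strip(blank)

# Instruction table for binary increment, e.g. "1011" -> "1100".
rules = {
    # scan right to find the least significant digit
    ("right", "0"): ("0", "R", "right"),
    ("right", "1"): ("1", "R", "right"),
    ("right", "_"): ("_", "L", "carry"),
    # add one, propagating the carry leftward
    ("carry", "1"): ("0", "L", "carry"),
    ("carry", "0"): ("1", "R", "halt"),
    ("carry", "_"): ("1", "R", "halt"),
}

print(run("1011", rules))  # -> "1100"
```

The `run` function is the fixed, minimal “hardware”; everything the machine does is determined by the table it is given, which is exactly the architectural principle Turing advocated.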

Also interesting is the fact that Turing had a literary impulse (not often exercised), and wrote at least one semi-autobiographical short story about a sexual encounter. Only a few pages survive. Here is a paragraph quoted by Hodges:

Alec had been working rather hard until two or three weeks before. It was about interplanetary travel. Alec had always been rather keen on such crackpot problems, but although he rather liked to let himself go rather wildly to newspapermen or on the Third Programme when he got the chance, when he wrote for technically trained readers, his work was quite sound, or had been when he was younger. This last paper was real good stuff, better than he’d done since his mid twenties when he had introduced the idea which is now becoming known as ‘Pryce’s buoy’. Alec always felt a glow of pride when this phrase was used. The rather obvious double-entendre rather pleased him too. He always liked to parade his homosexuality, and in suitable company Alec could pretend that the word was spelt without the ‘u’. It was quite some time now since he had ‘had’ anyone, in fact not since he had met that soldier in Paris last summer. Now that his paper was finished he might justifiably consider that he had earned another gay man, and he knew where he might find one who might be suitable. (564)

The passage is striking for several reasons; but most obviously, it brings together the two leading themes of his life, his scientific imagination and his sexuality.

This biography of Turing reinforces for me the value of the genre more generally. The reader gets a better understanding of the important developments in mathematics and computing that Turing achieved; the book presents a vivid view of the high-stakes secret conflict in which Turing played a crucial part, using cryptographic advances to defeat the Nazi submarine threat; and it gives personal insight into the singular individual who developed into such a world-changing logician, engineer, and scientist.

Social generativity and complexity

The idea of generativity in the realm of the social world expresses the notion that social phenomena are generated by the actions and thoughts of the individuals who constitute them, and nothing else (link, link). More specifically, the principle of generativity postulates that the properties and dynamic characteristics of social entities like structures, ideologies, knowledge systems, institutions, and economic systems are produced by the actions, thoughts, and dispositions of the set of individuals who make them up. There is no other kind of influence that contributes to the causal and dynamic properties of social entities. Begin with a population of individuals with such-and-so mental and behavioral characteristics; allow them to interact with each other over time; and the structures we observe emerge as a determinate consequence of these interactions.

This view of the social world lends great ontological support to the methods associated with agent-based models (link). Here is how Joshua Epstein puts the idea in Generative Social Science: Studies in Agent-Based Computational Modeling:

Agent-based models provide computational demonstrations that a given microspecification is in fact sufficient to generate a macrostructure of interest…. Rather, the generativist wants an account of the configuration’s attainment by a decentralized system of heterogeneous autonomous agents. Thus, the motto of generative social science, if you will, is: If you didn’t grow it, you didn’t explain its emergence. (42)
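Epstein’s motto can be illustrated with a minimal agent-based sketch. The model below is a toy one-dimensional variant of Schelling’s segregation model, with all parameters invented for illustration (it is not code from Epstein’s book): agents with only a mild aversion to being completely surrounded by the other type nonetheless “grow” a clustered macrostructure.

```python
import random

random.seed(42)

# 100 agents of two types on a ring. An agent objects only when BOTH
# of its immediate neighbors are of the other type -- a mild,
# individually non-segregationist preference.
N = 100
agents = ["A", "B"] * (N // 2)
random.shuffle(agents)

def same_neighbors(i):
    """How many of agent i's two ring neighbors share its type (0-2)."""
    return (agents[i - 1] == agents[i]) + (agents[(i + 1) % N] == agents[i])

def step():
    """Move one unhappy agent to a random spot; False when all content."""
    movers = [i for i in range(N) if same_neighbors(i) == 0]
    if not movers:
        return False
    a = agents.pop(random.choice(movers))
    agents.insert(random.randrange(N), a)
    return True

steps = 0
while step() and steps < 100_000:
    steps += 1

# In a random arrangement an agent averages about one same-type
# neighbor; the grown configuration is more clustered than that.
clustering = sum(same_neighbors(i) for i in range(N)) / N
print(f"{steps} moves, average same-type neighbors = {clustering:.2f}")
```

The macrostructure (clustered “neighborhoods”) is not stipulated anywhere in the code; it is attained by a decentralized system of heterogeneous agents, which is precisely what the generativist demands of an explanation.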

Consider an analogy with cooking. The properties of the cake are generated by the properties of the ingredients, their chemical properties, and the sequence of steps that are applied to the assemblage of the mixture from the mixing bowl to the oven to the cooling board. The final characteristics of the cake are simply the consequence of the chemistry of the ingredients and the series of physical influences that were applied in a given sequence.

Now consider the concept of a complex system. A complex system is one in which there is a multiplicity of causal factors contributing to the dynamics of the system, in which there are causal interactions among the underlying causal factors, and in which causal interactions are often non-linear. Non-linearity is important here, because it implies that a small change in one or more factors may lead to very large changes in the outcome. We like to think of causal systems as consisting of causal factors whose effects are independent of each other and whose influence is linear and additive.

A gardener is justified in thinking of growing tomatoes in this way: a little more fertilizer, a little more water, and a little more sunlight each lead to a little more tomato growth. But imagine a garden in which the effect of fertilizer on tomato growth depends on the recent gradient of water provision, and the effects of both positive influences depend substantially on the recent amount of sunlight available. Under these circumstances it is difficult to predict the final size of the tomato crop given information about the quantities of the inputs.

One of the key insights of complexity science is that generativity is fully compatible with a wicked level of complexity. The tomato’s size is generated by its history of growth, determined by the sequence of inputs over time. But for the reason just mentioned, the complexity of interactions between water, sunlight, and fertilizer in their effects on growth means that the overall dynamics of tomato growth are difficult to reconstruct.
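A toy calculation makes the point about interaction effects concrete. The functional form below is invented purely for illustration (it is not an agronomic model); what matters is only that fertilizer's effect depends on water, so one-factor-at-a-time extrapolation misestimates the joint effect.

```python
# Invented toy growth function: fertilizer only helps in proportion to
# available water (a multiplicative interaction), and the whole effect
# is modulated by sunlight.
def tomato_yield(water, sun, fertilizer):
    effective_fert = fertilizer * water          # interaction term
    return sun * (water + effective_fert) / (1.0 + sun)

# One-factor-at-a-time reasoning around a baseline:
base = tomato_yield(1.0, 1.0, 1.0)
d_water = tomato_yield(2.0, 1.0, 1.0) - base     # effect of more water alone
d_fert = tomato_yield(1.0, 1.0, 2.0) - base      # effect of more fertilizer alone

# Additive prediction for raising both inputs at once...
additive_prediction = base + d_water + d_fert
# ...versus what the interacting system actually does:
actual = tomato_yield(2.0, 1.0, 2.0)
```

With these made-up numbers the additive prediction is 2.5 while the system actually yields 3.0: the gardener who reasons one input at a time gets the joint outcome wrong, which is the signature of non-linear interaction.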

Now consider the idea of strong emergence — the idea that some aggregates possess properties that cannot in principle be explained by reference to the causal properties of the constituents of the aggregate. This means that the properties of the aggregate are not generated by the workings of the constituents; otherwise we would be able in principle to explain the properties of the aggregate by demonstrating how they derive from the (complex) pathways leading from the constituents to the aggregate. This version of the absolute autonomy of some higher-level properties is inherently mysterious. It implies that the aggregate does not supervene upon the properties of the constituents; there could be different aggregate properties with identical constituent properties. And this seems ontologically untenable.

The idea of ontological individualism captures this intuition in the setting of social phenomena: social entities are ultimately composed of and constituted by the properties of the individuals who make them up, and nothing else. This does not imply methodological individualism; for reasons of complexity or computational limitations it may be practically impossible to reconstruct the pathways through which the social entity is generated out of the properties of individuals. But ontological individualism places an ontological constraint on the way that we conceptualize the social world. And it gives a concrete meaning to the idea of the microfoundations for a social entity. The microfoundations of a social entity are the pathways and mechanisms, known or unknown, through which the social entity is generated by the actions and intentionality of the individuals who constitute it.

Empowering the safety officer?

How can industries involving processes that create large risks of harm for individuals or populations be modified so they are more capable of detecting and eliminating the precursors of harmful accidents? How can nuclear accidents, aviation crashes, chemical plant explosions, and medical errors be reduced, given that each of these activities involves large bureaucratic organizations conducting complex operations and with substantial inter-system linkages? How can organizations be reformed to enhance safety and to minimize the likelihood of harmful accidents?

One of the lessons learned from the Challenger space shuttle disaster is the importance of a strongly empowered safety officer in organizations that deal in high-risk activities. This means the creation of a position dedicated to ensuring safe operations that falls outside the normal chain of command. The idea is that the normal decision-making hierarchy of a large organization has a built-in tendency to maintain production schedules and avoid costly delays. In other words, there is a built-in incentive to treat safety issues with lower priority than most people would expect.

If there had been an empowered safety officer in the launch hierarchy for the Challenger launch in 1986, there is a good chance this officer would have listened more carefully to the Morton-Thiokol engineering team’s concerns about low temperature damage to O-rings and would have ordered a halt to the launch sequence until temperatures in Florida rose above the critical value. The Rogers Commission faulted the decision-making process leading to the launch decision in its final report on the accident (The Report of the Presidential Commission on the Space Shuttle Challenger Accident – The Tragedy of Mission 51-L in 1986 – Volume One, Volume Two, Volume Three).

This approach is productive because empowering a safety officer creates a different set of interests in the management of a risky process. The safety officer’s interest is in safety, whereas other decision makers are concerned about revenues and costs, public relations, reputation, and other instrumental goods. So a dedicated safety officer is empowered to raise safety concerns that other officers might be hesitant to raise. Ordinary bureaucratic incentives may lead to underestimating risks or concealing faults; so lowering the accident rate requires giving some individuals the incentive and power to act effectively to reduce risks.

Similar findings have emerged in the study of medical and hospital errors. It has been recognized that high-risk activities are made less risky by empowering all members of the team to call a halt in an activity when they perceive a safety issue. When all members of the surgical team are empowered to halt a procedure when they note an apparent error, serious operating-room errors are reduced. (Here is a report from the American College of Obstetricians and Gynecologists on surgical patient safety; link. And here is a 1999 National Academy report on medical error; link.)

The effectiveness of a team-based approach to safety depends on one central fact: there is a high level of expertise embodied in the staff operating a surgical suite, an engineering laboratory, or a drug manufacturing facility. By empowering these individuals to stop a procedure when they judge there is an unrecognized error in play, we greatly extend the amount of embodied knowledge brought to bear on the process. The surgeon, the commanding officer, or the lab director is no longer the sole expert whose judgments count.

But it also seems clear that these innovations don’t work equally well in all circumstances. Take nuclear power plant operations. In Atomic Accidents: A History of Nuclear Meltdowns and Disasters: From the Ozark Mountains to Fukushima James Mahaffey documents multiple examples of nuclear accidents that resulted from the efforts of mid-level workers to address an emerging problem in an improvised way. In the case of nuclear power plant safety, it appears that the best prescription for safety is to insist on rigid adherence to pre-established protocols. In this case the function of a safety officer is to monitor operations to ensure protocol conformance — not to exercise independent judgment about the best way to respond to an unfavorable reactor event.

It is in fact an interesting exercise to try to identify the kinds of operations in which these innovations are likely to be effective.

Here is a fascinating interview in Slate with Jim Bagian, a former astronaut, one-time director of the Veterans Administration’s National Center for Patient Safety, and distinguished safety expert; link. Bagian emphasizes the importance of taking a system-based approach to safety. Rather than assigning blame to the specific individuals whose actions led to an accident, he urges tracing the accident back to its institutional, organizational, or logistical background. What can be changed in the process — of delivering medications to patients, of fueling a rocket, or of moving nuclear solutions around in a laboratory — that makes the likelihood of an accident substantially lower?

The safety principles involved here seem fairly simple: cultivate a culture in which errors and near-misses are reported and investigated without blame; empower individuals within risky processes to halt the process if their expertise and experience indicates the possibility of a significant risky error; create individuals within organizations whose interests are defined in terms of the identification and resolution of unsafe practices or conditions; and share information about safety within the industry and with the public.

The second American revolution

The first American Revolution broke the bonds of control exercised by a colonial power over the actions and aspirations of a relatively small number of people in North America in 1776 — about 2.5 million people. The second American Revolution promises to affect vastly larger numbers of Americans and their freedom, and it is not yet complete. (There were about 19 million African-Americans in the United States in 1960.)

This is the Civil Rights revolution, which has been underway since 1865 (the end of the Civil War); which took on increased urgency from the 1930s through the 1950s (the period of Jim Crow laws and a coercive, violent form of white supremacy); and which came to fruition in the 1960s with collective action by thousands of ordinary people and the courageous, wise leadership of men and women like Dr. Martin Luther King, Jr. When we celebrate the life and legacy of MLK, it is this second American revolution that is the most important part of his legacy.

And this is indeed a revolution. It requires a sustained and vigilant struggle against a powerful status quo; it requires gaining political power and exercising political power; and it promises to enhance the lives, dignity, and freedoms of millions of Americans.

This revolution is not complete. The assault on voting rights that we have seen in the past decade, the persistent gaps that exist in income, health, and education between white Americans and black Americans, the ever-more-blatant expressions of racist ideas at the highest level — all these unmistakable social facts establish that the struggle for racial equality is not finished.

Dr. King’s genius was his understanding, from early in his vocation, that change would require courage and sacrifice, and that it would also require great political wisdom. He realized that enduring social change requires changing the way that people think; it requires moral change as well as structural change. This is why Dr. King’s profoundly persuasive rhetoric was so important: he was able to express through his speeches and his teaching a set of moral values that almost all Americans could embrace. And by embracing these values they themselves changed.

The struggle in South Africa against apartheid combined both aspects of this story — anti-colonialism and anti-racism. The American civil rights movement focused on uprooting the system of racial oppression and discrimination this country had created since Reconstruction. It focused on creating the space necessary for African-American men and women, boys and girls, to engage in their own struggles for freedom and for personal growth. It insisted upon the same opportunities for black children that were enjoyed by the children of the majority population.

Will the values of racial equality and opportunity prevail? Will American democracy finally embrace and make real the values of equality, dignity, and opportunity that Dr. King expressed so eloquently? Will the second American revolution finally erase the institutions and behaviors of several centuries of oppression?

Dr. King had a fundamental optimism that was grounded in his faith: “the arc of the moral universe is long, but it bends toward justice.” But of course we understand that only long, sustained commitment to justice can bring about this arc of change. And the forces of reaction are particularly strong in the current epoch of political struggle. So it will require the courage and persistence of millions of Americans committed to these ideals if racial justice is finally to prevail.

Here is an impromptu example of King’s passionate commitment to social change through non-violence. This was recorded in Yazoo City, Mississippi in 1966, during James Meredith’s March against Fear.

Is history probabilistic?

Many of our intuitions about causality are driven by a background assumption of determinism: one cause, one effect, always. But it is evident in many realms — including especially the social world — that causation is probabilistic. A cause makes its effects more likely than they would be in the absence of the cause. Exposure to a Zika-infected mosquito makes it more likely that the individual will acquire the illness; but many people exposed to Zika mosquitoes do not develop the illness. Wesley Salmon formulated this idea in terms of the concept of causal relevance: C is causally relevant to O just in case the conditional probability of O given C is different from the unconditional probability of O — that is, P(O | C) ≠ P(O). (Some causes reduce the probability of their outcomes.)
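Salmon's criterion is easy to check against simulated data. In the sketch below the exposure rate and infection probabilities are invented for illustration; the point is only that the estimated P(O | C) comes apart from the unconditional P(O).

```python
import random

def simulate(n=100_000, p_exposed=0.3, p_ill_exposed=0.20,
             p_ill_unexposed=0.02, seed=7):
    """Simulate exposure (C) and illness (O) with invented probabilities,
    and estimate P(O) and P(O | C) from the sample."""
    rng = random.Random(seed)
    exposed = ill = exposed_ill = 0
    for _ in range(n):
        c = rng.random() < p_exposed                       # exposure event
        o = rng.random() < (p_ill_exposed if c else p_ill_unexposed)
        exposed += c
        ill += o
        exposed_ill += c and o
    p_o = ill / n                                          # estimate of P(O)
    p_o_given_c = exposed_ill / exposed                    # estimate of P(O | C)
    return p_o, p_o_given_c
```

With these assumed parameters P(O | C) ≈ 0.20 while P(O) ≈ 0.074, so exposure is causally relevant in Salmon's sense even though most exposed individuals never fall ill.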

There is much more to say about this model — chiefly the point that causes rarely exercise their powers in isolation from other factors. So, as J.L. Mackie worked out in The Cement of the Universe: A Study of Causation, we need to be looking for conjunctions of factors that jointly affect the probability of the occurrence of O. Causation is generally conjunctural. But the essential fact remains: no matter how many additional factors we add to the analysis, we are still unlikely to arrive at deterministic causal statements: “whenever ABCDE occurs, O always occurs.”

But here is another kind of certainty that also arises in a probabilistic world. When sequences are governed by objective probabilities, we are uncertain about any single outcome. But we can be highly confident that a long series of trials will converge on the underlying probabilities. In an extended series of throws of a fair pair of dice the frequency of throwing a 7 will converge on 6/36, whereas the frequency of throwing a 12 will converge on 1/36. So we can be confident that the eventual distribution of outcomes will approximate the familiar triangular histogram of two-dice totals, peaking at 7.
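This convergence is easy to demonstrate by simulation; the number of trials and the seed below are arbitrary choices for the sketch.

```python
import random

def dice_frequencies(trials=200_000, seed=11):
    """Throw a fair pair of dice many times and return the observed
    relative frequency of each total from 2 through 12."""
    rng = random.Random(seed)
    counts = {total: 0 for total in range(2, 13)}
    for _ in range(trials):
        counts[rng.randint(1, 6) + rng.randint(1, 6)] += 1
    return {total: c / trials for total, c in counts.items()}
```

With 200,000 throws the observed frequency of a 7 sits within a fraction of a percent of 6/36 ≈ 0.167, and the frequency of a 12 within a fraction of a percent of 1/36 ≈ 0.028 — certainty about the aggregate amid uncertainty about every single throw.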

Can we look at history as a vast series of stochastic events linked by relations of probabilistic causation? And does this permit us to make historical predictions after all?

Let’s explore that idea. Imagine that history is entirely the product of a set of stochastic events connected with each other by fixed objective probabilities. And suppose we are interested in a particular kind of historical outcome — say the emergence of central states involving dictatorship and democracy. We might represent this situation as a multi-level process of social-political complexification — a kind of primordial soup of political development by opportunistic agents within a connected population in a spatial region. Suppose we postulate a simple political theory of competition and cooperation driving patterns of alliance formation, institution formation, and the aggregation of power by emerging institutions. (This sounds somewhat similar to Tilly’s theory of state formation in Coercion, Capital and European States, A.D. 990 – 1992 and to Michael Mann’s treatment of civilizations in The Sources of Social Power: Volume 1, A History of Power from the Beginning to AD 1760.)

Finally we need to introduce some kind of mechanism of invention — of technologies, institutions, and values systems. This is roughly analogous to the mechanism of genetic mutation in the evolution of life.

Now we are ready to ask some large historical questions about state formation in numerous settings. What is the likelihood of the emergence of a stable system of self-governing communities? What is the likelihood that a given population will arrive at a group of inventions involving technology, institutions, and values systems that permit the emergence of a central state capable of imposing its will over distance, collecting revenues to support its activities, and conducting warfare? And what is the likelihood of local failure, resulting in the extinction of the local population? We might look at the historical emergence of various political-economic forms such as plunder societies (Genghis Khan), varieties of feudalism, and medieval city states as different outcomes resulting from the throw of the dice in these different settings.

Self-governance seems like a fairly unlikely outcome within this set of assumptions. Empire and dictatorship seem like the more probable outcomes of the interplay of self-interest, power, and institutions. In order to get self-governance out of processes like these we need to identify a mechanism through which collective action by subordinate agents is possible. Such mechanisms are indeed familiar — the pressures by subordinate but powerful actors in England leading to the reform of absolutist monarchy, the overthrow of the French monarchy by revolutionary uprisings, the challenges to the Chinese emperor represented by a series of major rebellions in the nineteenth century. But such counter-hegemonic processes are often failures, and even when successful they are often coopted by powerful insiders. These possibilities lead us to estimate a low likelihood of stable self-governance.

So this line of thought suggests that a stochastic model of the emergence of central states is possible but discouraging. Assign probabilities to the various kinds of events that need to occur at each of the several stages of civilizational development; run the model a large number of times; and you have a Monte Carlo model of the emergence of dictatorship and democracy. And the discouraging likelihood is that democratic self-governance is a rare outcome.
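Here is what such a Monte Carlo model might look like in miniature. Every stage, outcome label, and probability below is a made-up assumption for illustration; the sketch shows only the structure of the argument — when self-governance sits at the end of a chain of uncertain stages, it comes up rarely.

```python
import random

# Hypothetical stages of political development. Each stage must be cleared
# for history to proceed; failure yields the named alternative outcome.
# All probabilities are invented for the sake of the sketch.
STAGES = [
    (0.5, "extinction"),        # population survives early shocks
    (0.6, "fragmentation"),     # stable alliances form
    (0.7, "plunder society"),   # revenue institutions emerge
    (0.8, "feudalism"),         # a central state consolidates
]
P_SELF_GOVERNANCE = 0.2         # vs. dictatorship, at the final stage

def run_history(rng):
    """One play of the stochastic game of political development."""
    for p_success, failure_outcome in STAGES:
        if rng.random() >= p_success:
            return failure_outcome
    return "self-governance" if rng.random() < P_SELF_GOVERNANCE \
        else "dictatorship"

def monte_carlo(runs=50_000, seed=3):
    """Run many histories and tally the relative frequency of outcomes."""
    rng = random.Random(seed)
    tallies = {}
    for _ in range(runs):
        outcome = run_history(rng)
        tallies[outcome] = tallies.get(outcome, 0) + 1
    return {k: v / runs for k, v in tallies.items()}
```

Under these assumed probabilities only about 17% of histories ever reach the final stage, and self-governance emerges in roughly 3% of all runs — far less often than dictatorship, which is the discouraging result the text anticipates.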

However, there are several crucial flaws in this analysis. First, the picture is flawed by the fact that history is made by purposive agents, not algorithms or mechanical devices. These actors are not characterized by fixed objective probabilities. Historical actors have preferences and take actions to influence outcomes at crucial points. Second, agents are not fixed over time, but rather develop through learning. They are complex adaptive agents. They achieve innovations in their practices just as the engineers and bureaucrats do. They develop and refine repertoires of resistance (Tilly). So each play of the game of political history is novel in important respects. History is itself influenced by previous history.

Finally, there is the familiar shortcoming of simulations everywhere: a model along these lines unavoidably requires simplifying assumptions about the causal factors in play, and the model’s results are often highly sensitive to those simplifications.

So it is important to understand that social causation is generally probabilistic; but this fact does not permit us to assign objective probabilities to the emergence of central states, dictatorships, or democracies.

(See earlier posts on more successful efforts to use Bayesian methods to assess the likelihood of the emergence of specific outcomes in constrained historical settings;
link, link.)

Organizational dysfunction

What is a dysfunction when it comes to the normal workings of an organization? In order to identify dysfunctions we need to have a prior conception of the “purpose” or “agreed upon goals” of an organization. Fiscal agencies collect taxes; child protection services work to ensure that foster children are placed in safe and nurturing environments; air travel safety regulators ensure that aircraft and air fields meet high standards of maintenance and operations; drug manufacturers produce safe, high-quality medications at a reasonable cost. A dysfunction might be defined as an outcome for an organization or institution that runs significantly contrary to the purpose of the organization. We can think of major failures in each of these examples.

But we need to make a distinction between failure and dysfunction. The latter concept is systemic, having to do with the design and culture of the organization. Failure can happen as a result of dysfunctional arrangements; but it can happen as a result of other kinds of factors as well. For example, the Tylenol crisis of 1982 resulted from malicious tampering by an external third party, not organizational dysfunction.

Here is an example from a Harvard Business Review article by Gill Corkindale indicating some of the kinds of dysfunction that can be identified in contemporary business organizations:

Poor organizational design and structure results in a bewildering morass of contradictions: confusion within roles, a lack of co-ordination among functions, failure to share ideas, and slow decision-making bring managers unnecessary complexity, stress, and conflict. Often those at the top of an organization are oblivious to these problems or, worse, pass them off as challenges to overcome or opportunities to develop. (link)

And the result of failures like these is often poor performance and sometimes serious crisis for the organization or its stakeholders.

But — as in software development — it is sometimes difficult to distinguish between a feature and a bug. What is dysfunctional for the public may indeed be beneficial for other actors who are in a position to influence the design and workings of the organization. This is the key finding of researchers like Jack Knight, who argues in Institutions and Social Conflict for the prevalence of conflicting interests in the design and operations of many institutions and organizations; link. And it follows immediately from the approach to organizations encapsulated in the Fligstein and McAdam theory of strategic action fields (link).

There is an important related question to consider: why do recognized dysfunctional characteristics persist? When a piano is out of tune, the pianist and the audience insist on a professional tuning. When the Nuclear Regulatory Commission persistently fails to enforce its regulations through rigorous inspection protocols, nothing happens (Union of Concerned Scientists, link; Perrow, link). Is it that the individuals responsible for the day-to-day functioning of the organization are complacent or unmotivated? Is it that there are contrary pressures that arise to oppose corrective action? Or, sometimes, is it that the adjustments needed to correct one set of dysfunctions can be expected to create another, even more harmful, set of bad outcomes?

One intriguing hypothesis is that correction of dysfunctions requires observation, diagnosis, and incentive alignment. It is necessary that some influential actor or group should be able to observe the failure; it should be possible to trace the connection between the failure and the organizational features that lead to it; and there should be some way of aligning the incentives of the powerful actors within and around the organization so that their best interests are served by their taking the steps necessary to correct the dysfunction. If any of these steps is blocked, then a dysfunctional organization can persist indefinitely.

The failures of Soviet agriculture were observable and the links between organization and farm inefficiency were palpable; but the Soviet public had no real leverage with respect to the ministries and officials who ran the agricultural system. Therefore Soviet officials had no urgent incentive to reform agriculture. So the dysfunctions of collective farming were not corrected until the collapse of the USSR. A dysfunction in a corporation within a market economy that significantly impacts its revenues and profits will be noticed by shareholders, and pressure will be exerted to correct the dysfunction. The public has a strong interest in nuclear reactor safety; but its interests are weak and diffused when compared to the interests of the industry and its lobbyists; so Congressional opposition to reform of the agency remains strong. The same could be said with respect to the current crisis at the Consumer Financial Protection Bureau; the influence of the financial industry and its lobbyists can be concentrated in a way that the interests of the public cannot.

Charles Perrow has written extensively on the failures of the US regulatory sector (link). Here is his description of regulatory capture in the nuclear power industry:

Nuclear safety is problematic when nuclear plants are in private hands because private firms have the incentive and, often, the political and economic power to resist effective regulation. That resistance often results in regulators being captured in some way by the industry. In Japan and India, for example, the regulatory function concerned with safety is subservient to the ministry concerned with promoting nuclear power and, therefore, is not independent. The United States had a similar problem that was partially corrected in 1975 by putting nuclear safety into the hands of an independent agency, the Nuclear Regulatory Commission (NRC), and leaving the promotion of nuclear power in the hands of the Energy Department. Japan is now considering such a separation. It should make one. Since the accident at Fukushima, many observers have charged that there is a revolving door between industry and the nuclear regulatory agency in Japan — what the New York Times called a “nuclear power village” — compromising the regulatory function. (link)

Ten years of Understanding Society


This month marks the tenth anniversary of Understanding Society. The blog now includes 1,176 posts on topics in the philosophy of social science, the heterogeneity of the social world, current thinking about social problems, and occasional contributions on how we can envision a better future. Thanks to all of the readers who have visited during the past twelve months!

The blog continues to serve as a stimulating outlet for intellectual work for me. Each post is roughly a thousand words, and my aim is to develop one idea or address one problem in the post. I’ve never tried for consistency or thematic coherence over time; the blog is more of a research notebook for me, allowing me to capture ideas and topics as they come up. Since the beginning I’ve looked at it as a kind of “open source philosophy,” allowing for the development of ideas and arguments in a piecemeal way. At the same time, it serves as a kind of seismograph for me, letting me recall the kind of topics that have come to the fore over time.

And, as I had hoped, the blog has created a platform for moving new ideas from conception to academic publication. My book New Directions in the Philosophy of Social Science appeared about a year ago, and it was wholly developed through the blog.

In the past year there have been a number of posts on familiar topics — critical realism, social mechanisms, and social inequality, for example. These are enduring topics in my research and writing. But a few new topics have arisen as well. One is the question of the dynamics and extent of hate-driven political movements, such as the populist-nationalist extremism of the Trump campaign and presidency (link). Another is completely unrelated — the fascinating history of fundamental physics during the early decades of the twentieth century, culminating in the development of the atomic bomb (link). And a third is an emerging area of interest for me — the nature and causes of organizational dysfunction in contemporary institutions (link). I’ve even had occasion to reflect on cephalopod intelligence (link).

The blog continues to enjoy increasing numbers of visitors. Google recorded over 140,000 page views on the blog in the past month, resulting in over 1.5 million page views over the past year. Part of that traffic comes from followers on Twitter (2,106), Facebook (7,786), Google+ (1,621), and Flipboard, and a great number of the visits are directed by Google searches on relevant topics.

So thank you, readers and visitors, and I hope you will keep reading and commenting!
