Experimental methods in sociology

An earlier post noted the increasing importance of experimentation in some areas of economics (link), and posed the question of whether there is a place for experimentation in sociology as well. Here I’d like to examine that question a bit further.

Let’s begin by asking the simple question: what is an experiment? An experiment is an intervention through which a scientist seeks to identify the possible effects of a given factor or “treatment”. The effect may be thought to be deterministic (whenever X occurs, Y occurs); or it may be probabilistic (the occurrence of X influences the probability of the occurrence of Y). Plainly, the experimental evaluation of probabilistic causal hypotheses requires repeating the experiment a number of times and evaluating the results statistically; whereas a deterministic causal hypothesis can in principle be refuted by a single trial.
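The statistical logic of the probabilistic case can be illustrated with a small simulation. This is only a sketch with invented probabilities: it imagines a randomized trial in which a treatment X raises the probability of an outcome Y from 0.30 to 0.40, and shows that the effect is visible only in aggregate frequencies across many subjects, never in a single trial.

```python
import random

def run_trials(p_control, p_treated, n=1000, seed=42):
    """Simulate n subjects per arm of a hypothetical randomized trial
    in which the outcome Y occurs with a different probability in each arm."""
    rng = random.Random(seed)
    control = sum(rng.random() < p_control for _ in range(n))
    treated = sum(rng.random() < p_treated for _ in range(n))
    return control / n, treated / n

# A probabilistic hypothesis: exposure to X raises P(Y) from 0.30 to 0.40.
# Any single subject pair is uninformative; the frequencies carry the evidence.
rate_control, rate_treated = run_trials(0.30, 0.40)
print(rate_control, rate_treated)
```

With a thousand subjects per arm the observed frequencies cluster tightly enough around the true probabilities that the treatment effect emerges from the noise; a deterministic hypothesis, by contrast, would be refuted by a single subject for whom X occurred and Y did not.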

In “The Principles of Experimental Design and Their Application in Sociology” (link) Michelle Jackson and D.R. Cox provide a simple and logical specification of experimentation:

We deal here with investigations in which the effects of a number of alternative conditions or treatments are to be compared. Broadly, the investigation is an experiment if the investigator controls the allocation of treatments to the individuals in the study and the other main features of the work, whereas it is observational if, in particular, the allocation of treatments has already been determined by some process outside the investigator’s control and detailed knowledge. The allocation of treatments to individuals is commonly labeled manipulation in the social science context. (Jackson and Cox 2013: 28)

There are several relevant kinds of causal claims in sociology that might admit of experimental investigation, corresponding to all four causal linkages implied by the model of Coleman’s boat (Foundations of Social Theory)—micro-macro, macro-micro, micro-micro, and macro-macro (link). Sociologists generally pay close attention to the relationships that exist between structures and social actors, extending in both directions. Hypotheses about causation in the social world require testing or other forms of empirical evaluation through the collection of evidence. It is plausible to ask whether the methods associated with experimentation are available to sociology. In many instances, the answer is, yes.

There appear to be three kinds of experiments that might make sense in sociology:

  1. Experiments evaluating hypotheses about features of human motivation and behavior
  2. Experiments evaluating hypotheses about the effects of features of the social environment on social behavior
  3. Experiments evaluating hypotheses about the effects of “interventions” on the characteristics of an organization or local institution

First, sociological theories generally make use of more or less explicit theories of agents and their behavior. These theories could be evaluated using laboratory-based designs that place experimental subjects in specified social arrangements, parallel to existing methods in experimental economics. For example, Durkheim, Goffman, Coleman, and Hedström all provide different accounts of the actors who constitute social phenomena. It is feasible to design experiments along the lines of experimental economics to evaluate the behavioral hypotheses advanced by various sociologists.

Second, sociology is often concerned with the effects of social relationships on social behavior—for example, friendships, authority relations, or social networks. It would appear that these effects can be probed through direct experimentation, where the researcher creates artificial social relationships and observes behavior. Matthew Salganik et al.’s internet-based experiments (2006, 2009) on “culture markets” fall in this category (Hedström 2006). Hedström describes the research by Salganik, Dodds, and Watts (2006) in these terms:

Salganik et al. (2) circumvent many of these problems [of survey-based methodology] by using experimental rather than observational data. They created a Web-based world where more than 14,000 individuals listened to previously unknown songs, rated them, and freely downloaded them if they so desired. Subjects were randomly assigned to different groups. Individuals in only some groups were informed about how many times others in their group had downloaded each song. The experiment assessed whether this social influence had any effects on the songs the individuals seemed to prefer. 

As expected, the authors found that individuals’ music preferences were altered when they were exposed to information about the preferences of others. Furthermore, and more importantly, they found that the extent of social influence had important consequences for the collective outcomes that emerged. The greater the social influence, the more unequal and unpredictable the collective outcomes became. Popular songs became more popular and unpopular songs became less popular when individuals influenced one another, and it became more difficult to predict which songs were to emerge as the most popular ones the more the individuals influenced one another. (787)
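The cumulative-advantage dynamic described in this passage can be imitated with a toy simulation. This is a minimal sketch, not a reconstruction of Salganik, Dodds, and Watts’s actual design: the song appeals, the influence rule, and all parameters are invented, and inequality of outcomes is summarized with a Gini coefficient.

```python
import random

def gini(counts):
    """Gini coefficient of a list of nonnegative counts (0 = perfectly equal)."""
    xs = sorted(counts)
    n = len(xs)
    total = sum(xs)
    if total == 0:
        return 0.0
    cum = sum((i + 1) * x for i, x in enumerate(xs))
    return (2 * cum) / (n * total) - (n + 1) / n

def market(n_songs=50, n_subjects=2000, influence=True, seed=0):
    """Hypothetical cumulative-advantage market: each subject downloads one
    song; with social influence, choice odds scale with prior downloads."""
    rng = random.Random(seed)
    appeal = [rng.random() for _ in range(n_songs)]  # intrinsic quality
    downloads = [0] * n_songs
    for _ in range(n_subjects):
        if influence:
            # popularity feedback: appeal weighted by prior downloads
            weights = [a * (d + 1) for a, d in zip(appeal, downloads)]
        else:
            weights = appeal  # independent choice, no social information
        downloads[rng.choices(range(n_songs), weights=weights)[0]] += 1
    return downloads

print(gini(market(influence=False)), gini(market(influence=True)))
```

Runs of this sketch typically show markedly higher inequality of downloads under the influence condition, and which songs end up on top varies from seed to seed—a rough analogue of the inequality and unpredictability the authors report.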

Third, some sociologists are especially interested in the effects of micro-context on individual actors and their behavior. Erving Goffman and Harold Garfinkel offer detailed interpretations of the causal dynamics of social interactions at the micro level, and their work appears to be amenable to experimental treatment. Garfinkel (Studies in Ethnomethodology), in particular, made use of research methods that are especially suggestive of controlled experimental designs.

Fourth, sociologists are interested in macro-causes of individual social action. For example, sociologists would like to understand the effects of ideologies and normative systems on individual actors, and others would like to understand the effects of differences in large social structures on individual social actors. Weber hypothesized that the Protestant ethic caused a certain kind of behavior. Theoretically it should be possible to establish hypotheses about the kind of influence a broad cultural factor is thought to exercise over individual actors, and then design experiments to evaluate those hypotheses. Given the scope and pervasiveness of these kinds of macro-social factors, it is difficult to see how their effects could be assessed within a laboratory context. However, there is a range of other experimental designs that could be used, including quasi-experiments (link), field experiments, and natural experiments (link), in which the investigator designs appropriate comparative groups of individuals in observably different ideological, normative, or social-structural arrangements and observes the differences that can be discerned at the level of social behavior. Does one set of normative arrangements result in greater altruism? Does a culture of nationalism promote citizens’ propensity for aggression against outsiders? Does greater ethnic homogeneity result in higher willingness to comply with taxation, conscription, and other collective duties?

Finally, sociologists are often interested in macro- to macro-causation. For example, consider the claims that “defeat in war leads to weak state capacity in the subsequent peace” or “economic depression leads to xenophobia”. Of course it is not possible to design an experiment in which “defeat in war” is a treatment; but it is possible to develop quasi-experiments or natural experiments designed to evaluate this hypothesis. (This is essentially the logic of Theda Skocpol’s (1979) analysis of the causes of social revolution in States and Social Revolutions: A Comparative Analysis of France, Russia, and China.) Or consider a research question in contentious politics: does widespread crop failure give rise to rebellions? Here again, the direct logic of experimentation is generally not available; but the methods articulated in the fields of quasi-experimentation, natural experiments, and field experiments offer an avenue for research designs that have a great deal in common with experimentation. A researcher could compile a dataset for historical China that records weather, crop failure, crop prices, and incidents of rebellion and protest. This dataset could support a “natural experiment” in which each year is assigned to either the “control group” or the “intervention group”: the control group consists of years in which crop harvests were normal, while the intervention group consists of years in which crop harvests were below normal (or below subsistence). The experiment is then a simple one: what is the average incidence of rebellious incidents in control years and in intervention years?
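The year-by-year comparison described here can be sketched in a few lines. The records below are invented purely for illustration; a harvest index below 1.0 marks an “intervention” (below-normal) year.

```python
# Hypothetical yearly records: (harvest_index, rebellion_incidents).
# A harvest_index below 1.0 marks a below-normal harvest ("intervention" year).
records = [
    (1.10, 2), (0.85, 7), (1.00, 3), (0.70, 9), (1.05, 1),
    (0.90, 6), (1.20, 2), (0.60, 11), (0.95, 4), (1.15, 3),
]

control = [r for h, r in records if h >= 1.0]       # normal-harvest years
intervention = [r for h, r in records if h < 1.0]   # below-normal years

avg_control = sum(control) / len(control)
avg_intervention = sum(intervention) / len(intervention)
print(avg_control, avg_intervention)  # 2.2 vs. 7.4 incidents per year
```

A real analysis would of course need many more years, a principled threshold for “below normal,” and a test of whether the difference in means exceeds what chance assignment would produce; the sketch only shows the shape of the comparison.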

So it is clear that causal reasoning that is very similar to the logic of experimentation is common throughout many areas of sociology. That said, the zone of sociological theorizing that is amenable to laboratory experimentation under random selection and a controlled environment is largely in the area of theories of social action and behavior: the reasons actors behave as they do, hypotheses about how their choices would differ under varying circumstances, and (with some ingenuity) how changing background social conditions might alter the behavior of actors. Here there are very direct parallels between sociological investigation and the research done by experimental and behavioral economists like Richard Thaler (Misbehaving: The Making of Behavioral Economics). And in this way, sociological experiments have much in common with experimental research in social psychology and other areas of the behavioral sciences.

A new social ontology of government

After several years of thinking about the nature of government as a network of organizations, I am happy to share the news that Palgrave Macmillan has published my short book, A New Social Ontology of Government: Consent, Coordination, and Authority (Foundations of Government and Public Administration). Thanks to Jos Raadschelders for proposing the book, and thanks to friends and colleagues at the University of Michigan, the University of Duisburg-Essen, the University of Milan, and Nankai University for helping me think through these new ideas. (What a great way to spend a year of sabbatical!)

My goal in this research was to approach the problem of analyzing the workings of government — decision-making, regulation, knowledge-gathering, enforcement, coordination — by making use of recent ideas about the nature of the social world in several disciplines. The result is an actor-centered social ontology that aims to take full advantage of recent frameworks of theory in sociology, political science, organizational studies, and philosophy.

Here is the brief description of the goals of the book:

This book provides a better understanding of some of the central puzzles of empirical political science: how does “government” express will and purpose? How do political institutions come to have effective causal powers in the administration of policy and regulation? What accounts for both plasticity and perseverance of political institutions and practices? And how are we to formulate a better understanding of the persistence of dysfunctions in government and public administration – failures to achieve public goods, the persistence of self-dealing behavior by the actors of the state, and the apparent ubiquity of corruption even within otherwise high-functioning governments?

I’ve tried to combine recent work in the philosophy of social science on the topic of social ontology (the nature of the social world), on the one hand, with recent developments in sociological theory about how to think about social entities. How do organizations and institutions work, from an actor-centered point of view? What do these features of ontology imply about the dynamics that governments are likely to display in routine times and moments of crisis? What is government, really?

Here is what I mean by “ontology” in this context.

What kind of things are we talking about when we refer to “government”? What sorts of processes, forces, mechanisms, structures, and activities make up the workings of government? In recent years philosophers of social science have rightly urged that we need to better understand the “stuff” of the social world if we are to have a good understanding of how it works. In philosophical language, we need to focus for a time on issues of ontology with regard to the social world. What kinds of entities, powers, forces, and relations exist in the social realm? What kinds of relations tie them together? What are some of the mechanisms and causal powers that constitute the workings of these social entities? Are there distinctive levels of social organization and structure that can be identified? Earlier approaches to the philosophy of the social sciences have largely emphasized issues of epistemology, explanation, methodology, and confirmation, and have often been guided by unhelpful analogies with positivism and the natural sciences. Greater attention to social ontology promises to allow working social scientists and philosophers alike to arrive at a more nuanced understanding of the nature of the social world. Better thinking about social ontology is important for the progress of social science. Bad ontology breeds bad science. (2)

These are empirical questions; but they are also conceptual and philosophical questions. And when we consider the scope, complexity, and fluidity of government agencies and institutions, it is clear that we need to make use of illuminating theories from sociology, organizational studies, and philosophy if we are to come to an adequate understanding of “government” as an extended social organization.

What I offer in the book is an approach to analyzing the ontology of government that proceeds from an actor-centered point of view. Government officials, functionaries, and experts are actors within organizations and institutional cultures; and government itself is a network of organizations that often proceed on the basis of independent and contradictory priorities and goals. Here is a brief description of the actor-centered approach to ontology that I have taken in this book:

Another important truth about government is that it is made up of actors—individuals who occupy roles; who have beliefs, interests, commitments, and goals; who exist within social relations and networks involving other individuals both within and outside the corridors of power; and whose thoughts, intentions, and actions are never wholly defined by the norms, organizational imperatives, and institutions within which they operate. Government officials and functionaries are not robots, defined by the dictates of role responsibilities and policies. So it is crucial to approach the ontology of government from an “actor-centered” point of view, and to understand the powers and capacities of government in terms of the ways in which individual actors are disposed to act in a range of institutional and organizational circumstances. Whether we think of the top administrators and executives, or the experts and formulators of policy drafts, or the managers of extended groups of specialized staff, or the individuals who receive complaints from the public, or the compliance officers whose job it is to ensure that policies are followed by insiders and outsiders—all of these positions are occupied by individual actors who bring their own mental frameworks, interests, emotions, and knowledge to the work they do in government. (3)

Fortunately, there are a number of important new theoretical tools and frameworks that assist in the project of analyzing the ontology of government. Fligstein and McAdam’s formulation of the theory of strategic action fields is deeply helpful (A Theory of Fields). But so are the ideas associated with assemblage theory, synthesized by Manuel DeLanda (Assemblage Theory). And new contributions to organizational theory, including especially work by Scott and Davis (Organizations and Organizing: Rational, Natural and Open System Perspectives), shed a great deal of light on aspects of government action and functioning that are otherwise obscure.

The question of collective agency is an important topic to consider when analyzing the ontology of government. I argue that governments do indeed have a form of collective agency along the lines of the ideas expressed by List and Pettit (Group Agency: The Possibility, Design, and Status of Corporate Agents); but it is a conception of group agency that depends on the organized activities and plans of the actors who constitute various units of the organization. Moreover, I argue that the disconnects, inconsistent priorities, principal-agent problems, and conflicts of interest that arise within organizations pretty much ensure that governments and other mega-organizations are myopic and unsteady.

Now think of the possibilities of overlap, interference, and inconsistency that exist among the functionings and missions of diverse agencies. Each agency has its mission and priorities; these goals imply efforts on the part of the leaders, managers, and staff of the agency to bring about certain kinds of results. And sometimes—perhaps most times—these results may be partially inconsistent with the priorities, goals, and initiatives of other governmental agencies. The Commerce Department has a priority of encouraging the export of US technology to other countries, to generate business success and economic growth in the United States. Some of those technologies involve processes like nuclear power production. But other agencies—and the Commerce Department itself in another part of its mission—have the goal of limiting the risks of the proliferation of technologies with potential military uses. Here is the crucial point to recognize: there is no “master executive” capable of harmoniously adjusting the activities of all departments so as to bring about the best outcome for the country, all things considered. There is the President of the United States, of course, who wields authority over the cabinet secretaries who serve as chief executives of the various departments; and there is the Congress, which writes legislation charging and limiting the activities of government. But it is simply impossible to imagine an overall master executive who serves as symphony conductor to all these different areas of government activity. At best, occasions of especially obvious inconsistency of mission and effort can be identified and ameliorated. New policies can be written, memoranda of understanding between agencies can be drafted, and particular dysfunctions can be untangled. But this is a piecemeal and never-complete process. (5)

The book pays attention to the forms of dysfunction that can be seen to arise within organizations and networks of organizations, given the nature of the actor-centered activity that they encompass. (Other experts on organizational dysfunction include Charles Perrow (Normal Accidents: Living with High-Risk Technologies) and Diane Vaughan (The Challenger Launch Decision: Risky Technology, Culture, and Deviance at NASA).) So corruption, failures of execution, short attention spans, inadequate communication, and inconsistency of effort across related agencies are dysfunctional, but they can be seen to be as natural to organizations as friction is to mechanical devices. Here is the general view of dysfunction that is developed in the book:

This chapter focuses on a key consequence of the social realities of government presented to this point: the fact that agencies and bureaucracies represent loosely-linked groups of actors who have a variety of different interests, only imperfectly subject to control by the central executive. This is the key theme of the “natural systems” approach to organizations described in Chapter 4. This unavoidable looseness of bureaucratic organization creates the possibility of several kinds of lack of coherence between government’s intentions and the actual activities of agencies and departments. The chapter introduces the idea of a principal-agent problem and demonstrates that this is an unavoidable feature of organizational functioning. It also applies the idea of “loose coupling” to the challenges associated with inter-agency collaboration. And it touches on an important problem at the level of the actor, the problem of conflict of interest and commitment. The chapter considers a range of organizational mechanisms aimed at enhancing internal controls and compliance. (92)

Government is essential to preserving and enhancing the public good in a complex, interdependent society. In order to bring about effective formulation and implementation of public policy, regulation, enforcement, and protection of the health and welfare of the public, it is crucial to have a realistic understanding of how organizations work, and what can be done to make them as effective as possible. And we need to have an honest understanding of the features of organizational behavior that lead to dysfunction if we are to have any hope of creating governmental (and private) organizations that can truly be described as “high-reliability organizations” (link). Fundamentally this book offers a conception of government as a network of organizations, loosely coupled and subject to imperfect executive management. The key challenge for elected leaders and government officials and experts is how to navigate the limitations of large organizations while achieving the larger part of an agenda for the public good.

Guest post by Nicholas Preuth

Nicholas Preuth is a philosophy student at the University of Michigan. His primary interests fall in the philosophy of law and the philosophy of social science. Thanks, Nick, for contributing this post!

Distinguishing Meta-Social Ontology from Social Ontology

Social ontology is the study of the properties of the social world. Conventional claims about social ontology proceed by asking and answering questions such as, “what is the existential status of social entities (e.g. institutions, governments, etc.)?”, “can institutions exert causal influence?”, “what is the causal relationship between micro, meso, and macro-level social entities?”, etc. Daniel Little is one of the many philosophers and sociologists who has written extensively on the topic of social ontology (see discussions here, here, and here). The types of arguments and discussions found in those blog posts represent conventional social ontology discussions—conventional in the sense that the content of the posts constitutes the commonly agreed-upon purview of social ontology discussions.

However, in recent years, many works of social ontology have embedded a new type of claim that differs from the conventional discussions of social ontology. These new claims are methodological claims about the role and importance of ontology in conducting social scientific research. Unlike conventional claims about ontology, which try to answer substantive questions about the nature of the social world, these methodological claims ask why ontology matters and what role ontology should play in the conduct of social science research. Here is an example of Brian Epstein making such a claim in his work, The Ant Trap: Rebuilding the Foundations of the Social Sciences:

Ontology has ramifications, and ontological mistakes lead to scientific mistakes. Commitments about the nature of the entities in a science—how they are composed, the entities on which they ontologically depend—are woven into the models of science.…despite Virchow’s expertise with a microscope, his commitment to cell theory led him to subdivide tissues into cells where there are none. And that led to poor theories about how anatomical features come to be, how they are changed or destroyed, and what they do. (Brian Epstein, The Ant Trap: Rebuilding the Foundations of the Social Sciences (Oxford: Oxford University Press, 2015, 40-41))

Notice how in this passage Epstein makes a claim about why ontology is important and, consequently, tacitly takes a stance on the methodological relationship between ontology and research. According to Epstein, ontology matters because ontology shapes the very way that we investigate the world. He believes that bad ontology leads researchers into scientific mistakes because ontology distorts a researcher’s ability to objectively investigate phenomena. Epstein’s unstated conclusion here—never explicitly formulated in his book, though it is an important underlying premise of his project—is that ontological theorizing must take methodological priority over scientific experimentation. As Epstein might sum up, we ought to think about ontology first, and then conduct research later.

Yet Epstein’s statement is not the only way of construing the relationship between ontology and research. His unstated assumption that ontological work should be done before research is a highly contested assertion. Why should we simply accept that ontology should come before empirical research? Are there no other ways of thinking about the relationship between ontology and social science research? These methodological questions are better treated as separate, distinct questions than embedded within the usual set of conventional questions about social ontology. There should be a conceptual distinction between the conventional claims about social ontology that actually engage with understanding the social world and these new kinds of methodological claims about the relationship between ontology and research. If we adhere to such a distinction, then Epstein’s methodological claims do not belong to the field of social ontology: they are claims about meta-social ontology.

Meta-social ontology aims to explicitly illuminate the methodological relationship between ontological theorizing in the social sciences and the empirical practice of social science research. The field of meta-social ontology seeks to answer two crucial questions:

  1. What methodology best guides the practice of ontological theorizing?
  2. To what extent should we be existentially committed to the ontological statements we make about the social world?

Let’s spend some time examining both questions, as well as proposed answers to each question.

The first question is a clear formulation of the kind of question that Epstein wants to answer in his book. There are two typical approaches to answering this question. Epstein’s approach, that ontological theorizing must occur prior to and outside of scientific experimentation, is called apriori ontology. Apriori ontology argues that ontology can be successfully crafted through theoretical deductions and philosophical reasoning, and that it is imperative to do so because ontological mistakes lead to scientific mistakes. Here is another philosopher, John Searle, supporting the apriori social ontology position:

I believe that where the social sciences are concerned, social ontology is prior to methodology and theory. It is prior in the sense that unless you have a clear conception of the nature of the phenomena you are investigating, you are unlikely to develop the right methodology and the right theoretical apparatus for conducting the investigation. (John Searle, “Language and Social Ontology,” Theory and Society, Vol. 37:5, 2008, 443).

Searle’s formulation of apriori ontology here gives an explicit methodological priority to ontological theorizing. In other words, he believes that the correct ontology needs to be developed first before scientific experimentation, or else the experimentation will be misguided. No doubt Epstein agrees with this methodological priority, but he does not explicitly state it. Nevertheless, both Searle and Epstein are clear advocates of the apriori ontology position.

However, there is another approach to ontological theorizing that challenges apriori ontology as being too abstracted from the actual conduct of social science experimentation. This other approach is called aposteriori ontology. Aposteriori ontology rejects the efficacy of abstract ontological theorizing derived from speculative metaphysics. Instead, aposteriori ontology advocates for ontology to be continually constructed, informed, and refined by empirical social science research. Here is Little’s formulation of aposteriori ontology:

I believe that ontological theorizing is part of the extended scientific enterprise of understanding the world, and that efforts to grapple with empirical puzzles in the world are themselves helpful for refining and specifying our ontological ideas…. Ontological theories are advanced as substantive and true statements of some aspects of the social world, but they are put forward as being fundamentally a posteriori and corrigible. (D. Little, “Social Ontology De-dramatized,” Philosophy of the Social Sciences, I-11, 2020, 2-4)

Unlike apriori ontology, aposteriori ontology does not look at ontology as being prior to scientific research. Instead, aposteriori ontology places scientific experimentation alongside ontological developments as two tools that go hand-in-hand in guiding our understanding of the social world. In sum, the apriori vs. aposteriori debate revolves around whether ontology should be seen as an independent, theoretical pursuit that determines our ability to investigate the world, or whether ontology should be seen as another collaborative tool within the scientific enterprise, alongside empirical research and theory formation, that helps us advance our understanding of the nature of the social world.

The second question in the field of meta-ontology is a question of existential commitment: to what extent do we need to actually believe in the existence of the entities posited by our ontological statements about the world? This is less complicated than it sounds. Consider this example: we often talk about the notion of a “ruling class” in society, where “ruling class” is understood as a social group that wields considerable influence over a society’s political, economic, and social agenda. When we employ the term “ruling class,” do we actually mean to say that such a formation really exists in society, or is this just a helpful term that allows us to explain the occurrence of certain social phenomena while also allowing us to continue to generate more explanations of more social phenomena? This is the heart of the second issue in meta-ontology.

Similar to the apriori vs. aposteriori debate, proposed answers to this question tend to be dichotomous. The two main approaches to this question are realism and anti-realism (sometimes called pragmatism). Realism asserts that we should be existentially committed to the ontological entities that we posit. Epstein, Searle, and Little are among those who fall into this camp. Here is Epstein’s approximate formulation of realism:

What are social facts, social objects, and social phenomena—these things that the social sciences aim to model and explain?… How the social world is built is not a mystery, not magical or inscrutable or beyond us. (Epstein, The Ant Trap, 7)

As Epstein expresses here, realists believe that it is possible to discover the social world just as scientists discover the natural world. Realists maintain that their ontological statements about the world reflect social reality, meaning that the discovery and explanatory success of the “ruling class” hypothesis is like finding a new theory of the natural world.

Contrarily, anti-realists/pragmatists argue that ontology is only useful insofar as it advances scientific inquiry and enables successful inferences to a larger number of social phenomena. They do not believe that ontological statements reflect social reality, so they are not existentially committed to the truth of any particular ontology of the social world. Richard Lauer, a proponent of an anti-realist/pragmatist meta-social ontology, defines it like this:

The function of these statements is pragmatic. Such statements may open new possibilities that can further scientific aims, all without requiring a realist attitude…instead of concerning ourselves with whether there really are such [things], we may ask about the empirical merits of moving to [such] a view. (Richard Lauer, “Is Social Ontology Prior to Social Scientific Methodology,” Philosophy of the Social Sciences, Vol. 49:3, 2019, 184)

Taking the ruling class example above, an anti-realist/pragmatist like Lauer would suggest that the concept of ruling class is useful because it allows us to generate more explanations of social phenomena, while rejecting the idea that a “ruling class” is a thing that actually exists in the world.

There is, however, some room for middle ground between realism and anti-realism. Harold Kincaid, another well-known philosopher of social science, has tried to push the realism/anti-realism debates in a more fruitful direction by asserting that the question is better answered by attending to empirical research in specific, localized contexts:

“I think we can go beyond blanket realism or instrumentalism if we look for more local issues and do some clarification. A first step, I would argue, is to see just when, where, and how specific social research with specific ontologies has empirical success…The notion of a ‘ruling class’ at certain times and places explains much. Does dividing those at the top differently into ruling elites also explain? That could well be the case and it could be that we can do so without contradicting the ruling class hypothesis…These are the kind of empirical issues which give ‘realism’ and ‘pluralism’ concrete implications.” (Harold Kincaid, “Concrete Ontology: Comments on Lauer, Little, and Lohse,” Philosophy of the Social Sciences, I-8, 2020, 4-5)

Kincaid suggests here that a better way of adjudicating between a realist and an anti-realist meta-ontology is to look at the success of specific ontological statements in the social sciences. Taking our ruling class example, Kincaid would have us investigate the success of the ruling class hypothesis in localized contexts, and then evaluate our existential commitment to it based on its ability to explain social phenomena and provoke new research into new social phenomena. This is still an endorsement of realism with respect to social concepts and entities. However, it pushes the conversation away from blanket realism (like Epstein’s) and blanket pragmatism (like Lauer’s). Instead, Kincaid emphasizes the role of empirical research in shaping our realist or anti-realist meta-ontological stance towards specific social phenomena. Thus, as Kincaid sums up his position, “we need to get more concrete!” (Kincaid, 8).

So, there are many ways one can think about the methodological relationship between social ontology and social science research. If we were to categorize the philosophers discussed here, it would look like this:

  1. Apriori realist ontology (Searle, Epstein)
  2. Aposteriori realist ontology (Little, Kincaid)
  3. Anti-realist/pragmatist ontology (Lauer)

In light of these discussions, it is important that works of social ontology maintain a conceptual distinction between social ontology arguments and meta-social ontology arguments. As we saw with Epstein, it can be tempting to fold meta-social ontological justifications into a new work of social ontology. However, this blurs the distinction between the field of social ontology and the field of meta-social ontology, and it obscures the fact that meta-social ontological questions deserve treatment in their own right. As a complex, abstract field that deals with difficult subject matter, social ontology should strive for the utmost clarity. Adding meta-social ontological considerations as a quick aside in a work on social ontology just muddies the already murky water.

Defining the philosophy of technology

The philosophy of technology ought to be an important field within contemporary philosophy, given the centrality of technology in our lives. And yet there is not much of a consensus among philosophers about what the subject of the philosophy of technology actually is. Are we most perplexed by the ethical issues raised by new technological possibilities — genetic engineering, face recognition, massive databases and straightforward tools for extracting personal information from them? Should we rather ask about the risks created by new technologies — the risks of technology catastrophe, of unintended health effects, or of further intensification of environmental harms on the planet we inhabit? Should we give special attention to issues of “technology justice” and the inequalities among people that technologies often facilitate, and the forms of power that technology enables for some groups over others? Should we direct our attention to the “existential” issues raised by technology — the ways that immersion in a technologically intensive world has influenced our development as persons, as intentional and meaning-creating individuals? Are there issues of epistemology, rationality, and creativity that are raised by technology within a social and scientific setting? Should we use this field of philosophy to examine how technology influences human society, and how society influences the development and character of technology? Should we, finally, be concerned that the technology opportunities that confront us encourage an inescapable materialism and a decline of meaningful spiritual or poetic experience?

A useful way of approaching this question is to consider the topics included in the Blackwell handbook, A Companion to the Philosophy of Technology, edited by Jan Kyrre Berg Olsen Friis, Stig Andur Pedersen, and Vincent F. Hendricks. The editors and contributors do a good job of attempting to discover philosophical problems in issues raised by technology. The major divisions in this companion include Introduction, History of Technology, Technology and Science, Technology and Philosophy, Technology and Environment, Technology and Politics, Technology and Ethics, and Technology and the Future.

The editors summarize the scope of the field in these terms: 

The philosophy of technology taken as a whole is an understanding of the consequences of technological impacts relating to the environment, the society and human existence. (Introduction)

As a definition, however, this attempt falls short. By focusing on “consequences” it leaves unexamined the nature of technology itself, it suggests a unidirectional relationship between technology and human and social life, and it is silent about the normative dimensions of any critical approach to the understanding of technology.

Another useful approach to the topic of how to define the philosophy of technology is Tom Misa’s edited collection, Modernity and Technology. (Misa’s introduction to the volume is available here.) Misa is an historian of technology (he contributes the lead essay on history of technology in the Companion), and he is a particularly astute observer and interpreter of technology in society. His reflections on technology and modernity are especially valuable. Here are a few key ideas:

Technologies interact deeply with society and culture, but the interactions involve mutual influence, substantial uncertainty, and historical ambiguity, eliciting resistance, accommodation, acceptance, and even enthusiasm. In an effort to capture these fluid relations, we adopt the notion of co-construction. (3)

This point emphasizes the idea that technology is not a separate historical factor, but rather permeates (and is permeated by) social, cultural, economic, and political realities at every point in time. This is the reality that Misa designates as “co-construction”.

A related insight is Misa’s insistence that technology is not one uniform domain that is amenable to analysis and discussion at the purely macro-level. Instead, at any given time the technologies and technological systems available to an epoch are a heterogeneous mix with different characteristics and different ways of influencing human interests. It is necessary, therefore, to address the micro-characteristics of particular technologies rather than “technology in general”.

Theorists of modernity frequently conjure a decontextualized image of scientific or technological rationality that has little relation to the complex, messy, collective, problem-solving activities of actual engineers and scientists…. These theorists of modernity invariably posit “technology,” where they deal with it at all, as an abstract, unitary, and totalizing entity, and typically counterpose it against traditional formulations (such as lifeworld, self, or focal practices). … Abstract, reified, and universalistic conceptions of technology obscure the significant differences between birth control and hydrogen bombs, and blind us to the ways different groups and cultures have appropriated the same technology and used it to different ends. To constructively confront technology and modernity, we must look more closely at individual technologies and inquire more carefully into social and cultural processes. (8-9)

And Misa confronts the apparent dichotomy often expressed in technology studies, between technological determinism and social construction of technology:

One can see, of course, that these rival positions are not logically opposed ones. Modern social and cultural formations are technologically shaped; try to think carefully about mobility or interpersonal relations or a rational society without considering the technologies of harbors, railroad stations, roads, telephones, and airports; and the communities of scientists and engineers that make them possible. At the same time, one must understand that technologies, in the modern era as in earlier ones, are socially constructed; they embody varied and even contradictory economic, social, professional, managerial, and military goals. In many ways designers, engineers, managers, financiers, and users of technology all influence the course of technological developments. The development of a technology is contested and controversial as well as constrained and constraining. (10)

It may be that a diagram does a better job of “mapping” the field of the philosophy of technology than a simple definition. Here is a first effort:

The diagram captures the idea that technology is embedded both within the agency, cultures, and values of living human beings during an epoch, and within the social institutions within which human beings function. Human beings and social relations drive the development of technologies, and they are in turn profoundly affected by the changing realities of ambient technologies. The social institutions include economic institutions (property relations, production and distribution relations), political institutions (institutions of law, policy, and power), and social relations (gender, race, various forms of social inequality). In orange, the diagram represents various kinds of problems of assessment, implementation, development, control, and decision-making that arise in the course of the development and management of technologies, including issues of risk assessment, distribution of burdens and benefits of the effects of technology, and issues concerning future generations and the environment.

A general definition of technology might be framed in these terms: “transformation of nature through labor, tools, and knowledge”. And a brief definition of the philosophy of technology, still preliminary, might go along these lines: 

The philosophy of technology attempts to uncover the multiple issues raised by “transformation of nature through labor, tools, and knowledge” within the context of large, complex societies. These issues include normative questions, questions of social causation, questions of distributive justice, issues concerning management of risk, and the relationship between technology and human wellbeing.

Thinking about pandemic models

One thing that is clear from the pandemic crisis that is shaking the world is the crucial need we have for models that allow us to estimate the future behavior of the epidemic. The dynamics of the spread of an epidemic are simply not amenable to intuitive estimation. So it is critical to have computational models that permit us to project the near- and middle-term behavior of the disease, based on available data and assumptions.

Scott Page is a complexity scientist at the University of Michigan who has written extensively on the uses and interpretation of computational models in the social sciences. His book, The Model Thinker: What You Need to Know to Make Data Work for You, does a superlative job of introducing the reader to a wide range of models. One of his key recommendations is that we should consider many models when we are trying to understand a particular kind of phenomenon. (Here is an earlier discussion of the book; link.) Page contributed a very useful article to the Washington Post this week that sheds light on the several kinds of pandemic models that are currently being used to understand and predict the course of the pandemic at global, national, and regional levels (“Which pandemic model should you trust?”; link). Page describes the logic of “curve-fitting” models like the Institute for Health Metrics and Evaluation (IHME) model as well as epidemiological models that proceed on the basis of assumptions about the causal and social processes through which disease spreads. The latter attempt to represent the process of infection from infected person to susceptible person to recovered person. (Page refers to these as “microfoundational” models.) Page points out that all models involve a range of probable error and missing data, and it is crucial to make use of a range of different models in order to lay a foundation for sound public health policies. Here are his summary thoughts:

All this doesn’t mean that we should stop using models, but that we should use many of them. We can continue to improve curve-fitting and microfoundation models and combine them into hybrids, which will improve not just predictions, but also our understanding of how the virus spreads, hopefully informing policy. 

Even better, we should bring different kinds of models together into an “ensemble.” Different models have different strengths. Curve-fitting models reveal patterns; “parameter estimation” models reveal aggregate changes in key indicators such as the average number of people infected by a contagious individual; mathematical models uncover processes; and agent-based models can capture differences in peoples’ networks and behaviors that affect the spread of diseases. Policies should not be based on any single model — even the one that’s been most accurate to date. As I argue in my recent book, they should instead be guided by many-model thinking — a deep engagement with a variety of models to capture the different aspects of a complex reality. (link)
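
The “microfoundational” models Page describes can be made concrete with a toy SIR (susceptible-infected-recovered) simulation. The sketch below is illustrative only; the parameter values are assumptions chosen for demonstration, not estimates drawn from any of the models Page discusses.

```python
# Minimal SIR ("microfoundational") epidemic model, integrated with Euler steps.
# Illustrative only -- the parameter values are assumptions, not fitted estimates.

def sir_model(beta, gamma, s0, i0, r0, days, dt=0.1):
    """Simulate susceptible/infected/recovered population fractions over time."""
    s, i, r = s0, i0, r0
    trajectory = [(s, i, r)]
    for _ in range(int(days / dt)):
        new_infections = beta * s * i * dt   # contacts between S and I
        new_recoveries = gamma * i * dt      # infected recover at a constant rate
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        trajectory.append((s, i, r))
    return trajectory

# Example run: basic reproduction number R0 = beta/gamma = 2.5,
# starting with 0.1% of the population infected.
traj = sir_model(beta=0.25, gamma=0.1, s0=0.999, i0=0.001, r0=0.0, days=200)
peak_infected = max(i for _, i, _ in traj)
```

Unlike a fitted curve, this kind of model exposes the process: changing `beta` (for instance, to represent social distancing) changes the peak and its timing in ways the mechanism itself explains.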

Page’s description of the workings of these models is very helpful for anyone who wants to have a better understanding of the way a pandemic evolves. Page has also developed a valuable series of videos that go into greater detail about the computational architecture of these various types of models (link). These videos are very clear and eminently worth viewing if you want to understand epidemiological modeling better.

Social network analysis is crucial to addressing the challenge of how to restart businesses and other social organizations. Page has created “A Leader’s Toolkit For Reopening: Twenty Strategies to Reopen and Reimagine”, a valuable set of network tools and strategies offering concrete advice about steps to take in restarting businesses safely and productively. Visit this site to see how tools of network analysis can help make us safer and healthier in the workplace (link). 

Another useful recent resource on the logic of pandemic models is Jonathan Fuller’s recent article “Models vs. evidence” in Boston Review (link). Fuller is a philosopher of science who takes up two questions in this piece: first, how can we use evidence to evaluate alternative models? And second, what accounts for the disagreements in the academic literature over the validity of several classes of models? Fuller has in mind essentially the same distinction as Page does, between curve-fitting and microfoundational models. Fuller characterizes the former as “clinical epidemiological models” and the latter as “infectious disease epidemiological models”, and he argues that the two research communities have very different ideas about what constitutes appropriate use of empirical evidence in evaluating a model. Essentially Fuller believes that the two approaches embody two different philosophies of science with regard to computational models of epidemics, one more strictly empirical and the other more amenable to a combination of theory and evidence in developing and evaluating the model. The article provides a level of detail that would make it ideal for a case study in a course on the philosophy of social science.
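
The “curve-fitting” (clinical epidemiological) approach that Fuller and Page describe can be illustrated with a toy example: fitting a logistic curve to early cumulative case counts by brute-force least squares and then extrapolating. Everything here — the synthetic data and the parameter grids — is an assumption for illustration, not a reconstruction of the IHME model.

```python
# Toy "curve-fitting" model in the IHME spirit: fit a logistic curve to
# cumulative case counts, then read off the projected inflection point.
# Synthetic data and hypothetical parameter grids; illustration only.
import math

def logistic(t, cap, rate, midpoint):
    """Cumulative logistic curve: total cases approach `cap` over time."""
    return cap / (1.0 + math.exp(-rate * (t - midpoint)))

def fit_logistic(observed):
    """Grid-search (cap, rate, midpoint) minimizing squared error."""
    best, best_err = None, float("inf")
    for cap in range(max(observed), 5 * max(observed), 50):
        for rate10 in range(1, 10):                       # rate in 0.1 .. 0.9
            for mid in range(len(observed), 3 * len(observed)):
                rate = rate10 / 10.0
                err = sum((logistic(t, cap, rate, mid) - y) ** 2
                          for t, y in enumerate(observed))
                if err < best_err:
                    best, best_err = (cap, rate, mid), err
    return best

# Synthetic cumulative counts from a "true" logistic (cap=1000, rate=0.3,
# midpoint=15), observed only through day 11 -- i.e., before the inflection.
true = [logistic(t, 1000, 0.3, 15) for t in range(12)]
cap, rate, mid = fit_logistic([round(y) for y in true])
projected_peak_day = mid   # inflection point = day of fastest growth
```

A caveat worth noting: with only early, pre-inflection data, quite different combinations of `cap` and `rate` fit almost equally well, which is one source of the fragility in purely empirical extrapolation that both authors flag.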

Joshua Epstein, author of Generative Social Science: Studies in Agent-Based Computational Modeling (Princeton Studies in Complexity), gave a brief description in 2009 of the application of agent-based models to pandemics in “Modelling to Contain Pandemics” (link). Epstein describes a massive ABM model of a global pandemic, the Global-Scale Agent Model (GSAM), that attempted to model the spread of the H1N1 virus in 2009. Here is a video in which Miles Parker explains and demonstrates the model (link). 

Another useful resource is this video on “Network Theory: Network Diffusion & Contagion” (link), which provides greater detail about how the structure of social networks influences the spread of an infectious disease (or ideas, attitudes, or rumors).
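
The idea that network structure shapes diffusion can be sketched in a few lines of code: a contagion spreading along the edges of a random graph. This is a hypothetical toy, not the model used in the video; the graph size and transmission probability are arbitrary assumptions.

```python
# Toy network-contagion simulation: an infection spreads along the edges of a
# random graph, illustrating how network structure shapes diffusion.
# Hypothetical parameters chosen for illustration only.
import random

def make_random_graph(n, avg_degree, rng):
    """Build an Erdos-Renyi-style random graph as an adjacency list."""
    p = avg_degree / (n - 1)
    adj = {v: set() for v in range(n)}
    for u in range(n):
        for v in range(u + 1, n):
            if rng.random() < p:
                adj[u].add(v)
                adj[v].add(u)
    return adj

def simulate_contagion(adj, p_transmit, seed_node, steps, rng):
    """Each step, every infected node infects each susceptible neighbor
    with probability p_transmit. Returns the infected share over time."""
    infected = {seed_node}
    history = [len(infected) / len(adj)]
    for _ in range(steps):
        newly_infected = set()
        for u in infected:
            for v in adj[u]:
                if v not in infected and rng.random() < p_transmit:
                    newly_infected.add(v)
        infected |= newly_infected
        history.append(len(infected) / len(adj))
    return history

rng = random.Random(42)
graph = make_random_graph(n=200, avg_degree=6, rng=rng)
curve = simulate_contagion(graph, p_transmit=0.3, seed_node=0, steps=20, rng=rng)
```

Re-running the simulation with the same average degree but a clustered or hub-dominated graph changes the shape of `curve` markedly, which is the basic point of network diffusion analysis.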

My own predilections in the philosophy of science lean towards scientific realism and the importance of identifying underlying causal mechanisms. This leaves me more persuaded by the microfoundational / infectious disease models than the curve-fitting models. The criticisms that Nancy Cartwright and Jeremy Hardie offer in Evidence-Based Policy: A Practical Guide to Doing It Better of the uncritical methodology of randomized controlled trials (link) seem relevant here as well. The IHME model is calibrated against data from Wuhan and more recently northern Italy; but circumstances were very different in each of those locales, making it questionable that the same inflection points will show up in New York or California. As Cartwright and Hardie put the point, “The fact that causal principles can differ from locale to locale means that you cannot read off that a policy will work here from even very solid evidence that it worked somewhere else” (23). But, as Page emphasizes, it is valuable to have multiple models working from different assumptions when we are attempting to understand a phenomenon as complex as epidemic spread. Fuller makes much the same point in his article:

Just as we should embrace both models and evidence, we should welcome both of epidemiology’s competing philosophies. This may sound like a boring conclusion, but in the coronavirus pandemic there is no glory, and there are no winners. Cooperation in society should be matched by cooperation across disciplinary divides. The normal process of scientific scrutiny and peer review has given way to a fast track from research offices to media headlines and policy panels. Yet the need for criticism from diverse minds remains.

The democratic dilemma of trust

In 2007 Chuck Tilly published an intriguing historical and theoretical study of the politics of equality and voice, Democracy. The book is a study of the historical movements towards greater democracy — and likewise, the forces that lead to de-democratization. The threat currently posed to western democracies by the rise of radical populism makes it worthwhile thinking once more about some of these theories.

Here is the definition that Tilly offers for democracy throughout the book: “In this simplified perspective, a regime is democratic to the degree that political relations between the state and its citizens feature broad, equal, protected and mutually binding consultation” (13-14).

And here is how he defines these four crucial features of democratic institutions:

The terms broad, equal, protected, and mutually binding identify four partly independent dimensions of variation among regimes. Here are rough descriptions of the four dimensions:

  1. Breadth: From only a small segment of the population enjoying extensive rights, the rest being largely excluded from public politics, to very wide political inclusion of people under the state’s jurisdiction (at one extreme, every household has its own distinctive relation to the state, but only a few households have full rights of citizenship; at the other, all adult citizens belong to the same homogeneous category of citizenship)
  2. Equality: From great inequality among and within categories of citizens to extensive equality in both regards (at one extreme, ethnic categories fall into a well-defined rank order with very unequal rights and obligations; at the other, ethnicity has no significant connection with political rights or obligations and largely equal rights prevail between native-born and naturalized citizens)
  3. Protection: From little to much protection against the state’s arbitrary action (at one extreme, state agents constantly use their power to punish personal enemies and reward their friends; at the other, all citizens enjoy publicly visible due process)
  4. Mutually binding consultation: From non-binding and/or extremely asymmetrical to mutually binding (at one extreme, seekers of state benefits must bribe, cajole, threaten, or use third-party influence to get anything at all; at the other, state agents have clear, enforceable obligations to deliver benefits by category of recipient) (14-15)

It is interesting to observe that this definition of democracy gives all of its attention to the behavior of government and the relationship of government to its citizenry. But twentieth-century history, and the early decades of the twenty-first century, make it clear that anti-democracy dwells in citizens as well as authoritarian wielders of state power. The use of coercion and violence is not the monopoly of the state. In Fascists Michael Mann emphasizes the role of fascist paramilitary organizations in the rise of fascism in Germany, Italy, and other countries, and their brutal use of violence against their “enemies”. And his treatment of ethnic cleansing in The Dark Side of Democracy: Explaining Ethnic Cleansing likewise makes it clear that the impulses of right-wing organizations in civil society can lead to murderous violence in contemporary settings as well. This appears to be relevant in India today, with the blending of BJP party organizations and extremist nationalist organizations in civil society in the fomenting of anti-Muslim violence. So anti-democratic impulses are by no means the terrain of authoritarian states only. Contemporary white supremacist organizations in the United States seem to represent exactly this kind of danger.

The definition and explications that Tilly offers here can be understood in a normative way. Higher scores in these four dimensions mean a better society — a more democratic society. But they can also be understood as contributing to a political psychology of democracy: “This is what it will take for a democracy to be stable and enduring.” Citizens need to have rights of participation; these rights need to be genuinely equal; citizens need to be protected from arbitrary state action; and important decisions of public policy need to be decided through institutions and rules that bind state actors. And they need to be confident in each of these conditions in their existing political institutions.

One of the factors that Tilly emphasizes in his account of political democracy is the role of trust — trust between rulers and citizens, running in both directions. There is an intimate connection between trust and that crucial idea of democratic theory, “consent of the governed”. Paying taxes, obeying local laws, accepting conscription — these are all democratic duties; but they are also largely voluntary, in the sense that enforcement is sporadic and only partially effective. Participants need to trust that these duties apply to all citizens, and that everyone is, roughly speaking, accepting his or her share of the burdens. If the governed have lost trust in the political institutions that govern them, then their continuing consent is in question.

Here and elsewhere (Trust and Rule) Tilly puts a lot of his chips on his idea of “trust networks” as a primary vehicle of social trust. But here Tilly seems to miss the boat a bit. He does not address the broad question of institutional trust; rather, his trust concepts all fall at the more local and individual-to-individual end of the spectrum. He characterizes trust as a relationship (81), which is fair enough; but the terms of the relationship are other individuals, not institutions or practices.

Trust networks, to put it more formally, contain ramified interpersonal connections, consisting mainly of strong ties, within which people set valued, consequential, long-term resources and enterprises at risk to the malfeasance, mistakes, or failures of others. (81)

Trust networks gain political importance when they intersect with patron-client relationships with governing elites; groups are able to secure benefits when their network is able to negotiate a favorable settlement of a policy issue, and then deliver the behavior (voting, demonstrations, public support) of the individuals within the trust network in question. This might be an ethnic or racial group, a regional association (farmers, small business owners), or a political advocacy movement (environmentalists, anti-tax activists). So trust is involved in making government work in these circumstances; but it is not trust between citizen and government, but rather among citizens within their own trust networks, and between the powerful and the spokespersons of these networks (link).

In fact, current mistrust in government seems to rest heavily on trust networks within the right: trust in Fox News, trust in Breitbart, trust in the organizations and leaders of the right, trust in the extended network represented by the Tea Party, trust in fellow members of various right-wing organizations who may be neighbors or Twitter sources.

But the challenge to our current democratic institutions seems to have to do with a loss of institutional trust — trust, confidence, and reliance in our basic institutions.

So the question here is this: why have large segments of the populations of western democracies lost a substantial amount of trust in the institutions of governance in their democracies? Why does the idea of a social contract in which everyone benefits from cooperation and public policy no longer have the grip that it needs to have if democracy is to thrive?

One answer seems evident, but perhaps too superficial: there has been a concerted campaign for at least fifty years of cultivating mistrust of government in the United States and other countries that has led to cynicism in many, rejection of government policy and the legitimacy of taxation in others, and loony resistance in others. (Think of the 2016 Malheur National Wildlife Refuge occupation, for example, and the extremist anti-government ideologies expressed by its activists.) This is propaganda, a deliberate effort to shape political attitudes and beliefs through the techniques of Madison Avenue. Grover Norquist’s explicit political goal was expressed in vivid terms: “My goal is to cut government in half in twenty-five years, to get it down to the size where we can drown it in the bathtub.” This suggests that mistrust of government is due, in part anyway, to the results of a highly effective marketing campaign by conservatives aimed at producing exactly that mistrust in a significant portion of the population. The slogans and political language of extremist populism are chosen with exactly this effect in mind — to lead followers to despise and mistrust the “elites” who govern them in Washington (or Lansing, Albany, and Sacramento). It is genuinely shocking to see conservative activists challenging the legitimacy of state action in support of maintaining public health in the Covid-19 pandemic; if this is not a legitimate role for government, one wonders, what ever would be?

What gave conservatives and now right-wing populists and white nationalists the ability to mobilize significant numbers of citizens in support of their anti-government rhetoric? In Deeply Divided: Racial Politics and Social Movements in Postwar America McAdam and Kloos trace the decline of trust in US politics to two fundamental issues — white resentment over the new politics of race from roughly 1960 forward (positioning some voters to believe they are no longer getting their fair share), and the rising levels of inequality of wealth, income, and quality of life in the United States (leading some voters to believe they have been left out of the prosperity of the late twentieth century). These general factors made political mobilization around a conservative, anti-government, and racialized politics feasible; and conservative GOP leaders eagerly stepped forward to make use of this political wedge. (McAdam and Kloos provide an astounding collection of quotes in which Republican candidates for president attacked Barack Obama in vile, racist terms.) (Here are earlier discussions of McAdam and Kloos; link, link, link.)

So what features of political and social life are likely to enhance trust in basic social institutions? Tilly refers first to Robert Putnam’s discussions of civic engagement and social capital, in Making Democracy Work: Civic Traditions in Modern Italy and Bowling Alone: The Collapse and Revival of American Community. But he is not satisfied with Putnam’s basic hypothesis — that greater civic engagement leads to greater trust in political institutions, and eventually to a broader level of consent among citizens. Instead, he turns to theorizing about the challenges of democratic governance by Mark Warren, which he summarizes as “the democratic dilemma of trust” (93), and the potential that deliberative democracy has for rekindling democratic trust.

The deliberative solution, which Warren himself prefers, bridges the gap by making democratic deliberation and trust mutually complementary: the very process of deliberation generates trust, but the existence of trust facilitates deliberation. (93)

But significantly, Tilly does not take this line of thought very far; and he doesn’t explicitly recognize that the trust to which Warren refers is categorically different from that involved in Tilly’s own concept of a trust network.

I am surprised to find that Tilly’s treatment of democracy is deficient precisely because it is too much in the realist tradition of political science (link). Tilly’s theories of politics and the state, and the relationship between state and citizen, are too much committed to the cost-benefit calculations of rulers and the governed. This places him in the middle of fairly standard “positive” theories of democracy that have dominated American political science for decades. Tilly pays no heed here — and I cannot think of broader treatments elsewhere in his writings — to the political importance of the “mystic chords of memory” and the “better angels of our nature”. Those were the words of Abraham Lincoln in his first inaugural address, and they refer to the political emotions and commitments that secure us to a set of political institutions that we support, not because of the narrow shopping list of benefits and burdens that they offer, but because of their fundamental justice and their compatibility with our ideals of equality and personhood. But surely a democracy depends ultimately on its ability to cultivate that kind of trust and commitment among many of its citizens. Chuck, you’ve let us down!

#####
Here are Abraham Lincoln’s closing words in his First Inaugural Address (March 4, 1861), expressing his commitment to preserve the Union:

While the people retain their virtue, and vigilance, no administration, by any extreme of wickedness or folly, can very seriously injure the government, in the short space of four years.

My countrymen, one and all, think calmly and well, upon this whole subject. Nothing valuable can be lost by taking time. If there be an object to hurry any of you, in hot haste, to a step which you would never take deliberately, that object will be frustrated by taking time; but no good object can be frustrated by it. Such of you as are now dissatisfied, still have the old Constitution unimpaired, and, on the sensitive point, the laws of your own framing under it; while the new administration will have no immediate power, if it would, to change either. If it were admitted that you who are dissatisfied, hold the right side in the dispute, there still is no single good reason for precipitate action. Intelligence, patriotism, Christianity, and a firm reliance on Him, who has never yet forsaken this favored land, are still competent to adjust, in the best way, all our present difficulty.

In your hands, my dissatisfied fellow countrymen, and not in mine, is the momentous issue of civil war. The government will not assail you. You can have no conflict, without being yourselves the aggressors. You have no oath registered in Heaven to destroy the government, while I shall have the most solemn one to “preserve, protect and defend” it.

I am loath to close. We are not enemies, but friends. We must not be enemies. Though passion may have strained, it must not break our bonds of affection. The mystic chords of memory, stretching from every battle-field, and patriot grave, to every living heart and hearthstone, all over this broad land, will yet swell the chorus of the Union, when again touched, as surely they will be, by the better angels of our nature.

Social factors driving technology

In a recent post I addressed the question of how social and political circumstances influence the direction of technological change (link). There I considered Thomas Hughes’s account of the development of electric power as a “socio-technological system”. Robert Pool’s 1997 book Beyond Engineering: How Society Shapes Technology is a synthetic study that likewise gives primary attention to the important question of how society shapes technology. He too highlights the importance of the “sociotechnical system” within which a technology emerges and develops:

Instead, I learned, one must look past the technology to the broader “sociotechnical system” — the social, political, economic, and institutional environments in which the technology develops and operates. The United States, France, and Italy provided very different settings for their nuclear technologies, and it shows. (kl 86)

Any modern technology, I found, is the product of a complex interplay between its designers and the larger society in which it develops. (kl 98)

Furthermore, a complex technology generally demands a complex organization to develop, build, and operate it, and these complex organizations create yet more difficulties and uncertainty. As we’ll see in chapter 8, organizational failures often underlie what at first seem to be failures of a technology. (kl 1890)

For all these reasons, modern technology is not simply the rational product of scientists and engineers that it is often advertised to be. Look closely at any technology today, from aircraft to the Internet, and you’ll find that it truly makes sense only when seen as part of the society in which it grew up. (kl 153)

Pool emphasizes the importance of social organization and large systems in the processes of technological development:

Meanwhile, the developers of technology have also been changing. A century ago, most innovation was done by individuals or small groups. Today, technological development tends to take place inside large, hierarchical organizations. This is particularly true for complex, large-scale technologies, since they demand large investments and extensive, coordinated development efforts. But large organizations inject into the development process a host of considerations that have little or nothing to do with engineering. Any institution has its own goals and concerns, its own set of capabilities and weaknesses, and its own biases about the best ways to do things. Inevitably, the scientists and engineers inside an institution are influenced — often quite unconsciously — by its culture.

There are a number of obvious ways in which social circumstances influence the creation and development of various technologies. For example:

  1. The availability of technical expertise through the educational system
  2. The ways in which consumer tastes are created, shaped, and expressed in the economic system
  3. The ways in which the political interests of government are expressed through research funding, legislation, and command
  4. The imperatives of national security and defense (World War II => radar, sonar, operations research, digital computers, cryptography, the atomic bomb, rockets and jet aviation, …)
  5. The needs of corporations and industry for technological change, supported by industry laboratories and government research funding
  6. The development of complex systems of organization of projects and efforts in pursuit of a goal, including the efforts of thousands of participants

Factors like these influence the direction of technology in a variety of ways. The first factor mentioned here has to do with the infrastructure needed to create expertise and instrumentation in science and engineering. The development of radar would have been impossible without preexisting expertise in radio technology and materials at MIT and elsewhere; the rapid development of atomic fission for reactors and weapons depended crucially on the availability of advanced expertise in physics, chemistry, materials, and instrumentation; and so on for virtually all the technologies that have transformed the world in the past seventy years. We might describe this as defining the “supply” side of technological change. Along with manufacturing and fabrication expertise, the availability of advanced engineering knowledge and research is a necessary condition for the development of new advanced technology.

The demand side of technological development is represented by the next several bullets. Clearly, in a market society the consumer tastes and wants of the public have a great deal of effect on the development of technology. Smart phones were difficult to imagine prior to the launch of the iPhone in 2007; and if there had been only limited demand for a device that takes photos and videos, plays music, makes phone calls, surfs the internet, and maintains email communication, the device would not have undergone the intensive development that it actually experienced. Many apparently “useful” consumer devices never find a space in the development and marketing process that allows them to come to maturity.

The development of the Internet illustrates the third and fourth items listed here. ARPANET was originally devised as a system of military and government communication. Advanced research in computer science and information theory was taking place during the 1960s, but without the stimulus of the government-funded Advanced Research Projects Agency and sponsorship by the Defense Communications Agency it is doubtful that the Internet would have developed — or would have developed with the characteristics it now possesses.

The fifth item, describing the needs and incentives that guide the efforts of industry and corporations at technological innovation, has clearly played a major role in the development of technology in the past half century as well. Consider agribusiness and the efforts of companies like Monsanto to gain exclusive intellectual property rights in seed lines and genetically engineered crops. These business interests stimulate research by companies in this industry toward the discovery of intellectual property that can be applied to technological change in agriculture — for the purpose of generating profits for the agribusiness corporation. Here is a brief description of this dynamic from the Guardian (link):

Monsanto, which has won its case against Bowman in lower courts, vociferously disagrees. It argues that it needs its patents in order to protect its business interests and provide a motivation for spending millions of dollars on research and development of hardier, disease-resistant seeds that can boost food yields.

Why are there no foot-pump devices for evacuating blood during surgery — an urgent need in developing countries where electric power is uncertain and highly expensive devices are difficult to acquire? The answer is fairly obvious: no medical-device company has a profit-based incentive to produce a device which will yield a profit of pennies. Therefore “sustainable technology” in support of healthcare in poor countries does not get developed. (Here are examples of technology innovations that would be helpful in rural healthcare in high-poverty countries that market-driven forces are never likely to develop; link.)

The final item mentioned above complements the first — the development of business organization systems parallels the development of systems of expertise and training at universities. Engineering, operations research, and organizational theory all progressed dramatically in the twentieth century, and the ways that they took shape influenced the direction and characteristics of the technologies that were developed. Thomas Hughes describes these complex systems of government, university, and business organizations in Rescuing Prometheus, a book that emphasizes the systems requirements of both engineering as a profession and the large organizations through which technologies are developed and managed. Particularly interesting are the examples of the SAGE early warning system and the ARPANET; in each case Hughes argues that these technologies could not have been accomplished without the creation of new frameworks of systems engineering and systems organization.

MIT assumed this special responsibility [of public service] wholeheartedly when it became the system builder for the SAGE Project (Semiautomatic Ground Environment), a computer-and radar-based air defense system created in the 1950s. The SAGE Project presents an unusual example of a university working closely with the military on a large-scale technological project during its design and development, with industry active in a secondary role. SAGE also provides an outstanding instance of system builders synthesizing organizational and technical innovation. It is as well an instructive case of engineers, managers, and scientists taking a systems and transdisciplinary approach. (15)

It is clear from these considerations and examples that technologies do not develop according to their own internal technical logic. Instead, they are invented, developed, and built out as the result of dozens of influences that are embodied in the social, economic, and political environment in which they emerge. And though neither Hughes nor Pool identifies directly with the researchers in the fields of the Social Construction of Technology (SCOT) and Science, Technology, and Society studies (STS), their findings converge to a substantial extent with the central ideas of those approaches. (Here are some earlier discussions of that approach; link, link, link.) Technology is socially embedded.

Explaining large historical change

Great events happen; people live through them; and both ordinary citizens and historians attempt to make sense of them. Examples of the kinds of events I have in mind include the collapse of communism in Eastern Europe and the USSR; the rise of fascism in Europe in the 1930s; the violent suppression of the Democracy Movement in Tiananmen Square; the turn to right-wing populism in Europe and the United States; and the Rwandan genocide in 1994. My purpose here is to identify some of the important intellectual and conceptual challenges that present themselves in the task of understanding events on this scale. My fundamental points are these: large-scale historical developments are deeply contingent; the scale at which we attempt to understand the event matters; and there is important variation across time, space, region, culture, and setting when it comes to the large historical questions we want to investigate. This means that it is crucial for historians to pay attention to the particulars of institutions, knowledge systems, and social actors that combined to create a range of historical outcomes through a highly contingent and path-dependent process. The question for historiography is this: how can historians do the best job possible of discovering, documenting, and organizing their accounts of these kinds of complex historical happenings?

Is an historical period or episode an objective thing? It is not. Rather, it is an assemblage of different currents, forces, individual actors, institutional realities, international pressures, and popular claims, and there are many different “stories” that we can tell about the period. This is not a claim for relativism or subjectivism; it is rather a simple point, well understood by social scientists and historians: a social and historical realm is a dense soup of often conflicting tendencies, forces, and agencies. Weber understood this point in his classic essay “’Objectivity’ in Social Science” when he said that history must be constantly re-invented by successive generations of historians: “There is no absolutely “objective” scientific analysis of culture—or put perhaps more narrowly but certainly not essentially differently for our purposes—of “social phenomena” independent of special and “one-sided” viewpoints according to which—expressly or tacitly, consciously or unconsciously—they are selected, analyzed and organized for expository purposes” (Weber 1949: 72). Think of the radically different accounts offered of the French Revolution by Albert Soboul, Simon Schama, and Alexis de Tocqueville; and yet each offers insightful, honest, and “objective” interpretations of part of the history of this complex event.

We need to recall always that socially situated actors make history. History is social action in time, performed by a specific population of actors, within a specific set of social arrangements and institutions. Individuals act, contribute to social institutions, and contribute to change. People had beliefs and modes of behavior in the past. They did various things. Their activities were embedded within, and in turn constituted, social institutions at a variety of levels. Social institutions, structures, and ideologies supervene upon the historical individuals of a time. Institutions have great depth, breadth, and complexity. Institutions, structures, and ideologies display dynamics of change that derive ultimately from the mentalities and actions of the individuals who inhabit them during a period of time. And both behavior and institutions change over time.

This picture needs of course to reflect the social setting within which individuals develop and act. Our account of the “flow” of human action eventuating in historical change needs to take into account the institutional and structural environment in which these actions take place. Part of the “topography” of a period of historical change is the ensemble of institutions that exist more or less stably in the period: cultural arrangements, property relations, political institutions, family structures, educational practices. But institutions are heterogeneous and plastic, and they are themselves the product of social action. So historical explanations need to be sophisticated in their treatment of institutions and structures.

In Marx’s famous contribution to the philosophy of history, he writes that “men make their own history; but not in circumstances of their own choosing.” And circumstances can be both inhibiting and enabling; they constitute the environment within which individuals plan and act. It is an important circumstance that a given time possesses a fund of scientific and technical knowledge, a set of social relationships of power, and a level of material productivity. It is also an important circumstance that knowledge is limited; that coercion exists; and that resources for action are limited. Within these opportunities and limitations, individuals, from leaders to ordinary people, make out their lives and ambitions through action.

On this line of thought, history is a flow of human action, constrained and propelled by a shifting set of environmental conditions (material, social, epistemic). There are conditions and events that can be described in causal terms: enabling conditions, instigating conditions, cause and effect, … But here my point is to ask you to consider whether uncritical use of the language of cause and effect does not perhaps impose a discreteness of historical events that does not actually reflect the flow of history very well. It is of course fine to refer to historical causes; but we always need to understand that causes depend upon the structured actions of socially constituted individual actors.

A crucial idea in the new philosophy of history is the fact of historical contingency. Historical events are the result of the conjunction of separate strands of causation and influence, each of which contains its own inherent contingency. Social change and historical events are highly contingent processes, in a specific sense: they are the result of multiple influences that “could have been otherwise” and that have conjoined at a particular point in time in bringing about an event of interest. And coincidence, accident, and unanticipated actions by participants and bystanders all lead to a deepening of the contingency of historical outcomes. However, the fact that social outcomes have a high degree of contingency is entirely consistent with the idea that a social order embodies a broad collection of causal processes and mechanisms. These causal mechanisms are a valid subject of study – even though they do not contribute to a deterministic causal order.

What about scale? Should historians take a micro view, concentrating on local actions and details; or should they take a macro view, seeking out the highest level structures and patterns that might be visible in history? Both perspectives have important shortcomings. There is a third choice available to the historian, however, that addresses shortcomings of both micro- and macro-history. This is to choose a scale that encompasses enough time and space to be genuinely interesting and important, but not so much as to defy valid analysis. This level of scale might be regional – for example, G. William Skinner’s analysis of the macro-regions of China. It might be national – for example, a social history of Indonesia. And it might be supra-national – for example, an economic history of Western Europe. The key point is that historians in this middle range are free to choose the scale of analysis that seems to permit the best level of conceptualization of history, given the evidence that is available and the social processes that appear to be at work. And this mid-level scale permits the historian to make substantive judgments about the “reach” of social processes that are likely to play a causal role in the story that needs telling. This level of analysis can be referred to as “meso-history,” and it appears to offer an ideal mix of specificity and generality.

Here is one strong impression that emerges from almost any area of rigorous historical writing. Variation within a social or historical phenomenon seems to be all but ubiquitous. Think of the Cultural Revolution in China, demographic transition in early modern Europe, the ideology of a market society, or the experience of being black in America. We have the noun — “Cultural Revolution”, “European fascism”, “democratic transition” — which can be explained or defined in a sentence or two; and we have the complex underlying social realities to which it refers, spread out over many regions, cities, populations, and decades.

In each case there is a very concrete and visible degree of variation in the phenomenon over time and place. Historical and social research in a wide variety of fields confirms the non-homogeneity of social phenomena and the profound location-specific variations that occur in the characteristics of virtually all large social phenomena. Social nouns do not generally designate uniform social realities. These facts of local and regional variation provide an immediate rationale for case studies and comparative research, selecting different venues of the phenomenon and identifying its specific features in each location. Through a range of case studies it is possible for the research community to map out both common features and distinguishing features of a given social process.

What is the upshot of these observations? It is that good historical writing needs to be attentive to difference — difference across national settings, across social groups, across time; that it should be grounded in many theories of how social processes work, but wedded to none; and that it should pay close attention to the evolution of the social arrangements (institutions) through which individuals conduct their social lives. I hope these remarks also help to make the case that philosophers can be helpful contributors to the work that historians do, by assisting in teasing out some of the conceptual and philosophical issues that they inevitably must confront as they do their work.

Slime mold intelligence

We often think of intelligent action in terms of a number of ideas: goal-directedness, belief acquisition, planning, prioritization of needs and wants, oversight and management of bodily behavior, and weighting of risks and benefits of alternative courses of action. These assumptions presuppose the existence of the rational subject who actively orchestrates goals, beliefs, and priorities into an intelligent plan of action. (Here is a series of posts on “rational life plans”; link, link, link.)

It is interesting to discover that some simple adaptive systems apparently embody an ability to modify behavior so as to achieve a specific goal without possessing a number of these cognitive and computational functions. These systems seem to embody some kind of cross-temporal intelligence. An example that is worth considering is the spatial and logistical capabilities of the slime mold. A slime mold is a multi-cellular “organism” consisting of large numbers of independent cells without a central control function or nervous system. It is perhaps more accurate to refer to the population as a colony rather than an organism. Nonetheless the slime mold has a remarkable ability to seek out and “optimize” access to food sources in the environment through the creation of a dynamic network of tubules established through space.

The slime mold lacks beliefs, it lacks a central cognitive function or executive function, it lacks “memory” — and yet the organism (colony?) achieves a surprising level of efficiency in exploring and exploiting the food environment that surrounds it. Researchers have used slime molds to simulate the structure of logistical networks (rail and road networks, telephone and data networks), and the results are striking. A slime mold colony appears to be “intelligent” in performing the task of efficiently discovering and exploiting food sources in the environment in which it finds itself.

One of the earliest explorations of this parallel between biological networks and human-designed networks was Tero et al, “Rules for Biologically Inspired Adaptive Network Design” in Science in 2010 (link). Here is the abstract of their article:

Transport networks are ubiquitous in both social and biological systems. Robust network performance involves a complex trade-off involving cost, transport efficiency, and fault tolerance. Biological networks have been honed by many cycles of evolutionary selection pressure and are likely to yield reasonable solutions to such combinatorial optimization problems. Furthermore, they develop without centralized control and may represent a readily scalable solution for growing networks in general. We show that the slime mold Physarum polycephalum forms networks with comparable efficiency, fault tolerance, and cost to those of real-world infrastructure networks—in this case, the Tokyo rail system. The core mechanisms needed for adaptive network formation can be captured in a biologically inspired mathematical model that may be useful to guide network construction in other domains.

Their conclusion is this:

Overall, we conclude that the Physarum networks showed characteristics similar to those of the [Japanese] rail network in terms of cost, transport efficiency, and fault tolerance. However, the Physarum networks self-organized without centralized control or explicit global information by a process of selective reinforcement of preferred routes and simultaneous removal of redundant connections. (441)

They attempt to uncover the mechanism through which this selective reinforcement of routes takes place, using a simulation “based on feedback loops between the thickness of each tube and internal protoplasmic flow in which high rates of streaming stimulate an increase in tube diameter, whereas tubes tend to decline at low flow rates” (441). The simulation is successful in approximately reproducing the observable dynamics of evolution of the slime mold networks. Here is their summary of the simulation:

Our biologically inspired mathematical model can capture the basic dynamics of network adaptability through iteration of local rules and produces solutions with properties comparable to or better than those of real-world infrastructure networks. Furthermore, the model has a number of tunable parameters that allow adjustment of the benefit-cost ratio to increase specific features, such as fault tolerance or transport efficiency, while keeping costs low. Such a model may provide a useful starting point to improve routing protocols and topology control for self-organized networks such as remote sensor arrays, mobile ad hoc networks, or wireless mesh networks. (442)
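The feedback rule the authors describe — tube conductivity reinforced by the flow it carries, uniform decay otherwise — is easy to sketch in simulation. The following toy model is my own illustration, not the authors’ published code: the four-node graph, unit tube lengths, linear growth function, and time step are all invented for the example.

```python
# Toy version of the reinforcement mechanism: conductivities D grow with
# the flow each tube carries and decay otherwise. Graph, lengths, growth
# rule, and time step are invented for illustration.

nodes = [0, 1, 2, 3]
# two parallel routes from node 0 to node 3, plus a transverse tube (1, 2)
edges = [(0, 1), (1, 3), (0, 2), (2, 3), (1, 2)]
length = {e: 1.0 for e in edges}
D = {e: 1.0 for e in edges}          # initial tube conductivities
source, sink = 0, 3                  # the two food sources driving flow

def solve_pressures(D):
    """Solve Kirchhoff's current law for node pressures (Gaussian elimination)."""
    n = len(nodes)
    A = [[0.0] * n for _ in range(n)]
    b = [0.0] * n
    for (i, j) in edges:
        g = D[(i, j)] / length[(i, j)]
        A[i][i] += g; A[j][j] += g
        A[i][j] -= g; A[j][i] -= g
    b[source] = 1.0                  # unit flux injected at the source...
    b[sink] = -1.0                   # ...and withdrawn at the sink
    # pin p[sink] = 0 to remove the singular degree of freedom
    A[sink] = [0.0] * n; A[sink][sink] = 1.0; b[sink] = 0.0
    for col in range(n):             # elimination with partial pivoting
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            b[r] -= f * b[col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
    p = [0.0] * n
    for r in range(n - 1, -1, -1):
        s = sum(A[r][c] * p[c] for c in range(r + 1, n))
        p[r] = (b[r] - s) / A[r][r]
    return p

dt = 0.2
for _ in range(200):
    p = solve_pressures(D)
    for (i, j) in edges:
        Q = D[(i, j)] / length[(i, j)] * (p[i] - p[j])   # flow in tube (i, j)
        D[(i, j)] += dt * (abs(Q) - D[(i, j)])           # reinforce or decay
```

Run on this symmetric graph, the two source-to-sink routes settle at a steady conductivity while the transverse tube (1, 2), which carries no net flow, decays toward zero — selective reinforcement of preferred routes and removal of redundant connections in miniature.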

Here is a summary description of what we might describe as the “spatial problem-solving abilities” of the slime mold based on this research by Katherine Harmon in a Scientific American blog post (link):

Like the humans behind a constructed network, the organism is interested in saving costs while maximizing utility. In fact, the researchers wrote that this slimy single-celled amoeboid can “find the shortest path through a maze or connect different arrays of food sources in an efficient manner with low total length yet short average minimum distances between pairs of food sources, with a high degree of fault tolerance to accidental disconnection”—and all without the benefit of “centralized control or explicit global information.” In other words, it can build highly efficient connective networks without the help of a planning board.
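The three criteria invoked in this passage — low total length, short average distances between food sources, and fault tolerance against a broken connection — can each be made precise and computed for any candidate network. Here is a minimal sketch of those computations; the five-edge toy graph is invented, and none of its numbers come from the study.

```python
# Computing the three criteria for a toy network: cost = total tube length;
# efficiency = average shortest distance between node pairs; fault
# tolerance = fraction of single-edge failures the network survives.

INF = float("inf")

def floyd_warshall(n, edges):
    """All-pairs shortest distances for an undirected weighted graph."""
    d = [[0.0 if i == j else INF for j in range(n)] for i in range(n)]
    for (i, j), w in edges.items():
        d[i][j] = min(d[i][j], w)
        d[j][i] = min(d[j][i], w)
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return d

def network_metrics(n, edges):
    cost = sum(edges.values())                       # total tube length
    d = floyd_warshall(n, edges)
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    avg_dist = sum(d[i][j] for i, j in pairs) / len(pairs)
    survives = 0                                     # single-edge failures survived
    for e in edges:
        reduced = {k: w for k, w in edges.items() if k != e}
        dd = floyd_warshall(n, reduced)
        if all(dd[i][j] < INF for i, j in pairs):
            survives += 1
    return cost, avg_dist, survives / len(edges)

# A ring of four "food sources" plus one diagonal shortcut
edges = {(0, 1): 1.0, (1, 2): 1.0, (2, 3): 1.0, (3, 0): 1.0, (0, 2): 1.4}
cost, avg_dist, fault_tolerance = network_metrics(4, edges)
```

Comparing candidate networks on these three numbers is exactly the trade-off the researchers evaluate: adding tubes raises cost but lowers average distance and raises fault tolerance.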

This research has several noteworthy features. First, it seems to provide a satisfactory account of the mechanism through which slime mold “network design intelligence” is achieved. Second, the explanation depends only on locally embodied responses, without needing to appeal to any sort of central coordination or calculation. The process is entirely myopic and locally embodied, and the “global intelligence” of the colony is entirely generated by the locally embodied action states of the individual mold cells. And finally, the simulation appears to offer resources for solving real problems of network design, without the trouble of sending out a swarm of slime mold colonies to work out the most efficient array of connectors.

We might summarize this level of slime-mold intelligence as being captured by:

  • trial-and-error extension of lines of exploration
  • localized feedback on results of a given line leading to increase/decrease of the volume of that line

This system is decentralized and myopic with no ability to plan over time and no “over-the-horizon” vision of potential gains from new lines of exploration. In these respects slime-mold intelligence has a lot in common with the evolution of species in a given ecological environment. It is an example of “climbing Mt. Improbable” involving random variation and selection based on a single parameter (volume of flow rather than reproductive fitness). If this is a valid analogy, then we might be led to expect that the slime mold is capable of finding local optima in network design but not global optima. (Or the slime colony may avoid this trap by being able to fully explore the space of network configurations over time.) What the myopia of this process precludes is the possibility of strategic action and planning — absorbing sacrifices at an early part of the process in order to achieve greater gains later in the process. Slime molds would not be very good at chess, Go, or war.
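The difference between local and global optima that this analogy suggests can be shown in a few lines. The landscape and starting points below are invented for illustration: a purely myopic climber that accepts only immediately improving moves stops at whichever peak is nearest, exactly the trap that strategic, “over-the-horizon” planning avoids.

```python
def hill_climb(f, x, step=1, max_steps=1000):
    """Greedy local search: move to a neighboring point only if it strictly improves f."""
    for _ in range(max_steps):
        best = max([x - step, x + step], key=f)
        if f(best) <= f(x):
            break        # no improving neighbor: a local optimum
        x = best
    return x

# A two-peaked landscape: a low peak at x=10 (height 5), a high peak at x=30 (height 9)
def landscape(x):
    return max(5 - abs(x - 10), 9 - abs(x - 30), 0)

near_low = hill_climb(landscape, 8)     # trapped on the inferior peak at x=10
near_high = hill_climb(landscape, 27)   # reaches the global peak at x=30 only by starting nearby
```

A climber starting at x=8 ends on the lower peak even though a better solution exists elsewhere; nothing in the myopic rule lets it accept a temporary loss to cross the valley.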

I’ve been tempted to offer the example of slime mold intelligence as a description of several important social processes apparently involving collective intentionality: corporate behavior and discovery of pharmaceuticals (link) and the aggregate behavior of large government agencies (link).

On pharmaceutical companies:

So here’s the question for consideration here: what if we attempted to model the system of population, disease, and the pharmaceutical industry by representing pharma and its multiple research and discovery units as the slime organism and the disease space as a set of disease populations with different profitability characteristics? Would we see a major concentration of pharma slime around a few high-frequency, high profit disease-drug pairs? Would we see substantial under-investment of pharma slime on low frequency low profit “orphan” disease populations? And would we see hyper-concentrations around diseases whose incidence is responsive to marketing and diagnostic standards? (link)

On the “intelligence” of firms and agencies:

But it is perfectly plain that the behavior of functional units within agencies is only loosely controlled by the will of the executive. This does not mean that executives have no control over the activities and priorities of subordinate units. But it does reflect a simple and unavoidable fact about large organizations. An organization is more like a slime mold than it is like a control algorithm in a factory. (link)

In each instance the analogy works best when we emphasize the relative weakness of central strategic control (executives) and the solution-seeking activities of local units. But of course there is a substantial degree of executive involvement in both private and public organizations — not fully effective, not algorithmic, but present nonetheless. So the analogy is imperfect. It might be more accurate to say that the behavior of large complex organizations incorporates both imperfect central executive control and the activities of local units with myopic search capabilities coupled with feedback mechanisms. The resulting behavior of such a system will not look at all like the idealized business-school model of “fully implemented rational business plans”, but it will also not look like a purely localized resource-maximizing network of activities.

******

Here is a very interesting set of course notes in which Prof. Donglei Du from the University of New Brunswick sets the terms for a computational and heuristic solution to a similar set of logistics problems. Du asks his students to consider the optimal locations of warehouses to supply retailers in multiple locations; link. Here is how Du formulates the problem:
Assuming that plants and retailer locations are fixed, we concentrate on the following strategic decisions in terms of warehouses.

  • Pick the optimal number, location, and size of warehouses 
  • Determine optimal sourcing strategy
    • Which plant/vendor should produce which product 
  • Determine best distribution channels
    • Which warehouses should service which retailers

The objective is to design or reconfigure the logistics network so as to minimize annual system-wide costs, including

  • Production/purchasing costs
  • Inventory carrying costs, and facility costs (handling and fixed costs)
  • Transportation costs

As Du demonstrates, the mathematics involved in an exact solution are challenging, and become rapidly more difficult as the number of nodes increases.
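To make the combinatorial difficulty concrete, here is a minimal brute-force sketch of a facility-location problem of the kind Du describes. All site names, coordinates, and costs are invented for illustration; a real instance would use specialized optimization solvers rather than exhaustive search, precisely because the search space doubles with each additional candidate site.

```python
from itertools import combinations

# Hypothetical candidate warehouse sites (coordinates on a grid) and
# annual fixed facility costs; retailer locations are likewise invented.
sites = {"W1": (0, 0), "W2": (5, 5), "W3": (10, 0)}
fixed_cost = {"W1": 40, "W2": 55, "W3": 35}
retailers = [(1, 1), (4, 6), (9, 1), (6, 4), (2, 5)]

def dist(a, b):
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

def total_cost(open_sites):
    # Each retailer is served by its nearest open warehouse; transport
    # cost is taken as proportional to straight-line distance.
    transport = sum(min(dist(sites[w], r) for w in open_sites) for r in retailers)
    return sum(fixed_cost[w] for w in open_sites) + transport

# Exhaustive search over all nonempty subsets of candidate sites:
# 2^n - 1 configurations, which is why exact solutions scale badly
# as the number of nodes increases.
best = min(
    (frozenset(c) for k in range(1, len(sites) + 1)
     for c in combinations(sites, k)),
    key=total_cost,
)
print(sorted(best), round(total_cost(best), 2))
```

In this toy instance a single warehouse turns out to be optimal, because the fixed cost of opening a second site outweighs the transport savings; changing the cost parameters shifts the answer, which is the strategic trade-off Du's formulation captures.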

Even though this example looks rather similar to the rail system example above, it is difficult to see how it might be modeled using a slime mold colony. The challenge seems to be that the optimization problem here is the question of placement of nodes (warehouses) rather than placement of routes (tubules).

Methods of causal inquiry

This diagram provides a map of an extensive set of methods of causal inquiry in the social sciences. The goal here is to show that the many approaches that social scientists have taken to discovering causal relationships have an underlying order, and they can be related to a small number of ontological ideas about social causation. (Here is a higher resolution version of the image; link.)

We begin with the idea that causation involves the production of an outcome by a prior set of conditions mediated by a mechanism. The task of causal inquiry is to discover the events, conditions, and processes that combine to bring about the outcome of interest. Given that causal relationships are often unobservable and complexly intertwined with multiple other causal processes, we need to have methods of inquiry to allow us to use observable evidence and hypothetical theories about causal mechanisms to discover valid causal relationships.

The upper left node of the diagram reviews the basic elements of the ontology of social causation. It gives priority to the idea of causal realism — the view that social causes are real and inhere in a substrate of social action constituted by social actors and their relations and interactions. This substrate supports the existence of causal mechanisms (and powers) through which causal relations unfold. Causes are often manifest in a set of necessary and/or sufficient conditions, and they support (and are supported by) counterfactual statements — our reasoning about what would have occurred in somewhat different circumstances: if X had not occurred, Y would not have occurred. The important qualification to the simple idea of exceptionless causation is the fact that much causation is probabilistic rather than exceptionless: the cause increases (or decreases) the likelihood of occurrence of its effect. Both exceptionless causation and probabilistic causation support the basic Humean idea that causal relations are often manifest in observable regularities.

These features of real causal relations give rise to a handful of different methods of inquiry.

First, there is a family of methods of causal inquiry that involve search for underlying causal mechanisms. These include process tracing, individual case studies, paired comparisons, comparative historical sociology, and the application of theories of the middle range.

Second, the ontology of generative causal mechanisms suggests the possibility of simulations as a way of probing the probable workings of a hypothetical mechanism. Agent-based models and computational simulations more generally are formal attempts to identify the dynamics of the mechanisms postulated to bring about specific social outcomes.
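One classic generative mechanism that is simple enough to simulate directly is a threshold model of collective behavior in the spirit of Granovetter: each agent joins a collective action once the fraction of prior joiners meets that agent's individual threshold. The sketch below uses invented threshold distributions to show why such simulations are informative: two nearly identical micro-level populations can generate radically different macro-level outcomes.

```python
# Minimal sketch of an agent-based threshold mechanism (Granovetter-style).
# Thresholds are fractions of the population; an agent joins once the
# current fraction of joiners reaches its threshold.
def run_cascade(thresholds):
    n = len(thresholds)
    joined = [False] * n
    changed = True
    while changed:
        changed = False
        frac = sum(joined) / n
        for i, t in enumerate(thresholds):
            if not joined[i] and frac >= t:
                joined[i] = True
                changed = True
    return sum(joined)

# Hypothetical populations: a uniform threshold distribution produces a
# full cascade; removing a single agent's threshold value halts it at one.
uniform = [i / 100 for i in range(100)]
gapped = [t if t != 0.01 else 0.02 for t in uniform]
print(run_cascade(uniform), run_cascade(gapped))
```

The point of the exercise is exactly the one made above: the simulation probes the probable workings of a hypothesized mechanism, revealing aggregate dynamics that are not obvious from the micro-level assumptions alone.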

Third, the fact that causes produce their effects supports the use of experimental methods. Both exceptionless and probabilistic causation support experimentation: the researcher attempts to discern causation by creating a pair of experimental settings differing only in the presence or absence of the “treatment” (the hypothetical causal agent), and observing the outcome.
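The logic of a randomized experiment for a probabilistic cause can be sketched in a few lines. In this toy simulation (all probabilities invented), a "treatment" raises the outcome's probability from 0.2 to 0.5; because the investigator controls the random allocation of individuals to conditions, the two groups are comparable in expectation, and the difference in outcome rates estimates the treatment's effect.

```python
import random

random.seed(42)

# Hypothetical randomized experiment: 10,000 subjects per arm, a
# baseline outcome probability of 0.2, and an invented treatment
# effect that raises it to 0.5.
n = 10_000
subjects = list(range(2 * n))
random.shuffle(subjects)               # the investigator controls allocation
treatment, control = subjects[:n], subjects[n:]

def outcome(treated):
    # Probabilistic causation: treatment changes the outcome probability.
    p = 0.5 if treated else 0.2
    return random.random() < p

rate_t = sum(outcome(True) for _ in treatment) / n
rate_c = sum(outcome(False) for _ in control) / n
print(round(rate_t - rate_c, 2))       # estimated treatment effect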

Fourth, the fact that exceptionless causation produces a set of relationships among events that illustrate the logic of necessary and sufficient conditions permits a family of methods inspired by J. S. Mill's methods of agreement and difference. If we can identify all potentially relevant causal factors for the occurrence of an outcome, and if we can discover a real case illustrating every combination of presence and absence of those factors and the outcome of interest, then we can use truth-functional logic to infer the necessary and/or sufficient conditions that produce the outcome. These results constitute J. L. Mackie's INUS conditions for the causal system under study (insufficient but non-redundant parts of a condition which is itself unnecessary but sufficient for the occurrence of the effect). Charles Ragin's Boolean methods and fuzzy-set theories of causal analysis and the method of qualitative comparative analysis (QCA) conform to the same logical structure.
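The Boolean logic underlying Mill's methods and Ragin's QCA can be illustrated with a small invented truth table. Each case is coded for the presence (1) or absence (0) of factors A, B, C and the outcome O; simple set comparisons then identify necessary factors, sufficient factors, and sufficient conjunctions.

```python
# Invented cases, coded (A, B, C, O). A real QCA would use empirical
# cases and dedicated software; this only shows the logical structure.
cases = [
    (1, 1, 0, 1),
    (1, 0, 1, 1),
    (1, 0, 0, 0),
    (0, 1, 1, 0),
    (0, 1, 0, 0),
]
factors = ["A", "B", "C"]

def necessary(i):
    # Necessary: present in every case in which the outcome occurs.
    return all(case[i] == 1 for case in cases if case[3] == 1)

def sufficient(i):
    # Sufficient: the outcome occurs whenever the factor is present.
    return all(case[3] == 1 for case in cases if case[i] == 1)

def sufficient_pair(i, j):
    # Sufficient conjunction of two factors.
    return all(case[3] == 1 for case in cases if case[i] == 1 and case[j] == 1)

print("necessary:", [f for i, f in enumerate(factors) if necessary(i)])
print("sufficient alone:", [f for i, f in enumerate(factors) if sufficient(i)])
print("sufficient pairs:", [(factors[i], factors[j])
                            for i in range(3) for j in range(i + 1, 3)
                            if sufficient_pair(i, j)])
```

In this toy table A is necessary but not sufficient, and the conjunctions A&B and A&C are each sufficient but unnecessary; B is thus an insufficient but non-redundant part of the unnecessary-but-sufficient condition A&B — exactly Mackie's INUS structure.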

Probabilistic causation cannot be discovered using these Boolean methods, but it is possible to use statistical and probabilistic methods in application to large datasets to discover facilitating and inhibiting conditions and multifactorial and conjunctural causal relations. Statistical analysis can produce evidence of what Wesley Salmon refers to as “causal relevance” (conditional probabilities that differ from background population probabilities). This is expressed as: P(O | A&B&C) ≠ P(O).
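Salmon's causal relevance test is easy to state computationally: compare the conditional probability of the outcome given a set of conditions with its unconditional (background) probability. The sketch below uses a small invented dataset purely to show the comparison.

```python
# Invented records coded for conditions A, B, C and outcome O.
records = [
    {"A": 1, "B": 1, "C": 1, "O": 1},
    {"A": 1, "B": 1, "C": 1, "O": 1},
    {"A": 1, "B": 1, "C": 1, "O": 0},
    {"A": 1, "B": 0, "C": 1, "O": 0},
    {"A": 0, "B": 1, "C": 0, "O": 0},
    {"A": 0, "B": 0, "C": 0, "O": 1},
    {"A": 0, "B": 0, "C": 1, "O": 0},
    {"A": 0, "B": 1, "C": 1, "O": 0},
]

def prob(pred):
    # Relative frequency of the outcome among records satisfying pred.
    hits = [r for r in records if pred(r)]
    return sum(r["O"] for r in hits) / len(hits)

p_background = prob(lambda r: True)                           # P(O)
p_conditional = prob(lambda r: r["A"] and r["B"] and r["C"])  # P(O | A&B&C)

# A, B, C are jointly causally relevant to O if the two differ.
print(p_background, p_conditional)
```

Here the background rate is 3/8 while the conditional rate is 2/3, so the conjunction A&B&C is causally relevant in Salmon's sense; with real data one would of course also test whether the difference exceeds sampling error.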

Finally, the fact that causal factors can be relied upon to give rise to some kind of statistical association between factors and outcomes supports the application of methods of inquiry involving regression, correlation analysis, and structural equation modeling.

It is important to emphasize that none of these methods is privileged over all the others, and none permits a purely inductive or empirical study to arrive at valid claims about causation. Instead, we need to have hypotheses about the mechanisms and powers that underlie the causal relationships we identify, and the features of the causal substrate that give these mechanisms their force. In particular, it is sometimes believed that experimental methods, random controlled trials, or purely statistical analysis of large datasets can establish causation without reference to hypothesis and theory. However, none of these claims stands up to scrutiny. There is no “gold standard” of causal inquiry.

This means that causal inquiry requires a plurality of methods of investigation, and it requires that we arrive at theories and hypotheses about the real underlying causal mechanisms and substrate that give rise to (“generate”) the outcomes that we observe.
