Brian Epstein’s radical metaphysics

Brian Epstein is adamant that the social sciences need to think very differently about the nature of the social world. In The Ant Trap: Rebuilding the Foundations of the Social Sciences he sets out to blow up our conventional thinking about the relation between individuals and social facts. In particular, he is fundamentally skeptical about any conception of the social world that depends on the idea of ontological individualism, directly or indirectly. Here is the plainest statement of his view:

When we look more closely at the social world, however, this analogy [of composition of wholes out of independent parts] falls apart. We often think of social facts as depending on people, as being created by people, as the actions of people. We think of them as products of the mental processes, intentions, beliefs, habits, and practices of individual people. But none of this is quite right. Research programs in the social sciences are built on a shaky understanding of the most fundamental question of all: What are the social sciences about? Or, more specifically: What are social facts, social objects, and social phenomena—these things that the social sciences aim to model and explain?

My aim in this book is to take a first step in challenging what has come to be the settled view on these questions. That is, to demonstrate that philosophers and social scientists have an overly anthropocentric picture of the social world. How the social world is built is not a mystery, not magical or inscrutable or beyond us. But it turns out to be not nearly as people-centered as is widely assumed. (p. 7)

Here is one key example Epstein provides to give an intuitive grasp of the anti-reductionist metaphysics he has in mind — the relationship between “the Supreme Court” and the nine individuals who make it up.

One of the examples I will be discussing in some detail is the United States Supreme Court. It is small — nine members — and very familiar, so there are lots of facts about it we can easily consider. Even a moment’s reflection is enough to see that a great many facts about the Supreme Court depend on much more than those nine people. The powers of the Supreme Court are not determined by the nine justices, nor do the nine justices even determine who the members of the Supreme Court are. Even more basic, the very existence of the Supreme Court is not determined by those nine people. In all, knowing all kinds of things about the people that constitute the Supreme Court gives us very little information about what that group is, or about even the most basic facts about that group. (p. 10)

Epstein makes an important observation when he notes that there are two “consensus” views of the individual-level substrate of the social world, not just one. The first is garden-variety individualism: it is individuals and their properties (psychological, bodily), involved in external relations with each other, that constitute the individual-level substrate of the social. In this case it is reasonable to apply the supervenience relation to the relation between individuals and higher-level social facts (link).

The second view is more of a social-constructivist orientation towards individuals: individuals are constituted by their representations of themselves and others; the individual level is inherently semiotic and relational. Epstein associates this view with Searle (50 ff.); but it seems to characterize a range of other theorists as well, from Geertz to Goffman and Garfinkel. Epstein refers to this approach as the “Standard Model” of social ontology. Fundamental to the Standard Model is the idea of institutional facts — the rules of a game, the boundaries of a village, the persistence of a paper currency. Institutional facts are held in place by the attitudes and performances of the individuals who inhabit them; but they are not reducible to an ensemble of individual-level psychological facts. And the constructionist part of the approach is the idea that actors jointly constitute various social realities — a demonstration against the government, a celebration, or a game of bridge. And Epstein believes that supervenience fails in the constructivist ontology of the Standard Model (57).

Both views are anti-dualistic (no inherent social “stuff”); but on Epstein’s approach they are ultimately incompatible with each other.

But here is the critical point: Epstein doesn’t believe that either of these views is adequate as a basis for social metaphysics. We need a new beginning in the metaphysics of the social world. Where to start this radical work? Epstein offers several new concepts to help reshape our metaphysical language about social facts — what he refers to as “grounding” and “anchoring” of social facts. “Grounding” facts for a social fact M are lower-level facts that help to constitute the truth of M. “Bob and Jane ran down Howe Street” partially grounds the fact “the mob ran down Howe Street” (M). The fact about Bob and Jane is one of the features of the world that contributes to the truth and meaning of M. “Full grounding” is a specification of all the facts needed in order to account for M. “Anchoring” facts are facts that characterize the constructivist aspect of the social world — conformance to meanings, rules, or institutional structures. An anchoring fact is one that sets the “frame” for a social fact. (An earlier post offered reflections on anchor individualism; link.)

Epstein suggests that “grounding” corresponds to classic ontological individualism, while “anchoring” corresponds to the Standard Model (the constructivist view).

What I will call “anchor individualism” is a claim about how frame principles can be anchored. Ontological individualism, in contrast, is best understood as a claim about how social facts can be grounded. (100)

And he believes that a more adequate social ontology is one that incorporates both grounding and anchoring relations. “Anchoring and grounding fit together into a single model of social ontology” (82).

(Epstein provides an illustrative diagram of how the two kinds of relations work together in a particular social fact; Epstein 94.)

So Epstein has done what he set out to do: he has taken the metaphysics of the social world as seriously as contemporary metaphysicians do other important topics, and he has teased out a large body of difficult questions about constitution, causation, formation, grounding, and anchoring. This is a valuable and innovative contribution to the philosophy of social science.

But does this exercise add significantly to our ability to conduct social science research and theory? Do James Coleman, Sam Popkin, Jim Scott, George Steinmetz, or Chuck Tilly need to fundamentally rethink their approach to the social problems they attempted to understand in their work? Do the metaphysics of “frame”, “ground”, and “anchor” make for better social research?

My inclination is to think that this is not an advantage we can attribute to The Ant Trap. Clarity, precision, surprising conceptual formulations, yes; these are all virtues of the book. But I am not convinced that these conceptual innovations will actually make the work of explaining industrial actions, rebellious behavior, organizational failures, educational systems that fail, or the rise of hate-based extremism more effective or insightful.

In order to do good social research we do of course need to have a background ontology. But after working through The Ant Trap several times, I’m still not persuaded that we need to move beyond a fairly commonsensical set of ideas about the social world:

  • individuals have mental representations of the world they inhabit
  • institutional arrangements exist through which individuals develop, form, and act
  • individuals form meaningful relationships with other individuals
  • individuals have complicated motivations, including self-interest, commitment, emotional attachment, political passion
  • institutions and norms are embodied in the thoughts, actions, artifacts, and traces of individuals (grounded and anchored, in Epstein’s terms)
  • social causation proceeds through the substrate of individuals thinking, acting, re-acting, and engaging with other individuals

These are the assumptions that I have in mind when I refer to “actor-centered sociology” (link). This is not a sophisticated philosophical theory of social metaphysics; but it is fully adequate to ground a realist and empirically informed effort to understand the social world around us. And nothing in The Ant Trap leads me to believe that there are fundamental conceptual impossibilities embedded in these simple, mundane individualistic ideas about the social world.

And this leads me to one other conclusion: Epstein argues that the social sciences need to think fundamentally differently. But I think he has shown, at best, that philosophers can usefully think differently — in ways that may in the end not have much impact on how inventive social theorists conceive of their work.

(The photo at the top is chosen deliberately to embody the view of the social world that I advocate: contingent, institutionally constrained, multi-layered, ordinary, subject to historical influences, constituted by indefinite numbers of independent actors, demonstrating patterns of coordination and competition. All these features are illustrated in this snapshot of life in Copenhagen — the independent individuals depicted, the traffic laws that constrain their behavior, the polite norms leading to conformance to the crossing signal, the sustained effort by municipal actors and community based organizations to encourage bicycle travel, and perhaps the lack of diversity in the crowd.)

Complexity and contingency

One of the more intriguing currents of social science research today is the field of complexity theory. Scientists like John Holland (Complexity: A Very Short Introduction), John Miller and Scott Page (Complex Adaptive Systems: An Introduction to Computational Models of Social Life), and Joshua Epstein (Generative Social Science: Studies in Agent-Based Computational Modeling) make bold and interesting claims about how social processes embody the intricate interconnectedness of complex systems.

John Holland describes some of the features of behavior of complex systems in these terms in Complexity:

  • self-organization into patterns, as occurs with flocks of birds or schools of fish  
  • chaotic behaviour where small changes in initial conditions (‘the flapping of a butterfly’s wings in Argentina’) produce large later changes (‘a hurricane in the Caribbean’)
  • ‘fat-tailed’ behaviour, where rare events (e.g. mass extinctions and market crashes) occur much more often than would be predicted by a normal (bell-curve) distribution  
  • adaptive interaction, where interacting agents (as in markets or the Prisoner’s Dilemma) modify their strategies in diverse ways as experience accumulates. (p. 5)

In CAS the elements are adaptive agents, so the elements themselves change as the agents adapt. The analysis of such systems becomes much more difficult. In particular, the changing interactions between adaptive agents are not simply additive. This non-linearity rules out the direct use of PDEs in most cases (most of the well-developed parts of mathematics, including the theory of PDEs, are based on assumptions of additivity). (p. 11)

Miller and Page put the point this way:

One of the most powerful tools arising from complex systems research is a set of computational techniques that allow a much wider range of models to be explored. With these tools, any number of heterogeneous agents can interact in a dynamic environment subject to the limits of time and space. Having the ability to investigate new theoretical worlds obviously does not imply any kind of scientific necessity or validity— these must be earned by carefully considering the ability of the new models to help us understand and predict the questions that we hold most dear. (Complex Adaptive Systems, kl 199)

Much of the focus of complex systems is on how systems of interacting agents can lead to emergent phenomena. Unfortunately, emergence is one of those complex systems ideas that exists in a well-trodden, but relatively untracked, bog of discussion. The usual notion put forth underlying emergence is that individual, localized behavior aggregates into global behavior that is, in some sense, disconnected from its origins. Such a disconnection implies that, within limits, the details of the local behavior do not matter to the aggregate outcome. Clearly such notions are important when considering the decentralized systems that are key to the study of complex systems. Here we discuss emergence from both an intuitive and a theoretical perspective. 

(Complex Adaptive Systems, kl 832)

As discussed previously, we have access to some useful “emergence” theorems for systems that display disorganized complexity. However, to fully understand emergence, we need to go beyond these disorganized systems with their interrelated, helter-skelter agents and begin to develop theories for those systems that entail organized complexity. Under organized complexity, the relationships among the agents are such that through various feedbacks and structural contingencies, agent variations no longer cancel one another out but, rather, become reinforcing. In such a world, we leave the realm of the Law of Large Numbers and instead embark down paths unknown. While we have ample evidence, both empirical and experimental, that under organized complexity, systems can exhibit aggregate properties that are not directly tied to agent details, a sound theoretical foothold from which to leverage this observation is only now being constructed. 

(Complex Adaptive Systems, kl 987)

And here is Joshua Epstein’s description of what he calls “generative social science”:

The agent-based computational model — or artificial society — is a new scientific instrument. It can powerfully advance a distinctive approach to social science, one for which the term “generative” seems appropriate. I will discuss this term more fully below, but in a strong form, the central idea is this: To the generativist, explaining the emergence of macroscopic societal regularities, such as norms or price equilibria, requires that one answer the following question:

The Generativist’s Question 

*     How could the decentralized local interactions of heterogeneous autonomous agents generate the given regularity?  

The agent-based computational model is well-suited to the study of this question, since the following features are characteristic. (5)

Here Epstein refers to the characteristics of heterogeneity of actors, autonomy, explicit space, local interactions, and bounded rationality. And he believes that it is both possible and mandatory to show how higher-level social characteristics emerge from the rule-governed interactions of the agents at a lower level.
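To make the generativist strategy concrete, here is a minimal Schelling-style segregation sketch in Python — my own illustration, not Epstein's code, with invented parameter values. It has the features just listed: heterogeneous agents on an explicit grid, purely local interactions, and bounded rationality (each agent applies one threshold rule), and it generates a macroscopic regularity (clustering) that no agent intends.

```python
import random

# A minimal Schelling-style "generative" sketch: two agent types on a
# wraparound grid; unhappy agents move to random vacant cells. All
# parameter values are illustrative, not drawn from Epstein's book.

SIZE = 20        # grid is SIZE x SIZE, with wraparound edges
EMPTY = 0.2      # fraction of vacant cells
TOLERANCE = 0.4  # an agent wants at least 40% like-type neighbors

def make_grid():
    cells, weights = ['A', 'B', None], [(1 - EMPTY) / 2, (1 - EMPTY) / 2, EMPTY]
    return [[random.choices(cells, weights)[0] for _ in range(SIZE)]
            for _ in range(SIZE)]

def neighbors(grid, i, j):
    """Occupied neighbors of cell (i, j) on a torus."""
    return [grid[(i + di) % SIZE][(j + dj) % SIZE]
            for di in (-1, 0, 1) for dj in (-1, 0, 1)
            if (di, dj) != (0, 0)
            and grid[(i + di) % SIZE][(j + dj) % SIZE] is not None]

def unhappy(grid, i, j):
    """Bounded rationality: one local threshold rule, nothing more."""
    if grid[i][j] is None:
        return False
    ns = neighbors(grid, i, j)
    return bool(ns) and sum(n == grid[i][j] for n in ns) / len(ns) < TOLERANCE

def step(grid):
    """Move each currently unhappy agent to a randomly chosen vacant cell."""
    empties = [(i, j) for i in range(SIZE) for j in range(SIZE)
               if grid[i][j] is None]
    movers = [(i, j) for i in range(SIZE) for j in range(SIZE)
              if unhappy(grid, i, j)]
    for (i, j) in movers:
        ei, ej = empties.pop(random.randrange(len(empties)))
        grid[ei][ej], grid[i][j] = grid[i][j], None
        empties.append((i, j))

def mean_likeness(grid):
    """Average share of like-type neighbors across all occupied cells."""
    shares = [sum(n == grid[i][j] for n in ns) / len(ns)
              for i in range(SIZE) for j in range(SIZE)
              if grid[i][j] is not None and (ns := neighbors(grid, i, j))]
    return sum(shares) / len(shares)

grid = make_grid()
print("initial like-neighbor share:", round(mean_likeness(grid), 3))
for _ in range(30):
    step(grid)
print("after 30 rounds:           ", round(mean_likeness(grid), 3))
```

Running the script typically shows the average share of like-type neighbors rising well above its initial chance level: a macro regularity "grown" from decentralized local interactions, which is exactly the generativist's explanatory target.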
 
There are differences across these approaches. But generally these authors bring together two rather different ideas — the curious unpredictability of even fairly small interconnected systems familiar from chaos theory, and the idea that there are simple higher level patterns that can be discovered and explained based on the turbulent behavior of the constituents. And they believe that it is possible to construct simulation models that allow us to trace out the interactions and complexities that constitute social systems.

So does complexity science create a basis for a general theory of society? And does it provide a basis for understanding the features of contingency, heterogeneity, and plasticity that I have emphasized throughout? I think these questions eventually lead to “no” on both counts.

Start with the fact of social contingency. Complexity models often give rise to remarkable and unexpected outcomes and patterns. Does this mean that complexity science demonstrates the origin of contingency in social outcomes? By no means; in fact, the opposite is true. The outcomes demonstrated by complexity models are no more than computational derivations of the consequences of the premises of these models. The surprises created by complex systems models only appear contingent; they are fully generated by the properties of the constituents. The surprises produced by complexity science are simulacra of contingency, not the real thing.
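A toy example (my own, not drawn from these authors) makes the point. The logistic map is a one-line deterministic rule, yet its trajectories look wildly irregular and are exquisitely sensitive to the seed value; every number in the output is nonetheless strictly derived from the premises.

```python
# The logistic map x -> r*x*(1-x) at r = 4: a single deterministic
# premise whose trajectories look irregular and diverge rapidly for
# nearly identical seeds. "Surprise" here is derivation, not contingency.

def trajectory(x0, r=4.0, steps=41):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = trajectory(0.300000)
b = trajectory(0.300001)  # seed differs by one part in a million

for t in (0, 10, 20, 30, 40):
    print(f"step {t:2d}: {a[t]:.6f} vs {b[t]:.6f}   gap {abs(a[t] - b[t]):.6f}")
```

The two trajectories diverge completely within a few dozen steps. The surprise is real, but nothing is contingent in the historical sense: the outcome is deduction all the way down.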

Second, what about heterogeneity? Does complexity science illustrate or explain the heterogeneity of social things? Not particularly. The heterogeneity of social things — organizations, value systems, technical practices — does not derive from complex system effects; it derives from the fact of individual actor interventions and contingent exogenous influences.

Finally, consider the feature of plasticity — the fact that social entities can “morph” over time into substantially different structures and functions. Does complexity theory explain the feature of social plasticity? It does not. This is simply another consequence of the substrate of the social world itself: the fact that social structures and forces are constituted by the actors that make them up. This is not a systems characteristic, but rather a reflection of the looseness of social interaction. The linkages within a social system are weak and fragile, and the resulting structures can take many forms, and are subject to change over time.

The tools of simulation and modeling that complexity theorists are in the process of developing are valuable contributions, and they need to be included in the toolbox. However, they do not constitute the basis of a complete and comprehensive methodology for understanding society. Moreover, there are important examples of social phenomena that are not at all amenable to treatment with these tools.

This leads to a fairly obvious conclusion, and one that I believe complexity theorists would accept: that complexity theories and the models they have given rise to are a valuable contribution; but they are only a partial answer to the question, how does the social world work?

Menon and Callender on the physics of phase transitions

In an earlier post I considered the topic of phase transitions as a possible source of emergent phenomena (link). I argued there that phase transitions are indeed interesting, but don’t raise a serious problem of strong emergence. Tarun Menon considers this issue in substantial detail in the chapter he co-authored with Craig Callender in The Oxford Handbook of Philosophy of Physics, “Turn and face the strange … ch-ch-changes: Philosophical questions raised by phase transitions” (link). Menon and Callender provide a very careful and logical account of three ways of approaching phase transitions within physics and of three versions of emergence (conceptual, explanatory, ontological). The piece is technical but very interesting, with a somewhat deflating conclusion (if you are a fan of emergence):

We have found that when one clarifies concepts and digs into the details, with respect to standard textbook statistical mechanics, phase transitions are best thought of as conceptually novel, but not ontologically or explanatorily irreducible. 

Menon and Callender review three approaches to the phenomenon of phase transitions offered by physics: classical thermodynamics, statistical mechanics, and renormalization group theory. Thermodynamics describes the behavior of materials (gases, liquids, and solids) at the macro level, while statistical mechanics and renormalization group theory are theories of the micro-states of materials, intended to allow derivation of the macro behavior of the materials from statistical properties of the micro-states. They describe this relationship in these terms:

Statistical mechanics is the theory that applies probability theory to the microscopic degrees of freedom of a system in order to explain its macroscopic behavior. The tools of statistical mechanics have been extremely successful in explaining a number of thermodynamic phenomena, but it turned out to be particularly difficult to apply the theory to the study of phase transitions. (193)

Here is the mathematical definition of phase transition that they provide:

Mathematically, phase transitions are represented by nonanalyticities or singularities in a thermodynamic potential. A singularity is a point at which the potential is not infinitely differentiable, so at a phase transition some derivative of the thermodynamic potential changes discontinuously. (191)

And they offer this definition:

(Def 1) An equilibrium phase transition is a nonanalyticity in the free energy. (194)

Here is their description of how the renormalization group theory works:

To explain the method, we return to our stalwart Ising model. Suppose we coarse grain a 2D Ising model by replacing 3 × 3 blocks of spins with a single spin pointing in the same direction as the majority in the original block. This gives us a new Ising system with a longer distance between lattice sites, and possibly a different coupling strength. You could look at this coarse graining procedure as a transformation in the Hamiltonian describing the system. Since the Hamiltonian is characterized by the coupling strength, we can also describe the coarse graining as a transformation in the coupling parameter. Let K be the coupling strength of the original system and R be the relevant transformation. The new coupling strength is K′ = RK. This coarse graining procedure could be iterated, producing a sequence of coupling parameters, each related to the previous one by the transformation R. The transformation defines a flow on parameter space. (195)

Renormalization group theory, then, is essentially the mathematical basis of coarse-graining analysis (link).
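The quoted passage concerns the two-dimensional model, where the coupling recursion can only be constructed approximately. For the one-dimensional Ising model, though, the analogous decimation (summing out every other spin) is exact and yields the standard recursion tanh(K′) = tanh(K)², so the "flow on parameter space" can be computed directly. Here is a minimal sketch (my own illustration, with an arbitrary starting coupling):

```python
import math

# Exact renormalization-group flow for the 1D Ising model: decimating
# every other spin maps the coupling K to K' = atanh(tanh(K)**2).
# This is the standard exactly solvable case; the 2D recursion described
# in the quoted passage has no closed form like this.

def rg_step(K):
    return math.atanh(math.tanh(K) ** 2)

K = 1.5  # illustrative starting coupling
for n in range(8):
    print(f"iteration {n}: K = {K:.6f}")
    K = rg_step(K)
```

The coupling flows toward the trivial fixed point K = 0: at larger and larger scales the chain looks more weakly coupled, which is the renormalization-group way of seeing that the 1D model has no finite-temperature phase transition.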

The key difficulty that has been used to ground arguments about the strong emergence of phase transitions is now apparent: there seems to be a logical disjunction between the resources of statistical mechanics and the findings of thermodynamics. Physicists would like to hold that statistical mechanics provides the micro-level representation of the phenomena described by thermodynamics; or in other words, that thermodynamic facts can be reduced to derivations from statistical mechanics. However, the definition of a phase transition above specifies that the phenomena display “nonanalyticities” — instantaneous and discontinuous changes of state. And it is easily demonstrated that the equations used in statistical mechanics do not display nonanalyticities; change may be abrupt, but it is not discontinuous, and the equations are infinitely differentiable. So if phase transitions are points of nonanalyticity, and statistical mechanics does not admit of nonanalytic equations, then it would appear that thermodynamics is not derivable from statistical mechanics. Similar reasoning applies to renormalization group theory.
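To make the key step explicit (a standard observation, not Menon and Callender’s own formulation): for a finite system of N particles the canonical partition function is a finite sum of exponentials,

Z_N(β) = Σ_i exp(−βE_i),   F_N(β) = −(1/β) ln Z_N(β)

Each term is an analytic function of the inverse temperature β, a finite sum of analytic functions is analytic, and Z_N is strictly positive; so the free energy F_N is analytic, and none of its derivatives can jump. Nonanalyticity can arise only in the limit of infinitely many degrees of freedom.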

This problem was solved within statistical mechanics by admitting infinitely many bodies within the represented system (or, alternatively, infinitely compressed volumes of bodies), the so-called thermodynamic limit; but neither of these assumptions of infinity is a realistic description of the material world.

So are phase transitions “emergent” phenomena in either a weak sense or a strong sense, relative to the micro-states of the material in question? The strongest sense of emergence is what Menon and Callender call ontological irreducibility.

Ontological irreducibility involves a very strong failure of reduction, and if any phenomenon deserves to be called emergent, it is one whose description is ontologically irreducible to any theory of its parts. Batterman argues that phase transitions are emergent in this sense (Batterman 2005). It is not just that we do not know of an adequate statistical mechanical account of them, we cannot construct such an account. Phase transitions, according to this view, are cases of genuine physical discontinuities. (215)

The possibility that phase transitions are ontologically emergent at the level of thermodynamics is raised by the point about the mathematical characteristics of the equations that constitute the statistical mechanics description of the micro level — the infinite differentiability of those equations. But Menon and Callender give a compelling reason for thinking this is misleading. They believe that phase transitions constitute a conceptual novelty with respect to the resources of statistical mechanics — phase transitions do not correspond to natural kinds at the level of the micro-constitution of the material. But they argue that this does not establish that the phenomena cannot be explained or derived from a micro-level description. So phase transitions are not emergent according to the explanatory or ontological understandings of that idea.

The nub of the issue comes down to how we construe the idealization of statistical mechanics that assumes that a material consists of an infinite number of elements. This is plainly untrue of any real system (gas, liquid, or solid). The fact that real systems have boundaries implies that important thermodynamic properties are not strictly “extensive” with volume (extensivity requires that twice the volume yields exactly twice the entropy). The way in which the finitude of a volume of material affects its behavior is through novel behaviors at the edges of the volume; and in many instances these edge effects are small relative to the behavior of the whole, if the volume is large enough.

Does this fact imply that there is a great mystery about extensivity, that extensivity is truly emergent, that thermodynamics does not reduce to finite N statistical mechanics? We suggest that on any reasonably uncontentious way of defining these terms, the answer is no. We know exactly what is happening here. Just as the second law of thermodynamics is no longer strict when we go to the microlevel, neither is the concept of extensivity. (201-202)

There is an important idealization in the thermodynamic description as well — the notion that certain specific kinds of changes are instantaneous or discontinuous. This assumption too can be seen as an idealization, corresponding to a physical system that is undergoing changes at different rates under different environmental conditions. What thermodynamics describes as an instantaneous change from liquid to gas may be better understood as a rapid process of change at the molar level, a process which can be traced through in a continuous way.

(The fact that some systems are coarse-grained has an interesting implication for this set of issues (link). The interesting implication is that while it is generally true that the micro states in such a system entail the macro states, the reverse is not true: we cannot infer from a given macro state to the exact underlying micro state. Rather, many possible micro states correspond to a given macro state.)

The conclusion they reach is worth quoting:

Phase transitions are an important instance of putatively emergent behavior. Unlike many things claimed emergent by philosophers (e.g., tables and chairs), the alleged emergence of phase transitions stems from both philosophical and scientific arguments. Here we have focused on the case for emergence built from physics. We have found that when one clarifies concepts and digs into the details, with respect to standard textbook statistical mechanics, phase transitions are best thought of as conceptually novel, but not ontologically or explanatorily irreducible. And if one goes past textbook statistical mechanics, then an argument can be made that they are not even conceptually novel. In the case of renormalization group theory, consideration of infinite systems and their singular behavior provides a central theoretical tool, but this is compatible with an explanatory reduction. Phase transitions may be “emergent” in some sense of this protean term, but not in a sense that is incompatible with the reductionist project broadly construed. (222)

Or in other words, Menon and Callender refute one of the most technically compelling arguments for ontological emergence in physical systems. They show that the phenomena of phase transitions as described by classical thermodynamics are compatible with reduction to the dynamics of individual elements at the micro-level; so phase transitions are not ontologically emergent.

Are these arguments relevant in any way to debates about emergence in social system dynamics? The direct relevance is limited, since these arguments depend entirely on the mathematical properties of the ways in which the micro-level of physical systems are characterized (statistical mechanics). But the more general lesson does in fact seem relevant: rather than simply postulating that certain social characteristics are ontologically emergent relative to the actors that make them up, we would be better advised to look for the local-level processes that act to bring about surprising transitions at critical points (for example, the shift in a flock of birds from random flight to a swarm in a few seconds).

DeLanda on historical ontology

A primary reason for thinking that assemblage theory is important is the fact that it offers new ways of thinking about social ontology. Instead of thinking of the social world as consisting of fixed entities and properties, we are invited to think of it as consisting of fluid agglomerations of diverse and heterogeneous processes. Manuel DeLanda’s recent book Assemblage Theory sheds new light on some of the complexities of this theory.

Particularly important is the question of how to think about the reality of large historical structures and conditions. What is “capitalism” or “the modern state” or “the corporation”? Are these temporally extended but unified things? Or should they be understood in different terms altogether? Assemblage theory suggests a very different approach. Here is an astute description by DeLanda of historical ontology with respect to the historical imagination of Fernand Braudel:

Braudel’s is a multi-scaled social reality in which each level of scale has its own relative autonomy and, hence, its own history. Historical narratives cease to be constituted by a single temporal flow — the short timescale at which personal agency operates or the longer timescales at which social structure changes — and becomes a multiplicity of flows, each with its own variable rates of change, its own accelerations and decelerations. (14)

DeLanda extends this idea by suggesting that the theory of assemblage is an antidote to essentialism and reification of social concepts:

Thus, both ‘the Market’ and ‘the State’ can be eliminated from a realist ontology by a nested set of individual emergent wholes operating at different scales. (16)

I understand this to mean that “Market” is a high-level reification; it does not exist in and of itself. Rather, the things we want to encompass within the rubric of market activity and institutions are an agglomeration of lower-level concrete practices and structures which are contingent in their operation and variable across social space. And this is true of other high-level concepts — capitalism, IBM, or the modern state.

DeLanda’s reconsideration of Foucault’s ideas about prisons is illustrative of this approach. After noting that institutions of discipline can be represented as assemblages, he asks the further question: what are the components that make up these assemblages?

The components of these assemblages … must be specified more clearly. In particular, in addition to the people that are confined — the prisoners processed by prisons, the students processed by schools, the patients processed by hospitals, the workers processed by factories — the people that staff those organizations must also be considered part of the assemblage: not just guards, teachers, doctors, nurses, but the entire administrative staff. These other persons are also subject to discipline and surveillance, even if to a lesser degree. (39)

So how do assemblages come into being? And what mechanisms and forces serve to stabilize them over time? This is a topic where DeLanda’s approach shares a fair amount with historical institutionalists like Kathleen Thelen (link, link): the insight that institutions and social entities are created and maintained by the individuals who interface with them, and that both parts of this observation need explanation. It is not necessarily the case that the same incentives or circumstances that led to the establishment of an institution also serve to elicit the forms of coherent behavior that sustain the institution. So creation and maintenance need to be treated independently. Here is how DeLanda puts this point:

So we need to include in a realist ontology not only the processes that produce the identity of a given social whole when it is born, but also the processes that maintain its identity through time. And we must also include the downward causal influence that wholes, once constituted, can exert on their parts. (18)

Here DeLanda links the compositional causal point (what we might call the microfoundational point) with the additional idea that higher-level social entities exert downward causal influence on lower-level structures and individuals. This is part of his advocacy of emergence; but it is controversial, because it might be maintained that the causal powers of the higher-level structure are simultaneously real and derivative upon the actions and powers of the components of the structure (link). (This is the reason I prefer to use the concept of relative explanatory autonomy rather than emergence; link.)

DeLanda summarizes several fundamental ideas about assemblages in these terms:

  1. “Assemblages have a fully contingent historical identity, and each of them is therefore an individual entity: an individual person, an individual community, an individual organization, an individual city.” 
  2. “Assemblages are always composed of heterogeneous components.” 
  3. “Assemblages can become component parts of larger assemblages. Communities can form alliances or coalitions to become a larger assemblage.”
  4. “Assemblages emerge from the interactions between their parts, but once an assemblage is in place it immediately starts acting as a source of limitations and opportunities for its components (downward causality).” (19-21)

There is also the suggestion that persons themselves should be construed as assemblages:

Personal identity … has not only a private aspect but also a public one, the public persona that we present to others when interacting with them in a variety of social encounters. Some of these social encounters, like ordinary conversations, are sufficiently ritualized that they themselves may be treated as assemblages. (27)

Here DeLanda cites the writings of Erving Goffman, who focuses on the public scripts that serve to constitute many kinds of social interaction (link); equally one might refer to Andrew Abbott’s processual and relational view of the social world and individual actors (link).

The most compelling example that DeLanda offers here and elsewhere of complex social entities construed as assemblages is perhaps the most complex and heterogeneous product of the modern world — cities.

Cities possess a variety of material and expressive components. On the material side, we must list for each neighbourhood the different buildings in which the daily activities and rituals of the residents are performed and staged (the pub and the church, the shops, the houses, and the local square) as well as the streets connecting these places. In the nineteenth century new material components were added, water and sewage pipes, conduits for the gas that powered early street lighting, and later on electricity and telephone wires. Some of these components simply add up to a larger whole, but citywide systems of mechanical transportation and communication can form very complex networks with properties of their own, some of which affect the material form of an urban centre and its surroundings. (33)

(William Cronon’s social and material history of Chicago in Nature’s Metropolis: Chicago and the Great West is a very compelling illustration of this additive, compositional character of the modern city; link. Contingency and conjunctural causation play a very large role in Cronon’s analysis. Here is a post that draws out some of the consequences of the lack of systematicity associated with this approach, titled “What parts of the social world admit of explanation?”; link.)

Coarse-graining of complex systems

The question of the relationship between micro-level and macro-level is just as important in physics as it is in sociology. Is it possible to derive the macro-states of a system from information about the micro-states of the system? It turns out that there are some surprising aspects of the relationship between micro and macro that physical systems display. The mathematical technique of “coarse-graining” represents an interesting wrinkle on this question. So what is coarse-graining? Fundamentally it is the idea that we can replace micro-level specifics with local-level averages without reducing our ability to calculate the macro-level dynamics of the system.

A 2004 article by Israeli and Goldenfeld, “Coarse-graining of cellular automata, emergence, and the predictability of complex systems” (link) provides a brief description of the method of coarse-graining. (Here is a Wolfram demonstration of the way that coarse graining works in the field of cellular automata; link.) Israeli and Goldenfeld also provide physical examples of phenomena with what they refer to as emergent characteristics. Let’s see what this approach adds to the topic of emergence and reduction. Here is the abstract of their paper:

We study the predictability of emergent phenomena in complex systems. Using nearest neighbor, one-dimensional Cellular Automata (CA) as an example, we show how to construct local coarse-grained descriptions of CA in all classes of Wolfram’s classification. The resulting coarse-grained CA that we construct are capable of emulating the large-scale behavior of the original systems without accounting for small-scale details. Several CA that can be coarse-grained by this construction are known to be universal Turing machines; they can emulate any CA or other computing devices and are therefore undecidable. We thus show that because in practice one only seeks coarse-grained information, complex physical systems can be predictable and even decidable at some level of description. The renormalization group flows that we construct induce a hierarchy of CA rules. This hierarchy agrees well with apparent rule complexity and is therefore a good candidate for a complexity measure and a classification method. Finally we argue that the large scale dynamics of CA can be very simple, at least when measured by the Kolmogorov complexity of the large scale update rule, and moreover exhibits a novel scaling law. We show that because of this large-scale simplicity, the probability of finding a coarse-grained description of CA approaches unity as one goes to increasingly coarser scales. We interpret this large scale simplicity as a pattern formation mechanism in which large scale patterns are forced upon the system by the simplicity of the rules that govern the large scale dynamics.

This paragraph involves several interesting ideas. One is that the micro-level details do not matter to the macro outcome (the coarse-grained systems emulate large-scale behavior “without accounting for small-scale details”). Another related idea is that macro-level patterns are (sometimes) forced by the “rules that govern the large scale dynamics” — rather than by the micro-level states.

Coarse-graining methodology is a family of computational techniques that permits “averaging” of values (intensities) from the micro-level to a higher level of organization. The computational models developed in this literature were primarily applied to the properties of heterogeneous materials, large molecules, and other physical systems. For example, consider a two-dimensional array of iron atoms as a grid with randomly distributed magnetic orientations (up, down). A coarse-grained description of this system would be constructed by taking each 3×3 square of the grid and assigning it the up-down value corresponding to the majority of atoms in the grid. Now the information about nine atoms has been reduced to a single piece of information for the 3×3 grid. Analogously, we might consider a city of Democrats and Republicans. Suppose we know the affiliation of each household on every street. We might “coarse-grain” this information by replacing the household-level data with the majority representation of 3×3 grids of households. We might take another step of aggregation by considering 3×3 grids of grids, and representing the larger composite by the majority value of the component grids.
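Here is a minimal sketch of that procedure in Python (an invented illustration, not code from the paper): majority-rule coarse-graining of a 9×9 grid of household affiliations, iterated once more on the resulting grid of grids.

```python
import random

# Majority-rule coarse-graining of the voter grid described above: each
# 3x3 block of 'D'/'R' households is replaced by its majority value, and
# the procedure is iterated on the resulting "grid of grids". The data
# are randomly invented for illustration.

def coarse_grain(grid, block=3):
    n = len(grid)  # n must be divisible by block
    out = []
    for bi in range(0, n, block):
        row = []
        for bj in range(0, n, block):
            cells = [grid[bi + di][bj + dj]
                     for di in range(block) for dj in range(block)]
            # an odd number of two-valued cells means ties are impossible
            row.append(max(set(cells), key=cells.count))
        out.append(row)
    return out

city = [[random.choice('DR') for _ in range(9)] for _ in range(9)]
blocks = coarse_grain(city)       # 81 households -> 9 block majorities
district = coarse_grain(blocks)   # 9 blocks -> 1 district majority
print(blocks, district)
```

Note the many-one character of the map: flipping a single household inside a block that votes 6-3 leaves the coarse description unchanged. That point returns below in the discussion of supervenience.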

How does the methodology of coarse-graining interact with other inter-level questions we have considered elsewhere in Understanding Society (emergence, generativity, supervenience)? Israeli and Goldenfeld connect their work to the idea of emergence in complex systems. Here is how they describe emergence:

Emergent properties are those which arise spontaneously from the collective dynamics of a large assemblage of interacting parts. A basic question one asks in this context is how to derive and predict the emergent properties from the behavior of the individual parts. In other words, the central issue is how to extract large-scale, global properties from the underlying or microscopic degrees of freedom. (1)

Note that this is the weak form of emergence (link); Israeli and Goldenfeld explicitly postulate that the higher-level properties can be derived (“extracted”) from the micro level properties of the system. So the calculations associated with coarse-graining do not imply that there are system-level properties that are non-derivable from the micro-level of the system; or in other words, the success of coarse-graining methods does not support the idea that physical systems possess strongly emergent properties.

Does the success of coarse-graining for some systems have implications for supervenience? If the states of S can be derived from a coarse-grained description C of M (the underlying micro-level), does this imply that S does not supervene upon M? It does not. A coarse-grained description corresponds to multiple distinct micro-states, so there is a many-one relationship between M and C. But this is consistent with the fundamental requirement of supervenience: no difference at the higher level without some difference at the micro level. So supervenience is consistent with the facts of successful coarse-graining of complex systems.

What coarse-graining is inconsistent with is the idea that we need exact information about M in order to explain or predict S. Instead, we can eliminate a lot of information about M by replacing M with C, and still do a perfectly satisfactory job of explaining and predicting S.

There is an intellectual wrinkle in the Israeli and Goldenfeld article that I haven’t yet addressed here. This is their connection between complex physical systems and cellular automata. A cellular automaton is a simulation governed by simple rules that determine the behavior of each cell within the simulation. Conway’s Game of Life is an example of a cellular automaton (link). Here is what they say about the connection between physical systems and their simulations as a system of algorithms:

The problem of predicting emergent properties is most severe in systems which are modelled or described by undecidable mathematical algorithms[1, 2]. For such systems there exists no computationally efficient way of predicting their long time evolution. In order to know the system’s state after (e.g.) one million time steps one must evolve the system a million time steps or perform a computation of equivalent complexity. Wolfram has termed such systems computationally irreducible and suggested that their existence in nature is at the root of our apparent inability to model and understand complex systems [1, 3, 4, 5]. (1)
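To make "evolving the system" concrete, here is a minimal elementary cellular automaton stepper (my own generic sketch, not Israeli and Goldenfeld's code). Rule 30 is a standard example of a rule whose behavior is complex enough that no general shortcut to the state at time t is known, other than performing all t updates or a computation of equivalent cost.

```python
# A minimal elementary cellular automaton on a ring. The integer `rule`
# encodes the update table in Wolfram's numbering: the bit at position
# 4*left + 2*center + right gives the cell's next state.

def step(cells, rule=30):
    n = len(cells)
    return [(rule >> (4 * cells[(i - 1) % n]
                      + 2 * cells[i]
                      + cells[(i + 1) % n])) & 1
            for i in range(n)]

cells = [0] * 31
cells[15] = 1  # single seed cell
for _ in range(16):
    print(''.join('#' if c else '.' for c in cells))
    cells = step(cells)
```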

Suppose we are interested in simulating the physical process through which a pot of boiling water undergoes sudden turbulence shortly before 100 degrees C (the transition point between water and steam). There seem to be two large alternatives raised by Israeli and Goldenfeld: there may be a set of thermodynamic processes that permit derivation of the turbulence directly from the physical parameters present during the short interval of time; or it may be that the only way of deriving the turbulence phenomenon is to provide a molecule-level simulation based on the fundamental laws (algorithms) that govern the molecules. If the latter is the case, then simulating the process will prove computationally impossible.

Here is an extension of this approach in an article by Krzysztof Magiera and Witold Dzwinel, “Novel Algorithm for Coarse-Graining of Cellular Automata” (link). They describe “coarse-graining” in their abstract in these terms:

The coarse-graining is an approximation procedure widely used for simplification of mathematical and numerical models of multiscale systems. It reduces superfluous – microscopic – degrees of freedom. Israeli and Goldenfeld demonstrated in [1,2] that the coarse-graining can be employed for elementary cellular automata (CA), producing interesting interdependences between them. However, extending their investigation on more complex CA rules appeared to be impossible due to the high computational complexity of the coarse-graining algorithm. We demonstrate here that this complexity can be substantially decreased. It allows for scrutinizing much broader class of cellular automata in terms of their coarse graining. By using our algorithm we found out that the ratio of the numbers of elementary CAs having coarse grained representation to “degenerate” – irreducible – cellular automata, strongly increases with increasing the “grain” size of the approximation procedure. This rises principal questions about the formal limits in modeling of realistic multiscale systems.

Here Magiera and Dzwinel seem to be expressing the view that the approach to coarse-graining as a technique for simplifying the expected behavior of a complex system offered by Israeli and Goldenfeld will fail in the case of more extensive and complex systems (perhaps including the pre-boil turbulence example mentioned above).

I am not sure whether these debates have relevance for the modeling of social phenomena. Recall my earlier discussion of the modeling of rebellion using agent-based modeling simulations (link, link, link). These models work from the unit level — the level of the individuals who interact with each other. A coarse-graining approach would perhaps replace the individual-level description with a set of groups with homogeneous properties, and then attempt to model the likelihood of an outbreak of rebellion based on the coarse-grained level of description. Would this be feasible?

DeLanda on concepts, knobs, and phase transitions

(Image: Carnap’s notes on Frege’s Begriffsschrift seminar.)

Part of Manuel DeLanda’s work in Assemblage Theory is his hope to clarify and extend the way that we understand the ontological ideas associated with assemblage. He introduces a puzzling wrinkle into his discussion in this book — the idea that a concept is “equipped with a variable parameter, the setting of which determines whether the ensemble is coded or decoded” (3). He thinks this is useful because it helps to resolve the impulse towards essentialism in social theory while preserving the validity of the idea of assemblage:

A different problem is that distinguishing between different kinds of wholes involves ontological commitments that go beyond individual entities. In particular, with the exception of conventionally defined types (like the types of pieces in a chess game), natural kinds are equivalent to essences. As we have already suggested, avoiding this danger involves using a single term, ‘assemblage’, but building into it parameters that can have different settings at different times: for some settings the social whole will be a stratum, for other settings an assemblage (in the original sense). (18)

So “assemblage” does not refer to a natural kind or a social essence, but rather characterizes a wide range of social things, from the sub-individual to the level of global trading relationships. The social entities found at all scales are “assemblages” — ensembles of components, some of which are themselves ensembles of other components. But assemblages do not have an essential nature; rather there are important degrees of differentiation and variation across assemblages.

By contrast, we might think of the physical concepts of “metal” and “crystal” as functioning as something like a natural kind. A metal is an unchanging material configuration. Everything that we classify as a metal has a core set of physical-material properties that determine that it will be an electrical conductor, ductile, and solid over a wide range of terrestrial temperatures.

A particular conception of an assemblage (the idea of a city, for example) does not have this fixed essential character. DeLanda introduces the idea that the concept of a particular assemblage involves a parameter or knob that can be adjusted to yield different materializations of the given assemblage. An assemblage may take different forms depending on one or more important parameters.

What are those important degrees of variation that DeLanda seeks to represent with “knobs” and parameters? There are two that come in for extensive treatment: the idea of territorialization and the idea of coding. Territorialization is a measure of homogeneity, and coding is a measure of the degree to which a social outcome is generated by a grammar or algorithm. And DeLanda suggests that these ideas function as something like a set of dimensions along which particular assemblages may be plotted.

Here is how DeLanda attempts to frame this idea in terms of “a concept with knobs” (3).

The coding parameter is one of the knobs we must build into the concept, the other being territorialisation, a parameter measuring the degree to which the components of the assemblage have been subjected to a process of homogenisation, and the extent to which its defining boundaries have been delineated and made impermeable. (3)

This is confusing. We normally think of a concept as identifying a range of phenomena; the phenomena are assumed to have characteristics that can be observed, hypothesized, and measured. So it seems peculiar to suppose that the forms of variation that may be found among the phenomena need to somehow be represented within the concept itself.

Consider an example — a nucleated human settlement (hamlet, village, market town, city, global city). These urban agglomerations are assemblages in DeLanda’s sense: they are composed out of the juxtaposition of human and artifactual practices that constitute and support the forms of activity that occur within the defined space. But DeLanda would say that settlements can have higher or lower levels of territorialization, and they can have higher or lower levels of coding; and the various combinations of these “parameters” leads to substantially different properties in the ensemble.

If we take this idea seriously, it implies that compositions (assemblages) sometimes undergo abrupt and important changes in their material properties at critical points for the value of a given variable or parameter.

DeLanda thinks that these ideas can be understood in terms of an analogy with the idea of a phase transition in physics:

Parameters are normally kept constant in a laboratory to study an object under repeatable circumstances, but they can also be allowed to vary, causing drastic changes in the phenomenon under study: while for many values of a parameter like temperature only a quantitative change will be produced, at critical points a body of water will spontaneously change qualitatively, abruptly transforming from a liquid to a solid, or from a liquid to a gas. By analogy, we can add parameters to concepts. Adding these control knobs to the concept of assemblage would allow us to eliminate their opposition to strata, with the result that strata and assemblages (in the original sense) would become phases, like the solid and fluid phases of matter. (19)

These ideas about “knobs”, parameters, and codes might be sorted out along these lines. Deleuze introduces two high-level variables along which social arrangements differ — the degree to which the social ensemble is “territorialized” and the degree to which it is “coded”. Ensembles with high territorialization have some characteristics in common; likewise ensembles with low coding; and so forth. Both factors admit of variable states; so we could represent a territorialization measurement as a value between 0 and 1, and likewise a coding measurement.

When we combine this view with DeLanda’s suggestion that social ensembles undergo “phase transitions,” we get the idea that there are critical points for both variables at which the characteristics of the ensemble change in some important and abrupt way.

Imagine a simple diagram with the coding parameter on one axis and the territorialization parameter on the other. W, X, Y, and Z represent the four extreme possibilities: “low coding, low territorialization”, “high coding, low territorialization”, “high coding, high territorialization”, and “low coding, high territorialization”. And the suggestion from DeLanda’s treatment is that assemblages in these four extreme locations will have importantly different characteristics — much as the solid, liquid, gas, and plasma states of water have different characteristics. (He asserts that assemblages in the “high-high” quadrant are “strata”, while ensembles at lower values of the two parameters are “assemblages”; 39.)

Consider the standard phase diagram for water: it represents five material states (solid, liquid, compressible liquid, gaseous, and supercritical fluid), along with the critical values of pressure and temperature at which H2O shifts through a phase transition. (There is a nice discussion of critical points and phase transitions in Wikipedia (link).)

What is most confusing in the theory offered in Assemblage Theory is that DeLanda appears to want to incorporate the ideas of coding (C) and territorialization (T) into the concept itself, as a “knob” or a variable parameter. But this seems like the wrong way of proceeding. Better would be to conceive of the social entity as an ensemble, and to postulate that the ensemble has different properties as C and T increase. This extends the analogy with phase diagrams that DeLanda seems to want to develop. Now we might hypothesize that as a market town decreases in territorialization and coding it moves from the upper right quadrant towards the lower left quadrant of the diagram; and (DeLanda seems to believe) there will be a critical point at which the properties of the ensemble are significantly different. (Again, he seems to say that the phase transition is from “assemblage” to “strata” for high values of C and T.)
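The explication can be put in quasi-formal terms. Here is a toy sketch (entirely my own construction; the threshold value and the example cases are invented, since DeLanda gives no numbers) that treats coding and territorialization as "knobs" in [0, 1] and classifies an ensemble by the region it falls in:

```python
# A toy rendering of a "concept with knobs": coding (C) and
# territorialization (T) as parameters in [0, 1], with a hypothetical
# critical point at 0.7. Threshold and cases are invented for
# illustration; DeLanda specifies no numerical values.

CRITICAL = 0.7

def classify(coding, territorialization):
    """High-C, high-T ensembles count as strata; all others as assemblages."""
    if coding >= CRITICAL and territorialization >= CRITICAL:
        return "stratum"
    return "assemblage"

for c, t, label in [(0.2, 0.3, "frontier boomtown"),
                    (0.9, 0.9, "caste-ordered village"),
                    (0.8, 0.2, "trading diaspora")]:
    print(f"{label}: C={c}, T={t} -> {classify(c, t)}")
```

On this rendering the parameters belong to the description of the ensemble, not to the concept itself; the "phase transition" is just the claim that the classification changes abruptly at the critical values.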

I think this explication works as a way of interpreting DeLanda’s intentions in his complex assertions about the language of assemblage theory and the idea of a concept with knobs. Whether it is a view that finds empirical or historical confirmation is another matter. Is there any evidence that social ensembles undergo phase transitions as these two important variables increase? Or is the picture entirely metaphorical?

(Gottlob Frege changed logic by introducing a purely formal script intended to suffice to express any scientific or mathematical proposition. The concept of proof was intended to reduce to “derivability according to a specified set of formal operations from a set of axioms.” Here is a link to an interesting notebook in Rudolf Carnap’s hand, recording his participation in a seminar by Frege; link.)

A new exposition of assemblage theory

Manuel DeLanda has been a prominent exponent of the theory of assemblage for English-speaking readers for at least ten years. His 2006 book A New Philosophy of Society: Assemblage Theory and Social Complexity has been discussed numerous times in this blog (link, link, link). DeLanda has now published a new treatment of the subject, Assemblage Theory. As I’ve pointed out in the earlier discussions, I find assemblage theory to be helpful for sociology and the philosophy of social science because it provides a very appropriate way of conceptualizing the heterogeneity of the social world. The book is well worth discussing.

To start, DeLanda insists that the French term “agencement” has greater semantic depth than its English translation, assemblage. “Assemblage” picks up one part of the meaning of agencement — the product of putting together a set of heterogeneous parts — but it loses altogether the implications of process and activity in the French term. He quotes a passage in which Deleuze and Parnet explain part of the meaning of assemblage (agencement) (1):

What is an assemblage? It is a multiplicity which is made up of many heterogeneous terms and which establishes liaisons, relations between them, across ages, sexes and reigns — different natures. Thus, the assemblage’s only unity is that of a co-functioning: it is a symbiosis, a ‘sympathy’. It is never filiations which are important, but alliances, alloys; these are not successions, lines of descent, but contagions, epidemics, the wind. (Dialogues II, 69)

This passage from Deleuze and Parnet highlights the core idea of an assemblage bringing together heterogeneous pieces into a new whole. It also signals the important distinction for Deleuze between interiority and exteriority. DeLanda explicates this distinction as indicating the nature of the relations among the elements. “Interior” relations among things are essential, logical, or semantic, whereas exterior relations are contingent and non-essential. To identify a pair as husband and wife is to identify an interior relation; to identify a pair as a female architect and a male night club bouncer is to identify an exterior relation. This is what Deleuze and Parnet have in mind when they refer to alliances, alloys, contagions, epidemics: conjunctions of otherwise independent things or processes.

Let’s look at some of the high-level concepts that play an important role in DeLanda’s exposition.

Individuals

DeLanda makes the important ontological point that assemblages are individuals: historically unique persistent configurations. “Assemblages have a fully contingent historical identity, and each of them is therefore an individual entity: an individual person, an individual community, an individual organization, an individual city” (19).

All assemblages should be considered unique historical entities, singular in their individuality, not as particular members of a general category. But if this is so, then we should be able to specify the individuation process that gave birth to them. (6)

In other words, the whole [assemblage] is immanent, not transcendent. Communities or organizations are historically individuated entities, as much so as the persons that compose them…. It is not incoherent to speak of individual communities, individual organizations, individual cities, or individual countries. The term ‘individual’ has no preferential affinity for a particular scale (persons or organisms) but refers to any entity that is historically unique. (13)

These passages make it clear that the idea of an individual is not restricted to one ontological level (biological human organism), but is rather available at all levels (individual, labor union, farmers’ cooperative, city, corporation, army).

Parameters

Several important meta-level distinctions about relations among components of an assemblage arise in DeLanda’s exposition. The distinction between relational interiority and exteriority is familiar from his earlier exposition in New Philosophy. Interior relations are conceptual or intrinsic — uncle to nephew. Exterior relations are contingent — street vendor to policeman. A second distinction that DeLanda discusses is coded/decoded. This distinction too is developed extensively in New Philosophy. Relations that are substantially fixed by a code — a grammar, a specific set of rules of behavior, a genetic program — are said to be coded; relations that are substantially indeterminate and left to the choices of the participants are decoded. A third distinction that DeLanda discusses in Assemblage Theory is that between stratum and assemblage. An assemblage is a concrete particular consisting of heterogeneous parts; a stratum is a more or less uniform group of things (organisms, institutions).

Here is a passage from New Philosophy on the concept of coded relations:

[Organizations] do involve rules, such as those governing turn-taking. The more formal and rigid the rules, the more these social encounters may be said to be coded. But in some circumstances these rules may be weakened giving rise to assemblages in which the participants have more room to express their convictions and their own personal styles. (16)

And in Assemblage Theory:

The coding parameter is one of the knobs we must build into the concept [of assemblage], the other being territorialisation, a parameter measuring the degree to which the components of the assemblage have been subjected to a process of homogenisation, and the extent to which its defining boundaries have been delineated and made impermeable. (3)

(In a later post I will discuss DeLanda’s effort to subsume each of these distinctions under the idea of a parameter or “knob” inflecting a particular concept of assemblage (city, linguistic practice). Also of interest there will be DeLanda’s effort to understand the ontology of assemblage and stratum in analogy with the physics of phase diagrams (gas, liquid, solid).)

Emergence

DeLanda believes that assemblage theory depends on the idea of emergence for macro-level properties:

The very first step in this task is to devise a means to block micro-reductionism, a step usually achieved by the concept of emergent properties, the properties of a whole caused by the interactions between its parts. If a social whole has novel properties that emerge from interactions between people, its reduction to a mere aggregate of many rational decision-makers or many phenomenological experiences is effectively blocked. (9)

Notice that this is a weak conception of emergence; the emergent property is distinguished simply by the fact that it is not an aggregation of the properties of the individual components. This does not imply that the property is not derivable from a theory of the properties of the parts and the causal interactions among them. (Several earlier posts have raised questions about the validity of the idea of emergence; link.)

And in fact DeLanda shortly says some surprising things about emergence and the relations between higher-level and lower-level properties:

The property of density, and the capacity to store reputations and enforce norms, are non-reducible properties and capacities of the entire community, but neither involves thinking of it as a seamless totality in which the very personal identity of the members is created by their relations. (11)

Up to the level of national markets the main emergent property of these increasingly larger trading areas is synchronized price movements. Braudel uncovers evidence that average wholesale prices (determined mostly by demand and supply) move up and down in unison within urban regions, provinces, or entire countries. (15)

These are surprising claims as illustrations of emergence, because all of the properties mentioned here are in fact reducible to facts about the properties of individuals and their relations. Density is obviously so; we can derive density by measuring the number of individuals per unit of space. The capacity of a group to store reputations is also a direct consequence of individuals’ ability to remember facts about other individuals and communicate their memories to others. The community’s representation of “reputation” is nothing over and above this distributed set of beliefs and interactions. And the fact of synchronized price movements over an extended trading area likewise has perfectly visible microfoundations at the individual level: communications and transportation technologies permit traders to take advantage of momentary price differentials in different places, leading to a tendency for all accessible points within the region to reveal prices that are synchronized with each other (modulo the transportation costs that exist between points).
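To see just how direct the microfoundations are in the price case, consider a toy simulation of arbitrage across a few towns. Everything in it — the town names, prices, transport cost, and adjustment rate — is invented for illustration; it is a sketch of the mechanism just described, not a model drawn from Braudel or DeLanda.

```python
# Toy sketch of price synchronization via individual arbitrage. Traders buy
# low and sell high whenever the price gap between two towns exceeds the
# transport cost; no market-level mechanism is added. All numbers invented.
import random

towns = {"A": 100.0, "B": 140.0, "C": 90.0}   # invented local prices
transport_cost = 5.0   # cost to move one unit between any two towns
flow_rate = 0.1        # how strongly each arbitrage trade moves prices

random.seed(0)
for _ in range(200):
    src, dst = random.sample(sorted(towns), 2)
    gap = towns[dst] - towns[src]
    if gap > transport_cost:   # profitable: buy in src, sell in dst
        towns[src] += flow_rate * (gap - transport_cost)  # demand raises source price
        towns[dst] -= flow_rate * (gap - transport_cost)  # supply lowers destination price

print({t: round(p, 1) for t, p in towns.items()})
# Prices converge to within roughly transport_cost of one another --
# synchronization "modulo the transportation costs", generated entirely by
# the individual traders' behavior.
```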

These observations lead me to suspect that the concept of emergence is not doing much real work here. The paraphrase that DeLanda offers as a summary conclusion is correct:

Thus, both ‘the Market’ and ‘the State’ can be eliminated from a realist ontology by a nested set of individual emergent wholes operating at different scales. (16)

But this observation does not imply or presuppose the idea of strong emergence.

It seems, then, that we could put aside the language of emergence and rest on the claim that assemblages at various levels have stable properties that can be investigated empirically and historically; there is no need for reduction to a more fundamental level. So assemblage theory is anti-reductionist and is eclectic with regard to the question of levels of the social world. We can formulate concepts of social entities at a wide range of levels and accommodate those concepts to the basic idea of assemblage, and there is no need for seeking out inter-level reductions. But likewise there is no need to insist on the obscure idea of strong emergence.

Assemblage theory and social realism

This treatment of social theory from the point of view of assemblage theory is distinctly friendly to the language of realism. DeLanda argues that assemblages are real, mind-independent, and ontologically stable. Assemblages are in the world and can be treated as independent individual things. Here is a representative statement:

The distinction between a concept and its cases also has an ontological aspect. The concept itself is a product of our minds and would not exist without them, but concrete assemblages must be considered to be fully independent of our minds. This statement must be qualified, because in the case of social assemblages like communities, organizations, and cities, the assemblages would cease to exist if our minds disappeared. So in this case we should say that social assemblages are independent of the content of our minds, that is, independent of the way in which communities, organizations, and cities are conceived. This is just another way of saying that assemblage theory operates within a realist ontology. (138)

The most important transcendent entity that we must confront and eliminate is the one postulated to explain the existence and endurance of autonomous entities: essences. (139)

Both points are crucial. DeLanda emphasizes that social entities (assemblages) are real items in the social world, with a temporally and causally persistent reality; and he denies that the ideas of “essence”, “kind”, or “inner nature” have a role in science. This is an anti-essentialist realism, and it is a highly appropriate basis for social ontology.

Appraisal

There is much more to discuss in DeLanda’s current treatment of assemblage, and I expect to return to other issues in later posts. What I find particularly interesting about the current book are the substantive observations DeLanda makes about various historical formations — cities, governments, modes of production, capitalism. Assemblage theory is of real value for social scientists only if it provides a better vocabulary for describing social entities and causes. And DeLanda’s illustrations make a persuasive case for this conclusion.

For example, in discussing Braudel on the difference between markets and capitalism he writes:

These are powerful words. But how can anyone dare to suggest that we must distinguish capitalism from the market economy? These two terms are, for both the left and the right, strictly synonymous. However, a close examination of the history of commercial, financial, and industrial organizations shows that there is indeed a crucial difference, and that ignoring it leads to a distortion of our historical explanations. (41)

This discussion has some significant parallels with the treatment of the modern economy offered by Dave Elder-Vass discussed earlier (link). And DeLanda’s closing observation in chapter 1 is quite insightful:

Much of the academic left today has become prey to the double danger of politically targeting reified generalities (Power, Resistance, Capital, Labour) while at the same time abandoning realism. A new left may yet emerge from these ashes but only if it recovers its footing on a mind-independent reality and if it focuses its efforts at the right social scale, that is, if it leaves behind the dream of a Revolution that changes the entire system. This is where assemblage theory may one day make a real difference. (48)

More than the logical exposition of various esoteric concepts associated with assemblage, it is DeLanda’s intelligent characterization of various concrete social and historical processes (for example, his extensive discussion of Braudel in chapter 1) that cements the intellectual importance of assemblage theory for historical and social scientific thinking.

Another important virtue of the treatment here is that DeLanda makes a strong case for a social ontology that is both anti-reductionist and anti-essentialist. Social things have properties that we don’t need to attempt to reduce to properties of ensembles of components; but social things are not transcendent, essential wholes whose behavior is independent of the activities of the individuals and lower-level configurations of which they consist. Further, this view of social ontology has an important implication that DeLanda explicitly calls out: we need to recognize the fact of downward causation from social configurations (individual assemblages) to the actions of the individuals and lesser configurations of which they consist. A community embodying a set of norms about deference and respectful behavior in fact elicits these forms of behavior in the individuals who make up the community. This is so through the very ordinary fact that individuals monitor each other’s behavior and sometimes retaliate when norms are breached. (This was the view of community social power developed several decades ago by Michael Taylor; Community, Anarchy and Liberty.)

Tilly on moving through history

We have long given up on the idea that history has a direction or a fundamental motor driving change. There are no iron laws of history, and no single engine of change, whether market, class struggle, democracy, or “modernization.” And there is no single path forward into a more modern world. At the same time, we recognize that history is not random or chaotic, and that there are forces and circumstances that make some historical occurrences more likely than others. “Men make their own history, but not in circumstances of their own choosing” (Marx, Eighteenth Brumaire). So historical process is both contingent and constrained. (Here are several earlier posts on contingency and causation in history; link, link, link, link, link.)

One of the most insightful historical sociologists in generations is Charles Tilly. He was also tremendously prolific. His volume Roads From Past To Future offers a good snapshot of some of his thinking about contention, social change, and political conflict from the 1970s through the 1990s. The essays are all interesting (including a summary appreciation by Arthur Stinchcombe). Particularly interesting are two chapters on the routinization of political struggle, “The Modernization of Political Conflict in France” and “Parliamentarization in Great Britain, 1758-1834”. But especially worthy of comment is the opening essay, in which Tilly tries to make sense of his own evolving ideas about social process and social change. And there are some ideas presented there that don’t really have counterparts elsewhere in the historical sociology literature. As is so often true, Tilly demonstrates his characteristic ability to bring novelty and innovation to social science topics.

How do we get from past to future? If we are examining complex processes such as industrialization, state formation, or secularization, we follow roads defined by changing configurations of social interaction. Effective social analysis identifies those roads, describes them in detail, specifies what other itineraries they could have taken, then provides explanations for the itineraries they actually followed. (1)

Especially important here is a distinction that Tilly draws between “degree of scripting” and “degree of local knowledge” to analyze both individual actions and collective actions. He believes we can classify social action in terms of these two dimensions. Figure 1.1 indicates his view of the kinds of action that occur in the four extreme quadrants of this graph — thin and intense ritual, and shallow and deep improvisation. And he offers the examples of science and jazz as exemplars of activities that embody different proportions of the two characteristics.

Figure 1.1. Scripting and local knowledge in social interaction, p. 2

The idea of “scripting” refers to the fact that both individuals and groups often act on the basis of habit and received “paradigms” of behavior in response to certain stylized action opportunities. At one stage in his career Tilly referred to these as repertoires of contentious action. And it reflects the idea that individuals and groups learn to engage in contentious politics; they learn new forms of demonstration and opposition in different periods of history, and repeat those forms over multiple generations.

Local knowledge captures for Tilly the feature of social action that is highly responsive to the actors’ intimate knowledge of the environment of contention, and their ability to improvise strategies of resistance in response to the specifics of the local environment. James Scott describes Malaysian peasants who toppled trees in the path of mechanized harvesters (Weapons of the Weak: Everyday Forms of Peasant Resistance; link), and David Graeber describes the strategies adapted by the Spanish anarchist group Ya Basta! as a means of creating disruption at the Summit of the Americas in Quebec in 2001 (link). In each case actors found novel ways suited to current circumstances through which to further their goals.

Tilly’s general point is that historical circumstances are propelled by both these sets of features of action, and that different actions, movements, and conflicts can be characterized in terms of different blends of improvisation and script.

Also important in this chapter is Tilly’s advocacy for what he calls “relationalism” in opposition to individualism and systems theory.

Relational analysis holds great promise for the understanding of social processes. Relational analysis takes social relations, transactions, or ties as the starting points of description and explanation. It claims that recurrent patterns of interaction among occupants of social sites (rather than, say, mentally lodged models of social structures or processes) constitute the subject matter of social science. In relational analysis, social causation operates within the realm of interaction. (7)

This seems to be very similar to the point that Elias makes through his theory of “figurational sociology” (link). This is a theme that recurs frequently in Tilly’s work, including especially in Dynamics of Contention.

Finally, I find his comments about the inadequacy of narrative as a foundation for social explanation to be worth considering carefully.

Negatively, we must recognize that conventional narratives of social life do indispensable work for interpersonal relations but represent the actual causal structure of social processes very badly; narrative is the friend of communication, the enemy of explanation. We must see that the common conception of social processes as the intended consequences of motivated choices by self-contained, self-motivated actors — individuals, groups, or societies — misconstrues the great bulk of human experience. We must learn that culture does not constitute an autonomous, self-driving realm but intertwines inseparably with social relations. (7)

These comments are particularly relevant in response to historians who attempt to explain complex social outcomes as no more than the intersecting series of purposive strategies by numerous actors; Tilly is emphasizing the crucial importance of unintended consequences and conjunctural causation that can only be captured by a more system-level account of the field of change.

And Tilly thinks the two points (relationality and narrative) go together:

Relational analysis meshes badly with narrative, since it necessarily attends to simultaneous, indirect, incremental, and unnoticed cause-effect connections. (9) 

So how does all of this help us think about important events and turning points in our own history? What about the 1965 march from Selma to Montgomery pictured above?

Several of Tilly’s points are clearly relevant for historians seeking to contextualize and explain the Selma march. The march itself reflected a well-understood script within the Civil Rights movement, in its organization, chants, and implementation. At the same time the organizers and participants showed substantial local knowledge that was inflected in some of the improvisations involved in the march — for example, the great distance to be covered. Both script and improvisation found a role on that day. That said, I don’t think this analytical distinction is as fundamental as Tilly believes. It is one useful dimension of analysis, but not the key to understanding the event.

Second, a telling of the story that simply presented a narrative of decisions, actions, and interactions of various individuals would seriously misrepresent the march. In order to understand the demonstration and the movement it reflected we need to understand a great deal about the preceding fifty years of race, economics, and politics in the United States and beyond. And we need to understand some of the realities of the Jim Crow race system in place in Alabama at the time. The event does not stand by itself. So we need something like Doug McAdam’s excellent Political Process and the Development of Black Insurgency, 1930-1970, 2nd Edition if we are to understand the structural conditions within the context of which the movement and the march unfolded. Simple narrative is not sufficient here — just as Tilly argues.

And third, this complex demonstration reflects relationality at every level — leaders, organizations, neighborhoods, and individual participants all played their parts in a complicated interrelated set of engagements.

A different way of putting these points is to say that the Selma march is a single complex event, involving the actions and strategies of numerous actors. It is enormously important and worth focusing on. But it is not a miniature of the whole Civil Rights movement. An adequate treatment of the movement, and a satisfactory understanding of the movement’s transformational role in American society, needs to move beyond the events and actions of the day to the larger structures and conditions within which actors large and small played their parts.

Does the framework of script and local knowledge help much in the task of explaining historical change? This scheme seems to fit the swath of historical change that is most interesting to Tilly, the field of contentious politics. It seems less well suited, though, to other more impersonal historical processes — the rise of global trade, the surge of involuntary migration, or the general trend towards higher-productivity agriculture. In these areas the distinction seems to be somewhat beside the point. What seem more important in Tilly’s reflections here are his emphasis on the contingency of historical sequences and his insistence on the idea that social actors in relationships with each other are the “doers” of historical change.

Elias on figurational sociology


A premise shared by all actor-centered versions of sociology is that individuals and their actions are the rock-bottom level of the social world. Every other social fact derives from facts at this level. Norbert Elias raises a strong and credible challenge to this ontological assumption in his work, offering a view of social action that makes “figurations” of actors just as real as individual actors themselves. By figuration he means something like an interlocking set of individuals whose actions are a fluid series of reactions to and expectations about others. Figurations include both conflict and cooperation. And he is insistent that figurations cannot be reduced to the sum of a collection of independent actors and their choices. “Imagine the interlocking of the plans and actions, not of two, but of two thousand or two million interdependent players. The ongoing process which one encounters in this case does not take place independently of individual people whose plans and actions keep it going. Yet it has a structure and demands an explanation sui generis. It cannot be explained in terms of the ‘ideas’ or the ‘actions’ of individual people” (52). So good sociology needs to pay attention to figurations, not just individuals and their mental states.

Elias’s most vivid illustration of what he means by a figuration comes from his reflections on the game of soccer and the flow of action across two teams and twenty-two individual players over extended episodes of play. These arguments constitute the primary topic of volume 7 of his collected writings, Elias and Dunning, Quest for Excitement: Sport and Leisure in the Civilising Process. (This is particularly relevant at a time when millions of people are viewing the Euro Cup.)

The observation of an ongoing game of football can be of considerable help as an introduction to the understanding of such terms as interlocking plans and actions. Each team may have planned its strategy in accordance with the knowledge of their own and their opponents’ skills and foibles. However, as the game proceeds, it often produces constellations which were not intended or foreseen by either side. In fact, the flowing pattern formed by players and ball in a football game can serve as a graphic illustration not only of the concept of ‘figurations’ but also of that of ‘social process’.

The game-process is precisely that, a flowing figuration of human beings whose actions and experiences continuously interlock, a social process in miniature. One of the most instructive aspects of the fast-changing pattern of a football game is the fact that this pattern is formed by the moving players of both sides.

If one concentrated one’s attention only on the activities of the players of one team and turned a blind eye to the activities of the other, one could not follow the game. The actions and experiences of the members of the team which one tried to observe in isolation and independently of the actions and perceptions of the other would remain incomprehensible. In an ongoing game, the two teams form with each other a single figuration. It requires a capacity for distancing oneself from the game to recognize that the actions of each side constantly interlock with those of their opponents and thus that the two opposing sides form a single figuration. So do antagonistic states. Social processes are often uncontrollable because they are fuelled by enmity. Partisanship for one side or another can easily blur that fact. (51-52; italics mine)

Here is a more theoretical formulation from Elias, from “Dynamics of sports groups” in the same volume.

Let us start with the concept of ‘figuration’. It has already been said that a game is the changing figuration of the players on the field. This means that the figuration is not only an aspect of the players. It is not, as one sometimes seems to believe if one uses related expressions such as ‘social pattern’, ‘social group’, or ‘society’, something abstracted from individual people. Figurations are formed by individuals, as it were ‘body and soul’. If one watches the players standing and moving on the field in constant inter-dependence, one can actually see them forming a continuously changing figuration. If groups or societies are large, one usually cannot see the figurations their individual members form with one another. Nevertheless, in these cases too people form figurations with each other — a city, a church, a political party, a state — which are no less real than the one formed by players on a football field, even though one cannot take them in at a glance.

To envisage groupings of people as figurations in this sense, with their dynamics, their problems of tension and of tension control and many others, even though one cannot see them here and now, requires a specific training. This is one of the tasks of figurational sociology, of which the present essay is an example. At present, a good deal of uncertainty still exists with regard to the nature of that phenomenon to which one refers as ‘society’. Sociological theories often appear to start from the assumption that ‘groups’ or ‘societies’, and ‘social phenomena’ in general, are something abstracted from individual people, or at least that they are not quite as ‘real’ as individuals, whatever that may mean. The game of football — as a small-scale model — can help to correct this view. It shows that figurations of individuals are neither more nor less real than the individuals who form them. Figurational sociology is based on observations such as this. In contrast to sociological theories which treat societies as if they were mere names, an ‘ideal type’, a sociologist’s construction, and which are in that sense representative of sociological nominalism, it represents a sociological realism. Individuals always come in figurations and figurations are always formed by individuals. (199)

This ontological position converges closely with the “relational” view of social action advocated by the new pragmatists as well as Chuck Tilly. The pragmatists’ idea that individual actions derive from the flow of opportunities and reactions instigated by the movements of others is particularly relevant. But Elias’s view also seems to have some resonance with the idea of methodological localism: “individuals in local social interactions are the molecule of the social world.”

What seems correct here is an insight into the “descriptive ontology” of the social world. Elias establishes the fact of entangled, flowing patterns of action by individuals during an episode, and makes it credible that these collective patterns do not derive fully or directly from the individual intentions of the participants. “Figurations are just as real as individuals.” So the sociologist’s ontology needs to include figurations. Moreover, the insight seems to cast doubt as well on the analytical sociologists’ strategy of “dissection”. These points suggest that Elias provides a basis for a critique of ontological individualism. And Elias can be understood as calling for more realism in sociological description.

What this analysis does not provide is any hint about how to use this idea in constructing explanations of larger-scale social outcomes or patterns. Are we forced to stop with the discovery of a set of figurations in play in a given social occurrence? Are we unable to provide any underlying explanation of the emergence of the figuration itself? Answers to these questions are not clear in Elias’s text. And yet this is, after all, the purpose of explanatory sociology.

It is also not completely convincing to me that the figurations described by Elias could not be derived through something like an agent-based simulation. The derivation of flocking and swarming behavior in fish and birds seems to be exactly this — a generative account of the emergence of a collective phenomenon (figuration) from assumptions about the decision-making of the individuals. So it seems possible that we might look at Elias’s position as posing a challenge to actor-based sociology that can now be addressed, rather than a refutation of it.
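For concreteness, here is a minimal version of the kind of agent-based simulation I have in mind — a Vicsek-style alignment model (Vicsek et al. 1995), one standard generative account of flocking. Each agent simply adopts the average heading of its nearby neighbors, plus noise; all parameter values below are arbitrary illustrative choices. The point is only that a coherent collective “figuration” emerges from purely individual-level rules.

```python
# Minimal Vicsek-style flocking model: individual rules only, no
# collective-level rule. Parameter values are arbitrary illustrative choices.
import numpy as np

rng = np.random.default_rng(1)
n, box, radius, speed, noise, steps = 200, 10.0, 1.0, 0.1, 0.3, 300

pos = rng.uniform(0, box, (n, 2))          # positions in a periodic box
theta = rng.uniform(-np.pi, np.pi, n)      # headings

for _ in range(steps):
    # pairwise displacements with periodic boundaries
    d = pos[:, None, :] - pos[None, :, :]
    d -= box * np.round(d / box)
    neighbors = (d ** 2).sum(-1) < radius ** 2   # includes self
    # each agent adopts the mean heading of its neighbors, plus noise
    s = (neighbors * np.sin(theta)).sum(axis=1)
    c = (neighbors * np.cos(theta)).sum(axis=1)
    theta = np.arctan2(s, c) + rng.uniform(-noise, noise, n)
    pos = (pos + speed * np.column_stack([np.cos(theta), np.sin(theta)])) % box

# Order parameter: ~0 for uncoordinated milling, ~1 for a coherent flock.
order = np.hypot(np.sin(theta).mean(), np.cos(theta).mean())
print(f"alignment = {order:.2f}")   # high at low noise, collapses as noise grows
```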

In this sense it appears that figurational sociology is in the same position as various versions of microsociology considered elsewhere (e.g. Goffman): it identifies a theoretical lacuna in rational choice theory and thin theories of the actor, but it does not provide recommendations for how to proceed with a more adequate explanatory theory.

(Recall the earlier discussion on non-generative social facts and ontological individualism; link. That post makes a related argument for the existence of social facts that cannot be derived from facts about the individual actors involved. In each case the problem derives from the highly path-dependent nature of social outcomes.)

Phase transitions and emergence

Image: Phase diagram of water, Solé. Phase Transitions, 4
 
I’ve proposed to understand the concepts of emergence and generativeness as being symmetrical (link). Generative higher-level properties are those that can be calculated or inferred based on information about the properties and states of the micro-components. Emergent properties are properties of an ensemble that have substantially different dynamics and characteristics from those of the components. So emergent properties may seem to be non-generative properties. Further, I understand the idea of emergence in a weak and a strong sense: weakly emergent properties of an ensemble are properties that cannot be derived from the characteristics of the components given the limits of observation or computation; and strongly emergent properties are ones that cannot be derived in principle from full knowledge of the properties and states of the components. They must be understood in their own terms.

Conversations with Tarun Menon at the Tata Institute of Social Sciences in Mumbai were very helpful in allowing me to broaden somewhat the way I understand emergence in physical systems. So here I’d like to consider some additional complications for the theory of emergence coming from one specific physical finding, the mathematics of phase transitions.

Complexity scientists have spent a lot of effort on understanding the properties of complex systems using a different concept, the idea of a phase transition. The transition from liquid water to steam as temperature increases is an example; the transition happens abruptly as the control parameter passes a critical value — 100 degrees centigrade at a constant pressure of one atmosphere, in the case of the liquid-gas transition.
 
Ricard Solé presents the current state of complexity theory with respect to the phenomenon of phase transition in Phase Transitions. Here is how he characterizes the core idea:

In the previous sections we used the term critical point to describe the presence of a very narrow transition domain separating two well-defined phases, which are characterized by distinct macroscopic properties that are ultimately linked to changes in the nature of microscopic interactions among the basic units. A critical phase transition is characterized by some order parameter φ(μ) that depends on some external control parameter μ (such as temperature) that can be continuously varied. In critical transitions, φ varies continuously at μc (where it takes a zero value) but the derivatives of φ are discontinuous at criticality. For the so-called first-order transitions (such as the water-ice phase change) there is a discontinuous jump in φ at the critical point. (10)
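Solé’s verbal definitions can be put in standard textbook notation. The following is a generic Landau-style rendering of the distinction between critical and first-order transitions — my formulation of the standard forms, not equations copied from the book:

```latex
% Standard Landau-style rendering of the quoted definitions (my
% formulation, not Sole's own equations).
% Critical (second-order) transition: \varphi is continuous at \mu_c,
% but its derivative is not:
\[
\varphi(\mu) =
\begin{cases}
0 & \mu \le \mu_c \\
A\,(\mu - \mu_c)^{\beta} & \mu > \mu_c
\end{cases}
\]
% First-order transition (e.g. the water-ice change): \varphi itself jumps:
\[
\lim_{\mu \to \mu_c^{-}} \varphi(\mu) \;\ne\; \lim_{\mu \to \mu_c^{+}} \varphi(\mu)
\]
```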

So what is the connection between “emergent phenomena” and systems that undergo phase transitions? One possible connection is this: when a system undergoes a phase transition, its micro-components get rapidly reconfigured into a qualitatively different macro-structure. And yet the components themselves are unchanged. So one might be impressed with the fact that the pre- and post-transition macro-states correspond to very nearly the same configurations of micro-states. The steaminess of the water molecules is triggered by an external parameter — a change in temperature (or pressure) — and their characteristics around the critical point are very similar (their mean kinetic energy is approximately equal before and after the transition). The diagram above represents the physical realities of water molecules in the three phase states.
 
Solé and other complexity theorists see this “phase-transition” phenomenon in a wide range of systems — not only simple physical systems but biological and social systems as well. Solé offers the phenomenon of flocking as an example. We might consider whether the phenomenon of ethnic violence is a phase transition from a mixed but non-aggressive population of individuals to occasional abrupt outbursts of widespread conflict (link).

The disanalogy here is the fact that “unrest” is not a new equilibrium phase of the substrate of dispersed individuals; rather, it is an occasional abnormal state of brief duration. It is as if water sometimes spontaneously transitioned to steam and then returned to the liquid phase. Solé treats “percolation” phenomena later in the book, and rebellion seems more plausibly treated as a percolation process. Solé treats forest fire this way. But the representation works equally well for any process based on contiguous contagion.
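Here is a bare-bones sketch of a site-percolation model of the general sort Solé applies to forest fires, read here as a contagion process: unrest spreads only by contiguous contact, and whether a local spark becomes a widespread cascade depends on whether susceptible sites percolate across the grid. The grid size, density values, and the corner “spark” are my own invented illustration, not a model from the book.

```python
# Toy site-percolation/contagion sketch. Each site is independently
# "susceptible" with probability p; contagion spreads from one corner spark
# to contiguous susceptible sites. All parameter values are invented.
import random

def reached_fraction(p: float, size: int = 50, seed: int = 0) -> float:
    """Fraction of a size x size grid reached by contagion from one corner
    spark, when each site is independently susceptible with probability p."""
    rng = random.Random(seed)
    grid = [[rng.random() < p for _ in range(size)] for _ in range(size)]
    if not grid[0][0]:
        return 0.0
    stack, seen = [(0, 0)], {(0, 0)}
    while stack:
        x, y = stack.pop()
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nx < size and 0 <= ny < size \
                    and (nx, ny) not in seen and grid[nx][ny]:
                seen.add((nx, ny))
                stack.append((nx, ny))
    return len(seen) / size ** 2

for p in (0.4, 0.55, 0.59, 0.65, 0.8):
    print(p, round(reached_fraction(p), 3))
# Near the 2-D site-percolation threshold (~0.593) the outcome flips from
# small contained episodes to cascades that span the whole grid.
```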
 
What seems to be involved here is a conclusion that is a little bit different from standard ideas about emergent phenomena. The point seems to be that a certain class of systems have dynamic characteristics that are formal and abstract and do not require that we understand the micro mechanisms upon which they rest at all. It is enough to know that system S is formally similar to a two-dimensional array of magnetized atoms (the “Ising model”); then we can infer that the phase-transition behavior of the system will have specific mathematical properties. This might be summarized with the slogan, “system properties do not require derivation from micro dynamics.” Or in other words: systems have properties that don’t depend upon the specifics of the individual components — a statement that is strongly parallel to but distinct from the definition of emergence mentioned above. It is distinct because the approach leaves it entirely open that the system properties are generated by the dynamics of the components.
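To illustrate, here is a compact Metropolis simulation of the two-dimensional Ising model itself — the standard textbook algorithm, with grid size and sweep counts chosen arbitrarily. Notice that nothing in the code depends on what the “spins” really are; the phase behavior follows from the formal structure alone.

```python
# Compact Metropolis simulation of the 2-D Ising model (J = 1, k_B = 1).
# Lattice size and sweep count are arbitrary illustrative choices.
import numpy as np

def magnetization(T: float, size: int = 16, sweeps: int = 200, seed: int = 0) -> float:
    """Mean |magnetization| of a 2-D Ising lattice after Metropolis updates."""
    rng = np.random.default_rng(seed)
    spins = np.ones((size, size), dtype=int)   # start from the ordered state
    for _ in range(sweeps * size * size):
        i, j = rng.integers(0, size, size=2)
        # energy change from flipping spin (i, j), periodic boundaries
        nb = (spins[(i + 1) % size, j] + spins[(i - 1) % size, j]
              + spins[i, (j + 1) % size] + spins[i, (j - 1) % size])
        dE = 2 * spins[i, j] * nb
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            spins[i, j] *= -1
    return abs(spins.mean())

for T in (1.5, 2.0, 2.27, 3.0, 4.0):
    print(T, round(magnetization(T), 2))
# |m| stays near 1 well below the critical temperature (T_c ~ 2.27 in these
# units) and collapses toward 0 above it, whatever the "spins" really are.
```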

This idea is fundamental to Solé’s analysis, when he argues that it is possible to understand phase transitions without regard to the particular micro-level mechanisms:

Although it might seem very difficult to design a microscopic model able to provide insight into how phase transitions occur, it turns out that great insight has been achieved by using extremely simplified models of reality. (10)

Here is how Solé treats swarm behavior as a possible instance of phase transition.

In social insects, while colonies behave in complex ways, the capacities of individuals are relatively limited. But then, how do social insects reach such remarkable ends? The answer comes to a large extent from self-organization: insect societies share basic dynamic properties with other complex systems. (157)

Intuitively the idea is that a collection of birds, ants, or bees may be in a state of random movement with respect to each other; and then as some variable changes the ensemble snaps into a coordinated “swarm” of flight or movement. Unfortunately he does not provide a mathematical example illustrating swarm behavior; the closest example he provides has to do with patterns of intense activity and slack activity over time in small to medium colonies of ants. This periodicity is related to density. Mark Millonas attempted such an account of swarming in a Santa Fe Institute paper in 1993, “Swarms, Phase Transitions, and Collective Intelligence; and a Nonequilibrium Statistical Field Theory of Swarms and Other Spatially Extended Complex Systems” (link).
 
This work is interesting, but I am not sure that it sheds new light on the topic of emergence per se. Fundamentally it demonstrates that the aggregation dynamics of complex systems are often non-linear and amenable to formal mathematical modeling. As a critical variable changes, a qualitatively new macro-property “emerges” from the ensemble of micro-components of which it is composed. This approach is consistent with the generativity view — the new property is generated by the interactions of the micro-components during an interval of change in critical variables. But it also maintains that systems undergoing phase transitions can be studied using a mathematical framework that abstracts from the physical properties of those micro-components. This is the point of the series of differential equation models that Solé provides. Once we have determined that a particular system has formal properties satisfying the assumptions of the DE model, we can then attempt to measure the critical parameters and derive the evolution of the system without further information about particular mechanisms at the micro-level.
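As a stand-in for that family of models, here is a numerical integration of the canonical normal form for a continuous transition, dφ/dt = (μ − μc)φ − φ³. This is my illustrative choice of equation — the textbook pitchfork form, not a formula copied from Solé — but it shows how a system’s macro-level evolution can be derived from a control parameter alone, with the micro mechanisms fully abstracted away.

```python
# Numerical integration of d(phi)/dt = (mu - mu_c)*phi - phi**3, the
# canonical normal form for a continuous (critical) transition. The critical
# value, step size, and initial condition are arbitrary illustrative choices.
mu_c, dt = 1.0, 0.01

def steady_state(mu: float, phi: float = 0.01, steps: int = 20000) -> float:
    """Integrate the order-parameter equation forward to its fixed point."""
    for _ in range(steps):
        phi += dt * ((mu - mu_c) * phi - phi ** 3)
    return phi

for mu in (0.5, 0.9, 1.0, 1.1, 1.5, 2.0):
    print(mu, round(steady_state(mu), 3))
# phi -> 0 below mu_c and phi -> sqrt(mu - mu_c) above it: the order
# parameter is continuous at the critical point, but its derivative is not --
# exactly the signature of a critical transition described above.
```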
 