Generativism

There is a seductive appeal to the idea of a “generative social science”. Joshua Epstein is one of the main proponents of the idea, most especially in his book, Generative Social Science: Studies in Agent-Based Computational Modeling. The central tool of generative social science is the construction of an agent-based model (link). The ABM is said to demonstrate the way in which an observable social outcome or pattern is generated by the properties and activities of the component parts that make it up — the actors. The appeal comes from the notion that it is possible to show how complicated or complex outcomes are generated by the properties of the components that make them up. Fix the properties of the components, and you can derive the properties of the composites. Here is Epstein’s capsule summary of the approach:

The agent-based computational model — or artificial society — is a new scientific instrument. It can powerfully advance a distinctive approach to social science, one for which the term “generative” seems appropriate. I will discuss this term more fully below, but in a strong form, the central idea is this: To the generativist, explaining the emergence of macroscopic societal regularities, such as norms or price equilibria, requires that one answer the following question: 

The Generativist’s Question 

How could the decentralized local interactions of heterogeneous autonomous agents generate the given regularity?

The agent-based computational model is well-suited to the study of this question, since the following features are characteristic: [heterogeneity, autonomy, explicit space, local interactions, bounded rationality]

(5-6)

And a few pages later:

Agent-based models provide computational demonstrations that a given microspecification is in fact sufficient to generate a macrostructure of interest. . . . To the generativist — concerned with formation dynamics — it does not suffice to establish that, if deposited in some macroconfiguration, the system will stay there. Rather, the generativist wants an account of the configuration’s attainment by a decentralized system of heterogeneous autonomous agents. Thus, the motto of generative social science, if you will, is: If you didn’t grow it, you didn’t explain its emergence. (8)
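Epstein's motto can be made concrete with a toy example. The sketch below is not Epstein's own code but a minimal Schelling-style residential model in Python (grid size, vacancy rate, and tolerance threshold are illustrative assumptions): agents of two types follow a purely local rule — move if fewer than half of your neighbors are like you — and a macro-pattern of segregation is "grown" that no individual agent intends.

```python
import random

SIZE = 20

def neighbors(i, j):
    # 8-cell Moore neighborhood on a torus
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            if di or dj:
                yield (i + di) % SIZE, (j + dj) % SIZE

def mean_similarity(grid):
    """Mean fraction of like-typed neighbors, averaged over occupied cells."""
    total = count = 0
    for i in range(SIZE):
        for j in range(SIZE):
            if grid[i][j] is None:
                continue
            occ = same = 0
            for ni, nj in neighbors(i, j):
                if grid[ni][nj] is not None:
                    occ += 1
                    same += grid[ni][nj] == grid[i][j]
            if occ:
                total += same / occ
                count += 1
    return total / count

def step(grid, threshold, rng):
    """Each unhappy agent (fewer than `threshold` like neighbors) moves
    to a randomly chosen vacant cell."""
    empties = [(i, j) for i in range(SIZE) for j in range(SIZE)
               if grid[i][j] is None]
    for i in range(SIZE):
        for j in range(SIZE):
            agent = grid[i][j]
            if agent is None:
                continue
            occ = same = 0
            for ni, nj in neighbors(i, j):
                if grid[ni][nj] is not None:
                    occ += 1
                    same += grid[ni][nj] == agent
            if occ and same / occ < threshold:
                dest = rng.choice(empties)
                empties.remove(dest)
                empties.append((i, j))
                grid[dest[0]][dest[1]] = agent
                grid[i][j] = None

rng = random.Random(0)
cells = [0, 1] * 160 + [None] * 80   # 320 agents of two types, 80 vacancies
rng.shuffle(cells)
grid = [cells[k * SIZE:(k + 1) * SIZE] for k in range(SIZE)]

before = mean_similarity(grid)       # near 0.5 for a random arrangement
for _ in range(30):
    step(grid, threshold=0.5, rng=rng)
after = mean_similarity(grid)        # substantially higher: segregation has been "grown"
print(round(before, 2), round(after, 2))
```

In Epstein's vocabulary, the local movement rule is the microspecification and the rise in neighborhood similarity is the generated macrostructure; the simulation is a computational demonstration that the one is sufficient to produce the other.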

Here is how Epstein describes the logic of one of the most extensive examples of generative social science, the attempt to understand the disappearance of the Anasazi population in the American Southwest nearly 800 years ago.

The logic of the exercise has been, first, to digitize the true history — we can now watch it unfold on a digitized map of Longhouse Valley. This data set (what really happened) is the target — the explanandum. The aim is to develop, in collaboration with anthropologists, microspecifications — ethnographically plausible rules of agent behavior — that will generate the true history. The computational challenge, in other words, is to place artificial Anasazi where the true ones were in 800 AD and see if — under the postulated rules — the simulated evolution matches the true one. Is the microspecification empirically adequate, to use van Fraassen’s phrase? (13)

Here is a short video summarizing the ABM developed under these assumptions:

The artificial Anasazi experiment is an interesting one, and one to which the constraints of an agent-based model are particularly well suited. The model follows residence location decision-making based on ground-map environmental information.

But this does not imply that the generativist interpretation is equally applicable as a general approach to explaining important social phenomena.

Note first how restrictive the assumption is of “decentralized local interactions” as a foundation to the model. A large proportion of social activity is neither decentralized nor purely local: the search for muons in an accelerator lab, the advance of an armored division into contested territory, the audit of a large corporation, preparations for a strike by the UAW, the coordination of voices in a large choir, and so on, indefinitely. In all these examples and many more, a crucial part of the collective behavior of the actors is the coordination that occurs through some centralized process — a command structure, a division of labor, a supervisory system. And by their design, ABMs appear to be incapable of representing these kinds of non-local coordination.

Second, all these simulation models proceed from highly stylized and abstract modeling assumptions. And the outcomes they describe capture at best some suggestive patterns that might be said to be partially descriptive of the outcomes we are interested in. Abstraction is inevitable in any scientific work, of course; but once we recognize that fact, we must abandon the idea that the model demonstrates the “generation” of the empirical phenomenon. Neither premises nor conclusions are fully descriptive of concrete reality; both are approximations and abstractions. And it would be fundamentally implausible to maintain that the modeling assumptions capture all the factors that are causally relevant to the situation. Instead, they represent a particular stylized hypothesis about a few of the causes of the situation in question. Further, we have good reason to believe that introducing more details at the ground level will sometimes lead to significant alteration of the system-level properties that are generated.

 
So the idea that an agent-based model of civil unrest could demonstrate that (or how) civil unrest is generated by the states of discontent and fear experienced by various actors is fundamentally ill-conceived. If the unrest is generated by anything, it is generated by the full set of causal and dynamic properties of the set of actors — not the abstract stylized list of properties. And other posts have made the point that civil unrest or rebellion is rarely purely local in its origin; rather, there are important coordinating non-local structures (organizations) that influence mobilization and spread of rebellious collective action. Further, the fact that the ABM “generates” some macro characteristics that may seem empirically similar to the observed phenomenon is suggestive, but far from a demonstration that the model characteristics suffice to determine some aspect of the macro phenomenon. Finally, the assumption of decentralized and local decision-making is unfounded for civil unrest, given the important role that collective actors and organizations play in the success or failure of social mobilizations around grievances (link).
The point here is not that the generativist approach is invalid as a way of exploring one particular set of social dynamics (the logic of decentralized local decision-makers with assigned behavioral rules). On the contrary, this approach does indeed provide valuable insights into some social processes. The error is one of over-generalization — imagining that this approach will suffice to serve as a basis for analysis of all social phenomena. In a way the critique here is exactly parallel to that which I posed to analytical sociology in an earlier post. In both cases the problem is one of asserting priority for one specific approach to social explanation over a number of other equally important but non-equivalent approaches.

Patrick Grim et al provide an interesting approach to the epistemics of models and simulations in “How simulations fail” (link). Grim and his colleagues emphasize the heuristic and exploratory role that simulations generally play in probing the dynamics of various kinds of social phenomena.

 

Phase transitions and emergence

Image: Phase diagram of water, Solé. Phase Transitions, 4
 
I’ve proposed to understand the concepts of emergence and generativeness as being symmetrical (link). Generative higher-level properties are those that can be calculated or inferred based on information about the properties and states of the micro-components. Emergent properties are properties of an ensemble that have substantially different dynamics and characteristics from those of the components. So emergent properties may seem to be non-generative properties. Further, I understand the idea of emergence in a weak and a strong sense: weakly emergent properties of an ensemble are properties that cannot be derived from the characteristics of the components given the limits of observation or computation; and strongly emergent properties are ones that cannot be derived in principle from full knowledge of the properties and states of the components. They must be understood in their own terms.
Conversations with Tarun Menon at the Tata Institute for Social Sciences in Mumbai were very helpful in allowing me to broaden somewhat the way I understand emergence in physical systems. So here I’d like to consider some additional complications for the theory of emergence coming from one specific physical finding, the mathematics of phase transitions. 

Complexity scientists have spent a lot of effort on understanding the properties of complex systems using a different concept, the idea of a phase transition. The transition from liquid water to steam as temperature increases is an example; the transition happens abruptly as the system approaches the critical value of the phase parameter — 100 degrees centigrade at constant pressure of one atmosphere, in the case of liquid-gas transition. 
 
Richard Solé presents the current state of complexity theory with respect to the phenomenon of phase transition in Phase Transitions. Here is how he characterizes the core idea:

In the previous sections we used the term critical point to describe the presence of a very narrow transition domain separating two well-defined phases, which are characterized by distinct macroscopic properties that are ultimately linked to changes in the nature of microscopic interactions among the basic units. A critical phase transition is characterized by some order parameter φ(μ) that depends on some external control parameter μ (such as temperature) that can be continuously varied. In critical transitions, φ varies continuously at μc (where it takes a zero value) but the derivatives of φ are discontinuous at criticality. For the so-called first-order transitions (such as the water-ice phase change) there is a discontinuous jump in φ at the critical point. (10)
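Solé's order-parameter language can be illustrated with the simplest textbook case, the mean-field Ising magnet. The sketch below is my own illustration, not Solé's code: it iterates the self-consistency equation m = tanh(m / t), where t = T/T_c is the reduced temperature and m plays the role of the order parameter φ.

```python
import math

def magnetization(t, iters=10000):
    """Order parameter of the mean-field Ising model at reduced
    temperature t = T / T_c: the stable fixed point of m = tanh(m / t)."""
    m = 1.0
    for _ in range(iters):
        m = math.tanh(m / t)
    return m

below = magnetization(0.8)   # ordered phase: m is nonzero
above = magnetization(1.2)   # disordered phase: m vanishes
print(round(below, 3), round(above, 6))
```

As t approaches 1 from below, m goes continuously to zero while its derivative jumps, which is exactly Solé's characterization of a critical (continuous) transition as opposed to a first-order one.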

So what is the connection between “emergent phenomena” and systems that undergo phase transitions? One possible connection is this: when a system undergoes a phase transition, its micro-components get rapidly reconfigured into a qualitatively different macro-structure. And yet the components themselves are unchanged. So one might be impressed with the fact that the pre- and post-macro states correspond very closely to the same configurations of micro-states. The steaminess of the water molecules is triggered by an external parameter — change in temperature (or possibly pressure), and their characteristics around the critical point are very similar (their mean kinetic energy is approximately equal before and after transition). The diagram above represents the physical realities of water molecules in the three phase states.
 
Solé and other complexity theorists see this “phase-transition” phenomenon in a wide range of systems, not only simple physical systems but biological and social systems as well. Solé offers the phenomenon of flocking as an example. We might consider whether the phenomenon of ethnic violence is a phase transition from a mixed but non-aggressive population of individuals to occasional abrupt outbursts of widespread conflict (link).
The disanalogy here is the fact that “unrest” is not a new equilibrium phase of the substrate of dispersed individuals; rather, it is an occasional abnormal state of brief duration. It is as if water sometimes spontaneously transitioned to steam and then returned to the liquid phase. Solé treats “percolation” phenomena later in the book, and rebellion seems more plausibly treated as a percolation process. Solé treats forest fire this way. But the representation works equally for any process based on contiguous contagion.
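The percolation idea can also be made concrete with a small simulation. The sketch below is a generic site-percolation model, not Solé's own example (grid size and seed are assumptions): below the critical occupation probability (approximately 0.593 for a square lattice with 4-connectivity), only small clusters form; above it, a single cluster spans a finite fraction of the whole grid, which is the formal analogue of a contagion such as a forest fire or a spreading rebellion "percolating" through a population.

```python
import random
from collections import deque

def largest_cluster_fraction(p, size=60, seed=1):
    """Occupy each site of a size x size grid with probability p and
    return the largest 4-connected cluster as a fraction of all sites."""
    rng = random.Random(seed)
    occupied = [[rng.random() < p for _ in range(size)] for _ in range(size)]
    seen = [[False] * size for _ in range(size)]
    best = 0
    for i in range(size):
        for j in range(size):
            if not occupied[i][j] or seen[i][j]:
                continue
            # breadth-first flood fill of one cluster
            queue = deque([(i, j)])
            seen[i][j] = True
            count = 0
            while queue:
                x, y = queue.popleft()
                count += 1
                for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                    if (0 <= nx < size and 0 <= ny < size
                            and occupied[nx][ny] and not seen[nx][ny]):
                        seen[nx][ny] = True
                        queue.append((nx, ny))
            best = max(best, count)
    return best / (size * size)

subcritical = largest_cluster_fraction(0.45)    # below p_c: only small clusters
supercritical = largest_cluster_fraction(0.70)  # above p_c: a spanning cluster
print(round(subcritical, 3), round(supercritical, 3))
```

The qualitative jump in the largest-cluster fraction as p crosses the critical value is what makes percolation a phase transition in the formal sense, whatever the sites happen to represent.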
 
What seems to be involved here is a conclusion that is a little bit different from standard ideas about emergent phenomena. The point seems to be that a certain class of systems have dynamic characteristics that are formal and abstract and do not require that we understand the micro mechanisms upon which they rest at all. It is enough to know that system S is formally similar to a two-dimensional array of magnetized atoms (the “Ising model”); then we can infer that the phase-transition behavior of the system will have specific mathematical properties. This might be summarized with the slogan, “system properties do not require derivation from micro dynamics.” Or in other words: systems have properties that don’t depend upon the specifics of the individual components — a statement that is strongly parallel to but distinct from the definition of emergence mentioned above. It is distinct, because the approach leaves it entirely open that the system properties are generated by the dynamics of the components.

This idea is fundamental to Solé’s analysis, when he argues that it is possible to understand phase transitions without regard to the particular micro-level mechanisms:

Although it might seem very difficult to design a microscopic model able to provide insight into how phase transitions occur, it turns out that great insight has been achieved by using extremely simplified models of reality. (10)

Here is how Solé treats swarm behavior as a possible instance of phase transition.

In social insects, while colonies behave in complex ways, the capacities of individuals are relatively limited. But then, how do social insects reach such remarkable ends? The answer comes to a large extent from self-organization: insect societies share basic dynamic properties with other complex systems. (157)

Intuitively the idea is that a collection of birds, ants, or bees may be in a state of random movement with respect to each other; and then as some variable changes the ensemble snaps into a coordinated “swarm” of flight or movement. Unfortunately he does not provide a mathematical example illustrating swarm behavior; the closest example he provides has to do with patterns of intense activity and slack activity over time in small to medium colonies of ants. This periodicity is related to density. Mark Millonas attempted such an account of swarming in a Santa Fe Institute paper in 1993, “Swarms, Phase Transitions, and Collective Intelligence; and a Nonequilibrium Statistical Field Theory of Swarms and Other Spatially Extended Complex Systems” (link).
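The standard minimal formalization of this intuition is the Vicsek model, in which each agent adopts the mean heading of its neighbors plus angular noise. The sketch below is my own illustration under assumed parameter values (population, box size, interaction radius, speed), not Solé's or Millonas's code: as the noise level (the control parameter) is lowered, the alignment order parameter snaps from near zero to near one.

```python
import math
import random

def vicsek_alignment(noise, n=100, box=5.0, radius=1.0, speed=0.1,
                     steps=200, seed=0):
    """Vicsek-style flocking: each agent adopts the mean heading of its
    neighbors (within `radius`, itself included) plus uniform angular
    noise, then moves forward. Returns the final alignment order
    parameter |mean unit heading vector|, between 0 and 1."""
    rng = random.Random(seed)
    xs = [rng.uniform(0, box) for _ in range(n)]
    ys = [rng.uniform(0, box) for _ in range(n)]
    th = [rng.uniform(-math.pi, math.pi) for _ in range(n)]
    for _ in range(steps):
        new = []
        for i in range(n):
            sx = sy = 0.0
            for j in range(n):
                # periodic (wrap-around) distance in the box
                dx = (xs[i] - xs[j] + box / 2) % box - box / 2
                dy = (ys[i] - ys[j] + box / 2) % box - box / 2
                if dx * dx + dy * dy <= radius * radius:
                    sx += math.cos(th[j])
                    sy += math.sin(th[j])
            new.append(math.atan2(sy, sx) + rng.uniform(-noise / 2, noise / 2))
        th = new
        for i in range(n):
            xs[i] = (xs[i] + speed * math.cos(th[i])) % box
            ys[i] = (ys[i] + speed * math.sin(th[i])) % box
    vx = sum(math.cos(t) for t in th) / n
    vy = sum(math.sin(t) for t in th) / n
    return math.hypot(vx, vy)

ordered = vicsek_alignment(noise=0.3)     # low noise: a coordinated swarm
disordered = vicsek_alignment(noise=6.0)  # near-maximal noise: no global order
print(round(ordered, 2), round(disordered, 2))
```

Sweeping the noise parameter through its critical value reproduces the phase-transition template: the same local rule, with one continuously varied control parameter, yields qualitatively different macro-states.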
 
This work is interesting, but I am not sure that it sheds new light on the topic of emergence per se. Fundamentally it demonstrates that the aggregation dynamics of complex systems are often non-linear and amenable to formal mathematical modeling. As a critical variable changes a qualitatively new macro-property “emerges” from the ensemble of micro-components from which it is composed. This approach is consistent with the generativity view — the new property is generated by the interactions of the micro-components during an interval of change in critical variables. But it also maintains that systems undergoing phase transitions can be studied using a mathematical framework that abstracts from the physical properties of those micro-components. This is the point of the series of differential equation models that Solé provides. Once we have determined that a particular system has formal properties satisfying the assumptions of the DE model, we can then attempt to measure the critical parameters and derive the evolution of the system without further information about particular mechanisms at the micro-level.
 

Are emergence and microfoundations contraries?

image: micro-structure of a nanomaterial (link)

Are there strong logical relationships among the ideas of emergence, microfoundations, generative dependency, and supervenience? It appears that there are.

 
 
The diagram represents the social world as a laminated set of layers of entities, processes, powers, and laws. Entities at L2 are composed of or caused by some set of entities and forces at L1. Likewise L3 and L4. Arrows indicate microfoundations for L2 facts based on L1 facts. Diamond-tipped arrows indicate the relation of generative dependence from one level to another. Square-tipped lines indicate the presence of strongly emergent facts at the higher level relative to the lower level. The solid line (L4) represents the possibility of a level of social fact that is not generatively dependent upon lower levels. The vertical ellipse at the right indicates the possibility of microfoundations narratives involving elements at different levels of the social world (individual and organizational, for example).
 
We might think of these levels as “individuals,” “organization, value communities, social networks,” “large aggregate institutions like states,” etc.
 
This is only one way of trying to represent the structure of the social world. The notion of a “flat” ontology was considered in an earlier post (link). Another structure that is excluded by this diagram is one in which there is multi-directional causation across levels, both upwards and downwards. For example, the diagram excludes the possibility that L3 entities have causal powers that are original and independent from the powers of L2 or L1 entities. The laminated view described here is the assumption built into debates about microfoundations, supervenience, and emergence. It reflects the language of micro, meso, and macro levels of social action and organization.

Here are definitions for several of the primary concepts.

  • Microfoundations of facts in L2 based on facts in L1 : accounts of the causal pathways through which entities, processes, powers, and laws of L1 bring about specific outcomes in L2. Microfoundations are small causal theories linking lower-level entities to higher-level outcomes.
  • Generative dependence of L2 upon L1: the entities, processes, powers, and laws of L2 are generated by the properties of level L1 and nothing else. Alternatively, the entities, processes, powers, and laws of L1 suffice to generate all the properties of L2. A full theory of L1 suffices to derive the entities, processes, powers, and laws of L2.
  • Reducibility of y to x : it is possible to provide a theoretical or formal derivation of the properties of y based solely on facts about x.
  • Strong emergence of properties in L2 with respect to the properties of L1: L2 possesses some properties that do not depend wholly upon the properties of L1.
  • Weak emergence of properties in L2 with respect to the properties of L1: L2 possesses some properties for which we cannot (now or in the future) provide derivations based wholly upon the properties of L1.
  • Supervenience of L2 with respect to properties of L1: all the properties of L2 depend strictly upon the properties of L1 and nothing else.

We can also make an effort to define some of these concepts more formally in terms of the diagram.
 

Consider these statements about facts at levels L1 and L2:

  1. UM: all facts at L2 possess microfoundations at L1. 
  2. XM: some facts at L2 possess inferred but unknown microfoundations at L1. 
  3. SM: some facts at L2 do not possess any microfoundations at L1. 
  4. SE: L2 is strongly emergent from L1. 
  5. WE: L2 is weakly emergent from L1. 
  6. GD: L2 is generatively dependent upon L1. 
  7. R: L2 is reducible to L1. 
  8. D: L2 is determined by L1. 
  9. SS: L2 supervenes upon L1. 

Here are some of the logical relations that appear to exist among these statements.

  1. UM => GD 
  2. UM => ~SE 
  3. XM => WE 
  4. SE => ~UM 
  5. SE => ~GD 
  6. GD => R 
  7. GD => D 
  8. SM => SE 
  9. UM => SS 
  10. GD => SS 

On this analysis, the question of the availability of microfoundations for social facts can be understood to be central to all the other issues: reducibility, emergence, generativity, and supervenience. There are several positions that we can take with respect to the availability of microfoundations for higher-level social facts.

  1. If we have convincing reason to believe that all social facts possess microfoundations at a lower level (known or unknown) then we know that the social world supervenes upon the micro-level; strong emergence is ruled out; weak emergence is true only so long as some microfoundations remain unknown; and higher-level social facts are generatively dependent upon the micro-level.   
  2. If we take a pragmatic view of the social sciences and conclude that any given stage of knowledge provides information about only a subset of possible microfoundations for higher-level facts, then we are at liberty to take the view that each level of social ontology is at least weakly emergent from lower levels — basically, the point of view advocated under the banner of “relative explanatory autonomy” (link). This also appears to be roughly the position taken by Herbert Simon (link). 
  3. If we believe that it is impossible in principle to fully specify the microfoundations of all social facts, then weak emergence is true; supervenience is false; and generativity is false. (For example, we might believe this to be true because of the difficulty of modeling and calculating a sufficiently large and complex domain of units.) This is the situation that Fodor believes to be the case for many of the special sciences. 
  4. If we have reason to believe that some higher-level facts simply do not possess microfoundations at a lower level, then strong emergence is true; the social world is not generatively dependent upon the micro-world; and the social world does not supervene upon the micro-world. 

In other words, it appears that each of the concepts of supervenience, reduction, emergence, and generative dependence can be defined in terms of the availability or unavailability of microfoundations for some or all of the facts at a higher level based on facts at the lower level. Strong emergence and generative dependence turn out to be logical contraries (witness implication 5 above).

 

Quantum mental processes?

One of the pleasant aspects of a long career in philosophy is the occasional experience of a genuinely novel approach to familiar problems. Sometimes one’s reaction is skeptical at first — “that’s a crazy idea!”. And sometimes the approach turns out to have genuine promise. I’ve had that experience of moving from profound doubt to appreciation several times over the years, and it is an uplifting learning experience. (Most recently, I’ve made that progression with respect to some of the ideas of assemblage and actor-network theory advanced by thinkers such as Bruno Latour; link, link.)

I’m having that experience of unexpected dissonance as I begin to read Alexander Wendt’s Quantum Mind and Social Science: Unifying Physical and Social Ontology. Wendt’s book addresses many of the issues with which philosophers of social science have grappled for decades. But Wendt suggests a fundamental switch in the way that we think of the relation between the human sciences and the natural world. He suggests that an emerging paradigm of research on consciousness, advanced by Giuseppe Vitiello, John Eccles, Roger Penrose, Henry Stapp, and others, may have important implications for our understanding of the social world as well. This is the field of “quantum neuropsychology” — the body of theory that maintains that puzzles surrounding the mind-body problem may be resolved by examining the workings of quantum behavior in the central nervous system. I’m not sure which category to put the idea of quantum consciousness yet, but it’s interesting enough to pursue further.

The familiar problem in this case is the relation between the mental and the physical. Like all physicalists, I work on the assumption that mental phenomena are embodied in the physical infrastructure of the central nervous system, and that the central nervous system works according to familiar principles of electrochemistry. Thought and consciousness are somehow the “emergent” result of the workings of the complex physical structure of the brain (in a safe and bounded sense of emergence). The novel approach is the idea that somehow quantum physics may play a strikingly different role in this topic than had ever been imagined. Theorists in the field of quantum consciousness speculate that perhaps the peculiar characteristics of quantum events at the sub-atomic level (e.g. quantum randomness, complementarity, entanglement) are close enough to the action of neural networks that they serve to give a neural structure radically different properties from those expected by a classical-physics view of the brain. (This idea isn’t precisely new; when I was an undergraduate in the 1960s it was sometimes speculated that freedom of the will was possible because of the indeterminacy created by quantum physics. But this wasn’t a very compelling idea.)

Wendt’s further contribution is to immerse himself in some of this work, and then to formulate the question of how these perspectives on intentionality and mentality might affect key topics in the philosophy of society. For example, how do the longstanding concepts of structure and agency look when we begin with a quantum perspective on mental activity?

A good place to start in preparing to read Wendt’s book is Harald Atmanspacher’s excellent article in the Stanford Encyclopedia of Philosophy (link). Atmanspacher organizes his treatment into three large areas of application of quantum physics to the problem of consciousness: metaphorical applications of the concepts of quantum physics; applications of the current state of knowledge in quantum physics; and applications of possible future advances in knowledge in quantum physics.

Among these [status quo] approaches, the one with the longest history was initiated by von Neumann in the 1930s…. It can be roughly characterized as the proposal to consider intentional conscious acts as intrinsically correlated with physical state reductions. (13)

A physical state reduction is the event that occurs when a quantum probability field resolves into a discrete particle or event upon having been measured. Some theorists (e.g. Henry Stapp) speculate that conscious human intention may influence the physical state reduction — thus a “mental” event causes a “physical” event. And some process along these lines is applied to the “activation” of a neuronal assembly:

The activation of a neuronal assembly is necessary to make the encoded content consciously accessible. This activation is considered to be initiated by external stimuli. Unless the assembly is activated, its content remains unconscious, unaccessed memory. (20)

Also of interest in Atmanspacher’s account is the idea of emergence: are mental phenomena emergent from physical phenomena, and in what sense? Atmanspacher specifies a clear but strong definition of emergence, and considers whether mental phenomena are emergent in this sense:

Mental states and/or properties can be considered as emergent if the material brain is not necessary or not sufficient to explore and understand them. (6)

This is a strong conception in a very specific way; it specifies that material facts are not sufficient to explain “emergent” mental properties. This implies that we need to know some additional facts beyond facts about the material brain in order to explain mental states; and it is natural to ask what the nature of those additional facts might be.

The reason this collection of ideas is initially shocking to me is the difference in scale between the sub-atomic level and macro-scale entities and events. There is something spooky about postulating causal links across that range of scales. It would be wholly crazy to speculate that we need to invoke the mathematics and theories of quantum physics to explain billiards. It is pretty well agreed by physicists that quantum mechanics reduces to Newtonian physics at this scale. Even though the component pieces of a billiard ball are quantum entities with peculiar properties, as an ensemble of 10^25 of these particles the behavior of the ball is safely classical. The peculiarities of the quantum level wash out for systems with multiple Avogadro’s numbers of particles through the reliable workings of statistical mechanics. And the intuitions of most people comfortable with physics would lead them to assume that neurons enjoy the same independence from quantum effects; the scale of activity of a neuron (both spatial and temporal) is orders of magnitude too large to reflect quantum effects. (Sorry, Schrödinger’s cat!)

Charles Seife reports a set of fundamental physical computations conducted by Max Tegmark intended to demonstrate this in a recent article in Science Magazine, “Cold Numbers Unmake the Quantum Mind” (link). Tegmark’s analysis focuses on the speculations offered by Penrose and others on the possible quantum behavior of “microtubules.” Tegmark purports to demonstrate that the time and space scales of quantum effects are too short by orders of magnitude to account for the neural mechanisms that can be observed (link). Here is Tegmark’s abstract:

Based on a calculation of neural decoherence rates, we argue that the degrees of freedom of the human brain that relate to cognitive processes should be thought of as a classical rather than quantum system, i.e., that there is nothing fundamentally wrong with the current classical approach to neural network simulations. We find that the decoherence time scales (∼10^−13–10^−20s) are typically much shorter than the relevant dynamical time scales (∼10^−3–10^−1s), both for regular neuron firing and for kinklike polarization excitations in microtubules. This conclusion disagrees with suggestions by Penrose and others that the brain acts as a quantum computer, and that quantum coherence is related to consciousness in a fundamental way. (link)
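The force of "orders of magnitude" here is just arithmetic on the figures quoted in the abstract: even pairing the slowest decoherence time scale with the fastest relevant dynamical time scale, the gap is ten orders of magnitude.

```python
# Figures quoted from Tegmark's abstract (in seconds)
decoherence_max = 1e-13  # slowest quoted decoherence time scale
dynamical_min = 1e-3     # fastest quoted neural dynamical time scale

# Even in the case most favorable to quantum effects, coherence is lost
# about ten orders of magnitude faster than neural dynamics unfold.
gap = dynamical_min / decoherence_max
print(f"gap: {gap:.0e}")
```

On these numbers, any quantum coherence in the brain has decohered long before it could influence neuron-scale dynamics, which is the core of Tegmark's objection to the Penrose-style picture.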

I am grateful to Atmanspacher for providing such a clear and logical presentation of some of the main ideas of quantum consciousness; but I continue to find myself sceptical. There is a risk in this field of succumbing to the temptation of unbounded speculation: “Maybe if X’s could influence Y’s, then we could explain Z” without any knowledge of how X, Y, and Z are related through causal pathways. And the field seems sometimes to be prey to this impulse: “If quantum events were partially mental, then perhaps mental events could influence quantum states (and from there influence macro-scale effects).”

In an upcoming post I’ll look closely at what Alex Wendt makes of this body of theory in application to the level of social behavior and structure.

Emergentism and generationism

media: lecture by Stanford Professor Robert Sapolsky on chaos and reduction
 

Several recent posts have focused on the topic of simulations in the social sciences. An interesting question here is whether these simulation models shed light on the questions of emergence and reduction that frequently arise in the philosophy of the social sciences. In most cases the models I’ve mentioned are “aggregation” models, in which the simulation attempts to capture the chief dynamics and interaction effects of the units and then work out the behavior and evolution of the ensemble. This is visibly clear when it comes to agent-based models. However, some of the scholars whose work I admire are “complexity” theorists, and a common view within complexity studies is the idea that the system has properties that are difficult or impossible to derive from the features of the units.

So does this body of work give weight to the idea of emergence, or does it incline us more in the direction of supervenience and ontological unit-ism?

John Miller and Scott Page provide an accessible framework within which to consider these kinds of problems in Complex Adaptive Systems: An Introduction to Computational Models of Social Life. They look at certain kinds of social phenomena as constituting what they call “complex adaptive systems,” and they try to demonstrate how some of the computational tools developed in the sciences of complex systems can be deployed to analyze and explain complex social outcomes. Here is how they characterize the key concepts:

Adaptive social systems are composed of interacting, thoughtful (but perhaps not brilliant) agents. (kl 151)

Miller and Page believe that social phenomena often display “emergence” in a way that we can make sense of. Here is the umbrella notion they begin with:

The usual notion put forth underlying emergence is that individual, localized behavior aggregates into global behavior that is, in some sense, disconnected from its origins. Such a disconnection implies that, within limits, the details of the local behavior do not matter to the aggregate outcome. (kl 826)

And they believe that the notion of emergence has “deep intuitive appeal”. They find emergence to be applicable at several levels of description, including “disorganized complexity” (the central limit theorem, the law of large numbers) and “organized complexity” (the behavior of sand piles when grains have a small amount of control).

Under organized complexity, the relationships among the agents are such that through various feedbacks and structural contingencies, agent variations no longer cancel one another out but, rather, become reinforcing. In such a world, we leave the realm of the Law of Large Numbers and instead embark down paths unknown. While we have ample evidence, both empirical and experimental, that under organized complexity, systems can exhibit aggregate properties that are not directly tied to agent details, a sound theoretical foothold from which to leverage this observation is only now being constructed. (kl 976)

Organized complexity, in their view, is a substantive and important kind of emergence in social systems, and this concept plays a key role in their view of complex adaptive systems.
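The contrast between the two regimes can be made concrete with a toy simulation (my illustration, not Miller and Page’s). Under disorganized complexity, independent agent-level variations cancel in the aggregate, as the law of large numbers predicts. Under organized complexity, a simple reinforcing feedback — modeled here as a Pólya urn, in which each draw makes its own color more likely on the next draw — lets early random accidents lock in very different macro outcomes:

```python
import random

def disorganized(n_agents=1000, n_runs=200, seed=0):
    """Independent agent shocks: run-level averages concentrate near 0.5."""
    rng = random.Random(seed)
    return [sum(rng.random() for _ in range(n_agents)) / n_agents
            for _ in range(n_runs)]

def organized(n_steps=1000, n_runs=200, seed=0):
    """Polya urn: each draw adds a ball of its own color, so early
    accidents are reinforced rather than cancelled out."""
    rng = random.Random(seed)
    outcomes = []
    for _ in range(n_runs):
        red, blue = 1, 1
        for _ in range(n_steps):
            if rng.random() < red / (red + blue):
                red += 1
            else:
                blue += 1
        outcomes.append(red / (red + blue))
    return outcomes

def spread(xs):
    """Standard deviation of a list of outcomes."""
    m = sum(xs) / len(xs)
    return (sum((x - m) ** 2 for x in xs) / len(xs)) ** 0.5

if __name__ == "__main__":
    print("disorganized spread:", round(spread(disorganized()), 4))
    print("organized spread:  ", round(spread(organized()), 4))
```

Across many runs the independent-shock averages cluster tightly around 0.5, while the urn fractions scatter across much of the unit interval: the same quantity of micro-level randomness, but feedback prevents it from cancelling. This is the sense in which organized complexity “leaves the realm of the Law of Large Numbers.”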

Another — and contrarian — contribution to this field is provided by Joshua Epstein. His three-volume work on agent-based models is a foundational textbook for the field. Here are the titles:

Agent_Zero: Toward Neurocognitive Foundations for Generative Social Science
Growing Artificial Societies: Social Science From the Bottom Up
Generative Social Science: Studies in Agent-Based Computational Modeling

An overview of Epstein’s approach is provided in chapter 1 of Generative Social Science, “Agent-Based Computational Models and Generative Social Science”, and this is a superb place to begin (link). Here is how Epstein defines generativity:

Agent-based models provide computational demonstrations that a given microspecification is in fact sufficient to generate a macrostructure of interest…. Rather, the generativist wants an account of the configuration’s attainment by a decentralized system of heterogeneous autonomous agents. Thus, the motto of generative social science, if you will, is: If you didn’t grow it, you didn’t explain its emergence. (42)
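The “grow it” motto is easy to illustrate with a minimal agent-based model in the spirit of Schelling’s residential segregation model, a standard example in this literature (the sketch below is my own construction, not Epstein’s code). Agents with only a mild preference for like neighbors relocate when discontented, and a strongly segregated macro pattern is “grown” from purely local interactions:

```python
import random

def neighbors(grid, size, x, y):
    """The eight cells surrounding (x, y), with wraparound edges."""
    return [grid[(x + dx) % size][(y + dy) % size]
            for dx in (-1, 0, 1) for dy in (-1, 0, 1) if (dx, dy) != (0, 0)]

def similarity(grid, size):
    """Mean fraction of a resident's occupied neighbors sharing its type."""
    fracs = []
    for x in range(size):
        for y in range(size):
            a = grid[x][y]
            if a is None:
                continue
            nbrs = [n for n in neighbors(grid, size, x, y) if n is not None]
            if nbrs:
                fracs.append(sum(n == a for n in nbrs) / len(nbrs))
    return sum(fracs) / len(fracs)

def schelling(size=20, vacancy=0.1, threshold=0.3, sweeps=100, seed=0):
    """Grow a segregation pattern from local moves; return the mean
    like-neighbor fraction before and after the dynamics run."""
    rng = random.Random(seed)
    cells = [None if rng.random() < vacancy else rng.choice((0, 1))
             for _ in range(size * size)]
    grid = [cells[i * size:(i + 1) * size] for i in range(size)]
    before = similarity(grid, size)
    for _ in range(sweeps):
        for x in range(size):
            for y in range(size):
                a = grid[x][y]
                if a is None:
                    continue
                nbrs = [n for n in neighbors(grid, size, x, y) if n is not None]
                # Discontented agents move to a random vacant cell.
                if nbrs and sum(n == a for n in nbrs) / len(nbrs) < threshold:
                    empties = [(i, j) for i in range(size) for j in range(size)
                               if grid[i][j] is None]
                    i, j = rng.choice(empties)
                    grid[i][j], grid[x][y] = a, None
    return before, similarity(grid, size)
```

Even though no agent wants more than 30 percent like neighbors, the “grown” configuration is far more segregated than the random starting point — a computational demonstration that this microspecification suffices to generate the macrostructure.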

Epstein describes an extensive attempt to model a historical population using agent-based modeling techniques, the Artificial Anasazi project (link). This work is presented in Dean, Gumerman, Epstein, Axtell, Swedlund, McCarroll, and Parker, “Understanding Anasazi Culture Change through Agent-Based Modeling” in Dynamics in Human and Primate Societies: Agent-Based Modeling of Social and Spatial Processes. The model takes a time series of fundamental environmental, climate, and agricultural data as given, and Epstein and his team attempt to reconstruct (generate) the pattern of habitation that would result.

Generativity seems to be directly incompatible with the idea of emergence, and in fact Epstein takes pains to cast doubt on that idea.

I have always been uncomfortable with the vagueness–and occasional mysticism–surrounding this word and, accordingly, tried to define it quite narrowly…. There, we defined “emergent phenomena” to be simply “stable macroscopic patterns arising from local interaction of agents.” (53)

So Epstein and Miller and Page all make use of the methods of agent-based modeling, but they disagree about the idea of emergence. Miller and Page believe that complex adaptive systems give rise to properties that are emergent and irreducible, whereas Epstein doesn’t think the idea makes much sense. Rather, Epstein’s view depends on the idea that we can reproduce (generate) the macro phenomena based on a model involving the agents and their interactions. For Epstein, macro phenomena are generated by the interactions of the units; for Miller and Page, macro phenomena in some systems have properties that cannot be easily derived from the activities of the units.

At the moment, anyway, I find myself attracted to Herbert Simon’s effort to split the difference by referring to “weak emergence” (link):

… reductionism in principle even though it is not easy (often not even computationally feasible) to infer rigorously the properties of the whole from knowledge of the properties of the parts. In this pragmatic way, we can build nearly independent theories for each successive level of complexity, but at the same time, build bridging theories that show how each higher level can be accounted for in terms of the elements and relations of the next level down. (Sciences of the Artificial 3rd edition 172)

This view emphasizes the computational and epistemic limits that sometimes preclude generating the phenomena in question — for example, the problems raised by non-linear causal relations and causal interdependence. Many observers have noted that the behavior of tightly linked causal systems may be impossible to predict, even when we are confident that the system outcomes are the result of “nothing but” the interactions of the units and sub-systems.
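A toy example of this point (mine, not Simon’s) is the logistic map: a fully deterministic one-line update rule whose trajectories from nearly identical initial conditions diverge exponentially, so that long-run behavior can only be obtained by simulating every step, even though the system is “nothing but” the rule and its inputs:

```python
def logistic_trajectory(x0, r=4.0, n=80):
    """Iterate the logistic map x -> r * x * (1 - x) for n steps."""
    xs = [x0]
    for _ in range(n):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

# Two starting points differing in the tenth decimal place.
a = logistic_trajectory(0.3)
b = logistic_trajectory(0.3 + 1e-10)
gaps = [abs(x - y) for x, y in zip(a, b)]
```

The gap between the two trajectories starts at 10^-10 and roughly doubles each step, so within a few dozen iterations the trajectories are effectively uncorrelated. Reductionism in principle, unpredictability in practice.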

Why emergence?

It is fair to ask whether the concept of emergence is less important than it initially appears to be. Part of the interest in emergence seems to derive from the impulse among sociologists and philosophers to show that there is a legitimate level of the world that is “social”, and to reject the more extreme versions of reductionism.

Social scientists have a few concrete and important interests in this set of issues. One is a concern for the autonomy of the social science disciplines. Is there a domain of the social that warrants scientific study? Or can we make do with really good microeconomic theories, agent-based modeling techniques, and a dollop of social psychology, and do without strong theories of the causal powers of social entities?

Another concern is apparently related, but on the ontology side of the story: are there social entities that can be studied for their empirical and causal characteristics independently from the individual activities that make them up? Do social entities really exist? Or are there compelling reasons to conclude that social entities are too fluid and plastic to admit of possessing stable empirical properties?

It seems to me that these concerns can be fully satisfied without appealing to a strong conception of emergence. We have perfectly good concepts that individuate entities at a social level, and we have fairly ordinary but compelling reasons for believing that these sorts of things are causally active in the world. But perhaps we can frame some simple ideas about the social world that will allow us to be more relaxed about whether these properties can be reduced to or explained by facts about actors (methodological individualism), or derived from facts about actors, or are instead strongly independent from the level of actors upon which they rest.

Consider the following background propositions about the social world. These are not trivial assumptions, but it would appear that a broad range of social thinkers would accept them, from enlightened analytical sociologists to many critical realists.

  1. Social phenomena are constituted by the actions and thoughts of situated social actors. (“No pure social stuff, no ineffable special sauce”)
  2. Actors are causally influenced by a variety of social structures and entities. (“Actors are socially constituted and socially situated.”) 
  3. Ensembles have properties that derive from the interactions of the composing entities (actors). (“System properties derive from complex and dynamic relations and structures among constituents.”) 
  4. There are social properties that are not the simple aggregation of the properties of the actors. (“System properties are not simply the sum of constituent properties.”) 
  5. Ensembles sometimes have system-level properties that exert causal powers with regard to their own constituents. (“Systems exert downward causation on their constituents.”) 
  6. The computational challenges involved in modeling large complex systems are often overwhelming. (“The properties and behavior of complex systems are sometimes incalculable based simply on information about constituents and their arrangements.”) 

These assumptions would serve to establish quite a bit of autonomy for social science investigation and explanation, without requiring us to debate whether social entities are nonetheless emergent. And the ontologically cautious among us may be more comfortable with these limited and reasonably clear assumptions than they are with an open-ended concept of emergent phenomena and properties. Assumption 6 suggests that it is not feasible (and likely will never be) to deduce social patterns from individual-level facts. Assumptions 3 and 4 establish that social properties are “autonomous” from individual-level facts. Assumptions 1 and 2 establish the ontological foundation of social entities — the socially constituted individuals whose thoughts and actions constitute them. And assumption 5 establishes that the causal powers of social entities are in fact important and autonomous from facts about individuals, in the very important respect that higher-level properties play a causal role in the constitution of lower-level entities (individuals). This assumption is reflected in assumption 2 as well.

So perhaps we might conclude that not much turns on whether social properties and powers are emergent or not. Instead, we might be better advised to try to capture the issues in this area in different terms. And the alternative that I favor is the idea of relative explanatory autonomy (link). The six core assumptions mentioned above serve to capture the heart of this approach.