Complex systems

Social complexity

Social ensembles are often said to be “complex”. What does this mean?

Herbert Simon is one of the seminal thinkers in the study of complexity. His 1962 article, “The Architecture of Complexity” (link), put forward several ideas that have become core to the conceptual frameworks of people who now study social complexity. So it is worthwhile highlighting a few of its key ideas. Here is Simon’s definition of complexity:

Roughly, by a complex system I mean one made up of a large number of parts that interact in a nonsimple way. In such systems, the whole is more than the sum of the parts, not in an ultimate, metaphysical sense, but in the important pragmatic sense that, given the properties of the parts and the laws of their interaction, it is not a trivial matter to infer the properties of the whole. In the face of complexity, an in-principle reductionist may be at the same time a pragmatic holist. (468)

Notice several key ideas contained here, as well as several things that are not said. First, the complexity of a system derives from the “nonsimple” nature of the interaction of its parts (subsystems). A watch is a simple system, because it has many parts but the behavior of the whole is the simple sum of the direct mechanical interactions of the parts. The watchspring provides an (approximately) constant impulse to the gearwheel, producing a temporally regular motion in the gears. This motion pushes forward the time registers (second, minute, hour) in a fully predictable way. If the spring’s tension influenced not only the gearwheel, but also the size of the step taken by the minute hand; or if the impulse provided by the spring varied significantly according to the alignment of the hour and second hands and the orientation of the spring — then the behavior of the watch would be “complex”. It would be difficult or impossible to predict the state of the time registers by counting the ticks in the watch gearwheel. So this is a first statement of the idea of complexity: the fact of multiple causal interactions among the many parts (subsystems) that make up the whole system.

A second main idea here is that the behavior of the system is difficult to predict as a result of the nonsimple interactions among the parts. In a complex system we cannot provide a simple aggregation model of the system that adds up the independent behaviors of the parts; rather, the parts are influenced in their behaviors by the behaviors of other components. The state of the system is fixed by interdependent subsystems, which implies that the system’s behavior can oscillate wildly with apparently similar initial conditions. (This is one explanation of the Chernobyl nuclear meltdown: engineers attempted to “steer” the system to a safe shutdown by manipulating several control systems at once; but these control systems had complex effects on each other, with the result that the engineers catastrophically lost control of the system.)
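To make this concrete, here is a minimal numerical sketch (a toy example of my own, not drawn from Simon or from the Chernobyl case): two interdependent subsystems, modeled as coupled logistic maps, are run twice from almost identical starting states, and the two runs quickly end up in very different places.

```python
# Toy illustration (not from Simon): two subsystems whose next states depend
# on each other, run twice from almost identical initial conditions.

def coupled_step(x, y, r=3.9, eps=0.3):
    """One update in which each subsystem's next state depends on both."""
    fx, fy = r * x * (1 - x), r * y * (1 - y)
    return (1 - eps) * fx + eps * fy, (1 - eps) * fy + eps * fx

def run(x0, y0, steps=50):
    x, y = x0, y0
    for _ in range(steps):
        x, y = coupled_step(x, y)
    return x, y

# The two runs differ only in the tenth decimal place of one subsystem.
print("run A:", run(0.4000000000, 0.3))
print("run B:", run(0.4000000001, 0.3))  # typically far from run A after 50 steps
```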

A third important point here is Simon’s distinction between “metaphysical reducibility” and “pragmatic holism.” He accepts what we would today call the principle of supervenience: the state of the system supervenes upon the states of the parts. But he rejects the feasibility of performing a reduction of the behavior of the system to an account of the properties of the parts. He does not use the concept of “emergence” here, but this would be another way of putting his point: a metaphysically emergent property of a system is one that cannot in principle be derived from the characteristics of the parts. A pragmatically emergent property is one that supervenes upon the properties of the parts, but where it is computationally difficult or impossible to map the function from the state of the parts to the state of the system. This point has some relevance to the idea of “relative explanatory autonomy” mentioned in an earlier posting (link). The latter idea postulates that we can sometimes discover system properties (causal powers) of a complex system that are in principle fixed by the underlying parts, but where it is either impossible or unnecessary to discover the specific causal sequences through which the system’s properties come to be as they are.

Another key idea in this article is Simon’s idea of a hierarchic system.

By a hierarchic system, or hierarchy, I mean a system that is composed of interrelated subsystems, each of the latter being, in turn, hierarchic in structure until we reach some lowest level of elementary subsystem. (468)

I have already given an example of one kind of hierarchy that is frequently encountered in the social sciences: a formal organization. Business firms, governments, universities all have a clearly visible parts-within-parts structure. (469)

This idea, too, is an important one. It is a formal specification of a particular kind of ensemble in which structures at one level of aggregation are found to be composed separately of structures or subsystems at a lower level of aggregation. Simon offers the example of a biological cell that can be analyzed into a set of exhaustive and mutually independent subsystems nested within each other. It is essential that there is a relation of enclosure as we descend the hierarchy of structures: the substructures of level S are entirely contained within it and do not serve as substructures of some other system S’.

It is difficult to think of biological examples that violate the conditions of hierarchy — though we might ask whether an organism and its symbiote might be best understood as a non-hierarchical system. But examples are readily available in the social world. Labor unions and corporate PACs play significant causal roles in modern democracies. But they are not subsystems of the political process in a hierarchical sense: they are not contained within the state, and they play roles in non-state systems as well. (A business lobby group may influence both the policies chosen by a unit of government and the business strategy of a healthcare system.)

Simon appears to believe that hierarchies reduce the complexity of systems, and that they support what we would now call “modularity”: we can treat the workings of a subsystem as a self-enclosed unit that works roughly the same no matter what changes occur in other subsystems.

Simon puts this point in his own language of “decomposability.” A system is decomposable if we can disaggregate its behavior into the sum of the independent behaviors of its parts. A system is “nearly decomposable” if the parts of the system have some effects on each other, but these effects are small relative to the overall workings of the system.

At least some kinds of hierarchic systems can be approximated successfully as nearly decomposable systems. The main theoretical findings from the approach can be summed up in two propositions:

(a) in a nearly decomposable system, the short-run behavior of each of the component subsystems is approximately independent of the short-run behavior of the other components; (b) in the long run, the behavior of any one of the components depends in only an aggregate way on the behavior of the other components. (474)

He illustrates this point in the case of social systems in these terms:

In the dynamics of social systems, where members of a system communicate with and influence other members, near decomposability is generally very prominent. This is most obvious in formal organizations, where the formal authority relation connects each member of the organization with one immediate superior and with a small number of subordinates. Of course many communications in organizations follow other channels than the lines of formal authority. But most of these channels lead from any particular individual to a very limited number of his superiors, subordinates, and associates. Hence, departmental boundaries play very much the same role as the walls in our heat example. (475)

And in summary:

We have seen that hierarchies have the property of near-decomposability. Intra-component linkages are generally stronger than intercomponent linkages. This fact has the effect of separating the high-frequency dynamics of a hierarchy — involving the internal structure of the components — from the low-frequency dynamics — involving interaction among components. (477)
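To see what near-decomposability looks like in practice, here is a minimal numerical sketch (my own illustration, using an invented interaction matrix rather than Simon’s heat-exchange example): a linear system with strong within-block links and weak between-block links behaves, in the short run, almost exactly like the fully decomposed system in which the weak links have been deleted.

```python
import numpy as np

# Sketch of a nearly decomposable system: strong intra-block interactions,
# weak inter-block interactions (illustrative values, not Simon's example).
rng = np.random.default_rng(0)
N_BLOCKS, SIZE = 3, 4
N = N_BLOCKS * SIZE

W = 0.01 * rng.random((N, N))                      # weak links everywhere
for b in range(N_BLOCKS):
    i = b * SIZE
    W[i:i + SIZE, i:i + SIZE] = rng.random((SIZE, SIZE))   # strong links within a block
W /= W.sum(axis=1, keepdims=True)                  # row-normalize to keep states bounded

W_decomposed = W.copy()                            # same system with cross-block links removed
for b in range(N_BLOCKS):
    i = b * SIZE
    mask = np.ones(N, dtype=bool)
    mask[i:i + SIZE] = False
    W_decomposed[i:i + SIZE, mask] = 0.0
W_decomposed /= W_decomposed.sum(axis=1, keepdims=True)

x0 = rng.random(N)
x_coupled, x_decomposed = x0.copy(), x0.copy()
for t in range(1, 51):
    x_coupled, x_decomposed = W @ x_coupled, W_decomposed @ x_decomposed
    if t in (2, 10, 50):
        gap = np.abs(x_coupled - x_decomposed).max()
        print(f"t={t:2d}  max gap between coupled and decomposed runs: {gap:.4f}")
# In the short run the blocks behave almost independently (tiny gaps); over
# longer runs the weak couplings accumulate and matter only in the aggregate.
```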

So why does Simon expect that systems will generally be hierarchical, and hierarchies will generally be near-decomposable? This expectation derives from the notion that systems were created by designers (who would certainly favor these features because they make the system predictable and understandable) or evolved through some process of natural selection from simpler to more complex agglomerations. So we might expect that hydroelectric plants and motion detector circuits in frogs’ visual systems are hierarchical and near-decomposable.

But here is an important point about social complexity. Neither of these expectations is likely to be satisfied in the case of social systems. Take the causal processes (sub-systems) that make up a city. And consider some aggregate properties we may be interested in — emigration, resettlement, crime rates, school truancy, real estate values. Some of the processes that influence these properties are designed (zoning boards, school management systems), but many are not. Instead, they are the result of separate and non-teleological processes leading to the present. And there is often a high degree of causal interaction among these separate processes. As a result, it might be more reasonable to expect, contrary to Simon’s line of thought here, that social systems are likely to embody greater complexity and less decomposability than the systems he uses as examples.

(A recent visit to the Center for Social Complexity at George Mason University (link) was very instructive for me. There is a great deal of very interesting work underway at the Center using agent-based modeling techniques to understand large, complicated social processes: population movements, housing markets, deforestation, and more. Particularly interesting is a blog by Andrew Crooks at the Center on various aspects of agent-based modeling of spatial processes.)

Revisiting Popper


Karl Popper’s most commonly cited contribution to philosophy and the philosophy of science is his theory of falsifiability (The Logic of Scientific Discovery, Conjectures and Refutations: The Growth of Scientific Knowledge). (Stephen Thornton has a very nice essay on Popper’s philosophy in the Stanford Encyclopedia of Philosophy.) In its essence, this theory is an alternative to “confirmation theory.” Contrary to positivist philosophy of science, Popper doesn’t think that scientific theories can be confirmed by more and more positive empirical evidence. Instead, he argues that the logic of scientific research is a critical method in which scientists do their best to “falsify” their hypotheses and theories. And we are rationally justified in accepting theories that have been severely tested through an effort to show they are false — rather than accepting theories for which we have accumulated a body of corroborative evidence. Basically, he argues that scientists are in the business of asking this question: what is the most unlikely consequence of this hypothesis? How can I find evidence in nature that would demonstrate that the hypothesis is false? Popper criticizes theorists like Marx and Freud who attempt to accumulate evidence that corroborates their theories (historical materialism, ego transference) and praises theorists like Einstein who honestly confront the unlikely consequences their theories appear to have (perihelion of Mercury).

At bottom, I think many philosophers of science have drawn their own conclusions about both falsifiability and confirmation theory: there is no recipe for measuring the empirical credibility of a given scientific theory, and there is no codifiable “inductive logic” that might replace the forms of empirical reasoning that we find throughout the history of science. Instead, we need to look in greater detail at the epistemic practices of real research communities in order to see the nuanced forms of empirical reasoning that are brought forward for the evaluation of scientific theories. Popper’s student, Imre Lakatos, makes one effort at this (Methodology of Scientific Research Programmes; Criticism and the Growth of Knowledge); so does William Newton-Smith (The Rationality of Science), and much of the philosophy of science that has proceeded under the rubrics of philosophy of physics, biology, or economics is equally attentive to the specific epistemic practices of real working scientific traditions. So “falsifiability” doesn’t seem to have a lot to add to a theory of scientific rationality at this point in the philosophy of science. In particular, Popper’s grand critique of Marx’s social science on the grounds that it is “unfalsifiable” just seems to miss the point; surely Marx, Durkheim, Weber, Simmel, or Tocqueville have important social science insights that can’t be refuted by deriding them as “unfalsifiable”. And Popper’s impatience with Marxism makes one doubt his objectivity as a sympathetic reader of Marx’s work.

Of greater interest is another celebrated idea that Popper put forward, his critique of “historicism” in The Poverty of Historicism (1957). And unlike the theory of falsifiability, I think that there are important insights in this discussion that are even more useful today than they were in 1957, when it comes to conceptualizing the nature of the social sciences. So people who are a little dismissive of Popper may find that there are novelties here that they will find interesting.

Popper characterizes historicism as “an approach to the social sciences which assumes that historical prediction is their principal aim, and which assumes that this aim is attainable by discovering the ‘rhythms’ or the ‘patterns’, the ‘laws’ or the ‘trends’ that underlie the evolution of history” (3). Historicists differ from naturalists, however, in that they believe that the laws that govern history are themselves historically changeable. So a given historical epoch has its own laws and generalizations – unlike the laws of nature that are uniform across time and space. So historicism involves combining two ideas: prediction of historical change based on a formulation of general laws or patterns; and a recognition that historical laws and patterns are themselves variable over time, in reaction to human agency.

Popper’s central conclusion is that large predictions of historical or social outcomes are inherently unjustifiable — a position taken up several times here (post, post). He finds that “holistic” or “utopian” historical predictions depend upon assumptions that simply cannot be justified; instead, he prefers “piecemeal” predictions and interventions (21). What Popper calls “historicism” amounts to the aspiration that there should be a comprehensive science of society that permits prediction of whole future states of the social system, and also supports re-engineering of the social system if we choose. In other words, historicism in his description sounds quite a bit like social physics: the aspiration of finding a theory that describes and predicts the total state of society.

The kind of history with which historicists wish to identify sociology looks not only backwards to the past but also forwards to the future. It is the study of the operative forces and, above all, of the laws of social development. (45)

Popper rejects the feasibility or appropriateness of this vision of social knowledge, and he is right to do so. The social world is not amenable to this kind of general theoretical representation.

The social thinker who serves as Popper’s example of this kind of holistic social theory is Karl Marx. According to Popper, Marx’s Capital (Marx 1977 [1867]) is intended to be a general theory of capitalist society, providing a basis for predicting its future and its specific internal changes over time. And Marx’s theory of historical materialism (“History is a history of class conflict,” “History is the unfolding of the contradictions between the forces and relations of production”; (Communist Manifesto, Preface to a Contribution to Political Economy)) is Popper’s central example of a holistic theory of history. And it is Marx’s theory of revolution that provides a central example for Popper under the category of utopian social engineering. In The Scientific Marx I argue that Popper’s representation of Marx’s social science contribution is flawed; rather, Marx’s ideas about capitalism take the form of an eclectic combination of sociology, economic theory, historical description, and institutional analysis. It is also true, however, that Marx writes in Capital that he is looking to identify the laws of motion of the capitalist mode of production.

Whatever the accuracy of Popper’s interpretation of Marx, his more general point is certainly correct. Sociology and economics cannot provide us with general theories that permit the prediction of large historical change. Popper’s critique of historicism, then, can be rephrased as a compelling critique of the model of the natural sciences as a meta-theory for the social and historical sciences. History and society are not law-governed systems for which we might eventually hope to find exact and comprehensive theories. Instead, they are the heterogeneous, plastic, and contingent compound of actions, structures, causal mechanisms, and conjunctures that elude systematization and prediction. And this conclusion brings us back to the centrality of agent-centered explanations of historical outcomes.

I chose the planetary photo above because it raises a number of complexities about theoretical systems, comprehensive models, and prediction that need sorting out. Popper observes that metaphors from astronomy have had a great deal of sway with historicists: “Modern historicists have been greatly impressed by the success of Newtonian theory, and especially by its power of forecasting the position of the planets a long time ahead” (36). The photo is of a distant planetary system in the making. The amount of debris in orbit makes it clear that it would be impossible to model and predict the behavior of this system over time; this is an n-body gravitational problem that even Newton despaired of solving. What physics does succeed in doing is identifying the processes and forces that are relevant to the evolution of this system over time — without being able to predict its course in even gross form. This is a good example of a complex, chaotic system where prediction is impossible.

Policy, treatment, and mechanism

Policies are selected in order to bring about some desired social outcome or to prevent an undesired one. Medical treatments are applied in order to cure a disease or to ameliorate its effects. In each case an intervention is performed in the belief that this intervention will causally interact with a larger system in such a way as to bring about the desired state. On the basis of a body of beliefs and theories, we judge that T in circumstances C will bring about O with some degree of likelihood. If we did not have such a belief, then there would be no rational basis for choosing to apply the treatment. “Try something, try anything” isn’t exactly a rational basis for policy choice.

In other words, policies and treatments depend on the availability of bodies of knowledge about the causal structure of the domain we’re interested in — what sorts of factors cause or inhibit what sorts of outcomes. This means we need to have some knowledge of the mechanisms that are at work in this domain. And it also means that we need to have some degree of ability to predict some future states — “If you give the patient an aspirin her fever will come down” or “If we inject $700 billion into the financial system the stock market will recover.”

Predictions of this sort could be grounded in two different sorts of reasoning. They might be purely inductive: “Clinical studies demonstrate that administration of an aspirin has a 90% probability of reducing fever.” Or they could be based on hypotheses about the mechanisms that are operative: “Fever is caused by C; aspirin reduces C in the bloodstream; therefore we should expect that aspirin reduces fever by reducing C.” And ideally we would hope that both forms of reasoning are available — causal expectations are borne out by clinical evidence.

Implicitly this story assumes that the relevant causal systems are pretty simple — that there are only a few causal pathways and that it is possible to isolate them through experimental studies. We can then insert our proposed interventions into the causal diagram and have reasonable confidence that we can anticipate their effects. The logic of clinical trials as a way of establishing efficacy depends on this assumption of causal simplicity and isolation.

But what if the domain we’re concerned with isn’t like that? Suppose instead that there are many causal factors and a high degree of causal interdependence among the factors. And suppose that we have only limited knowledge of the strength and form of these interdependencies. Is it possible to make rationally justified interventions within such a system?

This description comes pretty close to what are referred to as complex systems. And the most basic finding in the study of complex systems is the extreme difficulty of anticipating future system states. Small interventions or variations in boundary conditions produce massive variations in later system states. And this is bad news for policy makers who are hoping to “steer” a complex system towards a more desirable state. There are good analytical reasons for thinking that they will not be able to anticipate the nature or magnitude or even direction of the effects of the intervention.

The study of complex systems is a collection of areas of research in mathematics, economics, and biology that attempt to arrive at better ways of modeling and projecting the behavior of systems with these complex causal interdependencies. This is an exciting field of research at places like the Santa Fe Institute and the University of Michigan. One important tool that has been extensively developed is the theory of agent-based modeling — essentially, the effort to derive system properties as the aggregate result of the activities of independent agents at the micro-level. And a fairly durable result has emerged: run a model of a complex system a thousand times and you will get a wide distribution of outcomes. This means that we need to think of complex systems as being highly contingent and path-dependent in their behavior. The effect of an intervention may be a wide distribution of future states.
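As a toy illustration of the “run it a thousand times” point, here is a threshold-cascade model in the spirit of Granovetter; the parameters and setup are invented for illustration and do not reproduce any specific Santa Fe or Michigan model. Identical parameters produce a wide spread of final outcomes across runs.

```python
import random
from statistics import mean, pstdev

# A toy threshold-cascade model (illustrative only): each agent adopts a
# behavior once the share of prior adopters reaches the agent's threshold.
def run_once(n=100, n_seeds=5, rng=None):
    rng = rng or random.Random()
    thresholds = [0.0] * n_seeds + [rng.random() for _ in range(n - n_seeds)]
    adopted = [t == 0.0 for t in thresholds]
    changed = True
    while changed:
        changed = False
        share = sum(adopted) / n
        for i, t in enumerate(thresholds):
            if not adopted[i] and t <= share:
                adopted[i] = True
                changed = True
    return sum(adopted) / n

rng = random.Random(42)
results = [run_once(rng=rng) for _ in range(1000)]
print(f"mean final adoption share: {mean(results):.2f}")
print(f"standard deviation:        {pstdev(results):.2f}")
print(f"range across runs:         {min(results):.2f} to {max(results):.2f}")
# Same parameters every time, yet outcomes range from small, stalled cascades
# to near-universal adoption: the object of analysis is the distribution of
# outcomes, not a single predicted trajectory.
```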

So far the argument is located at a pretty high level of abstraction. Simple causal systems admit of intelligent policy intervention, whereas complex, chaotic systems may not. But the important question is more concrete: which kind of system are we facing when we consider social policy or disease? Are social systems and diseases examples of complex systems? Can social systems be sufficiently disaggregated into fairly durable subsystems that admit of discrete causal analysis and intelligent intervention? What about diseases such as solid tumors? Can we have confidence in interventions such as chemotherapy? And, in both realms, can the findings of complexity theory be helpful by providing mathematical means for working out the system effects of various possible interventions?

Composition of the social

Our social ontology needs to reflect the insight that complex social happenings are almost invariably composed of multiple causal processes rather than existing as unitary systems. The phenomena of a great social whole — a city over a fifty-year span, a period of sustained social upheaval or revolution (Iran in the 1970s-1980s), an international trading system — should be conceptualized as the sum of a large number of separate processes with intertwining linkages and often highly dissimilar tempos. We can provide analysis and theory for some of the component processes, and we can attempt to model the results of aggregating these processes. And we can attempt to explain the patterns and exceptions that arise as the consequence of one or more of these processes. Some of the subordinate processes will be significantly amenable to theorizing and projection, and some will not. And the totality of behavior will be more than the “sum” of the relatively limited number of processes that are amenable to theoretical analysis. This means that the behavior of the whole will demonstrate contingency and unpredictability modulo the conditions and predictable workings of the known processes.

Consider the example of the development of a large city over time. The sorts of subordinate processes that I’m thinking of here might include —

  • The habitation dynamics created by the nodes of a transportation system
  • The dynamics of electoral competition governing the offices of mayor and city council
  • The politics of land use policy and zoning permits
  • The dynamics and outcomes of public education on the talent level of the population
  • Economic development policies and tax incentives emanating from state government
  • Dynamics of real estate system with respect to race
  • Employment and poverty characteristics of surrounding region

Each of these processes can be investigated by specialists — public policy experts, sociologists of race and segregation, urban politics experts. Each contributes to features of the evolving urban environment. And it is credible that there are consistent patterns of behavior and development within these various types of processes. This justifies a specialist’s approach to specific types of causes of urban change, and rigorous social science can result.

But it must also be recognized that there are system interdependencies among these groups of factors. More in-migration of extremely poor families may put more stress on the public schools. Enhancement of quality or accessibility of public schools may increase in-migration (the Kalamazoo Promise, for example). Political incentives within the city council system may favor land-use policies that encourage the creation of racial or ethnic enclaves. So it isn’t enough to understand the separate processes individually; we need to make an effort to discover these endogenous relations among them.

But over and above this complication of the causal interdependency of recognized factors, there is another and more pervasive complication as well. For any given complex social whole, it is almost always the case that there are likely to be additional causal processes that have not been separately analyzed or theorized. Some may be highly contingent and singular — for example, the many effects that September 11 had on NYC. Others may be systemic and important, but novel and previously untheorized — for example, the global information networks that Saskia Sassen emphasizes for the twenty-first century global city.

The upshot is that a complex social whole exceeds the particular theories we have created for this kind of phenomenon at any given point in time. The social whole is composed of lower-level processes; but it isn’t exhausted by any specific list of underlying processes. Therefore we shouldn’t imagine that the ideal result of investigation of urban phenomena is a comprehensive theory of the city — the goal is chimerical. Social science is always “incomplete”, in the sense that there are always social processes relevant to social outcomes that have not been theorized.

Is there any type of social phenomenon that is substantially more homogeneous than this description would suggest — with the result that we might be able to arrive at neat, comprehensive theories of this kind of social entity? Consider these potential candidates: inner city elementary schools, labor unions, wars of national liberation, civil service bureaus, or multi-national corporations. One might make the case that these terms capture a group of phenomena that are fairly homogeneous and would support simple, unified theories. But I think that this would be mistaken. Rather, much the same kind of causal complexity that is presented by the city of Chicago or London is also presented by elementary schools and labor unions. There are multiple social, cultural, economic, interpersonal, and historical factors that converge on a particular school in a particular place, or a particular union involving specific individuals and issues; and the characteristics of the school or the union are influenced by this complex convergence of factors. (On the union example, consider Howard Kimeldorf’s fascinating study, Battling for American Labor: Wobblies, Craft Workers, and the Making of the Union Movement. Kimeldorf demonstrates the historical contingency and the plurality of social and business factors that led to the significant differences among dock workers’ unions in the United States.)

What analytical frameworks are available for capturing this understanding of the compositional nature of society? I have liked the framework of causal mechanisms, suggesting as it does the idea of there being separable causal processes underlying particular social facts that are diverse and amenable to investigation. The theory of “assemblages” captures the idea as well, with its ontology of separable sub-processes. (Nick Srnicek provides an excellent introduction to assemblage theory in his master’s thesis.) And the language of microfoundations, methodological localism, and the agent-structure nexus conveys much the same idea as well. In each case, we have the idea that the social entity is composed of underlying processes that take us back in the direction of agents acting within the context of social and environmental constraints. And we have a premise of causal openness: the behavior of the whole is not fully determined by a particular set of subordinate mechanisms or assemblages.

Heterogeneity of the social

I think heterogeneity is a very basic characteristic of the domain of the social. And I think this makes a big difference for how we should attempt to study the social world “scientifically”. What sorts of things am I thinking about here?

Let’s start with some semantics. A heterogeneous group of things is the contrary of a homogeneous group, and we can define a homogeneous group as “a group of fundamentally similar units or samples”. A homogeneous body may consist of a group of units with identical properties, or it may be a smooth mixture of different things, consisting of a similar composition at many levels of scale. A fruitcake is non-homogeneous, in that distinct volumes may include just cake or a mix of cake and dried cherries, or cake and the occasional walnut. The properties of fruitcake depend on which sample we encounter. A well-mixed volume of oil and vinegar, by contrast, is homogeneous in a specific sense: the properties of each sample volume are the same as any other. The basic claim about the heterogeneity of the social comes down to this: at many levels of scale we continue to find a diversity of social things and processes at work. Society is more similar to fruitcake than to cheesecake.

Heterogeneity makes a difference because one of the central goals of positivist science is to discover strong regularities among classes of phenomena, and regularities appear to presuppose homogeneity of the things over which the regularities are thought to obtain. So to observe that social phenomena are deeply heterogeneous at many levels of scale is to cast fundamental doubt on the goal of discovering strong social regularities.

Let’s consider some of the forms of heterogeneity that the social world illustrates.

First is the heterogeneity of social causes and influences. Social events are commonly the result of a variety of different kinds of causes that come together in highly contingent conjunctions. A revolution may be caused by a protracted drought, a harsh system of land tenure, a new ideology of peasant solidarity, a communications system that conveys messages to the rural poor, and an unexpected spat among the rulers — all coming together at a moment in time. And this range of causal factors, in turn, shows up in the background of a very heterogeneous set of effects. (A transportation network, for example, may play a causal role in the occurrence of an epidemic, the spread of radical ideas, and a long, slow process of urban settlement.) The causes of an event are a mixed group of dissimilar influences with different dynamics and temporalities, and the effects of a given causal factor are also a mixed and dissimilar group.

Second is the heterogeneity that can be discovered within social categories of things — cities, religions, electoral democracies, social movements. Think of the diversity within Islam documented so well by Clifford Geertz (Islam Observed: Religious Development in Morocco and Indonesia); the diversity at multiple levels that exists among great cities like Beijing, New York, Geneva, and Rio (institutions, demography, ethnic groups, economic characteristics, administrative roles, …); the institutional variety that exists in the electoral democracies of India, France, and Argentina; or the wild diversity across the social movements of the right.

Third is the heterogeneity that can be discovered across and within social groups. It is not the case that all Kansans think alike — and this is true for whatever descriptors we might choose in order to achieve greater homogeneity (evangelical Kansans, urban evangelical Kansans, …). There are always interesting gradients within any social group. Likewise, there is great variation in the nature of ordinary, lived experience — for middle-class French families celebrating quatorze Juillet, for Californians celebrating July 4, and for Brazilians enjoying Dia da Independência on September 7.

A fourth form of heterogeneity takes us within the agent herself, when we note the variety of motives, moral frameworks, emotions, and modes of agency on the basis of which people act. This is one of the weaknesses of both doctrinaire rational choice theory and dogmatic Marxism: the analytical assumption of a single dimension of motivation and reasoning. Instead, it is evident that one person acts for a variety of motives at a given time, persons shift their motives over time, and members of groups differ in terms of their motivational structure as well. So there is heterogeneity of motives and agency within the agent.

These dimensions of heterogeneity make the point: the social world is an ensemble, a dynamic mixture, and an ongoing interaction of forces, agents, structures, and mentalities. Social outcomes emerge from this heterogeneous and dynamic mixture, and the quest for general laws is deeply quixotic.

Where does the heterogeneity principle take us? It suggests an explanatory strategy: instead of looking for laws of whole categories of events and things, or searching for simple answers to questions like “why do revolutions occur?”, we might look to a “concatenation” strategy. That is, we might simply acknowledge the fact of molar heterogeneity, look for some of the different processes and things in play in a given item of interest, and then build up a theory of the whole as a concatenation of the particulars of the parts.

Significantly, this strategy takes us to several fruitful ideas that already have some currency.

First is the idea of looking for microfoundations for observed social processes (Microfoundations, Methods, and Causation: On the Philosophy of the Social Sciences). Here the idea is that higher-level social processes, causes, and events need to be placed within the context of an account of the agent-level institutions and circumstances that convey those processes.

Second is the method of causal mechanisms advocated by McAdam, Tarrow, and Tilly, and discussed frequently here (Dynamics of Contention (Cambridge Studies in Contentious Politics)). Put simply, the approach recommends that we explain an outcome as the contingent result of the concatenation of a set of independent causal mechanisms (escalation, intra-group competition, repression, …).

And third is the theory of “assemblages”, recommended by Nick from accursedshare and derived from some of the theories of Gilles Deleuze. (Manuel Delanda describes this theory in A New Philosophy of Society: Assemblage Theory And Social Complexity.)

Each of these ideas gives expression to the important truth of the heterogeneity principle: that social outcomes are the aggregate result of a number of lower-level processes and institutions that give rise to them, and that social outcomes are contingent results of interaction and concatenation of these lower-level processes.

Agent-based modeling as social explanation

Logical positivism favored a theory of scientific explanation that focused on subsumption under general laws. We explain an outcome by identifying one or more general laws, a set of boundary conditions, and a derivation of the outcome from these statements. A second and competing theory of scientific explanation can be called “causal realism.” On this approach, we explain an outcome by identifying the causal processes and mechanisms that give rise to it. And we explain a pattern of outcomes by identifying common causal mechanisms that tend to produce outcomes of this sort in circumstances like these. (If we observe that patterns of reciprocity tend to break down as villages become towns, we may identify the causal mechanism at work as the erosion of the face-to-face relationships that are a necessary condition for reciprocity.)

But there are other approaches we might take to social explanation and prediction. And one particularly promising avenue of approach is “agent-based simulation.” Here the basic idea is that we want to explain how a certain kind of social process unfolds. We can take our lead from the general insight that social processes depend on microfoundations at the level of socially situated individuals. Social outcomes are the aggregate result of intentional, strategic interactions among large numbers of agents. And we can attempt to implement a computer simulation that represents the decision-making processes and the structural constraints that characterize a large number of interacting agents.

Thomas Schelling’s writings give the clearest exposition of the logic of this approach (Micromotives and Macrobehavior). Schelling demonstrates, in a large number of convincing cases, how we can explain large and complex social outcomes as the aggregate consequence of behavior by purposive agents pursuing their goals within constraints. He offers a simple model of residential segregation, for example, by modeling the consequences of assuming that blue residents prefer neighborhoods that are at least 50% blue, and red residents prefer neighborhoods at least 25% red. The consequence: a randomly distributed residential pattern becomes highly segregated over an extended series of iterations of individual moves.
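Here is a compact sketch of a Schelling-style neighborhood model along these lines; the thresholds follow the description above, while the grid size, vacancy rate, and move rule are my own simplifications rather than Schelling’s original specification.

```python
import random

# Schelling-style sketch: blue wants >= 50% blue neighbors, red wants >= 25%
# red neighbors; unhappy residents move to a random vacant cell.
SIZE, EMPTY_FRAC = 30, 0.1
THRESH = {"B": 0.50, "R": 0.25}
rng = random.Random(1)

def neighbors(grid, r, c):
    cells = [grid[(r + dr) % SIZE][(c + dc) % SIZE]
             for dr in (-1, 0, 1) for dc in (-1, 0, 1) if (dr, dc) != (0, 0)]
    return [x for x in cells if x is not None]

def unhappy(grid, r, c):
    nbrs = neighbors(grid, r, c)
    if not nbrs:
        return False
    return sum(n == grid[r][c] for n in nbrs) / len(nbrs) < THRESH[grid[r][c]]

def same_color_share(grid):
    shares = []
    for r in range(SIZE):
        for c in range(SIZE):
            if grid[r][c] is not None:
                nbrs = neighbors(grid, r, c)
                if nbrs:
                    shares.append(sum(n == grid[r][c] for n in nbrs) / len(nbrs))
    return sum(shares) / len(shares)

# Random initial layout: equal numbers of blue and red, ~10% vacancies.
cells = ["B", "R"] * int(SIZE * SIZE * (1 - EMPTY_FRAC) / 2)
cells += [None] * (SIZE * SIZE - len(cells))
rng.shuffle(cells)
grid = [cells[i * SIZE:(i + 1) * SIZE] for i in range(SIZE)]

print(f"initial same-color neighbor share: {same_color_share(grid):.2f}")
for _ in range(50):  # rounds of moves by unhappy residents
    movers = [(r, c) for r in range(SIZE) for c in range(SIZE)
              if grid[r][c] is not None and unhappy(grid, r, c)]
    empties = [(r, c) for r in range(SIZE) for c in range(SIZE) if grid[r][c] is None]
    for r, c in movers:
        er, ec = empties.pop(rng.randrange(len(empties)))
        grid[er][ec], grid[r][c] = grid[r][c], None
        empties.append((r, c))
print(f"final same-color neighbor share:   {same_color_share(grid):.2f}")
```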

It is possible to model various kinds of social situations by attributing a range of sets of preferences and beliefs across a hypothetical set of agents — and then running their interactions forward over a period of time. SimCity is a “toy” version of this idea — what happens when a region is developed by a set of players with a given range of goals and resources? By running the simulation multiple times it is possible to investigate whether there are patterned outcomes that recur across numerous timelines — or, sometimes, whether there are multiple equilibria that can result, depending on more or less random events early in the simulation.

Robert Axelrod’s repeated prisoners’ dilemma tournaments represent another such example of agent-based simulations. (Axelrod demonstrates that reciprocity, or tit-for-tat, is the winning strategy for a population of agents who are engaged in a continuing series of prisoners’ dilemma games with each other.) The most ambitious examples of this kind of modeling (and predicting and explaining) are to be found in the Santa Fe Institute’s research paradigm involving agent-based modeling and the modeling of complex systems. Interdisciplinary researchers at the University of Michigan pursue this approach to explanation at the Center for the Study of Complex Systems. (Mathematician John Casti describes a number of these sorts of experiments and simulations in Would-Be Worlds: How Simulation is Changing the Frontiers of Science and other books.)
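And here is a minimal round-robin in the spirit of Axelrod’s tournaments; the payoff values are the standard prisoner’s dilemma ones, but the short list of entrant strategies is my own selection rather than Axelrod’s actual field.

```python
import itertools

# A small round-robin in the spirit of Axelrod's tournaments (illustrative
# entrant list). Payoffs: mutual cooperation 3, mutual defection 1, sucker 0,
# temptation 5.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(my_hist, their_hist):
    return "C" if not their_hist else their_hist[-1]

def always_defect(my_hist, their_hist):
    return "D"

def always_cooperate(my_hist, their_hist):
    return "C"

def grudger(my_hist, their_hist):
    return "D" if "D" in their_hist else "C"

def play(strat_a, strat_b, rounds=200):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        a, b = strat_a(hist_a, hist_b), strat_b(hist_b, hist_a)
        pa, pb = PAYOFF[(a, b)]
        hist_a.append(a); hist_b.append(b)
        score_a += pa; score_b += pb
    return score_a, score_b

strategies = [tit_for_tat, always_defect, always_cooperate, grudger]
totals = {s.__name__: 0 for s in strategies}
for s1, s2 in itertools.combinations_with_replacement(strategies, 2):
    a, b = play(s1, s2)
    totals[s1.__name__] += a
    if s1 is not s2:              # each strategy also meets a copy of itself once
        totals[s2.__name__] += b

for name, score in sorted(totals.items(), key=lambda kv: -kv[1]):
    print(f"{name:16s} {score}")
# The reciprocating strategies (tit_for_tat, grudger) finish ahead of
# always_defect in this small field, echoing Axelrod's result.
```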

This approach to social analysis is profoundly different from the “subsumption under theoretical principles” approach, the covering-law model of explanation. It doesn’t work on the assumption that there are laws or governing regularities pertaining to the social outcomes or complex systems at all. Instead, it attempts to derive descriptions of the outcomes as the aggregate result of the purposive and interactive actions of the many individuals who make up the social interaction over time. It is analogous to the simulation of swarms of insects, birds, or fish, in which we attribute very basic “navigational” rules to the individual organisms, and then run forward the behavior of the group as the compound of the interactive decisions made by the individuals. (Here is a brief account of studies of swarming behavior.)

How would this model of the explanation of group behavior be applied to real problems of social explanation? Consider one example: an effort to tease out the relationships between transportation networks and habitation patterns. We might begin with a compact urban population of a certain size. We might then postulate several things:

  • The preferences that each individual has concerning housing costs, transportation time and expense, and social and environmental amenities.
  • The postulation of a new light rail system extending through the urban center into lightly populated farm land northeast and southwest
  • The postulation of a set of prices and amenities associated with possible housing sites throughout the region to a distance of 25 miles
  • The postulation of a rate of relocation for urban dwellers and a rate of immigration of new residents

Now run this set of assumptions forward through multiple generations, with individuals choosing location based on their preferences, and observe the patterns of habitation that result.
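A deliberately stripped-down sketch along these lines is below; the corridor geometry, utility weights, relocation rates, and rail-stop locations are invented for illustration and are not calibrated to any real city or to a specific published model.

```python
import random

# Stripped-down residential-choice sketch: agents pick a mile band along a
# 25-mile corridor, trading off price, commute time (faster near rail stops),
# and amenities; crowding raises prices. All parameters are illustrative.
rng = random.Random(7)
MILES = 25
STATIONS = {0, 5, 10, 15, 20}            # hypothetical light-rail stops
CAPACITY = 400                            # dwellings per mile band
BASE_PRICE = [1.0 - 0.03 * d for d in range(MILES + 1)]   # cheaper farther out
AMENITY = [rng.random() for _ in range(MILES + 1)]

def commute_time(d):
    nearest = min(abs(d - s) for s in STATIONS)
    return 10 + 1.5 * d if nearest <= 1 else 10 + 4.0 * d   # rail vs. driving

def utility(agent, d, occupancy):
    price = BASE_PRICE[d] * (1 + occupancy[d] / CAPACITY)   # crowding raises cost
    w_price, w_time, w_amen = agent
    return -w_price * price - w_time * commute_time(d) / 100 + w_amen * AMENITY[d]

def choose(agent, occupancy):
    options = [d for d in range(MILES + 1) if occupancy[d] < CAPACITY]
    return max(options, key=lambda d: utility(agent, d, occupancy))

def new_agent():
    return (rng.uniform(0.5, 2.0), rng.uniform(0.5, 2.0), rng.uniform(0.0, 1.0))

# Start with a compact urban population near the center.
residents = [(new_agent(), rng.randint(0, 3)) for _ in range(2000)]
for generation in range(20):
    occupancy = [0] * (MILES + 1)
    for _, d in residents:
        occupancy[d] += 1
    moved = []
    for agent, d in residents:
        if rng.random() < 0.10:                    # relocation rate
            occupancy[d] -= 1
            d = choose(agent, occupancy)
            occupancy[d] += 1
        moved.append((agent, d))
    for _ in range(100):                           # immigration per generation
        agent = new_agent()
        d = choose(agent, occupancy)
        occupancy[d] += 1
        moved.append((agent, d))
    residents = moved

near_station = sum(1 for _, d in residents if min(abs(d - s) for s in STATIONS) <= 1)
print(f"final population: {len(residents)}, living within a mile of a rail stop: {near_station}")
```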

This description of a simulation of urban-suburban residential distribution over time falls within the field of economic geography. It has a lot in common with von Thünen’s nineteenth-century Isolated State analysis of a city’s reach into the farm land surrounding it. (Click here for an interesting description of von Thünen’s method written in 1920.) What agent-based modeling adds to the analysis is the ability to use plentiful computational power to run models forward that include thousands of hypothetical agents; and to do this repeatedly so that it is possible to observe whether there are groups of patterns that result in different iterations. The results are then the aggregate consequence of the assumptions we make about large numbers of social agents — rather than being the expression of some set of general laws about “urbanization”.

And, most importantly, some of the results of the agent-based modeling and modeling of complexity performed by scholars associated with the Santa Fe Institute demonstrate the understandable novelty that can emerge from this kind of simulation. So an important theme of novelty and contingency is confirmed by this approach to social analysis.

There are powerful software packages that can provide a platform for implementing agent-based simulations; for example, NetLogo. Here is a screen shot from an implementation called “consumer behavior” by Yudi Limbar Yasik. The simulation has been configured to allow the user to adjust the parameters of agents’ behavior; the software then runs forward in time through a number of iterations. The graphs provide aggregate information about the results.