Logical positivism favored a theory of scientific explanation that focused on subsumption under general laws. We explain an outcome by identifying one or more general laws, a set of boundary conditions, and a derivation of the outcome from these statements. A second and competing theory of scientific explanation can be called “causal realism.” On this approach, we explain an outcome by identifying the causal processes and mechanisms that give rise to it. And we explain a pattern of outcomes by identifying common causal mechanisms that tend to produce outcomes of this sort in circumstances like these. (If we observe that patterns of reciprocity tend to break down as villages become towns, we may identify the causal mechanism at work as the erosion of the face-to-face relationships that are a necessary condition for reciprocity.)
But there are other approaches we might take to social explanation and prediction. And one particularly promising avenue of approach is “agent-based simulation.” Here the basic idea is that we want to explain how a certain kind of social process unfolds. We can take our lead from the general insight that social processes depend on microfoundations at the level of socially situated individuals. Social outcomes are the aggregate result of intentional, strategic interactions among large numbers of agents. And we can attempt to implement a computer simulation that represents the decision-making processes and the structural constraints that characterize a large number of interacting agents.
Thomas Schelling’s writings, especially Micromotives and Macrobehavior, give the clearest exposition of the logic of this approach. Schelling demonstrates, in a large number of convincing cases, how we can explain large and complex social outcomes as the aggregate consequence of behavior by purposive agents pursuing their goals within constraints. He offers a simple model of residential segregation, for example, by modeling the consequences of assuming that blue residents prefer neighborhoods that are at least 50% blue and red residents prefer neighborhoods that are at least 25% red. The consequence: a randomly distributed residential pattern becomes highly segregated over an extended series of iterations of individual moves.
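Schelling’s checkerboard model can be sketched directly in code. The following is a minimal Python illustration (not Schelling’s own implementation) using the asymmetric thresholds just described: blue agents are unhappy unless at least 50% of their neighbors are blue, red agents unless at least 25% are red, and unhappy agents relocate to a randomly chosen vacant cell. The grid size, vacancy rate, and number of rounds are arbitrary choices made for the sketch.

```python
import random

SIZE, EMPTY_FRAC = 20, 0.1
THRESHOLD = {"blue": 0.50, "red": 0.25}   # minimum share of same-color neighbors

random.seed(0)
cells = ["blue", "red"] * int(SIZE * SIZE * (1 - EMPTY_FRAC) / 2)
cells += [None] * (SIZE * SIZE - len(cells))          # vacant cells
random.shuffle(cells)
grid = {(r, c): cells[r * SIZE + c] for r in range(SIZE) for c in range(SIZE)}

def neighbors(r, c):
    # the eight surrounding cells, wrapping around the edges (torus)
    return [grid[(r + dr) % SIZE, (c + dc) % SIZE]
            for dr in (-1, 0, 1) for dc in (-1, 0, 1) if (dr, dc) != (0, 0)]

def unhappy(r, c):
    color = grid[(r, c)]
    occupied = [n for n in neighbors(r, c) if n is not None]
    if not occupied:
        return False
    return sum(n == color for n in occupied) / len(occupied) < THRESHOLD[color]

def mean_similarity():
    # average share of same-color neighbors across all residents
    vals = []
    for (r, c), v in grid.items():
        if v is None:
            continue
        occ = [n for n in neighbors(r, c) if n is not None]
        if occ:
            vals.append(sum(n == v for n in occ) / len(occ))
    return sum(vals) / len(vals)

def step():
    movers = [(r, c) for (r, c), v in grid.items() if v is not None and unhappy(r, c)]
    empties = [p for p, v in grid.items() if v is None]
    for pos in movers:
        dest = empties.pop(random.randrange(len(empties)))
        grid[dest], grid[pos] = grid[pos], None       # move to a random vacancy
        empties.append(pos)

before = mean_similarity()
for _ in range(50):
    step()
after = mean_similarity()
print(f"mean same-color neighbor share: {before:.2f} -> {after:.2f}")
```

In typical runs the mean same-color neighbor share rises well above its initial random level: pronounced segregation emerges even though no individual agent sought it.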
It is possible to model various kinds of social situations by attributing a range of preferences and beliefs to a hypothetical set of agents, and then running their interactions forward over a period of time. SimCity is a “toy” version of this idea: what happens when a region is developed by a set of players with a given range of goals and resources? By running the simulation multiple times it is possible to investigate whether there are patterned outcomes that recur across numerous timelines, or, sometimes, whether there are multiple equilibria that can result, depending on more or less random events early in the simulation.
Robert Axelrod’s repeated prisoners’ dilemma tournaments represent another such example of agent-based simulations. (Axelrod demonstrates that reciprocity, or tit-for-tat, is the winning strategy for a population of agents who are engaged in a continuing series of prisoners’ dilemma games with each other.) The most ambitious examples of this kind of modeling (and predicting and explaining) are to be found in the Santa Fe Institute’s research paradigm involving agent-based modeling and the modeling of complex systems. Interdisciplinary researchers at the University of Michigan pursue this approach to explanation at the Center for the Study of Complex Systems. (Mathematician John Casti describes a number of these sorts of experiments and simulations in Would-Be Worlds: How Simulation is Changing the Frontiers of Science and other books.)
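A miniature version of Axelrod’s round-robin tournament is easy to reproduce. The Python sketch below pits four textbook strategies against one another (including self-play) over 200-round matches with the standard payoffs; it illustrates the tournament logic, and is not Axelrod’s original code or his actual field of entrants.

```python
from itertools import combinations_with_replacement

T, R, P, S = 5, 3, 1, 0   # standard prisoner's dilemma payoffs
ROUNDS = 200

# each strategy sees (its own history, the opponent's history)
def always_cooperate(me, opp): return "C"
def always_defect(me, opp):    return "D"
def tit_for_tat(me, opp):      return opp[-1] if opp else "C"
def grudger(me, opp):          return "D" if "D" in opp else "C"

STRATEGIES = [always_cooperate, always_defect, tit_for_tat, grudger]

PAYOFF = {("C", "C"): (R, R), ("C", "D"): (S, T),
          ("D", "C"): (T, S), ("D", "D"): (P, P)}

def play_match(s1, s2, rounds=ROUNDS):
    h1, h2, sc1, sc2 = [], [], 0, 0
    for _ in range(rounds):
        m1, m2 = s1(h1, h2), s2(h2, h1)
        p1, p2 = PAYOFF[(m1, m2)]
        sc1, sc2 = sc1 + p1, sc2 + p2
        h1.append(m1); h2.append(m2)
    return sc1, sc2

scores = {s.__name__: 0 for s in STRATEGIES}
for s1, s2 in combinations_with_replacement(STRATEGIES, 2):
    sc1, sc2 = play_match(s1, s2)
    scores[s1.__name__] += sc1
    if s1 is not s2:                 # count self-play only once
        scores[s2.__name__] += sc2

for name, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(name, score)
```

In this tiny field the reciprocating strategies (tit-for-tat and the grudger) come out on top, while unconditional defection does worst against other retaliators; Axelrod’s actual tournaments involved dozens of far more elaborate entrants.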
This approach to social analysis is profoundly different from the “subsumption under theoretical principles” approach, the covering-law model of explanation. It doesn’t work on the assumption that there are laws or governing regularities pertaining to the social outcomes or complex systems at all. Instead, it attempts to derive descriptions of the outcomes as the aggregate result of the purposive and interactive actions of the many individuals who make up the social interaction over time. It is analogous to the simulation of swarms of insects, birds, or fish, in which we attribute very basic “navigational” rules to the individual organisms, and then run forward the behavior of the group as the compound of the interactive decisions made by the individuals.
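The swarm analogy can be made concrete with a stripped-down alignment model in the spirit of the Vicsek flocking model (a simplification introduced here for illustration, with arbitrary parameter values): each agent repeatedly adopts the average heading of its nearby neighbors and moves forward, and coherent collective motion emerges from this purely local rule.

```python
import math, random

random.seed(1)
N, L, RADIUS, SPEED, STEPS = 60, 10.0, 2.5, 0.3, 50

# random initial positions and headings on a periodic square of side L
xs = [random.uniform(0, L) for _ in range(N)]
ys = [random.uniform(0, L) for _ in range(N)]
th = [random.uniform(-math.pi, math.pi) for _ in range(N)]

def order(thetas):
    # alignment order parameter: 1 = all moving the same way, ~0 = random
    c = sum(math.cos(t) for t in thetas) / len(thetas)
    s = sum(math.sin(t) for t in thetas) / len(thetas)
    return math.hypot(c, s)

def step():
    global th
    new = []
    for i in range(N):
        cs = sn = 0.0
        for j in range(N):
            dx = (xs[j] - xs[i] + L / 2) % L - L / 2   # periodic distance
            dy = (ys[j] - ys[i] + L / 2) % L - L / 2
            if dx * dx + dy * dy <= RADIUS * RADIUS:
                cs += math.cos(th[j]); sn += math.sin(th[j])
        new.append(math.atan2(sn, cs))    # average heading of neighbors
    th = new
    for i in range(N):
        xs[i] = (xs[i] + SPEED * math.cos(th[i])) % L
        ys[i] = (ys[i] + SPEED * math.sin(th[i])) % L

before = order(th)
for _ in range(STEPS):
    step()
after = order(th)
print(f"alignment: {before:.2f} -> {after:.2f}")
```

No agent has a concept of “the flock”; the group-level pattern is derived, run by run, from the local rule, just as the text describes.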
How would this model of the explanation of group behavior be applied to real problems of social explanation? Consider one example: an effort to tease out the relationships between transportation networks and habitation patterns. We might begin with a compact urban population of a certain size. We might then postulate several things:
- The preferences that each individual has concerning housing costs, transportation time and expense, and social and environmental amenities.
- The postulation of a new light rail system extending through the urban center into lightly populated farm land northeast and southwest.
- The postulation of a set of prices and amenities associated with possible housing sites throughout the region to a distance of 25 miles.
- The postulation of a rate of relocation for urban dwellers and a rate of immigration of new residents.
Now run this set of assumptions forward through multiple generations, with individuals choosing location based on their preferences, and observe the patterns of habitation that result.
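The postulates above can be run as a toy agent-based simulation. In the Python sketch below every number is a hypothetical parameter invented for illustration: a 21x21 grid of housing sites, a central business district, a diagonal light-rail corridor that cuts commute costs, rents that decline with distance from the center, and sequentially arriving households with idiosyncratic tastes.

```python
import math, random

random.seed(2)
GRID = 21                        # 21x21 grid of possible home sites
CBD = (GRID // 2, GRID // 2)     # central business district
N_AGENTS = 150                   # households arriving over time

def near_rail(r, c):
    # hypothetical light-rail line along the NE-SW diagonal through the center
    return abs((r - CBD[0]) - (c - CBD[1])) <= 1

def commute_cost(r, c):
    dist = math.hypot(r - CBD[0], c - CBD[1])
    return dist * (0.4 if near_rail(r, c) else 1.0)   # rail cuts commute cost

def housing_cost(r, c):
    dist = math.hypot(r - CBD[0], c - CBD[1])
    return max(0.0, 8.0 - dist)                       # pricier near the center

occupied = set()
for _ in range(N_AGENTS):
    # each arriving household takes the best free site: low commute + low rent,
    # plus a small random idiosyncratic taste term
    best = min(((r, c) for r in range(GRID) for c in range(GRID)
                if (r, c) not in occupied),
               key=lambda s: commute_cost(*s) + housing_cost(*s)
                             + random.uniform(0, 0.5))
    occupied.add(best)

on_rail = sum(near_rail(r, c) for r, c in occupied)
print(f"{on_rail} of {N_AGENTS} households settled in the rail corridor")
```

Under these assumptions the cheap, fast-commute sites along the rail line fill up first, so habitation stretches out along the corridor: an aggregate pattern produced by nothing more than individual cost-minimizing choices.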
This description of a simulation of urban-suburban residential distribution over time falls within the field of economic geography. It has a lot in common with von Thünen’s nineteenth-century Isolated State analysis of a city’s reach into the farm land surrounding it. (An interesting description of von Thünen’s method was written in 1920.) What agent-based modeling adds to the analysis is the ability to use plentiful computational power to run models forward that include thousands of hypothetical agents, and to do this repeatedly, so that it is possible to observe whether there are groups of patterns that recur across different iterations. The results are then the aggregate consequence of the assumptions we make about large numbers of social agents, rather than the expression of some set of general laws about “urbanization.”
And, most importantly, some of the results of the agent-based modeling and modeling of complexity performed by scholars associated with the Santa Fe Institute demonstrate the unexpected novelty that can emerge from this kind of simulation. So an important theme of novelty and contingency is supported by this approach to social analysis.
There are powerful software packages that provide a platform for implementing agent-based simulations; for example, NetLogo. One example is an implementation called “consumer behavior” by Yudi Limbar Yasik. The simulation is configured to allow the user to adjust the parameters of agents’ behavior; the software then runs forward in time through a number of iterations, and graphs provide aggregate information about the results.