Policies are selected in order to bring about some desired social outcome or to prevent an undesired one. Medical treatments are applied in order to cure a disease or to ameliorate its effects. In each case an intervention is performed in the belief that this intervention will causally interact with a larger system in such a way as to bring about the desired state. On the basis of a body of beliefs and theories, we judge that T in circumstances C will bring about O with some degree of likelihood. If we did not have such a belief, then there would be no rational basis for choosing to apply the treatment. “Try something, try anything” isn’t exactly a rational basis for policy choice.
In other words, policies and treatments depend on the availability of bodies of knowledge about the causal structure of the domain we’re interested in — what sorts of factors cause or inhibit what sorts of outcomes. This means we need to have some knowledge of the mechanisms that are at work in this domain. And it also means that we need to have some degree of ability to predict some future states — “If you give the patient an aspirin her fever will come down” or “If we inject $700 billion into the financial system the stock market will recover.”
Predictions of this sort could be grounded in two different sorts of reasoning. They might be purely inductive: “Clinical studies demonstrate that administration of an aspirin has a 90% probability of reducing fever.” Or they could be based on hypotheses about the mechanisms that are operative: “Fever is caused by C; aspirin reduces C in the bloodstream; therefore we should expect that aspirin reduces fever by reducing C.” And ideally we would hope that both forms of reasoning are available — causal expectations are borne out by clinical evidence.
Implicitly this story assumes that the relevant causal systems are pretty simple — that there are only a few causal pathways and that it is possible to isolate them through experimental studies. We can then insert our proposed interventions into the causal diagram and have reasonable confidence that we can anticipate their effects. The logic of clinical trials as a way of establishing efficacy depends on this assumption of causal simplicity and isolation.
But what if the domain we’re concerned with isn’t like that? Suppose instead that there are many causal factors and a high degree of causal interdependence among the factors. And suppose that we have only limited knowledge of the strength and form of these interdependencies. Is it possible to make rationally justified interventions within such a system?
This description comes pretty close to what are referred to as complex systems. And the most basic finding in the study of complex systems is the extreme difficulty of anticipating future system states. Small interventions or variations in boundary conditions can produce massive variations in later system states. This is bad news for policy makers who are hoping to “steer” a complex system towards a more desirable state. There are good analytical reasons for thinking that they will not be able to anticipate the nature, magnitude, or even direction of the effects of the intervention.
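The point about small variations producing massive later differences can be made concrete with a standard textbook example, the logistic map iterated in its chaotic regime. (The map and its parameters are a stock illustration from chaos theory, not drawn from the discussion above.)

```python
# Sensitivity to initial conditions: the logistic map x -> r*x*(1-x)
# at r = 4.0, a standard chaotic-regime example.

def logistic_trajectory(x0, r=4.0, steps=50):
    """Iterate the logistic map from initial state x0 for a number of steps."""
    x = x0
    for _ in range(steps):
        x = r * x * (1 - x)
    return x

# Two initial conditions differing by one part in a billion...
a = logistic_trajectory(0.200000000)
b = logistic_trajectory(0.200000001)

# ...end up in unrelated states after 50 steps: the tiny initial
# difference has been amplified until the trajectories decorrelate.
print(abs(a - b))
```

Fifty iterations are enough for the initial one-in-a-billion discrepancy to grow to the full scale of the system's state space, which is exactly the predicament of a policy maker trying to forecast the downstream effect of a small intervention.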
The study of complex systems is a collection of areas of research in mathematics, economics, and biology that attempt to arrive at better ways of modeling and projecting the behavior of systems with these complex causal interdependencies. This is an exciting field of research at places like the Santa Fe Institute and the University of Michigan. One important tool that has been extensively developed is the theory of agent-based modeling — essentially, the effort to derive system properties as the aggregate result of the activities of independent agents at the micro-level. And a fairly durable result has emerged: run a model of a complex system a thousand times and you will get a wide distribution of outcomes. This means that we need to think of complex systems as being highly contingent and path-dependent in their behavior. The effect of an intervention may be a wide distribution of future states.
So far the argument is located at a pretty high level of abstraction. Simple causal systems admit of intelligent policy intervention, whereas complex, chaotic systems may not. But the important question is more concrete: which kind of system are we facing when we consider social policy or disease? Are social systems and diseases examples of complex systems? Can social systems be sufficiently disaggregated into fairly durable subsystems that admit of discrete causal analysis and intelligent intervention? What about diseases such as solid tumors? Can we have confidence in interventions such as chemotherapy? And, in both realms, can the findings of complexity theory be helpful by providing mathematical means for working out the system effects of various possible interventions?