Predicting, forecasting, and superforecasting

I have expressed a number of reservations about the feasibility of predicting large, important outcomes in the social world (link, link, link). Here are a couple of observations drawn from these earlier posts:

We sometimes think that there is fundamental stability in the social world, or at least an orderly pattern of development to the large social changes that occur…. But really, our desire to perceive order in the things we experience often deceives us. The social world at any given time is a conjunction of an enormous number of contingencies, accidents, and conjunctures. So we shouldn’t be surprised at the occurrence of crises, unexpected turns, and outbreaks of protest and rebellion. It is continuity rather than change that needs explanation. 

Social processes and causal sequences have a wide range of profiles. Some social processes — for example, population size — are continuous and roughly linear. These are the simplest processes to project into the future. Others, like the ebb and flow of popular names, spread of a disease, or mobilization over a social cause, are continuous but non-linear, with sharp turning points (tipping points, critical moments, exponential takeoff, hockey stick). And others, like the stock market, are discontinuous and stochastic, with lots of random events pushing prices up and down. (link)

One reason for the failure of large-scale predictions about social systems is the complexity of causal influences and interactions within the domain of social causation. We may be confident that X causes Z when it occurs in isolated circumstances. But it may be that when U, V, and W are present, the effect of X is unpredictable, because of the complex interactions and causal dynamics of these other influences. This is one of the central findings of complexity studies — the unpredictability of the interactions of multiple causal powers whose effects are non-linear.

 

Another difficulty — or perhaps a different aspect of the same difficulty — is the typical fact of path dependency of social processes. Outcomes are importantly influenced by the particulars of the initial conditions, so simply having a good idea of the forces and influences the system will experience over time does not tell us where it will wind up.

Third, social processes are sensitive to occurrences that are singular and idiosyncratic and not themselves governed by systemic properties. If the winter of 1812 had not been exceptionally cold, perhaps Napoleon’s march on Moscow might have succeeded, and the future political course of Europe might have been substantially different. But variations in the weather are not themselves systemically explicable — or at least not within the parameters of the social sciences.

Fourth, social events and outcomes are influenced by the actions of purposive actors. So it is possible for a social group to undertake actions that avert the outcomes that are otherwise predicted. Take climate change and rising ocean levels as an example. We may be able to predict a substantial rise in ocean levels in the next fifty years, rendering existing coastal cities largely uninhabitable. But what should we predict as a consequence of this fact? Societies may pursue different strategies for evading the bad consequences of these climate changes — retreat, massive water control projects, efforts at atmospheric engineering to reverse warming. And the social consequences of each of these strategies are widely different. So the acknowledged fact of global warming and rising ocean levels does not allow clear predictions about social development. (link)

When prediction and expectation fail, we are confronted with a “surprise”.

So what is a surprise? It is an event that shouldn’t have happened, given our best understanding of how things work. It is an event that deviates widely from our most informed expectations, given our best beliefs about the causal environment in which it takes place. A surprise is a deviation between our expectations about the world’s behavior, and the events that actually take place. Many of our expectations are based on the idea of continuity: tomorrow will be pretty similar to today; a delta change in the background will create at most an epsilon change in the outcome. A surprise is a circumstance that appears to represent a discontinuity in a historical series. 

It would be a major surprise if the sun suddenly stopped shining, because we understand the physics of fusion that sustains the sun’s energy production. It would be a major surprise to discover a population of animals in which acquired traits are passed across generations, given our understanding of the mechanisms of evolution. And it would be a major surprise if a presidential election were decided by a unanimous vote for one candidate, given our understanding of how the voting process works. The natural world doesn’t present us with a large number of surprises; but history and social life are full of them. 

The occurrence of major surprises in history and social life is an important reminder that our understanding of the complex processes that are underway in the social world is radically incomplete and inexact. We cannot fully anticipate the behavior of the subsystems that we study — financial systems, political regimes, ensembles of collective behavior — and we especially cannot fully anticipate the interactions that arise when processes and systems intersect. Often we cannot even offer reliable approximations of what the effects are likely to be of a given intervention. This has a major implication: we need to be very modest in the predictions we make about the social world, and we need to be cautious about the efforts at social engineering that we engage in. The likelihood of unforeseen and uncalculated consequences is great. 

And in fact commentators are now raising exactly these concerns about the 700 billion dollar rescue plan currently being designed by the Bush administration to save the financial system. “Will it work?” is the headline; “What unforeseen consequences will it produce?” is the subtext; and “Who will benefit?” is the natural followup question. 

It is difficult to reconcile this caution about the limits of our rational expectations about the future with the need for action and policy change in times of crisis. If we cannot rely on our expectations about what effects an intervention is likely to have, then we can’t have confidence in the actions and policies that we choose. And yet we must act; if war is looming, if famine is breaking out, if the banking system is teetering, a government needs to adopt policies that are well designed to minimize the bad consequences. It is necessary to make decisions about action that are based on incomplete information and insufficient theory. So it is a major challenge for the theory of public policy to incorporate the limits of our knowledge about consequences into the design of the policy process. One approach that might be taken is the model of designing for “soft landings” — designing strategies that are likely to do the least harm if they function differently than expected. Another is to emulate a strategy that safety engineers employ when designing complex, dangerous systems: to attempt to de-link the subsystems to the extent possible, in order to minimize the likelihood of unforeseeable interactions. (link)

One person who has persistently tried to answer the final question posed here — the conundrum of forming expectations in an uncertain world as a necessary basis for action — is Philip Tetlock. Tetlock’s decades-long research on forecasting and expert judgment is highly relevant to this topic. The recent book Superforecasting: The Art and Science of Prediction provides an excellent summary of the primary findings of the research that he and his collaborators have done on the topic.

Tetlock does a very good job of tracing through the sources of uncertainty that make projections and forecasts of the future so difficult. All of the uncertainties mentioned above are discussed in Superforecasting, and he supplements these objective sources of uncertainty with a substantial body of recent work on the cognitive biases that lead to over- or under-confidence in a set of expectations. (The work of both Daniel Kahneman and Scott Page receives astute discussion in the book.)

But in spite of these reasons to be dubious about pronouncements about future events, Tetlock finds that there are good theoretical and empirical reasons for believing that a modest amount of forecasting of complex events is nonetheless possible. He takes very seriously the probabilistic nature of social and economic events, so a forecast that “North Korea will perform a nuclear test within six months” must be understood both as a probabilistic statement about the world (there is a specific likelihood of such a test occurring) and as a Bayesian statement about the forecaster’s degree of confidence in the prediction. And good forecasters aim to be specific about both probabilities: for example, “I have a 75% level of confidence that there is a 55% likelihood of a North Korean nuclear test by date X”.

Moreover, Tetlock argues that it is possible to evaluate individual forecasters on the basis of their performance on specific forecasting tasks and observation of the outcomes. Tetlock would like to see the field of forecasting follow medicine in the direction of an evidence-based discipline in which practices and practitioners are constantly assessed and enabled to improve their performance. (As he points out, it is not difficult to assess the weatherman on his or her probabilistic forecasts of rain or sun.) The challenge for evaluation is to set clear standards of specificity for the terms of a forecast, and then to be able to test the forecasts against the observed outcomes once the time has expired. This is the basis for the multi-year forecasting tournaments that the Good Judgment Project has conducted. The idea of a Brier score serves as a way of measuring the accuracy of a set of probabilistic statements (link). Here is an explanation of “Brier scores” in the context of the Good Judgment Project (link); “standardized Brier scores are calculated so that higher scores denote lower accuracy, and the mean score across all forecasters is zero”. As the graph demonstrates, there is a wide difference between the best and the worst forecasters, given their performance over 100 forecasts.
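In its simplest binary form the Brier score is just the mean squared difference between the forecast probabilities and the 0/1 outcomes. Here is a minimal sketch of that calculation; the Good Judgment Project itself uses a multi-category variant and then standardizes scores across forecasters, so this is illustrative only, and the forecasters and outcomes are invented.

```python
# Minimal binary Brier scoring; illustrative only.

def brier_score(forecasts, outcomes):
    """Mean squared difference between forecast probabilities and outcomes.

    forecasts: probabilities assigned to the event occurring.
    outcomes:  0/1 values recording whether the event occurred.
    Lower is better; an unconditional forecast of 0.5 scores 0.25 on every question.
    """
    assert len(forecasts) == len(outcomes)
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# Three hypothetical forecasters evaluated on the same five resolved questions.
outcomes      = [1, 0, 0, 1, 1]
calibrated    = [0.8, 0.2, 0.1, 0.7, 0.9]   # decisive and well calibrated
hedger        = [0.5, 0.5, 0.5, 0.5, 0.5]   # never commits
overconfident = [1.0, 0.0, 0.9, 0.1, 1.0]   # decisive but sometimes badly wrong

for name, f in [("calibrated", calibrated), ("hedger", hedger), ("overconfident", overconfident)]:
    print(name, round(brier_score(f, outcomes), 3))
```

On this toy data the hedger who always says 50% scores 0.25 on every question, while the well-calibrated forecaster who commits to probabilities away from 0.5 scores far better, which is exactly the property the tournament scoring is meant to reward.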

So how is forecasting possible, given all the objective and cognitive barriers that stand in the way? Tetlock’s view is that many problems about the future can be broken down into component problems, some of which have more straightforward evidential bases. So instead of asking whether North Korea will test another nuclear device by November 1, 2016, the forecaster may ask a group of somewhat easier questions: how frequent have their tests been in the past? Do they have the capability to do so? Would China’s opposition to further tests be decisive?

Tetlock argues that the best forecasters do several things: they avoid getting committed to a single point of view; they consider conflicting evidence freely; they break a problem down into components that would need to be satisfied for the outcome to occur; and they revise their forecasts when new information is available. They are foxes rather than hedgehogs. He doubts that superforecasters are distinguished by uniquely superior intelligence or world-class subject expertise; instead, they are methodical analysts who gather data and estimates about various components of a problem and assemble their findings into a combined probability estimate.
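Here is a toy illustration of that working style: decompose the question, blend an outside-view base rate with an inside-view estimate of the components, and then revise with Bayes’ rule when new information arrives. All of the numbers are invented; this is a sketch of the general approach, not a reconstruction of any actual forecast.

```python
# Toy decomposition and revision of a forecast; all numbers are invented.

# Outside view: historical base rate of a test occurring in any six-month window.
base_rate = 0.40

# Inside view: components the forecaster judges separately.
p_capability = 0.95                # technical capability to test on this timeline
p_intent_given_capability = 0.50   # political decision to test, given capability
inside_view = p_capability * p_intent_given_capability

# Blend the outside and inside views (equal weights here; the weights are a judgment call).
prior = 0.5 * base_rate + 0.5 * inside_view
print("initial forecast:", round(prior, 2))

# New evidence arrives (say, reported activity at a test site). Update with Bayes' rule
# using judged likelihoods of seeing that evidence under each hypothesis.
p_evidence_if_test = 0.7
p_evidence_if_no_test = 0.2
posterior = (p_evidence_if_test * prior) / (
    p_evidence_if_test * prior + p_evidence_if_no_test * (1 - prior)
)
print("revised forecast:", round(posterior, 2))
```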

The author follows his own advice by taking conflicting views seriously. He presents both Daniel Kahneman and Nassim Taleb as experts who have made significant arguments against the program of research involved in the Good Judgment Project. Kahneman consistently raises questions about the forms of reasoning and cognitive processes that are assumed by the GJP. More fundamentally, Taleb raises questions about the project itself. Taleb argues in several books that fundamentally unexpected events are key to historical change, and that the incremental forms of forecasting described in the GJP are therefore incapable in principle of keeping up with change (The Black Swan: The Impact of the Highly Improbable, as well as the more recent Antifragile: Things That Gain from Disorder). These are arguments that resonate with the view of change presented in earlier posts and quoted above, and I have some sympathy for them. But Tetlock does a good job of establishing that the situation is not nearly so polarized as Taleb asserts. Many “black swan” events (like the 9/11 attacks) can be treated in a more disaggregated way and are amenable to a degree of forecasting along the lines advocated in the book. So it is a question of degree: is historical change driven more by in-principle unpredictable major events, or by the incremental accumulation of many small causes? Processes that fit the latter pattern are amenable to piecemeal probabilistic forecasting.

Tetlock is not a fan of pundits, for some very good reasons. Most importantly, he argues that the great majority of commentators and prognosticators in the media and cable news are long on self-assurance and short on specificity and accountability. Tetlock argues several important points: first, that it is possible to form reasonable and grounded judgments about future economic, political, and international events; second, that it is crucial to subject this practice to evidence-based assessment; and third, that it is possible to identify the most important styles, heuristics, and analytical approaches that are used by the best forecasters (superforecasters).

(Here is a good article in the New Yorker on Tetlock’s approach; link.)


Social upheaval

image: Monte Carlo simulation of portfolio value

We sometimes think that there is fundamental stability in the social world, or at least an orderly pattern of development to the large social changes that occur. When there are crises — like the financial crisis of 2008 or the riots in London and Stockholm in the past few years — we often try to understand them as deviations from the normal. These are ontological assumptions about the nature of social change.

But really, our desire to perceive order in the things we experience often deceives us. The social world at any given time is a conjunction of an enormous number of contingencies, accidents, and conjunctures. So we shouldn’t be surprised at the occurrence of crises, unexpected turns, and outbreaks of protest and rebellion. It is continuity rather than change that needs explanation.

Social processes and causal sequences have a wide range of profiles. Some social processes — for example, population size — are continuous and roughly linear. These are the simplest processes to project into the future. Others, like the ebb and flow of popular names, spread of a disease, or mobilization over a social cause, are continuous but non-linear, with sharp turning points (tipping points, critical moments, exponential takeoff, hockey stick). And others, like the stock market, are discontinuous and stochastic, with lots of random events pushing prices up and down.

Take unexpected moments of popular uprising — for example, the Arab Spring uprisings or the 2013 riots in Stockholm. Are these best understood as random events, the predictable result of long-running processes, or something else? My preferred answer is something else — in particular, conjunctural intersections of independent streams of causal processes (link). So riots in London or Stockholm are neither fully predictable nor chaotic and random. The fact of growing discontent and unemployment among young people is certainly relevant to the rioting, but many other outcomes were possible — even up to a few weeks before the outbreak. Those background conditions increased the likelihood of civil unrest without making it inevitable. The immediate spark of the Stockholm rioting — the instigating event — was a police shooting of an elderly man (link). This shooting didn’t have to occur, and if it had not, the rioting would not have started at that time.

Moreover, when social tensions rise, various organizations come forward to address the underlying causes of unrest. Organizations focused on improving employment opportunities for young people, improving the quality and civility of policing, and improving social services can have the effect of reducing the likelihood of an outbreak of civil unrest and violence. So social outcomes are subject to a degree of strategic, intentional intervention on the part of individual and collective actors.

Or take the emergence of a novel ideological or religious movement — for example, the Tea Party in the United States or the millenarian Buddhist movements that periodically swept through Late Imperial China (Susan Naquin, Millenarian Rebellion in China: The Eight Trigrams Uprising of 1813; D. Little, Understanding Peasant China: Case Studies in the Philosophy of Social Science). Historians and sociologists can enumerate some background social or cultural conditions that were propitious to the emergence of movements like these in the times and places where they occurred. But when we study movements like these we almost invariably find meaningless contingencies that were crucial to the progress of the movement in its historical circumstances — an especially charismatic leader (Hong Xiuquan in the Taiping movement), a new technology of transportation or communication that showed up on the scene, a period of harsh drought or flooding that led rural people to be more amenable to mobilization around an unfamiliar ideology. Social change is contingent and conjunctural.

In other words, social outcomes are always the result of a complex mix of influences. There are some broad underlying social causes that are relevant to the outbreak of civil unrest or ideological change; there are semi-random events that may serve as a flashpoint stimulating an outbreak; and there are countervailing efforts and strategies that are designed to reduce the likelihood of civil unrest or the spread of heterodox ideas. And this demonstrates that these classes of social phenomena are fundamentally indeterminate; they are best understood as being the consequence of a conjunctural set of processes and events that could have unfolded very differently.

The idea of a Monte Carlo simulation represented in the image above is a valuable tool for thinking about social outcomes. Instead of looking at social processes as single pathways from initial conditions to predictable outcomes, we should instead think of a whole ensemble of scenarios that run forward from a certain starting point, in which we introduce variation in many of the parameters and look at the broad range of outcomes that might have ensued.
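Here is a minimal sketch of that way of thinking, built on an invented toy model of civil unrest: the same starting conditions are run forward ten thousand times with different random contingencies, and the object of interest is the distribution of outcomes across the ensemble rather than a single predicted pathway.

```python
# Toy Monte Carlo ensemble of possible histories; the model and parameters are invented.
import random

def one_history(months=24, tension=0.3):
    """Simulate one possible history; return True if unrest breaks out."""
    for _ in range(months):
        tension += random.gauss(0.01, 0.05)      # slow drift in background tension, plus noise
        tension = min(max(tension, 0.0), 1.0)
        spark = random.random() < 0.05           # idiosyncratic triggering event
        if spark and tension > 0.6:              # a spark only ignites when tension is high
            return True
    return False

random.seed(42)
runs = 10_000
outbreaks = sum(one_history() for _ in range(runs))
print(f"unrest breaks out in {outbreaks / runs:.1%} of simulated histories")
```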

(Here is an earlier post on the use of scenario-based methods of prediction; link.)

Large predictions in history

To what extent is it possible to predict the course of large-scale history — the rise and fall of empires, the occurrence of revolution, the crises of capitalism, or the ultimate failure of twentieth-century Communism? One possible basis for predictions is the availability of theories of underlying processes. To arrive at a supportable prediction about a state of affairs, we might possess a theory of the dynamics of the situation, the mechanisms and processes that interact to bring about subsequent states, and we might be able to model the future effects of those mechanisms and processes. A biologist’s projection of the spread of a disease through an isolated population of birds is an example. Or, second, predictions might derive from the discovery of robust trends of change in a given system, along with an argument about how these trends will aggregate in the future. For example, we might observe that the population density is rising in water-poor southern Utah, and we might predict that there will be severe water shortages in the region in a few decades. However, neither approach is promising when it comes to large historical change.

One issue needs to be addressed early on: the issue of determinate versus probabilistic predictions. A determinate prediction is one for which we have some basis for thinking that the outcome is necessary or inevitable: if you put the Volvo in the five-million-pound laboratory press, it will be crushed. This isn’t a philosophically demanding concept of inevitability; it is simply a reflection of the fact that the Volvo has a known physical structure; it has an approximately known crushing value; and this value is orders of magnitude lower than five million pounds. So it is a practical impossibility that the Volvo will survive uncrushed. A probabilistic prediction, on the other hand, identifies a range of possible outcomes and assigns approximate probabilities to each outcome. Sticking with our test press example — we might subject a steel bridge cable rated at 90,000 pounds of stress to a force of 120,000 pounds. We might assign a 40% probability to failure of the cable and a 60% probability to non-failure; the probability of failure rises as the level of stress is increased. There is a range of values where the probabilities of the two possible outcomes are each meaningfully high, while at extreme values one outcome or the other is a practical impossibility.
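One simple way to represent this kind of probabilistic prediction is a smooth curve giving the probability of failure as a function of applied stress: near zero well below the rated strength, near one far above it, and genuinely uncertain in between. The logistic form and the parameters below are purely illustrative, not engineering data.

```python
# Illustrative failure-probability curve; the logistic form and parameters are invented.
import math

def p_failure(load_lbs, midpoint=125_000, steepness=25_000):
    """Probability of cable failure as a smooth function of applied load."""
    return 1.0 / (1.0 + math.exp(-(load_lbs - midpoint) / steepness))

for load in (60_000, 90_000, 120_000, 180_000, 300_000):
    print(f"{load:>7,} lbs -> P(failure) = {p_failure(load):.2f}")
```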

In general, I believe that large-scale predictions about the course of history are highly questionable. There are several important reasons for this.

One reason for the failure of large-scale predictions about social systems is the complexity of causal influences and interactions within the domain of social causation. We may be confident that X causes Z when it occurs in isolated circumstances. But it may be that when U, V, and W are present, the effect of X is unpredictable, because of the complex interactions and causal dynamics of these other influences. This is one of the central findings of complexity studies — the unpredictability of the interactions of multiple causal powers whose effects are non-linear.

Another difficulty — or perhaps a different aspect of the same difficulty — is the typical fact of path dependency of social processes. Outcomes are importantly influenced by the particulars of the initial conditions, so simply having a good idea of the forces and influences the system will experience over time does not tell us where it will wind up.

Third, social processes are sensitive to occurrences that are singular and idiosyncratic and not themselves governed by systemic properties. If the winter of 1812 had not been exceptionally cold, perhaps Napoleon’s march on Moscow might have succeeded, and the future political course of Europe might have been substantially different. But variations in the weather are not themselves systemically explicable — or at least not within the parameters of the social sciences.

Fourth, social events and outcomes are influenced by the actions of purposive actors. So it is possible for a social group to undertake actions that avert the outcomes that are otherwise predicted. Take climate change and rising ocean levels as an example. We may be able to predict a substantial rise in ocean levels in the next fifty years, rendering existing coastal cities largely uninhabitable. But what should we predict as a consequence of this fact? Societies may pursue different strategies for evading the bad consequences of these climate changes — retreat, massive water control projects, efforts at atmospheric engineering to reverse warming. And the social consequences of each of these strategies are widely different. So the acknowledged fact of global warming and rising ocean levels does not allow clear predictions about social development.

For these and other reasons, it is difficult to have any substantial confidence in predictions of the large course of change that a society, cluster of institutions, or population will experience. And this is a reason in turn to be skeptical about the spate of recent books about the planet’s future. One such example is Martin Jacques’ provocative book about China’s future dominance of the globe, When China Rules the World: The End of the Western World and the Birth of a New Global Order: Second Edition. The Economist paraphrases his central claims this way (link):

He begins by citing the latest study by Goldman Sachs, which projects that China’s economy will be bigger than America’s by 2027, and nearly twice as large by 2050 (though individual Chinese will still be poorer than Americans). Economic power being the foundation of the political, military and cultural kind, Mr Jacques describes a world under a Pax Sinica. The renminbi will displace the dollar as the world’s reserve currency; Shanghai will overshadow New York and London as the centre of finance; European countries will become quaint relics of a glorious past, rather like Athens and Rome today; global citizens will use Mandarin as much as, if not more than, English; the thoughts of Confucius will become as familiar as those of Plato; and so on.

This is certainly one possible future. But it is only one of many scenarios through which China’s future may evolve, and it overlooks the many contingencies and strategies that may lead to very different outcomes.

(I go into more detail on this question in “Explaining Large-Scale Historical Change”; link.)

Scenario-based projections of social processes

As we have noted in previous posts, social outcomes are highly path-dependent and contingent (link, link, link, link). This implies that it is difficult to predict the consequences of even a single causal intervention within a complex social environment including numerous actors — say, a new land use policy, a new state tax on services, or a sweeping cap-and-trade policy on CO2 emissions. And yet policy changes are specifically designed and chosen in order to bring about certain kinds of outcomes. We care about the future; we adopt policies to improve this or that feature of the future; and yet we have a hard time providing a justified forecast of the consequences of the policy.

This difficulty doesn’t only affect policy choices; it also pertains to large interventions like the democracy uprisings in the Middle East and North Africa. There are too many imponderable factors — the behavior of the military, the reactions of other governments, the consequent strategies of internal political actors and parties (the Muslim Brotherhood in Egypt) — so activists and academic experts alike are forced to concede that they don’t really know what the consequences will be.

One part of this imponderability derives from the fact that social changes are conveyed through sets of individual and collective actors. The actors have a variety of motives and modes of reasoning, and the collective actors are forced to somehow aggregate the actions and wants of subordinate actors. And it isn’t possible to anticipate with confidence the choices that the actors will make in response to changing circumstances. At a very high level of abstraction, it is the task of game theory to model strategic decision-making over a sequence of choices (problems of strategic rationality); but the tools of game theory are too abstract to allow modeling of specific complex social interactions.

A second feature of unpredictability in extended social processes derives from the fact that the agents themselves are not fixed and constant throughout the process. The experience of democracy activism potentially changes the agent profoundly — so the expectations we would have had of his/her choices at the beginning may be very poorly grounded by the middle and end. Some possible changes may make a very large difference in outcomes — actors may become more committed, more open to violence, more ready to compromise, more understanding of the grievances of other groups, … This is sometimes described as endogeneity — the causal components themselves change their characteristics as a consequence of the process.

So the actors change through the social process; but the same is often true of the social organizations and institutions that are involved in the process. Take contentious politics — it may be that a round of protests begins around a couple of loose pre-existing organizations. As actors seek to achieve their political goals through collective action, they make use of the organizations for their communications and mobilization resources. But some actors may then also attempt to transform the organization itself — to make it more effective or to make it more accommodating to the political objectives of this particular group of activists. (Think of Lenin as an innovator in revolutionary organization.) And through their struggles, they may elicit changes in the organizations of the “forces of order” — the police may create new tactics (kettling) and new sub-organizations (specialized intelligence units). So the process of change is likely enough to transform all the causal components as well — the agents and their motivations as well as the surrounding institutions of mobilization and control. Rather than a set of billiard balls and iron rods with fixed properties and predictable aggregate consequences, we find a fluid situation in which the causal properties of each of the components of the process are themselves changing.

One way of trying to handle the indeterminacy and causal complexity of these sorts of causal processes is to give up on the goal of arriving at specific “point” predictions about outcomes and instead concentrate on tracing out a large number of possible scenarios, beginning with the circumstances, actors, and structures on the ground. In some circumstances we may find that there is a very wide range of possible outcomes; but we may find that a large percentage of the feasible scenarios or pathways fall within a much narrower range. This kind of reasoning is familiar to economists and financial analysts in the form of Monte Carlo simulations. And it is possible that the approach can be used for modeling likely outcomes in more complex social processes as well — war and peace, ethnic conflict, climate change, or democracy movements.

Agent-based modeling is one component of approaches like these (link). This means taking into account a wide range of social factors — agents, groups, organizations, institutions, states, popular movements — and then modeling the consequences of these initial assumptions. Robert Axelrod and colleagues have applied a variety of modeling techniques to these efforts (link).
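To give a flavor of the approach, here is a minimal agent-based sketch in the spirit of threshold models of collective behavior (Granovetter): each agent joins a protest once the share of others already protesting exceeds the agent’s personal threshold. This illustrates the style of modeling rather than reconstructing Axelrod’s own models.

```python
# Minimal threshold model of protest mobilization; illustrative only.
import random

def run_cascade(n_agents=1000, seed=None):
    rng = random.Random(seed)
    thresholds = [rng.random() for _ in range(n_agents)]   # heterogeneous dispositions
    protesting = [t < 0.01 for t in thresholds]            # a few initial activists
    while True:
        share = sum(protesting) / n_agents
        updated = [p or t <= share for p, t in zip(protesting, thresholds)]
        if updated == protesting:                          # nobody else joins
            return share
        protesting = updated

final_sizes = [run_cascade(seed=s) for s in range(20)]
print("final protest share across 20 simulated populations:")
print([round(s, 2) for s in final_sizes])
```

Even though the simulated populations are statistically identical, the final protest sizes typically differ noticeably from run to run, because the outcome depends on the particular draw of thresholds; this is a small-scale echo of the point about contingency made above.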

Another interesting attempt to carry out such a project is underway at the RAND Pardee Center, summarized in a white paper called Shaping the Next One Hundred Years: New Methods for Quantitative, Long-Term Policy Analysis. Here is how the lead investigators describe the overall strategy of the effort:

This report describes and demonstrates a new, quantitative approach to long-term policy analysis (LTPA).  These robust decisionmaking methods aim to greatly enhance and support humans’ innate decisionmaking capabilities with powerful quantitative analytic tools similar to those that have demonstrated unparalleled effectiveness when applied to more circumscribed decision problems.  By reframing the question “What will the long-term future bring?” as “How can we choose actions today that will be consistent with our long-term interests?” robust decisionmaking can harness the heretofore unavailable capabilities of modern computers to grapple directly with the inherent difficulty of accurate long-term prediction that has bedeviled previous approaches to LTPA. (iii)

LTPA is an important example of a class of problems requiring decisionmaking under conditions of  deep uncertainty—that is, where analysts do not know, or the parties to a decision cannot agree on, (1) the appropriate conceptual models that describe the relationships among the key driving forces that will shape the long-term future, (2) the probability distributions used to represent uncertainty about key variables and parameters in the mathematical representations of these conceptual models, and/or (3) how to value the desirability of alternative outcomes. (iii)

And here, in a nutshell, is how the approach is supposed to work:

This study proposes four key elements of successful LTPA:

• Consider large ensembles (hundreds to millions) of scenarios.
• Seek robust, not optimal, strategies.
• Achieve robustness with adaptivity.
• Design analysis for interactive exploration of the multiplicity of plausible futures.

These elements are implemented through an iterative process in which the computer helps humans create a large ensemble of plausible scenarios, where each scenario represents one guess about how the world works (a future state of the world) and one choice of many alternative strategies that might be adopted to influence outcomes. Ideally, such ensembles will contain a sufficiently wide range of plausible futures that one will match whatever future, surprising or not, does occur—at least close enough for the purposes of crafting policies robust against it.  (xiii)

Thus, computer-guided exploration of scenario and decision spaces can provide a prosthesis for the imagination, helping humans, working individually or in groups, to discover adaptive near-term strategies that are robust over large ensembles of plausible futures. (xiv)

The hard work of this approach is to identify the characteristics of policy levers, exogenous uncertainties, measures, and relationships (XLRM). The analysis then turns to identifying a very large number of possible scenarios, depending on the initial conditions and the properties of the actors and organizations. (This aspect of the analysis is analogous to multiple plays of a simulation game like SimCity.) Finally, the approach requires aggregating the large number of scenarios to allow the analysis to reach some conclusions about the distribution of futures entailed by the starting position and the characteristics of the actors and institutions. And the method attempts to assign a measure of “regret” to outcomes, in order to assess the policy steps that might be taken today that lead to the least regrettable outcomes in the distant future.
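Here is a stripped-down sketch of the regret idea: score each candidate strategy in every scenario, define its regret in a scenario as the gap to the best-performing strategy there, and prefer the strategy whose worst-case regret across the ensemble is smallest. The strategies and payoff numbers are invented, and the real RAND analysis generates its scenarios from simulation models rather than a hand-written table.

```python
# Toy minimax-regret comparison of strategies across a small scenario ensemble.

payoffs = {
    # strategy: payoff in (boom, stagnation, crisis) scenarios; numbers are invented
    "aggressive": (10, 4, -6),
    "moderate":   ( 7, 5,  1),
    "defensive":  ( 3, 3,  2),
}
scenarios = range(3)

# Best achievable payoff in each scenario, across all strategies.
best_in_scenario = [max(p[s] for p in payoffs.values()) for s in scenarios]

def max_regret(strategy):
    """Worst-case gap between this strategy and the best strategy, over all scenarios."""
    return max(best_in_scenario[s] - payoffs[strategy][s] for s in scenarios)

for strategy in payoffs:
    print(strategy, "worst-case regret:", max_regret(strategy))

robust_choice = min(payoffs, key=max_regret)
print("minimax-regret choice:", robust_choice)
```

Notice that the minimax-regret choice in this toy table is not the best strategy in any single scenario; that is the intended contrast between robust and merely optimal strategies.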

It appears, then, that there are computational tools and methods that may prove useful for social explanation and social prediction — not of single outcomes, but of the range of outcomes that may be associated with a set of interventions, actors, and institutions.

The inexact science of economics

Image: social accounting matrix, Bolivia, 1997

Economics is an “inexact” science; or so Daniel Hausman argues in The Inexact and Separate Science of Economics (Google Books link). As the term implies, this description conveys that economic laws have only a loose fit with observed economic behavior. Here are the loosely related interpretations that Hausman offers for this idea, drawing on the thinking of John Stuart Mill:

  1. Inexact laws are approximate.  They are true within some margin of error.
  2. Inexact laws are probabilistic or statistical.  Instead of stating how human beings always behave, economic laws state how they usually behave.
  3. Inexact laws make counterfactual assertions about how things would be in the absence of interferences.
  4. Inexact laws are qualified with vague ceteris paribus clauses. (128)

Economics has also been treated by economists as a separate science: a science capable of explaining virtually all the phenomena in a reasonably well-defined domain of social phenomena.  Here is Hausman’s interpretation of a separate science:

  1. Economics is defined in terms of the causal factors with which it is concerned, not in terms of a domain.
  2. Economics has a distinct domain, in which its causal factors predominate.
  3. The “laws” of the predominating causal factors are already reasonably well-known.
  4. Thus, economic theory, which employs these laws, provides a unified, complete, but inexact account of its domain. (90-91)

These characteristics of economic theories and models have implications for several important areas: truth, prediction, explanation, and confirmation.  Is economics a scientific theory of existing observable economic phenomena?  Or is it an abstract, hypothetical model with only tangential implications for the observable social world?  Is economics an empirical science or a mathematical system?

Let’s look at these questions in turn.  First, can we give a good interpretation of what it would mean to believe that an inexact theory or law is “true”?  Here is a possible answer: we may believe that there are real but unobservable causal processes that “drive” social phenomena.  To say that a social or economic theory is true is to say that it correctly identifies a real causal process — whether or not that process operates with sufficient separation to give rise to strict empirical consequences.  Galilean laws of mechanics are true for falling objects, even if feathers follow unpredictable trajectories through turbulent gases.

Second, how can we reconcile the desire to use economic theories to make predictions about future states with the acknowledged inexactness of those theories and laws? If a theory includes hypotheses about underlying causal mechanisms that are true in the sense just mentioned, then a certain kind of prediction is justified as well: “in the absence of confounding causal factors, the presence of X will give rise to Y.” But of course this is a useless predictive statement in the current situation, since the whole point is that economic processes rarely or never operate in isolation. So we are more or less compelled to conclude that theories based on inexact laws are not a useable ground for empirical prediction.

Third, in what sense do the deductive consequences of an inexact theory “explain” a given outcome — either one that is consistent with those consequences or one that is inconsistent with them? Here inexact laws are on stronger ground: after the fact, it is often possible to demonstrate that the mechanisms that led to an outcome are those specified by the theory. Explanation and prediction are not equivalent. Natural selection explains the features of Darwin’s finches — but it doesn’t permit prediction of future evolutionary change.

And finally, what is involved in trying to use empirical data to confirm or disconfirm an inexact theory?  Given that we have stipulated that the theory has false consequences, we can’t use standard confirmation theory.  So what kind of empirical argument would help provide empirical evaluation of an inexact theory?  One possibility is that we might require that the predictions of the theory should fall within a certain range of the observable measurements — which is implied by the idea of “approximately true” consequences.  But actually, it is possible that we might hold that a given theory is inexact, true, and wildly divergent from observed experience.  (This would be true of the application of classical mechanics to the problem of describing the behavior of light, irregular objects shot out of guns under water.)  Hausman confronts this type of issue when he asks why we should believe that the premises of general equilibrium theory are true. But here too there are alternatives, including piecemeal confirmation of individual causal hypotheses. Hausman refers to this possibility as a version of Mill’s deductive method.

I take up some of these questions in my article, “Economic Models in Development Economics” (link), included in On the Reliability of Economic Models: Essays in the Philosophy of Economics. This article discusses some related questions about the reliability and applicability of computable general equilibrium (CGE) models to the observed behavior of real economies. Here are some concluding thoughts from that article concerning the empirical and logical features that are relevant to the assessment of CGE models:

“The general problem of the antecedent credibility of an economic model can be broken down into more specific questions concerning the validity, comprehensiveness, robustness, reliability, and autonomy of the model. I will define these concepts in the following terms.

  • Validity is a measure of the degree to which the assumptions employed in the construction of the model are thought to correspond to the real processes underlying the phenomena represented by the model.
  • Comprehensiveness is the degree to which the model is thought to succeed in capturing the major causal factors that influence the features of the behavior of the system in which we are interested.
  • Robustness is a measure of the degree to which the results of the model persist under small perturbations in the settings of parameters, formulation of equations, etc.
  • Autonomy refers to the stability of the model’s results in face of variation of contextual factors.
  • Reliability is a measure of the degree of confidence we can have in the data employed in setting the values of the parameters.

These are epistemic features of models that can be investigated more or less independently and prior to examination of the empirical success or failure of the predictions of the model.”
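To make the robustness criterion concrete, here is a minimal sketch of the kind of check it calls for: perturb the model’s parameters slightly and see whether its headline result persists. The “model” here is a one-line stand-in growth projection, not a real CGE model, and the perturbation ranges are arbitrary.

```python
# Toy robustness check: does the headline result survive small parameter perturbations?
import random

def model(output0, growth_rate, years=10):
    """Stand-in for a calibrated model: project output forward at a constant growth rate."""
    return output0 * (1 + growth_rate) ** years

random.seed(1)
baseline = model(100.0, 0.03)
perturbed = [
    model(100.0 * random.uniform(0.98, 1.02),        # +/- 2% in the initial output level
          0.03 + random.uniform(-0.005, 0.005))      # +/- 0.5 point in the growth rate
    for _ in range(1000)
]
low, high = min(perturbed), max(perturbed)
print(f"baseline: {baseline:.1f}; range under perturbation: {low:.1f} to {high:.1f}")
```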

(Hausman’s book is virtually definitive in its formulation of the tasks and scope of the philosophy of economics.  When conjoined with the book he wrote with Michael McPherson, Economic Analysis, Moral Philosophy and Public Policy, the philosophy of economics itself becomes a “separate science”: virtually all the important questions are raised throughout a bounded domain, and a reasonable set of theories are offered to answer those questions.)

Predictions

Image: Artillery, 1911. Roger de La Fresnaye. Metropolitan Museum, New York


In general I’m skeptical about the ability of the social sciences to offer predictions about future social developments. (In this respect I follow some of the instincts of Oskar Morgenstern in On the Accuracy of Economic Observations.) We have a hard time answering questions like these:

  • How much will the first installment of TARP improve the availability of credit within three months?
  • Will the introduction of UN peacekeeping units reduce ethnic killings in the Congo?
  • Will the introduction of small high schools improve student performance in Chicago?
  • Will China develop towards more democratic political institutions in the next twenty years?
  • Will American cities witness another round of race riots in the next twenty years?

However, the situation isn’t entirely negative, and there certainly are some social situations for which we can offer predictions in at least a probabilistic form. Here are some examples:

  • The unemployment rate in Michigan will exceed 10% sometime in the next six months.
  • Coalition casualties in the Afghanistan war will be greater in 2009 than in 2008.
  • Illinois Governor Blagojevich will leave office within six months.
  • Germany will be the world leader in solar energy research by 2020 (link).
  • The Chinese government will act strategically to prevent emergence of regional independent labor organizations.

It is worth exploring the logic and function of prediction for a few lines. Fundamentally, it seems that prediction is related to the effort to forecast the effects of interventions, the trajectory of existing trends, and the likely strategies of powerful social actors. We often want to know what will be the net effect of introducing X into the social environment. (For example, what effect on economic development would result from a region’s succeeding in increasing the high school graduation rate from 50% to 75%?) We may find it useful to project into the future some social trends that can be observed in the present. (Demographers’ prediction that the United States will be a “majority-minority” population by 2042 falls in this category (link).) And we can often do quite a bit of rigorous reasoning about the likely actions of leaders, policy makers, and other powerful actors given what we know about their objectives and their beliefs. (We can try to forecast the outcome of the current impasse between Russia and Ukraine over natural gas by analyzing the strategic interests of both sets of decision-makers and the constraints to which they must respond.)

So the question is, what kinds of predictions can we make in the social realm? And what circumstances limit our ability to predict?

Predictions about social phenomena are based on a couple of basic modes of reasoning:

  • extrapolation of current trends
  • modeling of causal hypotheses about social mechanisms and structures
  • reasoning about strategic actions likely to be taken by actors
  • derivation of future states of a system from a set of laws

And predictions can be presented in a range of levels of precision, specificity, and confidence:

  • prediction of a single event or outcome: the selected social system will be in state X at time T.
  • prediction of the range within which a variable will fall: the selected social variable will fall within a range Q ±20%.
  • prediction of the range of outcome scenarios that are most likely: “Given current level of unrest, rebellion 60%, everyday resistance 30%, resolution 10%”
  • prediction of the direction of change: the variable of interest will increase/decrease over the specified time period
  • prediction of the distribution of properties over a group of events/outcomes. X percent of interventions will show improvement of variable Y.

Here are some particular obstacles to reliable predictions in the social realm:

  • unquantifiable causal hypotheses — “small schools improve student performance”. How large is the effect? How does it weigh in relation to other possible causal factors?
  • indeterminate interaction effects — how will school policy changes interact with rising unemployment to jointly influence school attendance and performance?
  • open causal fields. What other currently unrecognized causal factors are in play?
  • the occurrence of unpredictable exogenous events or processes (outbreak of disease)
  • ceteris paribus conditions. These are frequently unsatisfied.

So where does all this leave us with respect to social predictions? A few points seem relatively clear.

Specific prediction of singular events and outcomes seems particularly difficult: the collapse of the Soviet Union, China’s decision to cross the Yalu River in the Korean War, or the onset of the Great Depression were all surprises to the experts.

Projection of stable trends into the near future seems most defensible — though of course we can give many examples of discontinuities in previously stable trends. Projection of trends over the medium and long term is more uncertain, given the likelihood of intervening changes of structure, behavior, and environment that will alter the trends over the extended time.
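Here is a minimal sketch of what near-term trend projection looks like in practice: fit a line to an observed series and project it forward, letting the uncertainty band widen with the horizon. The series is invented, and the widening rule is a crude placeholder rather than a proper prediction interval; it simply registers the point that confidence should decay as the horizon lengthens.

```python
# Toy trend extrapolation with a widening uncertainty band; data and band rule are invented.
import statistics

years = list(range(2000, 2010))
values = [50.1, 51.0, 52.2, 52.8, 54.1, 54.9, 56.2, 56.8, 58.1, 58.7]

# Ordinary least-squares slope and intercept.
xbar, ybar = statistics.mean(years), statistics.mean(values)
slope = sum((x - xbar) * (y - ybar) for x, y in zip(years, values)) / sum(
    (x - xbar) ** 2 for x in years
)
intercept = ybar - slope * xbar

# Typical size of the in-sample residuals, used as a rough scale for uncertainty.
resid_sd = statistics.pstdev([y - (intercept + slope * x) for x, y in zip(years, values)])

for horizon in (1, 5, 10):
    year = years[-1] + horizon
    point = intercept + slope * year
    band = 2 * resid_sd * (1 + 0.1 * horizon)   # crude widening, not a real confidence interval
    print(f"{year}: {point:.1f} ± {band:.1f}")
```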

Predictions of limited social outcomes, couched in terms of a range of possibilities attached to estimates of probabilities and based on analysis of known causal and strategic processes, also appear defensible. The degree of confidence we can have in such predictions is limited by the possibility of unrecognized intervening causes and processes.

The idea of forecasting the total state of a social system given information about the current state of the system and a set of laws of change is entirely indefensible. This is unattainable; societies are not systems of variables linked by precise laws of transition.

Correspondence, abstraction, and realism

Science is generally concerned with two central semantic features of theories: truth of theoretical hypotheses and reliability of observational predictions. (Philosophers understand the concept of semantics as encompassing the relations between a sentence and the world: truth and reference. This understanding connects with the ordinary notion of semantics as meaning, in that the truth conditions of a sentence are thought to constitute the meaning of the sentence.) Truth involves a correspondence between hypothesis and the world; while predictions involve statements about the observable future behavior of a real system. Science is also concerned with epistemic values: warrant and justification. The warrant of a hypothesis is a measure of the degree to which available evidence permits us to conclude that the hypothesis is approximately true. A hypothesis may be true but unwarranted (that is, we may not have adequate evidence available to permit confidence in the truth of the hypothesis). Likewise, however, a hypothesis may be false but warranted (that is, available evidence may make the hypothesis highly credible, while it is in fact false). And every science possesses a set of standards of hypothesis evaluation on the basis of which practitioners assess the credibility of their theories–for example, testability, success in prediction, inter-theoretical support, simplicity, and the like.

The preceding suggests that there are several questions that arise in the assessment of scientific theories. First, we can ask whether a given hypothesis is a good approximation of the underlying social reality–that is, the approximate truth of the hypothesis. Likewise, we can ask whether the hypothesis gives rise to true predictions about the future behavior of the underlying social reality. Each of these questions falls on the side of the truth value of the hypothesis. Another set of questions concerns the warrant of the hypothesis: the strength of the evidence and theoretical grounds available to us on the basis of which we assign a degree of credibility to the hypothesis. Does available evidence give us reason to believe that the hypothesis is approximately true, and does available evidence give us reason to expect that the hypothesis’s predictions are likely to be true? These questions are centrally epistemic; answers to them constitute the basis of our scientific confidence in the truth of the hypothesis and its predictions.

It is important to note that the question of the approximate truth of the hypothesis is separate from that of the approximate truth of its predictions. It is possible that the hypothesis is approximately true but its predictions are not. This might be the case because the ceteris paribus conditions are not satisfied, or because low precision of estimates for exogenous variables and parameters leads to indeterminate predictive consequences. Therefore it is possible that the warrant attaching to the approximate truth of the hypothesis and the reliability of its predictions may be different. It may be that we have good reason to believe that the hypothesis is a good approximation of the underlying economic reality, while at the same time we have little reason to rely on its predictions about the future behavior of the system. The warrant of the hypothesis is high on this account, while the warrant of its predictions is low.

Whatever position we arrive at concerning the possible truth or falsity of a given economic hypothesis, it is plain that this cannot be understood as literal descriptive truth. Economic hypotheses are not offered as full and detailed representations of the underlying economic reality. For a hypothesis unavoidably involves abstraction, in at least two ways. First, the hypothesis deliberately ignores some empirical characteristics and causal processes of the underlying economic reality. Just as a Newtonian hypothesis of the ballistics of projectiles ignores air resistance in order to focus on gravitational forces and the initial momentum of the projectile, so an economic hypothesis ignores differences in consumption behavior among members of functionally defined income groups. Likewise, a hypothesis may abstract from regional or sectional differences in prices or wage rates within a national economy. Daniel Hausman provides an excellent discussion of the scope and limits of economic theories in The Inexact and Separate Science of Economics.

Another epistemically significant feature of social hypotheses is the difficulty of isolating causal factors in real social or economic systems. Hypotheses are generally subject to ceteris paribus conditions. Predictions and counterfactual assertions are advanced conditioned by the assumption that no other exogenous causal factors intervene; that is, the assertive content of the hypothesis is that the economic processes under analysis will unfold in the described manner absent intervening causal factors. But if there are intervening causal factors, then the overall behavior of the system may be indeterminate. In some cases it is possible to specify particularly salient interfering causal factors (e.g. political instability). But it is often necessary to incorporate open-ended ceteris paribus conditions as well.

Finally, social theories and hypotheses unavoidably make simplifying or idealizing assumptions about the populations, properties, and processes that they describe. Consumers are represented as possessing consistent and complete preference rankings; firms are represented as making optimizing choices of products and technologies; product markets are assumed to function perfectly; and so on.

Given, then, that hypotheses abstract from reality, in what sense does it make sense to ask whether a hypothesis is true? We must distinguish between truth and completeness, to start with. To say that a description of a system is true is not to say that it is a complete description. (A complete description provides a specification of the value of all state variables for the system–that is, all variables that have a causal role in the functioning of the system.) The fact that hypotheses are abstractive demonstrates only that they are incomplete, not that they are false. A description of a hockey puck’s trajectory on the ice that assumes a frictionless surface is a true account of some of the causal factors at work: the Newtonian mechanics of the system. The assumption that the surface of the ice is frictionless is false; but in this particular system the overall behavior of the system (with friction) is sufficiently close to the abstract hypothesis (because frictional forces are small relative to other forces affecting the puck). In this case, then, we can say two things: first, the Newtonian hypothesis is exactly true as a description of the forces it directly represents, and second, it is approximately true as a description of the system as a whole (because the forces it ignores are small).

This account takes a strongly realist position on social theory, in that it characterizes truth in terms of correspondence to unobservable entities, processes, or properties. The presumption here is that social systems generally–and economic systems in particular–have objective unobservable characteristics which it is the task of social science theory to identify. The realist position is commonly challenged by some economists, however. Milton Friedman famously argued for an instrumentalist interpretation of economic theory (Essays in Positive Economics). The instrumentalist position maintains that it is a mistake to understand theories as referring to real unobservable entities. Instead, theories are simply ways of systematizing observable characteristics of the phenomena under study; the only purpose of scientific theory is to serve as an instrument for prediction. Along these lines, Friedman argues that the realism of economic premises is irrelevant to the warrant of an economic theory; all that matters is the overall predictive success of the theory. In this context, however, the instrumentalist approach is highly unpersuasive as an interpretation of the epistemic standing of economic hypotheses. Instead, the realist position appears to be inescapable: we are forced to treat general equilibrium theory as a substantive empirical hypothesis about the real workings of competitive market systems, and our confidence in general equilibrium hypotheses is limited by our confidence in the approximate truth of the general equilibrium theory.

Polling and social knowledge

Here’s a pretty interesting graphic from Pollster.com:


As you can see, the graph summarizes a large number of individual polls measuring support for the two major party candidates from January 1 to October 26. The site indicates that it includes all publicly available polls during the time period. Each poll result is represented with two markers — blue for Obama and red for McCain. The red and blue trend lines are “trend estimates” based on local regressions for the values of the corresponding measurements for a relatively short interval of time (the site doesn’t explicitly say what the time interval is). So, for example, the trend estimate for August 1 appears to be approximately 47%:42% for the two candidates. As the site explains, 47% is not the average of poll results for Obama on August 1; instead, it is a regression result based on the trend of all of Obama’s polling results for the previous several days.

There are a couple of things to observe about this graph and the underlying methodology. First, it’s a version of the “wisdom of the crowd” idea, in that it arrives at an estimate based on a large number of less-reliable individual observations (the dozen or so polling results from the previous several days). Each of the individual poll results has a margin of error that may be in the range of 3-5 percentage points; the hope is that the aggregate result has a higher degree of precision (a narrower error bar).
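A back-of-the-envelope calculation shows why aggregation can help. If the component polls were independent samples of similar size, the margin of error of the pooled estimate would shrink roughly with the square root of the total sample size. The figures below are illustrative assumptions, not Pollster.com’s actual inputs.

```python
import math

# Illustrative assumption: a dozen independent polls of ~800 respondents each,
# with support near 47%. These are not Pollster.com's actual figures.
p = 0.47
n_single = 800
n_polls = 12

def margin_of_error(p, n, z=1.96):
    """95% margin of error for a simple random sample proportion."""
    return z * math.sqrt(p * (1 - p) / n)

print(f"single poll:  +/- {margin_of_error(p, n_single):.1%}")            # ~3.5 points
print(f"pooled polls: +/- {margin_of_error(p, n_single * n_polls):.1%}")  # ~1.0 point
```

In practice the component polls are not independent draws from a single population (house effects, differing likely-voter screens, overlapping field dates), so the real gain in precision is smaller than this idealized calculation suggests.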

Second, the methodology attempts to incorporate an estimate of the direction and rate of movement of public opinion, by incorporating trend information based on the prior several days’ polling results.

Third, it is evident that there is likely to be a range of degrees of credibility assigned to the various component polls; but the methodology doesn’t assign greater weight to “more credible” polls. Ordinary readers might be inclined to assign greater weight to a Gallup poll or a CBS poll than a Research2000 or a DailyKos poll; but the methodology treats all results equally. Likewise, the critical reader might assign more credibility to a live phone-based poll than an internet-based or automated phone poll; but this version of the graph includes all polls. (On the website it is possible to filter out internet-generated or automated phone polling results; this doesn’t seem to change the shape of the results noticeably.)

There is also a fundamental question of validity and reliability that the critical reader needs to ask: how valid and reliable are these estimates for a particular point in time? That is, how likely is it that the trend estimate of support for either candidate on a particular day is within a small range of error of the actual value? I assume there is some statistical method for estimating probable error for this methodology, though it doesn’t appear to be explained on the website. But fundamentally, the question is whether we have a rational basis for drawing any of the inferences that the graph suggests — for example, that Obama’s lead over McCain is narrowing in the final 14 days of the race.

Finally, there is the narrative that we can extract from the graph, and it tells an interesting story. From January through March candidate Obama has a lead over candidate McCain; but of course both candidates are deeply engaged in their own primary campaigns. At the beginning of April the candidates are roughly tied at 45%. From April through the summer Obama rises slowly and maintains support at about 48%, while McCain falls in support until he reaches a low point of 43% at the beginning of August. Then the conventions take place in August and early September — and McCain’s numbers bump up to the point where Obama and McCain cross in the first week of September. McCain takes a brief lead in the trend estimates; his ticket seems to derive more benefit from its “convention bump” than Obama’s does. But in the early part of September the national financial crisis leaps to center stage and the two candidates fare very differently. Obama’s support rises steeply and McCain’s support falls at about the same rate, opening up a 7-percentage-point gap in the trend estimates by the middle of October. From the middle of October the race begins to tighten; McCain’s support picks up and Obama’s begins to dip slightly at the end of October. But the election looms — the trend estimates tell a story that’s hard to read in any way other than “too little, too late” for the McCain campaign.

And, of course, it will be fascinating to see where things stand a week from today.

Here is the explanation that the website offers of its methodology:

[quoting from Pollster.com:]
“Where do the numbers come from?

When you hold the mouse pointer over a state, you see a display of the latest “trend estimate” numbers from our charts of all available public polls for that race. The numbers for each candidate correspond to the most recent trend estimate — that is the end point of the trend line that we draw for each candidate. If you click the state on the map, you will be taken to the page on Pollster.com that displays the chart and table of polls results for that race.

In most cases, the numbers are not an “average” but rather regression based trendlines. The specific methodology depends on the number of polls available.

  • If we have at least 8 public polls, we fit a trend line to the dots represented by each poll using a “Loess” iterative locally weighted least squares regression.
  • If we have between 4 and 7 polls, we fit a linear regression trend line (a straight line) to best fit the points.
  • If we have 3 polls or fewer, we calculate a simple average of the available surveys.

How do regression trend lines differ from simple averages?

Charles Franklin, who created the statistical routines that plot our trend lines, provided the following explanation last year:

Our trend estimate is just that, an estimate of the trends and where the race stands as of the latest data available. It is NOT a simple average of recent polling but a “local regression” estimate of support as of the most recent poll. So if you are trying to [calculate] our trend estimates from just averaging the recent polls, you won’t succeed.

Here is a way to think about this: suppose the last 5 polls in a race are 25, 27, 29, 31 and 33. Which is a better estimate of where the race stands today? 29 (the mean) or 33 (the local trend)? Since support has risen by 2 points in each successive poll, our estimator will say the trend is currently 33%, not the 29% the polls averaged over the past 2 or 3 weeks during which the last 5 polls were taken. Of course real data are more noisy than my example, so we have to fit the trend in a more complicated way than the example, but the logic is the same. Our trend estimates are local regression predictions, not simple averaging. If the data have been flat for a while, the trend and the mean will be quite close to each other. But if the polls are moving consistently either up or down, the trend estimate will be a better estimate of opinion as of today while the simple average will be an estimate of where the race was some 3 polls ago (for a 5 poll average– longer ago as more polls are included in the average.) And that’s why we estimate the trends the way we do.”
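Franklin’s toy example is easy to reproduce. The sketch below fits an ordinary least-squares line to the five hypothetical polls and evaluates it at the most recent time point; this is only a stand-in for the site’s loess (locally weighted) routine, but it shows the same contrast between the simple mean and a trend estimate.

```python
import numpy as np

# Franklin's hypothetical sequence of five polls, equally spaced in time.
t = np.array([0, 1, 2, 3, 4], dtype=float)
y = np.array([25, 27, 29, 31, 33], dtype=float)

simple_average = y.mean()                    # 29.0 -- "where the race was"

# Fit a straight trend line and read it off at the most recent poll (t = 4).
slope, intercept = np.polyfit(t, y, deg=1)
trend_estimate = slope * t[-1] + intercept   # 33.0 -- "where the race stands"

print(f"simple average: {simple_average:.1f}")
print(f"trend estimate: {trend_estimate:.1f}")
```

With real, noisy series one would replace the single straight line with a locally weighted regression (for example, the lowess routine in statsmodels), which is closer in spirit to the method Franklin describes.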

Policy, treatment, and mechanism

Policies are selected in order to bring about some desired social outcome or to prevent an undesired one. Medical treatments are applied in order to cure a disease or to ameliorate its effects. In each case an intervention is performed in the belief that this intervention will causally interact with a larger system in such a way as to bring about the desired state. On the basis of a body of beliefs and theories, we judge that T in circumstances C will bring about O with some degree of likelihood. If we did not have such a belief, then there would be no rational basis for choosing to apply the treatment. “Try something, try anything” isn’t exactly a rational basis for policy choice.

In other words, policies and treatments depend on the availability of bodies of knowledge about the causal structure of the domain we’re interested in — what sorts of factors cause or inhibit what sorts of outcomes. This means we need to have some knowledge of the mechanisms that are at work in this domain. And it also means that we need to have some degree of ability to predict some future states — “If you give the patient an aspirin her fever will come down” or “If we inject $700 billion into the financial system the stock market will recover.”

Predictions of this sort could be grounded in two different sorts of reasoning. They might be purely inductive: “Clinical studies demonstrate that administration of an aspirin has a 90% probability of reducing fever.” Or they could be based on hypotheses about the mechanisms that are operative: “Fever is caused by C; aspirin reduces C in the bloodstream; therefore we should expect that aspirin reduces fever by reducing C.” And ideally we would hope that both forms of reasoning are available — that causal expectations are borne out by clinical evidence.
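As a purely illustrative sketch, with invented numbers and an invented mechanism rather than clinical facts, the two forms of warrant can be put side by side: the inductive estimate comes directly from trial frequencies, while the mechanism-based estimate is derived from an assumed causal model of how the intervention works.

```python
# Purely illustrative: invented numbers and an invented mechanism, not clinical data.

# 1. Inductive warrant: estimate effectiveness directly from trial outcomes.
treated, improved = 200, 180
inductive_estimate = improved / treated                # 0.90

# 2. Mechanism-based warrant: derive the same quantity from an assumed model.
#    Suppose fever persists only while mediator C stays elevated, and the
#    treatment clears C in 90% of patients (an assumed parameter).
p_treatment_clears_C = 0.90
p_fever_resolves_if_C_cleared = 1.0
mechanism_estimate = p_treatment_clears_C * p_fever_resolves_if_C_cleared

print(f"inductive estimate:   {inductive_estimate:.2f}")
print(f"mechanistic estimate: {mechanism_estimate:.2f}")
```

When the two estimates agree, as they do here by construction, we have both kinds of warrant at once; when they diverge, the divergence is itself a sign that the assumed mechanism is wrong or incomplete.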

Implicitly this story assumes that the relevant causal systems are pretty simple — that there are only a few causal pathways and that it is possible to isolate them through experimental studies. We can then insert our proposed interventions into the causal diagram and have reasonable confidence that we can anticipate their effects. The logic of clinical trials as a way of establishing efficacy depends on this assumption of causal simplicity and isolation.

But what if the domain we’re concerned with isn’t like that? Suppose instead that there are many causal factors and a high degree of causal interdependence among the factors. And suppose that we have only limited knowledge of the strength and form of these interdependencies. Is it possible to make rationally justified interventions within such a system?

This description comes pretty close to what are referred to as complex systems. And the most basic finding in the study of complex systems is the extreme difficulty of anticipating future system states: small interventions or variations in boundary conditions produce massive variations in later system states. This is bad news for policy makers who hope to “steer” a complex system towards a more desirable state. There are good analytical reasons for thinking that they will not be able to anticipate the nature or magnitude or even direction of the effects of an intervention.
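The standard textbook illustration of this kind of sensitivity is a simple non-linear recurrence such as the logistic map. The sketch below uses conventional parameter values and is not a model of any particular social system; it simply shows two trajectories that begin a tiny distance apart and soon diverge.

```python
# Logistic map x_{n+1} = r * x_n * (1 - x_n): a standard illustration of
# sensitive dependence on initial conditions, not a model of any social system.

def trajectory(x0, r=3.9, steps=30):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = trajectory(0.200000)
b = trajectory(0.200001)   # a "small intervention" in the starting condition

for n in (0, 10, 20, 30):
    print(f"step {n:2d}: {a[n]:.4f} vs {b[n]:.4f}  (gap {abs(a[n] - b[n]):.4f})")
```

By the later steps the two runs typically bear little resemblance to one another, even though the initial difference was one part in a million.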

The study of complex systems is a collection of areas of research in mathematics, economics, and biology that attempt to arrive at better ways of modeling and projecting the behavior of systems with these complex causal interdependencies. This is an exciting field of research at places like the Santa Fe Institute and the University of Michigan. One important tool that has been extensively developed is agent-based modeling — essentially, the effort to derive system properties as the aggregate result of the activities of independent agents at the micro-level. And a fairly durable result has emerged: run a model of a complex system a thousand times and you will get a wide distribution of outcomes. This means that we need to think of complex systems as being highly contingent and path-dependent in their behavior. The effect of an intervention may be a wide distribution of future states.
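Here is a minimal sketch of the “run it a thousand times” point, using a Pólya-urn process, a standard toy model of increasing returns and path dependence, rather than any particular published agent-based model. Every run starts from exactly the same initial state, yet the final outcomes scatter across nearly the whole range of possibilities.

```python
import random

def polya_urn(steps=1000, seed=None):
    """Polya urn: start with one 'A' ball and one 'B' ball; at each step draw a
    ball at random and put it back along with another of the same colour.
    A toy model of increasing returns / path dependence."""
    rng = random.Random(seed)
    a, b = 1, 1
    for _ in range(steps):
        if rng.random() < a / (a + b):
            a += 1
        else:
            b += 1
    return a / (a + b)     # final share of 'A'

shares = [polya_urn(seed=s) for s in range(1000)]

# Identical starting points, widely dispersed outcomes.
bins = [0] * 10
for s in shares:
    bins[min(int(s * 10), 9)] += 1
for i, count in enumerate(bins):
    print(f"final 'A' share {i/10:.1f}-{(i+1)/10:.1f}: {count} runs")
```

The limiting share of ‘A’ in a Pólya urn is in fact uniformly distributed, so the thousand runs spread roughly evenly across the interval: a stylized version of the claim that the effect of identical conditions (or identical interventions) may be a wide distribution of future states.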

So far the argument is located at a pretty high level of abstraction. Simple causal systems admit of intelligent policy intervention, whereas complex, chaotic systems may not. But the important question is more concrete: which kind of system are we facing when we consider social policy or disease? Are social systems and diseases examples of complex systems? Can social systems be sufficiently disaggregated into fairly durable subsystems that admit of discrete causal analysis and intelligent intervention? What about diseases such as solid tumors? Can we have confidence in interventions such as chemotherapy? And, in both realms, can the findings of complexity theory be helpful by providing mathematical means for working out the system effects of various possible interventions?

System tendencies?


A central theme of many of the posts here is the contingency, heterogeneity, and path dependency of social processes. I used the metaphor of a “constrained random walk” in an earlier posting to characterize many social processes. This figure is intended to stand in contrast to the idea of an inevitable development towards an optimum or equilibrium point, on the one hand, or the idea of an inevitable system failure, on the other.

The idea here is this: from starting point A, there are numerous possible states of affairs Oi that might be reached over an extended period of time. There is no sense in which the course from A to the actual historical outcome Om is inevitable or unique. (From the starting point of Europe in 1910, including the social, political, and economic realities of the nations of Europe, multiple outcomes were accessible by the time of 1920: exhausting war, emergence of new and effective international organizations that sustained the peace, inspired just-in-time diplomacy bringing hostilities to an early termination, …). Each of the pathways leading from A to Oi might be individually explicable, in terms of the situations of structure and agency that were present during the period of development. Virtually every point in the “space” of outcomes would be accessible, although some outcomes might be substantially less likely than others. Along the way there are likely to be cul-de-sacs; but in the aggregate, the space of possible outcomes from many historical starting points covers the full sphere of possibilities. Putting the point crudely, you can get anywhere from anywhere.
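The metaphor can be made concrete with a small sketch; the step size, the bounds, and the number of steps are arbitrary choices for illustration. A walk that starts from the same point every time and is confined to a fixed corridor, standing in for structural constraint, can nonetheless end up almost anywhere within that corridor.

```python
import random

def constrained_walk(steps=200, lower=-10, upper=10, seed=None):
    """Random walk from 0 with unit steps, clamped at the bounds: the bounds
    stand in for structural constraints, the steps for contingent events."""
    rng = random.Random(seed)
    x = 0
    for _ in range(steps):
        x += rng.choice([-1, 1])
        x = max(lower, min(upper, x))
    return x

endpoints = [constrained_walk(seed=s) for s in range(1000)]
print(f"endpoints range from {min(endpoints)} to {max(endpoints)}")
print(f"distinct endpoints reached: {len(set(endpoints))}")
```

Across a thousand runs the endpoints typically spread across essentially the whole corridor: the constraints bound the space of outcomes without picking out any single destination.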

This conception emphasizes deep contingency in social change. But what about the symmetrical facts of “constraint” and “imperative” — the limitations imposed by existing institutions and organizations at any specific stage and the positive impulses to change that are often embodied in the incentive structures of existing institutions? Is the contingency of social events to some extent reduced by the relative durability of existing core social institutions? Is there such a thing as a “logic of institutions” that is embodied in a particular configuration of core social institutions, with the result that societies embodying these institutions will be most likely to develop in one way rather than another?

This description lies at the heart of Marx’s analysis of social systems as modes of production. Marx believed that the core institutions that define the property system, the system of labor control, and the distribution of wealth have deep effects on individual agency, leading and constraining agents to behave in ways that lead in the aggregate to certain kinds of social outcomes. Modes of production have system tendencies that can be inferred from their basic institutional features. A particularly clear example is his analysis of the “law” of the falling rate of profit within capitalism: firms are required to maximize profits; they have the opportunity of introducing capital-intensive technologies that lower costs, thereby increasing profits in the short run; competition with other profit-maximizing firms pushes prices down to the new cost of production; and since, on Marx’s account, surplus value derives only from living labor, the rising capital-labor ratio in industry leaves less surplus value per unit of capital advanced and so creates a falling rate of profit. So capitalism embodies a system tendency towards a falling rate of profit over time. Similar reasoning underlies Marx’s prediction of financial crises within capitalism. (See an earlier posting on Marx’s conception of capitalism.)
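In Marx’s accounting the rate of profit is r = s / (c + v), where c is constant capital, v is variable capital (wages), and s is surplus value. A minimal numerical sketch of the argument, with invented figures: hold the rate of surplus value s/v fixed and let the ratio c/v (Marx’s value measure of capital intensity) rise, and the rate of profit falls.

```python
# Marx's rate of profit r = s / (c + v): hold the rate of surplus value s/v
# fixed and let the ratio c/v (capital intensity in value terms) rise.
# The figures are invented for illustration.

rate_of_surplus_value = 1.0          # s / v, held fixed
v = 100.0                            # variable capital (wages)

for c_over_v in (1, 2, 4, 8):        # rising capital intensity
    c = c_over_v * v
    s = rate_of_surplus_value * v
    r = s / (c + v)
    print(f"c/v = {c_over_v}:  rate of profit = {r:.1%}")
```

Of course the computation only shows that the rate falls if everything else is held fixed, which is precisely the kind of assumption questioned below.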

And in fact, if we could make two assumptions, then Marx’s reasoning about the tendencies of capitalism would be very compelling: the assumption that the core economic institutions are fixed and unchanging, and the assumption that there are no other social-political-economic institutions in play that might serve as resources for policies and actions that would offset the predicted tendencies of capitalism. However, neither of these assumptions is correct. The institutions of any major social order — feudalism, the Chinese agrarian economy, capitalism, state socialism — are always the composite of a vast number of lower-level institutions; and these lower-level institutions are usually in a state of flux. So the core institutions are not fixed and unchanging. The traditional Chinese agrarian economy was remarkably resilient in the face of a range of deep challenges over centuries; adjustment of basic social institutions permitted Chinese society to cope better with environmental and international circumstances than a modeled Chinese economy would have predicted.

Second, even more fundamentally, a society is not simply a “mode of production,” constituted by an economic structure. Rather, there is a range of other, equally fundamental institutions and practices — cultural, political, legal, community-based and national — through which resourceful agents attempt to solve personal or social problems at various points in time. So the “logic” of the economic institutions is only one influence on the overall social trajectory; the actual trajectory emerges from the strategic interaction and aggregation of political, cultural, social, demographic, and legal institutions that complement and offset the workings of the economic structure. And further, we can correctly say that each of these aspects of social organization has its own “system tendencies.” Elected legislatures have a logic that derives from the calculations of political self-interest of the legislators, community-based organizations have their own logic, various demographic regimes have their own tendencies (for example, the favoring of boy children produces skewed sex ratios that have negative political effects), and so forth.

So the tentative conclusion that I draw from these various considerations is, once again, to give the nod to contingency while recognizing the partial imperatives created by the various sets of core institutions that are embodied in a society at a given time. Structures do of course constrain agents. But structures interact with each other, leading to surprising results. And structures change in response to a variety of causes, including the strategic efforts of agents to modify them. So the upshot is, once again, that we should expect a high degree of contingency in outcomes over extended periods of historical time. Historical experience may well support the discovery that “capitalism creates a tendency towards X” or “fascist politics create a tendency towards Y”. To that extent, there are “system tendencies”. But it is rare for one particular sub-system (property relations, electoral system, demographic regime) to dominate the overall historical trajectory. And so the system tendencies of one partial set of core institutions rarely become the system tendencies of the overall social whole.
