Levi Martin on explanation

John Levi Martin's The Explanation of Social Action is a severe critique of the role of "theory" in the social sciences. He thinks our uses of this construct follow from a bad conception of social explanation: we explain something by showing how it relates (often through law-like processes) to something radically different from the thing to be explained. But Levi Martin adopts one strand of Weber's conception of sociology — "the science of meaningful social action" (kl 92) — and advocates a fundamentally different conception of sociological thinking: we should explain why social things happen in terms of the ways that ordinary people understand their own actions and meanings. Levi Martin thinks the usual approach is Cartesian, in that it presupposes a radical separation between the individual who acts and the considerations that explain his/her action.

Sociology and its near kin have adopted an understanding of theoretical explanation that privileges “third-person” explanations and, in particular, have decided that the best explanation is a “causal” third-person explanation, in which we attribute causal power to something other than flesh-and-blood individuals. (kl 75)

His own approach is anti-Cartesian — explanans and explanandum are on the same level.

The main argument of this book may be succinctly put as follows: the social sciences (in part, but in large part) explain what people do, and they explain what it means to carry out such an explanation. (kl 14)

Rather than drawing on examples from the law-based natural sciences, he prefers to understand actors in their own terms, and he offers three antecedent schools of thought as examples:

But there are alternatives. I trace three, all of which dissented from the Cartesian dualism underlying the Durkheimian approach. These are the Russian activity school (with Vygotsky the prime example), the German Gestalt school (with Kohler the prime example), and the American pragmatist school (with Dewey the prime example). All point to a serious social science that attempts to understand why people do what they do, a science that while rigorous and selective (as opposed to needing to “examine everything”) is not analytic in the technical sense of decomposing unit acts into potentially independent (if empirically interrelated) components. (kl 147)

I like Levi Martin's work quite a bit, and wrote earlier on his approach to social structures in his Social Structures (2009). But here I do not find his reasoning as persuasive.

First, his account leaves out a vast proportion of what many social scientists are interested in explaining — why did fascism prevail in Germany and Italy, why were East Coast dock worker unions corrupt while West Coast dock workers were political, why do young democracies fall prey to far-right nationalist movements, why did the decision-making process surrounding the launch of Space Shuttle Challenger go so disastrously wrong? These Why questions involve understanding the workings of institutions and structures, and they admit of causal explanations. And they are not inherently equivalent to explanations of nested sets of social actions. 

Second, L-M's polemic against current ideas of social explanation assumes there are only two choices in play: correlational studies of associated variables and actor-level explanations of why people behave as they do. But this is incorrect. The new institutionalism in sociology, for example, explains outcomes as the result of a socially embedded system of rules that produce a characteristic set of actions on the part of participants. Institutional differences across settings lead to significantly different outcomes. And the causal-mechanisms approach to explanation (Tilly) identifies meso-level social mechanisms that can be seen to aggregate into identifiable social processes. L-M's polemic against "causalism" is, in my view, directed against an untenable regularity-based theory of causation; but the causal-mechanism approach provides a far more defensible theory of social causation and explanation.

Third, his current account is mono-thematic: the only thing that counts is what is in the head of the participant. But this seems wrong in two ways: first, we are often interested in social facts that are not social actions in Weber’s sense. And second, sometimes factors that the individuals themselves have not conceptualized are critical to understanding what is going on.

I also find that Levi Martin gives the causal mechanisms approach a wholly unfair dismissal. The CM approach is actually an alternative way of disaggregating social explanations and de-dramatizing the quest for unifying theories of the social world. The CM approach brings forward a heterogeneous view of the workings of a social world that is itself heterogeneous and multi-factorial. It is a mid-level theory of explanation, not a generalization-worshipping theory of explanation. So I believe that much of what Levi Martin says in favor of his own approach can be said with equal force about the CM theory.

Moreover, CM theory too is an actor-centered approach to social explanation. The mechanisms to which sociologists in this tradition appeal work through the choices and actions of individuals. But CM also recognizes that there are relatively stable structures and relations that influence individual action and that have consequences for yet other meso-level factors. The CM approach asks a fundamental question: through what processes and mechanisms did the social phenomenon in question come about? And crucially, there is no reason to expect the participants to conceptualize or understand these mechanisms and institutions. So there is a proper place for the hypotheses and constructs of the observer and theoretician after all.

One way to put these criticisms is to suggest that the book is mis-titled. The title promises a comprehensive theory of social explanation, but the book doesn't in fact provide any such thing. And it gives the impression that there is only one legitimate mode of social explanation — something that most sociologists would reject for good reasons.

In fact, the book is not so much a theory of explanation as it is a theory of the actor: what she knows, what she wants, how she chooses, and how she acts. It is a theory that draws upon American pragmatism and phenomenology. And the implication for “explanation” is a very simple lemma: “Explanations of social actions need to be couched in terms of the real experiences and cognitive-practical setup of the actors.” Here is the title I would have preferred: “Towards a More Adequate Theory of the Human Actor”.

Here, then, is the heart of my assessment of The Explanation of Social Action: L-M reduces the many explanatory tasks of the social sciences to just one — the explanation of meaningful social actions by individuals. There is more to sociology than that, and identifying causal structures and mechanisms is a legitimate sociological task that has no place in L-M’s conceptual space here. But as a contribution to the topic to which it is really directed — the attempt to formulate a more adequate theory of the actor — I believe it is a major contribution.

Relative explanatory autonomy

In an earlier post I indicated a degree of disagreement with the premises of analytical sociology concerning the validity of methodological individualism (link). This disagreement comes down to three things.

First, for reasons I’ve referred to several times here and elsewhere (link), I prefer to refer to methodological localism rather than methodological individualism.

This theory of social entities affirms that there are large social structures and facts that influence social outcomes. But it insists that these structures are only possible insofar as they are embodied in the actions and states of socially constructed individuals. The “molecule” of all social life is the socially constructed and socially situated individual, who lives, acts, and develops within a set of local social relationships, institutions, norms, and rules. (link)

I believe that the ideas of localism and socially constructed, socially situated actors do a better job of capturing the social molecule that underlies larger social processes than the simple idea of an “individual”. Structural individualism seems to come to a similar idea, but less intuitively.

Second, the requirement of providing microfoundations for social assertions is preferable to methodological individualism because it is not inherently reductionist (link). A microfoundation is:

a specification of the ways that properties, structural features, and causal powers of a social entity are produced and reproduced by the actions and dispositions of socially situated individuals. (link)

We need to be confident that our theories and concepts about social structures, entities, and forces appropriately supervene upon facts about individuals; but we don’t need to rehearse those links in every theory or explanation. In other words, we can make careful statements about macro-macro and macro-meso links without proceeding according to the logic of Coleman’s boat — up and down the struts. Jepperson and Meyer make this point in “Multiple Levels of Analysis and the Limitations of Methodological Individualisms” (link), and they offer an alternative to Coleman’s macro-micro boat that incorporates explanations referring to meso-level causes (66).

Third, these points leave room for a meta-theory of relative explanatory autonomy for social explanations. The key insight here is that there are good epistemic and pragmatic reasons to countenance explanations at a meso-level of organization, without needing to reduce these explanations to the level of individual actors. Here is a statement of the idea of relative explanatory autonomy, provided by a distinguished philosopher of science, Lawrence Sklar, with respect to areas of the physical sciences:

Everybody agrees that there are a multitude of scientific theories that are conceptually and explanatorily autonomous with respect to the fundamental concepts and fundamental explanations of foundational physical theories. Conceptual autonomy means that there is no plausible way to define the concepts of the autonomous theories in terms of the concepts that we use in our foundational physics. This is so even if we allow a rather liberal notion of “definition” so that concepts defined as limit cases of the applicability of the concepts of foundational physics are still considered definable. Explanatory autonomy means that there is no way of deriving the explanatory general principles, the laws, of the autonomous theory from the laws of foundational physics. Once again this is agreed to be the case even if we use a liberal notion of “derivability” for the laws so that derivations that invoke limiting procedures are still counted as derivations. (link)

The idea of relative explanatory autonomy has been invoked by cognitive scientists against the reductionist claims of neuroscientists. Of course cognitive mechanisms must be grounded in neurophysiological processes. But this doesn't entail that cognitive theories need to be reduced to neurophysiological statements. Sacha Bem reviews these arguments in "The Explanatory Autonomy of Psychology: Why a Mind is Not a Brain" (link). Michael Strevens summarizes some of these issues in "Explanatory Autonomy and Explanatory Irreducibility" (link). And here Geoffrey Hellman addresses the issues of reductionism and emergence in the special sciences in "Reductionism, Determination, Explanation".

These arguments are directly relevant to the social sciences, subject to several important caveats. First is the requirement of microfoundations: we need always to be able to plausibly connect the social constructs we hypothesize to the actions and mentalities of situated agents. And second is the requirement of ontological and causal stability: if we want to explain a meso-level phenomenon on the basis of the causal properties of other meso-level structures, we need to have confidence that the latter properties are reasonably stable over different instantiations. For example, if we believe that a certain organizational structure for tax collection is prone to corruption of the ground-level tax agents, and we want to use that feature as a cause of something else, then we need to have empirical evidence supporting the assertion of the corruption tendencies of this organizational form.

Explanatory autonomy is consistent with our principle requiring microfoundations at a lower ontological level. Here we have the sanction of the theory of supervenience to allow us to say that composition and explanation can be separated. We can settle on a level of meso or macro explanation without dropping down to the level of the actor. We need to be confident there are microfoundations, and the meso properties need to be causally robust. But if these conditions are satisfied, we don't need to extend the explanation down to the actors.
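Stated schematically (this is the standard philosophical gloss on supervenience, not a formulation drawn from the posts discussed here): social facts S supervene on individual-level facts I just in case any two situations that are identical in their I-facts are identical in their S-facts.

```latex
% Supervenience of social facts (S) on individual-level facts (I):
% no difference in S without some difference in I.
\forall w_1, w_2 \;\; \big( w_1 =_{I} w_2 \;\rightarrow\; w_1 =_{S} w_2 \big)
```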

Woven throughout this discussion are the ideas of reduction and emergence. An area of knowledge is reducible to a lower level if it is possible to derive the statements of the higher-level science from the properties of the lower level. A level of organization is emergent if it has properties that cannot be derived from features of its components. The strong sense of emergence holds that a composite entity sometimes possesses properties that are wholly independent from the properties of the units that compose it. Vitalism and mind-body dualism were strong forms of emergentism: life and mind were thought to possess characteristics that do not derive from the properties of inanimate molecules. Physicalism maintains that all phenomena — including living systems — depend ultimately upon physical entities and structures, so strong emergentism is rejected. But physicalism does not entail reductionism, so it is scientifically acceptable to provide explanations that presuppose relative explanatory autonomy.

Once we have reason to accept something like the idea of relative explanatory autonomy in the social sciences, we also have a strong basis for rejecting the exclusive validity of one particular approach to social explanation, the reductionist approach associated with methodological individualism and Coleman's boat. Rather, social scientists can legitimately aggregate explanations that call upon meso-level causal linkages without needing to reduce these to derivations from facts about individuals. And this implies the legitimacy of a fairly broad conception of methodological pluralism in the social sciences, constrained always by the requirement of microfoundations.

Criteria for assessing economic models

How can we assess the epistemic warrant of an economic model that purports to represent some aspects of economic reality?  The general problem of assessing the credibility of an economic model can be broken down into more specific questions concerning the validity, comprehensiveness, robustness, reliability, and autonomy of the model. Here are initial definitions of these concepts.

  • Validity is a measure of the degree to which the assumptions employed in the construction of the model are thought to correspond to the real processes underlying the phenomena represented by the model. 
  • Comprehensiveness is the degree to which the model is thought to succeed in capturing the major causal factors that influence the features of the behavior of the system in which we are interested. 
  • Robustness is a measure of the degree to which the results of the model persist under small perturbations in the settings of parameters, formulation of equations, etc. 
  • Autonomy refers to the stability of the model’s results in face of variation of contextual factors. 
  • Reliability is a measure of the degree of confidence we can have in the data employed in setting the values of the parameters. 

These are features of models that can be investigated more or less independently and prior to examination of the empirical success or failure of the predictions of the model.

Let us look more closely at these standards of adequacy. The discussion of realism elsewhere suggests that we may attempt to validate the model deductively, by examining each of the assumptions underlying construction of the model for its plausibility or realism (link). (This resembles Mill’s “deductive method” of theory evaluation.) Economists are highly confident in the underlying general equilibrium theory. The theory is incomplete (or, in Daniel Hausman’s language, inexact; link), in that economic outcomes are not wholly determined by purely economic forces. But within its scope economists are confident that the theory identifies the main causal processes: an equilibration of supply and demand through market-determined prices.

Validity can be assessed through direct inspection of the substantive economic assumptions of the model: the formulation of consumer and firm behavior, the representation of production and consumption functions, the closure rules, and the like. To the extent that the particular formulation embodied in the model is supported by accepted economic theory, the validity of the model is enhanced. On the other hand, if particular formulations appear to be ad hoc (introduced, perhaps, to make the problem more tractable), the validity of the model is reduced. If, for example, the model assumes linear demand functions and we judge that this is a highly unrealistic assumption about the real underlying demand functions, then we will have less confidence in the predictive results of the model.

Unfortunately, there can be no fixed standard of evaluation concerning the validity of a model. All models make simplifying and idealizing assumptions; so to that extent they deviate from literal realism. And the question of whether a given idealization is felicitous or not cannot always be resolved on antecedent theoretical grounds; instead, it is necessary to look at the overall empirical adequacy of the model. The adequacy of the assumption of fixed coefficients of production cannot be assessed a priori; in some contexts and for some purposes it is a reasonable approximation of the economic reality, while in other cases it introduces unacceptable distortion of the actual economic processes (when input substitution is extensive). What can be said concerning the validity of a model’s assumptions is rather minimal but not entirely vacuous. The assumptions should be consistent with existing economic theory; they should be reasonable and motivated formulations of background economic principles; and they should be implemented in a mathematically acceptable fashion.

Comprehensiveness too is a weak constraint on economic models. It is plain that all economic theories and models disregard some causal factors in order to isolate the workings of specific economic mechanisms; moreover, there will always be economic forces that have not been represented within the model. So judgment of the comprehensiveness of a model depends on a qualitative assessment of the relative importance of various economic and non-economic factors in the particular system under analysis. If a given factor seems to be economically important (e.g. input substitution) but unrepresented within the model, then the model loses points on comprehensiveness.

Robustness can be directly assessed through a technique widely used by economists, sensitivity analysis. The model is run a large number of times, varying the values assigned to parameters (reflecting the range of uncertainty in estimates or observations). If the model continues to have qualitatively similar findings, it is said to be robust. If solutions vary wildly under small perturbations of the parameter settings, the model is rightly thought to be a poor indicator of the underlying economic mechanisms.
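As a concrete illustration of the procedure, here is a minimal sketch of a sensitivity analysis, assuming a generic model function and invented parameter names (nothing here is drawn from an actual CGE model):

```python
import numpy as np

def toy_model(params):
    # Stand-in for an economic model: maps a parameter vector to a predicted
    # outcome. The functional form and parameters are purely illustrative.
    savings_rate, markup = params
    return (1 - savings_rate) * (1 + markup) * 100.0

def sensitivity_analysis(model, base_params, rel_spread=0.05, n_runs=1000, seed=0):
    """Re-run the model many times, perturbing each parameter within an assumed
    range of measurement uncertainty, and summarize the spread of the results."""
    rng = np.random.default_rng(seed)
    base = np.asarray(base_params, dtype=float)
    results = []
    for _ in range(n_runs):
        noise = rng.uniform(-rel_spread, rel_spread, size=base.shape)
        results.append(model(base * (1 + noise)))
    results = np.array(results)
    return results.mean(), results.std()

mean, std = sensitivity_analysis(toy_model, base_params=[0.10, 0.25])
print(f"mean outcome = {mean:.2f}, spread (std) = {std:.2f}")
# A small spread relative to the mean suggests the model's results are robust
# to parameter uncertainty; a large spread suggests they are not.
```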

Autonomy is the theoretical equivalent of robustness. It is a measure of the stability of the model under changes of assumptions about the causal background of the system. If the model’s results are highly sensitive to changes in the environment within which the modeled processes take place, then we should be suspicious of the results of the model.

Assessment of reliability is also somewhat more straightforward than comprehensiveness and validity. The empirical data used to set parameters and exogenous variables have been gathered through specific well-understood procedures, and it is mandatory that we give some account of the precision of the resulting data.

Note that reliability and robustness interact; if we find that the model is highly robust with respect to a particular set of parameters, then the unreliability of estimates of those parameters will not have much effect on our confidence in the model's results. In this case it is enough to have "stylized facts" governing the parameters that are used: roughly 60% of workers' income is spent on food, 0% is saved, etc.

Failures along each of these lines can be illustrated easily.

  1. The model assumes that prices are determined on the basis of markup pricing (costs plus a fixed exogenous markup rate and wage). In fact, however, we might believe (along neoclassical lines) that prices, wages, and the profit rate are all endogenous, so that markup pricing misrepresents the underlying price mechanism. This would be a failure of validity; the model is premised on assumptions that may not hold. 
  2. The model is premised on a two-sector analysis of the economy. However, energy production and consumption turn out to be economically crucial factors in the performance of the economy, and these effects are overlooked unless we represent the energy sector separately. This would be a failure of comprehensiveness; there is an economically significant factor that is not represented in the model. 
  3. We rerun the model assuming a slightly altered set of production coefficients, and we find that the predictions are substantially different: the increase in income is only 33% of what it was, and deficits are only half what they were. This is a failure of robustness; once we know that the model is extremely sensitive to variations in the parameters, we have strong reason to doubt its predictions. The accuracy of measurement of parameters is limited, so we can be confident that remeasurement would produce different values. So we can in turn expect that the simulation will arrive at different values for the endogenous variables. 
  4. Suppose that our model of income distribution in a developing economy is premised on the international trading arrangements embodied in GATT. The model is designed to represent the domestic causal relations between food subsidies and the pattern of income distribution across classes. If the results of the model change substantially upon dropping the GATT assumption, then the model is not autonomous with respect to international trading arrangements. 
  5. Finally, we examine the data underlying the consumption functions and we find that these derive from one household study in one Mexican state, involving 300 households. Moreover, we determine that the model is sensitive to the parameters defining consumption functions. On this scenario we have little reason to expect that the estimates derived from the household study are reliable estimates of consumption in all social classes all across Mexico; and therefore we have little reason to depend on the predictions of the model. This is a failure of reliability. 

These factors–validity, comprehensiveness, robustness, autonomy, and reliability–figure into our assessment of the antecedent credibility of a given model. If the model is judged to be reasonably valid and comprehensive; if it appears to be fairly robust and autonomous; and if the empirical data on which it rests appears to be reliable; then we have reason to believe that the model is a reasonable representation of the underlying economic reality. But this deductive validation of the model does not take us far enough. These are reasons to have a priori confidence in the model. But we need as well to have a basis for a posteriori confidence in the particular results of this specific model. And since there are many well-known ways in which a generally well-constructed model can nonetheless miss the mark–incompleteness of the causal field, failure of ceteris paribus clauses, poor data or poor estimates of the exogenous variables and parameters, proliferation of error to the point where the solution has no value, and path-dependence of the equilibrium solution–we need to have some way of empirically evaluating the results of the model.

(Here is an application of these ideas to computable general equilibrium (CGE) models in an article published in On the Reliability of Economic Models: Essays in the Philosophy of Economics; link.  See also Lance Taylor’s reply and discussion in the same volume.)

The inexact science of economics

Image: social accounting matrix, Bolivia, 1997

Economics is an "inexact" science; or so Daniel Hausman argues in The Inexact and Separate Science of Economics (Google Books link). This description conveys that economic laws have only a loose fit with observed economic behavior. Here are the loosely related interpretations that Hausman offers for this idea, drawing on the thinking of John Stuart Mill:

  1. Inexact laws are approximate.  They are true within some margin of error.
  2. Inexact laws are probabilistic or statistical.  Instead of stating how human beings always behave, economic laws state how they usually behave.
  3. Inexact laws make counterfactual assertions about how things would be in the absence of interferences.
  4. Inexact laws are qualified with vague ceteris paribus clauses. (128)

Economics has also been treated by economists as a separate science: a science capable of explaining virtually all the phenomena in a reasonably well-defined domain of social phenomena.  Here is Hausman’s interpretation of a separate science:

  1. Economics is defined in terms of the causal factors with which it is concerned, not in terms of a domain.
  2. Economics has a distinct domain, in which its causal factors predominate.
  3. The “laws” of the predominating causal factors are already reasonably well-known.
  4. Thus, economic theory, which employs these laws, provides a unified, complete, but inexact account of its domain. (90-91)

These characteristics of economic theories and models have implications for several important areas: truth, prediction, explanation, and confirmation.  Is economics a scientific theory of existing observable economic phenomena?  Or is it an abstract, hypothetical model with only tangential implications for the observable social world?  Is economics an empirical science or a mathematical system?

Let’s look at these questions in turn.  First, can we give a good interpretation of what it would mean to believe that an inexact theory or law is “true”?  Here is a possible answer: we may believe that there are real but unobservable causal processes that “drive” social phenomena.  To say that a social or economic theory is true is to say that it correctly identifies a real causal process — whether or not that process operates with sufficient separation to give rise to strict empirical consequences.  Galilean laws of mechanics are true for falling objects, even if feathers follow unpredictable trajectories through turbulent gases.

Second, how can we reconcile the desire to use economic theories to make predictions about future states with the acknowledged inexactness of those theories and laws? If a theory includes hypotheses about underlying causal mechanisms that are true in the sense just mentioned, then a certain kind of prediction is justified as well: "in the absence of confounding causal factors, the presence of X will give rise to Y." But of course this is a useless predictive statement in the current situation, since the whole point is that economic processes rarely or never operate in isolation. So we are more or less compelled to conclude that theories based on inexact laws are not a usable ground for empirical prediction.

Third, in what sense do the deductive consequences of an inexact theory "explain" a given outcome — either one that is consistent with those consequences or one that is inconsistent with the consequences? Here inexact laws are on stronger ground: after the fact, it is often possible to demonstrate that the mechanisms that led to an outcome are those specified by the theory. Explanation and prediction are not equivalent. Natural selection explains the features of Darwin's finches — but it doesn't permit prediction of future evolutionary change.

And finally, what is involved in trying to use empirical data to confirm or disconfirm an inexact theory?  Given that we have stipulated that the theory has false consequences, we can’t use standard confirmation theory.  So what kind of empirical argument would help provide empirical evaluation of an inexact theory?  One possibility is that we might require that the predictions of the theory should fall within a certain range of the observable measurements — which is implied by the idea of “approximately true” consequences.  But actually, it is possible that we might hold that a given theory is inexact, true, and wildly divergent from observed experience.  (This would be true of the application of classical mechanics to the problem of describing the behavior of light, irregular objects shot out of guns under water.)  Hausman confronts this type of issue when he asks why we should believe that the premises of general equilibrium theory are true. But here too there are alternatives, including piecemeal confirmation of individual causal hypotheses. Hausman refers to this possibility as a version of Mill’s deductive method.

I take up some of these questions in my article, "Economic Models in Development Economics" (link), included in On the Reliability of Economic Models: Essays in the Philosophy of Economics. This article discusses some related questions about the reliability and applicability of computable general equilibrium models in application to the observed behavior of real economies. Here are some concluding thoughts from that article concerning the empirical and logical features that are relevant to the assessment of CGE models:

“The general problem of the antecedent credibility of an economic model can be broken down into more specific questions concerning the validity, comprehensiveness, robustness, reliability, and autonomy of the model. I will define these concepts in the following terms.

  • Validity is a measure of the degree to which the assumptions employed in the construction of the model are thought to correspond to the real processes underlying the phenomena represented by the model.
  • Comprehensiveness is the degree to which the model is thought to succeed in capturing the major causal factors that influence the features of the behavior of the system in which we are interested.
  • Robustness is a measure of the degree to which the results of the model persist under small perturbations in the settings of parameters, formulation of equations, etc.
  • Autonomy refers to the stability of the model’s results in face of variation of contextual factors.
  • Reliability is a measure of the degree of confidence we can have in the data employed in setting the values of the parameters.

These are epistemic features of models that can be investigated more or less independently and prior to examination of the empirical success or failure of the predictions of the model.”

(Hausman’s book is virtually definitive in its formulation of the tasks and scope of the philosophy of economics.  When conjoined with the book he wrote with Michael McPherson, Economic Analysis, Moral Philosophy and Public Policy, the philosophy of economics itself becomes a “separate science”: virtually all the important questions are raised throughout a bounded domain, and a reasonable set of theories are offered to answer those questions.)

Modest predictions in history

Image: the owl of Minerva

In spite of their reputations as historical determinists, Hegel and Marx each had their own versions of skepticism about “learning from history” — in particular, the possibility of predicting the future based on historical knowledge. Notwithstanding his view that history embodies reason, Hegel is famous for his idea in the Philosophy of Right: “When philosophy paints its grey in grey then has a shape of life grown old. By philosophy’s grey in grey it cannot be rejuvenated but only understood. The owl of Minerva spreads its wings only with the falling of dusk.” And Marx puts the point more sardonically in the Eighteenth Brumaire: “Hegel remarks somewhere that all great world-historic facts and personages appear, so to speak, twice. He forgot to add: the first time as tragedy, the second time as farce.” Both, then, cast specific doubt on the idea that history presents us with general patterns that can be projected into the future. Marx’s remarks to Vera Zasulich about the prospects for communist revolution in Russia are instructive: “I thus expressly limited the ‘historical inevitability’ of this process to the countries of Western Europe.”

This is a view I agree with profoundly: history is contingent, there are always alternative pathways that might have been taken, and history has no general plan. So — no grand predictions in history.

But then we have to ask a different sort of question. Specifically — what kinds of predictions or projections are possible in history? And what is the intellectual basis of grounded historical predictions? Here are a few predictions that seem to be supportable, drawn from recent postings on UnderstandingSociety:

  • The Alsatian language is likely to disappear as a functioning medium of communication in Alsace within the next fifty years.
  • Labor unrest in China will intensify over the next ten years.
  • Social unrest will continue to occur over the next decade in Thailand, with a gradual increase in the influence of dispossessed groups (red shirts).
  • Large and deadly technology failures will occur in Europe and the United States in the next decade.
  • Social movements will arise more frequently and more adaptively as a result of the use of social media (twitter, blogs, facebook, email).
  • Conflicts between Arabs and Jews in East Jerusalem will continue to deepen in the next ten years as a consequence of the practical politics of land use and reclamation in the city.

Several things are apparent when we consider these predictions. First, they are limited in scope; they are small-scale features of the historical drama. Second, they depend on specific and identifiable social circumstances, along with clear ideas about social mechanisms connecting the present to the future. Third, they are at least by implication probabilistic; they indicate likelihoods rather than inevitabilities. Fourth, they imply the existence of ceteris paribus conditions: "Absent intervening factors, such-and-so is likely to occur." But, finally, they all appear to be intellectually justifiable. They may not be true, but they can be grounded in an empirically and historically justified analysis of the mechanisms that produce social change, and a model projecting the future effects of those mechanisms in combination.

The heart of prediction is our ability to identify dynamic processes and mechanisms that are at work in the present, and our ability to project their effects into the future. Modest predictions are those that single out fairly humdrum current processes in specific detail, and derive some expectations about how these processes will play out in the relatively short run. Grand predictions, on the other hand, purport to discover wide and encompassing patterns of development and then to extrapolate their civilizational consequences over a very long period. A modest prediction about China is the expectation that labor protest will intensify over the next ten years. A grand prediction about China is that it will become the dominant economic and military superpower of the late twenty-first century. We can have a fair degree of confidence in the first type of prediction; whereas there are vastly too many possible branches in history, too many “countervailing tendencies,” too many accidents and contingencies, that may occur to give us any confidence in the latter prediction.

Ceteris paribus conditions are unavoidable in formulating historical expectations about the future, because social change is inherently complex and multi-causal. So even if it is the case that a given process, accurately described in the present, creates a tendency for a certain kind of result — it remains the case that there may well be other processes at work that will offset this result. The tendency of powerful agents to seize opportunities for enhancing their wealth through processes of urban development implies a certain kind of urban geography in the future; but this outcome might be offset by a genuinely robust and sustained citizens' movement at the city council level.

The idea that historical predictions are generally probabilistic is partly a consequence of the fact of the existence of unknown ceteris paribus conditions. But it is also, more fundamentally, a consequence of the fact that social causation itself is almost always probabilistic. If we say that rising conflict over important resources (X) is a cause of inter-group violence (Y), we don’t mean that X is necessarily followed by Y; instead, we mean that X raises the likelihood of the occurrence of Y.
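Put as a simple formula (a common way of expressing probabilistic causation, not the author's own notation), X is a probabilistic cause of Y when the presence of X raises the probability of Y relative to its absence:

```latex
P(Y \mid X) \;>\; P(Y \mid \neg X)
```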

So two conclusions seem justified. First, there is a perfectly valid intellectual role for making historical predictions. But these need to be modest predictions: limited in scope, closely tied to theories of existing social mechanisms, and accompanied by ceteris paribus conditions. And second, grand predictions should be treated with great suspicion. At their best, they depend on identifying a few existing mechanisms and processes; but the fact of multi-causal historical change, the fact of the compounding of uncertainties, and the fact of the unpredictability of complex systems should all make us dubious about large and immodest claims about the future. For the big answers, we really have to wait for the owl of Minerva to spread her wings.

Great structures?



The scholars of the Annales school of French history characteristically placed their analysis of historical change within the context of the large structures — economic, social, or demographic — within which ordinary people live out their lives. They postulate that the broad and enduring social relations that exist in a society — for example, property relations, administrative and political relations, or the legal system — constitute a stable structure within which agents act, and they determine the distribution of crucial social resources that become the raw materials on the basis of which agents exercise power over other individuals and groups. So the particular details of a social structure create the conditions that set the stage for historical change in the society. (The recently translated book by André Burguière provides an excellent discussion of the Annales school; The Annales School: An Intellectual History.)

The Annales school also put forward a concept that applies to the temporal structure of historical change: the idea that some historical changes unfold over very long periods of time and are all but invisible to participants — the history of the longue durée. So large enduring structures, applying their effects over very long periods of historical time, provided a crucial part of the historical imagination of the Annales school.

Marc Bloch’s own treatment of French feudalism illustrates a sustained analysis of a group of great structures enduring centuries over much of the territory of France (Feudal Society: Vol 1: The Growth and Ties of Dependence), as does Le Roy Ladurie’s treatment of the causes of change and stasis in Languedoc in The Peasants of Languedoc. Fernand Braudel’s Civilization and Capitalism, 15th-18th Century, Vol. I: The Structure of Everyday Life represents another clear example of historical research organized around analysis of great structures. And though not a member of the Annales school, I would include M. I. Finley’s treatment of the ancient economy as another important example (The Ancient Economy); Finley attempts to trace out the features of property, economy, and political and military power through which ordinary life and historical change proceeded in the ancient world. But there is an important difference among the several works: Bloch, Braudel, and Finley represent an analysis of these structures as a whole, while Le Roy Ladurie’s work largely attempts to explain features of life over a very long time that show the imprint of such structures. One is macrohistory, while the other is microhistory.

What are some examples of putative "great structures"? There are several that readily come to mind: a nation's economic system; its system of law, legislation, and enforcement; its system of government, taxation, and policy-making; its educational system; its religious organizations and traditions; the composite system of organizations that exist within civil society; and the norms and relations of the family.

The scope of action matters here; the background assumption is that a great structure encompasses a large population and territory. (So we would not call the specific marriage customs that govern a small group of Alpine villages but extend no further a “great structure.”) And it is further assumed that the hypothesized structure possesses a high degree of functional continuity and integration; there are assumed to be concrete social processes that assure that the structure works in roughly the same way throughout its scope to regulate behavior.

The idea of a “great structure” thus requires that we attend to the contrast between locally embodied institutions showing significant variation across time and space, and the supposedly more homogeneous workings of “great structures.” We need to be able to provide an account of the extended social mechanisms that establish the effects and stability of the great structure. If we cannot validate these assumptions about scope, continuity, and functional similarity, then the concept of a “great structure” collapses onto a concatenation of vaguely similar institutions in different times and places.

To fit the bill, then, a great structure should have some specific features of scope and breadth. It should be geographically widespread, affecting a large population. It should have roughly similar characteristics and effects on behavior in the full range of its scope. And it should be persistent over an extended period of time — decades or longer.

The most basic question is this: are there great structures? On the positive side, it is possible to identify social mechanisms that secure the functional stability of certain institutions over a large reach of territory and time. A system of law is enforced by the agents of the state; so it is reasonable to assume that there will be similar legal institutions in Henan and Sichuan when there is an effective imperial government. A system of trading and credit may have centrally enforced and locally reinforcing mechanisms that assure that it works similarly in widely separated places. A normative system regulating marriage may be stabilized by local behaviors over a wide space. The crucial point here is simply this: if we postulate that a given structure has scope over a wide range, we need to have a theory of some of the social mechanisms that convey its power and its reproduction over time.

So the existence of great structures is ambiguous. Yes—in that there are effective institutions of politics, economics, and social life that are real and effectual within given historical settings, and we have empirical understanding of some of the mechanisms that reproduce these structures. But no—in that all social structures are historically rooted; so there is no “essential” state or economy which recurs in different settings. Instead, political and economic structures may be expected to evolve in different historical settings. And a central task of historical research is to discover both the unifying dynamics and the differentiating expressions which these abstract processes take in different historical settings.

Maps, narratives, and abstraction




It is obvious that maps are selective representations of the world. They represent an abstraction: a representation of a complex, dense reality that signifies some characteristics while deliberately ignoring other aspects. The principles of selection used by the cartographer are highly dependent on the expected interests of the user. Topography will be relevant to the hiker but not the motorist. Location of points of interest will be important to the tourist but not the long-distance trucker. Location of railroad hubs will be valued by the military planner but not the birdwatcher. So there is no such thing as a comprehensive map — one that represents all geographical details; and there is also no such thing as a truly “all-purpose” map — one that includes all the details that any user could want.

We also know that there are different schemes of representation of geography — different projections, different conventions for representing items and relationships, etc. So there is no objectively best map of a given terrain. Rather, comparing maps for adequacy, accuracy, and usefulness requires semantic and pragmatic comparison. (Here the word “semantic” is used in a specialized sense: “having to do with the reference relationship between a sign and the signified.”) Semantically, we are interested in the correspondence between the map and the world. The conventions of a given cartography imply a specific set of statements about the spatial relations that actually exist among places, as well as denoting a variety of characteristics of places. So there is a perfectly natural question to ask of a given map: is it representationally accurate? This sort of assessment leads to judgments like these: This map does a more accurate job of representing driving distances than that one, given the rules of representation that each presupposes. This map errs in representing the relative population sizes of Cleveland and Peoria. These are features that have to do with the accuracy of the correspondence between the map and the world.

The pragmatic considerations have to do with how well the representation or its underlying conventions conform to how various people want to use it. Maps are particularly dependent on pragmatic considerations. We need to assess the value of a map with respect to a set of practical interests. How well does the map convey the information about places and spatial relationships that the user will want to consult? How have the judgments about what to include and what to exclude worked out from the point of view of the user? Pragmatic considerations lead to judgments like these: this mapping convention corresponds better to the needs of the military planner or the public health official than that one. The pragmatic questions about a map have to do with a different kind of fit — fit between the features and design of the map and the practical interests of a particular set of users. Do the conventions of the given cartography correspond well to the interests that specific sets of users have in the map?

Here is the point of this discussion: are there useful analogies between the epistemology of maps and the cognitive situation of other representational constructs — for example, historical narratives and scientific theories? Several points of parallel seem particularly evident. First, narratives and theories are selective too. It is impossible to incorporate every element of a historical event or natural process into a theory or narrative; rather, it is necessary to select a storyline that permits us to provide a partial account of what happened. This is true for the French Revolution; but it is also true for the trajectory of a hockey puck.

Second, there is a parallel point about veridicality that applies to narratives and theories as much as to maps. No map stands as an isolated representation; rather, it is embedded within a set of conventions of representation. We must apply the conventions in order to discover what “assertions” are contained in the representation. So maps are in an important sense “conventional.” However, given the conventions of the map, we can undertake to evaluate its accuracy. And this is true for narratives and theories as well; we can attempt to assess the degree of approximate truth possessed by the construction. Are the statements about the nature of the events and their sequence approximately true? (Given that an account of the French Revolution singles out class interests of parties within the narrative, has the historian correctly described the economic interests of the Jacobins?)

And third, the point about the relevance of users’ interests to assessment of the construction seems pertinent to narratives and theories as well. The civil engineer who is investigating the collapse of a building will probably find a truthful analysis of the thermodynamics of the HVAC system unhelpful, even though it is true. The detective investigating a robbery of a party store will probably become impatient at a narrative that highlights the sequence of street noises that were audible during the heist, rather than the descriptions and actions of the visitors during the relevant time.

When it comes to narratives and theories, there is another value dimension that we want to impose on the construction: the idea of explanatory adequacy. A narrative ought to provide a basis for explaining the “how and why” of historical events; it ought to single out the circumstances and reasoning that help to explain the actions of participants, and it ought to highlight some of the environmental circumstances that influenced the outcome. A scientific theory is intended to identify some of the fundamental causal factors that explain a puzzling phenomenon — the turbulence that occurs in a pot of water as it approaches the boiling point, for example. So when we say that a narrative or a theory is an abstraction, part of what we’re getting at is the idea that the historian or natural scientist has deliberately excluded factors that don’t make a difference, in order to highlight a set of factors that do make a difference.

Contingent historical development




Here's a relatively limited historical puzzle to solve. A powerful new technology — the railroad — was developed in the first part of the nineteenth century. The nature and characteristics of the technology were essentially homogeneous across the national settings in which it appeared in Europe and North America. However, it was introduced and built out in three countries — the United States, Britain, and France — in markedly different ways. The ways in which the railroads and their technologies were regulated and encouraged were very different in the three countries, and the eventual rail networks had very different properties in the three countries. The question for explanation is this: can we explain the differences in these three national experiences on the basis of some small set of structural or cultural differences that existed among the three countries and that causally explain the resulting differences in build-out, structure, and technical frameworks? Or, possibly, are the three historical experiences different simply because of the accumulation of a large number of unimportant and non-systemic events?

These are the questions that historical sociologist Frank Dobbin poses in his book, Forging Industrial Policy: The United States, Britain, and France in the Railway Age. He argues that there were significantly different cultures of political and industrial policy in the three countries that led to substantial differences in the ways in which government and business interacted in the development of the railroads. “Each Western nation-state developed a distinct strategy for governing industry” (1). The laissez-faire culture of the United States permitted a few large railroad magnates and corporations to make the crucial decisions about technology, standards, and routes that would govern the development of the rail system. The regulated market culture of Great Britain favored smaller companies and strove to prevent the emergence of a small number of oligopolistic rail companies. And the technocratic civil-service culture of France gave a great deal of power to the engineers and civil servants who were charged to make decisions about technology choice, routes, and standards.

These differences led to systemic differences in the historical implementation of the railroads, the rail networks that were developed, and the regulatory regimes that surrounded them. The U.S. rail network developed as the result of competition among a small number of rail magnates for the most profitable routes. This turned out to favor a few east-west trunk lines connecting urban centers, including New York, Boston, Chicago, and San Francisco. The British rail network gave more influence to municipalities who demanded service; as a result, the network that developed was a more distributed one across a larger number of cities. And the French rail network was rationally designed to conform to the economic and military needs of the French state, with a system of rail routes that largely centered on Paris. These differences are evident in the maps at the top of the posting.

This example illustrates the insights that can be distilled from comparative historical sociology. Dobbin takes a single technology and documents a range of outcomes in the way in which the technology is built out into a national system. And he attempts to isolate the differences in structures and cultures in the three settings that would account for the differences in outcomes. He offers a causal analysis of the development of the technology in the three settings, demonstrating how the mechanism of policy culture imposes effects on the development of the technology. The inherent possibilities represented by the technology intersect with the economic circumstances and the policy cultures of the three national settings, and the result is a set of differentiated organizations and outcomes in the three countries. The analysis is rich in its documentation of the social mechanisms through which policy culture influenced technology development; the logic of his analysis is more akin to process tracing than to Mill's methods of agreement and difference.

The research establishes several important things. First, it refutes any sort of technological determinism, according to which the technical characteristics of the technology determine the way it will be implemented. To the contrary, Dobbin’s work demonstrates the very great degree of contingency that existed in the social implementation of the railroad. Second, it makes a strong case for the idea that an element of culture — the framework of assumptions, precedents, and institutions defining the “policy culture” of a country — can have a very strong effect on the development of large social institutions. Dobbin emphasizes the role that things like traditions, customs, and legacies play in the unfolding of important historical developments. And finally, the work makes it clear that these highly contingent pathways of development nonetheless admit of explanation. We can identify the mechanisms and local circumstances that led, in one instance, to a large number of firms and hubs and in the other, a small number of firms and trunk lines.

Continuity

Throughout much of our social experience we expect continuity: tomorrow will be pretty similar to today, and when changes occur they will be small and gradual. We expect our basic institutions — economic, social, and political — to maintain their core characteristics over long periods of time. We expect social attitudes and values to change only slowly, through gradual evolution rather than abrupt transformation. And we expect the same of a range of social conditions — for example, highway safety, crime rates, teen pregnancy rates, and similar social features.

It is evident that this expectation of gradual, continuous change is not always a valid guide to events. Abrupt, unexpected events occur — revolutions, mass cultural changes like the 1960s, sweeping political and legislative changes along the lines of the Reagan revolution. And of course we have the current example of abrupt declines in financial markets — see the graph of the Dow Jones Industrial Average for the week of September 23-30, 2008 below. So the expectation of continuity sometimes leads us astray. But continuity is probably among our most basic heuristic assumptions about the future when it comes to our expectations about the social world and our plans for the future.

The deeper question is an ontological one: what features of social causation and processes would either support or undermine the expectation of continuity? We can say quite a bit about the features of continuity and discontinuity in physical systems; famously, “non-linearities” occur in some physical systems that lead to singularities and discontinuities, but many physical systems are safely linear and continuous all the way down. And these mathematical features follow from the fundamental physical mechanisms that underlie physical systems. But what about the social world?

Take first the stability of large social and political institutions. Is there a reason to expect that major social and political institutions will retain their core features in the face of disturbing influences? Consider for example the SEC as a financial regulatory institution; the European Union as a multinational legislative body; or a large health maintenance organization. Here institutional sociologists have provided a number of important insights. First, institutions often change through the accumulation of a myriad of small adaptations in different locations within the institution. This process is likely to give rise to slow, gradual, continuous change for the institution as a whole. Second, though, institutional sociologists have identified important internal forces that work actively to preserve the workings of the institution: the stakeholders who benefit from the current arrangements. Stakeholders have incentives to actively reinforce and preserve the current institutional arrangements — the status quo. Both of these factors suggest that institutional change will often be slow, gradual, and continuous.

Consider next the ways in which attitudes and values change in a population. Here it is plausible to observe that individuals change their attitudes and values slowly, through exposure to other individuals and behaviors. And the attitudes and values of a new generation are usually transmitted through processes that are highly decentralized — again pointing toward a slow and gradual process of change. This suggests that changes in attitudes and values might behave analogously to the spread of a pathogen through a population — with a slow and continuous spread of “contagion” resulting in a gradual change in population attitudes.
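To make the contagion analogy concrete, here is a minimal sketch (my own illustration, with invented parameters) of a well-mixed diffusion process: each period, people who have not yet adopted the new attitude adopt it with a probability proportional to the share of adopters they encounter. Nothing in this toy model produces jumps; the share of adopters changes a little each period, which is exactly the kind of continuity described above.

```python
def contagion_curve(pop_size=10_000, initial_adopters=100,
                    transmission_rate=0.3, periods=40):
    """Return the share of adopters in each period (deterministic approximation)."""
    adopters = float(initial_adopters)
    history = []
    for _ in range(periods):
        share = adopters / pop_size
        # Each non-adopter converts with probability transmission_rate * share.
        adopters += (pop_size - adopters) * transmission_rate * share
        history.append(adopters / pop_size)
    return history


if __name__ == "__main__":
    for period, share in enumerate(contagion_curve()):
        if period % 5 == 0:
            print(f"period {period:2d}: {share:.1%} hold the new attitude")
```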

Consider last the example of social measures such as crime rates or teen pregnancy rates. If we take it as a premise that crime and teen pregnancy are influenced by social factors that in turn influence the behavior of individuals, and if we take it that these background social factors change slowly and continuously — then it is credible to reason that the aggregate measures of the associated behaviors will change slowly and continuously as well. The reasoning here is probabilistic: when large numbers of people with a specified set of background social psychologies are exposed to common environmental circumstances, then it is plausible to predict that the average rate of teen pregnancy will remain fairly steady if the background circumstances remain steady.
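The probabilistic point can be illustrated with a small simulation (mine, with made-up numbers rather than real data): if each member of a large population faces a roughly constant probability of the outcome, the aggregate rate fluctuates only slightly from year to year.

```python
import random


def yearly_rates(population=50_000, individual_prob=0.04, years=6, seed=0):
    """Simulate the aggregate rate of an outcome over several years,
    holding each individual's probability constant."""
    rng = random.Random(seed)
    rates = []
    for _ in range(years):
        count = sum(rng.random() < individual_prob for _ in range(population))
        rates.append(count / population)
    return rates


if __name__ == "__main__":
    for year, rate in enumerate(yearly_rates(), start=1):
        print(f"year {year}: observed rate {rate:.2%}")  # stays close to 4.00%
```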

These are all reasons for expecting a degree of stability and continuity in social arrangements and social behavior. But before we conclude that the social world is a continuous place, consider this: we also have some pretty clear models of how social phenomena might occur in a discontinuous fashion. Critical mass phenomena, tipping points, and catastrophic failures are examples of groups of social phenomena where we should expect discontinuities. The behavior of a disease in a population may change dramatically once a certain percentage of the population is infected (critical mass); a new slang expression (“yada yada yada”) may abruptly change its frequency of usage once a certain number of celebrities have adopted it (tipping point); a civic organization may be stretched to the breaking point by the addition of new unruly members and may suddenly collapse or mutate. (The hypothesis of punctuated equilibrium brings this sort of discontinuity into the Darwinian theory of evolution.)
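Granovetter's threshold model of collective behavior is a standard way of exhibiting this kind of tipping point; the sketch below is my own illustration with stylized numbers, not an analysis of any particular case. Each person participates once the share of others already participating reaches his or her personal threshold, and two nearly identical populations can tip to radically different outcomes.

```python
def final_share(thresholds):
    """Iterate the adoption dynamic until the participating share stops changing."""
    n = len(thresholds)
    share = 0.0
    while True:
        new_share = sum(t <= share for t in thresholds) / n
        if new_share == share:
            return share
        share = new_share


if __name__ == "__main__":
    n = 100
    pop_a = [i / n for i in range(n)]      # thresholds 0.00, 0.01, ..., 0.99
    pop_b = [max(t, 0.01) for t in pop_a]  # identical except no zero-threshold instigator
    print("population A participates:", final_share(pop_a))  # cascades to 1.0
    print("population B participates:", final_share(pop_b))  # stalls at 0.0
```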

So there are some good foundational reasons for expecting a degree of continuity in the social environment; but there are also convincing models of social behavior that lead to important instances of discontinuous outcomes. This all seems to lead to the slightly worrisome piece of advice: don’t bet on the future when the stakes are high. Stock markets collapse; unexpected wars occur; and previously harmonious social groups fall into fratricidal violence. And there is no fool-proof way of determining whether a singularity is just around the corner.

Equilibrium reasoning

A system is in equilibrium with respect to a given characteristic when there is a set of forces in play that pushes the system back toward that state whenever it is subjected to small disturbances or changes. Such a system is referred to as homeostatic.

The temperature in a goldfish bowl is in equilibrium if the bowl is provided with a thermostatically controlled heater and cooler; when the external temperature falls and the water temperature begins to fall as well, the thermostat registers the change of temperature and turns on the heater, and when the external temperature rises, the thermostat turns on the cooler. A population of squirrels in a bounded forest may reach an equilibrium size that is balanced by excess reproductive capacity (pushing the population upwards when it falls below the feeding capacity of the environment) and by excess mortality from poor nutrition (pushing the population downwards when it rises above the feeding capacity of the environment).

These examples embody very different causal mechanisms; but each represents a case in which the variable in question (temperature or population size) oscillates around the equilibrium value determined by the features of the environment and the features of the adjustment mechanisms.
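Here is a minimal sketch of a homeostatic mechanism along the lines of the thermostat example (a toy model of my own, with arbitrary parameters): a corrective feedback term pulls the temperature back toward a setpoint after each disturbance, so the value oscillates around the equilibrium rather than drifting away.

```python
def regulate(setpoint=22.0, start=18.0, gain=0.5,
             disturbances=(0.0, -3.0, 0.0, 2.0, 0.0, 0.0)):
    """Simulate a thermostat: each step a shock hits, then feedback corrects it."""
    temp = start
    history = [temp]
    for shock in disturbances:
        temp += shock                       # external disturbance (e.g., a cold night)
        temp += gain * (setpoint - temp)    # corrective feedback toward the setpoint
        history.append(temp)
    return history


if __name__ == "__main__":
    for step, temp in enumerate(regulate()):
        print(f"step {step}: {temp:.1f} degrees")  # oscillates around 22 and settles
```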

There are other physical systems where the concept of equilibrium has no role. The trajectory of a fly ball is not an equilibrium outcome, but rather the direct causal consequence of the collision between bat and ball. And if the course of the baseball is disturbed — by impact with a passing bird or an updraft of wind — then the terminus of the ball’s flight will be different. The number of telephone calls between Phoenix and Albany is not an equilibrium outcome, even if it is fairly stable over time, but simply the aggregate consequence of the contingent telephone behavior of large numbers of people in the two cities. So systems that reach and maintain equilibrium are somewhat special.

It is also interesting to observe that there are circumstances other than equilibrium processes that produce a stable steady state in a system. We may observe that elevators in a busy office building most frequently have 10 passengers. And the explanation for this may go along these lines: 10 is the maximum number of adults who can be squeezed into the elevator car; there are always many people waiting for an elevator; so virtually every car is at full capacity of 10 persons. This is a pattern deriving from a level of demand that forces full utilization combined with a limit condition on the space in which the activity takes place. In this example, 10 passengers is not an equilibrium outcome but rather a forced outcome deriving from excess demand and a logistical constraint on the volume of activity. A large city may show a population history that indicates a trend of population increase from 4 million to 6 million to 10 million — and then it stops growing. And the explanation of the eventual stable population size of 10 million may depend on the fact that the water supplies available to the city cannot support a population significantly larger than 10 million.

To what extent are social ensembles and processes involved in equilibrium conditions? The paradigm example of equilibrium reasoning in the social sciences arises in microeconomic theory. Supply and demand curves are postulated as being fixed, and the price of a good is the equilibrium position where the quantity produced at this price is equal to the quantity consumed at this price. If the price rises above the equilibrium, the quantity demanded falls short of the quantity supplied, and the excess supply pushes the price back down; if the price falls below the equilibrium, producers manufacture less of the good and consumers demand more of it, and the excess demand induces a price rise.
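This adjustment story can be written out as a simple price-adjustment (tatonnement) sketch; the linear supply and demand curves and the numbers here are my own assumptions, chosen only to show the price converging to the value where quantity supplied equals quantity demanded.

```python
def demand(price):
    return 100 - 2 * price  # quantity consumers want at this price


def supply(price):
    return 10 + price       # quantity producers offer at this price


def adjust_price(price=10.0, speed=0.2, rounds=12):
    """Raise or lower the price in proportion to excess demand each round."""
    history = [price]
    for _ in range(rounds):
        excess = demand(price) - supply(price)
        price += speed * excess
        history.append(price)
    return history


if __name__ == "__main__":
    for step, price in enumerate(adjust_price()):
        print(f"round {step:2d}: price {price:.2f}")  # converges toward p* = 30
```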

Another important example of equilibrium analysis in social behavior is the application of central place theory to economic geography. The theory is that places (cities, towns, villages) will be positioned across the countryside in a way that embodies a set of urban hierarchies and a set of commercial pathways. The topology of the system and the size of the nodes are postulated to be controlled by social variables such as transport cost and demand density. And individuals’ habitation decisions are influenced in a way that reinforces the topology and size hierarchy of the central place system.

However, even in these simple examples there are circumstances that can make the equilibrium condition difficult to attain. If the supply and demand curves shift periodically, then the equilibrium price itself moves around. If the price and production responses are too large, then the system may keep bouncing around the equilibrium price, overshooting from “too high” to “too low” without ever fine-tuning production and consumption to match. The resulting behavior would look like a graph of the stock market rather than a stable, regular system returning to its “equilibrium” value. And, in the case of habitation patterns, some places may gain a reputation for fun that offsets their disadvantages from the point of view of transport and demand density — thereby disrupting the expected equilibrium outcomes.

So if the conditions defining the terms of an equilibrium change too quickly, or if the feedback mechanisms that work to adjust the system value to current equilibrium conditions are too coarse, then we should not expect the system to arrive at an equilibrium state. (The marble rolling on a rotating dinner plate will continue to roll across all parts of the plate rather than arriving at the lowest point on the plate and staying there.)
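The same hypothetical market illustrates the point about coarse feedback: if the price response to excess demand is too large, each round overshoots the equilibrium (which is 30 in this toy example), and the price bounces between too high and too low with growing amplitude instead of settling.

```python
def excess_demand(price):
    return (100 - 2 * price) - (10 + price)  # same hypothetical linear market as above


price = 25.0
for step in range(10):
    print(f"round {step}: price {price:.2f}")
    price += 0.7 * excess_demand(price)       # adjustment step is too coarse; it overshoots
```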

I’m inclined to think that equilibria are relatively rare in the social world. The reasons for this are several: it is uncommon to discover homeostatic mechanisms that adjust social variables; when quasi-homeostatic mechanisms exist, they are often too coarse to lead to equilibrium; and, most fundamentally, the constraints that constitute the boundary conditions for idealized equilibria among social variables are often themselves changing too rapidly for an equilibrium to emerge. Instead, social outcomes more often look like constrained random walks, in which social actions occur in a fairly uncoordinated way at the individual level and aggregate to singular social outcomes that are highly path-dependent and contingent. Social outcomes are more often stochastic than homeostatic.
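The contrast can be made vivid with one last sketch of my own (an illustration, not an empirical claim): a series that is repeatedly pulled back toward a reference value stays near it, while a random walk simply accumulates shocks and wanders in a path-dependent way.

```python
import random


def homeostatic_series(steps=50, target=0.0, gain=0.3, noise=1.0, seed=2):
    """Each step, a corrective force pulls the value back toward the target."""
    rng, value, path = random.Random(seed), 0.0, []
    for _ in range(steps):
        value += gain * (target - value) + rng.gauss(0, noise)
        path.append(value)
    return path


def random_walk(steps=50, noise=1.0, seed=2):
    """Each step simply adds a shock; nothing pulls the value back."""
    rng, value, path = random.Random(seed), 0.0, []
    for _ in range(steps):
        value += rng.gauss(0, noise)
        path.append(value)
    return path


if __name__ == "__main__":
    print("homeostatic series ends at", round(homeostatic_series()[-1], 2))
    print("random walk ends at", round(random_walk()[-1], 2))
```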
