Generativity and emergence

Social entities and structures have properties that exercise causal influence over all of us, and over the continuing development of the society in which we live. Schools, corporations, armies, terror networks, transport networks, markets, churches, and cities all fall in this range — they are social compounds or entities that shape the behavior of the individuals who live and work within them, and they have substantial effects on the broader society as well.

So it is unsurprising that sociologists and ordinary observers alike refer to social structures, organizations, and practices as real components of the social world. Social entities have properties that make a difference, at the individual level and at the social and historical level. Individuals are influenced by the rules and practices of the organizations that employ them; and political movements are influenced by the competition that exists among various religious organizations. Putting the point simply, social entities have real causal properties that influence daily life and the course of history.

What is less clear in the social sciences, and in the areas of philosophy that take an interest in such things, is where those causal properties come from. We know from physics that the causal properties of metallic silver derive from the quantum-level properties of the atoms that make it up. Is something parallel to this true in the social realm as well? Do the causal properties of a corporation derive from the properties of the individual human beings who make it up? Are social properties reducible to individual-level facts?

John Stuart Mill was an early advocate of methodological individualism. In 1843 he published A System of Logic, Ratiocinative and Inductive, which contains his view of the relationship between the social world and the world of individual thought and action:

All phenomena of society are phenomena of human nature, generated by the action of outward circumstances upon masses of human beings; and if, therefore, the phenomena of human thought, feeling, and action are subject to fixed laws, the phenomena of society can not but conform to fixed laws. (Book VI, chap. VI, sect. 2)

With this position he set the stage for much of the thinking in social science disciplines like economics and political science, with the philosophical theory of methodological individualism.

About sixty years later Emile Durkheim took the opposite view. He believed that social properties are autonomous with respect to the individuals who underlie them. In 1901 he wrote in the preface to the second edition of The Rules of Sociological Method:

Whenever certain elements combine and thereby produce, by the fact of their combination, new phenomena, it is plain that these new phenomena reside not in the original elements but in the totality formed by their union. The living cell contains nothing but mineral particles, as society contains nothing but individuals. Yet it is patently impossible for the phenomena characteristic of life to reside in the atoms of hydrogen, oxygen, carbon, and nitrogen…. Let us apply this principle to sociology. If, as we may say, this synthesis constituting every society yields new phenomena, differing from those which take place in individual consciousness, we must, indeed, admit that these facts reside exclusively in the very society itself which produces them, and not in its parts, i.e., its members…. These new phenomena cannot be reduced to their elements. (preface to the 2nd edition)

These ideas provided the basis for what we can call “methodological holism”.

So the issue between Mill and Durkheim is the question of whether the properties of the higher-level social entity can be derived from the properties of the individuals who make up that entity. Mill believed yes, and Durkheim believed no.

This debate persists to the current day, and the positions on each side have become more developed, more nuanced, and more directly relevant to social-science research. Consider first what we might call “generativist social-science modeling”. This approach holds that methodological individualism is obviously true, and that the central task for the social sciences is actually to perform the reduction of social properties to the actions of individuals, by providing computational models that reproduce the social property based on a model of the interacting individuals. These models are called “agent-based models” (ABMs). Computational social scientist Joshua Epstein is a recognized leader in this field, and his book Growing Artificial Societies: Social Science From the Bottom Up provides developed examples of ABMs designed to explain well-known social phenomena, from the disappearance of the Anasazi in the American Southwest to the occurrence of social unrest. Here is his summary statement of the approach:

To the generativist, explaining macroscopic social regularities, such as norms, spatial patterns, contagion dynamics, or institutions requires that one answer the following question: How could the autonomous local interactions of heterogeneous boundedly rational agents generate the given regularity? Accordingly, to explain macroscopic social patterns, we generate—or “grow”—them in agent models.

Epstein’s memorable aphorism summarizes the field — “If you didn’t grow it, you didn’t explain its emergence.” A very clear early example of this approach is an agent-based simulation of residential segregation provided by Thomas Schelling in “Dynamic Models of Segregation” (Journal of Mathematical Sociology, 1971; link). The model shows that simple assumptions about the neighborhood-composition preferences of individuals of two groups, combined with the fact that individuals can freely move to locations that satisfy their preferences, leads almost invariably to strongly segregated urban areas.
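The logic of Schelling’s result is easy to reproduce. The sketch below is a minimal, illustrative implementation, not Schelling’s original specification: two types of agents live on a wrapped grid, and any agent with fewer than 30 percent same-type neighbors moves to a randomly chosen vacant cell. The grid size, vacancy rate, and threshold values are arbitrary choices for the illustration.

```python
import random

def run_schelling(size=20, vacancy=0.1, threshold=0.3, steps=60, seed=42):
    """Toy Schelling dynamics: unhappy agents jump to random vacant cells."""
    rng = random.Random(seed)
    n_agents = int(size * size * (1 - vacancy) / 2) * 2
    cells = [0, 1] * (n_agents // 2) + [None] * (size * size - n_agents)
    rng.shuffle(cells)
    grid = {(r, c): cells[r * size + c]
            for r in range(size) for c in range(size)}

    def occupied_neighbors(r, c):
        # The eight surrounding cells, wrapping at the edges.
        return [grid[(r + dr) % size, (c + dc) % size]
                for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                if (dr, dc) != (0, 0)
                and grid[(r + dr) % size, (c + dc) % size] is not None]

    def similarity(r, c):
        # Fraction of occupied neighbors sharing this agent's type.
        occ = occupied_neighbors(r, c)
        return sum(n == grid[r, c] for n in occ) / len(occ) if occ else 1.0

    def mean_similarity():
        vals = [similarity(r, c)
                for (r, c), v in grid.items() if v is not None]
        return sum(vals) / len(vals)

    start = mean_similarity()
    for _ in range(steps):
        unhappy = [pos for pos, v in grid.items()
                   if v is not None and similarity(*pos) < threshold]
        if not unhappy:
            break
        vacant = [pos for pos, v in grid.items() if v is None]
        rng.shuffle(unhappy)
        for pos in unhappy:
            dest = vacant.pop(rng.randrange(len(vacant)))
            grid[dest] = grid[pos]
            grid[pos] = None
            vacant.append(pos)
    return start, mean_similarity()
```

In typical runs mean neighborhood similarity rises well above its random-mixing starting level of roughly 0.5, even though no individual agent wants more than a modest fraction of same-type neighbors: the macro-pattern of segregation is “grown” from individually mild preferences.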

There is a surface plausibility to the generativist approach, but close inspection of many of these simulations lays bare some important deficiencies. In particular, a social simulation necessarily abstracts mercilessly from the complexities of both the social environment and the dynamics of individual action. It is difficult to represent the workings of higher-level social entities within an agent-based model — for example, organizations and social practices. And ABMs are not well designed for the task of representing dynamic social features that other researchers on social action take to be fundamental — for example, the quality of leadership, the content of political messages, or the high degree of path dependence that most real instances of political mobilization reflect.

So if methodological individualism is a poor guide to social research, what is the alternative? The strongest opposition to generativism and reductionism is the view that social properties are “emergent”. This means that social ensembles sometimes possess properties that cannot be explained by or reduced to the properties and actions of the participants. For example, it is sometimes thought that a political movement (e.g. Egyptian activism in Tahrir Square in 2011) possessed characteristics that were different in kind from the properties of the individuals and activists who made it up.

There are a few research communities currently advocating for a strong concept of emergence. One is the field of critical realism, a philosophy of science developed by Roy Bhaskar in A Realist Theory of Science (1975) and The Possibility of Naturalism (1979). According to Bhaskar, we need to investigate the social world by looking for the real (though usually unobservable) mechanisms that give rise to social stability and change. Bhaskar is anti-reductionist, and he maintains that social entities have properties that are different in kind from the properties of individuals. In particular, he believes that the social mechanisms that generate the social world are themselves created by the autonomous causal powers of social entities and structures. So attempting to reduce a process of social change to the actions of the individuals who make it up is a useless exercise; these individuals are themselves influenced by the autonomous causal powers of larger social forces.

Another important current line of thought that defends the idea of emergence is the theory of assemblage, drawn from Gilles Deleuze and substantially developed by Manuel DeLanda in A New Philosophy of Society: Assemblage Theory and Social Complexity (2006) and Assemblage Theory (2016). This theory argues for a very different way of conceptualizing the social world. It proposes that we should understand complex social entities as compounds of heterogeneous and independent lesser entities, structures, and practices. Social entities do not have “essences”. Instead, they are contingent and heterogeneous ensembles of parts that have been brought together in contingent ways. But crucially, DeLanda maintains that assemblages too have emergent properties that do not derive directly from the properties of the parts. A city has properties that cannot be explained in terms of the properties of its parts. So assemblage theory too is anti-reductionist.

The claim of emergence too has a superficial appeal. It is clear, for one thing, that social entities have effects that are autonomous with respect to the particular individuals who compose them. And it is clear as well that there are social properties that have no counterpart at the individual level (for example, social cohesion). So there is a weak sense in which it is possible to accept a concept of emergence. However, that weak sense rules out neither generativity nor reduction in principle; it is possible to hold both generativity and weak emergence consistently. And the stronger sense — that emergent properties are unrelated to and underivable from lower-level properties — seems flatly irrational. What could strongly emergent properties depend on, if not the individuals and social relations that make up these higher-level social entities?

For this reason it is reasonable for social scientists to question both generativity and strong emergence. We are better off avoiding the strong claims of both generativity and emergence, in favor of a more modest social theory. Instead, it is reasonable to advocate for the idea of the relative explanatory autonomy of social properties. This position comes down to a number of related ideas. Social properties are ultimately fixed by the actions and thoughts of socially constituted individuals. Social properties are stable enough to admit of direct investigation. Social properties are relatively autonomous with respect to the specific individuals who occupy positions within these structures. And there is no compulsion to perform reductions of social properties through ABMs or any other kind of derivation. (These are ideas that were first advocated in 1974 by Jerry Fodor in “Special sciences: Or: The disunity of science as a working hypothesis” (link).)

It is interesting to note that a new field of social science, complexity studies, has relevance to both ends of this dichotomy. Joshua Epstein himself is a complexity theorist, dedicated to discovering mathematical methods for understanding complex systems. Other complexity scientists like John Miller and Scott Page are open to the idea of weak emergence in Complex Adaptive Systems: An Introduction to Computational Models of Social Life. Here is how Miller and Page address the idea of emergence in CAS:

The usual notion put forth underlying emergence is that individual, localized behavior aggregates into global behavior that is, in some sense, disconnected from its origins. Such a disconnection implies that, within limits, the details of the local behavior do not matter to the aggregate outcome. (CAS, p. 44)

Herbert Simon is another key contributor to modern complexity studies. Simon believed that complex systems have properties that are irreducible to the properties of their components for pragmatic reasons, including especially computational intractability. It is therefore reasonable, in his estimation, to treat higher-level social properties as emergent — even though we believe in principle that these properties are ultimately determined by the properties of the components. Here is his treatment in The Sciences of the Artificial (3rd edition, 1996):

[This amounts to] reductionism in principle even though it is not easy (often not even computationally feasible) to infer rigorously the properties of the whole from knowledge of the properties of the parts. In this pragmatic way, we can build nearly independent theories for each successive level of complexity, but at the same time, build bridging theories that show how each higher level can be accounted for in terms of the elements and relations of the next level down. (172)

The debate over generativity and emergence may seem like an arcane issue that is of interest only to philosophers and the most theoretical of social scientists. But in fact, disputes like this one have real consequences for the conduct of an area of scientific research. Suppose we are interested in the sociology of hate-based social movements. If we begin with the framework of reductionism and generativism, we may be led to focus on the social psychology of adherents and the aggregative processes through which potential followers are recruited into a hate-based movement. If, on the other hand, we believe that social structures and practices have relatively autonomous causal properties, then we will be led to consider the empirical specifics of the workings of organizations like White Citizens Councils, legal structures like the laws that govern hate-based political expressions in Germany and France, and the ways that the Internet may influence the spread of hate-based values and activism. In each of these cases the empirical research is directed in important measure to the concrete workings of the higher-level social institutions that are hypothesized to influence the emergence and shape of hate-based movements. In other words, the sociological research that we conduct is guided in part by the assumptions we make about social ontology and the composition of the social world.

Social generativity and complexity

The idea of generativity in the realm of the social world expresses the notion that social phenomena are generated by the actions and thoughts of the individuals who constitute them, and nothing else (link). More specifically, the principle of generativity postulates that the properties and dynamic characteristics of social entities like structures, ideologies, knowledge systems, institutions, and economic systems are produced by the actions, thoughts, and dispositions of the set of individuals who make them up. There is no other kind of influence that contributes to the causal and dynamic properties of social entities. Begin with a population of individuals with such-and-so mental and behavioral characteristics; allow them to interact with each other over time; and the structures we observe emerge as a determinate consequence of these interactions.

This view of the social world lends great ontological support to the methods associated with agent-based models (link). Here is how Joshua Epstein puts the idea in Generative Social Science: Studies in Agent-Based Computational Modeling:

Agent-based models provide computational demonstrations that a given microspecification is in fact sufficient to generate a macrostructure of interest…. Rather, the generativist wants an account of the configuration’s attainment by a decentralized system of heterogeneous autonomous agents. Thus, the motto of generative social science, if you will, is: If you didn’t grow it, you didn’t explain its emergence. (42)

Consider an analogy with cooking. The properties of the cake are generated by the properties of the ingredients, their chemical properties, and the sequence of steps that are applied to the assemblage of the mixture from the mixing bowl to the oven to the cooling board. The final characteristics of the cake are simply the consequence of the chemistry of the ingredients and the series of physical influences that were applied in a given sequence.

Now consider the concept of a complex system. A complex system is one in which there is a multiplicity of causal factors contributing to the dynamics of the system, in which there are causal interactions among the underlying causal factors, and in which causal interactions are often non-linear. Non-linearity is important here, because it implies that a small change in one or more factors may lead to very large changes in the outcome. We like to think of causal systems as consisting of causal factors whose effects are independent of each other and whose influence is linear and additive.

A gardener is justified in thinking of growing tomatoes in this way: a little more fertilizer, a little more water, and a little more sunlight each lead to a little more tomato growth. But imagine a garden in which the effect of fertilizer on tomato growth depends on the recent gradient of water provision, and the effects of both positive influences depend substantially on the recent amount of sunlight available. Under these circumstances it is difficult to predict the aggregate size of the tomato given information about the quantities of the inputs.

One of the key insights of complexity science is that generativity is fully compatible with a wicked level of complexity. The tomato’s size is generated by its history of growth, determined by the sequence of inputs over time. But for the reason just mentioned, the complexity of interactions among water, sunlight, and fertilizer in their effects on growth means that the overall dynamics of tomato growth are difficult to reconstruct.
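The difference between the additive garden and the interactive garden can be made concrete in a few lines. The toy functions below are invented for the illustration; the coefficients and the particular interaction form are arbitrary assumptions. The point is that in the additive model the marginal effect of fertilizer is a constant, while in the interactive model it depends on the watering history and the sunlight.

```python
def additive_growth(fertilizer, water, sunlight):
    # Each input contributes independently and linearly to growth.
    return 1.0 + 0.5 * fertilizer + 0.3 * water + 0.2 * sunlight

def interactive_growth(fertilizer, water, sunlight):
    # Fertilizer pays off only insofar as water is adequate, and both
    # effects are scaled by available sunlight: a non-linear interaction.
    return 1.0 + sunlight * (0.5 * fertilizer * min(water, 1.0) + 0.3 * water)

def marginal_fertilizer(model, f, w, s, df=0.1):
    # Effect on growth of a small additional dose of fertilizer.
    return model(f + df, w, s) - model(f, w, s)
```

In the additive model the gardener’s rule of thumb works: the marginal return to fertilizer is the same whatever the watering history. In the interactive model the same question has no context-free answer, which is why the aggregate outcome is hard to predict from the inputs alone.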

Now consider the idea of strong emergence — the idea that some aggregates possess properties that cannot in principle be explained by reference to the causal properties of the constituents of the aggregate. This means that the properties of the aggregate are not generated by the workings of the constituents; otherwise we would be able in principle to explain the properties of the aggregate by demonstrating how they derive from the (complex) pathways leading from the constituents to the aggregate. This version of the absolute autonomy of some higher-level properties is inherently mysterious. It implies that the aggregate does not supervene upon the properties of the constituents; there could be different aggregate properties with identical constituent properties. And this seems ontologically untenable.

The idea of ontological individualism captures this intuition in the setting of social phenomena: social entities are ultimately composed of and constituted by the properties of the individuals who make them up, and nothing else. This does not imply methodological individualism; for reasons of complexity or computational limitations it may be practically impossible to reconstruct the pathways through which the social entity is generated out of the properties of individuals. But ontological individualism places an ontological constraint on the way that we conceptualize the social world. And it gives a concrete meaning to the idea of the microfoundations for a social entity. The microfoundations of a social entity are the pathways and mechanisms, known or unknown, through which the social entity is generated by the actions and intentionality of the individuals who constitute it.

What is anchor individualism?

Brian Epstein has attempted over the past several years to shake up some of our fundamental assumptions about the social world by challenging the idea of “ontological individualism” — the idea that social things consist of facts about individuals in action, thought, and interaction, and nothing else. Here is how he puts the idea in “Ontological Individualism Reconsidered”: “Ontological individualism is the thesis that facts about individuals exhaustively determine social facts” (link). He believes this ontological concept is false; he disputes the idea that the social world supervenes upon facts about individuals; and he argues that there are some social facts or circumstances that cannot be parsed in terms of facts about combinations of individuals. His arguments are pulled together in a very coherent way in The Ant Trap: Rebuilding the Foundations of the Social Sciences, but he has made the case in earlier articles as well (link).

Epstein’s primary reason for doubting ontological individualism is a notion he shares with John Searle: that social action often involves a setting of law, convention, interpretation, presupposition, implicature, or rule that cannot be “reduced” to facts interior to the individuals involved in an activity. Searle’s concept of a “status fact” is an example (link): the fact that John is an Imam is not a purely individual-level fact about John. Instead, it presupposes a structure of religious institutions, rules, procedures, and beliefs, in light of which John’s history of interactions with other individuals and settings qualifies him as “Imam”.

There is another kind of individualism that Epstein considers as a more adequate version — what he refers to as “anchor individualism.” The diagram below represents his graphical explanation of the relationship between anchor individualism and ontological individualism. What does he mean by this idea?

Here is one of his efforts to explain the point:

What I will call “anchor individualism” is a claim about how frame principles can be anchored. Ontological individualism, in contrast, is best understood as a claim about how social facts can be grounded. (101)

Frames, evidently, are institutional contexts, or contexts of meaning, in the terms of which individual actions are situated. They constitute the difference between a bare set of behaviors and a full-blooded social action. Alfred lifts his right hand to his cap; this is a bodily motion. Alfred salutes his superior officer; this is an institutionally defined action that depends upon a frame of military authority and obligation, in the context of which the behavior constitutes a certain kind of social action. (This sounds rather similar, incidentally, to Ryle and Geertz on the “wink” and the distinction between thin and thick description; Geertz, “Thick Description” in The Interpretation of Cultures.) A frame principle is a stipulation of how an action, performance, or symbolic artifact is constituted, what makes it the socially meaningful thing that it is — a hundred dollar bill, a first-degree murder, or an Orthodox rabbi. Plainly a frame principle looks a lot like a rule or a constitutive declaration: “any person who has received the degree of Bachelor of Science in Accounting, completed 150 credit hours of study, and passed the CPA exam counts as a ‘certified public accountant’.”

But a mere stipulation of status is not sufficient. If one person individually decides that a university president shall henceforward be understood to have the authority to perform marriage ceremonies, this private declaration does not change the status definition of “university president.” Rather, the stipulation must itself have some sort of social validity. It must be “anchored”. We can say specifically what would be required to anchor the status definition of university president considered here: it would require a valid act of legislation that creates this power, and there would need to be widespread recognition of the political legitimacy and bindingness of the new legislation.

Epstein observes that Searle believes that anchoring of a frame principle always comes down to “collective acceptance” (103). But Epstein notes that other theorists have a broader conception of anchoring: attitudes, conforming behaviors, conventions, shared values about political legitimacy, acts of legislatures, and so on. What anchor individualism asserts is that each of these forms of anchoring can be related to the attitudes, beliefs, and performances of individuals and groups of individuals.

So on Epstein’s view, there are two complementary versions of individualism. Ontological individualism is a thesis about what is required for grounding a social fact; it maintains that social facts are grounded in the behaviors and thoughts of individuals. But Epstein thinks there is still something else to represent in our picture of social ontology. We need to be able to specify what circumstances anchor the frame principles themselves — that is, the circumstances that make an action or performance the kind of action that it is. To call a performance a “marriage” brings with it a long set of presuppositions about history, status, and validity. These presuppositions constitute a certain kind of frame principle. But we can then ask what makes the frame principle binding in the circumstances. This is where anchoring comes in; anchoring is the set of facts that create or document the “bindingness” of the frame principles in question.

In my reading, what makes this view distinctive from traditional thinking about the relationship between individuals and social facts is the effort it represents to formalize the logical standing of circumstances that are intuitively crucial in social interactions: the significance, rule-abidingness, legitimacy, and conventionality of a given individual-level behavior. And these circumstances are necessarily distributed across a large group of people, involving the kinds of socially reflexive ideas that Searle thinks are constitutive of the social world: presuppositions, implicatures, rules, rituals, conventions, meanings, and practices. There is no private language, and there is no private practice. (There are things we do purely individually and privately; but these do not constitute “practices” in the socially meaningful sense.) So the kinds of things that an anchor analysis calls out are social things.

But it also seems fair to observe that the facts that anchor a practice, convention, or rule are indeed facts that depend upon states of mind and action of individual actors. So anchor individualism remains a coherent kind of individualism. These anchoring facts have microfoundations in the thoughts, behavior, habits, and practices of socially situated individuals.

Are emergence and microfoundations contraries?

image: micro-structure of a nanomaterial (link)

Are there strong logical relationships among the ideas of emergence, microfoundations, generative dependency, and supervenience? It appears that there are.

The diagram represents the social world as a laminated set of layers of entities, processes, powers, and laws. Entities at L2 are composed of or caused by some set of entities and forces at L1. Likewise L3 and L4. Arrows indicate microfoundations for L2 facts based on L1 facts. Diamond-tipped arrows indicate the relation of generative dependence from one level to another. Square-tipped lines indicate the presence of strongly emergent facts at the higher level relative to the lower level. The solid line (L4) represents the possibility of a level of social fact that is not generatively dependent upon lower levels. The vertical ellipse at the right indicates the possibility of microfoundations narratives involving elements at different levels of the social world (individual and organizational, for example).

We might think of these levels as “individuals”; “organizations, value communities, social networks”; “large aggregate institutions like states”; etc.

This is only one way of trying to represent the structure of the social world. The notion of a “flat” ontology was considered in an earlier post (link). Another structure that is excluded by this diagram is one in which there is multi-directional causation across levels, both upwards and downwards. For example, the diagram excludes the possibility that L3 entities have causal powers that are original and independent from the powers of L2 or L1 entities. The laminated view described here is the assumption built into debates about microfoundations, supervenience, and emergence. It reflects the language of micro, meso, and macro levels of social action and organization.

Here are definitions for several of the primary concepts.

  • Microfoundations of facts in L2 based on facts in L1: accounts of the causal pathways through which entities, processes, powers, and laws of L1 bring about specific outcomes in L2. Microfoundations are small causal theories linking lower-level entities to higher-level outcomes.
  • Generative dependence of L2 upon L1: the entities, processes, powers, and laws of L2 are generated by the properties of level L1 and nothing else. Alternatively, the entities, processes, powers, and laws of L1 suffice to generate all the properties of L2. A full theory of L1 suffices to derive the entities, processes, powers, and laws of L2.
  • Reducibility of y to x: it is possible to provide a theoretical or formal derivation of the properties of y based solely on facts about x.
  • Strong emergence of properties in L2 with respect to the properties of L1: L2 possesses some properties that do not depend wholly upon the properties of L1.
  • Weak emergence of properties in L2 with respect to the properties of L1: L2 possesses some properties for which we cannot (now or in the future) provide derivations based wholly upon the properties of L1.
  • Supervenience of L2 with respect to properties of L1: all the properties of L2 depend strictly upon the properties of L1 and nothing else.

We also can make an effort to define some of these concepts more formally in terms of the diagram.

Consider these statements about facts at levels L1 and L2:

  1. UM: all facts at L2 possess microfoundations at L1. 
  2. XM: some facts at L2 possess inferred but unknown microfoundations at L1. 
  3. SM: some facts at L2 do not possess any microfoundations at L1. 
  4. SE: L2 is strongly emergent from L1. 
  5. WE: L2 is weakly emergent from L1. 
  6. GD: L2 is generatively dependent upon L1. 
  7. R: L2 is reducible to L1. 
  8. D: L2 is determined by L1. 
  9. SS: L2 supervenes upon L1. 

Here are some of the logical relations that appear to exist among these statements.

  1. UM => GD 
  2. UM => ~SE 
  3. XM => WE 
  4. SE => ~UM 
  5. SE => ~GD 
  6. GD => R 
  7. GD => D 
  8. SM => SE 
  9. UM => SS 
  10. GD => SS 

On this analysis, the question of the availability of microfoundations for social facts can be understood to be central to all the other issues: reducibility, emergence, generativity, and supervenience. There are several positions that we can take with respect to the availability of microfoundations for higher-level social facts.

  1. If we have convincing reason to believe that all social facts possess microfoundations at a lower level (known or unknown) then we know that the social world supervenes upon the micro-level; strong emergence is ruled out; weak emergence is true only so long as some microfoundations remain unknown; and higher-level social facts are generatively dependent upon the micro-level.   
  2. If we take a pragmatic view of the social sciences and conclude that any given stage of knowledge provides information about only a subset of possible microfoundations for higher-level facts, then we are at liberty to take the view that each level of social ontology is at least weakly emergent from lower levels — basically, the point of view advocated under the banner of “relative explanatory autonomy” (link). This also appears to be roughly the position taken by Herbert Simon (link). 
  3. If we believe that it is impossible in principle to fully specify the microfoundations of all social facts, then weak emergence is true; supervenience is false; and generativity is false. (For example, we might believe this to be true because of the difficulty of modeling and calculating a sufficiently large and complex domain of units.) This is the situation that Fodor believes to be the case for many of the special sciences. 
  4. If we have reason to believe that some higher-level facts simply do not possess microfoundations at a lower level, then strong emergence is true; the social world is not generatively dependent upon the micro-world; and the social world does not supervene upon the micro-world. 

In other words, it appears that each of the concepts of supervenience, reduction, emergence, and generative dependence can be defined in terms of the availability or unavailability of microfoundations for some or all of the facts at a higher level, based on facts at the lower level. Strong emergence and generative dependence turn out to be logical contraries (witness the final two definitions above).
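The contrariety claim can be checked mechanically. Here is a minimal brute-force sketch in Python; the encoding of the doctrines as boolean variables and the enumeration strategy are mine, not part of the original analysis (UM, XM, SM, SE, and WE are the doctrines defined earlier in the text; GD, R, D, and SS are items 6–9 above).

```python
from itertools import product

# The nine doctrines from the numbered lists above.
VARS = ["UM", "XM", "SM", "SE", "WE", "GD", "R", "D", "SS"]

def implies(p, q):
    """Material implication: p => q."""
    return (not p) or q

def consistent(v):
    """Check a truth assignment against the ten logical relations listed above."""
    return all([
        implies(v["UM"], v["GD"]),        # 1. UM => GD
        implies(v["UM"], not v["SE"]),    # 2. UM => ~SE
        implies(v["XM"], v["WE"]),        # 3. XM => WE
        implies(v["SE"], not v["UM"]),    # 4. SE => ~UM
        implies(v["SE"], not v["GD"]),    # 5. SE => ~GD
        implies(v["GD"], v["R"]),         # 6. GD => R
        implies(v["GD"], v["D"]),         # 7. GD => D
        implies(v["SM"], v["SE"]),        # 8. SM => SE
        implies(v["UM"], v["SS"]),        # 9. UM => SS
        implies(v["GD"], v["SS"]),        # 10. GD => SS
    ])

models = [dict(zip(VARS, bits))
          for bits in product([False, True], repeat=len(VARS))
          if consistent(dict(zip(VARS, bits)))]

# Contraries: no consistent assignment makes SE and GD both true...
assert not any(m["SE"] and m["GD"] for m in models)
# ...but not contradictories: both can be false together.
assert any(not m["SE"] and not m["GD"] for m in models)
```

The check merely confirms what implication 5 already states directly, but enumerating all 512 assignments also shows which combinations of positions remain jointly consistent under the full set of relations.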


Social causation


The idea of social causation is a difficult one, as we dig more deeply into it. What does it mean to say that “poor education causes increased risk of delinquency” or “population growth causes technology change” or “the existence of paramilitary organizations contributed to the rise of German fascism”? What sorts of things can function as “social causes” — events, structures, actions, forces, other? What social interactions extend over time in the social world to establish the links between cause and effect? What kinds of evidence are available to support the claim that “social factor X causes a change in social factor Y”?

Helen Beebee, Christopher Hitchcock, and Peter Menzies’ The Oxford Handbook of Causation is a valuable resource on topics involving the philosophy of causation, and several of the contributions are immediately relevant to current debates within the philosophy of social science.

Harold Kincaid considers a number of the hard questions about social causation in his contribution to the Handbook, “Causation in the Social Sciences”. Perhaps most relevant to my ongoing concerns is his defense of non-reductionist claims about social causation. It is often maintained (by methodological individualists) that causal relations exist only among individuals, not among higher-level social entities or structures. (Elster and Hedstrom make claims along these lines in multiple places.) Kincaid rejects this view and affirms the legitimacy of macro- or meso-level causal assertions.

When a particular corporation acts in a market, it has causal influence. The influence of that specific entity is realized by the actions of the individuals composing it just as the influence of the baseball on the breaking window is realized by the sum of particles composing it. The social level causal claims pick out real causal patterns as types that may not be captured by individual kinds because multiple realizability is real. (kl 16102)

These arguments are a valuable antidote to the tendency towards reductionism to the level of individual activity that has often guided philosophers when they have considered the nature of social causation.

Phil Dowe’s discussion of causal process theories is useful for the social sciences (“Causal Process Theories”). It is hard to think of the social world as an amalgam of discrete events; it is easier to think of a variety of processes unfolding, subject to a range of forces and obstacles. Dowe gives much of the credit for current interest in causal process language to Wesley Salmon (along with the resurrection of the idea of a “rope of causation” to replace that of a “chain of causation”).

For Salmon the causal structure of the world consists in the nexus of causal processes and interactions. A process is anything with constancy of structure over time. (kl 4924)

The language of causal processes seems to fit the nature of social causation better than that of events and systems of billiard balls. And we have the makings of a metaphysics of process available in the social sciences, in the form of a stream of actions and reactions of individuals aggregating to recognizable social patterns. So when we say that “population increase stimulates technology innovation”, we can picture the swarming series of interactions, demands, and opportunities that flows from greater population density, to rewards for innovation, to a more rapid rate of innovation.

Another useful contribution in the Handbook with special relevance to the social world is Stephen Mumford’s contribution, “Causal Powers and Capacities.”

The powers ontology accepts necessary connections in nature, in which the causal interactions of a thing, in virtue of its properties, can be essential to it. Instead of contingently related cause and effect, we have power and its manifestation, which remain distinct existences but with a necessary connection between. (kl 5971)

The language of causal powers allows us to incorporate a number of typical causal assertions in the social sciences: “Organizations of type X produce lower rates of industrial accidents,” “paramilitary organizations promote fascist mobilization,” “tenure systems in research universities promote higher levels of faculty research productivity.” In each case we are asserting that a certain kind of social organization possesses, in light of the specifics of its rules and functioning, a disposition to stimulate certain kinds of participant behavior and certain kinds of aggregate outcomes. This is to attribute a specific causal power to species of organizations and institutions.

Stuart Glennan’s “Mechanisms” is also highly relevant to causation in the social realm. Here is how Glennan puts the mechanisms theory (quoting his own earlier formulations):

Glennan … characterizes mechanisms in this way: ‘A mechanism underlying a behavior is a complex system which produces that behavior by the interaction of a number of parts according to direct causal laws’ (Glennan 1996: 52). Glennan then suggests that two events are causally related when and only when they are connected by an intervening mechanism. (kl 7069)

This definition works pretty well with typical examples of social mechanisms, with one important exception — the reference to “direct causal laws”. When we say that Organization X works to minimize accidents, the sub-transactions that are involved in the workings of the overall process are not typically “direct causal laws,” but rather the intelligible results of individual actors performing their actions within the rules, incentives, and sanctions of the organization. We can tell a mechanism story along these lines: “Organization X embodies a set of protocols of operation, a training regime, a supervisory regime, and an enforcement regime. The protocols have the result that, when followed consistently, accidents are rare. Employees are ‘programmed’ to perform their tasks according to this set of protocols. Supervisors are trained to observe and measure employee performance against the protocols. Enforcement provides sanctions and incentives for bad and good performance.” The complex mechanism of the organization works to implement and maintain the smooth functioning of the guiding protocols. So the organization embodies just the kind of complex system that Glennan describes as a mechanism.

So the Handbook is a good resource for all of us who are interested in working through a more satisfactory account of what it means to look at social phenomena as embodying causal relations in ways that support explanations.

(The photo of ice forming on glass included above is a metaphorical reference to social causation. Intricate patterns have emerged from a causal process; but it is a process that reflects a high degree of contingency and path-dependency across the expanse of the scene. And there is no overall order to the multiple patterns that emerge; each location is independent of other locations, and there is no answer to questions like this: “Why is Structure A located in the particular position and orientation that it is found to be?” Patterns coalesce and they do so as a result of locally operative causal processes, but there is no overall guiding hand or teleology to the process. The greatest disanalogy I can see here is the fact that the ice-formation process is much simpler than typical social-causal systems. Instead of a single causal mechanism at work in the ice case, there are dozens of overlapping and interactive causal mechanisms at work in most social processes.)

Elster on Tocqueville

Jon Elster is one of the people whose thinking about society and the social sciences has made a consistently important contribution to the philosophy of social science. So Elster’s treatment of Tocqueville as a social scientist in Alexis de Tocqueville, the First Social Scientist will be of interest to anyone who wants to know how we have come to analyze societies in the terms we have.

Elster demonstrates a deep familiarity with Tocqueville’s writings, though he focuses in this book on L’Ancien regime and Democracy in America. So Elster’s Tocqueville is textually well supported. At the same time, Tocqueville is not really a theoretical writer. Instead, it is necessary to infer his theoretical ideas from the comments he makes about historical events and actors. So Elster is forced to engage in a fair amount of rational reconstruction of the theories that underlay a variety of Tocqueville’s observations about the politics of France and America.

There are several elements of Elster’s interpretation of Tocqueville that seem particularly significant. One is Elster’s view that Tocqueville operated on the basis of a conception of social explanation that depended on social mechanisms rather than general laws. Elster believes that the most important feature of Tocqueville’s claim to being a sociologist is his consistent search for causes. The other key to Elster’s analysis of Tocqueville is his focus on features of the actor — reason, interests, and passions, or what Tocqueville refers to as “habits of the heart”.

Among the social mechanisms that Elster focuses on are those that surround preference formation. This question is plainly key to having a theory of political psychology: why do people make the choices that they do? He singles out three distinct psychological mechanisms that Tocqueville alludes to: the spillover effect, the compensation effect, and the satiation effect (kl 292). Preference formation is a topic that has consistently interested Elster, and he spends much time on the question in his early writings, including the formal question of time preferences.

What is “enlightened self-interest”? Elster finds that Tocqueville contrasts “egoism” with “enlightened self-interest” as well as with altruism. Egoism means an exclusive attention to one’s own interests in the moment. So it is opposed both to altruism (concern for the interests of others) and foresight (concern for one’s future interests) (kl 1113). (This bears out Amartya Sen’s comment in “Rational Fools” that the purely economic man is indeed close to being a social moron; link.)

Elster is particularly interested in Tocqueville’s treatment of the passions. He specifically discusses Envy, Fear, Hatred, Enthusiasm, Contempt, and Shame as emotions (passions) that often drive behavior in opposition to both interests and reason. This brings his discussion into intersection with that of Albert Hirschman in The Passions and the Interests. (The Kindle edition includes a very interesting introduction by Amartya Sen; link.) Hirschman’s book looks at the ways that early political economists and philosophers such as Smith and Hume thought about the relationships among reason, passion, and interest, with a view toward the generally moderating effects of interests on behavior in many historical settings. Elster finds a very similar line of thought in Tocqueville.

Elster addresses the topic of the micro-macro relationship in the conclusion. He finds that Tocqueville is interested in both directions of influence — from micro to macro and from macro to micro. He provides a diagram that looks a lot like an inverted version of Coleman’s boat:

Elster doesn’t put his views in these terms, but much of what he has to say about Tocqueville can be put in the category of piecing together Tocqueville’s theory of the actor: why people behave as they do. His discussions of preferences, individualism, norms, and passions all fall in the domain of a theory of the actor.

Elster’s treatment of Tocqueville is of interest in part because of its direct relevance to the explication of Tocqueville’s thought. But I find it more interesting for what it shows about Elster’s own thinking about sociological investigation. It is plain that Elster favors an actor-centered sociology. In some writings he explicitly describes his view as methodological individualism. Here the approach is somewhat more tolerant of schemes of explanation that are not directly reductionist. But it is focused on the varieties and sources of human action, and the ways that these features of action compound into unexpected social outcomes.

(Here is an earlier post where I discussed Tocqueville’s status as a founding sociologist; link.)

Supervenience of the social?

I have found it appealing to try to think of the macro-micro relation in terms of the idea of supervenience (link).  Supervenience is a concept that was developed in the context of physicalism and psychology, as a way of specifying a non-reductionist but still constraining relationship between psychological properties and physical states of the brain. Physicalism and ontological individualism are both ontological theories about the relationship between higher and lower levels of entities in several different domains. But neither doctrine dictates how explanations in these domains need to proceed; i.e., neither forces us to be reductionist in either psychology or sociology.

The supervenience relation holds that —

  • X supervenes on Y =df no difference in X without some difference in the states of Y

Analogously, to say that the “social” supervenes upon “the totality of individuals making up a social arrangement” seems to have a superficial plausibility, without requiring that we attempt to reduce the social characteristics to ensembles of facts about individuals.

I’m no longer so sure that this is a helpful move, however, for the purposes of the macro-micro relationship.  Suppose we are considering a statement along these lines:

  • The causal properties of organization X supervene on the states of the individuals who make up X and who interact with X.

There seem to be quite a few problems that arise when we try to make use of this idea.

(a) First, what are we thinking of when we specify “the states of the individuals”? Is it all characteristics, known and unknown? Or is it a specific list of characteristics? If it is all characteristics of the individual, including as-yet unknown characteristics, then the supervenience relation is impossible to apply in practice. We would never know whether two substrate populations were identical all the way down. This represents a kind of “twin-earth” thought experiment that doesn’t shed light on real sociological questions.

In the psychology-neurophysiology examples out of which supervenience theory originated these problems don’t seem so troubling. First, we think we know which properties of nerve cells are relevant to their functioning: electrical properties and network connections. So our supervenience claim for psychological states is more narrow:

  • The causal properties of a psychological process supervene on the functional properties of the states of the nerve cells of the corresponding brain. 

The nerve cells may differ in other ways that are irrelevant to the psychological processes at the higher level: they may be a little larger or smaller, they may have a slightly different content of trace metals, they may be of different ages. But our physicalist claim is generally more refined than this; it ignores these “irrelevant” differences across cells and specifies identity among the key functional characteristics of the cells. Put this way, the supervenience claim is an empirical theory; it says that electrical properties and network connections are causally relevant to psychological processes, but cell mass and cell age are not (within broad parameters).

(b) Second and relatedly, there are always some differences between two groups of people, no matter how similar; and if the two groups differ in the slightest degree — say, one member likes ice cream and the corresponding other does not — then the supervenience relation says nothing about the causal properties of X. The organizational features may be as widely divergent as could be imagined; supervenience is silent about the delta-to-epsilon relations from substrate to higher level. It specifies only that identical substrates produce identical higher-level properties. What would be more useful is something like the continuity concept from calculus: small deviations in lower-level properties result in small deviations in higher-level properties. But it is not clear that this is true in the social case.
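The gap between supervenience and continuity can be made concrete with a toy computational example (the majority-vote construction here is mine, not drawn from the text): a macro property can be a strict function of the substrate, so supervenience holds exactly, while a one-individual change in the substrate flips the macro property outright.

```python
# Toy illustration: the substrate is a tuple of individual opinions (0 or 1);
# the macro property is whether a strict majority holds opinion 1. Because the
# macro property is a pure function of the substrate, supervenience holds.

def macro_property(substrate):
    """A supervenient macro fact: strict-majority opinion in the substrate."""
    return sum(substrate) * 2 > len(substrate)

a = (1, 1, 0, 0, 1)  # three of five individuals hold opinion 1
b = (1, 1, 0, 0, 0)  # identical except that one individual differs

# Supervenience: identical substrates always yield the identical macro fact.
assert macro_property(a) == macro_property((1, 1, 0, 0, 1))

# But no continuity: a minimal micro-level difference flips the macro fact.
assert macro_property(a) != macro_property(b)
```

Real social cases presumably sit somewhere between these extremes, which is why the continuity question has to be settled empirically; the supervenience relation itself does not decide it.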

(c) Also problematic for the properties of social structures is an issue that depends upon the idea of path dependence. Let’s say that we are working with the idea that a currently existing institution depends for its workings (its properties) on the individuals who make it up at present. And suppose that the institution has emerged through a fifty-year process of incremental change, while populated at each step by approximately similar individuals. The well-established fact of path dependence in the evolution of institutions (Thelen, How Institutions Evolve: The Political Economy of Skills in Germany, Britain, the United States, and Japan) entails that the properties of the institution today are not uniquely determined by the features of the individuals currently involved in the institution. Rather there were shaping events that pushed the evolution of the institution in this direction or that at various points in time. This means that the current properties of the institution are not best explained by the properties of the substrate individuals at present, but rather by the history of development that led this population to this point.

It will still be true that the workings of the institution at present are dependent on the features of the individuals at present; but the path-dependency argument says that those individuals will have adjusted in small ways so as to embody the regulative system of the institution in its current form, without becoming fundamentally different kinds of individuals. Chiefly they will have internalized slightly different systems of rules that embody the current institution, and this is what gives the institution its characteristic mode of functioning in the present.

So explanation of the features of the institution in the present is not best couched in terms of the current characteristics of the individuals who make it up, but rather in terms of an historical account of the path that led to this point (and the minute changes in individual beliefs and behaviors that went along with this).

These concerns make me less satisfied with the general idea of supervenience as a way of specifying the relation between social structures and substrate individuals. What would satisfy me more would be something like this:

  • Social structures supervene upon the states of individuals in the substrate described at a given level of granularity corresponding to our current theory of the actor.
  • Small differences in the substrate will produce only small differences in the social structure.

These add up to a strong claim; they entail that any organization with similar rules of behavior involving roughly similar actors (according to the terms of our best theory of the actor) will have roughly similar causal properties. And this in turn invites empirical investigation through comparative methods.

As for the path-dependency issue raised in comment (c), perhaps this is the best we can say: the substrate analysis of the behavior of the individuals tells us how the institution works, but the historical account of the path-dependent process through which the institution came to have the characteristics it currently has tells us why it works this way. And these are different kinds of explanations.

Methodological localism and actor-centered sociology


I’ve advocated in earlier posts for two related ideas: the idea of actor-centered sociology and the idea of methodological localism. The first idea recommends that sociologists couch their research and theories in terms of more specific and nuanced theories of the actors whose thoughts and actions make up the social processes of interest. The second idea is an alternative to the equally unappealing doctrines of methodological individualism and holism. According to methodological localism, the “molecule” of the social world is the socially constituted, socially situated actor in ongoing relationships with other social actors. This is a conception of social reality that is social all the way down; it conceives of the individual actor within a set of social relationships as the basic unit of social phenomena.

Examination of some important work in sociology and neighboring fields in the past several decades shows that the actor-centered approach corresponds pretty well to the research approach taken by a number of innovative investigators. Here are a few examples: C. K. Lee, Against the Law: Labor Protests in China’s Rustbelt and Sunbelt; Michael Mann, The Dark Side of Democracy: Explaining Ethnic Cleansing; George Steinmetz, The Devil’s Handwriting: Precoloniality and the German Colonial State in Qingdao, Samoa, and Southwest Africa; Al Young, The Minds of Marginalized Black Men: Making Sense of Mobility, Opportunity, and Future Life Chances; Elinor Ostrom, Governing the Commons: The Evolution of Institutions for Collective Action. Each of these research projects makes significant use of more nuanced theories of the actor as an important part of the analysis and explanations offered.

These examples validate the usefulness of several of the key imperatives of the doctrine of methodological localism: in particular, emphasis on the centrality of the socially situated and socially constructed actor within more complex social processes. Methodological localism implies that we need to be cautious about over-simplifying the mentality of the actor—not simply a utility maximizing egoist, not simply a norm-driven robot, not simply an adherent of a religious worldview.

Instead, it is often useful to pay attention to the details and the differences that we find in the historical setting of important social processes and outcomes and the forms of mentality these create: the specific forms of education received by scientists, the specific social environment in which prospective administrators were socialized, the specific mental frameworks associated with this or that historically situated community. These details help us to do a much better job of understanding how the actors perceived social situations and how they chose to act within them.

And likewise, it is often useful to pay attention to the regulative and incentive-generating context within which actors constructed their actions. This is the role that the intellectual and policy field plays in Steinmetz’s account; it is also the role that specific property and contract arrangements play in the new institutionalism and Elinor Ostrom. And both Bourdieu and the new institutionalists are right that small differences in the institutional setting can result in large differences in outcome, as actors respond to institutions and incentives to pursue their ends. So paying close and detailed attention to the particulars of the institutions of career, economic opportunity, family, power, and prestige allows us to perceive the causes of important differences in outcomes.

In short, it seems that sociology has a lot to gain by paying more attention to the specifics of the actors whose thinking and actions constitute the social processes of interest to them. This advice does not imply reductionism; it is entirely legitimate for sociologists to make use of causal claims at a variety of levels. But it does imply that there is substantive and valuable work to be done in almost every field of sociology at the level of the actor. Sociology gains when researchers attempt to gain a more nuanced understanding of the constitutions and situations of the actors with whom they are concerned.

To be sure, not all research in sociology takes this approach. And in fact there is very good recent work in sociology that doesn’t pay much attention to the actor. A good example of this category is Robert Sampson’s Great American City: Chicago and the Enduring Neighborhood Effect. Sampson’s subject matter is, of course, the behavior of particular actors in particular circumstances. He wants to show that we can identify certain patterns of causation that exist in urban street-scapes that are amenable to quantitative investigation. But his research is not particularly ethnographic; there are no interviews, no attempt to capture the states of mentality of the urban young people who make up the neighborhoods he studies. The level of analysis that he has chosen is largely higher than the individual actors — the meso-level environmental and organizational features that appear to have an effect on collective behavior. And one of his main methodological contributions is to oppose the idea that urban phenomena can be derived from facts about the individuals who make up a neighborhood or city. So Sampson’s research and explanations are evidently not “actor-centered.” But I think that Sampson’s work is nonetheless compatible with the thesis of methodological localism, though the fit is less obvious. Sampson insists that the neighborhood-level characteristics have causal consequences that do not disaggregate into individual-level patterns. But this can be understood in the “relative explanatory autonomy” interpretation offered elsewhere (link): microfoundations exist for these effects, but it isn’t necessary to trace them through in order to validate the causal linkage at the neighborhood level.

These observations suggest that the status of these two big ideas is rather different. The idea of “actor-centered” sociology shouldn’t be understood as a general prescription for all sociological research, but rather as simply a promising line of investigation as we try to shed light on various social processes and outcomes.  The idea of methodological localism, on the other hand, is a fairly general ontological claim about what the social world is made up of, and it is intended as a general premise for how we think about all social phenomena.  It doesn’t entail a particular theory of explanation, but it does provide a general account of the constitution of social phenomena. And it has implications for how we should think about the micro-composition of social causation.

Neighborhood effects


In Great American City: Chicago and the Enduring Neighborhood Effect Robert Sampson provides a very different perspective on the “micro-macro” debate. He rejects the methodologies associated with both poles of the debate: methodological individualism (“derive important social outcomes from the choices of rational individuals”) and methodological structuralism (“derive important social outcomes from the features of large-scale structures like globalization”). Instead, he argues for the causal importance of a particular kind of “meso” — the neighborhood. He takes the view that neither “bottom-up” nor “top-down” sociology will suffice; we need to look at processes at the level of socially situated individuals.

In this book I proposed an alternative to these two perspectives by offering a unified framework on neighborhood effects, the larger social organization of urban life, and social causality in general…. Contrary to much received wisdom, the evidence presented in this book demands attention to life in the neighborhoods that shape it. (357)

I argue that we need to treat social context as an important unit of analysis in its own right.  This calls for new measurement strategies as well as a theoretical framework that do not treat the neighborhood simply as a “trait” of the individual. (60)

Sampson offers his own instantiation of Coleman’s Boat to illustrate his thinking:

But unlike Coleman (and like the argument I offered in an earlier post about meso-level explanation; link), Sampson allows for the validity of type-4 causal mechanisms, from “neighborhood structure and culture” to “rates of social behavior”. So neighborhoods are not simply outcomes of individual choices and behavior; they are social ensembles that exert their own causal powers.

Sampson offers an articulated methodology for the study of the social life of a city, in the form of ten principles. These include:

  1. Focus on social context
  2. Study contextual variations in their own right
  3. Focus on social-interactional, social psychological, organizational, and cultural mechanisms of social life
  4. Integrate a life-course focus on neighborhood change
  5. Look for processes and mechanisms that explain stability
  6. Embed in the study of neighborhood dynamics the role of individual selection decisions
  7. Go beyond the local
  8. Incorporate macro processes
  9. Pay attention to human concerns with public affairs
  10. Emphasize the integrative theme of theoretically interpretive empirical research while maintaining methodological pluralism (67-68)

The heart of “neighborhood sociology” can be summarized, Sampson asserts, in a few simple themes:

First, there is considerable social inequality between neighborhoods, especially in terms of socioeconomic position and racial/ethnic segregation.  

Second, these factors are connected in that concentrated disadvantage often coincides with the geographic isolation of racial minority and immigrant groups.  

Third, a number of crime- and health-related problems tend to come bundled together at the neighborhood level and are predicted by neighborhood characteristics such as the concentration of poverty, racial isolation, single-parent families, and to a lesser extent rates of residential and housing instability.  

Fourth, a number of social indicators at the upper end of what many would consider progress, such as affluence, computer literacy, and elite occupational attainment, are also clustered geographically. (46)

This set of themes asserts a series of important correlations between neighborhood features and social outcomes. The hard question is to identify the social mechanisms that underlie these correlations. “It is from this idea that in recent decades we have witnessed another turning point in the form of a renewed commitment to uncovering the social processes and mechanisms that account for neighborhood (or concentration) effects. Social mechanisms provide theoretically plausible accounts of how neighborhoods bring about change in a given phenomenon” (46).

This is a fascinating and methodologically innovative piece of urban sociology. Sampson’s use of large data sets to establish some of the intriguing neighborhood patterns he identifies is highly proficient, and his efforts to place his reasoning within a more theoretically sophisticated framework of multi-level social mechanisms are admirable. In an interesting twist, Sampson shows how it is possible to expand on the very costly video-based methodology of the original PHDCN study by making use of Google Street View to do systematic observations of neighborhoods in Chicago and other cities (361).

(Here is an earlier post on Sampson’s ideas about neighborhood effects.)

Methodological individualism today

Is it possible to draw a few conclusions on the topic of methodological individualism after decades of debate? (Lars Udehn’s Methodological Individualism: Background, History and Meaning is a great study of the long history of the debate over this issue. It is unfortunate there isn’t an affordable digital edition of the book. Joseph Heath’s entry on the subject in the Stanford Encyclopedia of Philosophy gives a very good overview; link.) Here is Jon Elster’s formulation of the concept in Nuts and Bolts for the Social Sciences (1989):

The elementary unit of social life is the individual human action. To explain social institutions and social change is to show how they arise as the result of the actions and interaction of individuals. This view, often referred to as methodological individualism, is in my view trivially true. (13)

Max Weber is often identified as the modern originator of the theory of methodological individualism. (Weber’s student Joseph Schumpeter was the first to use the concept in print.) Weber’s reason for advocating for MI derived from his view of action as purposive behavior, and his view that social outcomes need to be explained on the basis of the purposive actions of the individual actors who constitute them. So MI began with a presupposition about the unique importance of rational-intentional behavior in social life. Weber insisted on a rational actor foundation for the social sciences. And this prepared the ground for a joining of forces between methodological individualism and rational choice theory.

The emphasis on methodological individualism sometimes reflected a strong disposition towards eliminative reductionism with respect to social entities and properties: mid-twentieth-century exponents like J.W.N. Watkins wanted to find logical formulations through which social terms could be eliminated in favor of a logical compound of statements about individuals. And what was the motivation for this effort? It appears to be a version of the physicist’s preference for reduction to ensembles of simple homogeneous “atoms”, transported to the social and behavioral sciences. This demand for reduction might take the form of conceptual reduction or compositional reduction. The latter takes the form of demonstrations of how higher-level properties are made up of lower-level systems. The conceptual reduction program didn’t work out well, any more than Carnap’s phenomenalist reconstruction of physics did.

In addition to this bias derived from positivist philosophy of science, there was also a political subtext in some formulations of the theory in the 1950s. Karl Popper and JWN Watkins advocated for MI because they thought this methodology was less conducive to the “collectivist” theories of Marx and the socialists. If collectivities don’t exist, then collectivism is foolish.

Another phase of thinking was more ontological than conceptual. These thinkers wanted to make it clear that social things, causes, and structures depend on the activities of individuals and nothing else. Another way of putting the point is to say that social entities are composed of ensembles of individuals and nothing else. Their concern was to avoid the social analogue of vitalism — the idea in the life sciences that there is some special “sauce” of life activity that is wholly independent of the molecular and physical structures that make up the organism. Essentially this crowd wanted to hold that the properties of the whole are fixed solely and completely by the physical structures that make it up. The theory of supervenience pretty well captures this ontological position: no differences at the upper level without some difference at the lower level. (This position doesn’t imply its converse: if two physical systems differ, their upper-level systems need not differ too. This is the point of multiple functional realizability.) The position does rule out some forms of emergentism, however. The idea of microfoundations comes into this line of thought. If we make a claim about the structural or causal properties of an upper-level thing, we need to be confident that there are microfoundations that would show how this feature comes about. In the strongest case, we need to actually provide the microfoundations.
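The supervenience thesis just described, and its unasserted converse, can be stated compactly. Here is a sketch of the formalization (the predicate symbols P_ind and P_soc for individual-level and social-level properties are my notation, not a standard in this literature):

```latex
% Supervenience: for any two social situations x and y, indiscernibility
% at the individual level entails indiscernibility at the social level.
\forall x \,\forall y \;
  \Big[ \big( \forall P_{\mathrm{ind}} :\; P_{\mathrm{ind}}(x) \leftrightarrow P_{\mathrm{ind}}(y) \big)
  \;\rightarrow\;
  \big( \forall P_{\mathrm{soc}} :\; P_{\mathrm{soc}}(x) \leftrightarrow P_{\mathrm{soc}}(y) \big) \Big]
% The converse is NOT asserted: distinct individual-level configurations
% may realize the same social properties (multiple realizability).
```

Read as a slogan: fix all the individual-level facts and you have thereby fixed all the social-level facts, while the same social property may be realized by many different individual-level configurations.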

There is another important stream of MI thinking that derives from a set of ideas about how higher-level facts ought to be explained: they should be explained on the basis of demonstrations of how the upper-level entity is given its properties by the organized system of elements of which it is composed. This is essentially what the analytical sociologists seem to demand, by insisting on the logic of Coleman’s boat. This approach privileges a certain kind of explanation: constructive or compositional explanations.

There is one aspect of the tradition that I haven’t mentioned yet: the idea that we can carve out the individual as separate from and prior to the social — a view sometimes referred to as “atomistic”. In classical physics the analogous claim is supportable. Sodium atoms are homogeneous and interchangeable. But it is not plausible in the human world. Social facts intertwine with the mind and actions of individuals all the way down. So from the start, it would seem that the program of MI should be formulated in terms of reduction from the big-social to the small-social, not the non-social.

So what kinds of social claims do these various formulations rule out?

All of them rule out spooky holism, those social theories that claim that social entities exist that are wholly independent of the features of individuals.

Several of them rule out strong emergentism — the view that there are social properties that could not in principle be derived from full knowledge about the states and properties of the constituent individuals.

They by and large rule out explanatory autonomy for the social level. This is the idea that there might be fully satisfactory causal arguments that proceed from statements about the properties of one set of social factors and then explain another set of social outcomes on that basis. (The ontological thesis does not have this implication.)

As Heath argues in his SEP essay, they rule out macro-level statistical explanations and what he calls micro-level sub-intentional explanations.

In my view, the only claims about methodological individualism that seem unequivocally plausible today are the ontological requirements — the various formulations of the notion that social things are composed of the actions and thoughts of individuals and nothing else. This implies that the supervenience claim and the microfoundations claim are plausible as well.

But to concede that x’s are composed of y’s does not entail the need for any kind of reductionism from x to y. And this extends to the idea of explanatory reduction as well. So methodological individualism does not impose valid limits on the structure of social explanations, and meso-level explanations are not excluded.

So it seems as though we can now draw several conclusions about the field of methodological individualism. The ontological thesis is roughly true, but it is compatible with a range of different ideas about within- and cross-level explanation. So reductionism doesn’t follow. The micro-level can’t be a hypothetical pre-social or non-social individual. Finally, there is no reason to associate the plausible core of MI theory with one specific theory of action, the rational-intentional theory. As pragmatist sociologists are now arguing, there are compelling theories of the actor that do not privilege the model of conscious deliberative choice.
