The social mechanisms approach to the social sciences aligns well with two key intellectual practices: experiments and policies. In an experiment we are interested in testing whether a given factor has the effect it is thought to have. In a policy design we are interested in affecting an outcome of interest by manipulating some of the background conditions and factors. In both instances, having a theory of the mechanisms at work in a domain allows us to frame our thinking more clearly when it comes to designing experiments and policies.
Let’s say that we are interested in reducing the high school dropout rate in a high-poverty school. We may have a hypothesis that one important causal factor that leads to a higher likelihood of dropping out is that high-poverty students have a much greater burden of family and social problems than students in low-poverty populations. We might describe the mechanism in question in these terms:
H1: (a) high burden of social/familial problems => (b) student has higher likelihood of becoming discouraged => (c) student has higher likelihood of stopping attending => (d) student has a higher likelihood of dropping out of high school
We can evaluate this hypothesis about one of the mechanisms of dropping out of high school in several ways. First, we note that each clause invokes a likelihood. This means that we need to look at sets of students rather than individual students. Single cases or individual pairs of cases will not suffice, since we cannot make any inference from data like these:
A. Individual X has high burden of social/familial problems; Individual X does not become discouraged; Individual X does not drop out of high school.
B. Individual Y has a low burden of social/familial problems; Individual Y does become discouraged; Individual Y does drop out of high school.
Observations A and B are both compatible with the truth of the mechanisms hypothesis. Instead, we need to examine groups of individuals with various configurations of the characteristics mentioned in the hypothesis. Because H1 is framed in terms of likelihoods, it can only be evaluated using population-level observations.
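The point can be made concrete with a small simulation. The sketch below is purely illustrative — the probabilities assigned to each link in H1 are invented, not drawn from any real study — but it shows how individual counterexamples like A and B coexist with a clear difference in group-level dropout rates.

```python
# A minimal sketch (hypothetical probabilities) of why H1 must be
# evaluated against groups of students rather than single cases.
import random

random.seed(0)

def simulate_student(high_burden):
    """Each link in H1 raises a probability; no link guarantees an outcome."""
    p_discouraged = 0.6 if high_burden else 0.2      # (a) -> (b)
    discouraged = random.random() < p_discouraged
    p_stops = 0.5 if discouraged else 0.1            # (b) -> (c)
    stops_attending = random.random() < p_stops
    p_dropout = 0.7 if stops_attending else 0.05     # (c) -> (d)
    return random.random() < p_dropout

high = [simulate_student(True) for _ in range(1000)]
low = [simulate_student(False) for _ in range(1000)]

# Single cases can go either way (a high-burden graduate, a low-burden
# dropout), yet the group-level rates differ markedly.
print("high-burden dropout rate:", sum(high) / len(high))
print("low-burden dropout rate:", sum(low) / len(low))
```

Any individual draw may look like observation A or B; only the comparison of rates across the two populations bears on H1.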
In theory we might approach H1 experimentally: randomly select two groups G1 and G2 of individuals; expose G1 to a high burden of social/familial problems while G2 is exposed to a low burden of social/familial problems; and observe the incidence of dropping out of high school. This would be to test the hypothesis through an experiment based on the logic of randomized controlled trials. The difficulty here is obvious: we are harming the individuals in G1 in order to assess the causal consequences of the harmful treatment. This raises an irresolvable ethical problem. (Here is a discussion of Nancy Cartwright’s critique of the logic of RCT methodology in Evidence Based Policy; link.)
A slightly different experimental design would pass the ethics test. Select two schools S1 and S2 with comparable proportions of high-poverty students, comparable burdens of social/familial problems among their students, and comparable historical dropout rates. Now expose the students at S1 to a “treatment” that reduces the burden of social/familial problems (for example, by providing extensive social work services in the school that students can call upon). This design too conforms to the logic of a randomized controlled trial. Continue the treatment for four academic years and observe the graduation rates of the two schools. If H1 is true, we should expect S1 to have a higher graduation rate than S2.
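One standard way the resulting comparison might be analyzed is a two-proportion z-test on the two schools' graduation counts. The sketch below uses invented numbers for S1 and S2 (410 of 500 graduating versus 370 of 500); it is an illustration of the logic, not an analysis of real data.

```python
# A hedged sketch of analyzing the two-school design: compare
# graduation proportions with a two-proportion z-test.
# All counts are hypothetical.
from math import sqrt, erf

def two_proportion_z(grads1, n1, grads2, n2):
    """z statistic and one-sided p-value for the claim that school 1
    has the higher graduation rate."""
    p1, p2 = grads1 / n1, grads2 / n2
    p_pool = (grads1 + grads2) / (n1 + n2)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Normal upper-tail probability via the error function.
    p_value = 1 - 0.5 * (1 + erf(z / sqrt(2)))
    return z, p_value

# S1 received the social-work "treatment"; S2 did not.
z, p = two_proportion_z(grads1=410, n1=500, grads2=370, n2=500)
print(f"z = {z:.2f}, one-sided p = {p:.4f}")
```

A small z (large p) would mean the observed difference between S1 and S2 is compatible with chance, giving H1 no support from this design.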
A third approach takes the form of a “quasi-experiment”. Identify pairs of schools that are similar in many relevant respects, but differ with respect to the burden of social/familial problems. This is one way of “controlling” for the causal influence of other observable factors — family income, race, degree of segregation in the school, etc. Now we have N pairs of matched schools and we can compute the graduation rate for the two components of each pair; that is, graduation rates for the “high burden school” and the “low burden school”. If we find that the high burden schools have a lower graduation rate than the low burden schools, and if we are satisfied that the schools do not differ systematically in any other dimension, then we have a degree of confirmation for the causal hypothesis H1. But Stanley Lieberson in Making It Count poses some difficult challenges for the logic of this kind of experimental test; he believes that there are commonly unrecognized forms of selection bias in the makeup of the test cases that potentially invalidate any possible finding (link).
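The arithmetic of the matched-pairs comparison is simple: take the within-pair difference in graduation rates and see how consistently it favors the low-burden school. The rates below are invented for illustration.

```python
# A minimal sketch (invented graduation rates) of the matched-pairs
# quasi-experiment: within each pair, compare the high-burden school
# to its low-burden match.
pairs = [  # (high-burden rate, low-burden rate), hypothetical
    (0.71, 0.80), (0.66, 0.74), (0.78, 0.77),
    (0.62, 0.73), (0.69, 0.75), (0.73, 0.81),
]

diffs = [low - high for high, low in pairs]
mean_diff = sum(diffs) / len(diffs)
n_favoring_h1 = sum(d > 0 for d in diffs)

print(f"mean advantage of low-burden schools: {mean_diff:.3f}")
print(f"pairs consistent with H1: {n_favoring_h1} of {len(pairs)}")
```

Note that this only addresses observable confounders built into the matching; Lieberson's worry is precisely that unobserved selection effects in how the pairs were formed can produce such a pattern even if H1 is false.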
So far we have looked at ways of experimentally evaluating the link between (a) and (d). But H1 is more complex; it hypothesizes that social/familial problems exercise their influence through two behavioral stages that may themselves be the object of intervention. The link from (b) to (c) is an independent hypothetical causal relation, and likewise the link from (c) to (d). So we might attempt to tease out the workings of these links in the mechanism as well. Here we might design our experiments around populations of high burden students, but attempt to find ways of influencing either discouragement or the link from discouragement to non-attendance (or possibly the link from non-attendance to full dropping out).
Here our intervention might go along these lines: the burden of social/familial problems is usually exogenous and untreatable. But within-school programs like intensive peer mentoring and encouragement might serve to offset the discouragement that otherwise results from high burden of social/familial problems. This can be experimentally evaluated using one or another of the designs mentioned above. Or we might take discouragement as a given but find an intervention that prevents the discouraged student from becoming a truant — perhaps a strong motivational incentive dependent on achieving 90% attendance during a six-week period.
In other words, causal hypotheses about causal mechanisms invite experimental and quasi-experimental investigation.
What about the other side of the equation: how do hypotheses about mechanisms contribute to policy intervention? This seems even more straightforward than the first question. The mechanism hypothesis points to several specific locations where intervention could affect the negative outcome with which we are concerned — dropping out of high school in this case. If we have experimental evidence supporting the links specified in the hypothesis, then we equally have a set of policy options available to us. We can design a policy intervention that seeks to do one or more of the following things: reduce the burden of social/familial problems; increase the level of morale of students who are exposed to a high burden; find means of encouraging high-burden students to persevere; and design an intervention to encourage truants to return to school. This suite of interventions touches each of the causal connections specified in the hypothesis H1.
Now, finally, we are ready to close the circle by evaluating the success of interventions like these. Is the graduation rate of schools where the interventions have been implemented higher than that of schools where they were not? Can we begin to assign efficacy assessments to various parts of the policy? Can we arrive at secondary hypotheses about why one policy intervention (“reduce the burden of social/familial issues”) doesn’t succeed, whereas another policy intervention (“bolster morale among high-risk students”) does appear to succeed?
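Closing the circle amounts to comparing each intervention arm against control schools and attaching an effect estimate to each link in H1 that the arm targets. The counts below are invented; they merely illustrate the pattern described above, in which one intervention appears effective and another does not.

```python
# A hedged sketch (invented counts) of efficacy assessment across
# intervention arms, each targeting a different link in H1.
arms = {
    "control": (360, 500),                # (graduates, enrolled)
    "reduce burden": (368, 500),          # targets link (a) -> (b)
    "bolster morale": (402, 500),         # targets link (b) -> (c)
    "attendance incentive": (385, 500),   # targets link (c) -> (d)
}

control_rate = arms["control"][0] / arms["control"][1]
for name, (grads, n) in arms.items():
    effect = grads / n - control_rate
    print(f"{name:22s} graduation {grads / n:.2f}  effect {effect:+.2f}")
```

A pattern like this (a near-zero effect for one arm, a sizable one for another) is what would prompt the secondary hypotheses mentioned above about why some links in the mechanism are more tractable to intervention than others.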
The upshot is that experiments and policies are opposite sides of the same coin. Both proceed from the common assumption that social causes are real; that we can assess the causal significance of various factors through experimentation and controlled observation; and that we can intervene in real-world processes with policy tools designed to exert influence at key junctures in the causal process.