I’ve just finished reading Sharon Bertsch McGrayne’s book on Bayesian statistics, *The Theory That Would Not Die: How Bayes’ Rule Cracked the Enigma Code, Hunted Down Russian Submarines, and Emerged Triumphant from Two Centuries of Controversy*. McGrayne presents a very interesting story of the advancement of a scientific idea over a very long period (the 1740s through the 1950s). As she demonstrates at length, the idea that “subjective prior beliefs” could enhance our knowledge about causation and the future was regarded as paradoxical and irrational by mathematicians and statisticians for well over a century.

McGrayne’s book does a very good job of highlighting the scientific controversies that have arisen with respect to Bayesian methods, and it also makes a powerful case for the value of those methods in many important contemporary problems. But it isn’t very detailed about the logic and mathematics of the field. She gives a single example of applied Bayesian reasoning, in Appendix B, using the case of breast cancer and mammograms. This is worth reading carefully, since it makes clear how the conditional probabilities of a Bayesian calculation work.
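The mechanics of that calculation can be sketched in a few lines. The numbers below are illustrative stand-ins, not McGrayne’s actual figures from the appendix; the point is how a modest prior, filtered through the test’s error rates, yields a surprisingly low posterior.

```python
# Bayes' rule for a mammogram-style screening example.
# All three inputs are assumed, illustrative values.
prior = 0.01          # P(cancer): assumed base rate in the screened population
sensitivity = 0.80    # P(positive test | cancer)
false_pos = 0.096     # P(positive test | no cancer)

# P(positive) by the law of total probability
p_positive = sensitivity * prior + false_pos * (1 - prior)

# P(cancer | positive) by Bayes' rule
posterior = sensitivity * prior / p_positive
print(f"P(cancer | positive mammogram) = {posterior:.3f}")
```

With these inputs the posterior comes out below 8%, even though the test is “80% accurate” — exactly the kind of result that makes the conditional structure of the calculation worth reading carefully.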

As McGrayne demonstrates with many examples, Bayesian reasoning permits a very substantial ability to draw novel conclusions based on piecemeal observations and some provisional assumptions about mechanisms in the messy world of complex causation. Examples can be found in epidemiology (the cause of lung cancer), climate science, and ecology. And she documents how Bayesian ideas have been used to enhance search processes for missing things — for example, lost hydrogen bombs and nuclear submarines. Here is an important example of the power of Bayesian reasoning to identify causal linkages to lung cancer, especially cigarette smoking.

> In 1951 Cornfield used Bayes’ rule to help answer the puzzle. As his prior hypothesis he used the incidence of lung cancer in the general population. Then he combined that with NIH’s latest information on the prevalence of smoking among patients with and without lung cancer. Bayes’ rule provided a firm theoretical link, a bridge, if you will, between the risk of disease in the population at large and the risk of disease in a subgroup, in this case smokers. Cornfield was using Bayes as a philosophy-free mathematical statement, as a step in calculations that would yield useful results. He had not yet embraced Bayes as an all-encompassing philosophy. Cornfield’s paper stunned research epidemiologists.
>
> More than anything else, it helped advance the hypothesis that cigarette smoking was a cause of lung cancer. Out of necessity, but without any theoretical justification, epidemiologists had been using case studies of patients to point to possible causes of problems. Cornfield’s paper showed clearly that under certain conditions (that is, when subjects in a study were carefully matched with controls) patients’ histories could indeed help measure the strength of the link between a disease and its possible cause. Epidemiologists could estimate disease risk rates by analyzing nonexperimental clinical data gleaned from patient histories. By validating research findings arising from case-control studies, Cornfield made much of modern epidemiology possible. In 1961, for example, case-control studies would help identify the antinausea drug thalidomide as the cause of serious birth defects. (110-111)
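Cornfield’s inversion can be illustrated with invented numbers (his actual NIH figures are not reproduced in the passage): take the population incidence of lung cancer as the prior, the prevalence of smoking among cases and among controls as the conditional data, and use Bayes’ rule to recover the risk of disease within the subgroup of smokers.

```python
# Cornfield-style inversion with invented, illustrative numbers.
p_cancer = 0.002             # assumed P(lung cancer) in the general population
p_smoke_given_cancer = 0.90  # assumed P(smoker | lung cancer), from cases
p_smoke_given_healthy = 0.50 # assumed P(smoker | no lung cancer), from controls

# P(smoker) across the whole population (law of total probability)
p_smoke = (p_smoke_given_cancer * p_cancer
           + p_smoke_given_healthy * (1 - p_cancer))

# Bayes' rule: risk of lung cancer within the subgroup of smokers
p_cancer_given_smoke = p_smoke_given_cancer * p_cancer / p_smoke

# ...and within non-smokers, for comparison
p_cancer_given_nonsmoke = ((1 - p_smoke_given_cancer) * p_cancer
                           / (1 - p_smoke))

relative_risk = p_cancer_given_smoke / p_cancer_given_nonsmoke
```

With these assumed inputs the smokers’ risk comes out roughly nine times the non-smokers’ risk, which is the kind of “strength of the link” measurement that case-control data, properly inverted, made possible.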

One fairly specific thing that strikes me after reading the book concerns the blind spots that existed in the neo-positivist tradition in the philosophy of science that set the terms for the field in the 1960s and 1970s (link). This tradition is largely focused on theories and theoretical explanation, to the relative exclusion of inductive methods. It reveals an underlying predilection for the idea that scientific knowledge takes the form of hypothetico-deductive systems describing unobservables. The hypothetico-deductive model of explanation and confirmation makes a lot of sense within this perspective. But after reading McGrayne I’m retrospectively surprised at the relatively low priority given within the standard philosophy of science curriculum to probabilistic reasoning — either frequentist or Bayesian. Many philosophers of science have absorbed a degree of disregard for “inductive logic”, the idea that we can discover important features of the world through careful observation and statistical analysis. The basic assumption seems to have been that statistical reasoning is boring and Humean — not really capable of discovering new things about nature or society. But in hindsight, this disregard for inductive reasoning is an odd distortion of the domain of scientific knowledge, and, in particular, of the project of sorting out causes.

Some philosophers of science have indeed given substantial attention to Bayesian reasoning. (Here is a good article on Bayesian epistemology by Bill Talbott in the *Stanford Encyclopedia of Philosophy*; link.) Ian Hacking’s textbook *An Introduction to Probability and Inductive Logic* provides a very accessible introduction to the basics of inductive logic and Bayesian reasoning, and his *The Emergence of Probability: A Philosophical Study of Early Ideas about Probability, Induction and Statistical Inference* provides an excellent treatment of the history of the subject from a philosophy of science point of view. Another philosopher of science who has treated Bayesian reasoning in detail is Michael Strevens. Here Strevens provides a good brief treatment of the subject from the point of view of the philosophy of science (link). And here is a first-rate unpublished manuscript by Strevens on the use of Bayesian ideas as a theory of confirmation (link). Strevens’ recent *Tychomancy: Inferring Probability from Causal Structure* is also relevant. And the research program on causal reasoning of Judea Pearl has led to a flourishing of Bayesian reasoning in the theory of causality (link).

What is the potential relevance of Bayesian reasoning in sociology and other areas of the social sciences? Can Bayesian reasoning lead to new insights in assessing social causation? Several features of the social world seem particularly distinctive in the context of a Bayesian approach. Bayesianism conforms very naturally to a scenario-based way of approaching the outcomes of a system or a complicated process, and it provides an elegant and rigorous way of incorporating “best guesses” (subjective probability estimates) into the analysis of a given process. Both features are well suited to the social world. First, there are the relatively narrow limits of frequency-based estimates of the probabilities of social events. The social sciences are often concerned with single-instance events — the French Revolution, the Great Depression, the rise of ISIS. In cases like these frequency-based probabilities are simply not available. Second, there is the problem of causal heterogeneity in many social causal relations. If we are interested in the phenomenon of infant mortality, we are led immediately to the realization that multiple social factors and conditions influence this population characteristic; the overall infant mortality rate of Bangladesh or France is the composite effect of numerous social and demographic causes. This means that there is no single underlying causal property X that can be said to create the differences in infant mortality rates across countries. And this in turn implies that it is dubious to assume that there are durable objective probabilities underlying the creation of a given rate of infant mortality. This is in contrast to the situation of earthquakes or hurricanes, where a small number of physical factors are causally relevant to the occurrence of the outcome.

Both of these factors suggest that subjective probabilities, based on expert assessment of the likelihood of various scenarios, represent a more plausible foundation for assigning probabilities to a given social outcome. This is the logic underlying Philip Tetlock’s approach to reliable forecasting in *Superforecasting: The Art and Science of Prediction* and *Expert Political Judgment: How Good Is It? How Can We Know?* (link). Both points suggest that Bayesian reasoning may have even more applicability in the social world than in the natural sciences.

The joining of Monte Carlo methods with Bayesian reasoning that McGrayne describes in the case of the search for the missing nuclear submarine Thresher (199 ff.) seems particularly relevant to social inquiry, because of the conjunctural nature of social causation and the complexity of typical causal intersections in the social domain. Consider a forecasting problem similar to those considered by Tetlock — for example, the likelihood that Russia will attempt to occupy Latvia in the next five years. One way of analyzing this problem is to identify a handful of political scenarios moving forward from the present that lead to consideration of this policy choice by the Russian leadership; assign prior probabilities to the component steps of each scenario; and calculate a large number of Monte Carlo “runs” of the scenarios, based on random assignment of values to the component steps of each scenario according to the prior probabilities assigned by the experts. Outcomes can then be classified as “Russia attempts to occupy Latvia” and “Russia does not attempt to occupy Latvia”, and the proportion of outcomes in the first cell provides an estimate of the overall likelihood of that outcome. The logic of this exercise is exactly parallel to the calculation that McGrayne describes for assigning probabilities to geographic cells of ocean floor as the final resting spot of the submarine, given the direction and speed scenarios considered. And the Bayesian contribution of updating priors is illuminating in this analysis as well: as the experts’ judgments of the probabilities of the component steps change given new information, the overall probability of the outcome changes as well.
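The Monte Carlo exercise described above can be sketched in a few lines. The step probabilities here are invented “expert judgments” for illustration, and the scenario structure is deliberately reduced to three independent enabling conditions; nothing below reproduces an actual forecasting model.

```python
import random

# Assumed expert judgments for three enabling steps of one scenario.
# These numbers are invented for illustration.
steps = {
    "nato_disunity": 0.30,    # assumed P(NATO signals disunity)
    "us_disinterest": 0.20,   # assumed P(US signals lack of interest)
    "kremlin_turmoil": 0.10,  # assumed P(Kremlin in turmoil)
}

def one_run(rng):
    """One simulated 'run': the outcome occurs only if every
    enabling step happens to occur in this run."""
    return all(rng.random() < p for p in steps.values())

rng = random.Random(42)  # fixed seed, so the estimate is reproducible
n = 100_000
estimate = sum(one_run(rng) for _ in range(n)) / n
# With independent steps the exact answer is 0.30 * 0.20 * 0.10 = 0.006,
# so the Monte Carlo estimate should land near 0.6%.
```

The Bayesian updating McGrayne describes corresponds here to replacing the entries in `steps` as expert judgments change in light of new information and simply rerunning the simulation.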

Here is a very simple illustration of a scenario analysis. The four stages of the scenario are:

A: NATO signals unity

B: Latvia accepts anti-missile defense

C: US signals lack of interest

D: Kremlin in turmoil

Here is a diagram of the scenarios, along with hypothetical “expert judgments” about the likelihoods of outcomes of the branch points:

This analysis leads to a forecast of a 7.8% likelihood of occupation (O1, O10, O13). And an important policy recommendation can be derived from this analysis as well: most of the risk of occupation falls in the lower half of the tree, stemming from a NATO signal of disunity. This risk can be avoided if NATO signals unity instead; then the risk of occupation falls to less than 1%.
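The arithmetic behind the forecast is simple to make explicit: multiply the branch probabilities along each path that ends in occupation, then sum over those paths. Since the diagram itself is not reproduced here, the branch values below are hypothetical, chosen only so that the three occupation paths sum to the 7.8% figure in the text.

```python
# Hypothetical branch probabilities for the three occupation paths.
# The diagram's actual numbers are not reproduced in the post, so
# these values are invented for illustration.
paths = {
    "O1":  [0.6, 0.1, 0.1],  # path through the NATO-unity half of the tree
    "O10": [0.4, 0.3, 0.5],  # path through the NATO-disunity half
    "O13": [0.4, 0.3, 0.1],  # path through the NATO-disunity half
}

def path_probability(branches):
    """P(path) is the product of the branch probabilities along it."""
    p = 1.0
    for b in branches:
        p *= b
    return p

p_occupy = sum(path_probability(b) for b in paths.values())
print(f"P(occupation) = {p_occupy:.1%}")
```

Note how the policy point falls out of the same arithmetic: with these assumed values, the two disunity paths contribute 7.2 of the 7.8 percentage points, so conditioning on a NATO signal of unity eliminates most of the risk.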