The soft side of critical realism

Critical realism has appealed to a range of sociologists and political scientists, in part because of the legitimacy it lends to the study of social structures and organizations. However, many of the things sociologists study are not “things” at all, but rather subjective features of social experience — mental frameworks, identities, ideologies, value systems, knowledge frameworks. Is it possible to be a critical realist about “subjective” social experience and formations of consciousness? Here I want to argue in favor of a CR treatment of subjective experience and thought.

First, let’s recall what it means to be realist about something. It means to take a cognitive stance towards the formation that treats it as being independent of the concepts we use to categorize it. It is to postulate that there are facts about the formation that are independent of our perceptions of it or the ways we conceptualize it. It is to attribute to the formation a degree of solidity in the world, a set of characteristics that can be empirically investigated and that have causal powers in the world. It is to negate the slogan, “all that is solid melts into air,” with regard to these kinds of formations. “Real” does not mean “tangible” or “material”; it means independent, persistent, and causal.

 
So to be realist about values, cognitive frameworks, practices, or paradigms is to assert that these assemblages of mental attitudes and features have social instantiation, that they persist over time, and that they have causal powers within the social realm. By this definition, mental frameworks are perfectly real. They have visible social foundations — concrete institutions and practices through which they are transmitted and reproduced. And they have clear causal powers within the social realm.
A few examples will help make this clear.
Consider first the assemblage of beliefs, attitudes, and behavioral repertoires that constitute the race regime in a particular time and place. Children and adults from different racial groups in a region have internalized a set of ideas and behaviors about each other that are inflected by race and gender. These beliefs, norms, and attitudes can be investigated through a variety of means, including surveys and ethnographic observation. Through their behaviors and interactions with each other, individuals gain practice in their mastery of the regime, and they influence outcomes and future behaviors. They transmit and reproduce features of the race regime to peers and children. There is a self-reinforcing discipline to such an assemblage of attitudes and behaviors, which shapes the behaviors and expectations of others, both internally and coercively. This formation has causal effects on the local society in which it exists, and it is independent of the ideas we have about it. In virtue of this set of factors, it is a real part of local society. (It is also a variable and heterogeneous reality, across time and space.) We can trace the sociological foundations of the formation within the population, the institutional arrangements through which minds and behaviors are shaped. And we can identify many social effects of specific features of regimes like this. (Here is an earlier post on the race regime of Jim Crow; link, link.)
 
Here is a second useful example — a knowledge and practice system like Six Sigma. This is a bundle of ideas about business management. It involves some fairly specific doctrines and technical practices. There are training institutions through which individuals become expert at Six Sigma. And there is a distributed group of expert practitioners across a number of companies, consulting firms, and universities who possess highly similar sets of knowledge, judgment, and perception.  This is a knowledge and practice community, with specific and identifiable causal consequences. 
 
These are two concrete examples. Many others could be offered — working-class solidarity, bourgeois modes of dress and manners, the social attitudes and behaviors of French businessmen, the norms of Islamic charity, the Protestant Ethic, Midwestern modesty.
So, indeed, it is entirely legitimate to be a critical realist about mental frameworks. Moreover, the realist who abjures study of such frameworks as social realities is doomed to offer explanations with mysterious gaps. He or she will find large historical anomalies, where available structural causes fail to account for important historical outcomes.
Consider Marx and Engels’ words in the Communist Manifesto:

All fixed, fast-frozen relations, with their train of ancient and venerable prejudices and opinions, are swept away, all new-formed ones become antiquated before they can ossify. All that is solid melts into air, all that is holy is profaned, and man is at last compelled to face with sober senses his real conditions of life, and his relations with his kind.

This is an interesting riff on social reality, capturing both change and persistence, appearance and reality. A similar point of view is expressed in Marx’s theory of the fetishism of commodities: beliefs exist, they have social origins, and it is possible to demystify them on occasion by uncovering the distortions they convey of real underlying social relations. 
There is one more perplexing twist here for realists. Both structures and features of consciousness are real in their social manifestations. However, one goal of critical philosophy is to show how the mental structures of a given class or gender are in fact false consciousness. It is a fact that British citizens in 1871 had certain ideas about the workings of contemporary capitalism. But it is an important function of critical theory to demonstrate that those beliefs were wrong, and to account more accurately for the underlying social relations they attempt to describe. And it is important to discover the mechanisms through which those false beliefs came into existence.

So critical realism must both identify real structures of thought in society and demystify those thought systems when they systematically falsify the underlying social reality. Decoding the social realities of patriarchy, racism, and religious bigotry is itself a key task for a critical social science.

Dave Elder-Vass is one of the few critical realists who have devoted attention to the reality of a subjective social thing, a system of norms. In The Causal Power of Social Structures: Emergence, Structure and Agency he tries to show how the idea of a “norm circle” helps explicate the objectivity, persistence, and reality of a socially embodied norm system. Here is an earlier post on E-V’s work (link).

 
 

Discovering the nucleus

In the past year or so I’ve been reading a handful of fascinating biographies and histories involving the evolution of early twentieth-century physics, paying attention to the individuals, the institutions, and the ideas that contributed to the making of post-classical physics. The primary focus is on the theory of the atom and the nucleus, and the emergence of the theory of quantum mechanics. The major figures who come into this complex narrative include Dirac, Bohr, Heisenberg, von Neumann, Fermi, Rutherford, Blackett, Bethe, and Feynman, along with dozens of other mathematicians and physicists. Institutions and cities played a key role in this story — Manchester, Copenhagen, Cambridge, Göttingen, Budapest, Princeton, Berkeley, Ithaca, Chicago. And of course woven throughout this story is the rise of Nazism, World War II, and the race for the atomic bomb. This is a crucially important period in the history of science, and the physics that was created between 1900 and 1960 has fundamentally changed our view of the natural world.


One level of interest for me in doing this reading is the math and physics themselves. As a high school student I was fascinated with physics. I learned some of the basics of the story of modern physics before I went to college — the ideas of special relativity theory, the hydrogen spectrum lines, the twin-slit experiments, the puzzles of radiation and the atom leading to the formulation of the quantum theory of electromagnetic radiation, the discoveries of superconductivity and lasers. In college I became a physics and mathematics major at the University of Illinois, though I stayed with physics only through the end of the first two years of course work (electricity and magnetism, theoretical and applied mechanics, several chemistry courses, real analysis, advanced differential equations). (Significantly for the recent reading I’ve been doing, I switched from physics to philosophy while I was taking the junior level quantum mechanics course.) I completed a mathematics major, along with a philosophy degree, and did a PhD in philosophy because I felt philosophy offered a broader intellectual platform on questions that mattered.

 
So I’ve always felt I had a decent layman’s understanding of the questions and issues driving modern physics. One interesting result of reading all this historical material about the period of 1910-1935, however, is that I’ve realized what large holes there are in my mental map of the topics, both in the physics and the math. And it is genuinely interesting to realize that there are deeply fascinating questions in this terrain which I haven’t really got an inkling about. It is energizing to know that it is entirely possible to open up new areas of knowledge and inquiry for oneself. 
 
Of enduring interest in this story is the impression that emerges of amazingly rapid progress in physics in these few decades, with major discoveries and new mathematical methods emerging in weeks and months rather than decades and centuries. The intellectual pace in places like Copenhagen, Princeton, and Göttingen was staggering, and scientists like Bohr, von Neumann, and Heisenberg genuinely astonish the reader with the fertility of their scientific abilities. Moreover, the theories and mathematical formulations that emerged had amazingly precise and unexpected predictive consequences. Physical theory and experimentation reached a fantastic degree of synergy.
 
The institutions of research that developed through this period are fascinating as well. The Cavendish lab at Cambridge, the Institute for Advanced Study at Princeton, the Niels Bohr Institute in Copenhagen, the math and physics centers at Göttingen, and the many conferences and journals of the period facilitated the rapid progress of atomic and nuclear physics. The USSR doesn’t come into the story as fully as one would like, and it is intriguing to speculate about the degree to which Stalinist dogmatism interfered with the development of Soviet physics.
 
I also find fascinating in retrospect the relations between physics and the philosophy of science in the twentieth century. In philosophy we tend to think that the discipline of the philosophy of science in its twentieth-century development was too dependent on physics. That is probably true. But it seems that the physics in question was more often classical physics and thermodynamics, not modern mathematical physics. Carnap, for example, gives no serious attention to developments in the theory of quantum mechanics in his lectures, Philosophical Foundations of Physics. The philosophy of the Vienna Circle could have reflected relativity theory and quantum mechanics, but it didn’t to any significant degree; instead, the achievements of nineteenth-century physics seem to have dominated the thinking of Carnap, Schlick, and Popper. Logical positivism shows little influence of modern physics, whether relativity theory, quantum theory, or mathematical physics. Post-positivist philosophers Kuhn, Hanson, and Feyerabend refer to some of the discoveries of twentieth-century physics, but their works don’t add up to a new foundation for the philosophy of science. Since the 1960s there has been a robust field of philosophy of physics, focused largely on quantum mechanics; but the field has had only limited impact on the philosophy of science more broadly. (Here is a guide to the philosophy of physics provided to philosophy graduate students at Princeton; link.)

On the other hand, quantum mechanics itself seems to have been excessively influenced by a hyper version of positivism and verificationism. Heisenberg in particular seems to have favored a purely instrumentalist and verificationist interpretation of quantum mechanics — the idea that the mathematics of quantum mechanics serves solely to summarize the results of experiment and observation, not to permit true statements about unobservables. This is an anti-realist interpretation through and through.

I suppose that there are two rather different ways of reading the history of twentieth-century physics. One is that quantum mechanics and relativity theory demonstrate that the physical world is incomprehensibly different from our ordinary Euclidean and Kantian ideas about ordinary-sized objects — with the implication that we can’t really understand the most fundamental level of the physical world. Ordinary experience and relativistic quantum-mechanical reality are just fundamentally incommensurable. But the other way of reading this history of physics is to marvel at the amount of new insight and clarity that physics has brought to our understanding of the subatomic world, in spite of the puzzles and anomalies that seem to remain. Mathematical physical theory made possible observation, measurement, and technological use of the microstructure of the world in ways that the ancients could not have imagined. I am inclined towards the latter view.

It is also sobering for a philosopher of social science to realize that there is nothing comparable to this history in the history of the social sciences. There is no comparable period where fundamental and enduring new insights into the underlying nature of the social world became possible to a degree comparable to this development of our understanding of the physical world. In my view as a philosopher of social science, that is perfectly understandable; the social world is not like the physical world. Social knowledge depends on fairly humdrum discoveries about actors, motives, and constraints. But the comparison ought to make us humble even as we explore new theoretical ideas in sociology and political science.

If I were asked to recommend only one out of all these books for a first read, it would be David Cassidy’s Heisenberg volume, Beyond Uncertainty. Cassidy makes sense of the physics in a serious but not fully technical way, and he raises important questions about Heisenberg the man, including his role in the German search for the atomic bomb. Also valuable is Richard Rhodes’ book, The Making of the Atomic Bomb: 25th Anniversary Edition.

Inductive reasoning and the philosophy of science

I’ve just finished reading Sharon Bertsch McGrayne’s book on Bayesian statistics, The Theory That Would Not Die: How Bayes’ Rule Cracked the Enigma Code, Hunted Down Russian Submarines, and Emerged Triumphant from Two Centuries of Controversy. McGrayne presents a very interesting story of the advancement of a scientific idea over a very long period (1740s through the 1950s). As she demonstrates at length, the idea that “subjective prior beliefs” could enhance our knowledge about causation and the future was regarded as paradoxical and irrational by mathematicians and statisticians for well over a century.

McGrayne’s book does a very good job of highlighting the scientific controversies that have arisen with respect to Bayesian methods, and the book also makes a powerful case for the value of the methods in many important contemporary problems. But it isn’t very detailed about the logic and mathematics of the field. She gives a single example of applied Bayesian reasoning in Appendix B, using the example of breast cancer and mammograms. This is worth reading carefully, since it makes clear how the conditional probabilities of a Bayesian calculation work.
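The structure of that calculation is easy to reproduce. Here is a minimal sketch in the same spirit; the numbers below are illustrative stand-ins, not McGrayne’s actual figures.

```python
# Bayes' rule applied to a screening test (illustrative numbers, not McGrayne's).
prevalence = 0.01      # P(cancer): base rate in the screened population
sensitivity = 0.80     # P(positive test | cancer)
false_positive = 0.10  # P(positive test | no cancer)

# Total probability of a positive mammogram (law of total probability)
p_positive = sensitivity * prevalence + false_positive * (1 - prevalence)

# Posterior: P(cancer | positive test)
posterior = sensitivity * prevalence / p_positive
print(f"P(cancer | positive mammogram) = {posterior:.3f}")  # ~0.075
```

The Bayesian point survives any reasonable choice of numbers: because the disease is rare, most positive tests are false positives, and the posterior probability stays far below the test’s sensitivity.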

As McGrayne demonstrates with many examples, Bayesian reasoning permits a very substantial ability to draw novel conclusions based on piecemeal observations and some provisional assumptions about mechanisms in the messy world of complex causation. Examples can be found in epidemiology (the cause of lung cancer), climate science, and ecology. And she documents how Bayesian ideas have been used to enhance search processes for missing things — for example, lost hydrogen bombs and nuclear submarines. Here is an important example of the power of Bayesian reasoning to identify the causes of lung cancer, especially cigarette smoking.

In 1951 Cornfield used Bayes’ rule to help answer the puzzle. As his prior hypothesis he used the incidence of lung cancer in the general population. Then he combined that with NIH’s latest information on the prevalence of smoking among patients with and without lung cancer. Bayes’ rule provided a firm theoretical link, a bridge, if you will, between the risk of disease in the population at large and the risk of disease in a subgroup, in this case smokers. Cornfield was using Bayes as a philosophy-free mathematical statement, as a step in calculations that would yield useful results. He had not yet embraced Bayes as an all-encompassing philosophy. Cornfield’s paper stunned research epidemiologists. 

More than anything else, it helped advance the hypothesis that cigarette smoking was a cause of lung cancer. Out of necessity, but without any theoretical justification, epidemiologists had been using case studies of patients to point to possible causes of problems. Cornfield’s paper showed clearly that under certain conditions (that is, when subjects in a study were carefully matched with controls) patients’ histories could indeed help measure the strength of the link between a disease and its possible cause. Epidemiologists could estimate disease risk rates by analyzing nonexperimental clinical data gleaned from patient histories. By validating research findings arising from case-control studies, Cornfield made much of modern epidemiology possible. In 1961, for example, case-control studies would help identify the antinausea drug thalidomide as the cause of serious birth defects. (110-111)
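The “bridge” described in this passage can be written out schematically; this is my reconstruction from the quoted description, not Cornfield’s own notation:

$$P(\text{disease} \mid \text{smoker}) = \frac{P(\text{smoker} \mid \text{disease}) \, P(\text{disease})}{P(\text{smoker})}$$

The prior $P(\text{disease})$ is the incidence of lung cancer in the general population; $P(\text{smoker} \mid \text{disease})$ and its counterpart for patients without the disease are just what case-control data supply; and the denominator expands by the law of total probability as $P(\text{smoker} \mid \text{disease})\,P(\text{disease}) + P(\text{smoker} \mid \text{no disease})\,P(\text{no disease})$.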

One fairly specific thing that strikes me after reading the book concerns the blind spots that existed in the neo-positivist tradition in the philosophy of science that set the terms for the field in the 1960s and 1970s (link). This tradition is largely focused on theories and theoretical explanation, to the relative exclusion of inductive methods. It reveals an underlying predilection for the idea that scientific knowledge takes the form of hypothetico-deductive systems describing unobservables. The hypothetico-deductive model of explanation and confirmation makes a lot of sense in the context of this perspective. But after reading McGrayne I’m retrospectively surprised at the relatively low priority given within the standard philosophy of science curriculum to probabilistic reasoning — either frequentist or Bayesian. Many philosophers of science have absorbed a degree of disregard for “inductive logic”, or the idea that we can discover important features of the world through careful observation and statistical analysis. The basic assumption seems to have been that statistical reasoning is boring and Humean — not really capable of discovering new things about nature or society. But in hindsight, this disregard for inductive reasoning is an odd distortion of the domain of scientific knowledge, and, in particular, of the project of sorting out causes.

Some philosophers of science have indeed given substantial attention to Bayesian reasoning. (Here is a good article on Bayesian epistemology by Bill Talbott in the Stanford Encyclopedia of Philosophy; link.) Ian Hacking’s textbook An Introduction to Probability and Inductive Logic provides a very accessible introduction to the basics of inductive logic and Bayesian reasoning, and his The Emergence of Probability: A Philosophical Study of Early Ideas about Probability, Induction and Statistical Inference provides an excellent treatment of the history of the subject from a philosophy of science point of view. Another philosopher of science who has treated Bayesian reasoning in detail is Michael Strevens. Here Strevens provides a good brief treatment of the subject from the point of view of the philosophy of science (link). And here is a first-rate unpublished manuscript by Strevens on the use of Bayesian ideas as a theory of confirmation (link). Strevens’ recent Tychomancy: Inferring Probability from Causal Structure is also relevant. And the research program on causal reasoning of Judea Pearl has led to a flourishing of Bayesian reasoning in the theory of causality (link).

What is the potential relevance of Bayesian reasoning in sociology and other areas of the social sciences? Can Bayesian reasoning lead to new insights in assessing social causation? Several features of the social world seem particularly distinctive in the context of a Bayesian approach. Bayesianism conforms very naturally to a scenario-based way of approaching the outcomes of a system or a complicated process; and it provides an elegant and rigorous way of incorporating “best guesses” (subjective probability estimates) into the analysis of a given process. Both features are well suited to the social world. One reason for this is the relatively narrow limits of frequency-based estimates of probabilities of social events. The social sciences are often concerned with single-instance events — the French Revolution, the Great Depression, the rise of ISIS. In cases like these frequency-based probabilities are not available. Second, there is the problem of causal heterogeneity in many social causal relations. If we are interested in the phenomenon of infant mortality, we are led immediately to the realization that there are multiple social factors and conditions that influence this population characteristic; so the overall infant mortality rate of Bangladesh or France is the composite effect of numerous social and demographic causes. This means that there is no single underlying causal property X, where X can be said to create differences in infant mortality rates in various countries. And this in turn implies that it is dubious to assume that there are durable objective probabilities underlying the creation of a given rate of infant mortality. This is in contrast to the situation of earthquakes or hurricanes, where a small number of physical factors are causally relevant to the occurrence of the outcome.

Both these factors suggest that subjective probabilities based on expert-based assessment of the likelihood of various scenarios represent a more plausible foundation for assigning probabilities to a given social outcome. This is the logic underlying Philip Tetlock’s approach to reliable forecasting in Superforecasting: The Art and Science of Prediction and Expert Political Judgment: How Good Is It? How Can We Know? (link). Both points suggest that Bayesian reasoning may have even more applicability in the social world than in the natural sciences.

The joining of Monte Carlo methods with Bayesian reasoning that McGrayne describes in the case of the search for the missing nuclear submarine Thresher (199 ff.) is particularly relevant to social inquiry, it would seem. This is true because of the conjunctural nature of social causation and the complexity of typical causal intersections in the social domain. Consider a forecasting problem similar to those considered by Tetlock — for example, the likelihood that Russia will attempt to occupy Latvia in the next five years. One way of analyzing this problem is to identify a handful of political scenarios moving forward from the present that lead to consideration of this policy choice by Russian leadership; assign prior probabilities to the component steps of each scenario; and calculate a large number of Monte Carlo “runs” of the scenarios, based on random assignment of values to the component steps of each of the various scenarios according to the prior probabilities assigned by the experts. Outcomes can then be classified as “Russia attempts to occupy Latvia” and “Russia does not attempt to occupy Latvia”. The number of outcomes in the first cell allows an estimate of the overall likelihood of this outcome. The logic of this exercise is exactly parallel to the calculation that McGrayne describes for assigning probabilities to geographic cells of ocean floor for the final resting spot of the submarine, given the direction and speed scenarios considered. And the Bayesian contribution of updating of priors is illuminating in this analysis as well: as experts’ judgments of the probabilities of the component steps change given new information, the overall probability of the outcome changes as well.
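Here is a minimal sketch of this kind of Monte Carlo exercise, with invented step probabilities standing in for the expert judgments; nothing below comes from an actual expert panel.

```python
import random

random.seed(42)

# Hypothetical expert-assigned probabilities for the component steps of one
# scenario leading to an attempted occupation (invented for illustration).
steps = {
    "kremlin_hardliners_consolidate": 0.30,
    "nato_signals_disunity": 0.25,
    "us_signals_lack_of_interest": 0.20,
}

def one_run():
    """One Monte Carlo 'run': this scenario leads to occupation only if
    every component step occurs."""
    return all(random.random() < p for p in steps.values())

N = 100_000
hits = sum(one_run() for _ in range(N))
print(f"Estimated P(occupation attempt) = {hits / N:.4f}")
# Analytically 0.30 * 0.25 * 0.20 = 0.015, so the estimate should be near 1.5%.
```

The Bayesian element enters when the experts revise the step probabilities on new information and the simulation is rerun with the updated priors; the overall estimate shifts accordingly.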

Here is a very simple illustration of a scenario analysis. The four stages of the scenario are:

A: NATO signals unity

B: LATVIA accepts anti-missile defense

C: US signals lack of interest

D: KREMLIN in turmoil

Here is a diagram of the scenarios, along with hypothetical “expert judgments” about the likelihoods of outcomes of the branch points:

[Scenario-tree diagram with branch probabilities not reproduced here.]
This analysis leads to a forecast of a 7.8% likelihood of occupation (O1, O10, O13). And an important policy recommendation can be derived from this analysis as well: most of the risk of occupation falls on the lower half of the tree, stemming from a NATO signal of disunity. This risk can be avoided by NATO giving the signal of unity instead; then the risk of occupation falls to less than 1%.
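Since the diagram and its branch probabilities are not reproduced here, the sketch below shows only the form of the calculation, with invented probabilities and an invented decision rule; the structure, not the 7.8% figure, is what carries over.

```python
from itertools import product

# Invented branch probabilities for the four stages named above.
p = {"A": 0.7, "B": 0.6, "C": 0.3, "D": 0.2}  # P(stage occurs)

def occupation(a, b, c, d):
    """Invented rule: occupation is attempted only when NATO signals
    disunity (not A) and either the US signals disinterest (C) or the
    Kremlin is in turmoil (D)."""
    return (not a) and (c or d)

total = 0.0
for outcome in product([True, False], repeat=4):
    prob = 1.0
    for stage, happened in zip("ABCD", outcome):
        prob *= p[stage] if happened else 1 - p[stage]
    if occupation(*outcome):
        total += prob

print(f"P(occupation) = {total:.3f}")  # 0.132 with these invented numbers
```

Raising p["A"] toward 1 (NATO reliably signaling unity) drives the total toward zero, which is the policy point made above.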


Predicting, forecasting, and superforecasting

I have expressed a lot of reservation about the feasibility of prediction of large, important outcomes in the social world (link, link, link). Here are a couple of observations drawn from these earlier posts:

We sometimes think that there is fundamental stability in the social world, or at least an orderly pattern of development to the large social changes that occur…. But really, our desire to perceive order in the things we experience often deceives us. The social world at any given time is a conjunction of an enormous number of contingencies, accidents, and conjunctures. So we shouldn’t be surprised at the occurrence of crises, unexpected turns, and outbreaks of protest and rebellion. It is continuity rather than change that needs explanation. 

Social processes and causal sequences have a wide range of profiles. Some social processes — for example, population size — are continuous and roughly linear. These are the simplest processes to project into the future. Others, like the ebb and flow of popular names, spread of a disease, or mobilization over a social cause, are continuous but non-linear, with sharp turning points (tipping points, critical moments, exponential takeoff, hockey stick). And others, like the stock market, are discontinuous and stochastic, with lots of random events pushing prices up and down. (link)

One reason for the failure of large-scale predictions about social systems is the complexity of causal influences and interactions within the domain of social causation. We may be confident that X causes Z when it occurs in isolated circumstances. But it may be that when U, V, and W are present, the effect of X is unpredictable, because of the complex interactions and causal dynamics of these other influences. This is one of the central findings of complexity studies — the unpredictability of the interactions of multiple causal powers whose effects are non-linear.

 

Another difficulty — or perhaps a different aspect of the same difficulty — is the typical fact of path dependency of social processes. Outcomes are importantly influenced by the particulars of the initial conditions, so simply having a good idea of the forces and influences the system will experience over time does not tell us where it will wind up.

Third, social processes are sensitive to occurrences that are singular and idiosyncratic and not themselves governed by systemic properties. If the winter of 1812 had not been exceptionally cold, perhaps Napoleon’s march on Moscow might have succeeded, and the future political course of Europe might have been substantially different. But variations in the weather are not themselves systemically explicable — or at least not within the parameters of the social sciences.

Fourth, social events and outcomes are influenced by the actions of purposive actors. So it is possible for a social group to undertake actions that avert the outcomes that are otherwise predicted. Take climate change and rising ocean levels as an example. We may be able to predict a substantial rise in ocean levels in the next fifty years, rendering existing coastal cities largely uninhabitable. But what should we predict as a consequence of this fact? Societies may pursue different strategies for evading the bad consequences of these climate changes — retreat, massive water control projects, efforts at atmospheric engineering to reverse warming. And the social consequences of each of these strategies are widely different. So the acknowledged fact of global warming and rising ocean levels does not allow clear predictions about social development. (link)

When prediction and expectation fail, we are confronted with a “surprise”.

So what is a surprise? It is an event that shouldn’t have happened, given our best understanding of how things work. It is an event that deviates widely from our most informed expectations, given our best beliefs about the causal environment in which it takes place. A surprise is a deviation between our expectations about the world’s behavior, and the events that actually take place. Many of our expectations are based on the idea of continuity: tomorrow will be pretty similar to today; a delta change in the background will create at most an epsilon change in the outcome. A surprise is a circumstance that appears to represent a discontinuity in a historical series. 

It would be a major surprise if the sun suddenly stopped shining, because we understand the physics of fusion that sustains the sun’s energy production. It would be a major surprise to discover a population of animals in which acquired traits are passed across generations, given our understanding of the mechanisms of evolution. And it would be a major surprise if a presidential election were decided by a unanimous vote for one candidate, given our understanding of how the voting process works. The natural world doesn’t present us with a large number of surprises; but history and social life are full of them. 

The occurrence of major surprises in history and social life is an important reminder that our understanding of the complex processes that are underway in the social world is radically incomplete and inexact. We cannot fully anticipate the behavior of the subsystems that we study — financial systems, political regimes, ensembles of collective behavior — and we especially cannot fully anticipate the interactions that arise when processes and systems intersect. Often we cannot even offer reliable approximations of what the effects are likely to be of a given intervention. This has a major implication: we need to be very modest in the predictions we make about the social world, and we need to be cautious about the efforts at social engineering that we engage in. The likelihood of unforeseen and uncalculated consequences is great. 

And in fact commentators are now raising exactly these concerns about the 700 billion dollar rescue plan currently being designed by the Bush administration to save the financial system. “Will it work?” is the headline; “What unforeseen consequences will it produce?” is the subtext; and “Who will benefit?” is the natural followup question. 

It is difficult to reconcile this caution about the limits of our rational expectations about the future based on social science knowledge, with the need for action and policy change in times of crisis. If we cannot rely on our expectations about what effects an intervention is likely to have, then we can’t have confidence in the actions and policies that we choose. And yet we must act; if war is looming, if famine is breaking out, if the banking system is teetering, a government needs to adopt policies that are well designed to minimize the bad consequences. It is necessary to make decisions about action that are based on incomplete information and insufficient theory. So it is a major challenge for the theory of public policy, to attempt to incorporate the limits of knowledge about consequences into the design of a policy process. One approach that might be taken is the model of designing for “soft landings” — designing strategies that are likely to do the least harm if they function differently than expected. Another is to emulate a strategy that safety engineers employ when designing complex, dangerous systems: to attempt to de-link the subsystems to the extent possible, in order to minimize the likelihood of unforeseeable interactions. (link)

One person who has persistently tried to answer the final question posed here — the conundrum of forming expectations in an uncertain world as a necessary basis for action — is Philip Tetlock. Tetlock’s decades-long research on forecasting and judging is highly relevant to this topic. The recent book Superforecasting: The Art and Science of Prediction provides an excellent summary of the primary findings of the research that he and senior collaborators have done on the topic.

Tetlock does a very good job of tracing through the sources of uncertainty that make projections and forecasts of the future so difficult. The uncertainties mentioned above all find discussion in Superforecasting; and he supplements these objective sources of uncertainty with a body of recent work on cognitive biases leading to over- or under-confidence in a set of expectations. (Both Daniel Kahneman and Scott Page receive astute discussion in the book.)

But in spite of these reasons to be dubious about pronouncements about future events, Tetlock finds that there are good theoretical and empirical reasons for believing that a modest amount of forecasting of complex events is nonetheless possible. He takes very seriously the probabilistic nature of social and economic events, so a forecast that “North Korea will perform a nuclear test within six months” must be understood as a probabilistic statement about the world (there is a specific likelihood of such a test in the world); and a Bayesian statement about the forecaster’s degree of confidence in the prediction. And good forecasters aim to be specific about both probabilities: for example, “I have a 75% level of confidence that there is a 55% likelihood of a North Korean nuclear test by date X”.

Moreover, Tetlock argues that it is possible to evaluate individual forecasters on the basis of their performance on specific tasks of forecasting and observation of the outcome. Tetlock would like to see the field of forecasting follow medicine in the direction of an evidence-based discipline in which practices and practitioners are constantly assessed and enabled to improve their performance. (As he points out, it is not difficult to assess the weatherman on his or her probabilistic forecasts of rain or sun.) The challenge for evaluation is to set clear standards of specificity for the terms of a forecast, and then to test the forecasts against the observed outcomes once the time has expired. This is the basis for the multiple-year tournaments that the Good Judgment Project has conducted. The idea of a Brier score serves as a way of measuring the accuracy of a set of probabilistic statements (link). Here is an explanation of “Brier scores” in the context of the Good Judgment Project (link); “standardized Brier scores are calculated so that higher scores denote lower accuracy, and the mean score across all forecasters is zero”. As the graph demonstrates, there is a wide difference between the best and the worst forecasters, given their performance over 100 forecasts.
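For concreteness, here is a minimal sketch of a raw (unstandardized) Brier score for binary events; the forecasts and outcomes are invented.

```python
def brier_score(forecasts, outcomes):
    """Mean squared difference between forecast probabilities and what
    happened (0 or 1). Lower is better: 0.0 is perfect, and a constant
    50% forecast scores 0.25 in this one-sided binary form."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# Invented example: five probabilistic forecasts and the actual outcomes.
forecasts = [0.9, 0.7, 0.2, 0.5, 0.8]
outcomes  = [1,   1,   0,   0,   1]
print(f"Brier score = {brier_score(forecasts, outcomes):.3f}")  # 0.086
```

The standardized scores that the Good Judgment Project reports rescale raw scores like these so that, as the quoted passage notes, the mean across forecasters is zero and higher scores denote lower accuracy.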

So how is forecasting possible, given all the objective and cognitive barriers that stand in the way? Tetlock’s view is that many problems about the future can be broken down into component problems, some of which have more straightforward evidential bases. So instead of asking whether North Korea will test another nuclear device by November 1, 2016, the forecaster may ask a group of somewhat easier questions: how frequent have their tests been in the past? Do they have the capability to do so? Would China’s opposition to further tests be decisive?

Tetlock argues that the best forecasters do several things: they avoid getting committed to a single point of view; they consider conflicting evidence freely; they break a problem down into components that would need to be satisfied for the outcome to occur; and they revise their forecasts when new information is available. They are foxes rather than hedgehogs. He doubts that superforecasters are distinguished by being of uniquely superior intelligence or world-class subject experts; instead, they are methodical analysts who gather data and estimates about various components of a problem and assemble their findings into a combined probability estimate.

The author follows his own advice by taking conflicting views seriously. He presents both Daniel Kahneman and Nassim Taleb as experts who have made significant arguments against the program of research involved in the Good Judgment Project. Kahneman consistently raises questions about the forms of reasoning and cognitive processes that are assumed by the GJP. More fundamentally, Taleb raises questions about the project itself. Taleb argues in several books that fundamentally unexpected events are key to historical change, and therefore that the incremental forms of forecasting described in the GJP are incapable in principle of keeping up with change (The Black Swan: Second Edition: The Impact of the Highly Improbable: With a new section: “On Robustness and Fragility” (Incerto) as well as the more recent Antifragile: Things That Gain from Disorder). These are arguments that resonate with the view of change presented in earlier posts and quoted above, and I have some sympathy for the view. But Tetlock does a good job of establishing that the situation is not nearly so polarized as Taleb asserts. Many “black swan” events (like the 9/11 attacks) can be treated in a more disaggregated way and are amenable to a degree of forecasting along the lines advocated in the book. So it is a question of degree: do we think that the in-principle unpredictability of major events dominates historical change, or that the incremental accumulation of many small causes accounts for the preponderance of it? Processes that fit the latter pattern are amenable to piecemeal probabilistic forecasting.

Tetlock is not a fan of pundits, for some very good reasons. Most importantly, he argues that the great majority of commentators and prognosticators in the media and cable news are long on self-assurance and short on specificity and accountability. Tetlock argues several important points: first, that it is possible to form reasonable and grounded judgments about future economic, political, and international events; second, that it is crucial to subject this practice to evidence-based assessment; and third, that it is possible to identify the most important styles, heuristics, and analytical approaches that are used by the best forecasters (superforecasters).

(Here is a good article in the New Yorker on Tetlock’s approach; link.)

Defining social phenomena

How does a field of phenomena come into focus as a subject of scientific study? When we want to know about weather, we can identify a relatively small number of variables that represent the whole of the topic — temperature, air pressure, wind velocity, rainfall. And we can pick out the aspects of physics that seem to be causally relevant to the atmospheric dynamics that give rise to variations in these variables. Weather is a closed system, if a complex one.

Deciding what factors are important and amenable to scientific study in the social world is not so easy. Population size or density? Economic product? Inter-group conflict? Public opinion and values? Political systems? Racial and ethnic identities? All of these factors are of interest to the social sciences, to be sure. But none of this looks anything like a definition of the whole of the social realm. Rather, there are indefinitely many other research questions that can be posed about the social world — style and fashion, trends of social media, forms of etiquette, sources of power, and on and on.

For that matter, these don’t look much like a macro-set of factors that are generated in some straightforward way by the simple actions of individual persons. These social factors aren’t really analogous to macro-level weather factors, emerging from the local cells of temperature-pressure-humidity-direction. Rather, these social concepts or constructs are theorized and developed in a complicated back-and-forth by sociologists or political scientists seeking to identify social-level constructs that seem to give some insight into the ordinary and systematic experiences we have of the social world.

 
Most particularly, there isn’t a natural way of mapping these social concepts into an integrated and comprehensive mental model of the whole of the social world. Instead, these high-level social concepts are partial and perspectival. And this is different from the situation of weather or climate. In the latter domains there are finitely many higher level concepts that serve to characterize the whole of the domain of global climate phenomena. Call this “high-level conceptual closure.” There are no questions about climate that cannot be phrased in terms of these concepts. But the social world is not amenable to this kind of closure. We lack high-level conceptual closure for the social world. 
 
One way of making sense of these observations is to say that there isn’t a domain of the social in general. Rather, there are more narrowly specified domains within the undifferentiated social whole. Ethnic conflict is a specifiable domain of social phenomena at the macro level, and it has a fuzzy but plausible set of domains of microfoundations that are scientifically relevant to this domain. For example, we might canvass the kinds of causal factors that students of ethnic conflict have found relevant to the eruption of ethnic conflict: features of actors’ mentality; institutions and organizations on the ground; spatial distribution of actors; political framework of contention (government, legislation, police). Research on ethnic conflict will focus on factors at this level.
 
A different empirical focus might be population quality of life. Researchers on this topic will consider a very different set of individual-level factors — form of agriculture, property relations, family structure. Some factors will overlap with the field of study of ethnic conflict, but most will not. And, importantly, the findings of each field are relevant to study of the other topic. The dynamics of ethnic conflict have consequences for population quality of life, and fluctuations in quality of life (food security, for example), will be causally relevant to the dynamics of ethnic conflict. But the research programmes and communities are substantially distinct. 
 
And there are indefinitely many other social programmes of research, unlimited in principle. The social sciences in general are nothing more than the sum of research in these different research fields. And more controversially, we might say that the social world is the patchwork sum of these qualitatively heterogeneous kinds of social phenomena. 
 
So there is no general answer to the question, what is the domain of the social; there is no systematic and final definition of the social world. And there is similarly no hope for “unifying” the social sciences under a master set of theoretical premises about social behavior or structure. Significantly, this seems to be Weber’s point in “‘Objectivity’ in social science and social policy” in Methodology of Social Sciences (105) when he writes that topics for social research are novel for each generation.

Accordingly the synthetic concepts used by historians are either imperfectly defined or, as soon as the elimination of ambiguity is sought for, the concept becomes an abstract ideal type and reveals itself therewith as a theoretical and hence “one-sided” viewpoint which illuminates the aspect of reality with which it can be related. But these concepts are shown to be obviously inappropriate as schema into which reality could be completely integrated. For none of those systems of ideas, which are absolutely indispensable in the understanding of those segments of reality which are meaningful at a particular moment, can exhaust its infinite richness.

Contrast this with Marx’s effort at conceptual closure by providing an integrated schema for capitalist society of its forces and relations of production; its economic structure; and its political and cultural superstructure (link). Here is a famous passage from Marx’s Preface to A Contribution to the Critique of Political Economy (1859):

In the social production of their existence, men inevitably enter into definite relations, which are independent of their will, namely relations of production appropriate to a given stage in the development of their material forces of production. The totality of these relations of production constitutes the economic structure of society, the real foundation, on which arises a legal and political superstructure and to which correspond definite forms of social consciousness. The mode of production of material life conditions the general process of social, political and intellectual life. It is not the consciousness of men that determines their existence, but their social existence that determines their consciousness. At a certain stage of development, the material productive forces of society come into conflict with the existing relations of production or – this merely expresses the same thing in legal terms – with the property relations within the framework of which they have operated hitherto. From forms of development of the productive forces these relations turn into their fetters. Then begins an era of social revolution. The changes in the economic foundation lead sooner or later to the transformation of the whole immense superstructure.

In studying such transformations it is always necessary to distinguish between the material transformation of the economic conditions of production, which can be determined with the precision of natural science, and the legal, political, religious, artistic or philosophic – in short, ideological forms in which men become conscious of this conflict and fight it out. Just as one does not judge an individual by what he thinks about himself, so one cannot judge such a period of transformation by its consciousness, but, on the contrary, this consciousness must be explained from the contradictions of material life, from the conflict existing between the social forces of production and the relations of production. No social order is ever destroyed before all the productive forces for which it is sufficient have been developed, and new superior relations of production never replace older ones before the material conditions for their existence have matured within the framework of the old society.

Phase transitions and emergence

Image: phase diagram of water (Solé, Phase Transitions, p. 4)
 
I’ve proposed to understand the concepts of emergence and generativeness as being symmetrical (link). Generative higher-level properties are those that can be calculated or inferred based on information about the properties and states of the micro-components. Emergent properties are properties of an ensemble that have substantially different dynamics and characteristics from those of the components. So emergent properties may seem to be non-generative properties. Further, I understand the idea of emergence in a weak and a strong sense: weakly emergent properties of an ensemble are properties that cannot be derived from the characteristics of the components given the limits of observation or computation; and strongly emergent properties are ones that cannot be derived in principle from full knowledge of the properties and states of the components. They must be understood in their own terms.
Conversations with Tarun Menon at the Tata Institute of Social Sciences in Mumbai were very helpful in allowing me to broaden somewhat the way I understand emergence in physical systems. So here I’d like to consider some additional complications for the theory of emergence coming from one specific physical finding, the mathematics of phase transitions.

Complexity scientists have spent a lot of effort on understanding the properties of complex systems using a different concept, the idea of a phase transition. The transition from liquid water to steam as temperature increases is an example; the transition happens abruptly as the system approaches the critical value of the phase parameter — 100 degrees centigrade at constant pressure of one atmosphere, in the case of liquid-gas transition. 
 
Ricard Solé presents the current state of complexity theory with respect to the phenomenon of phase transitions in Phase Transitions. Here is how he characterizes the core idea:

In the previous sections we used the term critical point to describe the presence of a very narrow transition domain separating two well-defined phases, which are characterized by distinct macroscopic properties that are ultimately linked to changes in the nature of microscopic interactions among the basic units. A critical phase transition is characterized by some order parameter φ(μ) that depends on some external control parameter μ (such as temperature) that can be continuously varied. In critical transitions, φ varies continuously at μ_c (where it takes a zero value) but the derivatives of φ are discontinuous at criticality. For the so-called first-order transitions (such as the water-ice phase change) there is a discontinuous jump in φ at the critical point. (10)
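The distinction can be seen in the simplest possible mathematics; this is the standard textbook normal form for a continuous transition, not Solé’s own equation. Consider the steady states of

$$\dot{\varphi} = (\mu - \mu_c)\,\varphi - \varphi^3 .$$

For $\mu \le \mu_c$ the only stable steady state is $\varphi^* = 0$; for $\mu > \mu_c$ it is $\varphi^* = \pm\sqrt{\mu - \mu_c}$. The order parameter is continuous at $\mu_c$, but its derivative $d\varphi^*/d\mu = 1/(2\sqrt{\mu - \mu_c})$ diverges there, which is exactly the continuous-value, discontinuous-derivative signature of a critical transition; a first-order transition instead shows a jump in $\varphi$ itself.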

So what is the connection between “emergent phenomena” and systems that undergo phase transitions? One possible connection is this: when a system undergoes a phase transition, its micro-components get rapidly reconfigured into a qualitatively different macro-structure. And yet the components themselves are unchanged. So one might be impressed with the fact that the pre- and post-transition macro-states correspond to very nearly the same configurations of micro-states. The steaminess of the water molecules is triggered by an external parameter — a change in temperature (or possibly pressure) — and their characteristics around the critical point are very similar (their mean kinetic energy is approximately equal before and after the transition). The diagram above represents the physical realities of water molecules in the three phase states.
 
Solé and other complexity theorists see this phase-transition phenomenon in a wide range of systems, including not only simple physical systems but biological and social systems as well. Solé offers the phenomenon of flocking as an example. We might consider whether the phenomenon of ethnic violence is a phase transition from a mixed but non-aggressive population of individuals to occasional abrupt outbursts of widespread conflict (link).
The disanalogy here is the fact that “unrest” is not a new equilibrium phase of the substrate of dispersed individuals; rather, it is an occasional abnormal state of brief duration. It is as if water sometimes spontaneously transitioned to steam and then returned to the liquid phase. Solé treats “percolation” phenomena later in the book, and rebellion seems more plausibly treated as a percolation process. Solé treats forest fire this way. But the representation works equally for any process based on contiguous contagion.
 
What seems to be involved here is a conclusion that is a little bit different from standard ideas about emergent phenomena. The point seems to be that, for a certain class of systems, the systems have dynamic characteristics that are formal and abstract and do not require that we understand the micro mechanisms upon which they rest at all. It is enough to know that system S is formally similar to a two-dimensional array of magnetized atoms (the “Ising model”); then we can infer that the phase-transition behavior of the system will have specific mathematical properties. This might be summarized with the slogan, “system properties do not require derivation from micro dynamics.” Or in other words: systems have properties that don’t depend upon the specifics of the individual components — a statement that is strongly parallel to but distinct from the definition of emergence mentioned above. It is distinct, because the approach leaves it entirely open that the system properties are generated by the dynamics of the components.
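To make the point concrete, here is a minimal Metropolis simulation of the two-dimensional Ising model (a sketch of my own, not Solé’s code). Nothing in it depends on what the lattice sites physically are; the phase-transition behavior follows from the formal structure alone.

```python
import numpy as np

rng = np.random.default_rng(0)

def ising_sweep(spins, beta):
    """One Metropolis sweep over a 2D lattice of +/-1 spins (periodic boundaries)."""
    n = spins.shape[0]
    for _ in range(n * n):
        i, j = rng.integers(n, size=2)
        # Energy cost of flipping spin (i, j): dE = 2 * s_ij * (sum of 4 neighbors)
        nb = (spins[(i + 1) % n, j] + spins[(i - 1) % n, j] +
              spins[i, (j + 1) % n] + spins[i, (j - 1) % n])
        dE = 2 * spins[i, j] * nb
        if dE <= 0 or rng.random() < np.exp(-beta * dE):
            spins[i, j] *= -1

def mean_magnetization(T, n=16, sweeps=400, burn=200):
    """Average |magnetization| per spin at temperature T (units where J = kB = 1)."""
    spins = rng.choice(np.array([-1, 1]), size=(n, n))
    samples = []
    for s in range(sweeps):
        ising_sweep(spins, 1.0 / T)
        if s >= burn:
            samples.append(abs(spins.mean()))
    return float(np.mean(samples))

# |m| is near 1 well below the critical temperature T_c ~ 2.269 and near 0 well
# above it; on a small lattice the sharp transition is smeared into a crossover.
for T in (1.5, 2.0, 2.27, 3.0):
    print(f"T = {T:.2f}  <|m|> = {mean_magnetization(T):.2f}")
```

Any system shown to be formally similar to this lattice of interacting binary units inherits the same mathematical behavior, whatever its micro-level mechanism happens to be.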

This idea is fundamental to Solé’s analysis, when he argues that it is possible to understand phase transitions without regard to the particular micro-level mechanisms:

Although it might seem very difficult to design a microscopic model able to provide insight into how phase transitions occur, it turns out that great insight has been achieved by using extremely simplified models of reality. (10)

Here is how Solé treats swarm behavior as a possible instance of phase transition.

In social insects, while colonies behave in complex ways, the capacities of individuals are relatively limited. But then, how do social insects reach such remarkable ends? The answer comes to a large extent from self-organization: insect societies share basic dynamic properties with other complex systems. (157)

Intuitively the idea is that a collection of birds, ants, or bees may be in a state of random movement with respect to each other; and then, as some variable changes, the ensemble snaps into a coordinated “swarm” of flight or movement. Unfortunately he does not provide a mathematical example illustrating swarm behavior; the closest example he provides has to do with patterns of intense activity and slack activity over time in small to medium colonies of ants. This periodicity is related to density. Mark Millonas attempted such an account of swarming in a Santa Fe Institute paper in 1993, “Swarms, Phase Transitions, and Collective Intelligence; and a Nonequilibrium Statistical Field Theory of Swarms and Other Spatially Extended Complex Systems” (link).
 
This work is interesting, but I am not sure that it sheds new light on the topic of emergence per se. Fundamentally it demonstrates that the aggregation dynamics of complex systems are often non-linear and amenable to formal mathematical modeling. As a critical variable changes, a qualitatively new macro-property “emerges” from the ensemble of micro-components. This approach is consistent with the generativity view — the new property is generated by the interactions of the micro-components during an interval of change in critical variables. But it also maintains that systems undergoing phase transitions can be studied using a mathematical framework that abstracts from the physical properties of those micro-components. This is the point of the series of differential equation models that Solé provides. Once we have determined that a particular system has formal properties satisfying the assumptions of the DE model, we can then attempt to measure the critical parameters and derive the evolution of the system without further information about particular mechanisms at the micro-level.
 

Critical realism meets peasant studies

Critical realism is a philosophical theory of social ontology and social science knowledge. This philosophy has been developed in the writings of Roy Bhaskar, Margaret Archer, and other philosophers and sociologists over the past 40 years. Most of its leaders have emphasized the systematic nature of the theory of critical realism. It builds on a philosophical base, the application of the transcendental method of philosophy developed by Roy Bhaskar. The theory is now being recommended within sociology as a better way of thinking about sociological method and theory.

Critical realism has a number of very positive aspects for consideration by social scientists. It is inspired by a deep critique of the philosophy of science associated with logical positivism, it offers a clear defense of the idea that there is a social and natural reality which it is the task of scientific inquiry to learn about, and it gives valuable attention and priority to the challenge of discovering concrete causal mechanisms which lead to real outcomes in the natural and social world. There is, however, some tendency for this tradition to express itself in an inward-looking and even dogmatic fashion.

So how can the fields of sociological method and critical realism progress today? One thing is clear: the value and relevance of critical realism do not lie in providing a template for scientific research or for the form that a good research project should take. There are no such templates. Mechanical application of any philosophy, whether critical realism, positivism, or any other theory of science, is not a fruitful way of proceeding as a scientist. However, with this point understood, it is in fact valuable for sociologists and other social scientists to think reflectively and seriously about some of the assumptions about the social world and the nature of social explanation which are involved in critical realism. The advice to look for real and persistent structures and processes underlying observable phenomena, the idea that “generative causal mechanisms” are crucial to processes of change and stability, the ideas associated with morphogenesis, and the idea that causation is not simply a summary of constant conjunction — these are valuable contributions to social science thinking.

This answers one half of the question raised here: sociological method can benefit from involvement in some open-minded debates inspired by the field of critical realism.

But what about the field of critical realism itself? How can this research community move forward? It would seem that textual argumentation (“what would Roy say about this question or that question?”) is not a good way of making progress in critical realism or in any other field of philosophy of science. More constructive would be for philosophers and social scientists within the field to think open-mindedly about some of its shortcomings and blind spots. And an open-minded consideration of some complementary or competing visions of the social world would strengthen the field as well — the ideas of heterogeneity, plasticity, the social construction of the self, and assemblage, for example.

I think that one good way of posing this challenge to critical realism might be to undertake a careful, rigorous study of very strong examples of social research that involve good inquiry and good theoretical models. The field of critical realism has tended to be too self-contained, with the result that its debates are increasingly sealed off from actual research problems in the social sciences. Careful and non-dogmatic study of extended, clear examples of social inquiry would be very productive.

As a first step, it would be very stimulating to take the empirical and explanatory work of a genuinely innovative social scientist like James Scott and to investigate carefully, reflectively, and seriously how the research problems were defined, what research methods were used, what central theoretical or explanatory ideas were introduced, and how this thinker’s thought developed over time.

Scott’s key ideas include moral economy, hidden transcripts, Zomia, weapons of the weak, seeing like a state, and the social reality of anarchism. And Scott attempts to explain social phenomena as diverse as peasant rebellion, resistance to agricultural modernization, the ways in which English novelists represent class conflict, the strategies of the state and its elusive opponents in southeast Asia, and many other topics of rural society. Many of Scott’s narratives can be analyzed in terms of the discovery of novel social mechanisms, strategies of resistance and domination, and the local embodiment of large social forces like taxation and conscription. Scott’s social worlds are populated by real social actors engaged in concrete social mechanisms and processes that can be known through research. Scott is a realist, but a realist in his own terms: he discovers real social relations, social mechanisms and processes, and modes of social change at the local and national levels, and he puts substantial empirical detail on these things. His way of thinking about peasant society is relational — he pays close attention to the relationships that exist within a village, across lines of property and kinship, in cooperation towards collective action. He gives a role to the important powers of the state, but always with an understanding that the power of the state must be conveyed through a capillary network of agents extending down to the village level. And in fact, his treatments of anarchism and of seeing like a state sum up many of the mechanisms of control and supervision that traditional states have used to control rural populations. (Scott’s work has been discussed frequently in earlier posts.)

In fact, I could imagine a series of carefully chosen case studies of innovative, insightful social researchers who have changed the terms of debate and understanding in a particular field. Other examples might include researchers such as Robert Putnam, Robert Axelrod, Charles Tilly, Michael Mann, Clifford Geertz, Albert Soboul, Simon Schama, Bin Wong, Robert Darnton, and Benedict Anderson.

Studies like these would have the potential to broaden significantly the terms of discussion and debate within the field of CR and to help it engage more deeply with social scientists in several disciplines. This kind of inquiry might help open up some of the blind spots as well: greater importance for processes of the social construction of the self, greater awareness of the heterogeneity of social processes, and a bit more openness to philosophical ideas outside the corpus. No philosophy can proceed solely on the basis of its own premises; interaction with the practices of innovative scientists can significantly broaden the approach in a positive way.

What parts of the social world admit of explanation?

image: John Dos Passos

When Galileo, Newton, or Lavoisier confronted the natural world as “scientists,” they had in mind reasonably clear bodies of empirical phenomena that required explanation: the movements of material objects, the motions of the planets, the facts about combustion. They worked on the hope that nature conformed to a relatively small number of “fundamental” laws which could be discovered through careful observation and analysis. The success of classical physics and chemistry is the result. In a series of areas of research throughout the eighteenth and nineteenth centuries, it turned out that there were strong governing laws of nature — mechanics, gravitational attraction, conservation of matter and energy, electromagnetic propagation — which served to explain a vast range of empirically given natural phenomena. The “blooming, buzzing confusion” of the natural world could be reduced to the operation of a small number of forces and entities.

This finding was not metaphysically or logically inevitable. Nature might have been less regular and less unified than it turned out to be. Natural causes could have fluctuated in their effects and could have interacted with other causes in more complex ways than has turned out to be the case. Laws of nature might have varied over time and space in unpredictable ways. So the success of the project of the natural sciences is both contingent and breathtakingly powerful. There are virtually no bodies of empirical phenomena for which we lack even a good guess about their underlying structure and explanation; and the remaining areas of ignorance seem to fall at the sub-atomic and super-galactic levels.

The situation in the social world is radically different, much as positivistically minded social scientists have wanted to think otherwise. There are virtually no social processes that have the features of predictability and smoothness that are displayed by natural phenomena. Rather, we can observe social processes of unlimited granularity unfolding over time and space, intermingling with other processes; leading sometimes to crashes and exponential accelerations; and sometimes morphing into something completely different.

Imagine that we think of putting together a slow-motion data graphic representing the creation, growth, and articulation of a great city — Chicago, Mexico City, or Cairo, for example. We will need to represent many processes within this graphic: spatial configuration, population size, ethnic and racial composition, patterns of local cooperation and conflict, the emergence and evolution of political authority, the configuration of a transportation and logistics system, the effects of war and natural disaster, the induced transformation of the surrounding hinterland, and the changing nature of relationships with external political powers, to name a few. And within the population itself we will want to track various characteristics of interest: literacy levels, school attendance, nutrition and health, political and social affiliation, gender and racial attitudes and practices, cultural and religious practices, taste and entertainment, and processes of migration and movement. We might think of this effort as a massive empirical project, to provide a highly detailed observational history of the city over a very long period of time. (Cronon’s Nature’s Metropolis: Chicago and the Great West is a treatment of the city of Chicago over the period of about a century with some of these aspirations.) 
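Even as a thought experiment, it is worth asking what the underlying dataset would look like. One hypothetical organization (my sketch, not Cronon’s) records every measurement as a row tagged with process, time, and provenance, so that radically heterogeneous series can coexist in a single structure:

```python
# A hypothetical schema for the city-history dataset (illustrative only).
from dataclasses import dataclass

@dataclass
class Observation:
    city: str       # e.g. "Chicago"
    process: str    # e.g. "population", "literacy_rate", "rail_connections"
    year: int
    value: float
    source: str     # provenance of the measurement

history = [
    Observation("Chicago", "population", 1850, 29_963, "US Census"),
    Observation("Chicago", "population", 1900, 1_698_575, "US Census"),
    # ... thousands of rows across dozens of processes and sources
]

# Any "process" can then be pulled out as a time series for analysis.
population = sorted(
    (obs.year, obs.value) for obs in history if obs.process == "population"
)
print(population)
```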

But now what? How can we treat this massive volume of data “scientifically”? And can we aspire to the ambition of showing how these various processes derive from a small number of more basic forces? Does the phenomenon of the particular city admit of a scientific treatment along the lines of Galileo, Newton, or Lavoisier?

The answer is resoundingly no. Such a goal displays a fundamental misunderstanding of the social world. Social things and processes at every level are the contingent and interactive result of the activities of individual actors. Individuals are influenced by the social environment in which they live; so there is no reductionist strategy available here, reducing social properties to purely individual properties. But the key words here are “contingent” and “interactive”. There is no God’s-eye answer to the question, why did Chicago become the metropolis of the central North American continent rather than St. Louis? Instead, there is history — the choices made by early railroad investors and route designers, the availability of timber in Michigan but not Missouri, a particularly effective group of early city politicians in Chicago compared to St. Louis, the comparative influence on the national scene of Illinois and Missouri. These are all contingent and path-dependent factors deriving from the situated choices of actors at various levels of decision making throughout the century. And when we push down into lower levels of the filigree of social activity, we find equally contingent processes. Why did Motown come to dominate musical culture for a few decades in Detroit and beyond? Why did professional football take off while professional soccer did not? Why are dating patterns different in Silicon Valley than in Iowa City? None of these questions have law-driven answers. Instead, in every case the answer will be a matter of pathway-tracing, examining the contingent turning points that brought us to the situation in question.
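The structure of this kind of contingency can be made vivid with a toy model. A Polya urn (my illustration, not an argument drawn from any of the sources above) rewards early random advantages with compounding growth, so that re-running the same history produces different winners:

```python
# A toy Polya-urn model of path dependence (illustrative only). Two "cities"
# start with one advantage-token each; each new increment goes to a city with
# probability proportional to the tokens it already holds. Early chance
# events get locked in: re-running history yields very different outcomes.
import random

def polya_history(steps=10000, seed=None):
    random.seed(seed)
    chicago, st_louis = 1, 1
    for _ in range(steps):
        if random.random() < chicago / (chicago + st_louis):
            chicago += 1
        else:
            st_louis += 1
    return chicago / (chicago + st_louis)

for seed in range(5):
    print(f"run {seed}: Chicago's final share = {polya_history(seed=seed):.2f}")
# Each run converges to a stable share, but WHICH share is settled by the
# earliest draws: contingency first, lock-in afterward.
```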

What this argument is meant to make clear is that the social world is not like the natural world. It is fundamentally “historical” (meaning that the present is unavoidably influenced by the past); contingent (meaning that events could have turned out differently); and causally plural (meaning that there is no core set of “social forces” that jointly serve to drive all social change). 

It also means that there is no “canonical” description of the social world. With classical physics we had the idea that nature could be described as a set of objects with mass and momentum; electromagnetic radiation with properties of frequency and velocity; atoms and molecules with fixed properties and forces; etc. But this is not the case with the social world. New kinds of processes come and go, and it is always open to a social researcher to identify a new trend or process and to attempt to make sense of this process in its context. 

I don’t mean to suggest that social phenomena do not admit of explanation at all. We can provide mid-level explanations of a vast range of social patterns and events, from the denuding of Michigan forests in the 1900s to the incidence of first names over time. What we cannot do is to provide a general theory that suffices as an explanatory basis for identifying and explaining all social phenomena. The social sciences are at their best when they succeed in identifying mechanisms that underlie familiar social patterns. And these mechanisms are most credible when they are actor-centered, in the sense that they illuminate the ways that individual actors’ behavior is influenced or generated so as to produce the outcome in question. 

In short: the social realm is radically different from the natural realm, and it is crucial for social scientists to have this in mind as they formulate their research and theoretical ideas.

(I used the portrait of Dos Passos above for this post because of the fragmented and plural way in which he seeks to represent a small slice of social reality in U.S.A. This works better than a single orderly narrative of events framed by the author’s own view of the period.)

Is the mind/body problem relevant to social science?

Is solving the mind-body problem crucial to providing a satisfactory sociological theory?

 

No, it isn’t, in my opinion. But Alex Wendt thinks otherwise in Quantum Mind and Social Science: Unifying Physical and Social Ontology. In fact, he thinks a solution to the mind-body problem is crucial to a coherent social science. In Wendt’s words:

Some of the deepest philosophical controversies in the social sciences are just local manifestations of the mind–body problem. So if the theory of quantum consciousness can solve that problem then it may solve fundamental problems of social science as well. (5)

Why so? There are two core problems in the philosophy of mind that Wendt thinks are unavoidable and must be confronted by the social sciences. The first is the problem of consciousness and intentionality; the second is the problem of freedom of the will. How is it possible for a physical, material system (a computer, a brain, a vacuum cleaner) to possess any of these mental properties?

Experts refer to this as the “hard problem” in the philosophy of mind. We might also call it the discontinuity problem: the necessity of a radical break between a non-conscious substrate and a conscious superstrate. How is it possible for an amalgamation of inherently non-conscious things (neurons, transistors, routines in an AI software package) to create an ensemble that possesses consciousness? Isn’t this as mysterious as imagining a world in which matter is composed of photons, where the constituents lack mass and the ensemble possesses mass? In such a case we would get mass out of non-mass; in the case of consciousness we get consciousness out of non-consciousness. “Pan-massism” would be a solution: all things, from stars to boulders to tables and chairs to subatomic components, possess mass.
 

But physicalist philosophers of mind are not persuaded by the discontinuity argument. As we have noted many times here, there are abundant examples of properties that are emergent in a non-spooky way. It simply is not the case that the sciences need to proceed in a Cartesian, foundationalist fashion. We do not need to reduce each level of the world to the workings of a lower level of things and processes.

 
Consider a parallel problem: is solving the question of the fundamental mechanisms of quantum mechanics crucial for understanding chemistry and the material properties of medium-scale objects? Here it seems evident that we can’t require this level of ontological continuity from micro to macro — in fact, there may be reasons for believing the task cannot be carried out in principle. (See the earlier post on the question of whether chemistry supervenes upon quantum theory; link.)
 
Here is the solution to the mind-body problem that Wendt favors: panpsychism. Panpsychism is the notion that consciousness is a characteristic of the world all the way down — from human beings to sub-atomic particles.

Panpsychism takes a known effect at the macroscopic level–that we are conscious–and scales it downward to the sub-atomic level, meaning that matter is intrinsically minded. (30) 

Exploiting this possibility, quantum consciousness theorists have identified mechanisms in the brain that might allow this sub-atomic proto-consciousness to be amplified to the macroscopic level. (5)

Quantum consciousness theory builds on these intuitions by combining two propositions: (1) the physical claim of quantum brain theory that the brain is capable of sustaining coherent quantum states (Chapter 5), and (2) the metaphysical claim of panpsychism that consciousness inheres in the very structure of matter (Chapter 6). (92)

Panpsychism strikes me as an extravagant and unhelpful theoretical approach, however. Why should we attempt to analyze “Robert is planning to embarrass the prime minister” into a vast ensemble of psychic bits associated with the sub-atomic particles of his body? How does it even make sense to imagine a “sub-atomic bit of consciousness”? And how does the postulation of sub-atomic characteristics of consciousness give us any advantage in understanding ordinary human consciousness, deliberation, and intentionality?

Another supposedly important issue in the domain of the mind-body problem is the problem of freedom of the will. As ordinary human beings in the world we work on the assumption that individuals make intentional choices among feasible alternatives; their behavior is not causally determined by any set of background conditions. But if individuals are composed of physically deterministic parts (classical physics), then how is it possible for the organism to be “free”? And equally, if individuals are composed of physically indeterministic parts (probabilistic sub-atomic particles), then how is it possible for the organism to be intentional (since chance doesn’t produce intentionality)? So neither classical physics nor quantum physics seems to leave room for intentional free choice among alternatives.

Consider the route of the Roomba robotic vacuum cleaner through the cluttered living room (link): its course may appear either random or strategic, but in fact it is neither. Instead, the Roomba’s algorithms dictate the turns and trajectories that the device takes in either an unobstructed run or an obstructed run. The behavior of the Roomba is determined by its algorithms and the inputs of its sensors; there is no room for freedom of choice in the Roomba. How can it be different for a dog or a human being, given that we too are composed of algorithmic computing systems?
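The point can be made concrete with a cartoon of such a control loop (entirely made up for illustration; iRobot’s actual algorithms are proprietary and surely more sophisticated). The trajectory is a pure function of the algorithm and the sensor readings:

```python
# A cartoon coverage algorithm in the spirit of the Roomba argument
# (made up for illustration; not iRobot's actual code). Behavior is fully
# determined by the algorithm plus sensed obstacles: no "choice" anywhere.
GRID = [
    "........",
    "..##....",
    "....#...",
    "........",
]

def run(x=0, y=0, heading=(1, 0), steps=40):
    path = []
    for _ in range(steps):
        path.append((x, y))
        nx, ny = x + heading[0], y + heading[1]
        # "Sensor": is the next cell a wall or outside the room?
        blocked = (not (0 <= ny < len(GRID) and 0 <= nx < len(GRID[0]))
                   or GRID[ny][nx] == "#")
        if blocked:
            heading = (-heading[1], heading[0])  # deterministic 90-degree turn
        else:
            x, y = nx, ny
    return path

print(run()[:10])  # identical on every call: no freedom in the loop
```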

Social theory presupposes intentional actors; but our current theories in neuroscience don’t permit us to explain how intentionality, consciousness, and freedom are possible. So don’t we need to solve the problem of freedom of the will before we can construct valid sociological theories that depend upon conscious, intentional, and free actors?

Again, my answer is negative. It is an interesting question, to be sure, how freedom, consciousness, and intentionality can emerge from the wetware of the brain. But it is not necessary to solve this problem before we proceed with social science. Instead, we can begin with phenomenological truisms: we are conscious, we are intentional, and we are (in a variety of conditioned senses) free. How the organism achieves these higher-level capabilities is intriguing to study; but we don’t have to premise our sociological theories on any particular answer to this question.

So the position I want to take here is that we don’t have to solve the mysteries of quantum mechanics in order to understand social processes and social causation. We can bracket the metaphysics of the quantum world — much as the Copenhagen interpretation sought to do — without abandoning the goal of providing a good explanation of aspects of the social world and social actors. Wendt doesn’t like this approach:

Notwithstanding its attractions to some, this refusal to deal with ontological issues also underlies the main objection to the Copenhagen approach: that it is essentially incomplete. (75)

But why is incompleteness a problem for the higher-level science (psychology or sociology, for example)? Why are we not better served by a kind of middle-level theory of human action and the social world, a special science, that refrains altogether from the impulse of reductionism? This middle-level approach would certainly leave open the research question of how various capabilities of the conscious, intentional organism are embodied in neurophysiology. But it would not require providing such an account in order to validate the human-level or social-level theory.

How to do cephalopod philosophy

How should researchers attempt to investigate non-human intelligence? The image above raises difficult questions. The octopus is manipulating (tenticlating?) the Rubik’s cube. But there is a raft of questions that are difficult to resolve on the basis of simple inductive observation. And some of those questions are as much conceptual as they are empirical. Is the octopus “attempting to solve the cube”? Does it understand the goal of the puzzle? Does it have a mental representation of a problem which it is undertaking to solve? Does it have temporally extended intentionality? How does octopus consciousness compare to human consciousness? (Here is a nice website by several biologists at Reed College on the subject of octopus cognition; link.)

An octopus-consciousness theorist might offer a few hypotheses:

  1. The organism possesses a cognitive representation of its environment (including the object we refer to as “Rubik’s cube”).
  2. The organism possesses curiosity — a behavioral disposition to manipulate the environment and observe the effects of manipulation.
  3. The organism has a cognitive framework encompassing the idea of cause and effect.
  4. The organism has desires and intentions.
  5. The organism has beliefs about the environment.
  6. The organism is conscious of itself within the environment.

How would any of these hypotheses be evaluated?

One resource that the cephalopod behavior theorist has is the ability to observe octopuses in their ordinary life environments and in laboratory conditions. These observations constitute a rich body of data about behavioral capacities and dispositions. For example:

Here we seem to see the organism conveying a tool (coconut shell) to be used for an important purpose later (concealment) (link). This behavior seems to imply several cognitive states: recognition of the physical characteristics of the shell; recognition of the utility those characteristics may have in another setting; and a plan for concealment. The behavior also seems to imply a capacity for learning — adapting behavior by incorporating knowledge learned at an earlier time.

Another tool available to the cephalopod theorist is controlled experimentation. It is possible to test the perceptual, cognitive, and motor capacities of the organism by designing simple experimental setups that invite various kinds of behavior. The researcher can ask “what-if” questions and frame experiments that serve to answer them — for example, what if the organism is separated from the shell but the shell remains in view; will the organism reacquire the shell?

A third tool available to the cephalopod researcher is the accumulated neurophysiology available for the species. How does the perceptual system work? What can we determine about the cognitive system embodied in the organism’s central nervous system?

Finally, the researcher might consult with philosophers working on the mind-body problem for human beings, to canvass whether there are useful frameworks in that discipline that might contribute to octopus mind-body studies. (Thomas Nagel’s famous article “What Is It Like to Be a Bat?” comes to mind, in which he walks through the difficulty of imagining the consciousness of a bat whose sensory world depends on echolocation; link.)

In short, it seems that cephalopod cognition is a research field that necessarily combines detailed empirical research with conceptual and theoretical framing; and the latter efforts require as much rigor as the former.
