Analytic philosophy of meaning and smart AI bots

One of the impulses of the early exponents of analytic philosophy was to provide strict logical simplifications of hitherto vague or indefinite ideas. There was a strong priority placed on being clear about the meaning of philosophical concepts, and more generally, about “meaning” in language simpliciter.

Here is the opening paragraph of Rudolf Carnap’s The Logical Structure of the World and Pseudoproblems in Philosophy:

The present investigations aim to establish a “constructional system”, that is, an epistemic-logical system of objects or concepts. The word “object” is here always used in its widest sense, namely, for anything about which a statement can be made. Thus, among objects we count not only things, but also properties and classes, relations in extension and intension, states and events, what is actual as well as what is not. Unlike other conceptual systems, a constructional system undertakes more than the division of concepts into various kinds and the investigation of the differences and mutual relations between these kinds. In addition, it attempts a step-by-step derivation or “construction” of all concepts from certain fundamental concepts, so that a genealogy of concepts results in which each one has its definite place. It is the main thesis of construction theory that all concepts can in this way be derived from a few fundamental concepts, and it is in this respect that it differs from most other ontologies. (Carnap 1928 [1967]: 5)

But the idea of absolute, fundamental clarity about the meanings of words and concepts has proven to be unattainable. Perhaps more strikingly, it is ill-conceived. Meanings are not molecules that can be analyzed into their unchanging components. Consider Wittgenstein’s critique of the project of providing a “constructional system” of the meaning of language in the Philosophical Investigations:

12. It is like looking into the cabin of a locomotive. There are handles there, all looking more or less alike. (This stands to reason, since they are all supposed to be handled.) But one is the handle of a crank, which can be moved continuously (it regulates the opening of a valve); another is the handle of a switch, which has only two operative positions: it is either off or on; a third is the handle of a brakelever, the harder one pulls on it, the harder the braking; a fourth, the handle of a pump: it has an effect only so long as it is moved to and fro.

Here Wittgenstein’s point, roughly, is that it is a profound philosophical error to expect a single answer to the question, how does language work? His metaphor of the locomotive cabin suggests that language works in many ways — to describe, to denote, to command, to praise, or to wail and moan; and it is an error to imagine that all of this diverse set of uses should be reducible to a single thing.

Or consider Paul Grice’s theory of meaning in terms of intentions and conversational implicatures. His theory of meaning considers language in use: what is the point of an utterance, and what presuppositions does it make? If a host says to a late-staying dinner guest, “You have a long drive home”, he or she might be understood to be making a Google-maps kind of factual statement about the distance between “your current location” and “home”. But the astute listener will hear a different message: “It’s late, I’m sleepy, there’s a lot of cleaning up to do, it’s time to call it an evening.” There is an implicature in the utterance that depends upon the context, the normal rules of courtesy (“Don’t ask your guests to leave peremptorily!”), and the logic of indirection. The meaning of the utterance is: “I’m asking you courteously to leave.” Here is a nice description of Grice’s theory of “meaning as use” in Richard Grandy and Richard Warner’s article on Grice in the Stanford Encyclopedia of Philosophy (link).

This approach to meaning invites a distinction between “literal” meaning and “figurative” or contextual meaning, and it suggests that algorithmic translation is unlikely to succeed for many important purposes. On Grice’s approach, we must also understand the “subtext”.

Hilary Putnam confronted the question of linguistic meaning (semantics) directly in 1975 in his essay “The Meaning of ‘Meaning’” (link). Putnam questions whether “meaning” is a feature of the psychological state of an individual user of language, and whether meanings are “mental” entities; he argues that they are not. Rather, meanings depend upon a “social division of labor” in which the background knowledge required to explicate and apply a term is distributed over a group of experts and quasi-experts.

A socio-linguistic hypothesis. The last two examples depend upon a fact about language that seems, surprisingly, never to have been pointed out: that there is division of linguistic labor. We could hardly use such words as “elm” and “aluminum” if no one possessed a way of recognizing elm trees and aluminum metal; but not everyone to whom the distinction is important has to be able to make the distinction. (144)

Putnam links his argument to the philosophical concepts of sense and reference. The reference (or extension) of a term is the set of objects to which the term refers; and the sense of the term is the set of mental features accessible to the individual that permits him or her to identify the referent of the term. But Putnam offers arguments about hypothetical situations that are designed to show that two individuals may be in identical psychological states with respect to a concept X, but may nonetheless identify different referents or extensions of X. “We claim that it is possible for two speakers to be in exactly the same psychological state (in the narrow sense), even though the extension of the term A in the idiolect of the one is different from the extension of the term A in the idiolect of the other. Extension is not determined by psychological state” (139).

A second idea that Putnam develops here is independent of this point about the socially distributed knowledge needed to identify the extension of a concept. This is his suggestion that we might try to understand the meaning of a noun as being the “stereotype” that competent language users have about that kind of thing.

In ordinary parlance a “stereotype” is a conventional (frequently malicious) idea (which may be wildly inaccurate) of what an X looks like or acts like or is. Obviously, I am trading on some features of the ordinary parlance. I am not concerned with malicious stereotypes (save where the language itself is malicious); but I am concerned with conventional ideas, which may be inaccurate. I am suggesting that just such a conventional idea is associated with “tiger,” with “gold,” etc., and, moreover, that this is the sole element of truth in the “concept” theory. (169)

Here we might summarize the idea of a thing-stereotype as a cluster of beliefs about the thing that permits conversation to get started. “I’m going to tell you about glooples…” “I’m sorry, what do you mean by ‘gloople’?” “You know, that powdery stuff that you put in rice to make it turn yellow and give it a citrous taste.” Now we have an idea of what we’re talking about; a gloople is a bit of ground saffron. But of course this particular ensemble of features might characterize several different spices — cumin as well as saffron, say — in which case we do not actually know what is meant by “gloople” for the speaker. This is true; there is room for ambiguity, misunderstanding, and misidentification in the kitchen — but we have a place to start the conversation about the gloople needed for making the evening’s curry. And, as Putnam emphasizes in this essay and many other places, we are aided by the fact that there are “natural kinds” in the world — kinds of thing that share a fixed inner nature and that can be reidentified in different settings. This is where Putnam’s realism intersects with his theory of meaning.

What is interesting about this idea about the meaning of a concept term is that it makes the meaning of a concept or term inherently incomplete and corrigible. We do not offer “necessary and sufficient conditions” for applying the concept of gloople, and we are open to discussion about whether the characteristic taste is really “citrous” or rather more like vinegar. This line of thought — a more pragmatic approach to concept meaning — seems more realistic and more true to actual communicative practice than the sparse logical neatness of the first generation of logical positivists and analytic philosophers.

Here is how Putnam summarizes his analysis in “The Meaning of ‘Meaning’”:

Briefly, my proposal is to define “meaning” not by picking out an object which will be identified with the meaning (although that might be done in the usual set-theoretic style if one insists), but by specifying a normal form (or, rather, a type of normal form) for the description of meaning. If we know what a “normal form description” of the meaning of a word should be, then, as far as I am concerned, we know what meaning is in any scientifically interesting sense.

My proposal is that the normal form description of the meaning of a word should be a finite sequence, or “vector,” whose components should certainly include the following (it might be desirable to have other types of components as well): (1) the syntactic markers that apply to the word, e.g., “noun”; (2) the semantic markers that apply to the word, e.g., “animal,” “period of time”; (3) a description of the additional features of the stereotype, if any; (4) a description of the extension. (190)
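To fix ideas, here is a minimal sketch, in Python, of what a “normal form description” might look like as a data structure. This is my own toy illustration rather than anything Putnam provides; the field names simply follow the four components listed in the quotation, and the sample entry loosely follows Putnam’s well-known description of “water” in the essay.

```python
# A toy rendering of Putnam's "normal form" proposal as a data structure.
# Illustrative only; the field names follow the four components quoted above.
from dataclasses import dataclass
from typing import List

@dataclass
class MeaningVector:
    word: str
    syntactic_markers: List[str]   # e.g., "noun", "mass noun"
    semantic_markers: List[str]    # e.g., "natural kind", "liquid"
    stereotype: List[str]          # conventional, corrigible beliefs about the kind
    extension: str                 # a description of what the term refers to

# Loosely following Putnam's own example of "water" in the essay:
water = MeaningVector(
    word="water",
    syntactic_markers=["mass noun", "concrete"],
    semantic_markers=["natural kind", "liquid"],
    stereotype=["colorless", "transparent", "tasteless", "thirst-quenching"],
    extension="H2O (give or take impurities)",
)
```

Nothing in such a structure pretends to supply necessary and sufficient conditions for applying the word; the stereotype component is conventional and corrigible, and the extension is fixed by the world and the division of linguistic labor rather than by anything in an individual speaker’s head.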

Rereading this essay after quite a few years, what is striking is that it seems to offer three rather different theories of meaning: the “social division of labor” theory, the stereotype theory, and the generative semantics theory. Are they consistent? Or are they alternative approaches that philosophers and linguists can take in their efforts to understand ordinary human use of language?

There is a great deal of diversity of approach, then, in the ways that analytic philosophers have undertaken to explicate the question of the meaning of language. And the topic — perhaps unlike many in philosophy — has some very important implications and applications. In particular, there is an intersection between “artificial general intelligence” research and the philosophy of language: If we want our personal assistant bots to be able to engage in extended and informative conversations with us, AI designers will need to have usable theories of the representation of meaning. And those representations cannot be wholly sequential (Markov chain) systems. If Alexa is to be a good conversationalist, she will need to be able to decode complex paragraphs like this, and create a meaningful “to-do” list of topics that need to be addressed in her reply.

Alexa, I was thinking about my trip to Milan last January, where I left my umbrella. Will I be going back to Milan soon? Will it rain this afternoon? Have I been to Lombardy in the past year? Do I owe my hosts at the university a follow-up letter on the discussions we had? Did I think I might encounter rain in my travels to Europe early in the year?

Alexa will have a tough time with this barrage of thoughts. She can handle the question about today’s weather. But how should her algorithms handle the question about what I thought about the possibility of rain during my travels last January? I had mentioned forgetting my umbrella in Milan; that implies I had taken an umbrella; and that implies that I thought there was a possibility of rain. But Alexa is not good at working out background assumptions and logical relationships between sentences. Or in Gricean terms, Alexa doesn’t get conversational implicatures.
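A toy sketch may make the gap concrete. Suppose, purely for illustration, that an assistant maps a few literal questions onto lookup calls and flags everything else as needing inference over presuppositions. The intent table and function below are hypothetical, not any real assistant’s API; the point is only to show where the Gricean work begins.

```python
# Purely illustrative: a hypothetical split between literal "lookup" questions
# and questions that require inference over unstated presuppositions.

LOOKUP_INTENTS = {
    "will it rain this afternoon": "weather_forecast(location='here', timeframe='this afternoon')",
    "have i been to lombardy in the past year": "calendar_search(region='Lombardy', window='past year')",
}

def to_do_list(utterance):
    """Split a barrage of questions into lookup items and items needing inference."""
    questions = [q.strip().lower() for q in utterance.split("?") if q.strip()]
    easy = [(q, LOOKUP_INTENTS[q]) for q in questions if q in LOOKUP_INTENTS]
    hard = [q for q in questions if q not in LOOKUP_INTENTS]
    return easy, hard

barrage = ("Will I be going back to Milan soon? Will it rain this afternoon? "
           "Have I been to Lombardy in the past year? "
           "Did I think I might encounter rain in my travels to Europe early in the year?")
easy, hard = to_do_list(barrage)

# The "hard" questions need chained background assumptions that are nowhere stated:
implicature_chain = [
    "the speaker left an umbrella in Milan",                  # stated earlier in the conversation
    "so the speaker had taken an umbrella to Milan",          # presupposed by 'left'
    "so the speaker thought rain was possible on that trip",  # the usual reason for carrying one
]
```

The “easy” items are the ones current assistants already handle well; the hand-coded chain at the end is exactly the kind of background reasoning, from “left my umbrella” to “expected rain,” that a lookup architecture has no way to represent.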

Luca Gasparri and Diego Marconi provide a very interesting article on “Word Meaning” in the Stanford Encyclopedia of Philosophy (link) that allows the reader to see where theories of meaning have gone in philosophy, linguistics, and cognitive science since the 1970s. For example, linguists have developed a compositional theory of word meaning:

The basic idea of the Natural Semantic Metalanguage approach (henceforth, NSM; Wierzbicka 1972, 1996; Goddard & Wierzbicka 2002) is that word meaning is best described through the combination of a small set of elementary conceptual particles, known as semantic primes. Semantic primes are primitive (i.e., not decomposable into further conceptual parts), innate (i.e., not learned), and universal (i.e., explicitly lexicalized in all natural languages, whether in the form of a word, a morpheme, a phraseme, and so forth). According to NSM, the meaning of any word in any natural language can be defined by appropriately combining these fundamental conceptual particles. (36)

This approach is strikingly similar to that described in the Carnap passage above, in that it attempts to decompose meanings of complex concepts into “atoms” of meaning.
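A schematic sketch can show what decomposition into primes amounts to computationally. The prime list below is a small subset of the primes NSM researchers actually posit, and the “explication” is a deliberately telegraphic toy rather than a genuine NSM analysis; the only point is that every clause is built exclusively from a fixed stock of primitives.

```python
# A toy encoding of the NSM idea: explications may use only a fixed set of primes.
# The prime list is a small subset of the NSM inventory; the explication is illustrative only.

SEMANTIC_PRIMES = {
    "i", "you", "someone", "something", "people", "this",
    "good", "bad", "big", "small",
    "think", "know", "want", "feel", "see", "hear", "say",
    "do", "happen", "can", "not", "because", "if", "when", "like",
}

def uses_only_primes(explication):
    """Check that every word of every clause is drawn from the prime set."""
    return all(word in SEMANTIC_PRIMES
               for clause in explication
               for word in clause.split())

# A telegraphic, made-up explication in the spirit of NSM paraphrases:
happy_sketch = [
    "someone feel something good",
    "because something good happen",
    "this someone not want something bad",
]

print(uses_only_primes(happy_sketch))  # True: only primes are used
```

The Carnap-like ambition noted above is visible in the code: meanings are to be assembled from a closed inventory of conceptual atoms.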

Some research on language in computer science and artificial intelligence has been striking and surprising. For example, OpenAI’s GPT-3 natural language text generator, grounded in massive machine learning on billions of pages of text, creates text that appears to be conversational and intelligent — without a milligram of meaning (link). Here is a short (two-paragraph) piece of philosophy text about freedom of speech, generated purely mechanically by Arram Sabeti using GPT-3 as a demonstration of its capabilities (link):

There’s a concept that comes up often in debates about free speech called “shouting fire in a crowded theater”. The basic idea is that you can’t just say whatever you want whenever you want – especially if it’s likely to cause panic. And the reason for this is obvious: panic leads to violence, and violence hurts people. If you are causing panic, then your right to free speech ends where the other person’s right to not get hurt begins. But this isn’t really true. If shouting fire in a crowded theater were illegal, then there wouldn’t be any actual fires in theaters, would there? How could there be? If everyone knew that shouting fire would get them thrown in jail, then they’d all keep quiet when they smelled smoke – and thus there would never be any fires at all. The only way someone could shout fire is if they were lying – but lying is also illegal. So if shouting fire in a crowded theater were illegal, then theaters would never burn down, and no one would ever die from a fire.

The way I see it, the phrase “shouting fire in a crowded theater” should be replaced with “yelling ‘there’s an angry mob coming with torches and pitchforks’ in a theater”. This doesn’t sound quite as catchy, but it has the advantage of being true. And this matters because the whole concept of free speech is based on people getting the facts right. If you’re going to shout fire in a crowded theater, then you need to actually have seen a fire – not just heard about one from someone else. You need to have checked that there really is a fire before you warn everyone else that there’s a fire. Otherwise you’re just causing panic for no reason – and that’s not free speech, that’s just yelling “boo” in a crowded theater.

The passage is no argument at all — no guiding intelligence, and obvious non-sequiturs from one sentence to another. The first four sentences make sense. But then the next several sentences are nonsensical and illogical. In sentence 5 — what “isn’t really true”? Sentence 6 is flatly illogical. In fact, it is as illogical as Trump’s insistence that if we had less testing then there would be less COVID in the United States. And the statement, “… but lying is also illegal” — no, it’s not. The bot is misinformed about the law. Or more precisely: these are just words and phrases strung together algorithmically with no logical construction or understanding guiding the statements. And the second paragraph has the same features. It is kind of entertaining to see the logical flaws of the text; but maybe there is an important underlying discovery as well: machine learning cannot create or discover rules of logic that allow for argument and deduction. The passage is analogous to Noam Chomsky’s example of a syntactically correct but semantically meaningless sentence, “Colorless green ideas sleep furiously”. This GPT-3 text is syntactically correct from phrase to phrase, but lacks the conceptual or logical coherence of a meaningful set of thoughts. And it seems pretty clear that the underlying approach is a dead end when it comes to the problem of natural language comprehension.
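For contrast with the remark above about “wholly sequential (Markov chain) systems,” here is a minimal bigram Markov generator. GPT-3 is vastly more sophisticated than this, and the sketch is mine rather than anything OpenAI publishes, but it makes the worry concrete: generation is driven entirely by statistics of word succession, and there is no component anywhere that could count as a representation of logic, reference, or implicature.

```python
import random
from collections import defaultdict

# A toy bigram Markov text generator: each word is chosen only on the basis of
# which words have followed the previous word in the training text.

def train_bigrams(text):
    words = text.split()
    table = defaultdict(list)
    for w1, w2 in zip(words, words[1:]):
        table[w1].append(w2)
    return table

def generate(table, start, length=20):
    word, output = start, [start]
    for _ in range(length):
        followers = table.get(word)
        if not followers:
            break
        word = random.choice(followers)
        output.append(word)
    return " ".join(output)

corpus = ("you can't just say whatever you want whenever you want "
          "especially if it is likely to cause panic and panic leads to violence "
          "and violence hurts people")
table = train_bigrams(corpus)
print(generate(table, "panic"))
```

Locally each transition is plausible; globally there is nothing to keep the output from wandering into non sequiturs, which is, in miniature, the complaint lodged against the passage above.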

The Malthusian problem for scientific research

It seems that there is a kind of inverse Malthusian structure to scientific research and knowledge. Topics for research and investigation multiply geometrically, while actual research and the creation of knowledge can only proceed in a selective and linear way. This is true in every field — natural science, biology, social science, poetry. Take Darwin. He specialized in finches for a good while. But he could easily have taken up worms, beetles, or lizards, or he could have turned to conifers, oak trees, or cactuses. The evidence of speciation lies everywhere in the living world, and it is literally impossible for a generation of scientists of natural history to study them all.

Or consider a topic of current interest to me, the features that lead to dysfunctional performance in organizations large and small. Once we notice that the specific workings of an organization lead to harmful patterns that we care about a great deal, it makes sense to consider case studies of an unbounded number of organizations in every sector. How did the UAW work such that rampant corruption emerged? What features of the Chinese Communist Party led it to the profound secrecy tactics routinely practiced by its officials? What features of the Xerox Corporation made it unable to turn the mouse-based computer interface system into a commercial blockbuster? Each of these questions suggests the value of an organized case study, and surely we would learn a lot from each study. But each such study takes a person-year to complete, and a given scholar is unlikely to want to spend the rest of her career doing case studies like these. So the vast majority of such studies will never be undertaken. 

This observation has very intriguing implications for the nature of our knowledge about the world — natural, biological, and social. It seems to imply that our knowledge of the world will always be radically incomplete, with vast volumes of research questions unaddressed and sources of empirical phenomena unexamined. We might take it as a premise that there is nothing in the world that cannot be understood if investigated scientifically; but these reflections suggest that we are still forced to conclude that there is a limitless range of phenomena that have not been investigated, and will never be.

It is possible that philosophers of physics would argue that this “incompleteness” result does not apply to the realm of physical phenomena, because physics is concerned to discover a small number of fundamental principles and laws about how the micro- and macro-worlds of physical phenomena work. The diversity of the physical world is then untroubling, because every domain of physics can be subsumed under these basic principles and theories. Theories of gravitation, subatomic particles and forces, space-time relativity, and the quantum nature of the world are obscure but general and simple, and there is at least the hope that we might arrive at a comprehensive physics with the resources needed to explain all physical phenomena, from black-hole pairs to the nature of dark matter.

Whatever the case with physics, the phenomena of the social world are plainly not regulated by a simple set of fundamental principles and laws. Rather, heterogeneity, exception, diversity, and human creativity are fundamental characteristics of the social world. And this implies the inherent incompleteness of social knowledge. Variation and heterogeneity are the rule; so novel cases are always available, and studying them always leads to new insights and knowledge. Therefore there are always domains of phenomena that have not yet been examined, understood, or explained. This conclusion is a bit like Cantor’s diagonal proof that the real numbers cannot be exhaustively listed: every number can be represented as an infinite decimal, and yet for every list of infinite decimals it is simple to generate another infinite decimal that is not on the list.
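For readers who want the analogy spelled out, here is a compact version of the diagonal construction, using finite digit strings purely for illustration; the real argument runs over infinite decimal expansions.

```python
# Diagonal construction: build a digit string that differs from the n-th entry
# of any given list at its n-th digit, so it cannot appear anywhere on the list.

def diagonal(decimals):
    return "".join("5" if row[n] != "5" else "6"
                   for n, row in enumerate(decimals))

listed = ["141592", "718281", "414213", "302585", "577215", "693147"]
d = diagonal(listed)
assert all(d[n] != listed[n][n] for n in range(len(listed)))
print(d)  # differs from every entry on the list
```

However long the list of already-studied cases grows, the same move produces a case not on the list, which is the force of the analogy for the incompleteness of research.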

Further, the biological realm seems to resemble the social realm in these respects, so that biological science is inherently incomplete as well. Even granting that the theories of evolution and natural selection are fundamental and universal in biological systems, the principles specified in these theories guarantee diversification and variation in biological outcomes. As a result we might argue that the science of living systems too is inherently incomplete, with new areas of inquiry outstripping the ability of the scientific enterprise to investigate them. In a surprising way the uncertainties we confront in the Covid-19 crisis seem to illustrate this situation. We don’t know whether this particular virus will stimulate an enduring immunity in individuals who have experienced the infection, and “first principles” in virology do not seem to afford a determinate answer to the question.

Consider these two patterns. The first is woven linen; the second is the pattern of habitat for invasive species across the United States. The weave of the linen is mechanical and regular; it covers all parts of the space with a grid of fiber. The second is the path-dependent result of invasion of habitat by multiple invasive species. Certain areas are intensively inhabited, while other areas are essentially free of invasive species. The regularity of the first image is a design feature of the process that created the fabric; the irregularity and variation of the second image is the consequence of multiple independent and somewhat stochastic yet opportunistic exploratory movements of the various species. Is scientific research more similar to the first pattern or the second?

I would suggest that scientific research more resembles the second process than the first. Researchers are guided by their scientific curiosity, the availability of research funding, and the assumptions about the importance of various topics embodied in their professions; and the result is a set of investigations and findings that are very intensive in some areas, while completely absent in other areas of the potential “knowledge space”.

Is this a troubling finding? Only if one thought that the goal of science is to eventually provide an answer to every possible empirical question, and to provide a general basis for explaining everything. If, on the other hand, we believe that science is an open-ended process, and that the selection of research topics is subject to a great deal of social and personal contingency, then the incompleteness of science comes as no surprise. Science is always exploratory, and there is much to explore in human experience.

(Several earlier posts have addressed the question of defining the scope of the social sciences; link, link, link, link, link.)

The tempos of capitalism

I’ve been interested in the economic history of capitalism since the 1970s, and there are a few titles that stand out in my memory. There were the Marxist and neo-Marxist economic historians (Marx’s Capital, E.P. Thompson, Eric Hobsbawm, Rodney Hilton, Robert Brenner, Charles Sabel); the debate over the nature of the industrial revolution (Deane and Cole, NFR Crafts, RM Hartwell, EL Jones); and volumes of the Cambridge Economic History of Europe. The history of British capitalism poses important questions for social theory: is there such a thing as “capitalism”, or are there many capitalisms? What are the features of the capitalist social order that are most fundamental to its functioning and dynamics of development? Is Marx’s intellectual construction of the “capitalist mode of production” a useful one? And does capitalism have a logic or tendency of development, as Marx believed, or is its history fundamentally contingent and path-dependent? Putting the point in concrete terms, was there a probable path of development from the “so-called primitive accumulation” to the establishment of factory production and urbanization to the extension of capitalist property relations throughout much of the world?
 
Part of the interest of detailed research in economic history in different places — England, Sweden, Japan, the United States, China — is the light that economic historians have been able to shed on the particulars of modern economic organization and development, and the range of institutions and “life histories” they have identified for these different historically embodied social-economic systems. For this reason I have found it especially interesting to read and learn about the ways in which the early modern Chinese economy developed, and different theories of why China and Europe diverged in this period. Kenneth Pomeranz, Philip Huang, William Skinner, Mark Elvin, Bozhong Li, James Lee, and Joseph Needham all shed light on different aspects of this set of questions, and once again the Cambridge Economic History of China was a deep and valuable resource.
 
A new title that recently caught my eye is Pierre Dockès’ Le Capitalisme Et Ses Rythmes, quatre siècles en perspective: Tome I Sous Le Regard Des Géants. Intriguing features include the long sweep of the book (400 years, over 950 pages, with volume II to come) and the question of whether there is something new to say about this topic. After reading large parts of the book, I think the answer to the last question is “yes”.
 
Dockès is interested in both the history of capitalism as an economic system and the history of economic science and political economy during the past four centuries. And he is particularly interested in discovering what we can learn about our current economic challenges from both these stories.
 
He specifically distances himself from “mainstream” economic theory and couches his own analysis in a less orthodox and more eclectic set of ideas. He defines mainstream economics in terms of five ideas: first, its strong commitment to mathematization and formalization of economic ideas; second, its disciplinary tendency towards hyper-specialization; third, its tendency to take the standpoint of the capitalist and the free market in its analyses; fourth, the propensity to extend these neoliberal biases to the process of selection and hiring of academics; and fifth, an underlying “scientism” and positivism that leads its practitioners to devalue the history of the discipline and the historical conditions through which modern institutions came to be (9-12).
 
Dockès holds that the history of the economic facts and the ideas researchers have had about these facts go hand in hand; economic history and the history of economics need to be studied together. Moreover, Dockès believes that mainstream economics has lost sight of insights from the innovators in the history of economics which still have value — Ricardo, Smith, Keynes, Walras, Sismondi, Hobbes. The solitary focus of the discipline of mainstream economics in the past forty years on formal, mathematical representations of a market economy precludes these economists from “seeing” the economic world through the conceptual lenses of gifted predecessors. They are trapped in a paradigm or an “epistemological framework” from which they cannot escape. (These ideas are explored in the introduction to the volume.)
 
The substantive foundation of the book is Dockès’ idea that capitalism has long-term rhythms punctuated by crises, and that these fluctuations themselves are amenable to historical-causal and institutional analysis.

En un mot, croissance et crise sont inséparables et inhérents au processus de développement capitaliste laissé à lui-même.

[In a word, growth and crisis are inseparable and inherent in the process of capitalist development left to itself.] (13)

The fluctuations of capitalism over the long term are linked in a single system of causation: growth, depression, financial crisis, and renewed growth belong to one connected process. Therefore, Dockès believes, it should be possible to discover the systemic causes of the development of various capitalist economies by uncovering the dynamics of crisis. Further, he underlines the serious social and political consequences that have ensued from economic crises in the past, including the rise of the Nazi regime out of the global economic crisis of the 1930s.

Etudier ces rythmes impose une analyse des logiques de fonctionnement du capitalism.

[Studying these rhythms requires an analysis of the logic of the functioning of capitalism.] (12)

Dockès is explicit in saying that economic history does not “repeat” itself, and the crises of capitalism are not replicas of each other over the decades or centuries. Historicity of the time and place is fundamental, and he underlines the path dependency of economic development in some of its aspects as well. But he argues that there are important similarities across various kinds of economic crises, and it is worthwhile discovering these similarities. He takes debt crises as an example: there are great differences among several centuries of experience of debt crisis. But there is something in common as well:

Permanence aussi dans les relations de pouvoir et dans les intérêts des uns (les créanciers partisans de la déflation, des taux élevés) et des autres (les débiteurs inflationnistes), dans les jeux de l’état entre ces deux groupes de pression. On peut tirer deux conséquences des homologies entre le passé et le présent.

[Permanence also in the relations of power and in the interests of some (creditors who favor deflation and high rates) and others (inflationary debtors), and in the games the state plays between these two pressure groups. We can draw two consequences from the homologies between the past and the present.] (20)

And failing to consider carefully and critically the economies and crises of the past is a mistake that may lead contemporary economic experts and advisors into ever-deeper economic crises in the future.

L’oubli est dommageable, celui des catastrophes, celui des enseignements qu’elles ont rendu possible, celui des corpus théoriques du passé. Ouvrir la perspective par l’économie historique peut aider à une meilleure compréhension du présent, voire à préparer l’avenir. (21)

[Forgetting is harmful: forgetting catastrophes, forgetting the lessons they have made possible, forgetting the theoretical corpus of the past. Opening up the perspective through historical economics can help us to a better understanding of the present, and even to prepare for the future.] (21)

The scope and content of the book are evident in the list of the book’s chapters:
  1. Crises et rythmes économiques
  2. Périodisation, mutations et rythmes longs
  3. Le capitalisme d’Ancien Régime, ses crises
  4. Le “Haut Capitalisme”, ses crises et leur théorisation (1800-1870)
  5. Karl Marx et les crises
  6. Capitalisme “Monopoliste” et grande industrie (1870-1914)
  7. Interlude
  8. À l’âge de l’acier, les rythmes de l’investissement et de l’innovation
  9. Impulsion monétaire et effets réels
  10. La monnaie hégémonique
  11. “Le chien dans la mangeoire”
  12. La grande crise des années trente
  13. Keynes et la “Théorie Générale”. La “Haute Théorie”, la dynamique, le cycle (1926-1946)
  14. En guise de conclusion d’étape
As the chapter titles make evident, Dockès delivers on his promise of treating both the episodes, trends, and facts of economic history as well as the history of the theories through which economists have sought to understand those facts and their dynamics.
 

The sociology of scientific discipline formation

There was a time in the philosophy of science when it may have been believed that scientific knowledge develops in a logical, linear way from observation and experiment to finished theory. This was something like the view presupposed by the founding logical positivists like Carnap and Reichenbach. But we now understand that the creation of a field of science is a social process with a great deal of contingency and path-dependence. The institutions through which science proceeds — journals, funding agencies, academic departments, Ph.D. programs — are all influenced by the particular interests and goals of a variety of actors, with the result that a field of science develops (or fails to develop) with a huge amount of contingency. Researchers in the history of science and the sociology of science and technology approach this problem in fairly different ways.

Scott Frickel’s 2004 book Chemical Consequences: Environmental Mutagens, Scientist Activism, and the Rise of Genetic Toxicology represents an effort to trace out the circumstances of the emergence of a new scientific sub-discipline, genetic toxicology. “This book is a historical sociological account of the rise of genetic toxicology and the scientists’ social movement that created it” (kl 37).

Frickel identifies two large families of approaches to the study of scientific disciplines: “institutionalist accounts of discipline and specialty formation” and “cultural studies of ‘disciplinarity’ [that] make few epistemological distinctions between the cognitive core of scientific knowledge and the social structures, practices, and processes that advance and suspend it” (kl 63). He identifies himself primarily with the former approach:

I draw from both modes of analysis, but I am less concerned with what postmodernist science studies call the micropolitics of meaning than I am with the institutional politics of knowledge. This perspective views discipline building as a political process that involves alliance building, role definition, and resource allocation. … My main focus is on the structures and processes of decision making in science that influence who is authorized to make knowledge, what groups are given access to that knowledge, and how and where that knowledge is implemented (or not). (kl 71)

Crucial for Frickel’s study of genetic toxicology is this family of questions: “How is knowledge produced, organized, and made credible ‘in-between’ existing disciplines? What institutional conditions nurture interdisciplinary work? How are porous boundaries controlled? Genetic toxicology’s advocates pondered similar questions. Some complained that disciplinary ethnocentrism prevented many biologists’ appreciation for the broader ecological implications of their own investigations…. ” (kl 99).

The account Frickel provides involves all of the institutional contingency that we might hope for; at the same time, it is an encouraging account for anyone committed to the importance of scientific research in charting a set of solutions to the enormous problems humanity currently faces.

Led by geneticists, these innovations were also intensely interdisciplinary, reflecting the efforts of scientists working in academic, government, and industry settings whose training was rooted in more than thirty disciplines and departments ranging across the biological, agricultural, environmental, and health sciences. Although falling short of some scientists’ personal visions of what this new science could become, their campaign had lasting impacts. Chief among these outcomes have been the emergence of a set of institutions, professional roles, and laboratory practices known collectively as “genetic toxicology.” (kl 37)

Frickel gives prominence to the politics of environmental activism in the emergence and directions of the new discipline of genetic toxicology. Activists on campus and in the broader society gave impetus to the need for new scientific research on the various toxic effects of pesticides and industrial chemicals; but they also affected the formation of the scientists themselves.

Also of interest is an edited volume on interdisciplinary research in the sciences edited by Frickel, Mathieu Albert, and Barbara Prainsack, Investigating Interdisciplinary Collaboration: Theory and Practice across Disciplines. The book takes special notice of some of the failures of interdisciplinarity, and calls for a careful assessment of the successes and failures of interdisciplinary research projects.

 We think that these celebratory accounts give insufficient analytical attention to the insistent and sustained push from administrators, policy makers, and funding agencies to engineer new research collaborations across disciplines. In our view, the stakes of these efforts to seed interdisciplinary research and teaching “from above” are sufficiently high to warrant a rigorous empirical examination of the academic and social value of interdisciplinarity. (kl 187)

In their excellent introduction Frickel, Albert, and Prainsack write:

A major problem that one confronts in assuming the superiority of interdisciplinary research is a basic lack of studies that use comparative designs to establish that measurable differences in fact exist and to demonstrate the value of interdisciplinarity relative to disciplinary research. (kl 303)

They believe that the appreciation of “interdisciplinary research projects” for its own sake depends on several uncertain presuppositions: that interdisciplinary knowledge is better knowledge, that disciplines constrain interdisciplinary knowledge, and that interdisciplinary interactions are unconstrained by hierarchies. They believe that each of these assumptions is dubious.

Both books are highly interesting to anyone concerned with the development and growth of scientific knowledge. Once we abandoned the premises of logical positivism, we needed a more sophisticated understanding of how the domain of scientific research, empirical and theoretical, is constituted in actual social institutional settings. How is it that Western biology did better than Lysenko? How can environmental science re-establish its credentials for credibility with an increasingly skeptical public?  How are we to cope with the proliferation of pseudo-science in crucial areas — health and medicine, climate, the feasibility of human habitation on Mars? Why should we be confident that the institutions of university science, peer review, tier-one journals, and National Academy selection committees succeed in guiding us to better, more veridical understandings of the empirical world around us?

(Earlier posts have addressed topics concerning social studies of science; link, link, link.)

How things seem and why

The idea that there is a stark separation between many of our ideas of the social world, on the one hand, and the realities of the social world in which we live, on the other, is an old one. We think “fairness and equality”, but what we get is exploitation, domination, and opportunity-capture. And there is a reasonable suspicion that this gap is in some sense intentional: interested parties have deceived us. In some sense it was the lesson of Plato’s allegory of the cave; it is the view that Marx expresses in his ideas of ideology and false consciousness; Gramsci’s theory of hegemony expresses the view; Nietzsche seems to have this separation in mind in much of his writing; and the Frankfurt School made much of it as well. The antidote to these forms of illusion, according to many of these theorists, is critique: careful, penetrating analysis and criticism of the presuppositions and claims of the ideological theory. (Here are several efforts within Understanding Society to engage in this kind of work; link, link, link.)

Peter Baehr’s recent book The Unmasking Style in Social Theory takes on this intellectual attitude of “unmasking” with a critical and generally skeptical eye. Baehr is an expert on the history of sociological theory who has written extensively on Hannah Arendt, Max Weber, and other fundamental contributors to contemporary social theory, and the book shows a deep knowledge of the history and intellectual traditions of social thought.

 The book picks out one particular aspect of the sociological tradition, the “style” of unmasking that he finds to be common in that history (and current practice). So what does Baehr mean by a style?

A style, in the sense used here, is a distinctive way of talking and writing. It is epitomized by characteristic words, images, metaphors, concepts and, especially, techniques. I refer to these collectively as elements or ingredients. (2)

The elements of the unmasking style that he identifies include rhetorical tools such as weaponization, reduction and positioning, inversion, deflation, hyperbole and excess, and exclusive claims of emancipation (chapter 1).

The idea of an intellectual style is innocuous enough — we can recognize the styles of analytic philosophy, contemporary literary criticism, and right-wing political commentary when we read or hear them. But there is a hidden question here: is there more than style to these traditions of thought? Are there methods of inquiry and reasoning, traditions of assessment of belief, and habits of scholarly interaction that underlie these various traditions? In much of Baehr’s book he ignores these questions when it comes to the content of Marxist analysis, feminist theory, or the sociology of race in America. The impression he gives is that it is all style and rhetoric, with no rigorous research and analysis to support the claims.

In fact the overarching impression given by the book is that Baehr believes that much “unmasking” is itself biased, unfair, and dogmatic. He writes:

Unmasking aspires to create this roused awareness. The kind of analysis it requires is never conveyed to the reader as an interpretation of events, hypothetical and contestable. Nor does it allow scientific refutation or principled disagreement. True as fiat, unmasking statements brook no contradiction. (3)

Such an approach to theory and politics is problematic for several reasons. Its authoritarianism is obvious. So is its exclusivity: I am right, you can shut up. Yet ongoing discord, unlike slapdash accusation, is a good thing. (131)

Part of Baehr’s suspicion of the “style” of unmasking seems to derive from an allergy to the language of post-modernism in the humanities and some areas of social theory:

To be sure, unmask is a common term in social theory and political and cultural criticism. Find it consorting with illusion, disguise, fiction, hieroglyph, critique, mystification, fantasy, reversal, hegemony, myth, real interest, objective interest, semantic violence, symbolic violence, alienation, domination, revolution and emancipation. The denser this cluster, the more unmasking obtrudes from it. (5)

And he also associates the unmasking “style” with a culture of political correctness and a demand for compliance with a “progressive” agenda of political culture:

Rarely a day passes on Twitter without someone, somewhere, being upbraided for wickedness. When even a gesture or an intonation is potentially offensive to an aggrieved constituency on high alert, the opportunities for unmasking are endless. Some targets of censure are cowed. They apologize for an offense they were not conscious of committing. Publicly chastened, they resolve to be better behaved henceforth. (7)

A third salient difference between unmasking in popular culture and in academic social theory is that in the academy unmasking is considered progressive. Detecting concealed racism, white privilege, patriarchy, trans-gender phobia and colonial exploitation is the stock in trade of several disciplines, sub-disciplines and pseudo-disciplines across the humanities and social sciences. The common thread is the ubiquity of domination. (8)

Marxism lives on in sociology, in the humanities and social sciences, and in pockets of the wider culture. And wherever one finds Marxism, typically combined today to race and gender politics, and to postcolonial critique, one finds aspects of the unmasking template. (91)

These are currents of thought — memes, theoretical frameworks, apperceptions of the true nature of contemporary society — with which Baehr appears to have little patience.

But here are a few considerations in favor of unmasking in the world of politics, economics, and culture in which we now live.

First, Baehr’s aversion to active efforts to reveal the pernicious assumptions and motives of specific voices in social media is misplaced. When voices traffic in the language of hate, white supremacy, and the denigration of Muslims, gays, and audacious women, and in memes that seem to derive directly from the fascist and neo-Nazi toolbox, is it not entirely appropriate to call those voices to task? Is it not important, even vital, to unmask the voices of hate that challenge the basis of a liberal and inclusive democracy (link)? Is it the unmaskers or the trolls conveying aggressive hate and division who most warrant our disapproval?

And likewise in the area of the thought-frameworks surrounding the facts of modern market society. In some sense the claim that class interest (corporate interest, business interest, elite interest) strives hard to create public understandings of the world that are at odds with the real power relations that govern us is too obviously true to debate. This is the purpose of much corporate public relations and advertising, self-serving think-tanking, and other concrete mechanisms of shifting the terms of public understanding in a direction more favorable to the interests of the powerful. (Here is an article in the New York Times describing research documenting sustained efforts by ExxonMobil to cast doubt in  public opinion about the reality of global warming and climate change; link.) And there is no barrier to conducting careful, rigorous, and intellectually responsible “decoding” of these corporate efforts at composing a fantasy; this is precisely what Conway and Oreskes do with such force in Merchants of Doubt: How a Handful of Scientists Obscured the Truth on Issues from Tobacco Smoke to Global Warming in the case of corporate efforts to distort scientific reality concerning their products and their effects (link).

Baehr’s statements about the unavoidable dogmatism of “unmasking” analysis and criticism are also surprisingly categorical. “The kind of analysis it requires is never conveyed to the reader as an interpretation of events, hypothetical and contestable.” Really? Are there no honest scholars in the field of critical race theory, or in feminist epistemology and philosophy of science, or in the sociology of science and technology? What is this statement other than precisely the kind of wholesale rejection of the intellectual honesty of one’s opponents that otherwise seems to animate Baehr’s critique?

The Unmasking Style is a bit of a paradox, in my view. It denounces the “style” of unmasking, and yet it reads as its own kind of wholesale discrediting of an intellectual orientation for which Baehr plainly has no patience. This is the orientation that takes seriously the facts of power, privilege, wealth, and racial and gender domination that continue to constitute the skeleton of our world. It is fine, of course, to disagree with this fundamental diagnosis of the dynamics of power, domination, and exploitation in the current world. But Baehr’s book has many of the features of tone and rhetoric that the author vigorously criticizes in others. It is perplexing to find that this book offers so little of what the author seems to be calling for — an intellectually open effort to discern the legitimate foundations of one’s opponent’s positions. In my view, readers of The Unmasking Style would be well advised to read as well one or two books by scholars like Frédéric Vandenberghe, including A Philosophical History of German Sociology, to gain a more sympathetic view of critical sociological theory and its efforts to discern the underlying power relations of the modern world (link).

In general, I find that there is much more intellectual substance to efforts to uncover the interest-bias of various depictions of the capitalist world than Baehr is willing to recognize. How do energy companies shape the debate over climate change? How did Cold War ideologies influence the development of the social sciences in the 1950s? How has pro-business, anti-regulation propaganda made the roll-back of protections of the health and safety of the public possible? What is the meaning of the current administration’s persistent language about “dangerous immigrants” in terms of racial prejudice? These are questions that invoke some kind of “demystifying” analysis that would seem to fall in the category of what Baehr classifies as “unmasking”; and yet it is urgent that we undertake those inquiries.

A companion essay by Baehr, “The image of the veil in social theory”, appears in Theory and Society this month (link), and takes a nuanced approach to the question of “mask” and “veil”. The essay has little of the marks of polemical excess that seem to permeate the book itself. Here is the abstract to the essay:

Social theory draws energy not just from the concepts it articulates but also from the images it invokes. This article explores the image of the veil in social theory. Unlike the mask, which suggests a binary account of human conduct (what is covered can be uncovered), the veil summons a wide range of human experiences. Of special importance is the veil’s association with religion. In radical social thought, some writers ironize this association by “unveiling” religion as fraudulent (a move indistinguishable from unmasking it.) Baron d’Holbach and Marx offer classic examples of this stratagem. But other writers, notably Du Bois and Fanon, take a more nuanced and more theoretically productive approach to both religion and the veil. Refusing to debunk religion, these authors treat the veil—symbol and material culture—as a resource to theorize about social conflict. Proceeding in three stages, I, first, contrast the meanings of mask and unmasking with more supple veil imagery; second, identify anti-religious unveiling that is tantamount to unmasking; and, third, examine social theories of the veil that clarify the stakes of social adversity and political struggle. Du Bois’s and Fanon’s contributions to veil imagery receive special attention.

The Unmasking Style is erudite and interesting, and plainly designed to provoke debate. I only wish that it gave more consideration to the very real need we have to confront the lies and misrepresentations that currently pervade our contemporary world.

Philosophy and the study of technology failure

image: Adolf von Menzel, The Iron Rolling Mill (Modern Cyclopes)

Readers may have noticed that my current research interests have to do with organizational dysfunction and large-scale technology failures. I am interested in probing the ways in which organizational failures and dysfunctions have contributed to large accidents like Bhopal, Fukushima, and the Deepwater Horizon disaster. I’ve had to confront an important question in taking on this research interest: what can philosophy bring to the topic that would not be better handled by engineers, organizational specialists, or public policy experts?

One answer is the diversity of viewpoint that a philosopher can bring to the discussion. It is evident that technology failures invite analysis from all of these specialized experts, and more. But there is room for productive contribution from reflective observers who are not committed to any of these disciplines. Philosophers have a long history of taking on big topics outside the defined canon of “philosophical problems”, and often those engagements have proven fruitful. In this particular instance, philosophy can look at organizations and technology in a way that is more likely to be interdisciplinary, and perhaps can help to see dimensions of the problem that are less apparent from a purely disciplinary perspective.

There is also a rationale based on the terrain of the philosophy of science. Philosophers of biology have usually attempted to learn as much about the science of biology as they can manage, but they lack the level of expertise of a research biologist, and it is rare for a philosopher to make an original contribution to the scientific biological literature. Nonetheless it is clear that philosophers have a great deal to add to scientific research in biology. They can contribute to better reasoning about the implications of various theories, they can probe the assumptions about confirmation and explanation that are in use, and they can contribute to important conceptual disagreements. Biology is in a better state because of the work of philosophers like David Hull and Elliott Sober.

Philosophers have also made valuable contributions to science and technology studies, bringing a viewpoint that incorporates insights from the philosophy of science and a sensitivity to the social groundedness of technology. STS studies have proven to be a fruitful place for interaction between historians, sociologists, and philosophers. Here again, the concrete study of the causes and context of large technology failure may be assisted by a philosophical perspective.

There is also a normative dimension to these questions about technology failure for which philosophy is well prepared. Accidents hurt people, and sometimes the causes of accidents involve culpable behavior by individuals and corporations. Philosophers have a long history of contribution to these kinds of problems of fault, law, and just management of risks and harms.

Finally, it is realistic to say that philosophy has an ability to contribute to social theory. Philosophers can offer imagination and critical attention to the problem of creating new conceptual schemes for understanding the social world. This capacity seems relevant to the problem of describing, analyzing, and explaining large-scale failures and disasters.

The situation of organizational studies and accidents is in some ways more hospitable for contributions by a philosopher than other “wicked problems” in the world around us. An accident is complicated and complex but not particularly obscure. The field is unlike quantum mechanics or climate dynamics, which are inherently difficult for non-specialists to understand. The challenge with accidents is to identify a multi-layered analysis of the causes of the accident that permits observers to have a balanced and operative understanding of the event. And this is a situation where the philosopher’s perspective is most useful. We can offer higher-level descriptions of the relative importance of different kinds of causal factors. Perhaps the role here is analogous to messenger RNA, providing a cross-disciplinary kind of communications flow. Or it is analogous to the role of philosophers of history who have offered gentle critique of the cliometrics school for its over-dependence on a purely statistical approach to economic history.

So it seems reasonable enough for a philosopher to attempt to contribute to this set of topics, even if the disciplinary expertise a philosopher brings is more weighted towards conceptual and theoretical discussions than undertaking original empirical research in the domain.

What I expect to be the central finding of this research is the idea that a pervasive and often unrecognized cause of accidents is a systemic organizational defect of some sort, and that it is enormously important to have a better understanding of common forms of these deficiencies. This is a bit analogous to a paradigm shift in the study of accidents. And this view has important policy implications. We can make disasters less frequent by improving the organizations through which technology processes are designed and managed.

The insights of biography

I have always found biographies a particularly interesting source of learning and stimulation. A case in point is a biography and celebration of Muthuvel Kalaignar Karunanidhi published in a recent issue of the Indian fortnightly Frontline. Karunanidhi was an enormously important social and political leader in the Dravidian movement in southern India and Tamil Nadu for over sixty years, and his passing earlier this month was the occasion for a special issue of Frontline. Karunanidhi was president of the Dravidian political party Dravida Munnetra Kazhagam (DMK) for more than fifty years. And he is an individual I had never heard of before opening up Frontline. In his early life he was a script writer and film maker who was able to use his artistic gifts to create characters who inspired political activism among young Tamil men and women. And in the bulk of his career he was an activist, orator, and official who had great influence on politics and social movements in southern India. The recollection and biography by A.S. Panneerselvan is excellent. (This article derives from Panneerselvan’s forthcoming biography of Karunanidhi.) Here is how Panneerselvan frames his narrative:

In a State where language, empowerment, self-respect, art, literary forms and films coalesce to lend political vibrancy, Karunanidhi’s life becomes a sort of natural metaphor of modern Tamil Nadu. His multifaceted personality helps to understand the organic evolution of the Dravidian Movement. To understand how he came to the position to wield the pen and his tongue for his politics, rather than bombs and rifles for revolution, one has to look at his early life. (7)

I assume that Karunanidhi and the Dravidian political movement are common currency for Indian intellectuals and political activists. For an American with only a superficial understanding of Indian politics and history, his life story opens up a whole new aspect of India’s post-independence experience. I have thought of the primary dynamic of Indian politics since Independence as a struggle between the non-sectarian political ideas of Congress, the Hindu nationalism of the BJP, and the secular and leftist position of India’s Communist movement. But the Dravidian movement diverges in specific ways from each of these currents. In brief, the central thread of the Dravidian movement is the rejection of the cultural hegemony of Hindi language, status, and culture, and an expression of pride and affirmation in the cultures and traditions of Tamil India. Panneerselvan describes an internal difference of emphasis on language and culture within the early stage of the Dravidian movement:

The duality of the Self-Respect Movement emerged very clearly during this phase. While Periyar and Annadurai were in total agreement in the diagnosis of the social milieu, their prognoses were quite opposite: For Periyar, language was an instrument for communication; for Annadurai, language was an organic socio-cultural oeuvre that lends a distinct identity and a sense of pride and belonging to the people. (13).

The Dravidian Movement was, broadly speaking, a movement for social justice, and it was fundamentally supportive of the rights and status of Dalits. The tribute by K. Veeramani expresses the social justice commitments of the DMK and Karunanidhi very well:

The goal of dispensation of social justice is possible only through reservation in education and public employment, giving adequate representation to the Scheduled Castes, the Scheduled Tribes and Other Backward Classes. Dispensation of social justice continues to be the core principle of the Dravidian movement, founded by South Indian Liberal Federation (SILF), popularly known as the Justice Party. (36) … The core of Periyar’s philosophy is to bring about equality through equal opportunities in a society rife with birth-based discrimination. Periyar strengthened the reservation mode as a compensation for birth-based inequalities. In that way, reservation has to be implemented as a mode of compensatory discrimination. (38)

Also important in the political agenda of the Dravidian Movement was a sustained effort to improve the conditions of tenants and agricultural workers by narrowing the rights of landlords. J. Jeyaranjan observes:

The power relation between the landlord and the tenant is completely reversed, with the tenant enjoying certain powers to negotiate compensation for giving up the right to cultivate. Mobilisations by the undivided Communist Party of India (CPI) and the Dravidian movement, the Dravidar Kazhagam in particular, have been critical to the creation of a culture of collective action and resistance to landlord power. Further, the coming to power of the Dravida Munnetra Kazhagam (DMK) in 1967 created conditions for consolidating the power of lower-caste tenants who benefited both from a set of State initiatives launched by the DMK and the culture of collective action against Brahmin landlords. (52)

What can be learned from a detailed biography of a figure like Karunanidhi? For me, the opportunity such a piece of scholarship affords is to broaden significantly my understanding of the nuances of philosophy, policy, values, and institutions through which the politics of a relatively unfamiliar region of the world has developed. Such a biography allows the reader to gain a vivid sense of the issues and passions that motivated people, both intellectuals and laborers, in the 1920s, the 1960s, and the 1990s. And it gives a bit of insight into the complicated question of how talented individuals develop into impactful, committed, and dedicated leaders and thinkers.

(Here is a collection of snippets from Karunanidhi’s films; link.)

Social science and policy

One of the important reasons that we value scientific knowledge is the possibility that it will allow us to intervene in the world to solve problems that we care about. Good climate science allows us to have high confidence in the causes of global climate change; and it also provides a sound basis for policy interventions to help to mitigate the pace of climate change. Good cellular biology permits a better understanding of autoimmune disease; and it also suggests avenues for prevention and treatment. There is thus an important component of pragmatism in our esteem for scientific knowledge.

In the social sciences we would like to assume that something similar is possible. If we have good sociological understanding of the causes of teen pregnancy or gang violence, perhaps that understanding will also provide a basis for designing effective interventions that reduce the incidence of the social problems we study. In other words, perhaps we can count on social science to provide a valuable and effective basis for the design of social policy.

The philosophy of social science that I’ve developed in this blog and in New Directions in the Philosophy of Social Science raises some challenges to that hope. The social world, it is argued here, is contingent, heterogeneous, plastic, and conjunctural. In the words of Roy Bhaskar, social causation takes place in an open system in which we cannot arrive at confident predictions of particular social outcomes. In place of general theories and comprehensive social laws, we are best advised to seek out the particular causal mechanisms that underlie various social outcomes of interest. And it is difficult to make predictions in particular circumstances even when we have an idea of some of the operative social mechanisms, because of the perennial possibility of contingent interventions by additional factors.

So the hard question is this: to what extent is it at all possible for social science research to provide a confident basis for the design and implementation of social policies to address important social problems?

One approach that does not seem promising is the methodology of randomized controlled trials (RCTs). The logical shortcomings of this approach when applied to social phenomena have been highlighted by Nancy Cartwright and Jeremy Hardie in Evidence-Based Policy: A Practical Guide to Doing It Better, and I discuss these problems in an earlier post (link). So we should not expect to be able to isolate causal mechanisms (for example, “provide after-school tutoring”) and use the method of RCT to demonstrate the efficacy of such a mechanism in reducing a given social harm (say, “high school absenteeism”).
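To make the worry concrete, here is a minimal simulation sketch in Python, using entirely invented numbers, of one part of the Cartwright-Hardie argument: an RCT can establish an average treatment effect in its trial population, and yet the same intervention can accomplish very little in a target population where a background support factor is distributed differently. The intervention, the support factor, and the effect sizes below are hypothetical assumptions for illustration, not empirical claims.

    import random

    random.seed(0)

    def simulate_rct(n, p_support, effect_with_support=0.30, effect_without=0.02):
        # Hypothetical model: after-school tutoring reduces absenteeism only when a
        # background support factor (say, stable transportation) is present.
        treated, control = [], []
        for _ in range(n):
            has_support = random.random() < p_support
            effect = effect_with_support if has_support else effect_without
            treated.append(effect + random.gauss(0, 0.05))   # outcome improvement, treated student
            control.append(random.gauss(0, 0.05))            # outcome improvement, control student
        return sum(treated) / n - sum(control) / n           # average treatment effect

    # Trial population: the support factor is common (80 percent of students).
    print("ATE in trial population:  %.3f" % simulate_rct(5000, p_support=0.8))
    # Target population: the support factor is rare (10 percent of students).
    print("ATE in target population: %.3f" % simulate_rct(5000, p_support=0.1))

The trial is internally valid in the first population, but its result does not by itself tell us what the intervention will do elsewhere; the gap between “it worked there” and “it will work here” is exactly the problem Cartwright and Hardie press.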

The problem of establishing a strong relationship between theory and policy has been considered in several areas of social research. One such study is in the field of international relations. Stephen Walt’s 2005 article, “The relationship between theory and policy in international relations”, is an extended treatment of the topic (link). Here is the abstract to Walt’s paper:

Policy makers pay relatively little attention to the vast theoretical literature in IR, and many scholars seem uninterested in doing policy-relevant work. These tendencies are unfortunate because theory is an essential tool of statecraft. Many policy debates ultimately rest on competing theoretical visions, and relying on a false or flawed theory can lead to major foreign policy disasters. Theory remains essential for diagnosing events, explaining their causes, prescribing responses, and evaluating the impact of different policies. Unfortunately, the norms and incentives that currently dominate academia discourage many scholars from doing useful theoretical work in IR. The gap between theory and policy can be narrowed only if the academic community begins to place greater value on policy-relevant theoretical work.

Fundamentally the article raises the question of whether there is a useful relationship between international relations theories and the practice of diplomacy and foreign policy. Can IR theory guide the construction of a successful foreign policy?

Walt believes that theory can assist policy analysis in four important ways: diagnosis, prediction, prescription, and evaluation. Unfortunately, none of the examples that he offers provide much confidence in any of these capabilities. Diagnosis comes down to classification; but given that the idea of a social kind is suspect, we do not add much to our knowledge by classifying a given regime as “fascist”, because we know that there is substantial variation across the group of fascist states. Prediction (as Gandhi said about Western civilization) would be nice; but it is almost never attainable in real social situations. Prescription requires a sound knowledge of the likely causal dynamics of a situation; but the open nature of social reality implies that we cannot have such knowledge in any comprehensive way. And evaluation is subject to similar problems. Walt assumes we can evaluate the success of a policy in a quasi-experimental way — observe the cases where the intervention took place and measure the frequency of the desired outcome. But this quasi-experimental method is suspect for the same reasons.

An important drawback of Walt’s treatment is the fairly traditional view he takes of the content of scientific knowledge. There is an underlying presupposition of a broadly Humean view of cause and effect:

Policy makers can also rely on empirical laws. An empirical law is an observed correspondence between two or more phenomena that systematic inquiry has shown to be reliable. (25)

But in fact, there are very few useful “empirical laws” in the social realm that might serve as a basis for simple cause-and-effect policy design.

At present, then, there is still a significant gap between an empirically supported social theory and a well-designed social intervention. Unfortunately, social causation is rarely as simple and regular as the empiricist framework presupposes. This is disappointing, because it is certainly true that we urgently need guidance in designing strategies for solving important social problems. (Here is an earlier post that offers a somewhat more positive assessment of the relevance of theory to policy; link.)

Sustaining a philosophy research community


The European Network for Philosophy of Social Science (ENPOSS) completed its annual conference in Krakow last week. It was a stimulating and productive success, with scholars from many countries and at every level of seniority. ENPOSS is one of the most dynamic networks where genuinely excellent work in philosophy of social science is taking place (link). Philosophers from Germany, Poland, Norway, Spain, France, the Netherlands, the UK, and other countries came together for three intensive days of panels and discussions. The discussions made it clear that this is an integrated research community with a common understanding of a number of research problems and a common vocabulary. There is a sense of continuing progress on key issues — micro-macro ontology, social mechanisms, naturalism, intentionality, institutional imperatives, fact-value issues, computational social science, and intersections of disciplinary perspectives, to name several.

Particular highlights were keynote addresses by Dan Hausman (“Social scientific naturalism revisited”), Anna Alexandrova (“Are social scientists experts on values?”), and Bartosz Brozek (“The architecture of the legal mind”). There were also lively book discussions of several current books in the philosophy of social science — Chris Mantzavinos’s Explanatory Pluralism, Lukasz Hardt’s Economics Without Laws: Towards a New Philosophy of Economics, and my own New Directions in the Philosophy of Social Science. Thanks to Eleonora Montuschi, Gianluca Manzo, and Federica Russo for excellent and stimulating discussion of my book.

It is interesting to observe that the supposed divide between analytic and Continental philosophy is not in evidence in this network of scholars. These are philosophers whose Ph.D. training took place all over Europe — Italy, Belgium, Finland, Germany, France, Spain, the UK … They are European philosophers. But their philosophical ideas do not fall within the stereotyped boundaries of “Continental philosophy.” The philosophical vocabulary in evidence is familiar from analytic philosophy. At the same time, this is not simply an extension of Anglo-American philosophy. The style of reasoning and analysis is not narrowly restricted to the paradigms reflected by Russell, Dummett, or Parfit. It is, perhaps, a new style of European philosophy. There is a broad commitment to engaging with the logic and content of particular social sciences at a level that would also make sense to practitioners of sociology, political science, or economics. And there is a striking breadth to the substantive problems of social life that these philosophers are attempting to better understand. The overall impression is of a research community with the features of what Imre Lakatos called a “progressive research programme” in Criticism and the Growth of Knowledge — one in which problems are being addressed and treated in ways that shed genuinely new light on them. Progress is taking place.

There were two large topic areas that, perhaps surprisingly, did not find expression in the ENPOSS program. One is the field of critical realism and the ideas about social explanation advanced by Roy Bhaskar and Margaret Archer. The second is the theory of assemblages put forward by Deleuze and subsequently elaborated by DeLanda and Latour. These topic areas have drawn a fair amount of attention from social theorists and philosophers in other parts of the philosophy of social science research community. So it is interesting to realize that they were largely invisible in Krakow. This suggests that this particular network of scholars is simply not much influenced by these ideas.

Part of the dynamism of the ENPOSS conference, both in Krakow and in prior years, is the broad conviction that these issues matter a great deal and that the philosophy of social science has real importance. Participants seem to share the idea that the processes of social change and periodic crisis that we face in the contemporary world are both novel and potentially harmful to human flourishing, and that the social sciences need to develop better methods, ontologies, and theories if they are to help us understand and improve the social world in which we live. So the philosophy of social science is not just a contribution to a minor area within the grand discipline of philosophy; more importantly, it is a substantial and valuable contribution to our collective ability to bring a scientific perspective to social problems and social progress.

Next year’s meeting will take place in early September at the University of Hannover and will be a joint meeting with the US-based Philosophy of Social Science Roundtable. The call for papers will be posted on the ENPOSS website.

Time for a critical-realist epistemology

The critical realism network in North America is currently convened in Montreal for a three-day intensive workshop (link). In attendance are many of the sociologists and philosophers who have an active interest in critical realism, and the talks are of genuine interest. A session this morning on pragmatist threads of potential interest to critical realists, including Mead, Abbott, and Elias, was highly stimulating. And there are 29 sessions altogether — roughly 85 papers. This is an amazing wealth of sociological research.

Perhaps a third of the papers are presentations of original sociological research from a CR point of view. This is very encouraging because it demonstrates that CR is moving beyond the philosophy of social science to the concrete practice of social science. Researchers are working hard to develop research methods in the context of CR that permit concrete investigation of particular social and historical phenomena. And this implies as well that there is a growing body of thinking about methodology within the field of CR.

CR theorists began with ontology, and a great deal of the existing literature takes the form of theoretical expositions of various ontological theses. And this was deliberate; following Bhaskar, theorists have argued that we need better ontology before science can progress. (This seems particularly true in the social realm; link.) So ontology needs to come first, then epistemology.

I believe the time has come when CR needs to give more explicit and extended attention to epistemology.

What is epistemology? It is an organized effort to answer the question: what is (scientific) knowledge? It attempts to provide a justified theory of empirical justification. Epistemology is an attempt to articulate the desired relationship between evidence and assertion; more specifically, it is an attempt to uncover the nuances of the domain of “evidence” across the realm of social research. Most fundamentally, it is an attempt to articulate how the practices of science are “truth-enhancing”: a given set of epistemic practices (methodologies) is hoped to result in a higher level of veridicality over time.

Like a left-handed quarterback, CR labors under a disadvantage in formulating an epistemology because of its blind side. In the case of CR, the blind side is the movement’s visceral rejection of positivism. CR theorists are so strongly motivated to reject all elements of positivism that they are disposed to avoid positions they actually need to take. For example, the following two statements sound very similar:

A: “Sociological claims must be evaluated on the basis of objective empirical evidence.”

B: “Sociological claims need to be confirmed or falsified.”

Statement B expresses the positivist idea of confirmation and falsification that CR rejects; and because A sounds so similar, the CR theorist is inclined to reject A as well as B. But this is a philosophical misstep caused by fear of the blind side. A is actually a perfectly valid requirement of epistemological rationality.

So what do we need from a developed epistemology for CR? Essentially we need four things.

First, we need an explicit commitment to empirical evaluation.

Second, we need a nuanced discussion of the complications involved in identifying “empirical evidence” in social research; for example, the impossibility of theory-independent or perspective-independent social data, the constructive nature of most historical and social observation, and the problem of selectivity in the collection of evidence.

Third, we need a discussion of the modes of inference — deductive, inductive, statistical, causal, and Bayesian — on the basis of which social scientists can arrive at an estimate of the likelihood of a statement given a set of evidence statements (a minimal Bayesian illustration is sketched below).

Finally, our CR epistemology needs to give an appropriate discussion of the fallibility of all scientific research.
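To illustrate the third point, here is a minimal sketch, in Python with purely invented numbers, of how a Bayesian update combines a prior degree of belief in a sociological claim with two pieces of evidence. The claim, the prior, and the likelihoods are hypothetical placeholders for judgments a researcher would actually have to defend.

    def bayes_update(prior, p_e_given_h, p_e_given_not_h):
        # Return P(H | E) given P(H), P(E | H), and P(E | not-H).
        numerator = p_e_given_h * prior
        return numerator / (numerator + p_e_given_not_h * (1.0 - prior))

    # Hypothetical claim H: "tenant mobilization was the chief cause of the decline
    # of landlord power in region X."  All numbers are invented for illustration.
    p_h = 0.30                              # prior credence in H
    p_h = bayes_update(p_h, 0.80, 0.40)     # evidence 1: archival records of tenant actions
    p_h = bayes_update(p_h, 0.70, 0.50)     # evidence 2: oral-history interviews
    print("posterior credence in H: %.2f" % p_h)   # roughly 0.55 with these numbers

The point is not that sociologists should attach precise numbers to their evidence; it is that a CR epistemology needs to say explicitly how evidence of different kinds is supposed to raise or lower the credibility of a claim.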

The epistemological frame that I currently favor is the coherence methodology described by philosophers like Quine and Goodman. The social sciences constitute a web of belief, and provisional conclusions in one area may serve to establish a method or valuation for findings in another area of the web. Both ontological positions and epistemological maxims may require adjustment in light of future empirical and theoretical findings. Rawls’s conception of reflective equilibrium illustrates this epistemology in the moral field. This approach has an unexpected affinity with CR, because there is an emerging interest within CR in the pragmatist philosophy from which the coherence approach derives.

Epistemology allows us to place various specific methodological approaches into context. So we can locate the method of process tracing within the context of justification, and therefore within epistemology. It also validates the idea of methodological pluralism: there are multiple avenues through which researchers can create evidence with which to support and evaluate a variety of sociological claims.

Critical realism seeks to significantly influence the practice and content of social science theory and research. In order to do this it will need to be able to state with confidence the commitments made by CR researchers to empirical standards and evidence-based findings. This will help CR to fulfill the promise of discovering some of the real structures and processes of the social world based on publicly accessible standards of theory discovery and acceptance.