Analytic philosophy of meaning and smart AI bots

One of the impulses of the early exponents of analytic philosophy was to provide strict logical simplifications of hitherto vague or indefinite ideas. There was a strong priority placed on being clear about the meaning of philosophical concepts, and more generally, about “meaning” in language simpliciter.

Here is the opening paragraph of Rudolf Carnap’s The Logical Structure of the World and Pseudoproblems in Philosophy:

The present investigations aim to establish a “constructional system”, that is, an epistemic-logical system of objects or concepts. The word “object” is here always used in its widest sense, namely, for anything about which a statement can be made. Thus, among objects we count not only things, but also properties and classes, relations in extension and intension, states and events, what is actual as well as what is not. Unlike other conceptual systems, a constructional system undertakes more than the division of concepts into various kinds and the investigation of the differences and mutual relations between these kinds. In addition, it attempts a step-by-step derivation or “construction” of all concepts from certain fundamental concepts, so that a genealogy of concepts results in which each one has its definite place. It is the main thesis of construction theory that all concepts can in this way be derived from a few fundamental concepts, and it is in this respect that it differs from most other ontologies. (Carnap 1928 [1967]: 5)

But the idea of absolute, fundamental clarity about the meanings of words and concepts has proven to be unattainable. Perhaps more striking, it is ill-conceived. Meanings are not molecules that can be analyzed into their unchanging components. Consider Wittgenstein’s critique of the project of providing a “constructional system” of the meaning of language in the Philosophical Investigations:

12. It is like looking into the cabin of a locomotive. There are handles there, all looking more or less alike. (This stands to reason, since they are all supposed to be handled.) But one is the handle of a crank, which can be moved continuously (it regulates the opening of a valve); another is the handle of a switch, which has only two operative positions: it is either off or on; a third is the handle of a brakelever, the harder one pulls on it, the harder the braking; a fourth, the handle of a pump: it has an effect only so long as it is moved to and fro.

Here Wittgenstein’s point, roughly, is that it is a profound philosophical error to expect a single answer to the question, how does language work? His metaphor of the locomotive cabin suggests that language works in many ways — to describe, to denote, to command, to praise, or to wail and moan; and it is an error to imagine that all of this diverse set of uses should be reducible to a single thing.

Or consider Paul Grice’s theory of meaning in terms of intentions and conversational implicatures. His theory of meaning considers language in use: what is the point of an utterance, and what presuppositions does it make? If a host says to a late-staying dinner guest, “You have a long drive home”, he or she might be understood to be making a Google-maps kind of factual statement about the distance between “your current location” and “home”. But the astute listener will hear a different message: “It’s late, I’m sleepy, there’s a lot of cleaning up to do, it’s time to call it an evening.” There is an implicature in the utterance that depends upon the context, the normal rules of courtesy (“Don’t ask your guests to leave peremptorily!”), and the logic of indirection. The meaning of the utterance is: “I’m asking you courteously to leave.” Here is a nice description of Grice’s theory of “meaning as use” in Richard Grandy and Richard Warner’s article on Grice in the Stanford Encyclopedia of Philosophy (link).

This approach to meaning invites a distinction between “literal” meaning and “figurative” or contextual meaning, and it suggests that algorithmic translation is unlikely to succeed for many important purposes. On Grice’s approach, we must also understand the “subtext”.

Hilary Putnam confronted the question of linguistic meaning (semantics) directly in 1975 in his essay “The Meaning of ‘Meaning’” (link). Putnam asks whether “meaning” is a feature of the psychological state of an individual user of language (whether meanings are “mental” entities), and he argues that it is not. Rather, meanings depend upon a “social division of labor” in which the background knowledge required to explicate and apply a term is distributed over a group of experts and quasi-experts.

A socio-linguistic hypothesis. The last two examples depend upon a fact about language that seems, surprisingly, never to have been pointed out: that there is division of linguistic labor. We could hardly use such words as “elm” and “aluminum” if no one possessed a way of recognizing elm trees and aluminum metal; but not everyone to whom the distinction is important has to be able to make the distinction. (144)

Putnam links his argument to the philosophical concepts of sense and reference. The reference (or extension) of a term is the set of objects to which the term refers; and the sense of the term is the set of mental features accessible to the individual that permits him or her to identify the referent of the term. But Putnam offers arguments about hypothetical situations that are designed to show that two individuals may be in identical psychological states with respect to a concept X, but may nonetheless identify different referents or extensions of X. “We claim that it is possible for two speakers to be in exactly the same psychological state (in the narrow sense), even though the extension of the term A in the idiolect of the one is different from the extension of the term A in the idiolect of the other. Extension is not determined by psychological state” (139).

A second idea that Putnam develops here is independent from this point about the socially distributed knowledge needed to identify the extension of a concept. This is his suggestion that we might try to understand the meaning of a noun as being the “stereotype” that competent language users have about that kind of thing.

In ordinary parlance a “stereotype” is a conventional (frequently malicious) idea (which may be wildly inaccurate) of what an X looks like or acts like or is. Obviously, I am trading on some features of the ordinary parlance. I am not concerned with malicious stereotypes (save where the language itself is malicious); but I am concerned with conventional ideas, which may be inaccurate. I am suggesting that just such a conventional idea is associated with “tiger,” with “gold,” etc., and, moreover, that this is the sole element of truth in the “concept” theory. (169)

Here we might summarize the idea of a thing-stereotype as a cluster of beliefs about the thing that permits conversation to get started. “I’m going to tell you about glooples…” “I’m sorry, what do you mean by ‘gloople’?” “You know, that powdery stuff that you put in rice to make it turn yellow and give it a citrous taste.” Now we have an idea of what we’re talking about; a gloople is a bit of ground saffron. But of course this particular ensemble of features might characterize several different spices — cumin as well as saffron, say — in which case we do not actually know what is meant by “gloople” for the speaker. This is true; there is room for ambiguity, misunderstanding, and misidentification in the kitchen — but we have a place to start the conversation about the gloople needed for making the evening’s curry. And, as Putnam emphasizes in this essay and many other places, we are aided by the fact that there are “natural kinds” in the world — kinds of thing that share a fixed inner nature and that can be reidentified in different settings. This is where Putnam’s realism intersects with his theory of meaning.

What is interesting about this idea about the meaning of a concept term is that it makes the meaning of a concept or term inherently incomplete and corrigible. We do not offer “necessary and sufficient conditions” for applying the concept of gloople, and we are open to discussion about whether the characteristic taste is really “citrous” or rather more like vinegar. This line of thought — a more pragmatic approach to concept meaning — seems more realistic and more true to actual communicative practice than the sparse logical neatness of the first generation of logical positivists and analytic philosophers.

Here is how Putnam summarizes his analysis in “The Meaning of ‘Meaning’”:

Briefly, my proposal is to define “meaning” not by picking out an object which will be identified with the meaning (although that might be done in the usual set-theoretic style if one insists), but by specifying a normal form (or, rather, a type of normal form) for the description of meaning. If we know what a “normal form description” of the meaning of a word should be, then, as far as I am concerned, we know what meaning is in any scientifically interesting sense.

My proposal is that the normal form description of the meaning of a word should be a finite sequence, or “vector,” whose components should certainly include the following (it might be desirable to have other types of components as well): (1) the syntactic markers that apply to the word, e.g., “noun”; (2) the semantic markers that apply to the word, e.g., “animal,” “period of time”; (3) a description of the additional features of the stereotype, if any; (4) a description of the extension. (190)
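Putnam’s four-component “vector” reads almost like a data structure, and it can be rendered as a short sketch (a loose illustration only; the Python rendering and the field names are mine, though the sample entries for “water” follow Putnam’s own worked example in the essay):

```python
from dataclasses import dataclass
from typing import List

@dataclass
class MeaningVector:
    """A Putnam-style 'normal form description' of a word's meaning."""
    syntactic_markers: List[str]  # (1) e.g., "noun"
    semantic_markers: List[str]   # (2) e.g., "animal", "natural kind"
    stereotype: List[str]         # (3) conventional features competent speakers share
    extension: str                # (4) a description of what the word refers to

# Putnam's worked example for "water", roughly as given in the essay:
water = MeaningVector(
    syntactic_markers=["mass noun", "concrete"],
    semantic_markers=["natural kind", "liquid"],
    stereotype=["colorless", "transparent", "tasteless", "thirst-quenching"],
    extension="H2O (give or take impurities)",
)
```

On this rendering the stereotype component is explicitly open-ended and corrigible, which is just Putnam’s point: two speakers can share components (1) through (3) while the extension (4) is fixed by the world and the community of experts, not by their individual psychological states.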

Rereading this essay after quite a few years, what is striking is that it seems to offer three rather different theories of meaning: the “social division of labor” theory, the stereotype theory, and the normal-form “vector” theory. Are they consistent? Or are they alternative approaches that philosophers and linguists can take in their efforts to understand ordinary human use of language?

There is a great deal of diversity of approach, then, in the ways that analytical philosophers have undertaken to explicate the question of the meaning of language. And the topic — perhaps unlike many in philosophy — has some very important implications and applications. In particular, there is an intersection between “artificial general intelligence” research and the philosophy of language: If we want our personal assistant bots to be able to engage in extended and informative conversations with us, AI designers will need to have usable theories of the representation of meaning. And those representations cannot be wholly sequential (Markov chain) systems. If Alexa is to be a good conversationalist, she will need to be able to decode complex paragraphs like this, and create a meaningful “to-do” list of topics that need to be addressed in her reply.

Alexa, I was thinking about my trip to Milan last January, where I left my umbrella. Will I be going back to Milan soon? Will it rain this afternoon? Have I been to Lombardy in the past year? Do I owe my hosts at the university a follow-up letter on the discussions we had? Did I think I might encounter rain in my travels to Europe early in the year?

Alexa will have a tough time with this barrage of thoughts. She can handle the question about today’s weather. But how should her algorithms handle the question about what I thought about the possibility of rain during my travels last January? I had mentioned forgetting my umbrella in Milan; that implies I had taken an umbrella; and that implies that I thought there was a possibility of rain. But Alexa is not good at working out background assumptions and logical relationships between sentences. Or in Gricean terms, Alexa doesn’t get conversational implicatures.
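The limitation can be made vivid with a toy bigram (“Markov chain”) generator, a minimal sketch of my own rather than anything Alexa actually runs. The model records only which word tends to follow which; there is no state anywhere in it that could represent a presupposition such as “carrying an umbrella implies expecting rain”.

```python
import random
from collections import defaultdict

def train_bigrams(text):
    """Map each word to the list of words observed to follow it."""
    words = text.split()
    follows = defaultdict(list)
    for w1, w2 in zip(words, words[1:]):
        follows[w1].append(w2)
    return follows

def generate(follows, start, n=8, seed=0):
    """Emit up to n words by repeatedly sampling a successor of the last word."""
    rng = random.Random(seed)
    out = [start]
    while len(out) < n:
        successors = follows.get(out[-1])
        if not successors:
            break
        out.append(rng.choice(successors))
    return " ".join(out)

corpus = ("I left my umbrella in Milan because I thought "
          "it might rain in Milan in January")
model = train_bigrams(corpus)
print(generate(model, "I"))
# The output chains plausible adjacent words, but nothing in `model` encodes
# that leaving an umbrella behind presupposes having brought one.
```

However large the training corpus, the data structure is still just a table of successor frequencies; the background inference the paragraph above requires lives at a different level entirely.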

Luca Gasparri and Diego Marconi provide a very interesting article on “Word Meaning” in the Stanford Encyclopedia of Philosophy (link) that allows the reader to see where theories of meaning have gone in philosophy, linguistics, and cognitive science since the 1970s. For example, linguists have developed a compositional theory of word meaning:

The basic idea of the Natural Semantic Metalanguage approach (henceforth, NSM; Wierzbicka 1972, 1996; Goddard & Wierzbicka 2002) is that word meaning is best described through the combination of a small set of elementary conceptual particles, known as semantic primes. Semantic primes are primitive (i.e., not decomposable into further conceptual parts), innate (i.e., not learned), and universal (i.e., explicitly lexicalized in all natural languages, whether in the form of a word, a morpheme, a phraseme, and so forth). According to NSM, the meaning of any word in any natural language can be defined by appropriately combining these fundamental conceptual particles. (36)

This approach is strikingly similar to the one described in the Carnap passage above, in that it attempts to decompose the meanings of complex concepts into “atoms” of meaning.
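The decompositional idea can be caricatured in a few lines (the prime inventory and the “definitions” below are my own drastic simplifications for illustration, not Wierzbicka’s actual NSM explications):

```python
# A mock inventory of NSM-style "semantic primes" (heavily abridged).
PRIMES = {"someone", "something", "do", "happen", "good", "bad",
          "want", "know", "because", "before", "after", "not", "live", "die"}

# Word meanings rendered as combinations of primes (grossly simplified).
definitions = {
    "kill": ["someone", "do", "something", "because", "someone", "die"],
    "give": ["someone", "do", "good", "something", "want"],
}

def uses_only_primes(word: str) -> bool:
    """Check that a definition bottoms out in the prime inventory."""
    return all(part in PRIMES for part in definitions[word])

assert all(uses_only_primes(w) for w in definitions)
```

The hard questions, of course, are whether such an inventory is genuinely universal and whether every meaning really bottoms out in it; that is precisely where the Wittgensteinian and Putnamian doubts rehearsed above apply.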

Some research on language in computer science and artificial intelligence has been striking and surprising. For example, OpenAI’s GPT-3 natural language text generator, trained through massive machine learning on billions of pages of text, creates text that appears to be conversational and intelligent — without a milligram of meaning (link). Here is a short (two-paragraph) piece of philosophy text about freedom of speech, generated purely mechanically by Arram Sabeti using GPT-3 as a demonstration of its capabilities (link):

There’s a concept that comes up often in debates about free speech called “shouting fire in a crowded theater”. The basic idea is that you can’t just say whatever you want whenever you want – especially if it’s likely to cause panic. And the reason for this is obvious: panic leads to violence, and violence hurts people. If you are causing panic, then your right to free speech ends where the other person’s right to not get hurt begins.
But this isn’t really true. If shouting fire in a crowded theater were illegal, then there wouldn’t be any actual fires in theaters, would there? How could there be? If everyone knew that shouting fire would get them thrown in jail, then they’d all keep quiet when they smelled smoke – and thus there would never be any fires at all. The only way someone could shout fire is if they were lying – but lying is also illegal. So if shouting fire in a crowded theater were illegal, then theaters would never burn down, and no one would ever die from a fire.

The way I see it, the phrase “shouting fire in a crowded theater” should be replaced with “yelling ‘there’s an angry mob coming with torches and pitchforks’ in a theater”. This doesn’t sound quite as catchy, but it has the advantage of being true. And this matters because the whole concept of free speech is based on people getting the facts right. If you’re going to shout fire in a crowded theater, then you need to actually have seen a fire – not just heard about one from someone else. You need to have checked that there really is a fire before you warn everyone else that there’s a fire. Otherwise you’re just causing panic for no reason – and that’s not free speech, that’s just yelling “boo” in a crowded theater.

The passage is no argument at all — no guiding intelligence, and obvious non-sequiturs from one sentence to another. The first four sentences make sense. But then the next several sentences are nonsensical and illogical. In sentence 5 — what “isn’t really true”? Sentence 6 is flatly illogical. In fact, it is as illogical as Trump’s insistence that if we had less testing then there would be less COVID in the United States. And the statement, “… but lying is also illegal” — no, it’s not. The bot is misinformed about the law. Or more precisely: these are just words and phrases strung together algorithmically with no logical construction or understanding guiding the statements. And the second paragraph has the same features. It is kind of entertaining to see the logical flaws of the text; but maybe there is an important underlying discovery as well: machine learning cannot create or discover rules of logic that allow for argument and deduction. The passage is analogous to Noam Chomsky’s example of a syntactically correct but semantically meaningless sentence, “Colorless green ideas sleep furiously”. This GPT-3 text is syntactically correct from phrase to phrase, but lacks the conceptual or logical coherence of a meaningful set of thoughts. And it seems pretty clear that the underlying approach is a dead end when it comes to the problem of natural language comprehension.

A big-data contribution to the history of philosophy

The history of philosophy is generally written by subject experts who explore and follow a tradition of thought about which figures and topics were “pivotal” and thereby created an ongoing research field. This is illustrated, for example, in Stephen Schwartz’s A Brief History of Analytic Philosophy: From Russell to Rawls. Consider the history of Anglophone philosophy since 1880 as told by a standard narrative in the history of philosophy of this period. One important component was “logicism” — the idea that the truths of mathematics can be derived from purely logical axioms using symbolic logic. Peano and Frege formulated questions about the foundations of arithmetic; Russell and Whitehead sought to carry out this program of “logicism”; and Gödel proved the impossibility of carrying out this program: any set of axioms rich enough to derive theorems of arithmetic is either incomplete or inconsistent. This narrative serves to connect the dots in this particular map of philosophical development. We might want to add details like the influence of logicism on Wittgenstein and the impact of the Tractatus Logico-Philosophicus, but the map is developed by tracing contacts from one philosopher to another, identifying influences, and aggregating groups of topics and philosophers into “schools”.

Brian Weatherson, a philosopher at the University of Michigan, has a different idea about how we might proceed in mapping the development of philosophy over the past century (Brian Weatherson, A History of Philosophy Journals: Volume 1: Evidence from Topic Modeling, 1876-2013. Published by author on GitHub, 2020; link). Professional philosophy in the past century has been primarily expressed in the pages of academic journals. So perhaps we can use a “big data” approach to the problem of discovering and tracking the emergence of topics and fields within philosophy by analyzing the frequency and timing of topics and concepts as they appear in academic philosophy journals.

Weatherson pursues this idea systematically. He has downloaded from JSTOR the full contents of twelve leading journals in anglophone philosophy for the period 1876-2013, producing a database of some 32,000 articles and lists of all words appearing in each article (as well as their frequencies). Using the big data technique called “topic modeling” he has arrived at 90 topics (clusters of terms) that recur in these articles. Here is a quick description of topic modeling.

Topic modeling is a type of statistical modeling for discovering the abstract “topics” that occur in a collection of documents. Latent Dirichlet Allocation (LDA) is an example of topic model and is used to classify text in a document to a particular topic. It builds a topic per document model and words per topic model, modeled as Dirichlet distributions. (link)

Here is Weatherson’s description of topic modeling:

An LDA model takes the distribution of words in articles and comes up with a probabilistic assignment of each paper to one of a number of topics. The number of topics has to be set manually, and after some experimentation it seemed that the best results came from dividing the articles up into 90 topics. And a lot of this book discusses the characteristics of these 90 topics. But to give you a more accessible sense of what the data looks like, I’ll start with a graph that groups those topics together into familiar contemporary philosophical subdisciplines, and displays their distributions in the 20th and 21st century journals. (Weatherson, introduction)

Now we are ready to do some history. Weatherson applies the algorithms of LDA topic modeling to this database of journal articles and examines the results. It is important to emphasize that this method is not guided by the intuitions or background knowledge of the researcher; rather, it algorithmically groups documents into clusters based on the frequencies of various words appearing in the documents. Weatherson also generates a short list of keywords for each topic: words of a reasonable frequency in which the probability of the word appearing in articles in the topic is significantly greater than the probability of it occurring in a random article. And he further groups the 90 topics into a dozen familiar “categories” of philosophy (History of Philosophy, Idealism, Ethics, Philosophy of Science, etc.). This exercise of assigning topics to categories requires judgment and expertise on Weatherson’s part; it is not algorithmic. Likewise, the assignment of names to the 90 topics requires expertise and judgment. From the point of view of the LDA model, the topics could be given entirely meaningless names: T1, T2, …, T90.
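The keyword criterion just described, words of reasonable frequency that are markedly more probable in a topic’s articles than in a random article, is easy to sketch (a simplified illustration of the general idea, not Weatherson’s actual procedure; the toy documents and the 2.0 threshold are my own):

```python
from collections import Counter

def topic_keywords(topic_docs, all_docs, ratio=2.0, min_count=2):
    """Return words whose relative frequency in the topic's documents is at
    least `ratio` times their relative frequency in the whole corpus."""
    topic_counts = Counter(w for doc in topic_docs for w in doc.split())
    corpus_counts = Counter(w for doc in all_docs for w in doc.split())
    topic_total = sum(topic_counts.values())
    corpus_total = sum(corpus_counts.values())
    keywords = []
    for word, count in topic_counts.items():
        if count < min_count:          # screen out rare words
            continue
        p_topic = count / topic_total
        p_corpus = corpus_counts[word] / corpus_total
        if p_topic >= ratio * p_corpus:
            keywords.append(word)
    return sorted(keywords)

# Toy corpus: two "philosophy of science" abstracts and two on other topics.
science = ["theory observation confirmation theory evidence",
           "explanation theory observation causal evidence"]
others = ["justice rights duty obligation virtue",
          "language meaning reference sense meaning"]
print(topic_keywords(science, science + others))
# → ['evidence', 'observation', 'theory']
```

In the real model the topic assignments themselves come out of the LDA fit rather than being given in advance, but the distinctiveness test for keywords has this same comparative shape.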

Now every article has been assigned to a topic and a category, and every topic has a set of keywords that are algorithmically determined. Weatherson then goes back and examines the frequency of each topic and category over time, presented as graphs of the frequencies of each category in the aggregate (including all twelve journals) and singly (for each journal). The graphs look like this:

We can look at these graphs as measures of the rise and fall of prevalence of various fields of philosophy research in the Anglophone academic world over the past century. Most striking is the contrast between idealism (precipitous decline since 1925) and ethics (steady increase in frequency since about the same time), but each category shows some interesting characteristics.

Now consider the disaggregation of one topic over the twelve journals. Weatherson presents the results of this question for all ninety topics. Here is the set of graphs for the topic “Methodology of Science”:

All the journals — including Ethics and Mind — have articles classified under the topic of “Methodology of Science”. For most journals the topic declines in frequency from roughly the 1950s to 2013. Specialty journals in the philosophy of science — BJPS and Philosophy of Science — show a generally higher frequency of “Methodology of Science” articles, but they too reveal a decline in frequency over that period. Does this suggest that the discipline of the philosophy of science declined in the second half of the twentieth century (not the impression most philosophers would have)? Or does it rather reflect the fact that the abstract level of analysis identified by the topic of “Methodology of Science” was replaced with more specific and concrete studies of certain areas of the sciences (biology, psychology, neuroscience, social science, chemistry)?

These results permit many other kinds of questions and discoveries. For example, in chapter 7 Weatherson distills the progression of topics across decades by listing the most popular five topics in each decade:

This table too presents intriguing patterns and interesting questions for further research. For example, from the 1930s through the 1980s a topic within the general field of the philosophy of science is in the list of the top five topics: methodology of science, verification, theories and realism. These topics fall off the list in the 1990s and 2000s. What does this imply — if anything — about the prominence or importance of the philosophy of science within Anglophone philosophy in the last several decades? Or as another example — idealism is the top-ranked topic from the 1890s through the 1940s, only disappearing from the list in the 1960s. This is surprising because the standard narrative would say that idealism was vanquished within philosophy in the 1930s. And another interesting example — ordinary language. Ordinary language is a topic on the top five list for every decade, and is the most popular topic from the 1950s through the present. And yet “ordinary language philosophy” would generally be thought to have arisen in the 1940s and declined permanently in the 1960s. Finally, topics in the field of ethics are scarce in these lists; “promises and imperatives” is the only clear example from the topics listed here, and this topic appears only in the 1960s and 1970s. That seems to imply that the fields of ethics and social-political philosophy were unimportant throughout this long sweep of time — hard to reconcile with the impetus given to substantive ethical theory and theory of justice in the 1960s and 1970s. For that matter, the original list of 90 topics identified by the topic-modeling algorithm is surprisingly sparse when it comes to topics in ethics and political philosophy: 2.16 Value, 2.25 Moral Conscience, 2.31 Social Contract Theory, 2.33 Promises and Imperatives, 2.41 War, 2.49 Virtues, 2.53 Liberal Democracy, 2.53 Duties, 2.65 Egalitarianism, 2.70 Medical Ethics and Freud, 2.83 Population Ethics, 2.90 Norms. Where is “Justice” in the corpus?

Above I described this project as a new approach to the history of philosophy (surely applicable as well to other fields such as art history, sociology, or literary criticism). But it seems clear that the modeling approach Weatherson pursues is not a replacement for other conceptions of intellectual history, but rather a highly valuable new source of data and questions that historians of philosophy will want to address. And in fact, this is how Weatherson treats the results of this work: not as replacement but rather as a supplement and a source of new puzzles for expert historians of philosophy.

(There is an interesting parallel between this use of big data and the use of Ngrams, the tool Google created to map the frequency of the occurrences of various words in books over the course of several centuries. Here are several earlier posts on the use of Ngrams: link, link. Gabriel Abend made use of this tool in his research on the history of business ethics in The Moral Background: An Inquiry into the History of Business Ethics. Here is a discussion of Abend’s work; link. The topic-modeling approach is substantially more sophisticated because it does not reduce to simple word frequencies over time. As such it is a very significant and innovative contribution to the emerging field of “digital humanities” (link).)

An existential philosophy of technology

Ours is a technological culture, at least in the quarter of the world’s countries that enjoy a high degree of economic affluence. Cell phones, computers, autonomous vehicles, CT scan machines, communications satellites, nuclear power reactors, artificial DNA, artificial intelligence bots, drone swarms, fiber optic data networks — we live in an environment that depends unavoidably upon complex, scientifically advanced, and mostly reliable artifacts that go well beyond the comprehension of most consumers and citizens. We often do not understand how they work. But more than that, we do not understand how they affect us in our social, personal, and philosophical lives. We are different kinds of persons than those who came before us, it often seems, because of the sea of technological capabilities in which we swim. We think about our lives differently, and we relate to the social world around us differently.

How can we begin investigating the question of how technology affects the conduct of a “good life”? Is there such a thing as an “existential” philosophy of technology — that is, having to do with the meaning of the lives of human beings in the concrete historical and technological circumstances in which we now find ourselves? This suggests that we need to consider a particularly deep question: in what ways does advanced technology facilitate the good human life, and in what ways does it frustrate and block the good human life? Does advanced technology facilitate and encourage the development of full human beings, and lives that are lived well, or does it interfere with these outcomes?

We are immediately drawn to a familiar philosophical question, What is a good life, lived well? This has been a central question for philosophers since Aristotle and Epicurus, Kant and Kierkegaard, Sartre and Camus. But let’s try to answer it in a paragraph. Let’s postulate that there are a handful of characteristics that are associated with a genuinely valuable human life. These might include the individual’s realization of a capacity for self-rule, creativity, compassion for others, reflectiveness, and an ability to grow and develop. This suggests that we start from the conception of a full life of freedom and development offered by Amartya Sen in Development as Freedom and the list of capabilities offered by Martha Nussbaum in Creating Capabilities: The Human Development Approach — capacities for life, health, imagination, emotions, practical reason, affiliation with others, and self-respect. And we might say that a “life lived well” is one in which the person has lived with integrity, justice, and compassion in developing and fulfilling his or her fundamental capacities. Finally, we might say that a society that enables the development of each of these capabilities in all its citizens is a good society.

Now look at the other end of the issue — what are some of the enhancements to human living that are enabled by modern technologies? There are several obvious candidates. One might say that technology facilitates learning and the acquisition of knowledge; technology can facilitate health (by finding cures and preventions of disease; and by enhancing nutrition, shelter, and other necessities of daily life); technology can facilitate human interaction (through the forms of communication and transportation enabled by modern technology); technology can enhance compassion by acquainting us with the vivid life experiences of others. So technology is sometimes life-enhancing and fulfilling of some of our most fundamental needs and capabilities.

How might Dostoevsky, Dos Passos, Baldwin, or Whitman have adjusted their life plans if confronted by our technological culture? We would hope they would not have been overwhelmed in their imagination and passion for discovering the human in the ordinary by an iPhone, a Twitter feed, and a web browser. We would like to suppose that their insights and talents would have survived and flourished, that poetry, philosophy, and literature would still have emerged, and that compassion and commitment would have found its place even in this alternative world.

But the negative side of technology for human wellbeing is also easy to find. We might say that technology encourages excessive materialism; it draws us away from real interactions with other human beings; it promotes a life consisting of a series of entertaining moments rather than meaningful interactions; and it squelches independence, creativity, and moral focus. So the omnipresence of technologies does not ensure that human beings will live well and fully, by the standards of Aristotle, Epicurus, or Montaigne.

In fact, there is a particularly bleak possibility concerning the lives that advanced everyday technology perhaps encourages: our technological culture encourages us to pursue lives that are primarily oriented towards material satisfaction, entertainment, and toys. This sounds a bit like a form of addiction or substance abuse. We might say that the ambient cultural imperatives of acquiring the latest iPhone, the fastest internet streaming connection, or a Tesla are created by the technological culture that we inhabit, and that these motivations are ultimately unworthy of a fully developed human life. Lucretius, Socrates, and Montaigne would scoff.

It is clear that technology has the power to distort our motives, goals and values. But perhaps with equal justice one might say that this is a life world created by capitalism rather than technology — a culture that encourages and elicits personal motivations that are “consumerist” and ultimately empty of real human value, a culture that depersonalizes social ties and trivializes human relationships based on trust, loyalty, love, or compassion. This is indeed the critique offered by the philosophers of the Frankfurt School — that capitalism depends upon a life world of crass materialism and impoverished social and personal values. And we can say with some exactness how capitalism distorts humanity and culture in its own image: through the machinations of advertising, strategic corporate communications, and the honoring of acquisitiveness and material wealth (link). It is good business to create an environment where people want more and more of the gadgets that technological capitalism can provide.

So what is a solution for people who worry about the shallowness and vapidity of this kind of technological materialism? We might say that an antidote to excessive materialism and technology fetishism is a fairly simple maxim that each person can strive to embrace: aim to identify and pursue the things that genuinely matter in life, not the glittering objects of short-term entertainment and satisfaction. Be temperate, reflective, and purposive in one’s life pursuits. Decide what values are of the greatest importance, and make use of technology to further those values rather than treating it as an end in itself. Let technology be a tool for creativity and commitment. Be selective and deliberate in one’s use of technology, rather than being the hapless consumer of the latest and shiniest. Create a life that matters.

How things seem and why

The idea that there is a stark separation between many of our ideas of the social world, on the one hand, and the realities of the social world in which we live, on the other, is an old one. We think “fairness and equality”, but what we get is exploitation, domination, and opportunity-capture. And there is a reasonable suspicion that this gap is in some sense intentional: interested parties have deceived us. In some sense it was the lesson of Plato’s allegory of the cave; it is the view that Marx expresses in his ideas of ideology and false consciousness; Gramsci’s theory of hegemony expresses the view; Nietzsche seems to have this separation in mind in much of his writing; and the Frankfurt School made much of it as well. The antidote to these forms of illusion, according to many of these theorists, is critique: careful, penetrating analysis and criticism of the presuppositions and claims of the ideological theory. (Here are several efforts within Understanding Society to engage in this kind of work; link, link, link.)

Peter Baehr’s recent book The Unmasking Style in Social Theory takes on this intellectual attitude of “unmasking” with a critical and generally skeptical eye. Baehr is an expert on the history of sociological theory who has written extensively on Hannah Arendt, Max Weber, and other fundamental contributors to contemporary social theory, and the book shows a deep knowledge of the history and intellectual traditions of social thought.

The book picks out one particular aspect of the sociological tradition: the “style” of unmasking that Baehr finds to be common in that history (and in current practice). So what does Baehr mean by a style?

A style, in the sense used here, is a distinctive way of talking and writing. It is epitomized by characteristic words, images, metaphors, concepts and, especially, techniques. I refer to these collectively as elements or ingredients. (2)

The elements of the unmasking style that he identifies include rhetorical tools including weaponization, reduction and positioning, inversion, deflation, hyperbole and excess, and exclusive claims of emancipation (chapter 1).

The idea of an intellectual style is innocuous enough — we can recognize the styles of analytic philosophy, contemporary literary criticism, and right-wing political commentary when we read or hear them. But there is a hidden question here: is there more than style to these traditions of thought? Are there methods of inquiry and reasoning, traditions of assessment of belief, and habits of scholarly interaction that underlie these various traditions? In much of Baehr’s book he ignores these questions when it comes to the content of Marxist analysis, feminist theory, or the sociology of race in America. The impression he gives is that it is all style and rhetoric, with no rigorous research and analysis to support the claims.

In fact the overarching impression given by the book is that Baehr believes that much “unmasking” is itself biased, unfair, and dogmatic. He writes:

Unmasking aspires to create this roused awareness. The kind of analysis it requires is never conveyed to the reader as an interpretation of events, hypothetical and contestable. Nor does it allow scientific refutation or principled disagreement. True as fiat, unmasking statements brook no contradiction. (3)

Such an approach to theory and politics is problematic for several reasons. Its authoritarianism is obvious. So is its exclusivity: I am right, you can shut up. Yet ongoing discord, unlike slapdash accusation, is a good thing. (131)

Part of Baehr’s suspicion of the “style” of unmasking seems to derive from an allergy to the language of post-modernism in the humanities and some areas of social theory:

To be sure, unmask is a common term in social theory and political and cultural criticism. Find it consorting with illusion, disguise, fiction, hieroglyph, critique, mystification, fantasy, reversal, hegemony, myth, real interest, objective interest, semantic violence, symbolic violence, alienation, domination, revolution and emancipation. The denser this cluster, the more unmasking obtrudes from it. (5)

And he also associates the unmasking “style” with a culture of political correctness and a demand for compliance with a “progressive” agenda of political culture:

Rarely a day passes on Twitter without someone, somewhere, being upbraided for wickedness. When even a gesture or an intonation is potentially offensive to an aggrieved constituency on high alert, the opportunities for unmasking are endless. Some targets of censure are cowed. They apologize for an offense they were not conscious of committing. Publicly chastened, they resolve to be better behaved henceforth. (7)

A third salient difference between unmasking in popular culture and in academic social theory is that in the academy unmasking is considered progressive. Detecting concealed racism, white privilege, patriarchy, trans-gender phobia and colonial exploitation is the stock in trade of several disciplines, sub-disciplines and pseudo-disciplines across the humanities and social sciences. The common thread is the ubiquity of domination. (8)

Marxism lives on in sociology, in the humanities and social sciences, and in pockets of the wider culture. And wherever one finds Marxism, typically combined today to race and gender politics, and to postcolonial critique, one finds aspects of the unmasking template. (91)

These are currents of thought — memes, theoretical frameworks, apperceptions of the true nature of contemporary society — with which Baehr appears to have little patience.

But here are a few considerations in favor of unmasking in the world of politics, economics, and culture in which we now live.

First, Baehr’s aversion to active efforts to reveal the pernicious assumptions and motives of specific voices in social media is misplaced. When the language of hate, white supremacy, and denigration of Muslims, gays, and audacious women circulates in public discourse, along with memes that seem to derive directly from the fascist and neo-Nazi toolbox, is it not entirely appropriate to call those voices to task? Is it not important, even vital, to unmask the voices of hate that challenge the basis of a liberal and inclusive democracy (link)? Is it the unmaskers or the trolls conveying aggressive hate and division who most warrant our disapproval?

And likewise in the area of the thought-frameworks surrounding the facts of modern market society. In some sense the claim that class interest (corporate interest, business interest, elite interest) strives hard to create public understandings of the world that are at odds with the real power relations that govern us is too obviously true to debate. This is the purpose of much corporate public relations and advertising, self-serving think-tanking, and other concrete mechanisms of shifting the terms of public understanding in a direction more favorable to the interests of the powerful. (Here is an article in the New York Times describing research documenting sustained efforts by ExxonMobil to cast doubt in public opinion about the reality of global warming and climate change; link.) And there is no barrier to conducting careful, rigorous, and intellectually responsible “decoding” of these corporate efforts at composing a fantasy; this is precisely what Oreskes and Conway do with such force in Merchants of Doubt: How a Handful of Scientists Obscured the Truth on Issues from Tobacco Smoke to Global Warming in the case of corporate efforts to distort scientific reality concerning their products and their effects (link).

Baehr’s statements about the unavoidable dogmatism of “unmasking” analysis and criticism are also surprisingly categorical. “The kind of analysis it requires is never conveyed to the reader as an interpretation of events, hypothetical and contestable.” Really? Are there no honest scholars in the field of critical race theory, or in feminist epistemology and philosophy of science, or in the sociology of science and technology? What is this statement other than precisely the kind of wholesale rejection of the intellectual honesty of one’s opponents that otherwise seems to animate Baehr’s critique?

The Unmasking Style is a bit of a paradox, in my view. It denounces the “style” of unmasking, and yet it reads as its own kind of wholesale discrediting of an intellectual orientation for which Baehr plainly has no patience. This is the orientation that takes seriously the facts of power, privilege, wealth, and racial and gender domination that continue to constitute the skeleton of our world. It is fine, of course, to disagree with this fundamental diagnosis of the dynamics of power, domination, and exploitation in the current world. But Baehr’s book has many of the features of tone and rhetoric that the author vigorously criticizes in others. It is perplexing to find that this book offers so little of what the author seems to be calling for — an intellectually open effort to discern the legitimate foundations of one’s opponent’s positions. In my view, readers of The Unmasking Style would be well advised to read as well one or two books by scholars like Frédéric Vandenberghe, including A Philosophical History of German Sociology, to gain a more sympathetic view of critical sociological theory and its efforts to discern the underlying power relations of the modern world (link).

In general, I find that there is much more intellectual substance to efforts to uncover the interest-bias of various depictions of the capitalist world than Baehr is willing to recognize. How do energy companies shape the debate over climate change? How did Cold War ideologies influence the development of the social sciences in the 1950s? How has pro-business, anti-regulation propaganda made the roll-back of protections of the health and safety of the public possible? What does the current administration’s persistent language about “dangerous immigrants” signify in terms of racial prejudice? These are questions that invoke some kind of “demystifying” analysis that would seem to fall in the category of what Baehr classifies as “unmasking”; and yet it is urgent that we undertake those inquiries.

A companion essay by Baehr, “The image of the veil in social theory”, appears in Theory and Society this month (link), and takes a nuanced approach to the question of “mask” and “veil”. The essay has little of the marks of polemical excess that seem to permeate the book itself. Here is the abstract to the essay:

Social theory draws energy not just from the concepts it articulates but also from the images it invokes. This article explores the image of the veil in social theory. Unlike the mask, which suggests a binary account of human conduct (what is covered can be uncovered), the veil summons a wide range of human experiences. Of special importance is the veil’s association with religion. In radical social thought, some writers ironize this association by “unveiling” religion as fraudulent (a move indistinguishable from unmasking it.) Baron d’Holbach and Marx offer classic examples of this stratagem. But other writers, notably Du Bois and Fanon, take a more nuanced and more theoretically productive approach to both religion and the veil. Refusing to debunk religion, these authors treat the veil—symbol and material culture—as a resource to theorize about social conflict. Proceeding in three stages, I, first, contrast the meanings of mask and unmasking with more supple veil imagery; second, identify anti-religious unveiling that is tantamount to unmasking; and, third, examine social theories of the veil that clarify the stakes of social adversity and political struggle. Du Bois’s and Fanon’s contributions to veil imagery receive special attention.

The Unmasking Style is erudite and interesting, and plainly designed to provoke debate. I only wish that it gave more consideration to the very real need we have to confront the lies and misrepresentations that currently pervade our contemporary world.

Sustaining a philosophy research community


The European Network for Philosophy of Social Science (ENPOSS) completed its annual conference in Krakow last week. It was a stimulating and productive success, with scholars from many countries and at every level of seniority. ENPOSS is one of the most dynamic networks where genuinely excellent work in philosophy of social science is taking place (link). Philosophers from Germany, Poland, Norway, Spain, France, the Netherlands, the UK, and other countries came together for three intensive days of panels and discussions. The discussions made it clear that this is an integrated research community with a common understanding of a number of research problems and a common vocabulary. There is a sense of continuing progress on key issues — micro-macro ontology, social mechanisms, naturalism, intentionality, institutional imperatives, fact-value issues, computational social science, and intersections of disciplinary perspectives, to name several.

Particular highlights were keynote addresses by Dan Hausman (“Social scientific naturalism revisited”), Anna Alexandrova (“Are social scientists experts on values?”), and Bartosz Brozek (“The architecture of the legal mind”). There were also lively book discussions on several current books in the philosophy of social science — Chris Mantzavinos’s Explanatory Pluralism, Lukasz Hardt’s Economics Without Laws: Towards a New Philosophy of Economics, and my own New Directions in the Philosophy of Social Science. Thanks to Eleonora Montuschi, Gianluca Manzo, and Federica Russo for excellent and stimulating discussion of my book.

It is interesting to observe that the supposed divide between analytic and Continental philosophy is not in evidence in this network of scholars. These are philosophers whose Ph.D. training took place all over Europe — Italy, Belgium, Finland, Germany, France, Spain, the UK … They are European philosophers. But their philosophical ideas do not fall within the stereotyped boundaries of “Continental philosophy.” The philosophical vocabulary in evidence is familiar from analytic philosophy. At the same time, this is not simply an extension of Anglo-American philosophy. The style of reasoning and analysis is not narrowly restricted to the paradigms reflected by Russell, Dummett, or Parfit. It is, perhaps, a new style of European philosophy. There is a broad commitment to engaging with the logic and content of particular social sciences at a level that would also make sense to the practitioners of sociology, political science, or economics. And there is a striking breadth to the substantive problems of social life that these philosophers are attempting to better understand. The overall impression is of a research community that has the features of what Imre Lakatos referred to as a “progressive research programme” in Criticism and the Growth of Knowledge — one in which problems are being addressed and treated in ways that shed genuinely new light on them. Progress is taking place.

There were two large topic areas that perhaps surprisingly did not find expression in the ENPOSS program. One is the field of critical realism and the ideas about social explanation advanced by Roy Bhaskar and Margaret Archer. And the second is the theory of assemblages put forward by Deleuze and subsequently elaborated by DeLanda and Latour. These topic areas have drawn a fair amount of attention from social theorists and philosophers in other parts of the philosophy of social science research community. So it is interesting to realize that they were largely invisible in Krakow. This leads one to think that this particular network of scholars is simply not very much influenced by these ideas.

Part of the dynamism of the ENPOSS conference, both in Krakow and in prior years, is the broad sense that these issues matter a great deal. There was a sense of the underlying importance of the philosophy of social science. Participants seem to share the idea that the processes of social change and periodic crisis that we face in the contemporary world are both novel and potentially harmful to human flourishing, and that the social sciences need to develop better methods, ontologies, and theories if they are to help us to understand and improve the social world in which we live. So the philosophy of social science is not just a contribution to a minor area within the grand discipline of philosophy; more importantly, it is a substantial and valuable contribution to our collective ability to bring a scientific perspective to social problems and social progress.

Next year’s meeting will take place in early September at the University of Hannover and will be a joint meeting with the US-based Philosophy of Social Science Roundtable. The call for papers will be posted on the ENPOSS website.

Gross on the history of analytic philosophy in America

Neil Gross has a remarkably good ear for philosophy. And this extends especially to his occasional treatments of the influences that helped shape the discipline of philosophy in the United States in the second half of the twentieth century. His sociological biography of Richard Rorty is a tour de force (Richard Rorty: The Making of an American Philosopher; link). There he skillfully maps the “field” of American philosophy (in the Bourdieusian sense) and places the evolution of Rorty’s thought within the landmarks of the field. The book is an excellent exemplar of the new sociology of ideas, bringing together material, symbolic, and intellectual forces that influence the direction and shape of an intellectual tradition.

In his contribution to Craig Calhoun’s Sociology in America: A History Gross offers a very short description of the ideological and social forces that helped to determine the directions taken by the philosophy discipline in the post-war years, and this too is very illuminating. In contrast to historians of philosophy who tell the story of the development of a period in terms of the internal intellectual problems and debates that determined it, Gross seeks to identify some of the external factors that made the terrain hospitable to this movement or that. (Consider, by contrast, the internalist story that Michael Beaney tells in The Analytic Turn: Analysis in Early Analytic Philosophy and Phenomenology and in many chapters of The Oxford Handbook of The History of Analytic Philosophy.) Several paragraphs are worth quoting at length, since philosophers are unlikely to browse this collection on the history of American sociology.

Within academic philosophy, pragmatism’s stature was diminished considerably in the 1940s, 1950s, and 1960s as a result of the rise of what is called analytic philosophy. Analytic philosophy began with the efforts of G. E. Moore and Bertrand Russell in England to vanquish the idealism that had become popular there after years when empiricism dominated (Delacampagne 1999). Russell, influenced by the efforts of Gottlob Frege to develop a formal system by which logical propositions could be represented, took the view that new light could be shed on long-standing philosophical problems if attention were paid to the language in which they are expressed, and to the logical assumptions underlying that language. Russell made crucial contributions to the philosophy of mathematics, in which he tried, like Frege, to reduce mathematics to logic, but he also sought to develop an alternative metaphysics according to which objects in the world are seen as composed of logical atoms to which more complex entities can be reduced. The young Ludwig Wittgenstein studied under Russell and picked up where he and Frege left off, arguing in the Tractatus Logico-Philosophicus (1922) that facts, not logical atoms or objects per se, compose the world, and that language — which represents facts — does so by “picturing” them in logically valid propositions. On the basis of this assumption, Wittgenstein claimed that many traditional philosophical problems — particularly those concerning ethics, metaphysics, and aesthetics, which do not meet these criteria for picturing — are nonsensical. (201-202)

Gross goes on to describe the advent of ordinary language philosophy (Ryle, Austin), and the logical positivists. The positivists are particularly important in his story and in the subsequent development of professional academic philosophy in the United States.

The positivists, along with their counterparts at Oxford and Cambridge, had an enormous impact on U.S. philosophy, giving rise to a new style and tradition of philosophical scholarship. (202)

And Gross offers a sociological and political hypothesis for why analytic philosophy prevailed over pragmatism and idealism.

That analytic philosophy, at least in its early stages, downgraded the status of political philosophy may also have helped protect the field from critical scrutiny during the McCarthy era (McCumber 2001). Within the space of a few years, philosophers who saw themselves as working in the analytic tradition came to dominate nearly all the top-ranked U.S. philosophy graduate programs, analytic work became hegemonic in the major academic journals, and analysts came to assume leadership positions in the American Philosophical Association (Wilshire 2002). Pragmatism was marginalized as a consequence. Russell, for his part, had been sharply critical of Dewey, accusing pragmatism of being a philosophy “in harmony with the age of industrialism and collective enterprise” because it involved a “belief in human power and the unwillingness to admit ‘stubborn facts,’” which manifested itself in the view that the truth of a belief is a matter of its effects rather than its causes. Some American philosophers, like Chicago’s Charles Morris, tried to combine pragmatism and logical positivism, while others, like Quine or his Harvard colleague Morton White, brought pragmatism and linguistic analysis together in other ways. Nevertheless, many who worked in an analytic style came to see pragmatism and analytic philosophy as opposed, and pragmatism’s reputation went into decline. (203)

This is a nuanced and plausible precis of the evolution of academic philosophy during these decades, and the material influences that Gross cites (the influx of Vienna-school philosophers caused by the rise of Nazism, the political threat of McCarthyism) seem to be genuine historical causes of the rise of analytic philosophy dominance. And, consistent with the methods and priorities of the new sociology of ideas (link), Gross is very sensitive to the particulars of the institutions, journals, and associations through which a discipline seeks to define itself.

Contrast this narrative with the brief account offered by Michael Beaney in his introduction to The Oxford Handbook of The History of Analytic Philosophy. Beaney too points out the importance of the exodus of positivist philosophers from Europe caused by the Nazi rise to power (kl 636). But the balance of his account works through the substantive ideas and debates that took center stage in academic philosophy in the 1940s and 1950s. To read his account, philosophy moved forward as a consequence of a series of logical debates.

Agreement on the key founders already gives some shape to the analytic tradition — as a first approximation, we can characterize it as what is inspired by their work. With this in mind, we can then identify two subsequent strands in analytic philosophy that develop the ideas of its four founders [Russell, Moore, Frege, Wittgenstein]. The first is the Cambridge School of Analysis … and the second is logical empiricism. (kl 826)

The impetus of Gross’s interest in the development of analytic philosophy in American universities was the impact this movement had on pragmatism. Essentially Gross argues that pragmatism was pushed into a minor role within academic philosophy by the ascendency of positivism and analytic philosophy, and that the latter occurred because of social factors in the university and society at large. Cheryl Misak is a contemporary expert on pragmatism (The American Pragmatists (The Oxford History of Philosophy), New Pragmatists), and she disputes this view from a surprising direction: she argues that analytic philosophy actually absorbed the greater part of pragmatism, and that one could make the case that pragmatist ideas have great contemporary influence within philosophy. Her argument is summarized in “Rorty, Pragmatism, and Analytic Philosophy” (link).

When the logical empiricists arrived in America, they found a soil in which their position could thrive. They did not arrive in a land that was inhospitable to their view, nor did they need to uproot the view they found already planted there. (373)

She argues that Peirce’s pragmatism had much in common with positivism, and she traces a fairly direct lineage from Peirce through Dewey and C.I. Lewis to Quine, Goodman, and Wilfrid Sellars. Here is her conclusion:

The epistemology and the view of truth that dominated analytic philosophy from the 1930s logical empiricism right through to the reign of Quine, Goodman, and Sellars in the 1950s–60s was in fact pragmatism. The stars of modern analytic philosophy were very much in step with pragmatism during the years in which it was supposedly driven out of philosophy departments by analytic philosophers. (380)

It seems to me that it is possible that both Misak and Gross are right, because they are concerned with different aspects of the “field” of academic philosophy. Misak is focusing on the issues of content, logic, and epistemology, and she finds that there is a substantial continuity on these issues across the literature of both analytic philosophy and classical pragmatism. But Gross has taken a broader focus: what are the paradigmatic topics and modes of approach that were characteristic of analytic philosophy and pragmatism? What were the “styles” of thinking that were characteristic of analytic philosophy and pragmatism? And he is right in thinking that, had Peirce, James, Dewey, and their successors prevailed by dominating the chief research departments of philosophy, American academic philosophy would have looked very different.

Nelson Goodman on psychology

Nelson Goodman is best known within philosophy as an iconoclast within the logical empiricist tradition. He published Fact, Fiction and Forecast in 1954, offering a “new riddle of induction.” Goodman was deeply interested in the arts and he argued that artistic expression is on a par with other forms of assertion and representation — for example, in Languages of Art (1968). And his 1978 book, Ways of Worldmaking, cast doubt on the empiricist project of extracting concepts directly from experience. So Goodman was an important voice within American analytic philosophy. But he had a significant influence on me during my graduate studies in the context of a very different set of problems — the philosophy of psychology.

Goodman taught a course titled “Philosophical Problems in Psychology” at Harvard in 1971. The course contained material from the tradition of empiricist philosophy — Locke and Berkeley — as well as then-current research on cognition and representation by empirical psychologists. These included Jean Piaget, Jerome Bruner, T.G.R. Bower, Ray Birdwhistell, Paul Kolers, Michael Posner, and R. W. Sperry. Goodman was interested in how perception works — according to the philosophers and according to the psychologists. The guiding concern was this: what is the nature and origin of the conceptual systems through which human beings make sense of the world around them? The course began with an examination of the theories of perception and representation offered by Locke and Berkeley, beginning with the empiricists’ critique of innate ideas, and then proceeded to contemporary efforts to analyze the same processes in real human beings.

Some of the writings of Jean Piaget played a central role in the topics and discussions of this class, including The Construction Of Reality In The Child. Goodman was interested in Piaget’s efforts to chronicle the development of the child’s conceptual world — the formations through which the child makes sense of sensorimotor experience at various ages.

Another noteworthy component of the course was an extensive discussion of Paul Kolers’ work on motion perception eventually published in Aspects of Motion Perception (1972). (This research was published in 1971 as “Figural Change in Apparent Motion”; link.) The phenomenon that Kolers described was a flashing pair of images that oscillated between one geometrical figure and another. The images are stationary, but the visual impression is of a smoothly moving object that changes from one shape to the other. Goodman provided a detailed and perceptive analysis of the methods and assumptions that underlay Kolers’ treatment of the phenomenon.

One idea that is pervasive throughout many of Goodman’s writings is the idea of conceptual relativism. This is the key contribution of Ways of Worldmaking. The idea comes up repeatedly in the 1971 course. Here is a paragraph taken from my notes from the course that captures a lot of Goodman’s own views about conceptual schemes:

Experience is always organized in one way or another. Moreover, not all ways of organizing are equally good; some work much better than others, some may be ineffectual; others may lead to internal inconsistencies. When one proves unsatisfactory, this leads to a reorganization. But doesn’t this require that there be a world out there to which our schemes conform? Yes and no; no because there is no unalterable way of looking at the world in which facts about the world are expressed; but yes, within any given system of organization there are true facts about the world to which other schemata must conform. The important thing here is that there is no unique “structure of the world”…. Goodman calls his point of view “neutralism” or “conceptual pluralism”. (11/9/1971)

What is most impressive about this course is the fact that Goodman paid close attention to current work in psychology and the cognitive sciences. Goodman followed a philosophical method in this course that I continue to admire — the idea that philosophical reflections about an area of science can be valuable to the degree that they are closely connected to real research problems in that area of research. This approach leans against a primary focus on big issues — is behaviorism correct? Are there mental entities? — in favor of more specific questions within philosophical and psychological studies of perception.

What I now find most interesting about the design of this course is the underlying assumption that philosophers and empirical scientists can find common questions where their methodologies can fruitfully interact to shed greater light on the issues.  In this case, philosophers have a lot of questions about perception and conceptual schemes; and cognitive and developmental psychologists are doing experimental and theoretical work that sheds light on exactly these issues. Finding a common vocabulary is challenging for the two research traditions, but it seems clear that the collaboration can be highly fruitful.

(Here is an interesting collection that takes seriously some of Goodman's strictures on conceptual systems from the point of view of problems in the social sciences: Mary Douglas and David Hull, eds., How Classification Works: Nelson Goodman Among the Social Sciences.) Alessandro Giovannelli's article on Goodman in the Stanford Encyclopedia of Philosophy is excellent; link. And here is an extensive bibliography of Goodman's works; link.

Quine’s indeterminacies

W.V.O. Quine’s writings were key to the development of American philosophy in the 1950s, 1960s, and 1970s. His landmark works (“Two Dogmas of Empiricism,” “Ontological Relativity,” and Word and Object, for example) provided a very appealing combination of plain speaking, seriousness, and import. Quine’s voice certainly stands out among all American philosophers of his period.

Quine’s insistence on naturalism as a view of philosophy’s place in the world is one of his key contributions. Philosophy is not a separate kind of theorizing and reasoning about the world, according to Quine; it is continuous with the empirical sciences through which we study the natural world (of which humanity and the social world are part). Also fundamental is his coherence theory of the justification of beliefs, both theoretical and philosophical. This theory was the source of John Rawls’s method of reasoning for a theory of justice based on the idea of “reflective equilibrium.” This approach depended on careful weighing of our “considered judgments” and the adjustments of ethical beliefs needed to create the most coherent overall system of ethical beliefs.

There is another feature of Quine’s work that is particularly appealing: the fundamental desire that Quine had to make sense of obscure issues and to work through to plausible solutions. In some corners of the intellectual world there is a premium on obscurity and elliptical thinking. Quine was a strong antidote to this tendency. (John Searle makes similar points about the value of clarity in philosophical argument in his comments on Foucault here.)
 
Take “Ontological Relativity” (OR), the first of the Dewey Lectures in 1968 (link). The essay articulates some of Quine’s core themes — the behaviorist perspective on language and meaning, the crucial status of naturalism, and the indeterminacy of meaning and reference. But the essay also demonstrates a sensitive and careful reading of Dewey. Quine shows himself to be a philosopher who was able to give a respectful and insightful account of the ideas of other great philosophers.
Philosophically I am bound to Dewey by the naturalism that dominated his last three decades. With Dewey I hold that knowledge, mind, and meaning are part of the same world that they have to do with, and that they are to be studied in the same empirical spirit that animates natural science. There is no place for a prior philosophy. (185).
In OR Quine refers to a key metaphor in his own understanding of language and meaning, the “museum myth” theory of meaning. “Uncritical semantics is the myth of a museum in which the exhibits are meanings and the words are labels. To switch languages is to change the labels” (186). Against the museum myth, Quine argues here (as he does in Word and Object as well) for the indeterminacy of “meaning” and translation. The basic idea of indeterminacy of translation, as expressed in WO, comes down to this: there are generally alternative translation manuals between two languages (or within one’s own) which are equally compatible with all observed verbal behavior, and yet which map expressions onto significantly different alternative sentences. Sentence A can be mapped onto B1 or B2; B1 and B2 are apparently not equivalent; and therefore Sentence A does not have a fixed and determinate meaning either in the language or in the heads of the speakers. As Quine observes in his commentary on his example from Japanese concerning the translation of “five oxen”, “between the two accounts of Japanese classifiers there is no question of right and wrong” (193).
For naturalism the question whether two expressions are alike or unlike in meaning has no determinate answer, known or unknown, except insofar as the answer is settled in principle by people’s speech dispositions, known or unknown. If by these standards there are indeterminate cases, so much the worse for the terminology of meaning and likeness of meaning. (187)
Returning to the extended example he develops of indeterminacy of translation around the word “gavagai” that he introduced in Word and Object, Quine notes that the practical linguist will equate gavagai with “rabbit”, not “undetached rabbit part”. But he insists that there is no objective basis for this choice.
The implicit maxim guiding his choice of ‘rabbit’, and similar choices for other native words, is that an enduring and relatively homogeneous object, moving as a whole against a contrasting background, is a likely reference for a short expression. If he were to become conscious of this maxim, he might celebrate it as one of the linguistic universals, or traits of all languages, and he would have no trouble pointing out its psychological plausibility. But he would be wrong; the maxim is his own imposition, toward settling what is objectively indeterminate. It is a very sensible imposition, and I would recommend no other. But I am making a philosophical point. (191)
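Quine's point can be made vivid with a toy sketch (this is my illustration, not Quine's own formalism; the data structures and names are invented for the example). Two rival "translation manuals" render "gavagai" differently, yet both fit every piece of behavioral evidence, because a rabbit is present in exactly the situations in which an undetached rabbit part is present:

```python
# Behavioral evidence: stimulus situations paired with the native
# speaker's verdict on the query "Gavagai?".
observations = [
    ({"rabbit_present": True}, "assent"),
    ({"rabbit_present": False}, "dissent"),
]

# Two rival translation manuals for the same native word.
manual_1 = {"gavagai": "rabbit"}
manual_2 = {"gavagai": "undetached rabbit part"}

def holds(english_expression, situation):
    # In any observable situation, a rabbit is present exactly when an
    # undetached rabbit part is present -- the two candidate
    # translations have the same extension over the evidence.
    return situation["rabbit_present"]

def consistent(manual, observations):
    """A manual fits the behavioral evidence if its translation of
    'gavagai' holds in exactly the situations where the native assents."""
    for situation, verdict in observations:
        predicted = "assent" if holds(manual["gavagai"], situation) else "dissent"
        if predicted != verdict:
            return False
    return True

print(consistent(manual_1, observations))  # True
print(consistent(manual_2, observations))  # True
```

Both manuals pass every behavioral test, yet they translate "gavagai" onto non-equivalent English expressions; nothing in the evidence adjudicates between them. That is the shape of Quine's indeterminacy claim.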
In “Ontological Relativity” Quine carries the argument for the indeterminacy of meaning an important step further, arguing for the “inscrutability of reference.” That is: there is no behavioral basis for concluding that a given language system involves reference to this set of fundamental entities rather than that set of fundamental entities. So not only can we not say that there are unique meanings associated with linguistic expressions; we cannot even say that expressions refer uniquely to a set of non-linguistic entities. This is what the title implies: there is no fixed ontology for a language or a scientific or mathematical theory.

These are radical and counter-intuitive conclusions — in some ways as radical as the “incommensurability of paradigms” notion associated with Thomas Kuhn and the critique of objectivity associated with Richard Rorty. What is most striking, though, is the fact that Quine comes to these conclusions through reasoning that rests upon very simple and clear assumptions. Fundamentally, it is his view that the only kinds of evidence and the only constraints available to speakers and hearers of a language are the evidence of observable behavior; and the full body of this system of observations is insufficient to uniquely identify a single semantic map and a single ontology.

(Peter Hylton’s article in the Stanford Encyclopedia of Philosophy does a good job of capturing the logic of Quine’s philosophy; link.)

Marketing Wittgenstein

Who made Wittgenstein a great philosopher?  Why is the eccentric Austrian now regarded as one of the twentieth century’s greatest philosophers? What conjunction of events in his life history and the world of philosophy in the early twentieth century led to this accumulating recognition and respect?

We might engage in a bit of Panglossian intellectual history (“everything works out for the best!”) and say something like this: The young man Wittgenstein was in fact exceptionally talented and original, and eventually talent rises to the attention of the elite in a discipline or field of knowledge. But this story is implausible even in ordinary circumstances. And the circumstances in which Wittgenstein achieved eminence were anything but ordinary. His formal training was in engineering, not philosophy; his national origin was Austria, not Britain; his early years were marked by the chaos of the Great War; his personality was prickly and difficult; and his writings were as easily characterized as “peculiar” as “brilliant”.

The idea of a “field” introduced by Bourdieu in The Field of Cultural Production is particularly helpful in addressing this topic. (Here is a post that discusses the idea of a field; link.) The field of philosophy at a given time is an assemblage of institutions, personages, universities, journals, and funding agencies.  The question of whether an aspiring young philosopher rises or languishes is a social and institutional one, depending on the nature of his/her graduate program, the eminence of the mentors, the reception of early publications and conference presentations, and the like.  Indicators and causes of rising status depend on answers to questions like these: Are the publications included in the elite journals? Are the right people praising the work?  Is the candidate pursuing the right kinds of topics given the tastes of the current generation of “cool-finders” in the profession? This approach postulates that status in a given profession depends crucially on situational and institutional facts — not simply “talent” and “brilliance”. And in many instances, the reality of these parameters reflexively influences the thinker himself: the young philosopher adapts, consciously or unconsciously, to the signposts of status.

Neil Gross’s biography of Richard Rorty (Richard Rorty: The Making of an American Philosopher) provides a great example of careful analysis of a philosopher’s career in these terms (link). Gross provides a convincing account of how the influence of the field’s definition of the “important” problems affected Rorty’s development, and how the particular circumstances of the Princeton department affected his later development in an anti-analytic direction.  Camic, Gross, and Lamont provide similar examples in Social Knowledge in the Making, including especially Neil Gross and Crystal Fleming’s study of the evolution of a conference paper.

So what was the “field” into which Wittgenstein injected himself in his visits to Frege and Russell?  Here is a point that seems likely to me from the perspective of 2012: the “field” of analytic philosophy in 1905 was substantially less determinate than it was from 1950 to 1980.  This fact has two contradictory implications: first, that this indeterminacy made it more possible for an “oddball” philosopher to make it to the top; and second, that it made it more unlikely that talent would be consistently identified and rewarded.  The relative looseness of the constraints on the field permitted “sports” to emerge, and also made it possible that highly meritorious thinkers would be overlooked.  (So the brilliant young metaphysician studying philosophy at the University of Nebraska in 1908 might never have gotten a chance to move into the top reaches of the discipline.)

What were some of the situational facts that contributed to Wittgenstein’s meteoric rise? One element seems clear: Wittgenstein’s early association with Bertrand Russell beginning in 1911, and the high-level entrée this provided Wittgenstein into the elite circles of philosophy at Cambridge, was a crucial step in his rise to stardom. And Wittgenstein’s standing with Russell was itself the product of a curious conjunction: Wittgenstein’s fascination with Frege, the aspects of the Tractatus that appealed to Russell, and Wittgenstein’s personal intellectual style.  But because of this association, Wittgenstein wasn’t starting his rise to celebrity in the provinces, but rather at the center of British analytic philosophy.
Another element is one that was highly valued in Cambridge culture — the individual’s conversational skills. Simply being introduced into a circle of eminent thinkers doesn’t assure eminence. Instead, it is necessary to perform conversationally in ways that induce interest and respect. LW was apparently charismatic in an intense, harsh way. He was passionate about ideas and he expressed himself in ways that gave an impression of brilliant originality.  He made a powerful impression on the cool-finders.
And then there are his writings — or rather, his peculiar manuscript, Tractatus Logico-Philosophicus.

One could easily have dismissed the manuscript as a mad expression of logicism run wild, with its numbered paragraphs, its dense prose, and its gnomic expressions. Or one could react, as Russell did, with understanding and fascination. But without the reputation created by the reception of TLP, Wittgenstein would never have gotten the chance to present the equally perplexing and challenging thinking that was expressed in Philosophical Investigations (3rd Edition).  In fact, almost all of LW’s written work is epigrammatic and suggestive rather than argumentative and constructive. When there is insight, it comes as a bolt from the blue rather than as a developed line of thought.

So what if we test out this idea: a verbally brilliant man, a charismatic interlocutor, a person with original perspectives on philosophical topics and methods — but also a figure who benefited greatly from some excellent marketing, some influential patrons, and some situationally unusual lucky breaks. Had Russell been less patient, had publishers found TLP too weird for their liking, had Moore been less open-minded about Wittgenstein’s PhD defense — then analytical philosophy might no longer remember the name “Wittgenstein”. This interpretation of Wittgenstein’s stature suggests something more general as well: there is an enormous dollop of arbitrariness and contingency in the history of ideas and in the processes through which some thinkers emerge as “canonical”.
Anat Biletzki and Anat Matar provide an excellent introduction to Wittgenstein’s philosophy in the Stanford Encyclopedia of Philosophy (link).


The sociology of ideas: Richard Rorty

Where do new ideas and directions of thought come from?  Is it possible to set a context for important changes in intellectual culture, in the sciences or the humanities?  Can we give any explanation for the development of individual thinkers’ thought?

These are the key questions that Neil Gross raises in his sociological biography, Richard Rorty: The Making of an American Philosopher (2009).  The book is excellent in every respect.  Gross has gone into thorough detail in discovering and incorporating correspondence with family and friends that allows him to reconstruct the micro setting within which the young Rorty took shape.  His exposition of the complex philosophical debates that set the stage for academic philosophy in the United States from the 1950s to the 1980s is effortless and accurate.  And he offers a very coherent interpretation of many of Rorty’s most important ideas.  Any one of these achievements is noteworthy; together they are exceptional.

Gross is not interested in writing a traditional intellectual biography. Rather, he wants to advance the emerging field of “new sociology of ideas” through an extended case study of the development of a particularly important philosopher. The purpose of the book is to provide a careful and sociologically rich account of the ways in which a humanities discipline (philosophy) developed, through a crucial period (the 1940s through the 1980s).

My goal is to develop, on the basis of immersion in an empirical case, a new theory about the social influences on intellectual choice, particularly for humanists—that is, a theory about the social factors that lead them to fasten onto one idea, or set of ideas, rather than another, during turning points in their intellectual careers. (kl 95)

The argument I now want to make is that the developments considered in chapters 1–8 reflect not Rorty’s idiosyncratic and entirely contingent biographical experiences but the operation of more general social mechanisms and processes that shaped and structured his intellectual life and career. (kl 5904)

Here is how he describes the sociology of ideas:

Sociologists of ideas seek to uncover the relatively autonomous social logics and dynamics, the underlying mechanisms and processes, that shape and structure life in the various social settings intellectuals inhabit: academic departments, laboratories, disciplinary fields, scholarly networks, and so on. It is these mechanisms and processes, they claim, that—in interaction with the facts that form the material for reflection—do the most to explain the assumptions, theories, methodologies, interpretations of ambiguous data, and specific ideas to which thinkers come to cleave. (kl 499)

The goal is to provide a sociological interpretation of the development of thinkers and disciplines within the humanities (in deliberate analogy to current studies in the sociology of science).  Gross acknowledges but rejects earlier efforts in the sociology of knowledge (Marx, Mannheim) as reductionist, explaining ideas by the thinker’s location within a set of social structures.  Gross is more sympathetic to more recent contributions, including especially the theories of Bourdieu (field) (Homo Academicus) and Randall Collins (interaction ritual chains) (The Sociology of Philosophies: A Global Theory of Intellectual Change).  These theories emphasize the incentives and advantages that lead strategically minded professionals in one direction or another within a discipline or field. But Gross argues that these theories too are insufficiently granular and don’t provide a basis for accounting for the choices made by particular intellectuals.

One does not have to be a methodological individualist to recognize that meso- and macrolevel social phenomena are constituted out of the actions and interactions of individual persons and that understanding individual-level action—its nature and phenomenology and the conditions and constraints under which it unfolds—is helpful for constructing theories of higher order phenomena, even though the latter have emergent properties and cannot be completely reduced to the former. (kl 158)

To fill this gap he wants to offer a sociology of ideas that brings agency back in.  He introduces the idea of the role of the individual’s “self-concept”, which turns out to be a basis for the choices the young intellectual makes within the context of the strategy-setting realities of the field.  A self-concept is a set of values, purposes, and conceptions that the individual has acquired through a variety of social structures, and that continues to evolve through life.  Gross emphasizes the narrative character of a self-conception: it is expressed and embodied through the stories the individual tells him/herself and others about the development of his/her life.

The theory of intellectual self-concept can thus be restated as follows: Thinkers tell stories to themselves and others about who they are as intellectuals. They are then strongly motivated to do intellectual work that will, inter alia, help to express and bring together the disparate elements of these stories. Everything else being equal, they will gravitate toward ideas that make this kind of synthesis possible. (kl 6650)

There is good reason to believe that such stories or self-narratives are not epiphenomenal aspects of experience but influences on social action in their own right. Indeed, few notions have been as important in social psychology as those of self and self-concept. (551)

Simply stated, the theory of intellectual self-concept holds that intellectuals tell themselves and others stories about who they are qua intellectuals: about their distinctive interests, dispositions, values, capacities, and tastes. (kl 6487)

And Gross thinks that these stories are deeply influential, in terms of the choices that a developing intellectual makes at each stage of life.  In particular, he thinks that the academic’s choices are often inflected by his/her self-conception to an extent that may override the strategic and prudential considerations that are highlighted by Bourdieu and Collins. Bourdieu and Collins offer “no attempt to think through how the quest for status and upward mobility in an intellectual field may intersect and sometimes compete with thinkers’ cognitive and affective interests in remaining true to narratives of intellectual selfhood that have become more or less stable features of their existence” (562). In his view, identity trumps interest, at least sometimes.

Here is how he applies this analysis to Rorty:

My central empirical thesis is that the shift in Rorty’s thought from technically oriented philosopher to free-ranging pragmatist reflected a shift from a career stage in which status considerations were central to one in which self-concept considerations became central. (576)

Or in other words, Rorty’s early career is well explained by the Bourdieu-Collins theory, whereas his later shift towards pragmatism and more heterodox, pluralistic philosophy is explained by his self-concept.

In Gross’s telling of the story, much of Richard Rorty’s self-concept was set during childhood and adolescence by the influences of his remarkable parents, James Rorty and Winifred Raushenbush.  The parents were politically engaged literary and political intellectuals, and they created an environment of social and intellectual engagement that set aspects of Richard’s self-concept that influenced several key choices in his life.  Gross’s depiction of the social and intellectual commitments of James and Winifred, and the elite milieu in which they circulated, is detailed and striking. This “social capital” served Richard well in his course from the University of Chicago to Yale into his academic career.

Gross believes that the key turns in Rorty’s development were these: first, the decision to do a masters thesis on Whitehead at Chicago; then his choice of Yale as a doctoral institution, with a Ph.D. dissertation on “The Concept of Potentiality” (a metaphysical subject); his shift towards analytic philosophy during his first several years of teaching at Wellesley; his deepening engagement with analytic philosophy in the early years at Princeton; his eventual critique of analytic philosophy in Philosophy and the Mirror of Nature; and his further alienation from analytic philosophy in the years that followed towards a contemporary pragmatism and a more pluralistic view of the domain of philosophical methods.  In other words, he began in an environment where pragmatism and substantive metaphysics were valued; he shifted to the more highly valued field of analytic philosophy during the years in which he was building his career and approaching tenure; and he returned to a more pluralistic view of philosophy in the years when his career was well established.

Rorty’s turn to analytic philosophy makes sense in a Bourdieuian way. Gross describes his Wellesley-era and early Princeton philosophical writings in these terms:

They represent Rorty’s attempt to make contributions to analytic thought of a piece with those that other bright, young analytic philosophers of his generation were making. They were, in other words, part of Rorty’s efforts to position himself even more squarely within the mainstream philosophical establishment. (kl 4642)

Those observing Rorty’s career from afar might have interpreted this spate of analytic publications, coming on the heels of The Linguistic Turn, as evidence that Rorty had joined the ranks of the analytic community and saw his work as of a piece with that being done by other analysts. (kl 4853)

But eventually Rorty shifts his philosophical stance, towards a pluralistic and pragmatist set of ideas about philosophical method and subject matter.  Here is how Gross summarizes Rorty’s turn to pragmatism:

On this understanding, a pragmatist is someone who holds three beliefs: first, that “there is no wholesale, epistemological way to direct, or criticize, or underwrite, the course of inquiry”; second, that “there is no . . . metaphysical difference between facts and values, nor any methodological difference between morality and science”; and third, that “there are no constraints on inquiry save conversational ones.” (kl 719)

As Rorty went about developing a historicist, therapeutic alternative to the analytic philosophy he saw being practiced by his Princeton colleagues and others, no one’s work was more important to him than that of Thomas Kuhn. (kl 5054)

And Gross dates Rorty’s impulses towards pragmatism to a much earlier phase of his intellectual development than is usually done:

Far from it being the case, as some Rorty interpreters have claimed, that Rorty’s interest in pragmatism arose only after he made a break with analytic philosophy, his earliest work is characterized by a desire to harness pragmatist insights in the service of a revised conception of the analytic project. (kl 4037)

Rorty rode both of these intellectual waves, becoming caught up in the rigorism of the analytic paradigm in the 1960s and then emerging as a leading figure in the antirigorist movement of the 1970s and 1980s. (kl 7058)

The book repays a close reading, in that it sheds a lot of light on a key period in the development of American philosophy and it provides a cogent sociological theory of the factors that influenced this development.  It is really a remarkable book. It would be fascinating to see similar accounts of innovative thinkers such as Nelson Goodman, John Rawls, or (from literary studies) Stephen Greenblatt.  That’s not likely to happen, however, so this book will probably remain a singular illustration of a powerful theory of the sociology of ideas.