Gross inequalities in a time of pandemic

Here is a stunning juxtaposition in the April 2 print edition of the New York Times. Take a close look. The top panel updates readers on the fact that the city and the region are enduring unimaginable suffering and stress caused by the COVID-19 pandemic, with 63,300 victims and 2,624 deaths (as of April 4) — and with hundreds of thousands facing immediate, existential financial crisis because of the economic shutdown. And only eight miles away, as the Sotheby’s “Prominent Properties” half-page advertisement proclaims, home buyers can find secluded luxury, relaxation, and safety, for residential estates priced at $32.9 million and $21.5 million. In case the reader missed the exclusiveness of these properties, the advertisement mentions that they are “located in one of the nation’s wealthiest zip codes”. And, lest the prospective buyer be concerned about maintaining social isolation in these difficult times, the ad reminds prospective buyers that these are gated estates — in fact, the $33M property is located on “the only guard gated street in Alpine”.

Could Friedrich Engels have found a more compelling illustration of the fundamental inhumanity of the inequalities that exist in twenty-first century capitalism in the United States? And there is no need for rhetorical exaggeration — here it is in black and white in the nation’s “newspaper of record”.

There were many compelling reasons supporting Elizabeth Warren’s proposal for a wealth tax. But here is one more: it is morally appalling, even gut-churning, to realize that $33 million for a family home (35,000 square feet, tennis court and indoor basketball court) is a reasonable “ask” for the super-wealthy in our country, the one-tenth of one percent who have ridden the crest of surging stock markets and finance and investment firms to a level of wealth that is literally unimaginable to at least 95% of the rest of the country.

Here is the heart of Warren’s proposal for a wealth tax (link):

Rates and Revenue

  • Zero additional tax on any household with a net worth of less than $50 million (99.9% of American households)
  • 2% annual tax on household net worth between $50 million and $1 billion
  • 4% annual Billionaire Surtax (6% tax overall) on household net worth above $1 billion
  • 10-Year revenue total of $3.75 trillion
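The bracket arithmetic in the proposal is simple enough to sketch in code. Here is a minimal illustration of the two-bracket schedule described above (the function name and the example households are mine, not part of the proposal):

```python
def warren_wealth_tax(net_worth: float) -> float:
    """Annual tax under the two-bracket schedule: 2% on net worth
    between $50 million and $1 billion, plus 6% (the 2% base rate
    plus the 4% Billionaire Surtax) on net worth above $1 billion."""
    tax = 0.0
    if net_worth > 50_000_000:
        # 2% bracket applies only to the slice between $50M and $1B
        tax += 0.02 * (min(net_worth, 1_000_000_000) - 50_000_000)
    if net_worth > 1_000_000_000:
        # 6% overall rate on the slice above $1B
        tax += 0.06 * (net_worth - 1_000_000_000)
    return tax

# Illustrative households:
# $40M net worth owes nothing (below the $50M threshold).
# $100M owes 2% of $50M = $1 million.
# $2B owes 2% of $950M + 6% of $1B = $19M + $60M = $79 million.
```

Note that the tax is marginal, like the income tax: a household just over the $50 million threshold pays 2% only on the amount above the threshold, not on its entire net worth.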

Are we all in this together, or not? If we are, let’s share the wealth. Let’s all pay our fair share. Let’s pay for the costs of fighting the pandemic and saving tens of millions of our fellow citizens from financial ruin, eviction, malnutrition, and family crisis with a wealth tax on billionaires. They can afford it. The “65′ saltwater gunite pool” is not a life necessity. The revenue estimate of the Warren proposal — $3.75 trillion — is roughly equal to the current estimate of what it will cost the US to overcome the pandemic, protect the vulnerable, and restart the economy. Both equity and the current crisis support such a plan.

Here is some background on the rising wealth inequalities we have witnessed in recent decades in the United States. Leiserson, McGrew, and Kopparam provide an excellent and data-rich survey of the system of wealth inequalities in the United States in “The distribution of wealth in the United States and implications for a net worth tax” (link). The increase in wealth inequality since 1989 is dramatic. The top 10% owned about 67% of all wealth in 1989; by 2016 this had risen to 77%.

The second graph is a snapshot for 2016 (link). Both income and wealth are severely unequal, but wealth is substantially more so. The top quintile owns almost 90% of the wealth in the United States, with the top 1% owning about 40% of all wealth.

The website Inequality.org provides an historical look at the growth of inequalities of wealth in the US (link). Consider this graph of the wealth shares over a century of the top 1%, .1%, and .01% of the US population; it is eye-popping. Beginning in roughly 1978 the shares of the very top segments of the US population began to rise, and the trend continued through 2012 — with no end in sight. The top 1% in 2012 owned 41% of all wealth; the top 0.1% owned 21%; and the top 0.01% owned 11%.

We need a wealth tax, and Elizabeth Warren put together a pretty convincing and rational plan. This is not a question of “soaking the rich”. It is a question of basic fairness. Our economy and society have functioned as an express elevator for ever-greater fortunes for the few, with essentially no improvement for 60-80% of the rest of America. An economy is a system of social cooperation, requiring the efforts of all members of society. But the benefits of our economic system have gone ever-more disproportionately to the rich and the ultra-rich. That is fundamentally unfair. Now is the time to bring equity back into our society and politics. If Mr. Moneybags can afford a $33M home in New Jersey, he or she can afford to pay a small tax on that wealth.

It is interesting to note that social scientists and anthropologists are beginning to study the super-rich as a distinctive group. A fascinating source is Iain Hay and Jonathan Beaverstock, eds., Handbook on Wealth and the Super-Rich. Especially relevant is Chris Paris’s contribution, “The residential spaces of the super-rich”. Paris writes:

Prime residential real estate remains a key element in super-rich investment portfolios, both for private use through luxury consumption and as investment items with anticipated long-term capital gain, often untaxed as properties are owned by companies rather than individuals. Most of the homes of the super-rich are purchased using cash, specialized financial instruments and/or through companies, and ‘the higher the price of the property, the less likely buyers were to arrange traditional mortgage financing for the home acquisition. Whether buyers are foreign or domestic, cash transactions predominate at the higher end of the market’ (Christie’s, 2013, p. 14). Such transactions, therefore, never enter ‘national’ housing accounting systems and play no part in many accounts of aggregate ‘national’ house price trends. For example, the analysis of house price trends in the Joseph Rowntree Foundation UK Housing Review is based on data relating to transactions using mortgages or loans, and EU and OECD comparisons between countries are based on the same kinds of data (Paris, 2013b).

Also fascinating in the volume is Emma Spence’s study of the super-rich when at sea in their super-yachts, “Performing wealth and status: observing super-yachts and the super-rich in Monaco”:

In this chapter I focus upon the super-yacht as a key tool for exploring how performances of wealth are made visible in Monaco. A super-yacht is a privately owned and professionally crewed luxury vessel over 30 metres in length. An average super-yacht, at approximately 47 metres in length, costs around €30 million to buy new, operates with a permanent crew of ten, and costs around €1.8 million per year to run. Larger super-yachts such as Motor Yacht (M/Y) Madame Gu (99 metres in length), or the current largest super-yacht in the world M/Y Azzam (180 metres in length) cost substantially more to build and to run. The price to charter (rent) a super-yacht also varies considerably with size, age and reputation of the shipyard in which it was built. For example, a typical 47-metre yacht can range between €100,000 to €600,000 per week to charter, plus costs. At the most exclusive end of the super-yacht charter industry costs are much higher. M/Y Solange, for example, is an 85-metre newly built yacht (2013) from reputable German shipyard Lürssen, which operates with 29 full-time crew, and is priced at €1 million plus costs to charter per week. The super-yacht industry is worth an estimated €24 billion globally (Rutherford, 2014, p. 51).

Brass on anti-Muslim violence in India

The occurrence of anti-Muslim violence, arson, and murder in New Delhi last month is sometimes looked at as simply an unpredictable episode provoked by protest against the citizenship legislation enacted by the BJP and Prime Minister Modi. (See Jeffrey Gettleman and Maria Abi-Habib’s New York Times article for a thoughtful and detailed account of the riots in New Delhi; link.) However, Paul Brass demonstrated several decades ago in The Production of Hindu-Muslim Violence in Contemporary India that riots and violent episodes like this have a much deeper explanation in Indian politics. His view is that the political ideology of Hindutva (Hindu nationalism) is used by the BJP and other extremist parties to advance their own political fortunes. This ideology (and the political program it is designed to support) is a prime cause of continuing violence by Hindu extremists against Muslims and other non-Hindu minorities in India.

Brass asks a handful of crucial and fundamental questions: Do riots serve a function in Indian politics? What are the political interests that are served by intensifying mistrust, fear, and hatred of Muslims by ordinary Hindu workers, farmers, and shopkeepers? How does a framework of divisive discourse contribute to inter-group hatred and conflict? “I intend to show also that a hegemonic discourse exists in Indian society, which I call the communal discourse, which provides a framework for explaining riotous violence.” (24) Throughout, Brass keeps the actors in mind — including leaders, organizers, and participants: “It is one of the principal arguments of this book that we cannot understand what happens in riots until we examine in detail the multiplicity of roles and persons involved in them”. (29) Here are the central themes of the book:

The whole political order in post-Independence north India and many, if not most of its leading as well as local actors—more markedly so since the death of Nehru—have become implicated in the persistence of Hindu-Muslim riots. These riots have had concrete benefits for particular political organizations as well as larger political uses. (6)

The maintenance of communal tensions, accompanied from time to time by lethal rioting at specific sites, is essential for the maintenance of militant Hindu nationalism, but also has uses for other political parties, organizations, and even the state and central governments. (9)

Brass documents his interpretation through meticulous empirical research, including a review of the demographic and political history of regions of India, a careful timeline of anti-Muslim riots and pogroms since Independence, and extensive interviews with participants, officials, and onlookers in one particularly important city, Aligarh, in Uttar Pradesh (northern India). Brass gives substantial attention to the discourse chosen by Hindu nationalist parties and leaders, and he argues that violent attacks are deliberately encouraged and planned.

Most commonly, the rhetoric is laced with words that encourage its members not to put up any longer with the attacks of the other but to retaliate against their aggression. There are also specific forms of action that are designed to provoke the other community into aggressive action, which is then met with a stronger retaliatory response. (24)

Brass asks the fundamental question:

What interests are served and what power relations are maintained as a consequence of the wide acceptance of the reality of popular communal antagonisms and the inevitability of communal violence? (11)

(We can ask the same question about the rise of nationalist and racist discourse in the United States in the past fifteen years: what interests are served by according legitimacy to the language of white supremacy and racism in our politics?)

Brass rejects the common view that riots in India are “spontaneous” or “responsive to provocation”; instead, he argues that communal Hindu-nationalist riots are systemic and strategic. Violence derives from a discourse of Hindu-Muslim hostility and the legitimization of violence. Given this view that riots and anti-Muslim violence are deliberate political acts in India, Brass offers an analysis of what goes into “making of a riot”. He argues that there are three analytically separable phases: preparation / rehearsal; activation / enactment; and explanation / interpretation (15). This view amounts to an interpretation of the politics of Hindu nationalism as an “institutionalized riot system” (15).

When one examines the actual dynamics of riots, one discovers that there are active, knowing subjects and organizations at work engaged in a continuous tending of the fires of communal divisions and animosities, who exercise by a combination of subtle means and confrontational tactics a form of control over the incidence and timing of riots. (31)

This deliberate provocation of violence was evident in the riots in Gujarat in 2002, according to Dexter Filkins in a brilliant piece of journalism on these issues in the New Yorker (link):

The most sinister aspect of the riots was that they appeared to have been largely planned and directed by the R.S.S. Teams of men, armed with clubs, guns, and swords, fanned out across the state’s Muslim enclaves, often carrying voter rolls and other official documents that led them to Muslim homes and shops.

Especially important in the question of civil strife and ethnic conflict in any country is the behavior and effectiveness of the police. Do the police work in an even-handed way to suppress violent acts and protect all parties neutrally? And does the justice system investigate and punish the perpetrators of violence? In India the track record is very poor, including in the riots in the early 1990s in Mumbai and in 2002 in Gujarat. Brass writes:

The government of India and the state governments do virtually nothing after a riot to prosecute and convict persons suspected of promoting or participating in riots. Occasionally, but less frequently in recent years, commissions of inquiry are appointed. If the final reports are not too damaging to the government of the day or to the political supporters of that government in the Hindu or Muslim communities, the report may be published. More often than not, there is a significant delay before publication. Some reports are never made public. (65)

This pattern was repeated in Delhi during the most recent anti-Muslim pogrom. The police stand by while Hindutva thugs attack Muslims, burn homes and shops, and murder the innocent. Conversely, when the police function as representatives of the whole of civil society rather than supporters of a party, they are able to damp down inter-religious killing quickly (as Brass documents in his examination of the period of relative peace in Aligarh from 1978-80 to 1988-90).

Brass is especially rigorous in his development of the case for the deliberate and strategic nature of anti-Muslim bigotry within the politics of Hindu nationalism and its current government. But other experts agree. For example, Ashutosh Varshney described the dynamics of religious conflict in India in very similar terms to those offered by Brass (link):

Organized civic networks, when intercommunal, not only do a better job of withstanding the exogenous communal shocks—like partitions, civil wars, and desecration of holy places; they also constrain local politicians in their strategic behavior. Politicians who seek to polarize Hindu and Muslims for the sake of electoral advantage can tear at the fabric of everyday engagement through the organized might of criminals and gangs. All violent cities in the project showed evidence of a nexus of politicians and criminals. Organized gangs readily disturbed neighborhood peace, often causing migration from communally heterogeneous to communally homogenous neighborhoods, as people moved away in search of physical safety. Without the involvement of organized gangs, large-scale rioting and tens and hundreds of killings are most unlikely, and without the protection afforded by politicians, such criminals cannot escape the clutches of law. Brass has rightly called this arrangement an institutionalized riot system. (378)

Varshney treats these issues in greater detail in his 2002 book, Ethnic Conflict and Civic Life: Hindus and Muslims in India.

The greatest impetus to the political use of the politics of hate and the program of Hindu nationalism was the campaign to destroy the Babri Mosque in Ayodhya, UP, in 1992. For an informative and factual account of the Babri Mosque episode and its role within the current phase of Hindu nationalism in India, see Abdul Majid, “The Babri Mosque and Hindu Extremists Movements”; link.

A course on democracy and intolerance

I am teaching a brand new honors course at my university called “Democracy and the politics of division and hate”. The course focuses on the question of the relationship between democracy and intolerance. As any reader of the world’s news outlets knows, intolerance and bigotry have become ever-more prominent themes in the politics of Western democracies – France, the Netherlands, Germany, Greece, and – yes, the United States. These movements put the values of a liberal democracy to the test.

Here is the course description:

Democracy has been understood as a setting where equal citizens collectively make decisions about law and public policy in an environment of equality, fairness, and mutual respect. Political theorists from Rousseau to JS Mill to Rawls have attempted to define the conditions that make a democratic civil society possible. Today the world’s democracies are challenged by powerful political movements based on intolerance and division. How should democratic theory respond to the challenge of hate-based political movements? The course reexamines classic ideas in democratic theory, current sociological research on hate-based populism, and current strategies open to citizens in the twenty-first century to reclaim the values of tolerance and respect in their democratic institutions. The course is intended to provide students with better intellectual resources for understanding the political developments currently transforming societies as diverse as the United States, Germany, the Netherlands, India, and Nigeria.

The organizing idea is that democratic theorists have generally conceived of a democracy as a polity in which a sense of civic unity is cultivated that ensures a common commitment to the formal and substantive values of a democratic society — the equal worth and rights of all citizens, the rule of law, adherence to the constitution, and respect for the institutions of collective decision-making. (Josh Cohen provided an excellent analysis of Rousseau’s core philosophical ideas about democracy in Rousseau: A Free Community of Equals; link.) John Rawls captures this idea in Political Liberalism, where he introduces the notion of “political liberalism”:

A modern democratic society is characterized not simply by a pluralism of comprehensive religious, philosophical, and moral doctrines but by a pluralism of incompatible yet reasonable comprehensive doctrines…. Political liberalism assumes that, for political purposes, a plurality of reasonable yet incompatible comprehensive doctrines is the normal result of the exercise of human reason within the framework of the free institutions of a constitutional democratic regime. Political liberalism also supposes that a reasonable comprehensive doctrine does not reject the essentials of a democratic regime. (xvi)

This formulation is intended to capture the idea that a democracy always embraces groups of people who disagree about important things. These conflicting value frameworks are what he refers to as “comprehensive doctrines of the good”, and a liberal democracy is neutral among reasonable comprehensive doctrines.

So what is a “reasonable comprehensive doctrine”? Rawls’s conception amounts to precisely this: all such doctrines maintain a commitment to “the essentials of a democratic regime”. He refers to comprehensive doctrines that reject these commitments to political justice as irrational and “mad”:

Of course, a society may also contain unreasonable and irrational, and even mad, comprehensive doctrines. In their case the problem is to contain them so that they do not undermine the unity and justice of society. (xvi)

But here is an important point: Rawls seems to have a robust confidence in the idea that a society that satisfies the conditions of justice and political liberalism will evolve towards a greater degree of civic unity. This seems to imply that he believes that individuals and groups who adhere to their “unreasonable, irrational, and mad” comprehensive doctrines will be led to change their beliefs over time and will gradually come to accept the democratic consensus.

The problem that we consider in the course is that democratic societies seem to have evolved in the opposite direction: doctrines that reject the legitimacy of the fundamentals of liberal democracy (respect for the equality of all citizens and respect for the rule of law) — these doctrines appear to have rapidly gained ground in many democracies in Europe and now the United States. Instead of converging towards a “democratic consensus” where everyone recognizes the legitimacy, equality, and rights of all other citizens, many democracies have developed powerful political movements that reject all these commitments. These are the political movements of division and hate — or the movements of right-wing populism. Democracy depends fundamentally on the principle of tolerance of points of view different from our own. Does that mean that democracy must be “tolerant of the intolerant”, with no effective means of protecting its values and institutions against groups that would subvert its most basic principles?

So how do we take on this set of issues, which involve both political philosophy and the sociology of political mobilization and political psychology?

The course begins by immersing the students in some of the values that define democracy. We begin with John Stuart Mill’s short but influential 1859 book, On Liberty. Mill postulates the equal worth and liberties of all citizens, and argues that a good democracy involves rule by the majority while scrupulously protecting the equal rights and freedoms of all citizens. (Notice the close agreement between this theory and the US Constitution and the Bill of Rights, which we also read.) We then consider the theory of a liberal society put forward by John Rawls in Political Liberalism, where Rawls argues that a democracy depends fundamentally upon a culture of respect for the equal worth and equal rights and liberties of all citizens. This implies that perhaps democracy cannot survive in the absence of such a culture.

This is the positive theory of democracy, as several centuries of philosophers have developed it.

Next we turn to the challenges these theories face in the contemporary world: the rise of hate-based populism in Europe and the United States, and the rising prevalence of racism, bigotry, and violence in many countries. And this is not just a Western problem — think of India, the world’s largest democracy, and the governing party’s inculcation of hate and violence against Muslims. Anti-Semitism, anti-Muslim bigotry, and white supremacy are on the rise. The Front National in France, the Alternative for Germany, and the Party for Freedom in the Netherlands are all examples of political parties that have developed mass followings with appeals based on racism and division, and similar parties exist in most other European countries. White supremacist organizations make the same appeals in the United States as well.

The hard question for us is this: can our liberal democracies find ways of coping with intolerance and hate? Can we reassert the values of civility and mutual respect in ways that build a greater consensus around the values of democracy? Does a democracy have the ability to defend itself against parties who reject the moral premises of democracy?

The assigned readings in the course include several excellent and thought-provoking books from philosophy, sociology, and political theory. We begin with Cas Mudde and Cristóbal Rovira Kaltwasser’s book Populism: A Very Short Introduction, which gives an excellent short overview of the phenomenon of right-wing populism in Europe and the United States, along with a good discussion of the challenge of defining the concept of populism.

We then turn to two weeks on McAdam and Kloos, Deeply Divided: Racial Politics and Social Movements in Postwar America, along with a survey report from the Southern Poverty Law Center on the spread of racist and hate-based organizations in the United States. McAdam and Kloos provide an analysis of the evolution of the mainstream “conservative” political party since the Nixon presidency, and document through survey data and other evidence from empirical political science the rapid increase in racial antagonism in the party’s platforms and behavior when in office (link, link). They offer a convincing demonstration of the racism that underlies the activism of the Tea Party.

The next readings are Justin Gest’s The New Minority: White Working Class Politics in an Age of Immigration and Inequality (link) and Kathleen Blee’s edited volume on women in right-wing movements (link). These books provide an ethnographic perspective on the appeal of right-wing extremism in western democracies, deriving from rapid economic change (deindustrialization) and demographic change (immigration and the rising percentage of populations of color in both Britain and the US). Blee’s volume sheds much light on the role of gender in political mobilization by the right across the spectrum, with substantially more women involved in extremist groups in the US than in Europe.

Next we turn to both longstanding and current strategies by the Bharatiya Janata Party (BJP) in India to manage politics through antagonism against India’s Muslims. Paul Brass’s book The Production of Hindu-Muslim Violence in Contemporary India is the primary source (link), and several good pieces of journalism about the current violence in India against Muslims help to fill in the details of the current situation (link, link, link).

The course ends with a consideration of Robert Putnam’s volume Better Together: Restoring the American Community, which makes the case for civic engagement and civic unity — but in a voice that appears a decade behind events when it comes to the virulence of hate-based activism.

This is a course that is organized entirely around an intensive and engaged student experience. Each session involves lively discussion and student presentations (which have been excellent), and the course aims at helping the students develop their own ideas and judgments. We all learn through open, honest, and respectful dialogue, and every session is engaging and valuable. Most importantly, we have all come to see that these issues of democracy, equality, intolerance, and bigotry pose an enormous challenge that all of us must confront in the twenty-first century.

(For the first session students are asked to view several relevant videos on YouTube:

John Rawls Lecture 1, Modern Political Philosophy

Hate Rising: White Supremacy in America

Robert Putnam on Immigration and Diversity

Cas Mudde on Right-wing Populism

These videos set the stage for many of the topics raised throughout the course.)

An existential philosophy of technology

Ours is a technological culture, at least in the quarter of the countries in the world that enjoy a high degree of economic affluence. Cell phones, computers, autonomous vehicles, CT scan machines, communications satellites, nuclear power reactors, artificial DNA, artificial intelligence bots, drone swarms, fiber optic data networks — we live in an environment that depends unavoidably upon complex, scientifically advanced, and mostly reliable artifacts that go well beyond the comprehension of most consumers and citizens. We often do not understand how they work. But more than that, we do not understand how they affect us in our social, personal, and philosophical lives. We are different kinds of persons than those who came before us, it often seems, because of the sea of technological capabilities in which we swim. We think about our lives differently, and we relate to the social world around us differently.

How can we begin investigating the question of how technology affects the conduct of a “good life”? Is there such a thing as an “existential” philosophy of technology — that is, having to do with the meaning of the lives of human beings in the concrete historical and technological circumstances in which we now find ourselves? This suggests that we need to consider a particularly deep question: in what ways does advanced technology facilitate the good human life, and in what ways does it frustrate and block the good human life? Does advanced technology facilitate and encourage the development of full human beings, and lives that are lived well, or does it interfere with these outcomes?

We are immediately drawn to a familiar philosophical question, What is a good life, lived well? This has been a central question for philosophers since Aristotle and Epicurus, Kant and Kierkegaard, Sartre and Camus. But let’s try to answer it in a paragraph. Let’s postulate that there are a handful of characteristics that are associated with a genuinely valuable human life. These might include the individual’s realization of a capacity for self-rule, creativity, compassion for others, reflectiveness, and an ability to grow and develop. This suggests that we start from the conception of a full life of freedom and development offered by Amartya Sen in Development as Freedom and the list of capabilities offered by Martha Nussbaum in Creating Capabilities: The Human Development Approach — capacities for life, health, imagination, emotions, practical reason, affiliation with others, and self-respect. And we might say that a “life lived well” is one in which the person has lived with integrity, justice, and compassion in developing and fulfilling his or her fundamental capacities. Finally, we might say that a society that enables the development of each of these capabilities in all its citizens is a good society.

Now look at the other end of the issue — what are some of the enhancements to human living that are enabled by modern technologies? There are several obvious candidates. One might say that technology facilitates learning and the acquisition of knowledge; technology can facilitate health (by finding cures and preventions of disease; and by enhancing nutrition, shelter, and other necessities of daily life); technology can facilitate human interaction (through the forms of communication and transportation enabled by modern technology); technology can enhance compassion by acquainting us with the vivid life experiences of others. So technology is sometimes life-enhancing and fulfilling of some of our most fundamental needs and capabilities.

How might Dostoevsky, Dos Passos, Baldwin, or Whitman have adjusted their life plans if confronted by our technological culture? We would hope they would not have been overwhelmed in their imagination and passion for discovering the human in the ordinary by an iPhone, a Twitter feed, and a web browser. We would like to suppose that their insights and talents would have survived and flourished, that poetry, philosophy, and literature would still have emerged, and that compassion and commitment would have found their place even in this alternative world.

But the negative side of technology for human wellbeing is also easy to find. We might say that technology encourages excessive materialism; it draws us away from real interactions with other human beings; it promotes a life consisting of a series of entertaining moments rather than meaningful interactions; and it squelches independence, creativity, and moral focus. So the omnipresence of technologies does not ensure that human beings will live well and fully, by the standards of Aristotle, Epicurus, or Montaigne.

In fact, there is a particularly bleak possibility concerning the lives that advanced everyday technology perhaps encourages: our technological culture encourages us to pursue lives that are primarily oriented towards material satisfaction, entertainment, and toys. This sounds a bit like a form of addiction or substance abuse. We might say that the ambient cultural imperatives of acquiring the latest iPhone, the fastest internet streaming connection, or a Tesla are created by the technological culture that we inhabit, and that these motivations are ultimately unworthy of a fully developed human life. Lucretius, Socrates, and Montaigne would scoff.

It is clear that technology has the power to distort our motives, goals, and values. But perhaps with equal justice one might say that this is a life world created by capitalism rather than technology — a culture that encourages and elicits personal motivations that are “consumerist” and ultimately empty of real human value, a culture that depersonalizes social ties and trivializes human relationships based on trust, loyalty, love, or compassion. This is indeed the critique offered by the philosophers of the Frankfurt School — that capitalism depends upon a life world of crass materialism and impoverished social and personal values. And we can say with some exactness how capitalism distorts humanity and culture in its own image: through the machinations of advertising, strategic corporate communications, and the honoring of acquisitiveness and material wealth (link). It is good business to create an environment where people want more and more of the gadgets that technological capitalism can provide.

So what is a solution for people who worry about the shallowness and vapidity of this kind of technological materialism? We might say that an antidote to excessive materialism and technology fetishism is a fairly simple maxim that each person can strive to embrace: aim to identify and pursue the things that genuinely matter in life, not the glittering objects of short-term entertainment and satisfaction. Be temperate, reflective, and purposive in one’s life pursuits. Decide what values are of the greatest importance, and make use of technology to further those values. Let technology be a tool for creativity and commitment, not an end in itself. Be selective and deliberate in one’s use of technology, rather than being the hapless consumer of the latest and shiniest. Create a life that matters.

Explaining large historical change

Great events happen; people live through them; and both ordinary citizens and historians attempt to make sense of them. Examples of the kinds of events I have in mind include the collapse of communism in Eastern Europe and the USSR; the rise of fascism in Europe in the 1930s; the violent suppression of the Democracy Movement in Tiananmen Square; the turn to right-wing populism in Europe and the United States; and the Rwandan genocide in 1994. My purpose here is to identify some of the important intellectual and conceptual challenges that present themselves in the task of understanding events on this scale. My fundamental points are these: large-scale historical developments are deeply contingent; the scale at which we attempt to understand the event matters; and there is important variation across time, space, region, culture, and setting when it comes to the large historical questions we want to investigate. This means that it is crucial for historians to pay attention to the particulars of institutions, knowledge systems, and social actors that combined to create a range of historical outcomes through a highly contingent and path-dependent process. The question for historiography is this: how can historians do the best job possible of discovering, documenting, and organizing their accounts of these kinds of complex historical happenings?

Is an historical period or episode an objective thing? It is not. Rather, it is an assemblage of different currents, forces, individual actors, institutional realities, international pressures, and popular claims, and there are many different “stories” that we can tell about the period. This is not a claim for relativism or subjectivism; it is rather the simple point, well understood by social scientists and historians, that the social and historical realm is a dense soup of often conflicting tendencies, forces, and agencies. Weber understood this point in his classic essay “‘Objectivity’ in Social Science” when he said that history must be constantly re-invented by successive generations of historians: “There is no absolutely ‘objective’ scientific analysis of culture—or put perhaps more narrowly but certainly not essentially differently for our purposes—of ‘social phenomena’ independent of special and ‘one-sided’ viewpoints according to which—expressly or tacitly, consciously or unconsciously—they are selected, analyzed and organized for expository purposes” (Weber 1949: 72). Think of the radically different accounts offered of the French Revolution by Albert Soboul, Simon Schama, and Alexis de Tocqueville; and yet each offers insightful, honest, and “objective” interpretations of part of the history of this complex event.

We need to recall always that socially situated actors make history. History is social action in time, performed by a specific population of actors, within a specific set of social arrangements and institutions. Individuals act, contribute to social institutions, and contribute to change. People had beliefs and modes of behavior in the past. They did various things. Their activities were embedded within, and in turn constituted, social institutions at a variety of levels. Social institutions, structures, and ideologies supervene upon the historical individuals of a time. Institutions have great depth, breadth, and complexity. Institutions, structures, and ideologies display dynamics of change that derive ultimately from the mentalities and actions of the individuals who inhabit them during a period of time. And both behavior and institutions change over time.

This picture needs of course to reflect the social setting within which individuals develop and act. Our account of the “flow” of human action eventuating in historical change needs to take into account the institutional and structural environment in which these actions take place. Part of the “topography” of a period of historical change is the ensemble of institutions that exist more or less stably in the period: cultural arrangements, property relations, political institutions, family structures, educational practices. But institutions are heterogeneous and plastic, and they are themselves the product of social action. So historical explanations need to be sophisticated in their treatment of institutions and structures.

In Marx’s famous contribution to the philosophy of history, he writes that “men make their own history; but not in circumstances of their own choosing.” And circumstances can be both inhibiting and enabling; they constitute the environment within which individuals plan and act. It is an important circumstance that a given time possesses a fund of scientific and technical knowledge, a set of social relationships of power, and a level of material productivity. It is also an important circumstance that knowledge is limited; that coercion exists; and that resources for action are limited. Within these opportunities and limitations, individuals, from leaders to ordinary people, make out their lives and ambitions through action.

On this line of thought, history is a flow of human action, constrained and propelled by a shifting set of environmental conditions (material, social, epistemic). There are conditions and events that can be described in causal terms: enabling conditions, instigating conditions, cause and effect, … But here my point is to ask you to consider whether uncritical use of the language of cause and effect does not perhaps impose a discreteness of historical events that does not actually reflect the flow of history very well. It is of course fine to refer to historical causes; but we always need to understand that causes depend upon the structured actions of socially constituted individual actors.

A crucial idea in the new philosophy of history is the fact of historical contingency. Historical events are the result of the conjunction of separate strands of causation and influence, each of which contains its own inherent contingency. Social change and historical events are highly contingent processes, in a specific sense: they are the result of multiple influences that “could have been otherwise” and that have conjoined at a particular point in time in bringing about an event of interest. And coincidence, accident, and unanticipated actions by participants and bystanders all deepen the contingency of historical outcomes. However, the fact that social outcomes have a high degree of contingency is entirely consistent with the idea that a social order embodies a broad collection of causal processes and mechanisms. These causal mechanisms are a valid subject of study, even though they do not contribute to a deterministic causal order.

What about scale? Should historians take a micro view, concentrating on local actions and details; or should they take a macro view, seeking out the highest level structures and patterns that might be visible in history? Both perspectives have important shortcomings. There is a third choice available to the historian, however, that addresses shortcomings of both micro- and macro-history. This is to choose a scale that encompasses enough time and space to be genuinely interesting and important, but not so much as to defy valid analysis. This level of scale might be regional – for example, G. William Skinner’s analysis of the macro-regions of China. It might be national – for example, a social history of Indonesia. And it might be supra-national – for example, an economic history of Western Europe. The key point is that historians in this middle range are free to choose the scale of analysis that seems to permit the best level of conceptualization of history, given the evidence that is available and the social processes that appear to be at work. And this mid-level scale permits the historian to make substantive judgments about the “reach” of social processes that are likely to play a causal role in the story that needs telling. This level of analysis can be referred to as “meso-history,” and it appears to offer an ideal mix of specificity and generality.

Here is one strong impression that emerges from almost any area of rigorous historical writing. Variation within a social or historical phenomenon seems to be all but ubiquitous. Think of the Cultural Revolution in China, demographic transition in early modern Europe, the ideology of a market society, or the experience of being black in America. We have the noun — “Cultural Revolution”, “European fascism”, “democratic transition” — which can be explained or defined in a sentence or two; and we have the complex underlying social realities to which it refers, spread out over many regions, cities, populations, and decades.

In each case there is a very concrete and visible degree of variation in the phenomenon over time and place. Historical and social research in a wide variety of fields confirms the non-homogeneity of social phenomena and the profound location-specific variations that occur in the characteristics of virtually all large social phenomena. Social nouns do not generally designate uniform social realities. These facts of local and regional variation provide an immediate rationale for case studies and comparative research: selecting different venues of the phenomenon and identifying its specific features in each location. Through a range of case studies it is possible for the research community to map out both the common features and the distinguishing features of a given social process.

What is the upshot of these observations? It is that good historical writing needs to be attentive to difference — difference across national settings, across social groups, across time; that it should be grounded in many theories of how social processes work, but wedded to none; and that it should pay close attention to the evolution of the social arrangements (institutions) through which individuals conduct their social lives. I hope these remarks also help to make the case that philosophers can be helpful contributors to the work that historians do, by assisting in teasing out some of the conceptual and philosophical issues that they inevitably must confront as they do their work.

The assault on democracy by the right

A democracy depends crucially upon a core set of normative commitments that are accepted on all sides — political parties, citizens, government officials, judges, legislators. Central among these is the idea of the political equality of all citizens and the crucial importance of maintaining equal access to formal participation in democratic processes. In particular, the right to vote must be inviolate for every citizen, without regard to region, religion, gender, race, national origin, or any other criterion. John Rawls encapsulates these commitments within his conception of the political values of a just society in Political Liberalism.

The third feature of a political conception of justice is that its content is expressed in terms of certain fundamental ideas seen as implicit in the public political culture of a democratic society. This public culture comprises the political institutions of a constitutional regime and the public traditions of their interpretation (including those of the judiciary), as well as historic texts and documents that are common knowledge. (13) … A sense of justice is the capacity to understand, to apply, and to act from the public conception of justice which characterizes the fair terms of social cooperation. Given the nature of the political conception as specifying a public basis of justification, a sense of justice also expresses a willingness, if not the desire, to act in relation to others on terms that they also can publicly endorse. (18)

The Voting Rights Act of 1965 was an important step in the development of racial equality in the United States for a number of reasons; but most important was the clear statement it made guaranteeing voting rights to African-American citizens, and the judicial remedies it established for addressing efforts made in various states or localities to limit or block the exercise of those rights. The act prohibited literacy tests and other practices that inhibited or prevented voter registration and voter participation in elections.

However, the Supreme Court decision in 2013 (Shelby County v. Holder) gutted the 1965 act by invalidating the coverage formula on which the requirement of pre-clearance of changes in voting procedures in certain states and jurisdictions depended. This decision appears to have had the effect of allowing states to take steps that reduce participation in elections by under-served minorities (link).

Also important is the idea that the formal decisions within a democracy should depend upon citizens’ preferences, not the expenditure of money for or against a given candidate or act of legislation. The Supreme Court’s 2010 decision in Citizens United v. Federal Election Commission found key provisions of the 2002 Bipartisan Campaign Reform Act unconstitutional because they restricted the freedom of speech of legal persons (corporations and unions). This ruling gave essentially unlimited rights to corporations to provide financial support to candidates and legislative initiatives; in one stroke it diminished the political voice of ordinary voters to a vanishing level. Big money in politics became the decisive factor in determining the outcomes of political disagreements within our democracy. (Here is a summary from the Washington Post on the effects of Citizens United on campaign spending; link.)

The 2014 book by Doug McAdam and Karina Kloos, Deeply Divided: Racial Politics and Social Movements in Postwar America, is profoundly alarming for a number of reasons. They make clear the pivotal role that the politics of race have played in American electoral politics since the Nixon presidency. Most recently, the Tea Party social movement appears to be substantially motivated by racism.

The question is: where did this upsurge in “old-fashioned racism” come from? Based on the best survey data on support for the Tea Party, it seems reasonable to credit the movement for at least some of the infusion of more extreme racial views and actions into American politics. We begin by considering the racial attitudes of Tea Party supporters and what that suggests about the animating racial politics of the movement wing of the Republican Party. In this, we rely on two sources of data: the multi-state surveys of support for the Tea Party conducted by Parker and Barreto in 2010 and 2011 and Abramowitz’s analysis of the October 2010 wave of the American National Election Studies. (KL 5008)

Based on this survey data, they conclude:

Support for the Tea Party is thus decidedly not the same thing as conventional conservatism or traditional partisan identification with the Republican Party. Above all else, it is race and racism that runs through and links all three variables discussed here. Whatever else is motivating supporters, racial resentment must be seen as central to the Tea Party and, by extension, to the GOP as well in view of the movement’s significant influence within the party. (KL 5053)

Most alarming is the evidence McAdam and Kloos offer of a deliberate, widespread effort to suppress the voting rights of specific groups. Voter suppression occurs through restrictions on the voting process itself; and, more systemically, it occurs through the increasingly powerful ability of state legislatures to engage in data-supported strategies of gerrymandering. And they connect the dots from these attitudes about race to the political strategies of elected officials reflecting this movement:

Nor is the imprint of race and racism on today’s GOP only a matter of attitudes. It was also reflected in the party’s transparent efforts to disenfranchise poor and minority voters in the run-up to the 2012 election. It may well be that the country has never seen a more coordinated national effort to constrain the voting rights of particular groups than we saw in 2012. Throughout the country, Republican legislators and other officials sought to enact new laws or modify established voting procedures which, in virtually all instances, would have made it harder—in some cases, much harder—for poor and minority voters to exercise the franchise. (KL 5053)

Through gerrymandering, the votes of a large percentage of the electorate are rendered functionally meaningless; these voters live in districts that have been designed as “safe districts,” in which the candidates of one party (most commonly the Republican Party, though there are certainly examples of Democratic gerrymandering as well) are all but certain to win election. Consider these completely deranged districts from Illinois, Georgia, Louisiana, and North Carolina:

And nation-wide, the power of state legislatures to create gerrymandered districts has led to a lopsided political map, where only a few districts are genuinely competitive:

So the preferences of a given bloc of voters among candidates in a Republican safe district have zero likelihood of bringing about the election of the competing candidate. McAdam and Kloos are very explicit about the threat to democracy that these efforts pose and about the deliberateness with which the Republican Party has carried out these strategies over the past several decades. They are explicit as well in documenting the goal of these efforts: to suppress votes by racial groups who have traditionally supported Democratic candidates for office.

The efforts at voter suppression documented by McAdam and Kloos have continued unabated, even accelerated, since the 2014 publication of their book.

The hard question raised by Deeply Divided is not answered in the book, because it is very hard to answer at all: how will the public manage to claim back its rights of equality and equal participation? How will democracy be restored as the operative principle of our country?

Fascist attacks on democracy

The hate-based murders of at least nine young people in Hanau, Germany this week brought the world’s attention once again to right-wing extremism in Germany and elsewhere. The prevalence of right-wing extremist violence in Germany today is shocking, and it presents a deadly challenge to democratic institutions in modern Germany. Here is the German justice minister, quoted in the New York Times (link):

“Far-right terror is the biggest threat to our democracy right now,” Christine Lambrecht, the justice minister, told reporters on Friday, a day after joining the country’s president at a vigil for the victims. “This is visible in the number and intensity of attacks.”

Extremist political parties like the Alternative for Germany (AfD) and the National Democratic Party (link, link) have moved from fringe extremism to powerful political organizations in Germany, and it is not clear that the German government has strategies that will work in reducing their power and influence. Most important, these parties, and many lesser organizations, spread a message of populist hate, division, and distrust that motivates some Germans to turn to violence against immigrants and other targeted minorities. These political messages can rightly be blamed for cultivating an atmosphere of hate and resentment that provokes violence. Right-wing populist extremism is a fertile ground for political and social violence; hate-based activism leads to violence. (Here is an excellent report from the BBC on the political messages and growing political influence of AfD in Germany; link.)

Especially disturbing for the fate of democracy in Germany is the fact that there is a rising level of violence and threat against local elected officials over their support for refugee integration. (Here is a story in the New York Times (2/21/20) that documents this aspect of the crisis; link.) The story opens with an account of the near-fatal attack in 2015 on Henriette Reker, then a candidate for mayor of Cologne. She survived the attack and won the election, but she has been subject to horrendous death threats ever since. And she is not alone; local officials in many towns and municipalities have been subjected to similar persistent threats. According to the story, there were 1,240 politically motivated attacks against politicians and elected officials (link). Of these attacks, about 33% were attributed to right-wing extremists, roughly double the number attributed to left-wing extremists. Here is a summary from the Times story:

The acrimony is felt in town halls and village streets, where mayors now find themselves the targets of threats and intimidation. The effect has been chilling. 

Some have stopped speaking out. Many have quit, tried to arm themselves or taken on police protection. The risks have mounted to such an extent that some German towns are unable to field candidates for leadership at all. 

“Our democracy is under attack at the grass-roots level,” Ms. Reker said in a recent interview in Cologne’s City Hall. “This is the foundation of our democracy, and it is vulnerable.” 

This is particularly toxic for the institutions of democratic governance, because the direct and obvious goal is to intimidate government officials from carrying out their duties. This is fascism.

What strategies exist that will help to reduce the appeal of right-wing extremism and the currents of hatred and resentment that these forms of populism thrive on? In practical terms, how can liberal democracies (e.g. Germany, Britain, or the United States) reduce the appeal of white supremacy, nationalism, racism, and xenophobia while enhancing citizens’ commitment to the civic values of equality and rule of law?

One strategy involves strengthening the institutions of democracy and the trust and confidence that citizens have in those institutions. This is the approach developed in an important 2013 issue of Daedalus (link) devoted to civility and the common good. This approach includes efforts at improving civic education for young people. It also includes reforming political and electoral institutions in such a way as to address the obvious sources of inequality of voice that they currently involve. In the United States, for example, the prevalence of extreme and politicized practices of gerrymandering has the obvious effect of reducing citizens’ confidence in their electoral institutions. Their elected officials have deliberately taken policy steps to reduce citizens’ ability to affect electoral outcomes. Likewise, the erosion of voting rights in the United States through racially aimed changes to voter registration procedures, polling hours and locations, and other aspects of the institutions of voting provokes cynicism and detachment from the institutions of government. (McAdam and Kloos make these arguments in Deeply Divided: Racial Politics and Social Movements in Postwar America.)

Second, much of the appeal of right-wing extremism turns on lies about minorities (including immigrants). Mainstream and progressive parties should do a much better job of communicating the advantages to the whole of society that flow from diversity, talented immigrants, and an inclusive community. Mainstream parties need to expose and de-legitimize the lies that right-wing politicians use to stir up anger, resentment, and hatred against various other groups in society, and they need to convey a powerful and positive narrative of their own.

Another strategy to enhance civility and commitment to core democratic values is to reduce the economic inequalities that all too often provoke resentment and distrust across groups within society. Justin Gest illustrates this dynamic in The New Minority; the dis-employed workers in East London and Youngstown, Ohio have good reason to think their lives and concerns have been discarded by the economies in which they live. As John Rawls believed, a stable democracy depends upon the shared conviction that the basic institutions of society are working to the advantage of all citizens, not just the few (Justice as Fairness: A Restatement).

Finally, there is the police response. Every government has a responsibility to protect its citizens from violence. When groups actively conspire to commit violence against others — whether it is Baader-Meinhof, radical spinoffs of AfD, or the KKK — the state has a responsibility to uncover, punish, and disband those groups. Germany’s anti-terrorist police forces are now placing higher priority on right-wing terrorism than they apparently have in the past, and this is a clear responsibility for a government with a duty to ensure the safety of the public (link). (It is worrisome to find that members of the police and military are themselves sometimes implicated in right-wing extremist groups in Germany.) Here are a few paragraphs from a recent Times article on arrests of right-wing terrorists:

BERLIN — Twelve men — one a police employee — were arrested Friday on charges of forming and supporting a far-right terrorism network planning wide-ranging attacks on politicians, asylum seekers and Muslims, the authorities said.

The arrests come as Germany confronts both an increase in violence and an infiltration of its security services by far-right extremists. After focusing for years on the risks from Islamic extremists and foreign groups, officials are recalibrating their counterterrorism strategy to address threats from within.

The arrests are the latest in a series of episodes that Christine Lambrecht, the justice minister, called a “very worrying right-wing extremist and right-wing terrorist threat in our country.”

“We need to be particularly vigilant and act decisively against this threat,” she said on Twitter. (link)

The German political system is not well prepared for the onslaught of radical right-wing populism and violence. But much the same can be said of the United States, with a president who espouses many of the same hate-based doctrines that fuel the rise of radical populism in other countries, and in a national climate where hate-based crimes have accelerated in the past several years. (Here is a recent review of hate-based groups and crimes in the United States provided by the Southern Poverty Law Center; link.) And, as in Germany, the FBI has been slow to place appropriate priority on the threat of right-wing terrorism in the United States.

(This opinion piece in the New York Times by Anna Sauerbrey (link) describes one tool available to the German government that is not available in the United States — strong legal prohibitions of neo-Nazi propaganda and incitement to hatred:

“There is the legal concept of Volksverhetzung,” the incitement to hatred: Anybody who denigrates an individual or a group based on their ethnicity or religion, or anybody who tries to rouse hatred or promotes violence against such a group or an individual, could face a sentence of up to five years in prison.

Because of the virtually unlimited protection of freedom of speech and association guaranteed by the First Amendment, these prohibitions do not exist in the United States. Here is an earlier discussion of this topic (link).)

Slime mold intelligence

We often think of intelligent action in terms of a number of ideas: goal-directedness, belief acquisition, planning, prioritization of needs and wants, oversight and management of bodily behavior, and weighting of the risks and benefits of alternative courses of action. These assumptions presuppose the existence of a rational subject who actively orchestrates goals, beliefs, and priorities into an intelligent plan of action. (Here is a series of posts on “rational life plans”; link, link, link.)

It is interesting to discover that some simple adaptive systems apparently embody an ability to modify behavior so as to achieve a specific goal without possessing a number of these cognitive and computational functions. These systems seem to embody some kind of cross-temporal intelligence. An example that is worth considering is the spatial and logistical capabilities of the slime mold. A slime mold is a multi-cellular “organism” consisting of large numbers of independent cells without a central control function or nervous system. It is perhaps more accurate to refer to the population as a colony rather than an organism. Nonetheless the slime mold has a remarkable ability to seek out and “optimize” access to food sources in the environment through the creation of a dynamic network of tubules established through space.

The slime mold lacks beliefs, it lacks a central cognitive or executive function, it lacks “memory” — and yet the organism (colony?) achieves a surprising level of efficiency in exploring and exploiting the food environment that surrounds it. Researchers have used slime molds to simulate the structure of logistical networks (rail and road networks, telephone and data networks), and the results are striking. A slime mold colony appears to be “intelligent” in performing the task of efficiently discovering and exploiting food sources in the environment in which it finds itself.

One of the earliest explorations of this parallel between biological networks and human-designed networks was Tero et al., “Rules for Biologically Inspired Adaptive Network Design,” published in Science in 2010 (link). Here is the abstract of their article:

Abstract Transport networks are ubiquitous in both social and biological systems. Robust network performance involves a complex trade-off involving cost, transport efficiency, and fault tolerance. Biological networks have been honed by many cycles of evolutionary selection pressure and are likely to yield reasonable solutions to such combinatorial optimization problems. Furthermore, they develop without centralized control and may represent a readily scalable solution for growing networks in general. We show that the slime mold Physarum polycephalum forms networks with comparable efficiency, fault tolerance, and cost to those of real-world infrastructure networks—in this case, the Tokyo rail system. The core mechanisms needed for adaptive network formation can be captured in a biologically inspired mathematical model that may be useful to guide network construction in other domains.

Their conclusion is this:

Overall, we conclude that the Physarum networks showed characteristics similar to those of the [Japanese] rail network in terms of cost, transport efficiency, and fault tolerance. However, the Physarum networks self-organized without centralized control or explicit global information by a process of selective reinforcement of preferred routes and simultaneous removal of redundant connections. (441)

They attempt to uncover the mechanism through which this selective reinforcement of routes takes place, using a simulation “based on feedback loops between the thickness of each tube and internal protoplasmic flow in which high rates of streaming stimulate an increase in tube diameter, whereas tubes tend to decline at low flow rates” (441). The simulation is successful in approximately reproducing the observable dynamics of evolution of the slime mold networks. Here is their summary of the simulation:

Our biologically inspired mathematical model can capture the basic dynamics of network adaptability through iteration of local rules and produces solutions with properties comparable or better than those real-world infrastructure networks. Furthermore, the model has a number of tunable parameters that allow adjustment of the benefit-cost ratio to increase specific features, such as fault tolerance or transport efficiency, while keeping costs low. Such a model may provide a useful starting point to improve routing protocols and topology control for self-organized networks such as remote sensor arrays, mobile ad hoc networks, or wireless mesh networks. (442)

Here is a summary description of what we might call the “spatial problem-solving abilities” of the slime mold, based on this research, from Katherine Harmon in a Scientific American blog post (link):

Like the humans behind a constructed network, the organism is interested in saving costs while maximizing utility. In fact, the researchers wrote that this slimy single-celled amoeboid can “find the shortest path through a maze or connect different arrays of food sources in an efficient manner with low total length yet short average minimum distances between pairs of food sources, with a high degree of fault tolerance to accidental disconnection”—and all without the benefit of “centralized control or explicit global information.” In other words, it can build highly efficient connective networks without the help of a planning board.

This research has several noteworthy features. First, it seems to provide a satisfactory account of the mechanism through which slime mold “network design intelligence” is achieved. Second, the explanation depends only on locally embodied responses, without needing to appeal to any sort of central coordination or calculation. The process is entirely myopic, and the “global intelligence” of the organism is generated wholly by the local states of its parts. And finally, the simulation appears to offer resources for solving real problems of network design, without the trouble of sending out a swarm of slime mold colonies to work out the most efficient array of connectors.

We might summarize this level of slime-mold intelligence as being captured by:

  • trial-and-error extension of lines of exploration
  • localized feedback on results of a given line leading to increase/decrease of the volume of that line

This system is decentralized and myopic with no ability to plan over time and no “over-the-horizon” vision of potential gains from new lines of exploration. In these respects slime-mold intelligence has a lot in common with the evolution of species in a given ecological environment. It is an example of “climbing Mt. Improbable” involving random variation and selection based on a single parameter (volume of flow rather than reproductive fitness). If this is a valid analogy, then we might be led to expect that the slime mold is capable of finding local optima in network design but not global optima. (Or the slime colony may avoid this trap by being able to fully explore the space of network configurations over time.) What the myopia of this process precludes is the possibility of strategic action and planning — absorbing sacrifices at an early part of the process in order to achieve greater gains later in the process. Slime molds would not be very good at chess, Go, or war.
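The two local rules above, essentially the feedback model Tero et al. describe, can be conveyed in a short simulation. This is a minimal illustrative sketch, not the authors' actual code: a tiny four-node network offers two routes between food sources, and the edge lengths and update constants are invented for the example.

```python
import numpy as np

# A minimal sketch of the Tero et al. feedback model: conductivities of
# well-used tubes grow, little-used tubes decay.  Two routes join food
# source 0 to food source 3: via node 1 (total length 2) and via node 2
# (total length 3).  Lengths and constants here are illustrative only.
edges = [(0, 1, 1.0), (1, 3, 1.0), (0, 2, 1.0), (2, 3, 2.0)]
n = 4
D = np.ones(len(edges))          # tube conductivities, equal at the start
src, sink = 0, 3

for _ in range(200):
    # Kirchhoff's laws: solve the graph-Laplacian system for pressures.
    A = np.zeros((n, n))
    for k, (u, v, L) in enumerate(edges):
        g = D[k] / L
        A[u, u] += g; A[v, v] += g
        A[u, v] -= g; A[v, u] -= g
    b = np.zeros(n)
    b[src], b[sink] = 1.0, -1.0          # unit flow pumped in and out
    keep = np.arange(n) != sink          # ground the sink node
    p = np.zeros(n)
    p[keep] = np.linalg.solve(A[np.ix_(keep, keep)], b[keep])
    # Local update rule: flux reinforces a tube, disuse shrinks it.
    for k, (u, v, L) in enumerate(edges):
        Q = D[k] / L * (p[u] - p[v])
        D[k] += 0.1 * (abs(Q) - D[k])

print(np.round(D, 3))   # the shorter route survives; the longer atrophies
```

Running the loop, the two tubes on the shorter route converge to high conductivity while the longer route's tubes decay toward zero: route selection without any global comparison of path lengths, exactly the myopic, locally embodied process described above.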

I’ve been tempted to offer the example of slime mold intelligence as a description of several important social processes apparently involving collective intentionality: corporate behavior and discovery of pharmaceuticals (link) and the aggregate behavior of large government agencies (link).

On pharmaceutical companies:

So here’s the question for consideration here: what if we attempted to model the system of population, disease, and the pharmaceutical industry by representing pharma and its multiple research and discovery units as the slime organism and the disease space as a set of disease populations with different profitability characteristics? Would we see a major concentration of pharma slime around a few high-frequency, high profit disease-drug pairs? Would we see substantial under-investment of pharma slime on low frequency low profit “orphan” disease populations? And would we see hyper-concentrations around diseases whose incidence is responsive to marketing and diagnostic standards? (link)

On the “intelligence” of firms and agencies:

But it is perfectly plain that the behavior of functional units within agencies are only loosely controlled by the will of the executive. This does not mean that executives have no control over the activities and priorities of subordinate units. But it does reflect a simple and unavoidable fact about large organizations. An organization is more like a slime mold than it is like a control algorithm in a factory. (link)

In each instance the analogy works best when we emphasize the relative weakness of central strategic control (executives) and the solution-seeking activities of local units. But of course there is a substantial degree of executive involvement in both private and public organizations — not fully effective, not algorithmic, but present nonetheless. So the analogy is imperfect. It might be more accurate to say that the behavior of large complex organizations incorporates both imperfect central executive control and the activities of local units with myopic search capabilities coupled with feedback mechanisms. The resulting behavior of such a system will not look at all like the idealized business-school model of “fully implemented rational business plans”, but it will also not look like a purely localized resource-maximizing network of activities.

******

Here is a very interesting set of course notes in which Prof. Donglei Du from the University of New Brunswick sets the terms for a computational and heuristic solution to a similar set of logistics problems. Du asks his students to consider the optimal locations of warehouses to supply retailers in multiple locations; link. Here is how Du formulates the problem:
Assuming that plants and retailer locations are fixed, we concentrate on the following strategic decisions in terms of warehouses.

  • Pick the optimal number, location, and size of warehouses 
  • Determine optimal sourcing strategy
    • Which plant/vendor should produce which product 
  • Determine best distribution channels
    • Which warehouses should service which retailers

The objective is to design or reconfigure the logistics network so as to minimize annual system-wide costs, including

  • Production/ purchasing costs
  • Inventory carrying costs, and facility costs (handling and fixed costs)
  • Transportation costs

As Du demonstrates, the mathematics involved in an exact solution are challenging, and become rapidly more difficult as the number of nodes increases.
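The flavor of Du's problem can be conveyed with a toy instance. All of the numbers below are hypothetical; at this scale a brute-force search over every nonempty subset of candidate warehouse sites works, and its exponential growth in the number of sites is precisely why larger instances require the heuristic methods Du develops.

```python
from itertools import combinations

# Toy instance of the warehouse-location problem.  All figures are
# hypothetical: annual fixed cost of opening each candidate site, and
# the annual cost of serving each retailer from each site.
fixed_cost = {"A": 100, "B": 120, "C": 90}
ship = {
    "A": {"r1": 20, "r2": 40, "r3": 50},
    "B": {"r1": 48, "r2": 15, "r3": 25},
    "C": {"r1": 26, "r2": 35, "r3": 18},
}
retailers = ["r1", "r2", "r3"]
sites = sorted(fixed_cost)

def total_cost(open_sites):
    # Each retailer is served by its cheapest open warehouse.
    serving = sum(min(ship[s][r] for s in open_sites) for r in retailers)
    return serving + sum(fixed_cost[s] for s in open_sites)

# Exhaustive search: 2^n - 1 nonempty subsets, feasible only for tiny n.
candidates = [
    frozenset(c)
    for k in range(1, len(sites) + 1)
    for c in combinations(sites, k)
]
best = min(candidates, key=total_cost)
print(sorted(best), total_cost(best))   # -> ['C'] 169
```

With three candidate sites there are only seven subsets to check; with forty sites there are over a trillion, which is the combinatorial explosion behind Du's observation that exact solutions rapidly become intractable.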

Even though this example looks rather similar to the rail system example above, it is difficult to see how it might be modeled using a slime mold colony. The challenge seems to be that the optimization problem here is the question of placement of nodes (warehouses) rather than placement of routes (tubules).

Methods of causal inquiry

This diagram provides a map of an extensive set of methods of causal inquiry in the social sciences. The goal here is to show that the many approaches that social scientists have taken to discovering causal relationships have an underlying order, and they can be related to a small number of ontological ideas about social causation. (Here is a higher resolution version of the image; link.)

We begin with the idea that causation involves the production of an outcome by a prior set of conditions mediated by a mechanism. The task of causal inquiry is to discover the events, conditions, and processes that combine to bring about the outcome of interest. Given that causal relationships are often unobservable and complexly intertwined with multiple other causal processes, we need to have methods of inquiry to allow us to use observable evidence and hypothetical theories about causal mechanisms to discover valid causal relationships.

The upper left node of the diagram reviews the basic elements of the ontology of social causation. It gives priority to the idea of causal realism — the view that social causes are real and inhere in a substrate of social action constituted by social actors and their relations and interactions. This substrate supports the existence of causal mechanisms (and powers) through which causal relations unfold. Causes are often manifest in a set of necessary and/or sufficient conditions, and they support (and are supported by) counterfactual statements — our reasoning about what would have occurred in somewhat different circumstances: if X had not occurred, Y would not have occurred. The important qualification to the simple idea of exceptionless causation is the fact that much causation is probabilistic rather than exceptionless: the cause increases (or decreases) the likelihood of occurrence of its effect. Both exceptionless causation and probabilistic causation support the basic Humean idea that causal relations are often manifest in observable regularities.

These features of real causal relations give rise to a handful of different methods of inquiry.

First, there is a family of methods of causal inquiry that involve search for underlying causal mechanisms. These include process tracing, individual case studies, paired comparisons, comparative historical sociology, and the application of theories of the middle range.

Second, the ontology of generative causal mechanisms suggests the possibility of simulations as a way of probing the probable workings of a hypothetical mechanism. Agent-based models and computational simulations more generally are formal attempts to identify the dynamics of the mechanisms postulated to bring about specific social outcomes.

Third, the fact that causes produce their effects supports the use of experimental methods. Both exceptionless causation and probabilistic causation support experimentation; the researcher attempts to discern causation by creating a pair of experimental settings differing only in the presence or absence of the “treatment” (the hypothesized causal agent), and observing the outcome.

Fourth, the fact that exceptionless causation produces a set of relationships among events that illustrate the logic of necessary and sufficient conditions permits a family of methods inspired by J.S. Mill's methods of agreement and difference. If we can identify all potentially relevant causal factors for the occurrence of an outcome, and if we can discover a real case illustrating every combination of presence and absence of those factors and the outcome of interest, then we can use truth-functional logic to infer the necessary and/or sufficient conditions that produce the outcome. These results constitute J.L. Mackie’s INUS conditions for the causal system under study (insufficient but non-redundant parts of a condition which is itself unnecessary but sufficient for the occurrence of the effect). Charles Ragin’s Boolean methods and fuzzy-set theories of causal analysis and the method of qualitative comparative analysis conform to the same logical structure.
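The logic of this family of methods can be illustrated with a small sketch. Assume, hypothetically, that we have observed every combination of three candidate factors together with the outcome, and that the outcome in fact follows the rule "(A and B) or C". A search for minimal sufficient conjunctions then recovers the INUS structure from the truth table alone.

```python
from itertools import combinations, product

# Hypothetical complete "truth table": every combination of three
# candidate factors A, B, C together with the observed outcome O.
# Here O in fact occurs exactly when (A and B) or C holds; the search
# below recovers that structure from the table alone.
factors = ["A", "B", "C"]
cases = [((a, b, c), (a and b) or c)
         for a, b, c in product([True, False], repeat=3)]

def sufficient(conj):
    # conj maps a factor index to a required truth value; the conjunction
    # is sufficient if every observed case satisfying it shows the outcome.
    matching = [o for vals, o in cases
                if all(vals[i] == v for i, v in conj.items())]
    return bool(matching) and all(matching)

# Scan conjunctions from smallest to largest, keeping only those not
# already implied by a smaller sufficient conjunction.
prime = []
for size in range(1, len(factors) + 1):
    for idxs in combinations(range(len(factors)), size):
        for vals in product([True, False], repeat=size):
            conj = dict(zip(idxs, vals))
            if sufficient(conj) and not any(p.items() <= conj.items() for p in prime):
                prime.append(conj)

for p in prime:
    print(" & ".join(f"{factors[i]}={v}" for i, v in sorted(p.items())))
# prints the two sufficient routes to O:  C=True  and  A=True & B=True
```

Here A is an insufficient but non-redundant part of the conjunction (A and B), which is itself an unnecessary but sufficient condition for the outcome: Mackie's INUS pattern in miniature.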

Probabilistic causation cannot be discovered using these Boolean methods, but it is possible to use statistical and probabilistic methods in application to large datasets to discover facilitating and inhibiting conditions and multifactorial and conjunctural causal relations. Statistical analysis can produce evidence of what Wesley Salmon refers to as “causal relevance” (conditional probabilities that differ from background population probabilities). This is expressed as: P(O|A&B&C) ≠ P(O).
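Salmon's criterion is easy to state computationally. Here is a minimal sketch on synthetic data, with the exposure rate and risk figures invented for illustration: a factor E is causally relevant to outcome O just in case the conditional frequency of O among the exposed diverges from its overall population frequency.

```python
import random

random.seed(0)

# Synthetic population (illustrative numbers only): 30% of individuals
# carry exposure E, which raises the probability of outcome O from a
# 10% baseline to 40%.
population = []
for _ in range(100_000):
    e = random.random() < 0.30
    o = random.random() < (0.40 if e else 0.10)
    population.append((e, o))

p_o = sum(o for _, o in population) / len(population)
exposed = [o for e, o in population if e]
p_o_given_e = sum(exposed) / len(exposed)

# Salmon's causal relevance: P(O | E) differs from the background P(O).
print(round(p_o, 3), round(p_o_given_e, 3))
```

The sample frequencies come out near the underlying parameters (P(O) ≈ 0.19, P(O|E) ≈ 0.40), so the inequality P(O|E) ≠ P(O) registers E as causally relevant; of course the inequality alone is evidence, not proof, since confounding can produce the same divergence.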

Finally, the fact that causal factors can be relied upon to give rise to some kind of statistical association between factors and outcomes supports the application of methods of inquiry involving regression, correlation analysis, and structural equation modeling.

It is important to emphasize that none of these methods is privileged over all the others, and none permits a purely inductive or empirical study to arrive at valid claims about causation. Instead, we need to have hypotheses about the mechanisms and powers that underlie the causal relationships we identify, and the features of the causal substrate that give these mechanisms their force. In particular, it is sometimes believed that experimental methods, randomized controlled trials, or purely statistical analysis of large datasets can establish causation without reference to hypothesis and theory. However, none of these claims stands up to scrutiny. There is no “gold standard” of causal inquiry.

This means that causal inquiry requires a plurality of methods of investigation, and it requires that we arrive at theories and hypotheses about the real underlying causal mechanisms and substrate that give rise to (“generate”) the outcomes that we observe.

Generativity and emergence

Social entities and structures have properties that exercise causal influence over all of us, and over the continuing development of the society in which we live. Schools, corporations, armies, terror networks, transport networks, markets, churches, and cities all fall in this range — they are social compounds or entities that shape the behavior of the individuals who live and work within them, and they have substantial effects on the broader society as well.

So it is unsurprising that sociologists and ordinary observers alike refer to social structures, organizations, and practices as real components of the social world. Social entities have properties that make a difference, at the individual level and at the social and historical level. Individuals are influenced by the rules and practices of the organizations that employ them; and political movements are influenced by the competition that exists among various religious organizations. Putting the point simply, social entities have real causal properties that influence daily life and the course of history.

What is less clear in the social sciences, and in the areas of philosophy that take an interest in such things, is where those causal properties come from. We know from physics that the causal properties of metallic silver derive from the quantum-level properties of the atoms that make it up. Is something parallel to this true in the social realm as well? Do the causal properties of a corporation derive from the properties of the individual human beings who make it up? Are social properties reducible to individual-level facts?

John Stuart Mill was an early advocate for methodological individualism. In 1843 he wrote his System of Logic: Ratiocinative and Inductive, which contained his view of the relationships that exist between the social world and the world of individual thought and action:

All phenomena of society are phenomena of human nature, generated by the action of outward circumstances upon masses of human beings; and if, therefore, the phenomena of human thought, feeling, and action are subject to fixed laws, the phenomena of society can not but conform to fixed laws. (Book VI, chap. VI, sect. 2)

With this position he set the stage for much of the thinking in social science disciplines like economics and political science, with the philosophical theory of methodological individualism.

About sixty years later Emile Durkheim took the opposite view. He believed that social properties were autonomous with respect to the individuals that underlie them. In 1901 he wrote in the preface to the second edition of Rules of Sociological Method:

Whenever certain elements combine and thereby produce, by the fact of their combination, new phenomena, it is plain that these new phenomena reside not in the original elements but in the totality formed by their union. The living cell contains nothing but mineral particles, as society contains nothing but individuals. Yet it is patently impossible for the phenomena characteristic of life to reside in the atoms of hydrogen, oxygen, carbon, and nitrogen…. Let us apply this principle to sociology. If, as we may say, this synthesis constituting every society yields new phenomena, differing from those which take place in individual consciousness, we must, indeed, admit that these facts reside exclusively in the very society itself which produces them, and not in its parts, i.e., its members…. These new phenomena cannot be reduced to their elements. (preface to the 2nd edition)

These ideas provided the basis for what we can call “methodological holism”.

So the issue between Mill and Durkheim is the question of whether the properties of the higher-level social entity can be derived from the properties of the individuals who make up that entity. Mill believed yes, and Durkheim believed no.

This debate persists to the current day, and the positions on each side are now more developed, more nuanced, and more directly relevant to social-science research. Consider first what we might call “generativist social-science modeling”. This approach holds that methodological individualism is obviously true, and the central task for the social sciences is to actually perform the reduction of social properties to the actions of individuals by providing computational models that reproduce the social property based on a model of the interacting individuals. These models are called “agent-based models” (ABM). Computational social scientist Joshua Epstein is a recognized leader in this field; his Growing Artificial Societies: Social Science from the Bottom Up (with Robert Axtell) and Generative Social Science: Studies in Agent-Based Computational Modeling provide developed examples of ABMs designed to explain well-known social phenomena, from the disappearance of the Anasazi in the American Southwest to the occurrence of social unrest. Here is his summary statement of the approach:

To the generativist, explaining macroscopic social regularities, such as norms, spatial patterns, contagion dynamics, or institutions requires that one answer the following question: How could the autonomous local interactions of heterogeneous boundedly rational agents generate the given regularity? Accordingly, to explain macroscopic social patterns, we generate—or “grow”—them in agent models.

Epstein’s memorable aphorism summarizes the field — “If you didn’t grow it, you didn’t explain its emergence.” A very clear early example of this approach is an agent-based simulation of residential segregation provided by Thomas Schelling in “Dynamic Models of Segregation” (Journal of Mathematical Sociology, 1971; link). The model shows that simple assumptions about the neighborhood-composition preferences of individuals of two groups, combined with the fact that individuals can freely move to locations that satisfy their preferences, leads almost invariably to strongly segregated urban areas.
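The logic of Schelling's result can be conveyed with a drastically simplified sketch. This is a one-dimensional toy, not Schelling's two-dimensional checkerboard model: agents on a ring are discontented only when neither neighbor shares their type, and discontented agents relocate. Even this very mild preference produces marked clustering.

```python
import random

random.seed(1)

# A one-dimensional toy version of Schelling's segregation model (his
# original uses a two-dimensional grid).  Agents of two types occupy a
# ring; an agent is discontented only if NEITHER neighbor shares its
# type, and a discontented agent swaps into a randomly chosen spot of
# the other type where it would be content (ignoring the corner case
# where the two positions are adjacent).
N = 100
agents = [random.choice("XO") for _ in range(N)]

def unhappy(i):
    t = agents[i]
    return agents[(i - 1) % N] != t and agents[(i + 1) % N] != t

def segregation():
    # Fraction of adjacent pairs that are same-type; about 0.5 at random.
    return sum(agents[i] == agents[(i + 1) % N] for i in range(N)) / N

before = segregation()
for _ in range(20_000):
    i = random.randrange(N)
    if not unhappy(i):
        continue
    j = random.randrange(N)
    t = agents[i]
    content_there = agents[(j - 1) % N] == t or agents[(j + 1) % N] == t
    if agents[j] != t and content_there:
        agents[i], agents[j] = agents[j], agents[i]

print(round(before, 2), "->", round(segregation(), 2))
```

Starting from a well-mixed arrangement, the population settles into runs of like agents well above the random baseline, even though no agent wants anything more than one like neighbor: a macro-pattern "grown" from micro-preferences, which is Epstein's point in miniature.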

There is a surface plausibility to the generativist approach, but close inspection of many of these simulations lays bare some important deficiencies. In particular, a social simulation necessarily abstracts mercilessly from the complexities of both the social environment and the dynamics of individual action. It is difficult to represent the workings of higher-level social entities within an agent-based model — for example, organizations and social practices. And ABMs are not well designed for the task of representing dynamic social features that other researchers on social action take to be fundamental — for example, the quality of leadership, the content of political messages, or the high degree of path dependence that most real instances of political mobilization reflect.

So if methodological individualism is a poor guide to social research, what is the alternative? The strongest opposition to generativism and reductionism is the view that social properties are “emergent”. This means that social ensembles sometimes possess properties that cannot be explained by or reduced to the properties and actions of the participants. For example, it is sometimes thought that a political movement (e.g. Egyptian activism in Tahrir Square in 2011) possessed characteristics that were different in kind from the properties of the individuals and activists who made it up.

There are a few research communities currently advocating for a strong concept of emergence. One is the field of critical realism, a philosophy of science developed by Roy Bhaskar in A Realist Theory of Science (1975) and The Possibility of Naturalism (1979). According to Bhaskar, we need to investigate the social world by looking for the real (though usually unobservable) mechanisms that give rise to social stability and change. Bhaskar is anti-reductionist, and he maintains that social entities have properties that are different in kind from the properties of individuals. In particular, he believes that the social mechanisms that generate the social world are themselves created by the autonomous causal powers of social entities and structures. So attempting to reduce a process of social change to the actions of the individuals who make it up is a useless exercise; these individuals are themselves influenced by the autonomous causal powers of larger social forces.

Another important current line of thought that defends the idea of emergence is the theory of assemblage, drawn from Gilles Deleuze but substantially developed by Manuel DeLanda in A New Philosophy of Society: Assemblage Theory and Social Complexity (2006) and Assemblage Theory (2016). This theory argues for a very different way of conceptualizing the social world. This approach proposes that we should understand complex social entities as compounds of heterogeneous and independent lesser entities, structures, and practices. Social entities do not have “essences”. Instead, they are heterogeneous ensembles of parts that have been brought together in contingent ways. But crucially, DeLanda maintains that assemblages too have emergent properties that do not derive directly from the properties of the parts. A city has properties that cannot be explained in terms of the properties of its parts. So assemblage theory too is anti-reductionist.

The claim of emergence too has a superficial appeal. It is clear, for one thing, that social entities have effects that are autonomous with respect to the particular individuals who compose them. And it is clear as well that there are social properties that have no counterpart at the individual level (for example, social cohesion). So there is a weak sense in which it is possible to accept a concept of emergence. However, that weak sense does not rule out either generativity or reduction in principle. It is possible to hold both generativity and weak emergence consistently. And the stronger sense — that emergent properties are unrelated to and underivable from lower level properties — seems flatly irrational. What could strongly emergent properties depend on, if not the individuals and social relations that make up these higher-level social entities?

For this reason it is reasonable for social scientists to question both generativity and strong emergence. We are better off avoiding the strong claims of both generativity and emergence, in favor of a more modest social theory. Instead, it is reasonable to advocate for the idea of the relative explanatory autonomy of social properties. This position comes down to a number of related ideas. Social properties are ultimately fixed by the actions and thoughts of socially constituted individuals. Social properties are stable enough to admit of direct investigation. Social properties are relatively autonomous with respect to the specific individuals who occupy positions within these structures. And there is no compulsion to perform reductions of social properties through ABMs or any other kind of derivation. (These are ideas that were first advocated in 1974 by Jerry Fodor in “Special sciences: Or: The disunity of science as a working hypothesis” (link).)

It is interesting to note that a new field of social science, complexity studies, has relevance to both ends of this dichotomy. Joshua Epstein himself is a complexity theorist, dedicated to discovering mathematical methods for understanding complex systems. Other complexity scientists like John Miller and Scott Page are open to the idea of weak emergence in Complex Adaptive Systems: An Introduction to Computational Models of Social Life. Here is how Miller and Page address the idea of emergence in CAS:

The usual notion put forth underlying emergence is that individual, localized behavior aggregates into global behavior that is, in some sense, disconnected from its origins. Such a disconnection implies that, within limits, the details of the local behavior do not matter to the aggregate outcome. (CAS, p. 44)

Herbert Simon is another key contributor to modern complexity studies. Simon believed that complex systems have properties that are irreducible to the properties of their components for pragmatic reasons, including especially computational intractability. It is therefore reasonable, in his estimation, to treat higher-level social properties as emergent — even though we believe in principle that these properties are ultimately determined by the properties of the components. Here is his treatment in the third edition of The Sciences of the Artificial (1996):

[This amounts to] reductionism in principle even though it is not easy (often not even computationally feasible) to infer rigorously the properties of the whole from knowledge of the properties of the parts. In this pragmatic way, we can build nearly independent theories for each successive level of complexity, but at the same time, build bridging theories that show how each higher level can be accounted for in terms of the elements and relations of the next level down. (172)

The debate over generativity and emergence may seem like an arcane issue that is of interest only to philosophers and the most theoretical of social scientists. But in fact, disputes like this one have real consequences for the conduct of an area of scientific research. Suppose we are interested in the sociology of hate-based social movements. If we begin with the framework of reductionism and generativism, we may be led to focus on the social psychology of adherents and the aggregative processes through which potential followers are recruited into a hate-based movement. If, on the other hand, we believe that social structures and practices have relatively autonomous causal properties, then we will be led to consider the empirical specifics of the workings of organizations like White Citizens Councils, legal structures like the laws that govern hate-based political expressions in Germany and France, and the ways that the Internet may influence the spread of hate-based values and activism. In each of these cases the empirical research is directed in important measure to the concrete workings of the higher-level social institutions that are hypothesized to influence the emergence and shape of hate-based movements. In other words, the sociological research that we conduct is guided in part by the assumptions we make about social ontology and the composition of the social world.
