Cacophony of the social

Take a typical day in a major city — a busy street with a subway stop, a park, a coffee bar, and a large consumer financial office. There are several thousand people in view, mostly in ones and twos. Some people are rushing to an appointment with a doctor, a job interview, or a drug dealer in the park. A group of young men and women are beginning to chant in a demonstration in the park against a particularly egregious announcement of government policy on contraception.

There is a blooming, buzzing confusion to the scene. And yet there are overlapping forms of order — pedestrians crossing streets at the crosswalks, surges of suits and ties at certain times of day, snatch-and-grab artists looking for an unguarded cell phone. The brokers in the financial office are more coordinated in their actions, tasked to generate sales with customers who walk in for service. The demonstrators have assembled from many parts of the city, arriving by subway in the previous hour. Their presence is, of course, coordinated; they were alerted to the demo by a group text from the activist organization they belong to.

What are the opportunities for social science investigation here? What possibilities exist for explanation of some of the phenomena on display?

For one thing, there is an interesting opportunity here for ethnographic study. A micro-sociologist or urban anthropologist may want to look closely at the details of dress and behavior on display. This is the kind of work that sociologists inspired by Erving Goffman have pursued.

Another interesting possibility is to see what coordinated patterns of behavior can be observed. Do people establish eye contact as they pass? Are the suits more visibly comfortable with other suits than with the street people and panhandlers with whom they cross paths? Is there a subtle racial etiquette at work among these urban strangers?

These considerations fall at the “micro” end of the spectrum. But it is clear enough that the snapshots we gain from a few hours on the street also illustrate a number of background features of social structure. There is differentiation among actors in these scenes that reflects various kinds of social inequality. There are visible inequalities of income and quality of life. These inequalities in turn can be associated with other circumstances — where the various actors work, how much education they have, what schools they attended, their overall state of health. There are spatial indicators of interest as well — what kinds of neighborhoods, in what parts of the city, did these various actors wake up in this morning?

And for all of these structural differentiators we can ask the question, what were the social mechanisms and processes that performed the sorting of newborns into affluent/poor, healthy/sick, well educated/poorly educated, and so forth? In other words, how did social structure impose a stamp on this heterogeneous group of people through their own distinctive histories?

We can also ask a series of questions about social networks and social data about these actors. How large are their personal social networks? What are the characteristics of other individuals within various individual networks? How deep do we need to go before we begin to find overlap across the networks of individuals on the street? This is where big data comes in; Amazon, credit agencies, and Verizon know vastly more about these individuals, their habits, and their networks than a social science researcher is likely to discover through a few hundred interviews. 
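
The overlap question can even be given a crude quantitative shape. Here is a back-of-envelope simulation in Python; the population size, network size, and uniform random mixing are invented assumptions for illustration only, since real acquaintance networks are far more clustered:

    import random

    # A city of N people; suppose each person "knows" k others drawn uniformly
    # at random. How often do two strangers share a mutual acquaintance?
    # (Invented parameters; real networks are clustered, not uniform.)
    N, k, trials = 1_000_000, 500, 2_000
    hits = 0
    for _ in range(trials):
        a = {random.randrange(N) for _ in range(k)}
        b = {random.randrange(N) for _ in range(k)}
        hits += bool(a & b)
    print(f"stranger pairs with a mutual acquaintance: {hits / trials:.1%}")

Even under these artificial assumptions, something like a fifth of stranger pairs (roughly 1 - exp(-k^2/N)) already share an acquaintance, which hints at how shallow the search for network overlap might need to be.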

I’d like to think this disorderly ensemble of purposive but uncoordinated action by several thousand people is highly representative of the realities of the social world. And this picture in turn gives support to the ontology of heterogeneity and contingency that is a core theme here. 

Organizational learning

I’ve posed the question of organizational learning several times in recent months: are there forces that push organizations towards changes leading to improvements in performance over time? Is there a process of organizational evolution in the social world? So where do we stand on this question?

There are only two general theories that would lead us to conclude affirmatively. One is a selection theory. According to this approach, organizations undergo random changes over time, and the environment of action favors those organizations whose changes are functional with respect to performance. The selection theory itself has two variants, depending on how we think about the unit of selection. It might be hypothesized that the firm itself is the unit of selection, so firms survive or fail based on their own fitness. Over time the average level of performance rises through the extinction of low-performance organizations. Or it might be maintained that the unit is at a lower level — the individual alternative arrangements for performing various kinds of work, which are evaluated and selected on the basis of some metric of performance. On this approach, individual innovations are the object of selection. 
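
The difference between the two variants can be made concrete with a toy simulation. In this sketch (Python; the mutation rates, cull fraction, and initial conditions are invented for illustration, not drawn from any empirical case), the first function selects on whole firms and the second on individual arrangements within a firm:

    import random

    def firm_level_selection(n_firms=100, rounds=200):
        """Variant 1: whole firms are the unit of selection. Firms change
        randomly; the environment culls the weakest, which are replaced by
        copies of survivors."""
        performance = [random.gauss(0, 1) for _ in range(n_firms)]
        for _ in range(rounds):
            # random, undirected change in each firm's arrangements
            performance = [p + random.gauss(0, 0.1) for p in performance]
            performance.sort()
            cutoff = n_firms // 10          # bottom decile fails
            performance[:cutoff] = random.choices(performance[cutoff:], k=cutoff)
        return sum(performance) / n_firms

    def practice_level_selection(n_practices=20, rounds=200):
        """Variant 2: individual work arrangements are the unit of selection.
        A single firm trials random variants and retains only improvements."""
        practices = [random.gauss(0, 1) for _ in range(n_practices)]
        for _ in range(rounds):
            i = random.randrange(n_practices)
            trial = practices[i] + random.gauss(0, 0.1)
            if trial > practices[i]:        # keep only functional variations
                practices[i] = trial
        return sum(practices) / n_practices

    print("firm-level mean performance:", round(firm_level_selection(), 2))
    print("practice-level mean performance:", round(practice_level_selection(), 2))

In both variants average performance drifts upward, but through different units of selection: extinction and imitation of whole firms in the first, retention of beneficial variations within a single firm in the second.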

The other large mechanism of organizational learning is quasi-intentional. We postulate that intelligent actors control various aspects of the functioning of an organization; these actors have a set of interests that drive their behavior; and actors fine-tune the arrangements of the organization so as to serve their interests. This is a process I describe as quasi-intentional to convey that the organization itself has no intentionality, but its behavior and arrangements are under the control of a loosely connected set of actors who are individually intentional and purposive. 

In a highly idealized representation of organizations at work, these quasi-intentional processes may indeed push the organization towards higher functioning. Governance processes — boards of directors, executives — have a degree of influence over the activities of other actors within and adjacent to the organization, and they are able to push some subordinate behavior in the direction of higher performance and innovation if they have an interest in doing so. And sometimes these governance actors do in fact have an interest in higher performance — more revenue, less environmental harm, greater safety, gender and racial equity. Under these circumstances it is reasonable to expect that arrangements will be modified to improve performance, and the organization will “evolve”.

However, two forms of counter-intentionality arise. First, the interests of the governing actors are not perfectly aligned with increasing performance. Substantial opportunities for conflict of interest exist at every level, including the executive level (e.g. Enron). So the actions of executives are not always in concert with the goal of improving performance. Second, other actors within the organization are often beyond the control of executive actors and are motivated by interests that are quite separate from the goal of increasing performance. Their actions may often lead to status quo performance or even degradation of performance.

So the question of whether a given organization will change in the direction of higher performance is highly sensitive to (i) the alignment of mission interest and personal interest for executive actors, (ii) the scope of control executive actors are able to exercise over subordinates, and (iii) the strength and pervasiveness of personal interests among subordinates within the organization and the capacity these subordinates have to select and maintain arrangements that favor their interests.

This represents a highly contingent and unpredictable situation for the question of organizational learning. We might regard the question as an ongoing struggle between local private interest and the embodiment of mission-defined interest. And there is no reason at all to believe that this struggle is biased in the direction of enhancement of performance. Some organizations will progress, others will be static, and yet others will decline over time. There is no process of evolution, guided or invisible, that leads inexorably towards improvement of arrangements and performance.

So we might formulate this conclusion in a fairly stark way. If organizations improve in capacity and performance over time in a changing environment, this is entirely the result of intelligent actors undertaking to implement innovations that will lead to these outcomes, at a variety of levels of action within the organization. There is no hidden process that can be expected to generate an evolutionary tendency towards higher organizational performance. 


Social change and leadership


Historians pay a lot of attention to important periods of social change — the emergence of new political movements, the development of a great city, the end of Jim Crow segregation. There is an inclination to give a lot of weight to the importance of leaders, visionaries, and change-makers in driving these processes to successful outcomes. And, indeed, history correctly records the impact of charismatic and visionary leaders. But consider the larger question: are large social changes amenable to design by a small number of actors?

My inclination is to think that the capacity of calculated design for large, complex social changes is very much more limited than we often imagine. Instead, change more often emerges from the independent strategies and actions of numerous actors, only loosely coordinated with others, and proceeding from their own interests and framing assumptions. The large outcomes — the emergence of Chicago as the major metropolis of the Midwest, the forging of the EU and the monetary union, the coalescence of nationalist movements in France and Germany — are the resultants of multiple actors and causes. Big outcomes are the contingent products of multiple streams of action, mobilization, business decisions, political parties, and so on.

There are exceptions, of course. Italy’s political history would have been radically different without Mussolini, and the American Civil War would probably have had a different course if Douglas had won the 1860 presidential election. 

But these are exceptions, I believe. More common is the history of Chicago, the surge of right-wing nationalism, or the collapse of the USSR. These are all multi-causal and multi-actor outcomes, and there is no single, unified process of development. And there is no author, no architect, of the outcome. 

So what does this imply about individual leaders and organizations who want to change the social and political environment facing them? Are their aspirations for creating change simply illusions? I don’t think so. To deny that single visionaries can write the future is not to deny that they can nudge it in a desirable direction. An anti-racist politician can influence voters and institutions in ways that inflect the arc of his or her society in a less racist direction. This doesn’t permanently solve the problem, but it helps. And with good fortune, other actors will have made similar efforts, and gradually the situation of racism changes.

This framework for thinking about large social change raises large questions about how we should think about improving the world around us. It seems to imply the importance of local and decentralized social change. We should perhaps adjust our aspirations for social progress around the idea of slow, incremental change through many actors, organizations, and coalitions. As Marx once wrote, “men make their own history, but not in circumstances of their own choosing.” And we can add a qualification Marx would not have appreciated: change makers are best advised to construct their plans around long, slow, and incremental change instead of blueprints for unified, utopian change. 

Is there a new capitalism?

An earlier post considered Dave Elder-Vass’s very interesting treatment of the contemporary digital economy. In Profit and Gift in the Digital Economy Elder-Vass argues that the vast economic significance of companies like Google, Facebook, and Amazon in today’s economy is difficult to assimilate within the conceptual framework of Marx’s foundational ideas about capitalism, constructed as they were around manufacturing, labor, and ownership of capital, and that we need some new conceptual tools in order to make sense of the economic system we now confront. (Elder-Vass responded to my earlier post here.)

A new book by Nick Srnicek looks at this problem from a different point of view. In Platform Capitalism Srnicek proposes to understand the realities of our current “digital economy” according to traditional ideas about capitalism and profit. Here is a preliminary statement of his approach:

The simple wager of the book is that we can learn a lot about major tech companies by taking them to be economic actors within a capitalist mode of production. This means abstracting from them as cultural actors defined by the values of the Californian ideology, or as political actors seeking to wield power. By contrast, these actors are compelled to seek out profits in order to fend off competition. This places strict limits on what constitutes possible and predictable expectations of what is likely to occur. Most notably, capitalism demands that firms constantly seek out new avenues for profit, new markets, new commodities, and new means of exploitation. For some, this focus on capital rather than labour may suggest a vulgar economism; but, in a world where the labour movement has been significantly weakened, giving capital a priority of agency seems only to reflect reality. (Kindle Locations 156-162)

In other words, there is not a major break from General Motors, with its assembly lines, corporate management, and vehicles, to IBM, with its products, software, and innovations, to Google, with its purely abstract and information-intensive products. All are similar in their basic corporate navigation systems: make decisions today that will support or increase profits tomorrow. In fact, each of these companies falls within the orbit of the new digital economy, according to Srnicek:

As a preliminary definition, we can say that the digital economy refers to those businesses that increasingly rely upon information technology, data, and the internet for their business models. This is an area that cuts across traditional sectors – including manufacturing, services, transportation, mining, and telecommunications – and is in fact becoming essential to much of the economy today. (Kindle Locations 175-177).

What has changed, according to the economic history constructed by Srnicek, is that the creation and control of data has suddenly become a vast and dynamic source of potential profit, and capitalist firms have adapted quickly to capture these profits.

The restructuring associated with the rise of information-intensive economic activity has greatly changed the nature of work:

Simultaneously, the generalised deindustrialisation of the high-income economies means that the product of work becomes immaterial: cultural content, knowledge, affects, and services. This includes media content like YouTube and blogs, as well as broader contributions in the form of creating websites, participating in online forums, and producing software. (Kindle Locations 556-559)

But equally it takes the form of specialized data-intensive work within traditional companies: design experts, marketing analysis of “big data” on consumer trends, the use of large simulations to guide business decision-making, the use of automatically generated data from vehicles to guide future engineering changes.

In order to capture the profit opportunities associated with the availability of big data, something else was needed: an organizational basis for aggregating and monetizing the data that exist around us. This is the innovation that comes in for Srnicek’s greatest focus of attention: the platform.

This chapter argues that the new business model that eventually emerged is a powerful new type of firm: the platform. Often arising out of internal needs to handle data, platforms became an efficient way to monopolise, extract, analyse, and use the increasingly large amounts of data that were being recorded. Now this model has come to expand across the economy, as numerous companies incorporate platforms: powerful technology companies (Google, Facebook, and Amazon), dynamic start-ups (Uber, Airbnb), industrial leaders (GE, Siemens), and agricultural powerhouses (John Deere, Monsanto), to name just a few. (Kindle Locations 602-607).

What are platforms? At the most general level, platforms are digital infrastructures that enable two or more groups to interact. They therefore position themselves as intermediaries that bring together different users: customers, advertisers, service providers, producers, suppliers, and even physical objects. More often than not, these platforms also come with a series of tools that enable their users to build their own products, services, and marketplaces. Microsoft’s Windows operating system enables software developers to create applications for it and sell them to consumers; Apple’s App Store and its associated ecosystem (XCode and the iOS SDK) enable developers to build and sell new apps to users; Google’s search engine provides a platform for advertisers and content providers to target people searching for information; and Uber’s taxi app enables drivers and passengers to exchange rides for cash. (Kindle Locations 607-616)

Srnicek distinguishes five large types of digital data platforms that have been built out as business models: advertising, cloud, industrial, product, and “lean” platforms (the latter exemplified by Uber).

Srnicek believes that firms organized around digital platforms are subject to several important dynamics and tendencies: “expansion of extraction, positioning as a gatekeeper, convergence of markets, and enclosure of ecosystems” (kl 1298). These tendencies are created by the imperative of the platform-based firm to generate profits. Profits depend upon monetizing data; and data has little value in small volume. So the most fundamental imperative is mass collection of data from individual consumers.

If data collection is a key task of platforms, analysis is the necessary correlate. The proliferation of data-generating devices creates a vast new repository of data, which requires increasingly large and sophisticated storage and analysis tools, further driving the centralisation of these platforms. (kl 1337-1339)

So privacy threats emerging from the new digital economy are not a bug; they are an inherent feature of design.

This appears to lead us to Srnicek’s most basic conclusion: the new digital economy is just like the old industrial economy in one important respect. Firms are wholly focused on generating profits, and they design intelligent strategies to permit themselves to appropriate ever-larger profits from the raw materials they process. In the case of the digital economy the raw material is data, and the profits come from centralizing and monopolizing access to data, and deploying data to generate profits for other firms (who in turn pay for access to the data). And revenues and profits have no correspondence to the size of the firm’s workforce:

Tech companies are notoriously small. Google has around 60,000 direct employees, Facebook has 12,000, while WhatsApp had 55 employees when it was sold to Facebook for $19 billion and Instagram had 13 when it was purchased for $1 billion. By comparison, in 1962 the most significant companies employed far larger numbers of workers: AT&T had 564,000 employees, Exxon had 150,000 workers, and GM had 605,000 employees. Thus, when we discuss the digital economy, we should bear in mind that it is something broader than just the tech sector defined according to standard classifications. (Kindle Locations 169-174)

Marx’s theory of capitalism fundamentally originates in a theory of conflict of interest and a theory of exploitation. In Capital that conflict exists between capitalists and workers, and consumers are essentially ignored (except when Marx sometimes refers to the deleterious effects of competition on public health; link). But in Srnicek’s reading of the contemporary digital economy (and Elder-Vass’s as well) the focus shifts away from labor and towards the consumer. The primary conflict in the digital economy is between the platform firm that seeks to acquire our data and the consumers who want the digital services but are only dimly aware of the cost to their privacy. And here it is more difficult to make an argument about exploitation. Are consumers being exploited in this exchange? Or are they getting fair value, in the form of extensive and valuable digital services, for the surrender of their privacy in the form of data collection of clicks, purchases, travel, phone usage, and the countless other ways in which individual data winds up in the aggregation engines?

In an unexpected way, this analysis leads us back to a question that seems to belong in the nineteenth century: what after all is the source of value and wealth? And who has a valid claim on a share? What principles of justice should govern the distribution of the wealth of society? The labor theory of value had an answer to the question, but it is an answer that didn’t have a lot of validity in 1850 and has none today. But in that case we need to address the question again. The soaring inequalities of income and wealth that capitalism has produced since 1980 suggest that our economy has lost its control mechanisms for equity; and perhaps this has something to do with the fact that a great deal of the money being generated in capitalism today comes from control of data rather than the adding of value to products through labor. Oddly enough, perhaps Marx’s other big idea is relevant here: social ownership of the means of production. If there were a substantial slice of public-sector ownership of big data firms, including financial institutions, the resulting flow of income and wealth might be expected to begin to correct the hyper-inequalities our economy is currently generating.

Generativism

There is a seductive appeal to the idea of a “generative social science”. Joshua Epstein is one of the main proponents of the idea, most especially in his book, Generative Social Science: Studies in Agent-Based Computational Modeling. The central tool of generative social science is the construction of an agent-based model (link). The ABM is said to demonstrate the way in which an observable social outcome or pattern is generated by the properties and activities of the component parts that make it up — the actors. The appeal comes from the notion that it is possible to show how complicated or complex outcomes are generated by the properties of the components that make them up. Fix the properties of the components, and you can derive the properties of the composites. Here is Epstein’s capsule summary of the approach:

The agent-based computational model — or artificial society — is a new scientific instrument. It can powerfully advance a distinctive approach to social science, one for which the term “generative” seems appropriate. I will discuss this term more fully below, but in a strong form, the central idea is this: To the generativist, explaining the emergence of macroscopic societal regularities, such as norms or price equilibria, requires that one answer the following question: 

The Generativist’s Question 

How could the decentralized local interactions of heterogeneous autonomous agents generate the given regularity?

The agent-based computational model is well-suited to the study of this question, since the following features are characteristic: [heterogeneity, autonomy, explicit space, local interactions, bounded rationality]

(5-6)

And a few pages later:

Agent-based models provide computational demonstrations that a given microspecification is in fact sufficient to generate a macrostructure of interest. . . . To the generativist — concerned with formation dynamics — it does not suffice to establish that, if deposited in some macroconfiguration, the system will stay there. Rather, the generativist wants an account of the configuration’s attainment by a decentralized system of heterogeneous autonomous agents. Thus, the motto of generative social science, if you will, is: If you didn’t grow it, you didn’t explain its emergence. (8)
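
The standard illustration of this motto in the generativist literature (not Epstein’s example in this passage, but the one most often cited) is Schelling’s segregation model: agents follow a strictly local rule about the composition of their immediate neighborhood, and a macro-level pattern of segregation emerges that no agent aimed at. Here is a minimal sketch in Python, with illustrative parameter values:

    import random

    SIZE, EMPTY, THRESHOLD = 20, 0.1, 0.5   # grid size, vacancy rate, tolerance

    def neighbors(grid, r, c):
        """Occupants of the eight surrounding cells (grid wraps around)."""
        cells = [grid[(r + dr) % SIZE][(c + dc) % SIZE]
                 for dr in (-1, 0, 1) for dc in (-1, 0, 1) if (dr, dc) != (0, 0)]
        return [x for x in cells if x is not None]

    def unhappy(grid, r, c):
        """Local rule: an agent wants at least THRESHOLD of its neighbors
        to be of its own type."""
        nbrs = neighbors(grid, r, c)
        return bool(nbrs) and sum(x == grid[r][c] for x in nbrs) / len(nbrs) < THRESHOLD

    grid = [[None if random.random() < EMPTY else random.choice("AB")
             for _ in range(SIZE)] for _ in range(SIZE)]

    for _ in range(100_000):                # unhappy agents move to vacant cells
        r, c = random.randrange(SIZE), random.randrange(SIZE)
        if grid[r][c] is not None and unhappy(grid, r, c):
            er, ec = random.randrange(SIZE), random.randrange(SIZE)
            if grid[er][ec] is None:
                grid[er][ec], grid[r][c] = grid[r][c], None

    # the emergent macro regularity: average share of same-type neighbors
    shares = [sum(x == grid[r][c] for x in neighbors(grid, r, c)) / len(neighbors(grid, r, c))
              for r in range(SIZE) for c in range(SIZE)
              if grid[r][c] is not None and neighbors(grid, r, c)]
    print("mean same-type neighbor share:", round(sum(shares) / len(shares), 2))

Typical runs end with a mean same-type share well above the 50 percent individual tolerance: a macro regularity that was “grown” from decentralized local interactions, exactly in the sense the motto demands.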

Here is how Epstein describes the logic of one of the most extensive examples of generative social science, the attempt to understand the disappearance of the Anasazi population in the American Southwest nearly 800 years ago.

The logic of the exercise has been, first, to digitize the true history — we can now watch it unfold on a digitized map of Longhouse Valley. This data set (what really happened) is the target — the explanandum. The aim is to develop, in collaboration with anthropologists, microspecifications — ethnographically plausible rules of agent behavior — that will generate the true history. The computational challenge, in other words, is to place artificial Anasazi where the true ones were in 800 AD and see if — under the postulated rules — the simulated evolution matches the true one. Is the microspecification empirically adequate, to use van Fraassen’s phrase? (13)
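
Stripped to its bones, the logic is a generate-and-compare loop: run a candidate microspecification forward from the historical starting point, then score its output against the digitized target series. A schematic sketch in Python (the growth rule, the carrying-capacity numbers, and the “true history” below are toy stand-ins, not the actual Anasazi data or rules):

    def simulate(rule, population, years):
        """Run a candidate microspecification forward; return its history."""
        history = []
        for year in years:
            population = rule(population, year)
            history.append(population)
        return history

    def fit(simulated, target):
        """Score empirical adequacy as mean absolute error against the target."""
        return sum(abs(s - t) for s, t in zip(simulated, target)) / len(target)

    years = range(800, 1300)
    # toy "true history": growth, then decline after 1100
    target = [200 + (y - 800) if y < 1100 else 500 - 2 * (y - 1100) for y in years]

    def rule(pop, year):                    # one hypothetical behavioral rule
        capacity = 520 if year < 1100 else 520 - 2 * (year - 1100)
        return pop + 0.05 * (capacity - pop)  # drift toward carrying capacity

    print("mean absolute error:", round(fit(simulate(rule, 200, years), target), 1))

Candidate rules that generate histories close to the target count as empirically adequate in van Fraassen’s sense; rules that cannot be tuned to do so are rejected.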

The artificial Anasazi experiment is an interesting one, and one to which the constraints of an agent-based model are particularly well suited: the model tracks residence-location decisions based on mapped environmental information.

But this does not imply that the generativist interpretation is equally applicable as a general approach to explaining important social phenomena.

Note first how restrictive the assumption of “decentralized local interactions” is as a foundation for the model. A large proportion of social activity is neither decentralized nor purely local: the search for muons in an accelerator lab, the advance of an armored division into contested territory, the audit of a large corporation, preparations for a strike by the UAW, the coordination of voices in a large choir, and so on, indefinitely. In all these examples and many more, a crucial part of the collective behavior of the actors is the coordination that occurs through some centralized process — a command structure, a division of labor, a supervisory system. And by their design, ABMs appear incapable of representing these kinds of non-local coordination.

Second, all these simulation models proceed from highly stylized and abstract modeling assumptions. And the results they generate capture at best some suggestive patterns that are only partially descriptive of the outcomes we are interested in. Abstraction is inevitable in any scientific work, of course; but once we recognize that fact, we must abandon the idea that the model demonstrates the “generation” of the empirical phenomenon. Neither premises nor conclusions are fully descriptive of concrete reality; both are approximations and abstractions. And it would be fundamentally implausible to maintain that the modeling assumptions capture all the factors that are causally relevant to the situation. Instead, they represent a particular stylized hypothesis about a few of the causes of the situation in question. Further, we have good reason to believe that introducing more details at the ground level will sometimes lead to significant alteration of the system-level properties that are generated.

So the idea that an agent-based model of civil unrest could demonstrate that (or how) civil unrest is generated by the states of discontent and fear experienced by various actors is fundamentally ill-conceived. If the unrest is generated by anything, it is generated by the full set of causal and dynamic properties of the set of actors — not the abstract stylized list of properties. And other posts have made the point that civil unrest or rebellion is rarely purely local in its origin; rather, there are important coordinating non-local structures (organizations) that influence mobilization and spread of rebellious collective action. Further, the fact that the ABM “generates” some macro characteristics that may seem empirically similar to the observed phenomenon is suggestive, but far from a demonstration that the model characteristics suffice to determine some aspect of the macro phenomenon. Finally, the assumption of decentralized and local decision-making is unfounded for civil unrest, given the important role that collective actors and organizations play in the success or failure of social mobilizations around grievances (link).
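
To see how stylized such a microspecification is, consider a compressed paraphrase of the threshold rule at the heart of Epstein’s civil-violence model: an agent turns “active” when grievance outweighs fear of arrest. The functional form follows his published model, but the parameter values below are invented for illustration:

    import random

    def rebels(hardship, legitimacy, risk_aversion, arrest_prob, threshold=0.1):
        """An agent becomes 'active' when grievance exceeds net risk."""
        grievance = hardship * (1 - legitimacy)   # discontent
        net_risk = risk_aversion * arrest_prob    # fear
        return grievance - net_risk > threshold

    # a heterogeneous population facing the same regime
    agents = [(random.random(), random.random()) for _ in range(10_000)]
    legitimacy, arrest_prob = 0.6, 0.3
    active = sum(rebels(h, legitimacy, r, arrest_prob) for h, r in agents)
    print(f"{active} of {len(agents)} agents are in open rebellion")

Everything that organizations, networks, and leaders contribute to real mobilization has to be compressed into these few scalar parameters, which is precisely the limitation at issue here.
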
The point here is not that the generativist approach is invalid as a way of exploring one particular set of social dynamics (the logic of decentralized local decision-makers with assigned behavioral rules). On the contrary, this approach does indeed provide valuable insights into some social processes. The error is one of over-generalization — imagining that this approach will suffice to serve as a basis for analysis of all social phenomena. In a way the critique here is exactly parallel to that which I posed to analytical sociology in an earlier post. In both cases the problem is one of asserting priority for one specific approach to social explanation over a number of other equally important but non-equivalent approaches.

Patrick Grim et al. provide an interesting approach to the epistemics of models and simulations in “How simulations fail” (link). Grim and his colleagues emphasize the heuristic and exploratory role that simulations generally play in probing the dynamics of various kinds of social phenomena.

Strategies for resisting right-wing populism

Social Europe is a vibrant publisher of current progressive thought in Europe. Readers can find data, opinion, and policy analysis on the site that is highly relevant to the core priorities of progressives across the continent and Britain — social opportunity, inequalities, work, the threat of right-wing populism, the refugee crisis, and the future of the European Union. Here is the Twitter feed for Social Europe (link).

SE also publishes a bi-annual journal called the Social Europe Journal. The most recent issue is focused on a recurring theme in Understanding Society, the menace posed by the rise of extreme-right populism. The volume is available as a digital book under the title, Understanding the Populist Revolt, edited by Henning Meyer.

Several chapters of Understanding the Populist Revolt are particularly interesting, including Bo Rothstein’s contribution, “Why has the white working class abandoned the left?”. Rothstein’s title poses a critical question, which he unpacks in these terms:

Maybe the most surprising political development during this decade is why increased inequality in almost all capitalist market societies has not resulted in more votes for left parties. Especially telling is the political success of Donald Trump and why such a large part of the American working class voted for him. In a country with staggering and increasing economic inequality, why would people who will undoubtedly lose economically from his policies support him? Why did his anti-government policies such as cutting taxes for the super-rich and slashing the newly established health care insurance system succeed to such a large extent? Moreover, why were these policies especially effective in securing votes from the white working class? 

One answer may be in an issue often neglected by the left, namely how people perceive the quality of their government institutions. The idea behind this “quality of government” approach is that when people take a stand on what policies they are going to support, they do not only evaluate the policy as such. In addition, they also take into consideration the quality of the government institutions that are going to be responsible for its implementation.

Surveys show that the reason for Trump’s unexpected victory was his ability to get massive support from what has historically been a stronghold for the Democratic Party, namely low-educated white working class voters. However, as has recently been pointed out by, among others, Paul Krugman, this is a group likely to be the big losers from the policies Trump said he will launch. Many would say that race and immigration determined this election, but this can only be a part of the story because in many of the areas where Trump got most of the white working class votes there are few immigrants and no significant multi-ethnic population. (Kindle Locations 461-469)

This is indeed a key question for politicians and activists in the United States to consider. However, Rothstein’s answer is a fairly narrow one; he argues that a widespread belief about corruption and favoritism in Democratic elites is a primary factor leading to the disaffection of the white working class in the US. This seems to be only a secondary factor, however.

A more comprehensive attempt to answer the question of why populism is on the rise comes in the concluding chapter of the volume, in an interview with Jurgen Habermas. Habermas calls out several factors over the past twenty-five years that have led to a rising appeal of right-wing populism among large segments of the populations of democratic countries in Europe and the United States. First among these factors is the steep and continuing increase in inequalities that neoliberal economies have brought about since 1989. He believes that this trend could only be offset by an active state policy of social welfare — the policies of social democracy — and that advanced capitalist democracies have retreated from such policies.

Second, he highlights the deliberate politics and rhetoric of the right in both Europe and the United States in pursuing a politics of division and resentment. People suffer; and politicians aim their resentment at vulnerable others.

Third, Habermas emphasizes the fact that neoliberal globalization has not delivered on the promises made on its behalf in the 1970s, that globalization would improve everyone’s standard of living. In fact, he argues that globalization has led to stagnation of living standards in many countries and to an overall decline of the importance of the western capitalist economies within the global system. This trend in turn has given new energy to the nationalistic forces underlying right-wing populism.

So what advice does Habermas offer to the progressive parties in western democracies? He argues that the progressive left needs to confront the root of the problem — the increasing inequalities that exist both nationally and internationally. Moreover, he argues that this will require substantial international cooperation:

The question is why left-wing parties do not go on the offensive against social inequality by embarking upon a co-ordinated and cross-border taming of unregulated markets. As a sensible alternative – as much to the status quo of feral financial capitalism as to the agenda for a völkisch or left-nationalist retreat into the supposed sovereignty of long-since hollowed-out nation states – I would suggest there is only a supranational form of co-operation that pursues the goal of shaping a socially acceptable political reconfiguration of economic globalisation. (Kindle Locations 566-569)

In Habermas’s judgment, the fundamental impetus to right-wing populism was the cooptation of “social-democrat” parties like the Democratic Party in the United States and the Labour Party in Britain by the siren song of neoliberalism:

Since Clinton, Blair and Schröder social democrats have swung over to the prevailing neoliberal line in economic policies because that was or seemed to be promising in the political sense: in the “battle for the middle ground” these political parties thought they could win majorities only by adopting the neoliberal course of action. This meant taking on board toleration of long-standing and growing social inequalities. Meantime, this price – the economic and socio-cultural “hanging out to dry” of ever-greater parts of the populace – has clearly risen so high that the reaction to it has gone over to the right. (Kindle Locations 573-578)

So what is the path to broad support for the progressive left? It is to be progressive — to confront the root cause of the economic stagnation of the working-class people whose lives are increasingly precarious and whose standard of living has not advanced materially in twenty-five years.

But this requires being willing to open up a completely different front in domestic politics and doing so by making the above-mentioned problem the key point at issue: How do we regain the political initiative vis-à-vis the destructive forces of unbridled capitalist globalisation? Instead, the political scene is predominantly grey on grey, where, for example, the left-wing pro-globalisation agenda of giving a political shape to a global society growing together economically and digitally can no longer be distinguished from the neoliberal agenda of political abdication to the blackmailing power of the banks and of the unregulated markets. (Kindle Locations 590-595)

So the distance between Rothstein and Habermas is substantial: Rothstein ultimately chalks up the Trump victory to a successful marketing campaign (“crooked Hillary”), whereas Habermas believes that very large forces within the neoliberal financial and trade regimes of the past twenty-five years have in fact worked to the disadvantage of the very people needed for achieving a majority for progressive politics.

I find it interesting that Habermas does not address the themes of radical nationalism, xenophobia, racism, and anti-Semitism that are raised by populist parties in many European countries and the United States. In his telling of the story, it is an issue of interests and structural advantages and disadvantages for various groups. By implication, the racism of the far right will subside if the US Democratic Party or progressive parties in France, Germany, or the Netherlands succeed in redefining the social contract in a way that is more favorable for the less advantaged citizens in their societies. Interestingly, this is exactly the argument constructed by Manuel Muñiz in his contribution to the volume, “Populism and the need for a new social contract.” Here are Muñiz’s central ideas:

The decoupling of productivity and wages is the explanation behind the structural stagnation of salaries of the middle class and the increase in inequality within our societies. Wealth is being concentrated in the hands of those that invest in and own the robots and algorithms while most of those living off labour wages are struggling. The McKinsey Global Institute recently reported that over 80% of US households had seen their income stagnate or decline in the period 2009-2016. (Kindle Locations 224-227)

The people most negatively affected by these trends are the abandoned of our time, the ignored, and are beginning to constitute a new political class. The embodiment of this new class is not just the unemployed but also the underemployed and the working poor – people who have seen economic opportunity escape from them over the last few decades. (Kindle Locations 230-232)

And here is a preliminary description of the new social contract he believes we will need:

The appearance and design of the new social contract that we need is only now starting to be discussed. What is clear, however, is that it will require a big change in the way the state procures its income, possibly through a reinvigorated industrial policy, large public venture capital investments and others. In essence, if wealth is concentrated in capital some form of democratisation of capital holding will be required. On the spending side, changes will also be required. This might adopt the form of negative income taxes, the establishment of a universal basic income, or the launch of public employment schemes. (Kindle Locations 246-250)

So there seems to be a degree of consensus among these contributors to Social Europe: the best strategy for fighting radical right-wing populism, and the tendencies towards racism and authoritarianism that it brings with it, is to re-establish the robust terms of social democracy that have the potential to offset the destructive structural dynamics of contemporary neoliberal capitalism. This means a more active state; more redistribution; more regulation of the financial industry and other sectors; and a more level playing field for all citizens in our democracies. And these are precisely the policies that Tea Party conservatism, Trumpism, and mainstream Republicans unite in opposing; their rhetoric has demonized precisely these kinds of policies for decades. What would it take for the parties of the left to embrace the pro-working class policies described here? And is the underlying suspicion voiced by Rothstein above actually correct: that the Democratic Party is so beholden to large corporate interests that it is incapable of adopting these kinds of platforms?

Observation, measurement, and explanation

An earlier post reiterated my reasons for doubting that the social sciences can in principle give rise to general theories that serve to organize and predict the domain of social phenomena. The causes of social events are too heterogeneous and conjunctural to permit this kind of systematic representation.

That said, social behavior and social processes give rise to very interesting patterns at the macro scale. And it is always legitimate to ask what the causes are that produce these patterns. Consider the following graphs. They are drawn miscellaneously from a range of social science disciplines.

These graphs represent many different kinds of social behavior and processes. A few are synchronic — snapshots of a variable at a moment in time. The graph of India’s population age structure falls in this category, as do the graphs of India’s literacy rates. Most are diachronic, representing change over time. The majority show an apparent pattern of stochastic change, even in cases where there is also a measurable direction of change indicating underlying persistent causes. Graphs of stock market activity fall in this category, with random variations of prices even during a consistent period of rising or falling prices.
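
The stock-price pattern, for example, is the signature of a random walk with drift: a persistent underlying cause plus day-to-day noise. A minimal sketch in Python (the drift and volatility values are arbitrary, not fitted to any real series):

    import random

    random.seed(0)
    price, drift, volatility = 100.0, 0.05, 1.0
    series = [price]
    for _ in range(250):                    # roughly one trading year
        price += drift + random.gauss(0, volatility)
        series.append(price)
    print(f"start {series[0]:.1f}, end {series[-1]:.1f}: "
          "noisy daily moves around a persistent upward trend")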

The graph representing the evolution of China’s agricultural economy tells an interesting and complicated story. It shows rising productivity in agriculture and (since 1984) a sharp decline in the proportion of the labor force involved in agriculture — an important cause of China’s urban growth and the growth of its internal migrant population. And it shows a long-term decline in the share of agricultural production in the national economy overall, from about 40% in 1969 to less than 15% in 2005. What these statistics convey is a period of fundamental change in China, in economy, urbanization, and ultimately in politics.

The graph of the composition of the US population is a time series graph that tells a complicated story as well — a smooth rise in total national population composed of shifting shares of population across the regions of the country. These shifting regional population shares demand historical and causal explanation.

The graph of India’s literacy rates by age warrants comment. It appears to give a valid indication of several important social realities — a persistent gap between men and women of all ages, and lower literacy among older men and women. But the graph also displays variation that can only reflect some sort of artifact of the data collection: literacy rates plummet at ages that are multiples of five and ten, for both men and women. Plainly there is a problem with the data represented in this graph; nothing could explain a 15% discrepancy in literacy rates between 57-year-old men and 60-year-old men. The same anomalous pattern is evident in the female graph as well. Essentially there are two distinct data series represented here: the decade and half-decade series (low) and the by-year series (high). There is no way of telling from the graph which series should be given greater credibility. The other chart, representing state literacy rates, is of interest as well. It allows us to see that there are substantial gaps across states in terms of literacy — Kerala’s literacy rate in 1981 is 2.5 times that of Bihar in the same year. And some states made striking progress in literacy between 1981 and 2001 (Arunachal Pradesh) while others showed smaller proportional increases (Kerala). Here, though, we can ask whether the order of states on the graph makes sense. The states are ranked from high to low literacy rates. Perhaps it would be more illuminating to group states by region, so that it is possible to draw some inferences and comparisons about similarly situated states.

The graph representing grain price correlations across commodities in Qing China demands a different kind of explanation. We need to be able to identify a mechanism that causes prices in different places to converge to a common market price separated by the cost of transport between these places and the relative utilities of wheat, sorghum, and millet. The mechanism is that of mobile price-sensitive traders responding to information about prices in different locations. The map demonstrates the existence of these mechanisms of communication and transportation on the ground. This is a paradigm example of a mechanism-based explanation. (This example comes from Rawski and Li, eds., Chinese History in Economic Perspective (Studies on China).)
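
The mechanism is easy to state in schematic form: whenever the price gap between two markets exceeds the cost of transport, traders profit by shipping grain, and their purchases and sales push the two prices back toward a gap equal to that cost. A toy sketch in Python (all quantities invented, not Qing-era data):

    # two markets for the same grain, linked by price-sensitive traders
    price_a, price_b = 100.0, 140.0
    transport_cost = 15.0
    adjust = 0.5                  # price response per unit of grain shipped

    for week in range(20):
        gap = price_b - price_a
        if gap > transport_cost:  # profitable to ship from market A to B
            shipped = min(5.0, (gap - transport_cost) / (2 * adjust))
            price_a += adjust * shipped   # traders' demand raises A's price
            price_b -= adjust * shipped   # traders' supply lowers B's price

    print(f"final prices: A={price_a:.1f}, B={price_b:.1f}, "
          f"gap={price_b - price_a:.1f}")

Prices converge until the remaining gap just equals the transport cost, which is the spatial price pattern the Qing grain-price correlations display.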

The graph representing the rank order of city sizes is perhaps the most intriguing of all. There is nothing inherently implausible about a population distributed across five cities of comparable size and a hundred towns of comparable size — and yet this hypothetical case would display a size distribution radically different from the Zipf law. So what explanation is available to account for the empirical pattern almost universally observed? Various scholars have argued that the regularity is the result of very simple conditions that apply to city growth rates over time, and that the cities in a growing population will come to conform to the Zipf regularity as a simple statistical consequence of size and growth (link). It is an example, perhaps, of what Schelling calls “the inescapable mathematics of musical chairs” (Micromotives and Macrobehavior).
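
The proposed mechanism is easy to exhibit in a toy Gibrat-style simulation: give every city a growth rate that is statistically independent of its size, with a small floor preventing extinction, and the rank-size distribution drifts toward a Zipf-like pattern in which size times rank is roughly constant. A sketch with arbitrary parameters:

    import random

    random.seed(1)
    cities = [100.0] * 1_000
    for _ in range(500):          # proportional, size-independent growth
        cities = [max(s * random.lognormvariate(0, 0.1), 10.0) for s in cities]

    cities.sort(reverse=True)
    for rank in (1, 10, 100, 1000):
        size = cities[rank - 1]
        print(f"rank {rank:4d}: size {size:10.0f}, size x rank {rank * size:12.0f}")

How closely the tail approaches the exact Zipf exponent depends on details such as the lower bound, but the qualitative point stands: a strongly patterned macro distribution emerges from individually undirected proportional growth.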

What these examples have in common is that they illustrate two of the key tasks of the social sciences: to measure important social variables over time and space, and to identify the social mechanisms that lead to variation in these variables. There are large problems of methodology and conceptual clarification that need to be addressed in both parts of this agenda. On the side of measurement, we have the problems of arriving at consistent and revealing definitions of economic wellbeing, using incomplete historical sources to reconstruct estimates of prices and wages, and using a range of statistical methods to validate and interpret the results. And on the explanatory side, we are faced with the difficult task of reconstructing social processes and forces in the past that may have powered the changes we are able to document, and with the task of validating the hypotheses we have put forward on the basis of historical evidence.

Science policy and the Cold War

The marriage of science, technology, and national security took a major step forward during and following World War II. The secret Manhattan project, marshaling the energies and time of thousands of scientists and engineers, showed that it was possible for military needs to effectively mobilize and conduct coordinated research into fundamental and applied topics, leading to the development of the plutonium bomb and eventually the hydrogen bomb. (Richard Rhodes’ memorable The Making of the Atomic Bomb provides a fascinating telling of that history.) But also noteworthy are the coordinated efforts made in advanced computing, cryptography, radar, operations research, and aviation. (Interesting books on several of these areas include Stephen Budiansky’s Code Warriors: NSA’s Codebreakers and the Secret Intelligence War Against the Soviet Union and Blackett’s War: The Men Who Defeated the Nazi U-Boats and Brought Science to the Art of Warfare, and Dyson’s Turing’s Cathedral: The Origins of the Digital Universe.) Scientists served the war effort, and their work made a material difference in the outcome. More significantly, the US developed effective systems for organizing and directing the process of scientific research — decision-making processes to determine which avenues should be pursued, bureaucracies for allocating funds for research and development, and motivational structures that kept the participants involved with a high level of commitment. Tom Hughes’ very interesting Rescuing Prometheus: Four Monumental Projects that Changed Our World tells part of this story.

But what about the peace?

During the Cold War there was a new global antagonism, between the US and the USSR. The terms of this competition included both conventional weapons and nuclear weapons, and it was clear on all sides that the stakes were high. So what happened to the institutions of scientific and technical research and development from the 1950s forward?

Stuart Leslie addressed these questions in a valuable 1993 book, The Cold War and American Science: The Military-Industrial-Academic Complex at MIT and Stanford. Defense funding sustained and expanded the volume of university-based research aimed at what were deemed important military priorities.

The armed forces supplemented existing university contracts with massive appropriations for applied and classified research, and established entire new laboratories under university management: MIT’s Lincoln Laboratory (air defense); Berkeley’s Lawrence Livermore Laboratory (nuclear weapons); and Stanford’s Applied Electronics Laboratory (electronic communications and countermeasures). (8)

In many disciplines, the military set the paradigm for postwar American science. Just as the technologies of empire (specifically submarine telegraphy and steam power) once defined the relevant research programs for Victorian scientists and engineers, so the military-driven technologies of the Cold War defined the critical problems for the postwar generation of American scientists and engineers…. These new challenges defined what scientists and engineers studied, what they designed and built, where they went to work, and what they did when they got there. (9)

And Leslie offers an institutional prediction about knowledge production in this context:

Just as Veblen could have predicted, as American science became increasingly bound up in a web of military institutions, so did its character, scope, and methods take on new, and often disturbing, forms. (9)

The evidence for this prediction is offered in the specialized chapters that follow. Leslie traces in detail the development of major research laboratories at both universities, involving tens of millions of dollars in funding, thousands of graduate students and scientists, and very carefully focused on the development of sensitive technologies in radio, computing, materials, aviation, and weaponry.

No one denied that MIT had profited enormously in those first decades after the war from its military connections and from the unprecedented funding sources they provided. With those resources the Institute put together an impressive number of highly regarded engineering programs, successful both financially and intellectually. There was at the same time, however, a growing awareness, even among those who had benefited most, that the price of that success might be higher than anyone had imagined — a pattern for engineering education set, organizationally and conceptually, by the requirements of the national security state. (43)

In the closing chapter of the book, Leslie gives some attention to the counter-pressures to the military’s dominance in research universities that can arise within a democracy, describing how the anti-Vietnam War movement raised opposition to military research on university campuses and eventually led to the end of classified research on many of them. He highlights the protests that occurred at MIT and Stanford during the 1960s; but equally radical protests against classified and military research happened in Madison, Urbana, and Berkeley.

These are issues that resonate strongly with Science, Technology and Society studies (STS). Leslie is indeed a historian of science and technology, but his approach does not fully share the social constructivism current in that field today. His emphasis is on the implications of the funding sources for the direction that research in basic science and technology took in the 1950s and 1960s in leading universities like MIT and Stanford. And his basic caution is that the military and security priorities associated with this structure all but guaranteed that the course of research was distorted in directions that would not have been chosen in a more traditional university research environment.

The book raises a number of important questions about the organization of knowledge and the appropriate role of universities in scientific research. In one sense the Vietnam War is a red herring, because the opposition it generated in the United States was very specific to that particular war. But most people would probably understand and support the idea that universities played a crucial role in World War II by discovering and developing new military technologies, and that this was an enormously important and proper role for scientists in universities to play. Defeating fascism and dictatorship was an existential need for the whole country. So the idea that university research is sometimes used and directed towards the interests of national security is not inherently improper.

A different kind of worry arises on the topic of what kind of system is best for guiding research in science and technology towards improving the human condition. In grand terms, one might consider whether some large fraction of the billions of dollars spent in military research between 1950 and 1980 might have been better spent on finding ways of addressing human needs directly — and therefore reducing the likely future causes of war. Is it possible that we would today be in a situation in which famine, disease, global warming, and ethnic and racial conflict were substantially eliminated if we had dedicated as much attention to these issues as we did to advanced nuclear weapons and stealth aircraft?

Leslie addresses STS directly in “Reestablishing a Conversation in STS: Who’s Talking? Who’s Listening? Who Cares?” (link). Donald MacKenzie’s Inventing Accuracy: A Historical Sociology of Nuclear Missile Guidance tells part of the same story with a greater emphasis on the social construction of knowledge throughout the process.

(I recall a demonstration at the University of Illinois against a super-computing lab in 1968 or 1969. The demonstrators were appeased when it was explained that the computer was being used for weather research. It was later widely rumored on the campus that the weather research in question was in fact directed towards considering whether the weather of Vietnam could be manipulated in a militarily useful way.)

Moral limits on war

World War II raised great issues of morality in the conduct of war. These were practical issues during the war, because that conflict approached “total war” — the use of all means against all targets to defeat the enemy. So the moral questions could not be evaded: are there compelling reasons of moral principle that make certain tactics in war completely unacceptable, no matter how efficacious they might be said to be?

As Michael Walzer made clear in Just and Unjust Wars: A Moral Argument with Historical Illustrations in 1977, we can approach two rather different kinds of questions when we inquire about the morality of war. First, we can ask whether a given decision to go to war is morally justified given its reasons and purposes. This brings us into the domain of the theory of just war: self-defense against aggression, and perhaps prevention of large-scale crimes against humanity. And second, we can ask whether the strategies and tactics chosen are morally permissible. This forces us to think about the moral distinction between combatant and non-combatant, the culpable and the innocent, and possibly the idea of military necessity. The principle of double effect comes into play here — the idea that unintended but predictable civilian casualties may be permissible if the intended target is a legitimate military target, and the unintended harms are not disproportionate to the value of the intended target.

We should also notice that there are two ways of approaching both issues — one on the basis of existing international law and treaty, and the other on the basis of moral theory. The former treats the morality of war primarily as a matter of convention, while the latter treats it as an expression of valued moral principles. There is some correspondence between the two approaches, since laws and treaties seek to embody shared norms about warfare. And there are moral reasons why states should keep their agreements, irrespective of their content. But the rationales of the two approaches are different.

Finally, there are two different kinds of reasons why a people or a government might care about the morality of its conduct of war. The first is prudential: “if we use this instrument, then others may use it against us in the future”. The convention outlawing the use of poison gas may fall in this category. So it may be argued that the conventions limiting the conduct of war are beneficial to all sides, even when there is a short-term advantage in violating the convention. The second is a matter of moral principle: “if we use this instrument, we will be violating fundamental normative ideals that are crucial to us as individuals and as a people”. This is a Kantian version of the morality of war: there are at least some issues that cannot be resolved based solely on consequences, but rather must be resolved on the basis of underlying moral principles and prohibitions. So executing hostages or prisoners of war is always and absolutely wrong, no matter what military advantages might ensue. Preserving the lives and well-being of innocents seems to be an unconditional moral duty in war. But likewise, torture is always wrong, not only because it is imprudent, but because it is fundamentally incompatible with treating people in our power in a way that reflects their fundamental human dignity.

The means of war-making chosen by the German military during World War II were egregious — for example, shooting hostages, murdering prisoners, performing medical experiments on prisoners, and unrestrained strategic bombing of London. But hard issues arose as well on the side of the alliance that fought against German aggression. Particularly hard cases during World War II were the campaigns of “strategic bombing” against cities in Germany and Japan, including the firebombing of Dresden and Tokyo. These decisions were taken despite fairly clear data showing that strategic bombing did not substantially impair the enemy’s industrial capacity to wage war, and despite the fact that its primary victims were innocent civilians. Did the Allies make a serious moral mistake in making use of this tactic? Did innocent children and non-combatant adults pay the price, in these most horrible ways, for the decision to incinerate cities? Did civilian leaders fail to exercise sufficient control to prevent their generals from inflicting pet theories, such as the presumed efficacy of strategic bombing, on whole urban populations?

And how about the decision to use atomic bombs against Hiroshima and Nagasaki? Were these decisions morally justified by the rationale that was offered — that they compelled surrender by Japan and thereby avoided tens of thousands of combatant deaths ensuing from invasion? Were two bombs necessary, or was the attack on Nagasaki literally a case of overkill? Did the United States make a fateful moral error in deciding to use atomic bombs to attack cities and the thousands of non-combatants who lived there?

These kinds of questions may seem quaint and obsolete in a time of drone strikes, cyber warfare, and renewed nuclear posturing. But they are not. As citizens we have responsibility for the acts of war undertaken by our governments. We need to be clear and insistent in maintaining that the use of the instruments of war requires powerful moral justification, and that there are morally profound reasons for demanding that war tactics respect the rights and lives of the innocent. War, we must never forget, is horrible.

Geoffrey Robertson’s Crimes Against Humanity: The Struggle for Global Justice poses these questions with particular pointedness. Also of interest is John Mearsheimer’s Conventional Deterrence.

How organizations adapt

Organizations do things; they depend upon the coordinated efforts of numerous individuals; and they exist in environments that affect their ongoing success or failure. Moreover, organizations are to some extent plastic: the practices and rules that make them up can change over time. Sometimes these changes happen as the result of deliberate design choices by individuals inside or outside the organization; a manager may, for example, alter the rules through which decisions are made about hiring new staff in order to improve the quality of work. And sometimes they happen through gradual processes that no one is specifically aware of. The question arises, then, whether organizations evolve toward higher functioning in response to signals from the environments in which they live, or whether organizational change is stochastic, lacking any gradient towards more effective functioning. Do changes within an organization add up over time to improved functioning? What kinds of social mechanisms might bring about such an outcome?

One way of addressing this topic is to consider organizations as mid-level social entities that are potentially capable of adaptation and learning. An organization has identifiable internal processes of functioning as well as a delineated boundary of activity. It has a degree of control over its functioning. And it is situated in an environment that signals differential success/failure through a variety of means (profitability, success in gaining adherents, improvement in market share, number of patents issued, …). So the environment responds favorably or unfavorably, and change occurs.

Is there anything in this specification of the structure, composition, and environmental location of an organization that suggests the possibility or likelihood of adaptation over time in the direction of improvement of some measure of organizational success? Do institutions and organizations get better as a result of their interactions with their environments and their internal structure and actors?

There are a few possible social mechanisms that would support the possibility of adaptation towards higher functioning. One is the fact that purposive agents are involved in maintaining and changing institutional practices. Those agents are capable of perceiving inefficiencies and potential gains from innovation, and are sometimes in a position to introduce appropriate innovations. This is true at various levels within an organization, from the supervisor of a custodial staff to a vice president for marketing to a CEO. If the incentives presented to these agents are aligned with the important needs of the organization, then we can expect that they will introduce innovations that enhance functioning. So one mechanism through which we might expect that organizations will get better over time is the fact that some agents within an organization have the knowledge and power necessary to enact changes that will improve performance, and they sometimes have an interest in doing so. In other words, there is a degree of intelligent intentionality within an organization that might work in favor of enhancement.
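To make the alignment point concrete, here is a toy simulation (my own sketch, not drawn from the organizational literature) in which a manager adopts an innovation only when their private payoff is positive. When private payoffs track organizational value, adopted innovations accumulate into improvement; when they do not, adoption is uncorrelated with organizational benefit. The "alignment" parameter and the payoff distributions are illustrative assumptions.

```python
import random

# Toy illustration of the incentive-alignment point (an assumption-laden
# sketch, not a model from the literature): a manager adopts an innovation
# only when their private payoff is positive, so the organization improves
# just insofar as private payoffs track organizational value.

random.seed(1)

def cumulative_performance(alignment, n_innovations=1000):
    total = 0.0
    for _ in range(n_innovations):
        org_value = random.gauss(0, 1)    # innovation's value to the organization
        noise = random.gauss(0, 1)        # the manager's unrelated concerns
        # Private payoff blends organizational value with unrelated noise.
        private_payoff = alignment * org_value + (1 - alignment) * noise
        if private_payoff > 0:            # adopt only if it pays the manager
            total += org_value
    return total

for a in (0.0, 0.5, 1.0):
    print(f"alignment {a:.1f}: cumulative performance {cumulative_performance(a):+.1f}")
```

With alignment near 1.0 the organization accumulates improvements; with alignment near 0.0 the adopted innovations are as likely to harm as to help.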

This line of thought should not be over-emphasized, however, because there are competing forces and interests within most organizations. Previous posts have focused on current organizational theory based on the idea of a “strategic action field” of insiders and outsiders who determine the activities of the organization (Fligstein and McAdam, Crozier; link, link). This framework suggests that the structure and functioning of an organization is not wholly determined by a single intelligent actor (“the founder”), but is rather the temporally extended result of interactions among actors in the pursuit of diverse aims. This heterogeneity of purposive actions by actors within an institution means that the direction of change is indeterminate; it is possible that the coalitions that form will bring about positive change, but the reverse is possible as well.

And in fact, many authors and participants have pointed out that it is often not the case that the agents’ interests are aligned with the priorities and needs of the organization. Jack Knight offers a persuasive critique of the idea that organizations and institutions tend to increase in their ability to provide collective benefits in Institutions and Social Conflict. CEOs who have a financial interest in a rapid stock price increase may take steps that worsen functioning for short-term market gain; supervisors may avoid work-flow innovations because they don’t want the headache of an extended change process; vice presidents may deny information to other divisions in order to enhance appreciation of the efforts of their own division. Here is a short description from Knight’s book of the way that institutional adjustment occurs as a result of conflict among players of unequal powers:

Individual bargaining is resolved by the commitments of those who enjoy a relative advantage in substantive resources. Through a series of interactions with various members of the group, actors with similar resources establish a pattern of successful action in a particular type of interaction. As others recognize that they are interacting with one of the actors who possess these resources, they adjust their strategies to achieve their best outcome given the anticipated commitments of others. Over time rational actors continue to adjust their strategies until an equilibrium is reached. As this becomes recognized as the socially expected combination of equilibrium strategies, a self-enforcing social institution is established. (Knight, 143)
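Knight's process can be given a toy formalization. The sketch below is my illustration, not Knight's own model: two agents repeatedly adjust their demands over a divisible good in a Zeuthen-style concession process, and the agent with the weaker fallback position, having more to lose from breakdown, concedes more often. The demands settle into an asymmetric division that, once recognized as the expected outcome, is self-enforcing. The fallback values and step size are arbitrary assumptions.

```python
# A toy Zeuthen-style concession process for a divide-the-dollar game,
# illustrating Knight's bargaining mechanism (my sketch, not his model;
# fallback values and step size are arbitrary assumptions).

def bargain(strong_fallback=0.3, weak_fallback=0.0, step=0.005):
    s_dem, w_dem = 1.0, 1.0             # each side starts by demanding it all
    while s_dem + w_dem > 1.0:          # demands incompatible: keep bargaining
        gap = s_dem + w_dem - 1.0       # what someone must still concede
        # Risk limit: the cost of conceding relative to the surplus a side
        # would lose in a breakdown. The side with the better fallback has
        # less to lose from breakdown, so it can afford to hold out.
        s_risk = gap / max(s_dem - strong_fallback, 1e-9)
        w_risk = gap / max(w_dem - weak_fallback, 1e-9)
        if s_risk < w_risk:
            s_dem -= step               # strong side concedes
        elif w_risk < s_risk:
            w_dem -= step               # weak side concedes
        else:
            s_dem -= step               # tie: both give a little
            w_dem -= step
    return s_dem, w_dem

s, w = bargain()
# Settles near 0.65 / 0.35: each side's fallback plus half the surplus,
# an asymmetric division that, once expected, is self-enforcing.
print(f"stable division: strong {s:.2f}, weak {w:.2f}")
```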

A very different possible mechanism is unit selection, where more successful innovations or firms survive and less successful innovations and firms fail. This is the premise of the evolutionary theory of the firm (Nelson and Winter, An Evolutionary Theory of Economic Change). In a competitive market, firms with low internal efficiency will have a difficult time competing on price with more efficient firms; so these low-efficiency firms will sooner or later go out of business. Here the question of “units of selection” arises: is it firms over which selection operates, or is it lower-level innovations that are the object of selection?
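The selection logic can likewise be illustrated with a minimal simulation, again my own sketch rather than Nelson and Winter's actual model, with arbitrary parameters. Firms carry fixed cost "routines"; firms with costs above the going price exit; entrants imitate surviving routines with error. Average efficiency ratchets up even though no individual firm learns, which is the sense in which selection rather than intention drives the improvement.

```python
import random

# A minimal selection sketch in the spirit of Nelson and Winter's
# evolutionary story (an illustration, not their model; all parameters
# are arbitrary). Firms carry fixed "routines" that determine unit cost;
# loss-making firms exit; entrants imitate survivors with error.

random.seed(42)
N_FIRMS, PERIODS, NOISE = 100, 50, 0.05

firms = [random.uniform(0.5, 1.5) for _ in range(N_FIRMS)]   # unit costs
print(f"mean unit cost, period 0: {sum(firms) / len(firms):.2f}")

for _ in range(PERIODS):
    price = sum(firms) / len(firms)               # crude proxy: price tracks average cost
    survivors = [c for c in firms if c <= price]  # loss-making firms exit
    while len(survivors) < N_FIRMS:               # entry by imitation with error
        parent = random.choice(survivors)
        survivors.append(max(0.1, parent + random.gauss(0, NOISE)))
    firms = survivors

print(f"mean unit cost, period {PERIODS}: {sum(firms) / len(firms):.2f}")
```

Note that the imitated routine, not the firm, is what persists across entry and exit here; that is one way of making the units-of-selection question vivid.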

Geoffrey Hodgson provides a thoughtful review of this set of theories here, part of what he calls “competence-based theories of the firm”. Here is Hodgson’s diagram of the relationships that exist among several different approaches to the study of the firm.

The market mechanism does not work very well as a selection mechanism for some important categories of organizations — government agencies, legislative systems, or non-profit organizations. This is so because the criterion of selection is profitability and efficiency within a competitive market, and government and non-profit organizations are largely insulated from the workings of a market.

In short, the answer to the fundamental question here is mixed. There are factors that unquestionably work to enhance effectiveness in an organization. But these factors are weak and defeasible, and the countervailing factors (internal conflict, divided interests of actors, slackness of the corporate marketplace) leave open the possibility that institutions change without evolving in a consistent direction. And the glaring dysfunctions that have afflicted many organizations, both corporate and governmental, make this conclusion even more persuasive. Perhaps what demands explanation is the rare case where an organization achieves a high level of effectiveness and consistency in its actions, rather than the many cases that come to mind of dysfunctional organizational activity.

(The examples of organizational dysfunction that come to mind are many — the failures of nuclear regulation of the civilian nuclear industry (Perrow, The Next Catastrophe: Reducing Our Vulnerabilities to Natural, Industrial, and Terrorist Disasters); the failure of US anti-submarine warfare in World War II (Cohen, Military Misfortunes: The Anatomy of Failure in War); and the failure of chemical companies to ensure safe operations of their plants (Shrivastava, Bhopal: Anatomy of Crisis). Here is an earlier post that addresses some of these examples; link. And here are several earlier posts on the topic of institutional change and organizational behavior; link, link.)
