Advancing high-energy physics in the United States

Here is an interesting and important scientific question: where is high-energy physics going? What future discoveries are possible in the field? And what strategies are most likely to bring these breakthroughs about? HEP is the field of physics that studies sub-atomic particles — muons, quarks, neutrinos, and bosons, as well as the familiar neutrons, protons, and electrons — and their interactions. Research in this field involves colliding sub-atomic particles at very high energies to create conditions permitting observation of new particles and properties. (The announcement at Europe’s CERN particle accelerator of the observation of the Higgs boson is a prime example of such a breakthrough.) Perhaps the most striking feature of HEP is that it requires multi-billion-dollar tools (particle accelerators) and scientific teams (armies of advanced experimental physicists) to have any hope of making progress in the field. Progress in high-energy physics does not happen in a garage or a university laboratory; it requires massive public investments in research facilities and scientific teams, organized around specific research objectives. In the United States these investments are largely made through the national laboratories (link) and through collaboration with international research facilities such as CERN.

The question I want to address here is this: who should be seriously interested in where research in high-energy physics is going? It should be emphasized that I mean “interest” here in a specific sense: “materially, politically, or professionally concerned about the choices that are made”. Who are the actors who contribute to setting the agenda for future scientific work in high-energy physics? To what extent do the scientists themselves determine the future of their scientific field? Is this primarily an academic and scientific question, a question of public policy, a question of national prestige, or possibly a question of economic growth and development?

One answer to “who should be interested” is straightforward and obvious: the small network of world-class experimental and theoretical physicists in the country whose scientific careers are devoted to progress and discovery in the field of high-energy physics. Every physicist who teaches physics in a university has an interest, in a sense, in the future of the field, and a small number of highly trained physicists have strong scientific intuitions about where future advances are most likely to be found. Moreover, only a relatively small number of expert physicists have the abilities, and the laboratory capacities, that give them a realistic opportunity to contribute to progress in the field. So the expert scientific community, including experimental and theoretical physicists, computational experts, and instrumentation specialists, has highly informed ideas about where meaningful progress in physics is possible.

An institution with a definite interest in the question is the formal organization that represents the collective scientific practice of American physics — the American Physical Society (APS) (link). The APS is a prestigious organization that contributes specialized advice to government and the public on a range of questions, from the feasibility of anti-missile defense to the level of risk associated with global climate change (link).

An important exercise directly involved in surveying the horizon for future physics advances is the Snowmass conference (link) (or more formally, the Particle Physics Community Planning Exercise). Snowmass is organized and managed by the APS, and it has formal independence from the Department of Energy. Here is a thumbnail description of Snowmass:

The Particle Physics Community Planning Exercise (a.k.a. “Snowmass”) is organized by the Division of Particles and Fields (DPF) of the American Physical Society. Snowmass is a scientific study. It provides an opportunity for the entire particle physics community to come together to identify and document a scientific vision for the future of particle physics in the U.S. and its international partners. Snowmass will define the most important questions for the field of particle physics and identify promising opportunities to address them. (Learn more about the history and spirit of Snowmass here “How to Snowmass” written by Chris Quigg). The P5, Particle Physics Project Prioritization Panel, will take the scientific input from Snowmass and develop a strategic plan for U.S. particle physics that can be executed over a 10 year timescale, in the context of a 20-year global vision for the field. (link)

As noted, Snowmass is an arm of the APS, with close informal ties to the Department of Energy and its advisory panel HEPAP. For this reason we might expect it to be a reasonably independent process, developing its assessments and recommendations based on the best scientific expertise and judgment available. But we can also ask whether it succeeds in the task of formulating a clear set of visions and priorities for the future of high-energy physics research, or instead presents a grab-bag of the particular views of its participating scientists. If the latter, does the Snowmass process succeed in influencing or guiding the decision-making that others will follow in setting priorities and budgets for future investments in physics research? So there is an important question for policy-institution analysis even at this early stage of our consideration: how “rational” is the Snowmass process, and how effective is it at distilling a credible scientific consensus about the future direction of high-energy physics research? This is a question for policy studies and organizational sociology, similar to many studies in the field of science, technology, and society (STS).

Snowmass in turn feeds into a more formal part of DoE’s decision-making process, the P5 process (Particle Physics Project Prioritization Panel), which prepares a decennial report and strategic vision for high-energy physics research over the coming decade. This report is then conveyed to the DoE advisory committee and to DoE’s director. (Here is a summary of the 2007-08 P5 report (link), and here is a link to the 2014 P5 report (link). In 2020 HEPAP conducted a review of the recommendations of the 2014 report and the progress made towards those priorities (link).) Here too we can ask the organizational question: how effective is the P5 process at defining the best possible scientific consensus on priorities for the field of high-energy physics research?

The National Academies of Sciences, Engineering, and Medicine (link) is another organization with the interest and capability to develop specific assessments and recommendations about the future of high-energy physics and the research investments most likely to lead to important advances and discoveries. Here is a “consensus report” prepared in 2006 by a group of leading physicists for the NASEM Committee on Elementary Particle Physics in the 21st Century (link).

Scientists are actors in the process of priority setting for the future of physics research, then. But it is clear that scientists do not ultimately make these decisions. Given that programs of research in high-energy physics require multi-billion-dollar investments, the Federal government is a major decision-maker in priority-setting for the future of physics. Several Federal agencies have a primary interest in setting the direction of future research in high-energy physics. The Department of Energy is the largest source of funding — and therefore priorities — for future investments in high-energy physics research, including the Deep Underground Neutrino Experiment (DUNE) hosted by Fermilab outside Chicago and the now-defunct plan for the Superconducting Super Collider (SSC) in Texas, launched in the 1980s and terminated in 1993. The Office of High Energy Physics (link) is ultimately responsible for decisions about major capital investments in this field, with budget oversight from Congressional committees. The Office of National Laboratories has oversight over the national laboratories (Fermilab, Argonne, Ames, Brookhaven, and several others). The DoE process is inherently agency-driven, given that it is concerned with a small number of highly impactful investment decisions. One such decision was the commitment, made around 2010, to build DUNE at a cost of several billion dollars. So here again we have an organizational problem for research: how are decisions made within the Office of High Energy Physics? Are the director and staff simply a transparent transmission belt from the physics community to DoE priorities? Or do agency officials have agendas of their own?

The Office of High Energy Physics is supported by an advisory committee of senior scientists, the High Energy Physics Advisory Panel (HEPAP). This committee exists to provide expert scientific advice to OHEP about priorities, goals, and scientific strategies. It is unclear whether HEPAP is able to fulfill this role given its current functioning and administration. Do members of HEPAP have the opportunity for free and open discussion of priorities and projects, or is the agenda of the committee effectively driven by the OHEP director and staff?

Congress is an important actor in the formulation of science policy in general, and policy in the field of high-energy physics in particular, through its control of the Federal budget. Some elected officials also have an interest in the question of the future of physics, for a different reason. They believe that there are national interests at stake in the future development of physics, and they believe that world-class scientific discovery and progress are important components of global prestige. Perhaps the US will be thought less of a scientific superpower than Japan or Europe in twenty years because the major advances in particle physics have taken place at CERN and advanced research installations in Japan. To maintain the edge, the elected official may have an interest in supporting budget decisions that boost the strength and effectiveness of US science — including high-energy physics. Small investments guarantee minimal progress, whereas large investments make significant breakthroughs much more likely.

There are still two constituencies to be considered: citizens and businesses. Do ordinary citizens have an interest in the future of high-energy physics? Probably not. No one has made the case for HEP that has been made for the planetary space program — that research dollars spent on planetary space vehicles and exploration will lead to currently unpredictable but valuable technology breakthroughs that will “change daily life as we know it”. No “teflon story” is likely to emerge from the DUNE project. Neutrinos, hadrons, and their like, as well as the accelerators, detectors, and computational equipment needed to evaluate their behavior, have little likelihood of leading to practical spin-off technologies. As a first approximation, then, ordinary citizens have little interest — in either the economist’s sense or the psychologist’s sense — in which strategies are likely to be most fruitful for the progress of high-energy physics.

The business community is different from the citizen and consumer segment for a familiar reason. Like citizens and consumers, business leaders have no inherent interest in the progress or future of high-energy physics. But as manufacturers of high-performance cryogenic electromagnet systems or instrumentation systems, they have a very distinct interest in supporting (and lobbying for) the establishment of major new technology-intensive infrastructure projects. This is similar to the defense industry: it is not that aircraft manufacturers want military conflict, but they recognize that building military aircraft is a profitable business strategy, so more military spending on high-tech weapons is better than less from the point of view of defense contractors. The large cryogenic electromagnet producer has a very specific business interest in seeing an investment in a large-scale neutrino experiment, because it will lead to expenditures in the range of hundreds of millions of dollars on electromagnets once construction begins.

Now that we’ve surveyed the players, what should we expect when it comes to science policy and strategy? Should we expect a highly rational process, in which “scientific aims and goals” are debated and finalized by the scientific experts solicited by the American Physical Society and Snowmass; a report from the P5 process is received by the quasi-public body HEPAP that advises DoE on its strategies, and evaluated on a clear and rational basis; recommendations are conveyed to DoE officials, who introduce a note of budget realism but strive to craft a set of strategic goals for the coming decade that largely incorporate the wisdom of the APS/Snowmass report; DoE executives are able to make a compelling case for the public good to key legislators; and budget commitments are made to accomplish the top 5 out of 8 recommendations of the Snowmass report? Do we get a reasonably coherent and scientifically defensible set of strategies and investments out of this process?

The answer is likely to be clear to any social scientist. The clean lines of “recommendation, collection of expert scientific opinion, rational assessment, disinterested selection of priorities” will quickly be blurred by well-known organizational and political dysfunctions: conflicts of interest and agenda within agencies; industry and agency capture of the big-science agenda; conflicting interests among stakeholders; confusion within policy debates between long-term and medium-term objectives; imperfect communication within and across organizational lines; and a powerful interest on the part of local stakeholders in capturing part of the benefits of the project as private incomes. It seems illogical that parochial business interests in Chicago or Japan should influence the decision whether to fund the International Linear Collider (link); but this appears in fact to be the case. In other words, the clean and rational decision-making process we would like to see is broken apart by conflicts of interest and priority among various powerful actors. And the result may bear only a faint resemblance to the best judgments about “good science” that were offered by the scientific advisors in the early stages of the process. Cohen, March, and Olsen’s “garbage can model” of organizational decision-making seems relevant here (link); or, as Charles Perrow describes the process in Complex Organizations (2014):

Goals may thus emerge in a rather fortuitous fashion, as when the organization seems to back into a new line of activity or into an external alliance in a fit of absentmindedness. (135)

No coherent, stable goal guided the total process, but after the fact a coherent stable goal was presumed to have been present. It would be unsettling to see it otherwise. (135)

United States after the failure of democracy

Democracy is at risk in the United States. Why do leading political observers like Steven Levitsky and Daniel Ziblatt (How Democracies Die) fear for the fate of our democracy? Because anti-democratic forces have taken over one of America’s primary political parties — the GOP; because GOP officials, governors, and legislators openly conspire to subvert future elections; because GOP activists and officials work intensively in state legislatures to restrict voting rights for non-Republican voters, including people of color and city dwellers; and because the Supreme Court no longer protects the Constitution and the rights that it embodies.

Here is how Levitsky and Ziblatt summarize their urgent concerns about the future of our democracy in a recent Atlantic article (link):

From November 2020 to January 2021, then, a significant portion of the Republican Party refused to unambiguously accept electoral defeat, eschew violence, or break with extremist groups—the three principles that define prodemocracy parties. Because of that behavior, as well as its behavior over the past six months, we are convinced that the Republican Party leadership is willing to overturn an election. Moreover, we are concerned that it will be able to do so—legally. That’s why we serve on the board of advisers to Protect Democracy, a nonprofit working to prevent democratic decline in the United States. We wrote this essay as part of “The Democracy Endgame,” the group’s symposium on the long-term strategy to fight authoritarianism.

Any reader of the morning newspaper understands how deadly serious this threat is. Many residents of Michigan find it absolutely chilling that the most recently appointed GOP canvasser for Wayne County has said publicly that he would not have certified the election results for the county in 2020 — with no factual basis whatsoever (link). With GOP officials in many states indicating their corrupt willingness to subvert future elections, how can one have much hope for the future of our democracy?

So, tragically, it is very timely to consider this difficult question: what might an anti-democratic authoritarian system look like in the United States? Sinclair Lewis considered this question in 1935, and his portrait in It Can’t Happen Here was gloomy. Here is a snippet of Lewis’s vision of a fascist dictatorship in America following the election of the unscrupulous populist candidate Berzelius Windrip and his paramilitary followers, the Minute Men:

At the time of Windrip’s election, there had been more than 80,000 relief administrators employed by the federal and local governments in America. With the labor camps absorbing most people on relief, this army of social workers, both amateurs and long-trained professional uplifters, was stranded.

The Minute Men controlling the labor camps were generous: they offered the charitarians the same dollar a day that the proletarians received, with special low rates for board and lodging. But the cleverer social workers received a much better offer: to help list every family and every unmarried person in the country, with his or her finances, professional ability, military training and, most important and most tactfully to be ascertained, his or her secret opinion of the M.M.’s and of the Corpos in general.

A good many of the social workers indignantly said that this was asking them to be spies, stool pigeons for the American OGPU. These were, on various unimportant charges, sent to jail or, later, to concentration camps—which were also jails, but the private jails of the M.M.’s, unshackled by any old-fashioned, nonsensical prison regulations.

In the confusion of the summer and early autumn of 1937, local M.M. officers had a splendid time making their own laws, and such congenital traitors and bellyachers as Jewish doctors, Jewish musicians, Negro journalists, socialistic college professors, young men who preferred reading or chemical research to manly service with the M.M.’s, women who complained when their men had been taken away by the M.M.’s and had disappeared, were increasingly beaten in the streets, or arrested on charges that would not have been very familiar to pre-Corpo jurists. (ch xvii)

But perhaps this is extreme. Foretelling the future is impossible, but here are several features that seem likely enough given the current drift of US politics, if anti-democratic authoritarian politicians seize control of our legislative and executive offices.

Undermining of constitutional liberties

  • weakening of freedom of the press through additional libel-law restrictions, bonds, and other “chilling” legal mechanisms
  • weakening of freedom of thought and speech through legislation and bullying concerning critical / unpopular doctrines — “Critical Race Theory”, “Queer Studies”, “Communist/anarchist thought”, …
  • weakening of freedom of association through extension of police surveillance, police violence, “anti-riot” legislation limiting demonstrations, vilification by leaders, trolls, and social media of outspoken advocates of unpopular positions

Further restrictions on voting rights and voter access to elections

  • extreme gerrymandering to ensure one-party dominance
  • unreasonable voter ID requirements
  • limitations on absentee voting
  • voter intimidation at the polls

The imposition, by minority-party-dominated legislatures, of laws and mandates that are distinctly opposed by the majority of citizens

  • repressive and unconstitutional anti-abortion legislation
  • open-carry firearms legislation

Implementation of an anti-regulation agenda that gives a free hand to big business and other powerful stakeholders

  • weakening of regulatory agencies through reduction of legal mandate and budget

Intimidation of dissenters through violent threats, paramilitary demonstrations, and the occasional murder

  • encouragement of social violence by followers of the authoritarian leader
  • persecution through informal and sometimes formal channels of racial and social minorities — immigrants, people of color, Asians, LGBTQ and transgender people, …
  • threats of violence and murder against public officials, journalists, and dissidents

These are terrible outcomes, and taken together they represent the extinction of liberal democracy: the integrity of constitutionally defined equal rights for all individuals, and the principle of majoritarian public decision-making. But what about the extremes that authoritarian states have often reached in the past century — wholesale persecution of “enemies of the state”, imprisonment of dissidents, forcible dissolution of opposition political organizations, political murder, and wholesale use of paramilitary organizations to achieve the political goals of the authoritarian rulers? What about the secret police, the Gulag, and the concentration camps? What are the prospects for these horrific outcomes in the United States? How likely is the descent imagined by Sinclair Lewis into wholesale fascist dictatorship?

One would like to say these extremes are unlikely in the US — that US authoritarianism would be a “soft dictatorship” like Orban’s rather than the hard dictatorship of a Putin, involving rule by fear, violence, imprisonment, and intimidation. But history is not encouraging. We have seen the decline of one after another of the “guard rails of democracy” in just the past five years, and we have seen the actions of a president who clearly cared only about his own power and will. So where exactly should we find optimism for the idea that an American Mussolini or Windrip would never commit the crimes of the dictators of the twentieth century? Isn’t there a great deal of truth in Acton’s maxim, “power corrupts; and absolute power corrupts absolutely”? Here is Acton’s observation in its more extended context; it is very specific in its advice that we should not trust “great leaders” to refrain from great crimes:

If there is any presumption it is the other way, against the holders of power, increasing as the power increases. Historic responsibility has to make up for the want of legal responsibility. Power tends to corrupt, and absolute power corrupts absolutely. Great men are almost always bad men, even when they exercise influence and not authority, still more when you superadd the tendency or the certainty of corruption by authority. There is no worse heresy than that the office sanctifies the holder of it.

Would any of us want to trust our fate as free, equal, and dignified persons to the kindness and democratic values of a Greg Abbott, Ron DeSantis, or Donald Trump? 

The best remedy against these terrible outcomes is to struggle for our democracy now. We must give full and deep support to politicians and candidates who demonstrate a commitment to democratic values, and we must reject the very large number of GOP politicians who countenance the subversion of our democracy through their adherence to the lies of the Trump years. This is not a struggle between “liberals” and “conservatives”; it is a struggle between those who value our liberal democracy and those who cynically undermine and disparage it. And perhaps we will need to draw on the example and the courage of the men and women of Belarus, Myanmar, Thailand, and Hong Kong, who have been willing to stand up against the usurpation of their democratic rights through massive peaceful demonstrations.

The great threat to democracy

Democracies have fallen to strongmen, tyrants-in-waiting, bullies, thugs, and spewers-of-bombast. But these powerful personalities are not the greatest threat to democracy today. The greatest threat is a loss of trust in the institutions and offices of a democratic society on the part of its citizens. And what are these institutions? Courts, judges, police; legislatures, representatives, agencies; election officials and procedures; tax authorities; presidents and governors.

Here is a recent Pew survey on trust in government that provides disturbing reading (link).

As the report emphasizes, the current period is a low point in public confidence in government since the 1950s: over 75% of the public expressed trust and confidence in government during the Johnson administration, compared with under 30% during the Trump administration. (At present only 9% of Republican and Republican-leaning voters express trust in government.) A report from the OECD reflecting 2019 data (link) indicates a comparable but less drastic fall in trust in European democracies as well.

In 2014 John Tierney undertook to analyze variations in trust in government across the states of the United States (link). The variation across the states is striking, from North Dakota and Wyoming, with levels of trust in excess of 75%, to Illinois at about 28%. Tierney tries to identify some of the factors that would help explain this variation. (It would be very interesting to re-examine Tierney’s analysis and the Gallup data for the past several years; it seems likely that the data have changed substantially since 2014.)

Why has there been such a precipitous decline in trust in our democratic institutions? This is not a mystery. Right-wing media, cynical politicians, lying YouTubers, passionate conspiracy theorists … anti-democratic activists and opportunists have taken every opportunity to undermine, discredit, and subvert our political institutions. Right-wing politicians, cable news pundits, and social media voices seek to further their own careers and fortunes by actively generating suspicion, doubt, and mistrust of virtually everyone they can. Tucker Carlson is only the most visible example of this cynical and dishonest approach.

Why is this odious and deliberate strategy of cultivating mistrust so corrosive to the future of democracy? Because our democracy depends crucially on the endurance and fairness of our institutions; and institutions have no underlying, enduring source of stability. There is no solid granite underlying the judiciary or the system of voting; an institution lacks a “skeleton”. Unlike a towering modern building, which maintains the integrity of its steel girders long after its external architectural elements have degraded, an institution is more like a collective but real illusion. When we stop believing in the institution, it immediately begins to die. Institutions depend upon the continuing support and adherence of the individuals who fall within their scope. In a sense, institutions have more in common with the social reality of “money” than with that of a coral reef. In order for a paycheck for $1,000 to be real for me, I must also believe and understand that it is real for other people — that 1/100 of that check will buy a meal for two at Wendy’s and that 1/4 of it will be accepted by my landlord as payment for a week’s rent. Without that collective ongoing belief in money, the currency has no social reality whatsoever. Likewise, citizens will engage in a system of voting only if they believe that the votes will be counted honestly and the candidate with the most votes will be sworn into office.

What does it take to sustain trust in an institution? One favorable feature supporting trust is institutional transparency. “Blind” trust is hard to sustain; trust is more stable when it is based on a continuing ability of participants to see how the institution is functioning, how its actions and outcomes are brought about, and how its officials and staffers conduct their work. This is the reason for “sunshine” laws about public institutions. And the smaller the gap between the private and public reasons for official action, the more reason citizens have to place confidence in their government.

A related feature of a trustworthy institution is the reputation for integrity possessed by its officers. If most citizens in a state have a fairly direct personal relationship with a handful of legislators, and if they believe, based on their acquaintance, that these legislators are honest and committed to the public good, then they are more likely to have confidence in the institution as well. (This is one of the arguments made by Tierney in the 2014 Atlantic article mentioned above.) Conversely, if legislators engage in behavior that makes the citizen doubt their integrity (corruption, lying, conflict of interest), then citizens’ trust in the institution is likely to fall.

A third feature of governments that instills trust in their citizens (highlighted by the OECD report above) is competence and effectiveness in performing the tasks needed to secure the common good. The OECD report summarizes its recommendations in these terms:

OECD evidence shows that government’s values, such as high levels of integrity, fairness and openness of institutions are strong predictors of public trust. Similarly, government’s competence – its responsiveness and reliability in delivering public services and anticipating new needs – are crucial for boosting trust in institutions.

When governments fail in crucial tasks affecting the health and safety of large numbers of citizens — for example in managing COVID vaccination programs, or administering disaster relief after natural disasters — it is understandable that public trust in government would fall.

Several earlier posts (link, link, link) have explored the “moral emotions of democracy” and how to enhance them. Plainly, cultivating trust in our democratic institutions is an urgent need if our democracy is to survive. And, like a house of cards or a carefully balanced pile of field stones, our institutions will only be stable if there is a persistent pattern of mutual reinforcement among institutional rules, official behavior, and citizen awareness and trust in government.

(Pew has also done some important survey work on the challenge of regaining trust in democracy; link. Also of interest is a 2011 research conference paper by Juan Castillo, Daniel Miranda, and Pablo Torres exploring the connections that appear to exist between Social Dominance Orientation, Right-Wing Authoritarian Personality, and the level of trust individuals have in government; link. SDO and RWA are explored in an earlier post.)

(See Paul Krugman’s very ominous diagnosis of the state of our democracy; link.)

Gross inequalities in a time of pandemic

Here is a stunning juxtaposition in the April 2 print edition of the New York Times. Take a close look. The top panel reports that the city and the region are enduring unimaginable suffering and stress caused by the COVID-19 pandemic, with 63,300 victims and 2,624 deaths (as of April 4) — and with hundreds of thousands facing immediate, existential financial crisis because of the economic shutdown. And only eight miles away, as the Sotheby’s “Prominent Properties” half-page advertisement proclaims, home buyers can find secluded luxury, relaxation, and safety in residential estates priced at $32.9 million and $21.5 million. In case the reader missed the exclusiveness of these properties, the advertisement mentions that they are “located in one of the nation’s wealthiest zip codes”. And, lest the prospective buyer be concerned about maintaining social isolation in these difficult times, the ad notes that these are gated estates — in fact, the $33M property is located on “the only guard gated street in Alpine”.

Could Friedrich Engels have found a more compelling illustration of the fundamental inhumanity of the inequalities that exist in twenty-first century capitalism in the United States? And there is no need for rhetorical exaggeration — here it is in black and white in the nation’s “newspaper of record”.

There are many compelling reasons supporting Elizabeth Warren’s proposal for a wealth tax. But here is one more: it is morally appalling, even gut-churning, to realize that $33 million for a family home (35,000 square feet, tennis court, and indoor basketball court) is a reasonable “ask” for the super-wealthy in our country — the one-tenth of one percent who have ridden the crest of surging stock markets and finance and investment firms to a level of wealth that is literally unimaginable to at least 95% of the rest of the country.

Here is the heart of Warren’s proposal for a wealth tax (link):

Rates and Revenue

  • Zero additional tax on any household with a net worth of less than $50 million (99.9% of American households)
  • 2% annual tax on household net worth between $50 million and $1 billion
  • 4% annual Billionaire Surtax (6% tax overall) on household net worth above $1 billion
  • 10-Year revenue total of $3.75 trillion
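
To make the bracket arithmetic concrete, here is a minimal sketch in Python of how the proposed schedule would apply to a single household’s net worth. The function and the example figures are my own illustration of the published rates, not part of the proposal itself:

```python
def warren_wealth_tax(net_worth: float) -> float:
    """Annual tax under the proposed schedule (all amounts in dollars)."""
    FIFTY_MILLION = 50e6
    ONE_BILLION = 1e9
    tax = 0.0
    if net_worth > FIFTY_MILLION:
        # 2% on the slice of net worth between $50 million and $1 billion
        tax += 0.02 * (min(net_worth, ONE_BILLION) - FIFTY_MILLION)
    if net_worth > ONE_BILLION:
        # 4% surtax on the slice above $1 billion (6% overall on that slice)
        tax += 0.06 * (net_worth - ONE_BILLION)
    return tax

print(warren_wealth_tax(40e6))   # 0.0 -- below the $50M threshold
print(warren_wealth_tax(100e6))  # 1,000,000 -- 2% of the $50M above the threshold
print(warren_wealth_tax(2e9))    # 79,000,000 -- 2% of $950M plus 6% of $1B
```

On this schedule a household at or below the 99.9th percentile owes nothing at all, and even a $2 billion fortune owes under 4% of its net worth in a given year.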

Are we all in this together, or not? If we are, let’s share the wealth. Let’s all pay our fair share. Let’s pay for the costs of fighting the pandemic and saving tens of millions of our fellow citizens from financial ruin, eviction, malnutrition, and family crisis with a wealth tax on billionaires. They can afford it. The “65′ saltwater gunite pool” is not a life necessity. The revenue estimate for the Warren proposal — $3.75 trillion — is roughly comparable to the current estimate of what it will cost the US economy to overcome the pandemic, protect the vulnerable, and restart the economy. Both equity and the current crisis support such a plan.

Here is some background on the rising wealth inequalities we have witnessed in recent decades in the United States. Leiserson, McGrew, and Kopparam provide an excellent and data-rich survey of the system of wealth inequalities in the United States in “The distribution of wealth in the United States and implications for a net worth tax” (link). The increase in wealth inequality since 1989 is dramatic: the top 10% owned about 67% of all wealth in 1989; by 2016 this had risen to 77%.

A second graph gives a snapshot for 2016 (link). Both income and wealth are severely unequal, but wealth is substantially more so. The top quintile owns almost 90% of the wealth in the United States, with the top 1% owning about 40% of all wealth.

The website Inequality.org provides a historical look at the growth of inequalities of wealth in the US (link). Consider its graph of the wealth shares of the top 1%, .1%, and .01% of the US population over a century; it is eye-popping. Beginning in roughly 1978 the shares of the very top segments of the US population began to rise, and the trend continued through 2012 — with no end in sight. The top 1% in 2012 owned 41% of all wealth; the top 0.1% owned 21%; and the top 0.01% owned 11%.

We need a wealth tax, and Elizabeth Warren put together a convincing and rational plan. This is not a question of “soaking the rich”; it is a question of basic fairness. Our economy and society have functioned as an express elevator to ever-greater fortunes for the few, with essentially no improvement for 60-80% of the rest of America. An economy is a system of social cooperation, requiring the efforts of all members of society. But the benefits of our economic system have gone ever-more disproportionately to the rich and the ultra-rich. That is fundamentally unfair. Now is the time to bring equity back into our society and politics. If Mr. Moneybags can afford a $33M home in New Jersey, he can afford to pay a small tax on his wealth.

It is interesting to note that social scientists and anthropologists are beginning to study the super-rich as a distinctive group. A fascinating source is Iain Hay and Jonathan Beaverstock, eds., Handbook on Wealth and the Super-Rich. Especially relevant is Chris Paris’s contribution, “The residential spaces of the super-rich”. Paris writes:

Prime residential real estate remains a key element in super-rich investment portfolios, both for private use through luxury consumption and as investment items with anticipated long-term capital gain, often untaxed as properties are owned by companies rather than individuals. Most of the homes of the super-rich are purchased using cash, specialized financial instruments and/or through companies, and ‘the higher the price of the property, the less likely buyers were to arrange traditional mortgage financing for the home acquisition. Whether buyers are foreign or domestic, cash transactions predominate at the higher end of the market’ (Christie’s, 2013, p. 14). Such transactions, therefore, never enter ‘national’ housing accounting systems and play no part in many accounts of aggregate ‘national’ house price trends. For example, the analysis of house price trends in the Joseph Rowntree Foundation UK Housing Review is based on data relating to transactions using mortgages or loans, and EU and OECD comparisons between countries are based on the same kinds of data (Paris, 2013b).

Also fascinating in the volume is Emma Spence’s study of the super-rich at sea in their super-yachts, “Performing wealth and status: observing super-yachts and the super-rich in Monaco”:

In this chapter I focus upon the super-yacht as a key tool for exploring how performances of wealth are made visible in Monaco. A super-yacht is a privately owned and professionally crewed luxury vessel over 30 metres in length. An average super-yacht, at approximately 47 metres in length, costs around €30 million to buy new, operates with a permanent crew of ten, and costs around €1.8 million per year to run. Larger super-yachts such as Motor Yacht (M/Y) Madame Gu (99 metres in length), or the current largest super-yacht in the world M/Y Azzam (180 metres in length), cost substantially more to build and to run. The price to charter (rent) a super-yacht also varies considerably with size, age and reputation of the shipyard in which it was built. For example, a typical 47-metre yacht can range between €100 000 to €600 000 per week to charter, plus costs. At the most exclusive end of the super-yacht charter industry costs are much higher. M/Y Solange, for example, is an 85-metre newly built yacht (2013) from reputable German shipyard Lürssen, which operates with 29 full-time crew, and is priced at €1 million plus costs to charter per week. The super-yacht industry is worth an estimated €24 billion globally (Rutherford, 2014, p. 51).

Responsible innovation and the philosophy of technology

Several posts here have focused on the philosophy of technology (link, link, link, link). A simple definition of the philosophy of technology might go along these lines:

Technology may be defined broadly as the sum of the tools, machines, and practical skills available at a given time in a given culture through which human needs and interests are satisfied and the interplay of power and conflict furthered. The philosophy of technology offers an interdisciplinary approach to better understanding the role of technology in society and human life. The field raises critical questions about the ways that technology intertwines with human life and the workings of society. Do human beings control technology, and for whose benefit? What role does technology play in human wellbeing and freedom? What role does technology play in the exercise of power? What issues of ethics and social justice are raised by various technologies? How can citizens within a democracy best ensure that the technologies we choose will lead to better human outcomes and expanded capacities in the future?

One of the issues that arises in this field is the question of whether there are ethical principles that should govern the development and implementation of new technologies. (This issue is discussed further in an earlier post; link.)

One principle of technology ethics seems clear: policies and regulations are needed to protect the future health and safety of the public. This is the same principle that serves as the ethical basis of government regulation of current activities, justifying coercive rules that prevent pollution, toxic effects, fires, radiation exposure, and other clear harms affecting the health and safety of the public.

Another principle might be understood as exhortatory rather than compulsory: the general recommendation that private actors should pursue technologies that make some positive contribution to human welfare. This principle is plainly less universal and obligatory than the “avoid harm” principle; many technologies are chosen because their inventors believe they will entertain, amuse, or otherwise please members of the public, and will thereby permit generation of profits. (Here is a discussion of the value of entertainment; link.)

A more nuanced exhortation is the idea that inventors and companies should subject their technology and product innovation research to broad principles of sustainability. Given that large technological changes can have very large environmental and collective effects, we might think that companies and inventors should pay attention to the large challenges our society faces, now and in the foreseeable future: addiction, obesity, CO2 emissions, plastic waste, erosion of privacy, the spread of racist politics, fresh water depletion, and information disparities, to name several.

These principles fall within the general zone of the ethics of corporate social responsibility. Many companies pay lip service to the social-benefits principle and the sustainability principle, though it is difficult to see evidence of the effectiveness of this motivation. Business interests often seem to trump concerns for positive social effects and sustainability — for example, in the pharmaceutical industry and its involvement in the opioid crisis (link).

It is in the context of these reflections about the ethics of technology that I was interested to learn of an academic and policy field in Europe called “responsible innovation”. This is a network of academics, government officials, foundations, and non-profit organizations working together to try to induce more directionality in technology change (innovation). René von Schomberg and Jonathan Hankins’s recently published volume International Handbook on Responsible Innovation: A Global Resource provides an in-depth look at the thinking, research, and policy advocacy that this network has accumulated. A key actor in the advancement of this field has been the Bassetti Foundation (link) in Milan, which has made the topic of responsible innovation central to its mission for several decades. The Journal of Responsible Innovation provides a look at continuing research in the field.

The primary locus of discussion and application in the field of RRI has been within the EU. There is not much evidence of involvement from United States actors in this movement, though the Virtual Institute of Responsible Innovation at Arizona State University has received support from the US National Science Foundation (link).

Von Schomberg describes the scope and purpose of the RRI field in these terms:

Responsible Research and Innovation is a transparent, interactive process by which societal actors and innovators become mutually responsive to each other with a view to the (ethical) acceptability, sustainability and societal desirability of the innovation process and its marketable products (in order to allow a proper embedding of scientific and technological advances in our society). (2)

The definition of this field overlaps quite a bit with the philosophy and ethics of technology, but it is not synonymous. For one thing, the explicit goal of RRI is to help provide direction to the social, governmental, and business processes driving innovation. And for another, the idea of innovation isn’t exactly the same as “technology change”. There are social and business innovations that fall within the scope of the effort — for example, new forms of corporate management or new kinds of financial instruments — but which do not fall within the domain of technological innovations.

Von Schomberg has been a leading thinker within this field, and his contributions have helped to set the agenda for the movement. In his contribution to the volume he identifies six deficits in current innovation policy in Europe (all drawn from chapter two of the volume):

  1. Exclusive focus on risk and safety issues concerning new technologies under governmental regulations
  2. Market deficits in delivering on societal desirable innovations
  3. Aligning innovations with broadly shared public values and expectations
  4. A focus on the responsible development of technology and technological potentials rather than on responsible innovations
  5. A lack of open research systems and open scholarship as a necessary, but not sufficient condition for responsible innovation
  6. Lack of foresight and anticipative governance for the alternative shaping of innovation in sectors

Each of these statements involves very complex ideas about society-government-corporate relationships, and we may well come to judge that some of the recommendations made by von Schomberg are more convincing than others. But the clarity of this statement of the priorities and concerns of the RRI movement is enormously valuable as a way of advancing debate on the issues.

The examples that von Schomberg and other contributors discuss largely have to do with large innovations that have sparked significant public discussion and opposition — nuclear power, GMO foods, nanotechnology-based products. These examples focus attention on the later stages of scientific and technological development, when a technology comes to the point of being introduced to the public. But much technological innovation takes place at a much more mundane level — consumer electronics and software, enhancements of solar technology, improvements in electric vehicle technology, and digital personal assistants (Alexa, Siri), to name a few.

A defining feature of the RRI field is the explicit view that innovation is not inherently good or desirable (see, for example, the contribution by Luc Soete in the volume). Contrary to the assumptions of many government economic policy experts, the RRI network is unified in its criticism of the idea that innovation is always or usually productive of economic and employment growth. These observers argue instead that the public should have a role in deciding which technological options ought to be pursued, and which should not.

In reading the programmatic statements of purpose offered in the volume, it sometimes seems that there is a tendency to exaggerate the degree to which scientific and technological innovation is (or should be) a directed and collectively controlled process. The movement seems to undervalue the important role that creativity and invention play in human freedom and fulfillment. It is an important moral fact that individuals have extensive liberties concerning the ways in which they use their talents, and the presumption needs to be in favor of their right to do so without coercive interference. Much of what goes on in the search for new ideas, processes, and products falls properly on the side of liberty rather than that of socially regulated activity, and the proper relation of social policy to these activities seems to be one of respect for the human freedom and creativity of the innovator rather than a prescriptive and controlling one. (Of course some regulation and oversight is needed, based on assessments of risk and harm; but von Schomberg and others dismiss this moral principle as too limited.)

It sometimes seems as though the contributors slide too quickly from the field of government-funded research and development (where the public has a plain interest in “directing” the research at some level) to the whole ecology of innovation and discovery, whether public, corporate, or academic. As noted above, von Schomberg considers the governmental focus on harm and safety to be the “first deficit” — in other words, an insufficient basis for “guiding innovation”. In contrast, he wants to see public mechanisms tasked with “redirecting” technology innovations and industries. However, much innovation is the result of private initiative and funding, and this domain appropriately falls outside of prescription by government (beyond normal harm-based regulatory oversight). Von Schomberg uses the phrase “a proper embedding of scientific and technological advances in society”; but this seems a worrisome overreach, in that it implies that all scientific and technology research should be guided and curated by a collective political process.

This suggests that a more specific description of the goals of the movement would be helpful. Here is one possible specification:

  • Require government agencies to justify the funding and incentives that they offer in support of technology innovation based on an informed assessment of the public’s preferences;
  • Urge corporations to adopt standards to govern their own internal innovation investments to conform to acknowledged public concerns (environmental sustainability, positive contributions to health and safety of citizens and consumers, …);
  • Urge scientists and researchers to engage in public discussion of their priorities in scientific and technological research;
  • Create venues for open and public discussion of major technological choices facing society in the current century, leading to more articulate understanding of priorities and risks.

There is an interesting parallel here with the Japanese government’s efforts in the 1980s to guide investment and research and development resources into the highest-priority fields to advance the Japanese economy. The US National Research Council study, 21st Century Innovation Systems for Japan and the United States: Lessons from a Decade of Change: Report of a Symposium (2009) (link), provides an excellent review of the strategies adopted by the United States and Japan in their efforts to stimulate technology innovation in chip production and high-end computers from the 1960s to the 1990s. These efforts were guided entirely by the aim of maintaining commercial and economic advantage in the global marketplace. Jason Owen-Smith addresses the question of the role of US research universities as sites of technological research in Research Universities and the Public Good: Discovery for an Uncertain Future (link).

The “responsible research and innovation” (RRI) movement in Europe is a robust effort to pose the question, how can public values be infused into the processes of technology innovation that have such a massive potential effect on public welfare? It would seem that a major aim of the RRI network is to help to inform and motivate commitments by corporations to principles of responsible innovation within their definitions of corporate social responsibility, which is unmistakably needed. It is worthwhile for U.S. policy experts and technology ethicists alike to pay attention to these debates in Europe, and the International Handbook on Responsible Innovation is an excellent place to begin.

Regulatory delegation at the FAA

Earlier posts have focused on the role of inadequate regulatory oversight in the tragedy of the Boeing 737 MAX (link, link). (Also of interest is an earlier discussion of the “quiet power” through which business achieves its goals in legislation and agency rules (link).) Reporting in the New York Times this week by Natalie Kitroeff and David Gelles provides a smoking gun for the idea of regulatory capture — industry capture of the very agency established to ensure its safe operations (link). The article quotes a former attorney in the FAA office of chief counsel:

“The reauthorization act mandated regulatory capture,” said Doug Anderson, a former attorney in the agency’s office of chief counsel who reviewed the legislation. “It set the F.A.A. up for being totally deferential to the industry.”

Based on exhaustive investigative journalism, Kitroeff and Gelles provide a detailed account of the lobbying strategy and efforts by Boeing and the aircraft manufacturing industry group that led to the incorporation of industry-favored language into the FAA Reauthorization Act of 2018, and it is profoundly discouraging reading for anyone interested in the idea that the public good should drive legislation. The new paragraphs introduced into the final legislation stipulate full implementation of the philosophy of regulatory delegation and establish an industry-centered group empowered to oversee the agency’s performance and to make recommendations about FAA employees’ compensation. “Now, the agency, at the outset of the development process, has to hand over responsibility for certifying almost every aspect of new planes.” Under the new legislation the FAA is forbidden from taking back control of the certification process for a new aircraft without a full investigation or inspection justifying such an action.

As the article notes, the 737 MAX was certified under the old rules. The new rules give the FAA even fewer oversight powers and responsibilities in the certification of new aircraft and major redesigns of existing aircraft. And the fact that the MCAS system was never fully reviewed by the FAA, based on assurances of its safety from Boeing, reduces our confidence in the effectiveness of the FAA process even further. From the article:

The F.A.A. never fully analyzed the automated system known as MCAS, while Boeing played down its risks. Late in the plane’s development, Boeing made the system more aggressive, changes that were not submitted in a safety assessment to the agency.

Boeing, the Aerospace Industries Association, and the General Aviation Manufacturers Association exercised influence on the 2018 legislation through a variety of mechanisms. Legislators and lobbyists alike were guided by a report on regulation authored by Boeing itself. Executives and lobbyists exercised their ability to influence powerful senators and members of Congress through person-to-person interactions. And elected representatives from both parties favored “less regulation” as a way of supporting the economic interests of businesses in their states. For example:

They also helped persuade Senator Maria Cantwell, Democrat of Washington State, where Boeing has its manufacturing hub, to introduce language that requires the F.A.A. to relinquish control of many parts of the certification process.

And, of course, it is important not to forget about the “revolving door” from industry to government to lobbying firm. Ali Bahrami was an FAA official who subsequently became a lobbyist for the aerospace industry; Stephen Dickson is a former executive of Delta Air Lines who now serves as Administrator of the FAA; and in 2007 former FAA Administrator Marion Blakey became CEO of the Aerospace Industries Association, the industry’s chief advocacy and lobbying group (link). It is hard to envision neutral, objective judgment in ensuring the safety of the public emerging from such appointments.

Boeing and its allies found a receptive audience in the head of the House transportation committee, Bill Shuster, a Pennsylvania Republican staunchly in favor of deregulation, and his aide working on the legislation, Holly Woodruff Lyons.

These kinds of influence on legislation and agency action provide crystal-clear illustrations of the mechanisms cited by Pepper Culpepper in Quiet Politics and Business Power: Corporate Control in Europe and Japan explaining the political influence of business. Here is my description of his views in an earlier post:

Culpepper unpacks the political advantage residing with business elites and managers in terms of acknowledged expertise about the intricacies of corporate organization, an ability to frame the issues for policy makers and journalists, and ready access to rule-writing committees and task forces. These factors give elite business managers positional advantage, from which they can exert a great deal of influence on how an issue is formulated when it comes into the forum of public policy formation.

It seems abundantly clear that the “regulatory delegation” movement and its underlying effort to reduce the regulatory burden on industry have gone too far in the case of aviation; and the same seems true in other industries, such as the nuclear industry. The much harder question is organizational: what form of regulatory oversight would permit a regulatory agency to genuinely enhance the safety of the regulated industry and protect the public from unnecessary hazards? Even if we could take the anti-regulation ideology that has governed much public discourse since the Reagan years out of the picture, the continuing issues of expertise, funding, and industry’s power of resistance make effective regulation a huge challenge.

Flood plains and land use

An increasingly pressing consequence of climate change is the rising threat of flood in coastal and riverine communities. And yet a combination of Federal and local policies has created land use incentives that have led to increasing development in flood plains since the major floods of the 1990s and 2000s (Mississippi River 1993, Hurricane Katrina 2005, Hurricane Sandy 2012, …), with the result that economic losses from flooding have risen sharply. Many of those costs are borne by taxpayers through Federal disaster relief and subsidies to the Federal flood insurance program.

Christine Klein and Sandra Zellmer provide a highly detailed and useful review of these issues in their brilliant SMU Law Review article, “Mississippi River Stories: Lessons from a Century of Unnatural Disasters” (link). These arguments are developed more fully in their 2014 book Mississippi River Tragedies: A Century of Unnatural Disaster. Klein and Zellmer believe that current flood insurance policies and disaster assistance policies at the federal level continue to support perverse incentives for developers and homeowners and need to be changed. Projects and development within 100-year flood plains need to be subject to mandatory flood insurance coverage; flood insurance policies should be rated by degree of risk; and government units should have the legal ability to prohibit development in flood plains. Here are their central recommendations for future Federal policy reform:

Substantive requirements for watershed planning and management would effectuate the Progressive Era objective underlying the original Flood Control Act of 1928: treating the river and its floodplain as an integrated unit from source to mouth, “systematically and consistently,” with coordination of navigation, flood control, irrigation, hydropower, and ecosystem services. To accomplish this objective, the proposed organic act must embrace five basic principles:

(1) Adopt sustainable, ecologically resilient standards and objectives;

(2) Employ comprehensive environmental analysis of individual and cumulative effects of floodplain construction (including wetlands fill);

(3) Enhance federal leadership and competency by providing the Corps with primary responsibility for flood control measures, cabined by clear standards, continuing monitoring responsibilities, and oversight through probing judicial review, and supported by a secure, non-partisan funding source;

(4) Stop wetlands losses and restore damaged floodplains by re-establishing natural areas that are essential for floodwater retention; and 

(5) Recognize that land and water policies are inextricably linked and plan for both open space and appropriate land use in the floodplain. (1535-36)

Here is Klein and Zellmer’s description of the US government’s response to flood catastrophes in the 1920s:

Flood control was the most pressing issue before the Seventieth Congress, which sat from 1927 to 1929. Congressional members quickly recognized that the problems were two-fold. First, Congressman Edward Denison of Illinois criticized the absence of federal leadership: “the Federal Government has allowed the people ... to follow their own course and build their own levees as they choose and where they choose until the action of the people of one State has thrown the waters back upon the people of another State, and vice versa.” Moreover, as Congressman Robert Crosser of Ohio noted, the federal government’s “levees only” policy — a “monumental blunder” — was not the right sort of federal guidance. (1482-83)

In passing the Flood Control Act of 1928, congressional members were influenced by Progressive Era objectives. Comprehensive planning and multiple-use management were hallmarks of the time. The goal was nothing less than a unified, planned society. In the early 1900s, many federal agencies, including the Bureau of Reclamation and the U.S. Geological Survey, had agreed that each river must be treated as an integrated unit from source to mouth. Rivers were to be developed “systematically and consistently,” with coordination of navigation, flood control, irrigation, and hydro-power. But the Corps of Engineers refused to join the movement toward watershed planning, instead preferring to conduct river management in a piecemeal fashion for the benefit of myriad local interests. (1484)

But Federal flood policies of the 1920s created perverse incentives that persist to the present:

Only a few decades after the 1927 flood, the Mississippi River rose up out of its banks once again, teaching a new lesson: federal structural responses plus disaster relief payouts had incentivized ever more daring incursions into the floodplain. The floodwater evaded federal efforts to control it with engineered structures, and those same structures prevented the river from finding its natural retention areas — wetlands, oxbows, and meanders — that had previously provided safe storage for floodwater. The resulting damage to affected areas was increased by orders of magnitude. The federal response to this lesson was the adoption of a nationwide flood insurance program intended to discourage unwise floodplain development and to limit the need for disaster relief. Both lessons are detailed in this section. (1486)

Paradoxically, navigational structures and floodplain constriction by levees, highway embankments, and development projects exacerbated the flood damage all along the rivers in 1951 and 1952. Flood-control engineering works not only enhanced the danger of floods, but actually contributed to higher flood losses. Flood losses were, in turn, used to justify more extensive control structures, creating a vicious cycle of ever-increasing flood losses and control structures. The mid-century floods demonstrated the need for additional risk-management measures. (1489)

Only five years after the program was enacted, Gilbert White’s admonition was validated. Congress found that flood losses were continuing to increase due to the accelerating development of floodplains. Ironically, both federal flood control infrastructure and the availability of federal flood insurance were at fault. To address the problem, Congress passed the Flood Disaster Protection Act of 1973, which made federal assistance for construction in flood hazard areas, including loans from federally insured banks, contingent upon the purchase of flood insurance, which is only made available to participating communities. (1491)

But development and building in the floodplains of the rivers of the United States have continued and even accelerated since the 1990s. Government policy comes into this set of disasters at several levels.

First, climate policy — the evidence has been clear for at least two decades that the human production of greenhouse gases is creating rapid climate change, including rising temperatures in atmosphere and oceans, severe storms, and rising ocean levels. A fundamental responsibility of government is to regulate and direct activities that create public harms, and the US government has failed abjectly to change the policy environment in ways that substantially reduce the production of CO2 and other greenhouse gases.

Second, as Klein and Zellmer document, the policies adopted by the US government in the early part of the twentieth century to prevent major flood disasters were ill conceived. The efforts by the US government and regional governments to control flooding through levees, reservoirs, dams, and other infrastructure interventions have failed, and have probably made the problems of flooding along major US rivers worse.

Third, human activities in flood plains — residences, businesses, hotels and resorts — have worsened the consequences of floods, elevating the cost in lives and property through reckless development in flood zones. Governments have failed to discourage or prevent these forms of development, and the consequences have proven to be extreme (and worsening).

It is evident that storms, floods, and sea-level rise will be vastly more destructive in the decades to come. Here is a projection of the effects on the Florida coastline of a sustained period of sea-level rise resulting from a 2-degree Celsius rise in global temperature (link).

We seem to have passed the point where it will be possible to avoid catastrophic warming. Our governments need to take strong actions now to ameliorate the severity of global warming, and to prepare us for the damage when it inevitably comes.

Ethical disasters

Many examples of technical disasters have been provided in Understanding Society, along with efforts to understand the systemic dysfunctions that contributed to their occurrence. Frequently those dysfunctions fall within the business organizations that manage large, complex technology systems, and often enough those dysfunctions derive from the imperatives of profit-maximization and cost avoidance. Andrew Hopkins’ account of the business decisions contributing to the explosion of the ESSO gas plant in Longford, Australia illustrates this dynamic in Lessons from Longford: The ESSO Gas Plant Explosion. The withdrawal of engineering experts from the plant to a remote corporate headquarters was a cost-saving move that, according to Hopkins, contributed to the eventual disaster.

A topic we have not addressed in detail is the occurrence of ethical disasters — terrible outcomes that are the result of deliberate choices by decision-makers within an organization that are, upon inspection, clearly and profoundly unethical and immoral. The collapse of Enron is probably one such disaster; the Bernie Madoff scandal is another. But it seems increasingly likely that Purdue Pharma, under the Sackler family’s business leadership, represents another major example. Recent reporting by ProPublica, the Atlantic, and the New York Times relies on documents collected in the course of litigation against Purdue Pharma and members of the Sackler family in Massachusetts and New York. (Here are the unredacted court documents on which much of this reporting depends; link.) These documents make it hard to avoid the conclusion that the Sackler family actively participated in business strategies for their company Purdue Pharma that treated the OxyContin addiction epidemic as an expanding business opportunity. And this seems to be a huge ethical breach.

These issues are currently before the courts, and it rests with the legal system to determine the facts and the questions of legal culpability. But as citizens we all have the ability to read the documents and reach our own conclusions about the ethical status of the decisions and strategies made by the family and the corporation over the course of this disaster. The point here is simply to ask these key questions: how should we think about the ethical status of decisions and strategies of owners and managers that lead to terrible harms, and harms that could reasonably have been anticipated? How should a company or a set of owners respond to a catastrophe in which several hundred thousand people have died, and which was facilitated in part by deliberate marketing efforts by the company and the owners? How should the company have adjusted its business when it became apparent that its product was creating addiction and widespread death?

First, a few details from the current reporting on the case, beginning with a few paragraphs from the ProPublica story (January 30, 2019):

Not content with billions of dollars in profits from the potent painkiller OxyContin, its maker explored expanding into an “attractive market” fueled by the drug’s popularity — treatment of opioid addiction, according to previously secret passages in a court document filed by the state of Massachusetts.

In internal correspondence beginning in 2014, Purdue Pharma executives discussed how the sale of opioids and the treatment of opioid addiction are “naturally linked” and that the company should expand across “the pain and addiction spectrum,” according to redacted sections of the lawsuit by the Massachusetts attorney general. A member of the billionaire Sackler family, which founded and controls the privately held company, joined in those discussions and urged staff in an email to give “immediate attention” to this business opportunity, the complaint alleges. (ProPublica 1/30/2019; link)

The NYT story reproduces a diagram included in the New York court filings that illustrates the company’s business strategy of “Project Tango” — the idea that the company could make money both from sales of its pain medication and from sales of treatments for the addiction it caused.

Further, according to the reporting provided by the NYT and ProPublica, members of the Sackler family used their positions on the Purdue Pharma board to press for more aggressive business exploitation of the opportunities described here:

In 2009, two years after the federal guilty plea, Mortimer D.A. Sackler, a board member, demanded to know why the company wasn’t selling more opioids, email traffic cited by Massachusetts prosecutors showed. In 2011, as states looked for ways to curb opioid prescriptions, family members peppered the sales staff with questions about how to expand the market for the drugs…. The family’s statement said they were just acting as responsible board members, raising questions about “business issues that were highly relevant to doctors and patients.” (NYT 4/1/2019; link)

From the 1/30/2019 ProPublica story, and based on more court documents:

Citing extensive emails and internal company documents, the redacted sections allege that Purdue and the Sackler family went to extreme lengths to boost OxyContin sales and burnish the drug’s reputation in the face of increased regulation and growing public awareness of its addictive nature. Concerns about doctors improperly prescribing the drug, and patients becoming addicted, were swept aside in an aggressive effort to drive OxyContin sales ever higher, the complaint alleges. (link)

And ProPublica underlines the fact that prosecutors believe that family members have personal responsibility for the management of the corporation:

The redacted paragraphs leave little doubt about the dominant role of the Sackler family in Purdue’s management. The five Purdue directors who are not Sacklers always voted with the family, according to the complaint. The family-controlled board approves everything from the number of sales staff to be hired to details of their bonus incentives, which have been tied to sales volume, the complaint says. In May 2017, when longtime employee Craig Landau was seeking to become Purdue’s chief executive, he wrote that the board acted as “de-facto CEO.” He was named CEO a few weeks later. (link)

The courts will resolve the question of legal culpability. The question here is one of the ethical standards that should govern the actions and strategies of owners and managers. Here are several simple ethical observations that seem relevant to this case.

First, it is obvious that pain medication is a good thing when used appropriately under the supervision of expert and well-informed physicians. Pain management enhances quality of life for people experiencing pain.

Second, addiction is plainly a bad thing, and it is worse when it leads to predictable death or disability for its victims. A company has a duty of concern for the quality of life of human beings affected by its product, and this extends to a duty to take all possible precautions to minimize the likelihood that human beings will be harmed by the product.

Third, given the known risks of addiction associated with this product, the company has a moral obligation to treat its relations with physicians and other health providers as occasions for accurate and truthful education about the product, not opportunities for persuasion, inducement, and marketing. Rather than a sales force of representatives whose incomes are determined by the quantity of the product they sell, the company has a moral obligation to train and incentivize its representatives to function as honest educators providing full information about the risks as well as the benefits of the product. And, of course, it has an obligation not to immerse itself in the dynamics of “conflict of interest” discussed elsewhere (link) — this means there should be no incentives provided to the physicians who agree to prescribe the product.

Fourth, it might be argued that the profits generated by the business of a given pharmaceutical product should be used proportionally to ameliorate the unavoidable harms it creates. Rather than making billions in profits from the sale of the product, and then additional hundreds of millions on products that offset the addictions and illness created by dissemination of the product (this was the plan advanced as “Project Tango”), the company and its owners should hold themselves accountable for the harms created by their product. (That is, the social and human costs of addiction should not be treated as “externalities” or even additional sources of profit for the company.)

Finally, there is an important question at a more individual scale. How should we think about super-rich owners of a company who seem to lose sight entirely of the human tragedies created by their company’s product and simply demand more profits, more timely distribution of the profits, and more control of the management decisions of the company? These are individual human beings, and surely they have a responsibility to think rigorously about their own moral responsibilities. The documents released in these court proceedings seem to display an amazing blindness to moral responsibility on the part of some of these owners.

(There are other important cases illustrating the clash between moral responsibility, corporate profits, and corporate decision-making, having to do with the likelihood of collaboration between American companies, their German and Polish subsidiaries, and the Nazi regime during World War II. Edwin Black argues in IBM and the Holocaust: The Strategic Alliance Between Nazi Germany and America’s Most Powerful Corporation that the US-based computer company provided important support for Germany’s extermination strategy. Here is a 2002 piece from the Guardian on the update of Black’s book providing more documentary evidence for this claim; link. And here is a piece from the Washington Post on American car companies in Nazi Germany; link.)

(Stephen Arbogast’s Resisting Corporate Corruption: Cases in Practical Ethics From Enron Through The Financial Crisis is an interesting source on corporate ethics.)

The mind of government

We often speak of government as if it has intentions, beliefs, fears, plans, and phobias. This sounds a lot like a mind. But this impression is fundamentally misleading. “Government” is not a conscious entity with a unified apperception of the world and its own intentions. So it is worth teasing out the ways in which government nonetheless arrives at “beliefs”, “intentions”, and “decisions”.

Let’s first address the question of the mythical unity of government. In brief, government is not one unified thing. Rather, it is an extended network of offices, bureaus, departments, analysts, decision-makers, and authority structures, each of which has its own reticulated internal structure.

This has an important consequence. Instead of asking “what is the policy of the United States government towards Africa?”, we are driven to ask subordinate questions: what are the policies towards Africa of the State Department, the Department of Defense, the Department of Commerce, the Central Intelligence Agency, or the Agency for International Development? And for each of these departments we are forced to recognize that each is itself a large bureaucracy, with sub-units that have chosen or adapted their own working policy objectives and priorities. There are chief executives at a range of levels — President of the United States, Secretary of State, Secretary of Defense, Director of CIA — and each often has the aspiration of directing his or her organization as a tightly unified and purposive unit. But it is perfectly plain that the behavior of functional units within agencies is only loosely controlled by the will of the executive. This does not mean that executives have no control over the activities and priorities of subordinate units. But it does reflect a simple and unavoidable fact about large organizations. An organization is more like a slime mold than it is like a control algorithm in a factory.

This said, organizational units at all levels arrive at something analogous to beliefs (assessments of fact and probable future outcomes), assessments of priorities and their interactions, plans, and decisions (actions to take in the near and intermediate future). And governments make decisions at the highest level (leave the EU, raise taxes on fuel, prohibit immigration from certain countries, …). How does the analytical and factual part of this process proceed? And how does the decision-making part unfold?

One factor is particularly evident in the current political environment in the United States. Sometimes the analysis and decision-making activities of government are short-circuited and taken by individual executives without an underlying organizational process. A president arrives at his view of the facts of global climate change based on his “gut instincts” rather than an objective and disinterested assessment of the scientific analysis available to him. An Administrator of the EPA acts to eliminate long-standing environmental protections based on his own particular ideological and personal interests. A Secretary of the Department of Energy takes leadership of the department without requesting a briefing on any of its current projects. These are instances of the dictator strategy (in the social-choice sense), where a single actor substitutes his will for the collective aggregation of beliefs and desires associated with both bureaucracy and democracy. In this instance the answer to our question is a simple one: in cases like these government has beliefs and intentions because particular actors have beliefs and intentions and those actors have the power and authority to impose their beliefs and intentions on government.
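The contrast between collective aggregation and dictatorship in the social-choice sense is easy to state precisely. Here is a minimal sketch in Python; the function names and the preference profile are my own illustrative inventions, not drawn from any source, and the point is only to show how a dictatorial rule ignores every preference but one:

```python
from collections import Counter

def majority_choice(preferences):
    """Aggregate by plurality: the option most actors rank first wins."""
    return Counter(p[0] for p in preferences).most_common(1)[0][0]

def dictatorial_choice(preferences, dictator=0):
    """A 'dictatorial' rule in the social-choice sense: the outcome is
    simply the designated actor's top choice, whatever others believe."""
    return preferences[dictator][0]

# Hypothetical preference rankings over policy options A, B, C.
profile = [
    ["A", "B", "C"],   # actor 0 (the would-be dictator)
    ["B", "A", "C"],
    ["B", "C", "A"],
    ["C", "B", "A"],
]
print(majority_choice(profile))     # B: the collectively aggregated choice
print(dictatorial_choice(profile))  # A: actor 0's will imposed on all
```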

The more interesting cases involve situations where there is a genuine collective process through which analysis and assessment take place (of facts and priorities), and through which strategies are considered and ultimately adopted. Agencies usually make decisions through extended and formalized processes. There is generally an organized process of fact gathering and scientific assessment, followed by an assessment of various policy options with public exposure. Finally, a policy is adopted (the moment of decision).

The decision by the EPA to ban DDT in 1972 is illustrative (link, link, link). This was a decision of government which thereby became the will of government. It was the result of several important sub-processes: citizen and NGO activism about the possible toxic harms created by DDT, non-governmental scientific research assessing the toxicity of DDT, an internal EPA process designed to assess the scientific conclusions about the environmental and human-health effects of DDT, an analysis of the competing priorities involved in this issue (farming, forestry, and malaria control versus public health), and a recommendation, adopted by the Administrator, that the priority of public health and environmental safety outweighed the economic interests served by the use of the pesticide.

Other examples of agency decision-making follow a similar pattern. The development of policy concerning science and technology is particularly interesting in this context. Consider, for example, Susan Wright (link) on the politics of regulation of recombinant DNA. This issue is explored more fully in her book Molecular Politics: Developing American and British Regulatory Policy for Genetic Engineering, 1972-1982. This is a good case study of “government making up its mind”. Another interesting case study is the development of US policy concerning ozone depletion; link.

These cases of science and technology policy illustrate two dimensions of the processes through which a government agency “makes up its mind” about a complex issue. There is an analytical component in which the scientific facts and the policy goals and priorities are gathered and assessed. And there is a decision-making component in which these analytical findings are crafted into a decision — a policy, a set of regulations, or a funding program, for example. It is routine in science and technology policy studies to observe that there is commonly a substantial degree of intertwining between factual judgments and political preferences and influences brought to bear by powerful outsiders. (Here is an earlier discussion of these processes; link.)

Ideally we would like to imagine a process of government decision-making that proceeds along these lines: careful gathering and assessment of the best available scientific evidence about an issue through expert specialist panels and sections; careful analysis of the consequences of available policy choices measured against a clear understanding of goals and priorities of the government; and selection of a policy or action that is best, all things considered, for forwarding the public interest and minimizing public harms. Unfortunately, as the experience of government policies concerning climate change in both the Bush administration and the Trump administration illustrates, ideology and private interest distort every phase of this idealized process.

(Philip Tetlock’s Superforecasting: The Art and Science of Prediction offers an interesting analysis of the process of expert factual assessment and prediction. Particularly interesting is his treatment of intelligence estimates.)

Is corruption a social thing?

When we discuss the ontology of various aspects of the social world, we are often thinking of such things as institutions, organizations, social networks, value systems, and the like. These examples pick out features of the world that are relatively stable and functional. Where does an imperfection or dysfunction of social life like corruption fit into our social ontology?

We might say that “corruption” is a descriptive category that is aimed at capturing a particular range of behavior, like stealing, gossiping, or asceticism. This makes corruption a kind of individual behavior, or even a characteristic of some individuals. “Mayor X is corrupt.”

This initial effort does not seem satisfactory, however. The idea of corruption is tied to institutions, roles, and rules in a very direct way, and therefore we cannot really present the concept accurately without articulating these institutional features. Corruption might be paraphrased in these terms:

  • Individual X plays a role Y in institution Z; role Y prescribes honest and impersonal performance of duties; individual X accepts private benefits to take actions that are contrary to the prescriptions of Y. In virtue of these facts X behaves corruptly.

Corruption, then, involves actions taken by officials that deviate from the rules governing their role, in order to receive private benefits from the subjects of those actions. Absent the rules and role, corruption cannot exist. So corruption is a feature that presupposes certain social facts about institutions. (Perhaps there is a link to Searle’s social ontology here; link.)
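For readers who find formal notation helpful, the schema can be stated as a role-relative predicate. This is only an illustrative rendering in first-order style; the predicate names are my own, not drawn from Searle or any other source:

```latex
\mathrm{Corrupt}(x) \iff \exists y \,\exists z \,\big[\,
  \mathrm{Occupies}(x, y, z) \,\wedge\,
  \mathrm{PrescribesHonestPerformance}(y) \,\wedge\,
  \mathrm{AcceptsPrivateBenefit}(x) \,\wedge\,
  \mathrm{ActsContraryTo}(x, y) \,\big]
```

Here y ranges over roles and z over institutions; the formula makes explicit that corruption is defined only relative to an institutional role and the rules that govern it.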

We might consider that corruption is analogous to friction in physical systems. Friction is a factor that affects the performance of virtually all mechanical systems, but that is a second-order factor within classical mechanics. And it is possible to give mechanical explanations of the ubiquity of friction, in terms of the geometry of adjoining physical surfaces, the strength of inter-molecular attractions, and the like. Analogously, we can offer theories of the frequency with which corruption occurs in organizations, public and private, in terms of the interests and decision-making frameworks of variously situated actors (e.g. real estate developers, land value assessors, tax assessors, zoning authorities …). Developers have a business interest in favorable rulings from assessors and zoning authorities; some officials have an interest in accepting gifts and favors to increase personal income and wealth; each makes an estimate of the likelihood of detection and punishment; and a certain rate of corrupt exchanges is the result.
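The friction analogy suggests a simple way to see how detection probabilities and penalties generate a “rate” of corrupt exchanges. Here is a minimal sketch, assuming risk-neutral officials and stylized, made-up parameter values; nothing in it is drawn from empirical data:

```python
import random

def corruption_rate(n_officials=10_000, detection_prob=0.10,
                    penalty=50_000, seed=42):
    """Estimate the fraction of officials who accept a corrupt exchange,
    assuming risk-neutral actors who compare the private benefit of the
    bribe against the expected cost of detection and punishment."""
    rng = random.Random(seed)
    corrupt = 0
    for _ in range(n_officials):
        # Hypothetical: bribes offered vary from $1,000 to $30,000.
        bribe = rng.uniform(1_000, 30_000)
        expected_cost = detection_prob * penalty
        if bribe > expected_cost:
            corrupt += 1
    return corrupt / n_officials

# Strengthening oversight (detection_prob) or penalties lowers the rate.
for p in (0.05, 0.10, 0.25):
    print(f"detection prob {p:.2f}: "
          f"corruption rate {corruption_rate(detection_prob=p):.2%}")
```

Raising the detection probability or the penalty shifts the expected-cost threshold and lowers the simulated rate, which is exactly the institutional-design point taken up in the next paragraph.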

This line of thought once again makes corruption a feature of the actors and their calculations. But it is important to note that organizations themselves have features that make corrupt exchanges either more likely or less likely (link, link). Some organizations are corruption-resistant, while others are corruption-neutral or corruption-enhancing. These features include internal accounting and auditing procedures; whistle-blowing practices; executive and supervisor vigilance; and other organizational features. Further, governments and systems of law can make arrangements that discourage corruption; the incidence of corruption is influenced by public policy. For example, legal requirements on transparency in financial practices by firms, investment in investigatory resources in oversight agencies, and weighty penalties for companies found guilty of corrupt practices can affect the incidence of corruption. (Robert Klitgaard’s treatment of corruption is relevant here; he provides careful analysis of some of the institutional and governmental measures that can be taken to discourage corrupt practices; link, link. And there are cross-country indices of corruption (e.g. Transparency International) that suggest the effectiveness of anti-corruption measures at the state level. Finland, Norway, and Switzerland rank well on the Transparency International index.)

So — is corruption a thing? Does corruption need to be included in a social ontology? Does a realist ontology of government and business organization have a place for corruption? Yes, yes, and yes. Corruption is a real property of individual actors’ behavior, observable in social life. It is a consequence of strategic rationality by various actors. Corruption is a social practice with its own supporting or inhibiting culture. Some organizations effectively espouse a core set of values of honesty and correct performance that make corruption less frequent. And corruption is a feature of the design of an organization or bureau, analogous to “mean-time-between-failure” as a feature of a mechanical design. Organizations can adopt institutional protections and cultural commitments that minimize corrupt behavior, while other organizations fail to do so and thereby encourage corrupt behavior. So “corruption-vulnerability” is a real feature of organizations and corruption has a social reality.
