Gross inequalities in a time of pandemic

Here is a stunning juxtaposition in the April 2 print edition of the New York Times. Take a close look. The top panel updates readers on the fact that the city and the region are enduring unimaginable suffering and stress caused by the COVID-19 pandemic, with 63,300 cases and 2,624 deaths (as of April 4) — and with hundreds of thousands facing immediate, existential financial crisis because of the economic shutdown. And only eight miles away, as the Sotheby’s “Prominent Properties” half-page advertisement proclaims, home buyers can find secluded luxury, relaxation, and safety in residential estates priced at $32.9 million and $21.5 million. In case the reader missed the exclusivity of these properties, the advertisement mentions that they are “located in one of the nation’s wealthiest zip codes”. And, lest the prospective buyer be concerned about maintaining social isolation in these difficult times, the ad notes that these are gated estates — in fact, the $33M property sits on “the only guard gated street in Alpine”.

Could Friedrich Engels have found a more compelling illustration of the fundamental inhumanity of the inequalities that exist in twenty-first century capitalism in the United States? And there is no need for rhetorical exaggeration — here it is in black and white in the nation’s “newspaper of record”.

There are many compelling reasons to support Elizabeth Warren’s proposal for a wealth tax. But here is one more: it is morally appalling, even gut-churning, to realize that $33 million for a family home (35,000 square feet, tennis court and indoor basketball court) is a reasonable “ask” for the super-wealthy in our country, the one-tenth of one percent who have ridden the crest of surging stock markets and finance and investment firms to a level of wealth that is literally unimaginable to at least 95% of the rest of the country.

Here is the heart of Warren’s proposal for a wealth tax (link):

Rates and Revenue

  • Zero additional tax on any household with a net worth of less than $50 million (99.9% of American households)
  • 2% annual tax on household net worth between $50 million and $1 billion
  • 4% annual Billionaire Surtax (6% tax overall) on household net worth above $1 billion
  • 10-Year revenue total of $3.75 trillion
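To make the marginal structure of these rates concrete, here is a small Python sketch (my own illustration, not drawn from the Warren campaign’s materials) that computes the annual tax owed at a given household net worth:

```python
def warren_wealth_tax(net_worth: float) -> float:
    """Annual wealth tax under the proposal's marginal rates:
    0% below $50M, 2% on net worth between $50M and $1B,
    and 6% (2% base + 4% Billionaire Surtax) above $1B."""
    tax = 0.0
    if net_worth > 50e6:
        # 2% bracket: the portion between $50M and $1B
        tax += 0.02 * (min(net_worth, 1e9) - 50e6)
    if net_worth > 1e9:
        # 6% bracket: everything above $1B
        tax += 0.06 * (net_worth - 1e9)
    return tax

# A household worth $100M owes 2% of the $50M above the threshold: $1M.
# A household worth $2B owes 2% of $950M plus 6% of $1B: $79M.
```

On this reading, 99.9% of households (those below $50 million) owe nothing at all, and even a $2 billion fortune pays under 4% of its total value per year.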

Are we all in this together, or not? If we are, let’s share the wealth. Let’s all pay our fair share. Let’s pay for the costs of fighting the pandemic and saving tens of millions of our fellow citizens from financial ruin, eviction, malnutrition, and family crisis with a wealth tax on billionaires. They can afford it. The “65′ saltwater gunite pool” is not a life necessity. The revenue estimate of the Warren proposal is roughly proportionate to the current estimate of what it will cost the US economy to overcome the pandemic, protect the vulnerable, and restart the economy — $3.75 trillion. Both equity and the current crisis support such a plan.

Here is some background on the rising wealth inequalities we have witnessed in recent decades in the United States. Leiserson, McGrew, and Kopparam provide an excellent and data-rich survey of the system of wealth inequalities in the United States in “The distribution of wealth in the United States and implications for a net worth tax” (link). The increase in wealth inequality since 1989 has been dramatic: the top 10% owned about 67% of all wealth in 1989; by 2016 this share had risen to 77%.

The second graph is a snapshot for 2016 (link). Both income and wealth are severely unequal, but wealth is substantially more so. The top quintile owns almost 90% of the wealth in the United States, with the top 1% owning about 40% of all wealth.

The website Inequality.org provides an historical look at the growth of inequalities of wealth in the US (link). Consider this graph of the wealth shares over a century of the top 1%, .1%, and .01% of the US population; it is eye-popping. Beginning in roughly 1978 the shares of the very top segments of the US population began to rise, and the trend continued through 2012 — with no end in sight. The top 1% in 2012 owned 41% of all wealth; the top 0.1% owned 21%; and the top 0.01% owned 11%.

We need a wealth tax, and Elizabeth Warren put together a convincing and rational plan. This is not a question of “soaking the rich”. It is a question of basic fairness. Our economy and society have functioned as an express elevator for ever-greater fortunes for the few, with essentially no improvement for 60-80% of the rest of America. An economy is a system of social cooperation, requiring the efforts of all members of society. But the benefits of our economic system have gone ever-more disproportionately to the rich and the ultra-rich. That is fundamentally unfair. Now is the time to bring equity back into our society and politics. If Mr. Moneybags can afford a $33M home in New Jersey, he can afford to pay a small tax on his wealth.

It is interesting to note that social scientists and anthropologists are beginning to study the super-rich as a distinctive group. A fascinating source is Iain Hay and Jonathan Beaverstock, eds., Handbook on Wealth and the Super-Rich. Especially relevant is Chris Paris’s contribution, “The residential spaces of the super-rich”. Paris writes:

Prime residential real estate remains a key element in super-rich investment portfolios, both for private use through luxury consumption and as investment items with anticipated long-term capital gain, often untaxed as properties are owned by companies rather than individuals. Most of the homes of the super-rich are purchased using cash, specialized financial instruments and/or through companies, and ‘the higher the price of the property, the less likely buyers were to arrange traditional mortgage financing for the home acquisition. Whether buyers are foreign or domestic, cash transactions predominate at the higher end of the market’ (Christie’s, 2013, p. 14). Such transactions, therefore, never enter ‘national’ housing accounting systems and play no part in many accounts of aggregate ‘national’ house price trends. For example, the analysis of house price trends in the Joseph Rowntree Foundation UK Housing Review is based on data relating to transactions using mortgages or loans, and EU and OECD comparisons between countries are based on the same kinds of data (Paris, 2013b).

Also fascinating in the volume is Emma Spence’s study of the super-rich at sea in their super-yachts, “Performing wealth and status: observing super-yachts and the super-rich in Monaco”:

In this chapter I focus upon the super-yacht as a key tool for exploring how performances of wealth are made visible in Monaco. A super-yacht is a privately owned and professionally crewed luxury vessel over 30 metres in length. An average super-yacht, at approximately 47 metres in length, costs around €30 million to buy new, operates with a permanent crew of ten, and costs around €1.8 million per year to run. Larger super-yachts such as Motor Yacht (M/Y) Madame Gu (99 metres in length), or the current largest super-yacht in the world M/Y Azzam (180 metres in length), cost substantially more to build and to run. The price to charter (rent) a super-yacht also varies considerably with size, age and reputation of the shipyard in which it was built. For example, a typical 47-metre yacht can range between €100 000 and €600 000 per week to charter, plus costs. At the most exclusive end of the super-yacht charter industry costs are much higher. M/Y Solange, for example, is an 85-metre newly built yacht (2013) from reputable German shipyard Lürssen, which operates with 29 full-time crew, and is priced at €1 million plus costs to charter per week. The super-yacht industry is worth an estimated €24 billion globally (Rutherford, 2014, p. 51).

Responsible innovation and the philosophy of technology

Several posts here have focused on the philosophy of technology (link, link, link, link). A simple definition of the philosophy of technology might go along these lines:

Technology may be defined broadly as the sum of a set of tools, machines, and practical skills available at a given time in a given culture through which human needs and interests are satisfied and the interplay of power and conflict furthered. The philosophy of technology offers an interdisciplinary approach to better understanding the role of technology in society and human life. The field raises critical questions about the ways that technology intertwines with human life and the workings of society. Do human beings control technology? For whose benefit? What role does technology play in human wellbeing and freedom? What role does it play in the exercise of power? What issues of ethics and social justice are raised by various technologies? How can citizens within a democracy best ensure that the technologies we choose will lead to better human outcomes and expanded capacities in the future?

One of the issues that arises in this field is the question of whether there are ethical principles that should govern the development and implementation of new technologies. (This issue is discussed further in an earlier post; link.)

One principle of technology ethics seems clear: policies and regulations are needed to protect the future health and safety of the public. This is the same principle that serves as the ethical basis of government regulation of current activities, justifying coercive rules that prevent pollution, toxic effects, fires, radiation exposure, and other clear harms affecting the health and safety of the public.

Another principle might be understood as exhortatory rather than compulsory: the general recommendation that private actors should pursue technologies that make some positive contribution to human welfare. This principle is plainly less universal and obligatory than the “avoid harm” principle; many technologies are chosen because their inventors believe they will entertain, amuse, or otherwise please members of the public, and will thereby permit generation of profits. (Here is a discussion of the value of entertainment; link.)

A more nuanced exhortation is the idea that inventors and companies should subject their technology and product innovation research to broad principles of sustainability. Given that technological change can have very large environmental and collective effects, we might think that companies and inventors should pay attention to the large challenges our society faces, now and in the foreseeable future: addiction, obesity, CO2 production, plastic waste, erosion of privacy, spread of racist politics, fresh water depletion, and information disparities, to name several.

These principles fall within the general zone of the ethics of corporate social responsibility. Many companies pay lip service to the social-benefits principle and the sustainability principle, though it is difficult to see evidence of the effectiveness of this motivation. Business interests often seem to trump concerns for positive social effects and sustainability — for example, in the pharmaceutical industry and its involvement in the opioid crisis (link).

It is in the context of these reflections about the ethics of technology that I was interested to learn of an academic and policy field in Europe called “responsible innovation”. This is a network of academics, government officials, foundations, and non-profit organizations working together to try to induce more directionality in technology change (innovation). René von Schomberg and Jonathan Hankins’s recently published volume International Handbook on Responsible Innovation: A Global Resource provides an in-depth look at the thinking, research, and policy advocacy that this network has accumulated. A key actor in the advancement of this field has been the Bassetti Foundation (link) in Milan, which has made the topic of responsible innovation central to its mission for several decades. The Journal of Responsible Innovation provides a look at continuing research in this field.

The primary locus of discussion and applications in the field of RRI has been within the EU. There is not much evidence of involvement from United States actors in this movement, though the Virtual Institute of Responsible Innovation at Arizona State University has received support from the US National Science Foundation (link).

Von Schomberg describes the scope and purpose of the RRI field in these terms:

Responsible Research and Innovation is a transparent, interactive process by which societal actors and innovators become mutually responsive to each other with a view to the (ethical) acceptability, sustainability and societal desirability of the innovation process and its marketable products (in order to allow a proper embedding of scientific and technological advances in our society). (2)

The definition of this field overlaps quite a bit with the philosophy and ethics of technology, but it is not synonymous. For one thing, the explicit goal of RRI is to help provide direction to the social, governmental, and business processes driving innovation. And for another, the idea of innovation isn’t exactly the same as “technology change”. There are social and business innovations that fall within the scope of the effort — for example, new forms of corporate management or new kinds of financial instruments — but which do not fall within the domain of technological innovations.

Von Schomberg has been a leading thinker within this field, and his contributions have helped to set the agenda for the movement. In his contribution to the volume he identifies six deficits in current innovation policy in Europe (all drawn from chapter two of the volume):

  1. Exclusive focus on risk and safety issues concerning new technologies under governmental regulations
  2. Market deficits in delivering on societal desirable innovations
  3. Aligning innovations with broadly shared public values and expectations
  4. A focus on the responsible development of technology and technological potentials rather than on responsible innovations
  5. A lack of open research systems and open scholarship as a necessary, but not sufficient condition for responsible innovation
  6. Lack of foresight and anticipative governance for the alternative shaping of innovation in sectors

Each of these statements involves very complex ideas about society-government-corporate relationships, and we may well come to judge that some of the recommendations made by von Schomberg are more convincing than others. But the clarity of this statement of the priorities and concerns of the RRI movement is enormously valuable as a way of advancing debate on the issues.

The examples that von Schomberg and other contributors discuss largely have to do with large innovations that have sparked significant public discussion and opposition — nuclear power, GMO foods, nanotechnology-based products. These examples focus attention on the later stages of scientific and technological knowledge, when it comes to the point of introducing the technology to the public. But much technological innovation takes place at a much more mundane level — consumer electronics and software, enhancements of solar technology, improvements in electric vehicle technology, and digital personal assistants (Alexa, Siri), to name a few.

A defining feature of the RRI field is the explicit view that innovation is not inherently good or desirable (for example, in the contribution by Luc Soete in the volume). Contrary to the assumptions of many government economic policy experts, the RRI network is unified in criticism of the idea that innovation is always or usually productive of economic growth and employment growth. These observers argue instead that the public should have a role in deciding which technological options ought to be pursued, and which should not.

In reading the programmatic statements of purpose offered in the volume, it sometimes seems that there is a tendency to exaggerate the degree to which scientific and technological innovation is (or should be) a directed and collectively controlled process. The movement seems to undervalue the important role that creativity and invention play in human freedom and fulfillment. It is an important moral fact that individuals have extensive liberties concerning the ways in which they use their talents, and the presumption needs to be in favor of their right to do so without coercive interference. Much of what goes on in the search for new ideas, processes, and products falls properly on the side of liberty rather than socially regulated activity, and the proper relation of social policy to these activities seems to be one of respect for the human freedom and creativity of the innovator rather than a prescriptive and controlling one. (Of course some regulation and oversight is needed, based on assessments of risk and harm; but von Schomberg and others dismiss this moral principle as too limited.)

It sometimes seems as though the contributors slide too quickly from the field of government-funded research and development (where the public has a plain interest in “directing” the research at some level), to the whole ecology of innovation and discovery, whether public, corporate, or academic. As noted above, von Schomberg considers the governmental focus on harm and safety to be the “first deficit” — in other words, an insufficient basis for “guiding innovation”. In contrast, he wants to see public mechanisms tasked with “redirecting” technology innovations and industries. However, much innovation is the result of private initiative and funding, and it seems that this field appropriately falls outside of prescription by government (beyond normal harm-based regulatory oversight). Von Schomberg uses the phrase “a proper embedding of scientific and technological advances in society”; but this seems to be a worrisome overreach, in that it seems to imply that all scientific and technology research should be guided and curated by a collective political process.

This suggests that a more specific description of the goals of the movement would be helpful. Here is one possible specification:

  • Require government agencies to justify the funding and incentives that they offer in support of technology innovation based on an informed assessment of the public’s preferences;
  • Urge corporations to adopt standards to govern their own internal innovation investments to conform to acknowledged public concerns (environmental sustainability, positive contributions to health and safety of citizens and consumers, …);
  • Urge scientists and researchers to engage in public discussion of their priorities in scientific and technological research; and
  • Create venues for open and public discussion of major technological choices facing society in the current century, leading to more articulate understanding of priorities and risks.

There is an interesting parallel here with the Japanese government’s efforts in the 1980s to guide investment and research and development resources into the highest priority fields to advance the Japanese economy. The US National Research Council study, 21st Century Innovation Systems for Japan and the United States: Lessons from a Decade of Change: Report of a Symposium (2009) (link), provides an excellent review of the strategies adopted by the United States and Japan in their efforts to stimulate technology innovation in chip production and high-end computers from the 1960s to the 1990s. These efforts were entirely guided by the effort to maintain commercial and economic advantage in the global marketplace. Jason Owen-Smith addresses the question of the role of US research universities as sites of technological research in Research Universities and the Public Good: Discovery for an Uncertain Future (link).

The “responsible research and innovation” (RRI) movement in Europe is a robust effort to pose the question, how can public values be infused into the processes of technology innovation that have such a massive potential effect on public welfare? It would seem that a major aim of the RRI network is to help to inform and motivate commitments by corporations to principles of responsible innovation within their definitions of corporate social responsibility, which is unmistakably needed. It is worthwhile for U.S. policy experts and technology ethicists alike to pay attention to these debates in Europe, and the International Handbook on Responsible Innovation is an excellent place to begin.

Regulatory delegation at the FAA

Earlier posts have focused on the role of inadequate regulatory oversight as part of the tragedy of the Boeing 737 MAX (link, link). (Also of interest is an earlier discussion of the “quiet power” through which business achieves its goals in legislation and agency rules (link).) Reporting in the New York Times this week by Natalie Kitroeff and David Gelles provides a smoking gun for the idea of regulatory capture by industry over the regulatory agency established to ensure its safe operations (link). The article quotes a former attorney in the FAA office of chief counsel:

“The reauthorization act mandated regulatory capture,” said Doug Anderson, a former attorney in the agency’s office of chief counsel who reviewed the legislation. “It set the F.A.A. up for being totally deferential to the industry.”

Based on exhaustive investigative journalism, Kitroeff and Gelles provide a detailed account of the lobbying strategy and efforts by Boeing and the aircraft manufacturing industry group that led to the incorporation of industry-favored language into the FAA Reauthorization Act of 2018, and it is a profoundly discouraging account for anyone interested in the idea that the public good should drive legislation. The new paragraphs introduced into the final legislation stipulate full implementation of the philosophy of regulatory delegation and establish an industry-centered group empowered to oversee the agency’s performance and to make recommendations about FAA employees’ compensation. “Now, the agency, at the outset of the development process, has to hand over responsibility for certifying almost every aspect of new planes.” Under the new legislation the FAA is forbidden from taking back control of the certification process for a new aircraft without a full investigation or inspection justifying such an action.

As the article notes, the 737 MAX was certified under the old rules. The new rules give the FAA even fewer oversight powers and responsibilities for the certification of new aircraft and major redesigns of existing aircraft. And the fact that the MCAS system was never fully reviewed by the FAA, based on assurances of its safety from Boeing, further reduces our confidence in the effectiveness of the FAA process. From the article:

The F.A.A. never fully analyzed the automated system known as MCAS, while Boeing played down its risks. Late in the plane’s development, Boeing made the system more aggressive, changes that were not submitted in a safety assessment to the agency.

Boeing, the Aerospace Industries Association, and the General Aviation Manufacturers Association exercised influence on the 2018 legislation through a variety of mechanisms. Legislators and lobbyists alike were guided by a report on regulation authored by Boeing itself. Executives and lobbyists exercised their ability to influence powerful senators and members of Congress through person-to-person interactions. And elected representatives from both parties favored “less regulation” as a way of supporting the economic interests of businesses in their states. For example:

They also helped persuade Senator Maria Cantwell, Democrat of Washington State, where Boeing has its manufacturing hub, to introduce language that requires the F.A.A. to relinquish control of many parts of the certification process.

And, of course, it is important not to forget about the “revolving door” from industry to government to lobbying firm. Ali Bahrami was an FAA official who subsequently became a lobbyist for the aerospace industry; Stephen Dickson is a former executive of Delta Airlines who now serves as Administrator of the FAA; and in 2007 former FAA Administrator Marion Blakey became CEO of the Aerospace Industries Association, the industry’s chief advocacy and lobbying group (link). It is hard to envision neutral, objective judgment in ensuring the safety of the public from such appointments.

Boeing and its allies found a receptive audience in the head of the House transportation committee, Bill Shuster, a Pennsylvania Republican staunchly in favor of deregulation, and his aide working on the legislation, Holly Woodruff Lyons.

These kinds of influence on legislation and agency action provide crystal-clear illustrations of the mechanisms cited by Pepper Culpepper in Quiet Politics and Business Power: Corporate Control in Europe and Japan explaining the political influence of business. Here is my description of his views in an earlier post:

Culpepper unpacks the political advantage residing with business elites and managers in terms of acknowledged expertise about the intricacies of corporate organization, an ability to frame the issues for policy makers and journalists, and ready access to rule-writing committees and task forces. These factors give elite business managers positional advantage, from which they can exert a great deal of influence on how an issue is formulated when it comes into the forum of public policy formation.

It seems abundantly clear that the “regulatory delegation” movement and its underlying effort to reduce regulatory burden on industry have gone too far in the case of aviation; and the same seems true in other industries such as the nuclear industry. The much harder question is organizational: what form of regulatory oversight would permit a regulatory agency to genuinely enhance the safety of the regulated industry and protect the public from unnecessary hazards? Even if we could take the anti-regulation ideology that has governed much public discourse since the Reagan years out of the picture, there are the continuing issues of expertise, funding, and industry power of resistance that make effective regulation a huge challenge.

Flood plains and land use

An increasingly pressing consequence of climate change is the rising threat of flooding in coastal and riverine communities. And yet a combination of Federal and local policies has created land use incentives that have led to increasing development in flood plains since the major floods of the 1990s and 2000s (Mississippi River 1993, Hurricane Katrina 2005, Hurricane Sandy 2012, …), with the result that economic losses from flooding have risen sharply. Many of those costs are borne by taxpayers through Federal disaster relief and subsidies to the Federal flood insurance program.

Christine Klein and Sandra Zellmer provide a highly detailed and useful review of these issues in their brilliant SMU Law Review article, “Mississippi River Stories: Lessons from a Century of Unnatural Disasters” (link). These arguments are developed more fully in their 2014 book Mississippi River Tragedies: A Century of Unnatural Disaster. Klein and Zellmer believe that current flood insurance policies and disaster assistance policies at the federal level continue to support perverse incentives for developers and homeowners and need to be changed. Projects and development within 100-year flood plains need to be subject to mandatory flood insurance coverage; flood insurance policies should be rated by degree of risk; and government units should have the legal ability to prohibit development in flood plains. Here are their central recommendations for future Federal policy reform:

Substantive requirements for watershed planning and management would effectuate the Progressive Era objective underlying the original Flood Control Act of 1928: treating the river and its floodplain as an integrated unit from source to mouth, “systematically and consistently,” with coordination of navigation, flood control, irrigation, hydropower, and ecosystem services. To accomplish this objective, the proposed organic act must embrace five basic principles:

(1) Adopt sustainable, ecologically resilient standards and objectives;

(2) Employ comprehensive environmental analysis of individual and cumulative effects of floodplain construction (including wetlands fill);

(3) Enhance federal leadership and competency by providing the Corps with primary responsibility for flood control measures, cabined by clear standards, continuing monitoring responsibilities, and oversight through probing judicial review, and supported by a secure, non-partisan funding source;

(4) Stop wetlands losses and restore damaged floodplains by re-establishing natural areas that are essential for floodwater retention; and 

(5) Recognize that land and water policies are inextricably linked and plan for both open space and appropriate land use in the floodplain. (1535-36)

Here is Klein and Zellmer’s description of the US government’s response to flood catastrophes in the 1920s:

Flood control was the most pressing issue before the Seventieth Congress, which sat from 1927 to 1929. Congressional members quickly recognized that the problems were two-fold. First, Congressman Edward Denison of Illinois criticized the absence of federal leadership: “the Federal Government has allowed the people. . . to follow their own course and build their own levees as they choose and where they choose until the action of the people of one State has thrown the waters back upon the people of another State, and vice versa.” Moreover, as Congressman Robert Crosser of Ohio noted, the federal government’s “levees only” policy–a “monumental blunder”–was not the right sort of federal guidance. (1482-83)

In passing the Flood Control Act of 1928, congressional members were influenced by Progressive Era objectives. Comprehensive planning and multiple-use management were hallmarks of the time. The goal was nothing less than a unified, planned society. In the early 1900s, many federal agencies, including the Bureau of Reclamation and the U.S. Geological Survey, had agreed that each river must be treated as an integrated unit from source to mouth. Rivers were to be developed “systematically and consistently,” with coordination of navigation, flood control, irrigation, and hydro-power. But the Corps of Engineers refused to join the movement toward watershed planning, instead preferring to conduct river management in a piecemeal fashion for the benefit of myriad local interests. (1484)

But perverse incentives were created by Federal flood policies in the 1920s that persist to the present:

Only a few decades after the 1927 flood, the Mississippi River rose up out of its banks once again, teaching a new lesson: federal structural responses plus disaster relief payouts had incentivized ever more daring incursions into the floodplain. The floodwater evaded federal efforts to control it with engineered structures, and those same structures prevented the river from finding its natural retention areas–wetlands, oxbows, and meanders–that had previously provided safe storage for floodwater. The resulting damage to affected areas was increased by orders of magnitude. The federal response to this lesson was the adoption of a nationwide flood insurance program intended to discourage unwise floodplain development and to limit the need for disaster relief. Both lessons are detailed in this section. (1486)

Paradoxically, navigational structures and floodplain constriction by levees, highway embankments, and development projects exacerbated the flood damage all along the rivers in 1951 and 1952. Flood-control engineering works not only enhanced the danger of floods, but actually contributed to higher flood losses. Flood losses were, in turn, used to justify more extensive control structures, creating a vicious cycle of ever-increasing flood losses and control structures. The mid-century floods demonstrated the need for additional risk-management measures. (1489)

Only five years after the program was enacted, Gilbert White’s admonition was validated. Congress found that flood losses were continuing to increase due to the accelerating development of floodplains. Ironically, both federal flood control infrastructure and the availability of federal flood insurance were at fault. To address the problem, Congress passed the Flood Disaster Protection Act of 1973, which made federal assistance for construction in flood hazard areas, including loans from federally insured banks, contingent upon the purchase of flood insurance, which is only made available to participating communities. (1491)

But development and building in the floodplains of the rivers of the United States has continued and even accelerated since the 1990s.

Government policy comes into this set of disasters at several levels. First, climate policy — the evidence has been clear for at least two decades that the human production of greenhouse gases is creating rapid climate change, including rising temperatures in atmosphere and oceans, severe storms, and rising ocean levels. A fundamental responsibility of government is to regulate and direct activities that create public harms, and the US government has failed abjectly to change the policy environment in ways that substantially reduce the production of CO2 and other greenhouse gases. Second, as Klein and Zellmer document, the policies adopted by the US government in the early part of the twentieth century intended to prevent major flood disasters were ill-conceived. The efforts by the US government and regional governments to control flooding through levees, reservoirs, dams, and other infrastructure interventions have failed, and have probably made the problems of flooding along major US rivers worse. Third, human activities in flood plains — residences, businesses, hotels and resorts — have worsened the consequences of floods, elevating the cost in lives and property through reckless development in flood zones. Governments have failed to discourage or prevent these forms of development, and the consequences have proven to be extreme (and worsening).

It is evident that storms, floods, and sea-level rise will be vastly more destructive in the decades to come. Here is a projection of the effects on the Florida coastline after a sustained period of sea-level rise resulting from a 2-degree Celsius rise in global temperature (link):

We seem to have passed the point where it will be possible to avoid catastrophic warming. Our governments need to take strong actions now to ameliorate the severity of global warming, and to prepare us for the damage when it inevitably comes.

Ethical disasters

Many examples of technical disasters have been provided in Understanding Society, along with efforts to understand the systemic dysfunctions that contributed to their occurrence. Frequently those dysfunctions fall within the business organizations that manage large, complex technology systems, and often enough those dysfunctions derive from the imperatives of profit-maximization and cost avoidance. Andrew Hopkins’ account of the business decisions contributing to the explosion of the ESSO gas plant in Longford, Australia illustrates this dynamic in Lessons from Longford: The ESSO Gas Plant Explosion. The withdrawal of engineering experts from the plant to a remote corporate headquarters was a cost-saving move that, according to Hopkins, contributed to the eventual disaster.

A topic we have not addressed in detail is the occurrence of ethical disasters — terrible outcomes that are the result of deliberate choices by decision-makers within an organization that are, upon inspection, clearly and profoundly unethical and immoral. The collapse of Enron is probably one such disaster; the Bernie Madoff scandal is another. But it seems increasingly likely that Purdue Pharma and the Sackler family’s business leadership of the corporation represent another major example. Recent reporting by ProPublica, the Atlantic, and the New York Times relies on documents collected in the course of litigation against Purdue Pharma and members of the Sackler family in Massachusetts and New York. (Here are the unredacted court documents on which much of this reporting depends; link.) These documents make it hard to avoid the ethical conclusion that the Sackler family actively participated in business strategies for their company Purdue Pharma that treated the OxyContin addiction epidemic as an expanding business opportunity. And this seems to be a huge ethical breach.

These issues are still before the courts, and it rests with the legal system to settle the facts and the question of legal culpability. But as citizens we all have the ability to read the documents and make our own judgments about the ethical status of the decisions and strategies made by the family and the corporation over the course of this disaster. The point here is simply to ask these key questions: how should we think about the ethical status of decisions and strategies of owners and managers that lead to terrible harms, harms that could reasonably have been anticipated? How should a company or a set of owners respond to a catastrophe in which several hundred thousand people have died, and which was facilitated in part by the deliberate marketing efforts of the company and its owners? How should the company have adjusted its business when it became apparent that its product was creating addiction and widespread death?

First, a few details from the current reporting about the case. These paragraphs are from the ProPublica story (January 30, 2019):

Not content with billions of dollars in profits from the potent painkiller OxyContin, its maker explored expanding into an “attractive market” fueled by the drug’s popularity — treatment of opioid addiction, according to previously secret passages in a court document filed by the state of Massachusetts.

In internal correspondence beginning in 2014, Purdue Pharma executives discussed how the sale of opioids and the treatment of opioid addiction are “naturally linked” and that the company should expand across “the pain and addiction spectrum,” according to redacted sections of the lawsuit by the Massachusetts attorney general. A member of the billionaire Sackler family, which founded and controls the privately held company, joined in those discussions and urged staff in an email to give “immediate attention” to this business opportunity, the complaint alleges. (ProPublica 1/30/2019; link)

The NYT story reproduces a diagram included in the New York court filings that illustrates the company’s business strategy of “Project Tango” — the idea that the company could make money both from sales of its pain medication and from sales of treatments for the addiction it caused.

Further, according to the reporting provided by the NYT and ProPublica, members of the Sackler family used their positions on the Purdue Pharma board to press for more aggressive business exploitation of the opportunities described here:

In 2009, two years after the federal guilty plea, Mortimer D.A. Sackler, a board member, demanded to know why the company wasn’t selling more opioids, email traffic cited by Massachusetts prosecutors showed. In 2011, as states looked for ways to curb opioid prescriptions, family members peppered the sales staff with questions about how to expand the market for the drugs…. The family’s statement said they were just acting as responsible board members, raising questions about “business issues that were highly relevant to doctors and patients.” (NYT 4/1/2019; link)

From the 1/30/2019 ProPublica story, and based on more court documents:

Citing extensive emails and internal company documents, the redacted sections allege that Purdue and the Sackler family went to extreme lengths to boost OxyContin sales and burnish the drug’s reputation in the face of increased regulation and growing public awareness of its addictive nature. Concerns about doctors improperly prescribing the drug, and patients becoming addicted, were swept aside in an aggressive effort to drive OxyContin sales ever higher, the complaint alleges. (link)

And ProPublica underlines the fact that prosecutors believe that family members have personal responsibility for the management of the corporation:

The redacted paragraphs leave little doubt about the dominant role of the Sackler family in Purdue’s management. The five Purdue directors who are not Sacklers always voted with the family, according to the complaint. The family-controlled board approves everything from the number of sales staff to be hired to details of their bonus incentives, which have been tied to sales volume, the complaint says. In May 2017, when longtime employee Craig Landau was seeking to become Purdue’s chief executive, he wrote that the board acted as “de-facto CEO.” He was named CEO a few weeks later. (link)

The courts will resolve the question of legal culpability. The question here is one of the ethical standards that should govern the actions and strategies of owners and managers. Here are several simple ethical observations that seem relevant to this case.

First, it is obvious that pain medication is a good thing when used appropriately under the supervision of expert and well-informed physicians. Pain management enhances quality of life for people experiencing pain.

Second, addiction is plainly a bad thing, and it is worse when it leads to predictable death or disability for its victims. A company has a duty of concern for the quality of life of human beings affected by its product, and this extends to a duty to take all possible precautions to minimize the likelihood that human beings will be harmed by the product.

Third, given the known risks of addiction associated with this product, the company has a moral obligation to treat its relations with physicians and other health providers as occasions for accurate and truthful education about the product, not opportunities for persuasion, inducement, and marketing. Rather than a sales force of representatives whose incomes are determined by the quantity of the product they sell, the company has a moral obligation to train and incentivize its representatives to function as honest educators providing full information about the risks as well as the benefits of the product. And, of course, it has an obligation not to immerse itself in the dynamics of “conflict of interest” discussed elsewhere (link) — this means there should be no incentives provided to the physicians who agree to prescribe the product.

Fourth, it might be argued that the profits generated by the business of a given pharmaceutical product should be used proportionally to ameliorate the unavoidable harms it creates. Rather than making billions in profits from the sale of the product, and then additional hundreds of millions on products that offset the addictions and illness created by dissemination of the product (this was the plan advanced as “Project Tango”), the company and its owners should hold themselves accountable for the harms created by their product. (That is, the social and human costs of addiction should not be treated as “externalities” or even additional sources of profit for the company.)

Finally, there is an important question at a more individual scale. How should we think about super-rich owners of a company who seem to lose sight entirely of the human tragedies created by their company’s product and simply demand more profits, more timely distribution of the profits, and more control of the management decisions of the company? These are individual human beings, and surely they have a responsibility to think rigorously about their own moral responsibilities. The documents released in these court proceedings seem to display an amazing blindness to moral responsibility on the part of some of these owners.

(There are other important cases illustrating the clash between moral responsibility, corporate profits, and corporate decision-making, having to do with the likelihood of collaboration between American companies, their German and Polish subsidiaries, and the Nazi regime during World War II. Edwin Black argues in IBM and the Holocaust: The Strategic Alliance Between Nazi Germany and America’s Most Powerful Corporation-Expanded Edition that the US-based computer company provided important support for Germany’s extermination strategy. Here is a 2002 piece from the Guardian on the update of Black’s book providing more documentary evidence for this claim; link. And here is a piece from the Washington Post on American car companies in Nazi Germany; link. )

(Stephen Arbogast’s Resisting Corporate Corruption: Cases in Practical Ethics From Enron Through The Financial Crisis is an interesting source on corporate ethics.)

The mind of government

We often speak of government as if it has intentions, beliefs, fears, plans, and phobias. This sounds a lot like a mind. But this impression is fundamentally misleading. “Government” is not a conscious entity with a unified apperception of the world and its own intentions. So it is worth teasing out the ways in which government nonetheless arrives at “beliefs”, “intentions”, and “decisions”.

Let’s first address the question of the mythical unity of government. In brief, government is not one unified thing. Rather, it is an extended network of offices, bureaus, departments, analysts, decision-makers, and authority structures, each of which has its own reticulated internal structure.

This has an important consequence. Instead of asking “what is the policy of the United States government towards Africa?”, we are driven to ask subordinate questions: what are the policies towards Africa of the State Department, the Department of Defense, the Department of Commerce, the Central Intelligence Agency, or the Agency for International Development? And for each of these departments we are forced to recognize that each is itself a large bureaucracy, with sub-units that have chosen or adapted their own working policy objectives and priorities. There are chief executives at a range of levels — President of the United States, Secretary of State, Secretary of Defense, Director of CIA — and each often has the aspiration of directing his or her organization as a tightly unified and purposive unit. But it is perfectly plain that the behavior of functional units within agencies is only loosely controlled by the will of the executive. This does not mean that executives have no control over the activities and priorities of subordinate units. But it does reflect a simple and unavoidable fact about large organizations. An organization is more like a slime mold than it is like a control algorithm in a factory.

This said, organizational units at all levels arrive at something analogous to beliefs (assessments of fact and probable future outcomes), assessments of priorities and their interactions, plans, and decisions (actions to take in the near and intermediate future). And governments make decisions at the highest level (leave the EU, raise taxes on fuel, prohibit immigration from certain countries, …). How does the analytical and factual part of this process proceed? And how does the decision-making part unfold?

One factor is particularly evident in the current political environment in the United States. Sometimes the analysis and decision-making activities of government are short-circuited and taken by individual executives without an underlying organizational process. A president arrives at his view of the facts of global climate change based on his “gut instincts” rather than an objective and disinterested assessment of the scientific analysis available to him. An Administrator of the EPA acts to eliminate long-standing environmental protections based on his own particular ideological and personal interests. A Secretary of the Department of Energy takes leadership of the department without requesting a briefing on any of its current projects. These are instances of the dictator strategy (in the social-choice sense), where a single actor substitutes his will for the collective aggregation of beliefs and desires associated with both bureaucracy and democracy. In this instance the answer to our question is a simple one: in cases like these government has beliefs and intentions because particular actors have beliefs and intentions and those actors have the power and authority to impose their beliefs and intentions on government.

The more interesting cases involve situations where there is a genuine collective process through which analysis and assessment take place (of facts and priorities), and through which strategies are considered and ultimately adopted. Agencies usually make decisions through extended and formalized processes. There is generally an organized process of fact-gathering and scientific assessment, followed by an assessment of various policy options with public exposure. Finally, a policy is adopted (the moment of decision).

The decision by the EPA to ban DDT in 1972 is illustrative (link, link, link). This was a decision of government which thereby became the will of government. It was the result of several important sub-processes: citizen and NGO activism about the possible toxic harms created by DDT, non-governmental scientific research assessing the toxicity of DDT, an internal EPA process designed to assess the scientific conclusions about the environmental and human-health effects of DDT, an analysis of the competing priorities involved in this issue (farming, forestry, and malaria control versus public health), and a decision recommended to the Administrator and adopted that concluded that the priority of public health and environmental safety was weightier than the economic interests served by the use of the pesticide.

Other examples of agency decision-making follow a similar pattern. The development of policy concerning science and technology is particularly interesting in this context. Consider, for example, Susan Wright (link) on the politics of regulation of recombinant DNA. This issue is explored more fully in her book Molecular Politics: Developing American and British Regulatory Policy for Genetic Engineering, 1972-1982. This is a good case study of “government making up its mind”. Another interesting case study is the development of US policy concerning ozone depletion; link.

These cases of science and technology policy illustrate two dimensions of the processes through which a government agency “makes up its mind” about a complex issue. There is an analytical component in which the scientific facts and the policy goals and priorities are gathered and assessed. And there is a decision-making component in which these analytical findings are crafted into a decision — a policy, a set of regulations, or a funding program, for example. It is routine in science and technology policy studies to observe that there is commonly a substantial degree of intertwining between factual judgments and political preferences and influences brought to bear by powerful outsiders. (Here is an earlier discussion of these processes; link.)

Ideally we would like to imagine a process of government decision-making that proceeds along these lines: careful gathering and assessment of the best available scientific evidence about an issue through expert specialist panels and sections; careful analysis of the consequences of available policy choices measured against a clear understanding of goals and priorities of the government; and selection of a policy or action that is best, all things considered, for forwarding the public interest and minimizing public harms. Unfortunately, as the experience of government policies concerning climate change in both the Bush administration and the Trump administration illustrates, ideology and private interest distort every phase of this idealized process.

(Philip Tetlock’s Superforecasting: The Art and Science of Prediction offers an interesting analysis of the process of expert factual assessment and prediction. Particularly interesting is his treatment of intelligence estimates.)

Is corruption a social thing?

When we discuss the ontology of various aspects of the social world, we are often thinking of such things as institutions, organizations, social networks, value systems, and the like. These examples pick out features of the world that are relatively stable and functional. Where does an imperfection or dysfunction of social life like corruption fit into our social ontology?

We might say that “corruption” is a descriptive category that is aimed at capturing a particular range of behavior, like stealing, gossiping, or asceticism. This makes corruption a kind of individual behavior, or even a characteristic of some individuals. “Mayor X is corrupt.”

This initial effort does not seem satisfactory, however. The idea of corruption is tied to institutions, roles, and rules in a very direct way, and therefore we cannot really present the concept accurately without articulating these institutional features of the concept of corruption. Corruption might be paraphrased in these terms:

  • Individual X plays a role Y in institution Z; role Y prescribes honest and impersonal performance of duties; individual X accepts private benefits to take actions that are contrary to the prescriptions of Y. In virtue of these facts X behaves corruptly.

Corruption, then, involves actions taken by officials that deviate from the rules governing their role, in order to receive private benefits from the subjects of those actions. Absent the rules and role, corruption cannot exist. So corruption is a feature that presupposes certain social facts about institutions. (Perhaps there is a link to Searle’s social ontology here; link.)

We might consider that corruption is analogous to friction in physical systems. Friction is a factor that affects the performance of virtually all mechanical systems, but that is a second-order factor within classical mechanics. And it is possible to give mechanical explanations of the ubiquity of friction, in terms of the geometry of adjoining physical surfaces, the strength of inter-molecular attractions, and the like. Analogously, we can offer theories of the frequency with which corruption occurs in organizations, public and private, in terms of the interests and decision-making frameworks of variously situated actors (e.g. real estate developers, land value assessors, tax assessors, zoning authorities …). Developers have a business interest in favorable rulings from assessors and zoning authorities; some officials have an interest in accepting gifts and favors to increase personal income and wealth; each makes an estimate of the likelihood of detection and punishment; and a certain rate of corrupt exchanges is the result.
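The expected-utility reasoning sketched here can be illustrated with a minimal simulation (all parameter values are hypothetical, chosen only for illustration): each official weighs a privately drawn benefit against the expected penalty, detection probability times sanction, and the aggregate rate of corrupt exchanges falls as oversight improves.

```python
import random

def corruption_rate(n_officials, max_benefit, penalty, p_detect, seed=0):
    """Fraction of officials who accept a corrupt exchange.

    Each official draws a private benefit uniformly from [0, max_benefit]
    and accepts the exchange when that benefit exceeds the expected
    penalty (detection probability times sanction)."""
    rng = random.Random(seed)
    expected_penalty = p_detect * penalty
    corrupt = sum(
        1 for _ in range(n_officials)
        if rng.uniform(0, max_benefit) > expected_penalty
    )
    return corrupt / n_officials

# Raising the probability of detection (better auditing, more
# investigators) lowers the equilibrium rate of corrupt exchanges.
low_oversight = corruption_rate(10_000, max_benefit=100, penalty=500, p_detect=0.05)
high_oversight = corruption_rate(10_000, max_benefit=100, penalty=500, p_detect=0.15)
```

With these hypothetical numbers, roughly three-quarters of officials defect under weak oversight and only about a quarter under stronger oversight; the point is not the particular figures but the structural claim that the rate of corruption responds to institutional arrangements, not only to individual character.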

This line of thought once again makes corruption a feature of the actors and their calculations. But it is important to note that organizations themselves have features that make corrupt exchanges either more likely or less likely (link, link). Some organizations are corruption-resistant in ways in which others are corruption-neutral or corruption-enhancing. These features include internal accounting and auditing procedures; whistle-blowing practices; executive and supervisor vigilance; and other organizational features. Further, governments and systems of law can make arrangements that discourage corruption; the incidence of corruption is influenced by public policy. For example, legal requirements on transparency in financial practices by firms, investment in investigatory resources in oversight agencies, and weighty penalties to companies found guilty of corrupt practices can affect the incidence of corruption. (Robert Klitgaard’s treatment of corruption is relevant here; he provides careful analysis of some of the institutional and governmental measures that can be taken that discourage corrupt practices; link, link. And there are cross-country indices of corruption (e.g. Transparency International) that demonstrate the causal effectiveness of anti-corruption measures at the state level. Finland, Norway, and Switzerland rank well on the Transparency International index.)

So — is corruption a thing? Does corruption need to be included in a social ontology? Does a realist ontology of government and business organization have a place for corruption? Yes, yes, and yes. Corruption is a real property of individual actors’ behavior, observable in social life. It is a consequence of strategic rationality by various actors. Corruption is a social practice with its own supporting or inhibiting culture. Some organizations effectively espouse a core set of values of honesty and correct performance that make corruption less frequent. And corruption is a feature of the design of an organization or bureau, analogous to “mean-time-between-failure” as a feature of a mechanical design. Organizations can adopt institutional protections and cultural commitments that minimize corrupt behavior, while other organizations fail to do so and thereby encourage corrupt behavior. So “corruption-vulnerability” is a real feature of organizations and corruption has a social reality.

Exercising government’s will

Since the beginning of the industrial age, the regulation of private activity for the public good has been essential to the health and safety of the public. The economics of externalities and public harms are too powerful to permit private actors to conduct their affairs purely according to the dictates of profit and private interest. The desolation of the River Irk described in Engels’ The Condition of the Working-Class in England in 1844 was powerful evidence of this dynamic in the nineteenth century, and the need for the protection of health and safety in the food industry, the protection of air and water quality, and the establishment of regulations ensuring safe operation of industrial, chemical, and nuclear plants became evident in the middle of the twentieth century. (Of course it goes without saying that our current administration no longer concedes this point.)

A fundamental problem for understanding the mechanics of government is the question of how the will and intentions of government (policies and regulatory regimes) are conveyed from the sites of decision-making to the behavior of the actors whom these policies are meant to influence.

The familiar principal-agent problem designates precisely this complex of issues. Applying a government policy or regulation requires a chain of behaviors by multiple agents within an extended network of governmental and non-governmental offices. It is all too evident that actors at various levels have interests and intentions that are important to their choices; and blind obedience to commands from above is not a common practice within any organization. Instead, actors within an office or bureau have some degree of freedom to act strategically with regard to their own preferences and interests. What, then, are the arrangements that the principal can put in place that make conformance by the agent more complete?

Further, there are commonly a range of non-governmental entities and actors who are affected by governmental policies and regulations. They too have the ability to act strategically in consideration of their preferences and interests. And some of the actions that are available to non-governmental actors have the capacity to significantly influence the impact and form of various governmental policies and regulations. The corporations that own nuclear power plants, for example, have an ability to constrain and deflect the inspection schedules to which their properties are subject through influence on legislators, and the regulatory agency may be seriously hampered in its ability to apply existing safety regulations.

This is a problem of social ontology: what kind of thing is a governmental agency, how does it work internally, and through what kinds of mechanisms does it influence the world around it (firms, criminals, citizens, local government, …)?

Two related ideas about the nature of organizations are relevant in this context. The idea of organizations as “strategic action fields” developed by Fligstein and McAdam (A Theory of Fields) fits the situation of a governmental agency. And the earlier work by Michel Crozier and Erhard Friedberg offers a similar account of the strategic action that jointly determines the workings of an organization. Here is a representative passage from Crozier and Friedberg:

The reader should not misconstrue the significance of this theoretical bet. We have not sought to formulate a set of general laws concerning the substance, the properties and the stages of development of organizations and systems. We do not have the advantage of being able to furnish normative precepts like those offered by management specialists who always believe they can elaborate a model of “good organization” and present a guide to the means and measures necessary to realize it. We present a series of simple propositions on the problems raised by the existence of these complex but integrated ensembles that we call organizations, and on the means and instruments that people have invented to surmount these problems; that is to say, to assure and develop their cooperation in view of the common goals. (L’acteur et le système, p. 11)

(Here are some earlier discussions of these theories; link, link, link.  And here is a related discussion of Mayer Zald’s treatment of organizations; link.)

Also relevant from the point of view of the ontology of government organization is the new theory of institutional logics. Patricia Thornton, William Ocasio, and Michael Lounsbury describe new theoretical developments within the general framework of new institutionalism in The Institutional Logics Perspective: A New Approach to Culture, Structure and Process. Here is how they define their understanding of “institutional logic”:

“… as the socially constructed, historical patterns of cultural symbols and material practices, including assumptions, values, and beliefs, by which individuals and organizations provide meaning to their daily activity, organize time and space, and reproduce their lives and experiences.” (2)

The institutional logics perspective is a metatheoretical framework for analyzing the interrelationships among institutions, individuals, and organizations in social systems. It aids researchers in questions of how individual and organizational actors are influenced by their situation in multiple social locations in an interinstitutional system, for example the institutional orders of the family, religion, state, market, professions, and corporations. Conceptualized as a theoretical model, each institutional order of the interinstitutional system distinguishes unique organizing principles, practices, and symbols that influence individual and organizational behavior. Institutional logics represent frames of reference that condition actors’ choices for sensemaking, the vocabulary they use to motivate action, and their sense of self and identity. The principles, practices, and symbols of each institutional order differentially shape how reasoning takes place and how rationality is perceived and experienced. (2)

Here is a discussion of institutional logics; link.

So what can we say about the ontology of policy implementation, compliance, and executive decisions? We can say that —

  • it proceeds through individual actors in particular circumstances guided by particular interests and preferences;
  • implementation is likely to be imperfect in the best of circumstances and entirely ineffectual in other circumstances;
  • implementation is affected by the strategic non-governmental actors and organizations it is designed to influence, leading to further distortion and incompleteness.
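These three observations can be made concrete with a toy model (the compliance figures are purely hypothetical): a directive passes through a chain of intermediate actors, each of whom carries out only a fraction of what it receives, so the effect that finally reaches its target is the product of the per-actor compliance rates and attenuates with the length of the chain.

```python
def implemented_effect(initial_strength, compliance_rates):
    """Strength of a policy after passing through a chain of
    intermediaries, each implementing only a fraction of what it
    receives (1.0 = perfect, frictionless implementation)."""
    effect = initial_strength
    for rate in compliance_rates:
        effect *= rate
    return effect

# A directive relayed through five offices, each 80% compliant,
# arrives at about a third of its intended strength.
attenuated = implemented_effect(1.0, [0.8] * 5)
```

The model is deliberately crude (real intermediaries distort as well as attenuate), but it captures the core point: implementation is never frictionless, and its losses compound across the layers of an extended network of offices.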

We can also, more positively, identify specific mechanisms that governments and executives introduce to increase the effectiveness of implementation of their policies. These include —

  • internal audit and discipline functions,
  • communications and training strategies designed to enhance conformance by intermediate actors,
  • periodic purges of non-conformant sub-officials and powerful non-governmental actors,
  • and dozens of other strategies and mechanisms of conformance.

Most fundamentally we can say that any model of government that postulates frictionless application and implementation of policy is flawed at its core. Such a model overlooks an ontological fundamental about government and other organizations, large and small: that organizational action is never automatic, algorithmic, or exact; that it is always conveyed by intermediate actors who have their own understandings and preferences about policy; and that it works in an environment where powerful non-governmental actors are almost always in positions to blunt the effectiveness of “the will of government”.

This topic unavoidably introduces the idea of corruption into the discussion (link, link). Sometimes the contrarian behavior of internal actors derives from private benefits offered to them by outsiders affected by the actions of government. (Hotels in Moscow?) More generally, however, the topic raises questions of conflicts of commitment, mission, role obligations, and organizational ethics.

Social mobility disaggregated

There is an exciting and valuable new contribution from the research group around Raj Chetty, Nathan Hendren, and John Friedman, this time on the topic of neighborhood-level social mobility. (Earlier work measured the contribution of university education to social mobility across the country. That work is presented on the Opportunity Insights website; link, link. Here is an earlier post on that work; link.) In the recently released work Chetty and his colleagues have used census data to compare the incomes of parents and children across the country by neighborhood of birth, with the ability to disaggregate by race and gender, and the results are genuinely staggering. Here is a report on the project on the US Census website; link. The interactive dataset and mapping app are provided here (link). The study identifies neighborhoods of origin, characteristics of parents and neighborhoods, and characteristics of children.

Here are screenshots of metropolitan Detroit representing the individual incomes of the children (as adults) based on their neighborhoods of origin for all children, black children, and white children. (Of course a percentage of these individuals no longer live in the original neighborhood.) There are 24 outcome variables included as well as 13 neighborhood characteristics, and it is possible to create maps based on multiple combinations of these variables. It is also possible to download the data.

Children born in Highland Park, Michigan earned an average individual income as adults in 2014-15 of $18K; children born in Plymouth, Michigan earned an average individual income as adults of $42K. It is evident that these differences in economic outcomes are highly racialized; in many of the tracts in the Detroit area there are “insufficient data” for either black or white individuals to provide average data for these sub-populations in the given areas. This reflects the substantial degree of racial segregation that exists in the Detroit metropolitan area. (The project provides a special study of opportunity in Detroit, “Finding Opportunity in Detroit”.)
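The kind of area-level comparison described here can be reproduced from the downloadable tract-level data. Below is a minimal sketch, with the caveat that the column names (`tract_name`, `city`, `mean_income_all`) and the toy tract values are hypothetical stand-ins for the actual variables in the dataset; the numbers are chosen only so that the city-level averages mirror the Highland Park and Plymouth figures cited above:

```python
# Illustrative sketch only: column names and tract values are hypothetical
# stand-ins for variables in the downloadable Opportunity Atlas data.
import pandas as pd

# Toy tract-level records: mean adult individual income by tract of origin.
tracts = pd.DataFrame({
    "tract_name": ["Highland Park A", "Highland Park B",
                   "Plymouth A", "Plymouth B"],
    "city": ["Highland Park", "Highland Park", "Plymouth", "Plymouth"],
    "mean_income_all": [17000, 19000, 41000, 43000],
})

# Average across each city's tracts to produce an area-level comparison.
by_city = tracts.groupby("city")["mean_income_all"].mean()
gap = by_city["Plymouth"] - by_city["Highland Park"]
print(by_city)
print(f"Gap between areas: ${gap:,.0f}")  # $24,000 with these toy values
```

The same grouping pattern extends to the disaggregated race and gender variables in the real data, subject to the "insufficient data" suppression in heavily segregated tracts.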

This dataset is genuinely eye-opening for anyone interested in the workings of economic opportunity in the United States. It is also valuable for public policy makers at the local and higher levels who have an interest in improving outcomes for children in poverty. It is possible to use the many parameters included in the data to probe for obstacles to socioeconomic progress that might be addressed through targeted programs of opportunity enhancement.

(Here is a Brookings description of the social mobility project’s central discoveries; link.)

Cyber threats

David Sanger’s very interesting recent book, The Perfect Weapon: War, Sabotage, and Fear in the Cyber Age, is a timely read this month, following the indictments of twelve Russian intelligence officers for hacking the DNC in 2016. Sanger is a national security writer for the New York Times, and has covered cyber security issues for a number of years. He and William Broad and John Markoff were among the first journalists to piece together the story behind the Stuxnet attack on Iran’s nuclear fuel program (the secret program called Olympic Games), and the book also offers some intriguing hints about the possibility of “left of launch” intrusions by US agencies into the North Korean missile program. This is a book that everyone should read. It greatly broadens the scope of what most of us think about under the category of “hacking”. We tend to think of invasions of privacy and identity theft when we think of nefarious uses of the internet; but Sanger makes it clear that the stakes are much greater. Current cyber-warfare tools are capable of bringing down whole national infrastructures, leading to massive civilian hardship.

There are several important takeaways from Sanger’s book. One is the pervasiveness and power of the offensive cyber tools available to nation-state actors in penetrating and potentially disrupting or destroying the infrastructures of their potential opponents. Russia, China, North Korea, Iran, and the United States are all shown to possess tools of intrusion, data extraction, and system destruction that are extremely difficult for targeted countries and systems to defend against. The Sony attack (North Korea), the Office of Personnel Management breach (China), the attack on the Ukraine electric grid (Russia), the attack on Saudi Arabia’s massive oil company Aramco (Iran), and the attack on the US electoral system (Russia) all proceeded with massive effect and without evident response from their victims or the United States. At this moment the balance of capability appears to favor offense over defense. A second important theme is the extreme level of secrecy that the US intelligence establishment has imposed on the capabilities it possesses for conducting cyber conflict. Sanger makes it clear that he believes that a greater level of public understanding of the capabilities and risks created by cyber weapons like Stuxnet would be beneficial in the United States and other countries, by permitting a more serious public debate about the means and ends, risks and rewards of the use of cyber weapons. He likens this to the Obama administration’s eventual willingness to make a public case for the use of unmanned drone strikes against its enemies.

Third, Sanger makes it clear that the classic logic of deterrence that was successful in maintaining nuclear peace is less potent when it comes to cyber warfare and escalation. State-level adversaries have selected strategies of cyber attack precisely because of the relatively low cost of developing this technology, the relative anonymity of an attack once it occurs, and the difficulties faced by victims in selecting appropriate and effective counter-strikes that would deter the attacker in the future.

The National Security Agency gets a lot of attention in the book. The Office of Tailored Access Operations gets extensive discussion, based on revelations from the Snowden materials and other sources. Sanger makes it clear that the NSA had developed a substantial toolkit for intercepting communications and penetrating computer systems to capture data files of security interest. But according to Sanger it has also developed strong cyber tools for offensive use against potential adversaries. Part of the evidence for this judgment comes from the Snowden revelations (which are also discussed extensively). Part comes from what Sanger and others were able to discover about the workings of Stuxnet in targeting Iranian nuclear centrifuges over a many-month period. And part comes from suggestive reporting about the odd fact that North Korea’s medium range missile tests were so spectacularly unsuccessful for a series of launches.

The book leads to worrisome conclusions and questions. US infrastructure and counter-cyber programs have proven highly vulnerable to attacks that have already taken place in our country. The extraction by Chinese military intelligence of millions of confidential personal records of US citizens from the Office of Personnel Management took place over months and was uncovered only after the damage was done. The effectiveness of Russian attacks on the Ukraine electric power grid suggests that similar attacks would be possible in other advanced countries, including the United States. All of these incidents suggest a level of vulnerability and potential for devastating attack that the public is not prepared for.
