Understanding Society has now reached its twelfth anniversary of continuous publication. This represents 1,271 posts, and over 1.3 million words. According to Google Blogspot statistics, the blog has gained over 11 million pageviews since 2010. Just over half of visitors came from the United States, Great Britain, and Canada, with the remainder spread out over the rest of the world. The most popular posts are “Lukes on power” (134K) and “What is a social structure?” (124K).
I’ve continued to find writing the blog to be a great way of keeping several different lines of thought and research going. My current interest in “organizational causes of technology failures” has had a large presence in the blog in the past year, with just under half of the posts in 2019 on this topic. Likewise, a lot of the thinking I’ve done on the topic of “a new ontology of government” has unfolded in the blog. Other topic areas include the philosophy of social science, philosophy of technology, and theories of social ontology. A theme that was prominent in 2018 that is not represented in the current year is “Democracy and the politics of hate”, but I’m sure I’ll return to this topic in the coming months because I’ll be teaching a course on this subject in the spring.
I continue to look at academic blogging as a powerful medium for academic communication, creativity, and testing out new ideas. I began in 2007 by describing the blog as “open-source philosophy”, and it still has that character for me. And I continue to believe that my best thinking finds expression in Understanding Society. Every post that I begin starts with an idea or a question that is of interest to me on that particular day, and it almost always leads me to learning something new along the way.
I’ve also looked at the blog as a kind of experiment in the use of social media for serious academic purposes. Can blogging platforms and social media platforms like Twitter or Facebook contribute to academic progress? So it is worth examining the reach of the blog over time, and the population of readers it has touched. The graph of pageviews over time is interesting in this respect.
Traffic to the blog increased in a fairly linear way from the beginning date of the data collection in 2010 through about 2017, and then declined more steeply from 2017 through to the present. (The data points are pageviews per month.) At its peak the blog received about 150K pageviews per month, and it seems to be stabilizing now at about 100K pageviews per month. My impression is that a lot of the variation has to do with unobserved changes in search engine page ranking algorithms, resulting in falling numbers of referrals. The Twitter feed associated with the blog has just over 2,100 followers (@dlittle30), and the Facebook page for the blog registers 12,800 followers. The Facebook page is not a very efficient way of disseminating new posts from the blog, though, because Facebook’s algorithm for placing an item into the feed of a “follower” is extremely selective and opaque. A typical item may be fed into 200-400 of the feeds of the almost 13,000 individuals who have expressed interest in the page.
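The selectivity of the Facebook feed can be made concrete with a bit of arithmetic on the figures above. This is an illustrative sketch only, using the rounded numbers quoted in the paragraph (200–400 placements out of 12,800 followers):

```python
# Illustrative arithmetic: what fraction of the page's followers
# actually see a typical post, given the figures quoted above?
followers = 12_800                  # followers of the blog's Facebook page
reach_low, reach_high = 200, 400    # typical number of feeds a post reaches

low_pct = 100 * reach_low / followers
high_pct = 100 * reach_high / followers

# A typical post reaches only a small fraction of followers.
print(f"Estimated reach: {low_pct:.1f}%-{high_pct:.1f}% of followers")
```

In other words, a typical post is shown to only about 1.6% to 3.1% of the people who have expressed interest in the page.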
A surprising statistic is that about 75% of pageviews on the blog came through desktop requests rather than mobile requests (phone and tablet). We tend to think that most web viewing is occurring on mobile devices now, but that does not seem to be the case. Also interesting is that the content of the blog is mirrored to a WordPress platform (www.undsoc.org), and the traffic there is a small fraction of the traffic on the Blogspot platform (1,500 pageviews versus 80,000 pageviews).
So thanks to the readers who keep coming back for more, and thanks as well to those other visitors who come because of an interest in a very specific topic. It’s genuinely rewarding and enjoyable to be connected to an international network of people, young and old, who share an interest in how the social world works.
Allan McDonald’s Truth, Lies, and O-Rings: Inside the Space Shuttle Challenger Disaster (2009) has given me a somewhat different understanding of the Challenger launch disaster than I’ve gained from other sources, including Diane Vaughan’s excellent book The Challenger Launch Decision: Risky Technology, Culture, and Deviance at NASA. McDonald is a Morton Thiokol (MTI) insider who was present through virtually all aspects of the evolving solid rocket program at NASA in the two years leading up to the explosion in January 1986. He was director of the Space Shuttle Solid Rocket Motor Project during part of this time and he represented MTI at the formal Launch Readiness Review panels (LRRs) for several shuttle launches, including the fateful Challenger launch. He was senior management representative for MTI for the launch of STS-51L Challenger. His account gives a great deal of engineering detail about the Morton Thiokol engineering group’s ongoing concerns about the O-rings in the months preceding the Challenger disaster. This serves as a backdrop for a detailed analysis of the dysfunctions in decision-making in both NASA and Morton Thiokol that led to an insufficient priority being given to safety assessments.
It is worth noting that O-rings were a key part of other large solid-fuel rockets, including the Titan rocket. So there was a large base of engineering and test experience with the performance of the O-rings when exposed to the high temperatures and pressures of ignition and firing.
The biggest surprise to me is the level of informed, rigorous, and evidence-based concern that MTI engineers had about the reliability of the joint seal afforded by the primary and secondary O-rings on the solid rocket motors of the Shuttle system. These specialists had a very good and precise understanding of the mechanics of the problem. Further, there was a good engineering understanding of the expected (and required) time-sequence performance of the O-rings during ignition and firing. If the sealing action were delayed by even a few hundredths of a second, hot gas would be able to penetrate past the seal. These were not hypothetical worries; they were based on data from earlier launches demonstrating O-ring erosion, as well as soot between the primary and secondary rings showing that super-hot gases had penetrated the primary seal. The worst damage and evidence of blowby had occurred one year earlier on flight STS-51C, launched January 25, 1985 — the lowest-temperature launch yet attempted, at 53 degrees.
Launch temperatures for the rescheduled January 28 launch were projected to be extremely cold — 22-26 degrees was forecast on January 27, roughly 30 degrees colder than the previous January launch. The projected temperatures immediately raised alarm concerning the potential effects on the O-rings with the Utah-based engineering team and with McDonald himself. A teleconference meeting was scheduled for January 27 to receive recommendations from the Utah-based Morton Thiokol engineers who were focused on the O-rings problem about the minimum acceptable temperature for launch (95).
I tried to reach Larry Mulloy at his hotel but failed, so I called Cecil Houston, the NASA/MSFC Resident Manager at KSC. I alerted him of our concerns about the sealing capability of the field-joint O-rings at the predicted cold temperatures and asked him to set up the teleconference. (96)
The teleconference began at 8:30 pm on the evening before the launch. McDonald was present at Cape Canaveral for the Flight Readiness Review panel and participated in the teleconference, in which MTI engineering presented its analysis and a recommendation against launching in the expected cold weather conditions.
Thiokol’s engineering presentation consisted of about a dozen charts summarizing the history of the performance of the field-joints, some engineering analysis on the operation of the joints, and some laboratory and full-scale static test data relative to the performance of the O-rings at various temperatures. About half the charts had been prepared by Roger Boisjoly, our chief seal expert on the O-ring Seal Task Force and staff engineer to Jack Kapp, Manager of Applied Mechanics. The remainder were presented by Arnie Thompson, the supervisor of our Structures Section under Jack Kapp, and by Brian Russell, a program manager working for Bob Ebeling. (97)
Boisjoly’s next chart showed how cold temperature would reduce all the factors that helped maintain a good seal in the joint: lower O-ring squeeze due to thermal shrinkage of the O-ring; thicker and more viscous grease around the O-ring, making it slower to move across the O-ring groove; and higher O-ring hardness due to low temperature, making it more difficult for the O-ring to extrude dynamically into the gap for proper sealing. All of these things increased the dynamic actuation time, or timing function, of the O-ring, when at the very same time the O-ring could be eroding, creating a situation where the secondary seal might not be able to seal the motor, not if the primary O-ring was sufficiently eroded to prevent sealing in the joint. (99)
Based on their concerns about temperature and the effectiveness of the seals in the critical half-second of ignition, MTI engineering staff prepared the foundation for a recommendation not to launch in temperatures lower than 53 degrees. Their conclusion as presented at the January 27 teleconference was unequivocally against launch under these temperature conditions:
The final chart included the recommendations, which resulted in several strong comments and many very surprising reactions from the NASA participants in the teleconference. The first statement on the “Recommendations” chart stated that the O-ring temperature must be equal to or greater than 53° at launch, and this was primarily based upon the fact that SRM-15, which was the best simulation of this condition, worked at 53 °. The chart ended with a statement that we should project the ambient conditions (temperature and wind) to determine the launch time. (102)
NASA lead Larry Mulloy contested the analysis and evidence in the slides and expressed great concern about the negative launch recommendation, and he asserted that the data were “inconclusive” in establishing a relationship between temperature and O-ring failure.
Mulloy immediately said he could not accept the rationale that was used in arriving at that recommendation. Stan Reinartz then asked George Hardy, Deputy Director of Science and Engineering at NASA/MSFC, for his opinion. Hardy said he was “appalled” that we could make such a recommendation, but that he wouldn’t fly without Morton Thiokol’s concurrence. Hardy also stated that we had only addressed the primary O-ring, and did not address the secondary O-ring, which was in a better position to seal because of the leak-check. Mulloy then shouted, “My God, Thiokol, when do you want me to launch, next April?” He also stated that “the eve of a launch is a helluva time to be generating new launch commit criteria!” Stan Reinartz entered the conversation by saying that he was under the impression that the solid rocket motors were qualified from 40° to 90° and that the 53° recommendation certainly was not consistent with that. (103)
Joe Kilminster, VP of Space Booster Programs at MTI, then requested a short caucus for the engineering team in Utah to reevaluate the data and consider their response to the skepticism voiced by NASA officials. McDonald did not participate in the caucus, but his reconstruction based on the memories of persons present paints a clear picture. The engineering experts did not change their assessment, and they were overridden by MTI executives Cal Wiggins (VP and General Manager of the Space Division) and Jerry Mason (Senior VP of Wasatch Operations). In opening the caucus discussion, Mason is quoted as saying “we need to make a management decision”. Engineers Boisjoly and Thompson reiterated their technical concerns about the functionality of the O-ring seals at low temperature, with no response from the senior executives. No members of the engineering team spoke up to support a decision to launch. Mason polled the senior executives, including Bob Lund (VP of Engineering), and said to Lund, “It’s time for you, Bob, to take off your engineering hat and put on your management hat.” (111) A positive launch recommendation was then conveyed to NASA, and the process in Florida resumed towards launch.
McDonald spends considerable time indicating the business pressure that MTI was subject to from its largest customer, NASA. NASA was considering opening MTI’s solid fuel motor contract to competing companies as a second source, and had also delayed signing a large contract (Buy-III fixed cost bid) for the next batch of motors. The collective impact of these actions by NASA could cost MTI over a billion dollars. So MTI management appears to have been under great pressure to accommodate NASA managers’ preferences concerning the launch decision. And it is hard to avoid the conclusion that their decision placed business interests first and the professional judgments of their safety engineers second. In doing so they placed the lives of seven astronauts at risk, with tragic consequences.
And what about NASA? Here the pressures are somewhat less fully developed than in Vaughan’s account, but the driving commitment to achieve a 24-launch per year schedule seems to have been a primary motivation. Delayed launches significantly undermined this goal, which threatened the prestige of NASA, the hope of significant commercial revenue for the program, and the assurance of continuing funding from Congress.
McDonald was not a participant in the caucus, but he provides a reconstruction based on information from participants. In his understanding the engineers continued to defend their recommendation, based on very concrete concerns about the effectiveness of the O-rings in extreme cold. Senior managers indicated their lack of support for this engineering judgment, and in the end Jerry Mason indicated that this would need to be a management decision. The FRR team was then informed that MTI had reconsidered its negative recommendation concerning launch. McDonald refused to sign the launch recommendation document, which was instead signed by his boss Joe Kilminster and faxed to the LRR team.
In hindsight it seems clear that both MTI executives and NASA executives deferred to business pressures of their respective organizations in the face of well-supported doubts about the safety of the launch. Is this a case of 20-20 vision after the fact? It distinctly appears not to be. The depth of knowledge, analysis, and rational concern that was present in the engineering group for at least a year prior to the Challenger disaster gave very specific and evidence-based reasons to abort this launch. This was not some intuitive, unspecific set of worries; it was an ongoing research problem that greatly concerned the engineers who were directly involved. And it appears there was no significant disagreement or uncertainty among them.
So it is hard to avoid a rather terrible conclusion, that the Challenger disaster was avoidable and should have been prevented. And the culpability lies with senior NASA and MTI executives who placed production pressures and business interests ahead of normal safety assessment procedures, and ahead of safety itself.
It is worth noting that Diane Vaughan’s assessment is directly at odds with this conclusion. She writes:
We now return to the eve of the launch. Accounts emphasizing valiant attempts by Thiokol engineers to stop the launch, actions of a few powerful managers who overruled a unanimous engineering position, and managerial failure to pass information about the teleconference to senior NASA administrators, coupled with news of economic strain and production pressure at NASA, led many to suspect that NASA managers had acted as amoral calculators, knowingly violating rules and taking extraordinary risk with human lives in order to keep the shuttle on schedule. However, like the history of decision making, I found that events on the eve of the launch were vastly more complex than the published accounts and media representations of it. From the profusion of information available after the accident, some actions, comments, and actors were brought repeatedly to public attention, finding their way into recorded history. Others, receiving less attention or none, were omitted. The omissions became, for me, details of social context essential for explanation. (LC 6215)
Young, Cook, Boisjoly, and Feynman. Concluding this list of puzzles and contradictions, I found that no one accused any of the NASA managers associated with the launch decision of being an amoral calculator. Although the Presidential Commission report extensively documented and decried the production pressures under which the Shuttle Program operated, no individuals were confirmed or even alleged to have placed economic interests over safety in the decision to launch the Space Shuttle Challenger. For the Commission to acknowledge production pressures and simultaneously fail to connect economic interests and individual actions is, prima facie, extremely suspect. But NASA’s most outspoken critics—Astronaut John Young, Morton Thiokol engineers Al McDonald and Roger Boisjoly, NASA Resource Analyst Richard Cook, and Presidential Commissioner Richard Feynman, who frequently aired their opinions to the media—did not accuse anyone of knowingly violating safety rules, risking lives on the night of January 27 and morning of January 28 to meet a schedule commitment. (kl 1627)
Vaughan’s account includes many of the pivot-points of McDonald’s narrative, but she assigns a different significance to many of them. She prefers her “normalization of deviance” explanation over the “amoral calculator” explanation.
(The Rogers Commission report and supporting documents are available online. Here is a portion of the hearings transcripts in which senior NASA officials provide testimony; link. This segment is critical to the issues raised in McDonald’s account, since it addresses the January 27, 1986 teleconference FRR session in which a recommendation against launch was put forward by MTI engineering and was challenged by NASA senior administrators.)
Technologies and technology systems have deep and pervasive effects on the human beings who live within their reach. How do normative principles and principles of social and political justice apply to technology? Is there such a thing as “the ethics of technology”?
There is a reasonably active literature on questions that sound a lot like these. (See, for example, the contributions included in Winston and Edelbach, eds., Society, Ethics, and Technology.) But all too often the focus narrows too quickly to ethical issues raised by a particular example of contemporary technology — genetic engineering, human cloning, encryption, surveillance, and privacy, artificial intelligence, autonomous vehicles, and so forth. These are important questions; but it is also possible to ask more general questions as well, about the normative space within which technology, private activity, government action, and the public live together. What principles allow us to judge the overall justice, fairness, and legitimacy of a given technology or technology system?
There is an overriding fact about technology that needs to be considered in every discussion of the ethics of technology. It is a basic principle of liberal democracy that individual freedom and liberty should be respected. Individuals should have the right to act and create as they choose, subject to something like Mill’s harm principle. The harm principle holds that liberty should be restricted only when the activity in question imposes harm on other individuals. Applied to the topic of technology innovation, we can derive a strong principle of “liberty of innovation and creation” — individuals (and their organizations, such as business firms) should have a presumptive right to create new technologies constrained only by something like the harm principle.
Often we want to go beyond this basic principle of liberty to ask what the good and bad of technology might be. Why is technological innovation a good thing, all things considered? And what considerations should we keep in mind as we consider legitimate regulations or limitations on technology?
Consider three large principles that have emerged in other areas of social and political ethics as a basis for judging the legitimacy and fairness of a given set of social arrangements:
A. Technologies should contribute to some form of human good, some activity or outcome that is desired by human beings — health, education, enjoyment, pleasure, sociality, friendship, fitness, spirituality, …
B. Technologies ought to be consistent with the fullest development of the human capabilities and freedoms of the individuals whom they affect. [Or stronger: “promote the fullest development …”]
C. Technologies ought to have population effects that are fair, equal, and just.
The first principle attempts to address the question, “What is technology good for? What is the substantive moral good that is served by technology development?” The basic idea is that human beings have wants and needs, and contributing to their ability to fulfill these wants is itself a good thing (if in so doing other greater harms are not created as well). This principle captures what is right about utilitarianism and hedonism — the inherent value of human happiness and satisfaction. This means that entertainment and enjoyment are legitimate goals of technology development.
The second principle links technology to the “highest good” of human wellbeing — the full development of human capabilities and freedoms. As is evident, the principle offered here derives from Amartya Sen’s theory of capabilities and functionings, expressed in Development as Freedom. This principle recalls Mill’s distinction between higher and lower pleasures:
Mill always insisted that the ultimate test of his own doctrine was utility, but for him the idea of the greatest happiness of the greatest number included qualitative judgements about different levels or kinds of human happiness. Pushpin was not as good as poetry; only Pushkin was…. Cultivation of one’s own individuality should be the goal of human existence. (J.S. McClelland, A History of Western Political Thought : 454)
The third principle addresses the question of fairness and equity. Thinking about justice has evolved a great deal in the past fifty years, and one thing that emerges clearly is the intimate connection between injustice and invidious discrimination — even if unintended. Social institutions that arbitrarily assign significantly different opportunities and life outcomes to individuals based on characteristics such as race, gender, income, neighborhood, or religion are unfair and unjust, and need to be reformed. This approach derives as much from current discussions of racial health disparities as it does from philosophical theories along the lines of Rawls and Sen.
On these principles a given technology can be criticized, first, if it has no positive contribution to make for the things that make people happy or satisfied; second, if it has the effect of stunting the development of human capabilities and freedoms; and third, if it has discriminatory effects on quality of life across the population it affects.
One important puzzle facing the ethics of technology is a question about the intended audience of such a discussion. We are compelled to ask, to whom is a philosophical discussion of the normative principles that ought to govern our thinking about technology aimed? Whose choices, actions, and norms are we attempting to influence? There appear to be several possible answers to this question.
Corporate ethics. Entrepreneurs and corporate boards and executives have an ethical responsibility to consider the impact of the technologies that they introduce into the market. If we believe that codes of corporate ethics have any real effect on corporate decision-making, then we need to have a basis in normative philosophy for a relevant set of principles that should guide business decision-making about the creation and implementation of new technologies by businesses. A current example is the use of facial recognition for the purpose of marketing or store security; does a company have a moral obligation to consider the negative social effects it may be promoting by adopting such a technology?
Governments and regulators. Government has an overriding responsibility of preserving and enhancing the public good and minimizing harmful effects of private activities. This is the fundamental justification for government regulation of industry. Since various technologies have the potential of creating harms for some segments of the public, it is legitimate for government to enact regulatory systems to prevent reckless or unreasonable levels of risk. Government also has a responsibility for ensuring a fair and just environment for all citizens, and enacting policies that serve to eliminate inequalities based on discriminatory social institutions. So here too governments have a role in regulating technologies, and a careful study of the normative principles that should govern our thinking about the fairness and justice of technologies is relevant to this process of government decision-making as well.
Public interest advocacy groups. One way in which important social issues can be debated and sometimes resolved is through the work of well-organized advocacy groups such as the Union of Concerned Scientists, the Sierra Club, or Greenpeace. Organizations like these are in a position to argue in favor of or against a variety of social changes, and raising concerns about specific kinds of technologies certainly falls within this scope. There are only a small number of grounds for this kind of advocacy: the innovation will harm the public, the innovation will create unacceptable hidden costs, or the innovation raises unacceptable risks of unjust treatment of various groups. In order to make the latter kind of argument, the advocacy group needs to be able to articulate a clear and justified argument for its position about “unjust treatment”.
The public. Citizens themselves have an interest in being able to make normative judgments about new technologies as they arise. “This technology looks as though it will improve life for everyone and should be favored; that technology looks as though it will create invidious and discriminatory sets of winners and losers and should be carefully regulated.” But for citizens to have a basis for making judgments like these, they need to have a normative framework within which to think and reason about the social role of technology. Public discussion of the ethical principles underlying the legitimacy and justice of technology innovations will deepen and refine these normative frameworks.
Considered as proposed here, the topic of “ethics of technology” is part of a broad theory of social and political philosophy more generally. It invokes some of our best reasoning about what constitutes the human good (fulfillment of capabilities and freedoms) and about what constitutes a fair social system (elimination of invidious discrimination in the effects of social institutions on segments of population). Only when we have settled these foundational questions are we able to turn to the more specific issues often discussed under the rubric of the ethics of technology.
Earlier posts have focused on the role of inadequate regulatory oversight as part of the tragedy of the Boeing 737 MAX (link, link). (Also of interest is an earlier discussion of the “quiet power” through which business achieves its goals in legislation and agency rules (link).) Reporting in the New York Times this week by Natalie Kitroeff and David Gelles provides a smoking gun for the claim of regulatory capture: the industry has gained decisive influence over the very agency established to ensure its safe operation (link). The article quotes a former attorney in the FAA office of chief counsel:
“The reauthorization act mandated regulatory capture,” said Doug Anderson, a former attorney in the agency’s office of chief counsel who reviewed the legislation. “It set the F.A.A. up for being totally deferential to the industry.”
Based on exhaustive investigative journalism, Kitroeff and Gelles provide a detailed account of the lobbying strategy and efforts by Boeing and the aircraft manufacturing industry group that led to the incorporation of industry-favored language into the FAA Reauthorization Act of 2018. It is a profoundly discouraging account for anyone who believes that the public good should drive legislation. The new paragraphs introduced into the final legislation stipulate full implementation of the philosophy of regulatory delegation and establish an industry-centered group empowered to oversee the agency’s performance and to make recommendations about FAA employees’ compensation. “Now, the agency, at the outset of the development process, has to hand over responsibility for certifying almost every aspect of new planes.” Under the new legislation the FAA is forbidden from taking back control of the certification process for a new aircraft without a full investigation or inspection justifying such an action.
As the article notes, the 737 MAX was certified under the old rules. The new rules give the FAA even less oversight power and responsibility in the certification of new aircraft and major redesigns of existing aircraft. And the fact that the MCAS system was never fully reviewed by the FAA, based on assurances of its safety from Boeing, further reduces our confidence in the effectiveness of the FAA process. From the article:
The F.A.A. never fully analyzed the automated system known as MCAS, while Boeing played down its risks. Late in the plane’s development, Boeing made the system more aggressive, changes that were not submitted in a safety assessment to the agency.
Boeing, the Aerospace Industries Association, and the General Aviation Manufacturers Association exercised influence on the 2018 legislation through a variety of mechanisms. Legislators and lobbyists alike were guided by a report on regulation authored by Boeing itself. Executives and lobbyists exercised their ability to influence powerful senators and members of Congress through person-to-person interactions. And elected representatives from both parties favored “less regulation” as a way of supporting the economic interests of businesses in their states. For example:
They also helped persuade Senator Maria Cantwell, Democrat of Washington State, where Boeing has its manufacturing hub, to introduce language that requires the F.A.A. to relinquish control of many parts of the certification process.
And, of course, it is important not to forget about the “revolving door” from industry to government to lobbying firm. Ali Bahrami was an FAA official who subsequently became a lobbyist for the aerospace industry; Stephen Dickson is a former executive of Delta Airlines who now serves as Administrator of the FAA; and in 2007 former FAA Administrator Marion Blakey became CEO of the Aerospace Industries Association, the industry’s chief advocacy and lobbying group (link). It is hard to envision neutral, objective judgment in ensuring the safety of the public from such appointments.
Boeing and its allies found a receptive audience in the head of the House transportation committee, Bill Shuster, a Pennsylvania Republican staunchly in favor of deregulation, and his aide working on the legislation, Holly Woodruff Lyons.
Culpepper unpacks the political advantage residing with business elites and managers in terms of acknowledged expertise about the intricacies of corporate organization, an ability to frame the issues for policy makers and journalists, and ready access to rule-writing committees and task forces. These factors give elite business managers positional advantage, from which they can exert a great deal of influence on how an issue is formulated when it comes into the forum of public policy formation.
It seems abundantly clear that the “regulatory delegation” movement and its underlying effort to reduce the regulatory burden on industry have gone too far in the case of aviation; and the same seems true in other industries, such as the nuclear industry. The much harder question is organizational: what form of regulatory oversight would permit a regulatory agency to genuinely enhance the safety of the regulated industry and protect the public from unnecessary hazards? Even if we could take the anti-regulation ideology that has governed much public discourse since the Reagan years out of the picture, the continuing issues of expertise, funding, and industry resistance make effective regulation a huge challenge.
I’ve been interested in the economic history of capitalism since the 1970s, and there are a few titles that stand out in my memory. There were the Marxist and neo-Marxist economic historians (Marx’s Capital, E.P. Thompson, Eric Hobsbawm, Rodney Hilton, Robert Brenner, Charles Sabel); the debate over the nature of the industrial revolution (Deane and Cole, NFR Crafts, RM Hartwell, EL Jones); and volumes of the Cambridge Economic History of Europe. The history of British capitalism poses important questions for social theory: is there such a thing as “capitalism”, or are there many capitalisms? What are the features of the capitalist social order that are most fundamental to its functioning and dynamics of development? Is Marx’s intellectual construction of the “capitalist mode of production” a useful one? And does capitalism have a logic or tendency of development, as Marx believed, or is its history fundamentally contingent and path-dependent? Putting the point in concrete terms, was there a probable path of development from the “so-called primitive accumulation” to the establishment of factory production and urbanization to the extension of capitalist property relations throughout much of the world?
Part of the interest of detailed research in economic history in different places — England, Sweden, Japan, the United States, China — is the light that economic historians have been able to shed on the particulars of modern economic organization and development, and the range of institutions and “life histories” they have identified for these different historically embodied social-economic systems. For this reason I have found it especially interesting to read and learn about the ways in which the early modern Chinese economy developed, and different theories of why China and Europe diverged in this period. Kenneth Pomeranz, Philip Huang, William Skinner, Mark Elvin, Bozhong Li, James Lee, and Joseph Needham all shed light on different aspects of this set of questions, and once again the Cambridge Economic History of China was a deep and valuable resource.
Dockès is interested in both the history of capitalism as an economic system and the history of economic science and political economy during the past four centuries. And he is particularly interested in discovering what we can learn about our current economic challenges from both these stories.
He specifically distances himself from “mainstream” economic theory and couches his own analysis in a less orthodox and more eclectic set of ideas. He defines mainstream economics in terms of five ideas: first, its strong commitment to mathematization and formalization of economic ideas; second, its disciplinary tendency towards hyper-specialization; third, its tendency to take the standpoint of the capitalist and the free market in its analyses; fourth, its propensity to extend these neoliberal biases to the process of selection and hiring of academics; and fifth, an underlying “scientism” and positivism that lead its practitioners to devalue the history of the discipline and the historical conditions through which modern institutions came to be (9-12).
Dockès holds that the history of economic facts and the ideas researchers have had about those facts go hand in hand; economic history and the history of economics need to be studied together. Moreover, Dockès believes that mainstream economics has lost sight of insights from the innovators in the history of economics which still have value: Ricardo, Smith, Keynes, Walras, Sismondi, Hobbes. The exclusive focus of mainstream economics in the past forty years on formal, mathematical representations of a market economy prevents these economists from “seeing” the economic world through the conceptual lenses of gifted predecessors. They are trapped in a paradigm or an “epistemological framework” from which they cannot escape. (These ideas are explored in the introduction to the volume.)
The substantive foundation of the book is Dockès’ idea that capitalism has long-term rhythms punctuated by crises, and that these fluctuations themselves are amenable to historical-causal and institutional analysis.
En un mot, croissance et crise sont inséparables et inhérents au processus de développement capitaliste laissé à lui-même.
[In a word, growth and crisis are inseparable and inherent in the process of capitalist development left to itself.] (13)
The fluctuations of capitalism over the long term are linked in a single system of causation: growth, depression, financial crisis, and renewed growth. Therefore, Dockès believes, it should be possible to discover the systemic causes of the development of various capitalist economies by uncovering the dynamics of crisis. Further, he underlines the serious social and political consequences that have ensued from economic crises in the past, including the rise of the Nazi regime out of the global economic crisis of the 1930s.
Étudier ces rythmes impose une analyse des logiques de fonctionnement du capitalisme.
[Studying these rhythms requires an analysis of the logics of functioning of capitalism.] (12)
Dockès is explicit in saying that economic history does not “repeat” itself, and the crises of capitalism are not replicas of each other over the decades or centuries. Historicity of the time and place is fundamental, and he underlines the path dependency of economic development in some of its aspects as well. But he argues that there are important similarities across various kinds of economic crises, and it is worthwhile discovering these similarities. He takes debt crises as an example: there are great differences among several centuries of experience of debt crisis. But there is something in common as well:
Permanence aussi dans les relations de pouvoir et dans les intérêts des uns (les créanciers partisans de la déflation, des taux élevés) et des autres (les débiteurs inflationnistes), dans les jeux de l’état entre ces deux groupes de pression. On peut tirer deux conséquences des homologies entre le passé et le présent.
[Permanence also in the relations of power and in the interests of some (creditors who favor deflation and high rates) and others (inflationary debtors), and in the games the state plays between these two pressure groups. We can draw two consequences from the homologies between the past and the present.] (20)
And failing to consider carefully and critically the economies and crises of the past is a mistake that may lead contemporary economic experts and advisors into ever-deeper economic crises in the future.
L’oubli est dommageable, celui des catastrophes, celui des enseignements qu’elles ont rendu possible, celui des corpus théoriques du passé. Ouvrir la perspective par l’économie historique peut aider à une meilleure compréhension du présent, voire à préparer l’avenir. (21)
[Forgetting is harmful: forgetting past catastrophes, forgetting the lessons they made possible, forgetting the theoretical corpus of the past. Opening up the perspective through historical economics can help lead to a better understanding of the present, and even to prepare for the future.] (21)
The scope and content of the book are evident in the list of the book’s chapters:
Crises et rythmes économiques
Périodisation, mutations et rythmes longs
Le capitalisme d’Ancien Régime, ses crises
Le “Haut Capitalisme”, ses crises et leur théorisation (1800-1870)
Karl Marx et les crises
Capitalisme “Monopoliste” et grande industrie (1870-1914)
À l’âge de l’acier, les rythmes de l’investissement et de l’innovation
Impulsion monétaire et effets réels
La monnaie hégémonique
“Le chien dans la mangeoire”
La grande crise des années trente
Keynes et la “Théorie Générale”
La “Haute Théorie”, la dynamique, le cycle (1926-1946)
En guise de conclusion d’étape
As the chapter titles make evident, Dockès delivers on his promise of treating both the episodes, trends, and facts of economic history as well as the history of the theories through which economists have sought to understand those facts and their dynamics.
The discipline of experimental economics is now a familiar one. It is a field that attempts to probe and test the behavioral assumptions of the theory of economic rationality, microeconomics, and game theory. How do real human reasoners deliberate and act in classic circumstances of economic decision-making? John Kagel and Alvin Roth provide an excellent overview of the discipline in The Handbook of Experimental Economics, where they identify key areas of research in expected utility theory, game theory, free-riding and public goods theory, bargaining theory, and auction markets.
Behavioral economics is a related field but is generally understood as having a broader subject matter. It is the discipline in which researchers use the findings of psychology, cognitive science, cultural studies, and other areas of behavioral science to address issues of economics, without making the heroic assumptions of strict economic rationality concerning the behavior and choices of the agents. The iconoclastic writings of Kahneman and Tversky are foundational contributions to the field (Choices, Values, and Frames), and Richard Thaler’s work (Nudge: Improving Decisions About Health, Wealth, and Happiness, with Cass Sunstein, and Misbehaving: The Making of Behavioral Economics) exemplifies the approach.
Here is a useful description of behavioral and experimental economics offered by Ana Santos:
Behavioural experiments have produced a substantial amount of evidence that shows that human beings are prone to systematic error even in areas of economic relevance where stakes are high (e.g. Thaler, 1992; Camerer, 1995). Rather than grounding individual choice on the calculus of the costs and benefits of alternative options so as to choose the alternative that provides the highest net benefit, individuals have recourse to a variety of decisional rules and are influenced by various contextual factors that jeopardise the pursuit of individuals’ best interests. The increased understanding of how people actually select and apply rules for dealing with particular forms of decision problems and of the influence of contexts on individual choices is the starting point of choice architecture devoted to the study of choice setups that can curb human idiosyncrasies to good result, as judged by individuals themselves, or by society as a whole (Thaler and Sunstein, 2003, 2008).
Researchers in experimental and behavioral economics make use of a variety of empirical and “experimental” methods to probe the nature of real human decision-making. But the experiments in question are generally of a very specialized kind. The goal is often to determine the characteristics of the decision rule that is used by a group of actual human decision-makers. So the subjects are asked to “play” a game in which the payoffs correspond to one of the simple games studied in game theory — e.g. the prisoners’ dilemma — and their behavior is observed from start to finish. This seems to be more a form of controlled observation than experimentation in the classical sense — isolating an experimental situation and a given variable of interest F, and then running the experiment in the presence and absence of F.
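As a concrete illustration, this kind of controlled observation can be mimicked in a few lines of code. The sketch below is purely hypothetical: the payoff matrix follows the standard prisoners’ dilemma ordering, and each simulated subject’s decision rule is reduced to a single cooperation probability, a stand-in for the varied rules real subjects bring to the lab. Strict game-theoretic rationality predicts a cooperation rate of zero; observed lab behavior typically departs from that prediction.

```python
import random

# One-shot prisoners' dilemma payoffs (hypothetical values, but with the
# standard ordering: temptation > reward > punishment > sucker's payoff).
PAYOFFS = {  # (my move, other's move) -> my payoff
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def subject_move(p_cooperate, rng):
    """A simulated subject who cooperates with a fixed probability --
    a crude stand-in for the mixed decision rules observed in the lab."""
    return "C" if rng.random() < p_cooperate else "D"

def run_session(n_pairs, p_cooperate, seed=0):
    """Pair simulated subjects at random; record moves and payoffs."""
    rng = random.Random(seed)
    moves, earnings = [], []
    for _ in range(n_pairs):
        a = subject_move(p_cooperate, rng)
        b = subject_move(p_cooperate, rng)
        moves.append((a, b))
        earnings.append((PAYOFFS[(a, b)], PAYOFFS[(b, a)]))
    coop_rate = sum(m.count("C") for m in moves) / (2 * n_pairs)
    return coop_rate, earnings

# Strict rationality predicts coop_rate == 0; here the behavioral
# assumption (p_cooperate = 0.5) produces substantial cooperation.
coop_rate, _ = run_session(n_pairs=500, p_cooperate=0.5)
```

The experimental question is then which value of the behavioral parameter (here, the cooperation probability) best reproduces what real subjects do, which is exactly the sense in which these designs are controlled observation rather than classical manipulation of a single variable.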
It is intriguing to ask whether a similar empirical approach might be applied to some of the findings and premises of micro-sociology. Sociologists too make assumptions about motivation, choice, and action. Whether we consider the sociology of contention, the sociology of race, or the sociology of the family, we are unavoidably drawn to making provisional assumptions about what makes the actors in these situations tick. What are their motives? How do they evaluate the facts of a situation? How do they measure and weigh risk in the actions they choose? How do ambient social norms influence their action? Whether explicitly or implicitly, sociologists make assumptions about the answers to questions like these. Could some of the theoretical ideas of James Coleman, Erving Goffman, or Mark Granovetter be subjected to experimental investigation? Even more intriguingly, are there supra-individual hypotheses offered by sociologists that might be explored with experimental methods?
Areas where experimental and empirical investigation might be expected to pay dividends in sociology include the motivations underlying cooperation and competition, Granovetter’s sociology of social embeddedness, corruption, the theories of conditional altruism and conditional fairness, the dynamics of contention, and the micro-social psychology of race and gender.
So is there an existing field of research that attempts to investigate questions like these using experiments and human subjects placed in artificial circumstances of action?
To begin, there are some famous examples of experiments in the behavioral sciences that are relevant to these questions. These include the Milgram experiment, the Stanford Prison experiment, and a variety of altruism experiments. These empirical research designs aim at probing the modes of behavior, norm observance, and decision-making that characterize real human beings in real circumstances.
Second, it is evident that the broad discipline of social psychology is highly relevant to this topic. For example, the study of “motivated reasoning” has come to play an important role within the discipline of social psychology (link).
Motivated reasoning has become a central theoretical concept in academic discourse across the fields of psychology, political science, and mass communication. Further, it has also entered the popular lexicon as a label for the seemingly limitless power of partisanship and prior beliefs to color and distort perceptions of the political and social world. Since its emergence in the psychological literature in the mid- to late-20th century, motivated reasoning theory has been continuously elaborated but also challenged by researchers working across academic fields. In broad terms, motivated reasoning theory suggests that reasoning processes (information selection and evaluation, memory encoding, attitude formation, judgment, and decision-making) are influenced by motivations or goals. Motivations are desired end-states that individuals want to achieve. The number of these goals that have been theorized is numerous, but political scientists have focused principally on two broad categories of motivations: accuracy motivations (the desire to be “right” or “correct”) and directional or defensive motivations (the desire to protect or bolster a predetermined attitude or identity). While much research documents the effects of motivations for attitudes, beliefs, and knowledge, a growing literature highlights individual-level variables and contexts that moderate motivated reasoning.
See Epley and Gilovich (link) for an interesting application of the “motivated reasoning” approach.
Finally, some of the results of behavioral and experimental economics are relevant to sociology and political science as well.
These ideas are largely organized around testing the behavioral assumptions of various sociological theories. Another line of research that can be treated experimentally is the investigation of locally relevant structural arrangements that some sociologists have argued to be causally relevant to certain kinds of social outcomes. Public schools with health clinics have been hypothesized to have better educational outcomes than those without such clinics. Factory workers are sometimes thought to be more readily mobilized in labor organizations than office workers. Small towns in rural settings are sometimes thought to be especially conducive to nationalist-populist political mobilization. And so forth. Each of these hypotheses about the causal role of social structures can be investigated empirically and experimentally (though often the experiments take the form of quasi-experiments or field experiments rather than randomly assigned subjects divided into treatment and control populations).
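To make the logic of such a quasi-experiment concrete, here is a minimal sketch of a difference-in-means comparison with a permutation test. The outcome scores and the clinic/no-clinic grouping are entirely invented for illustration; the test simply asks how often a difference as large as the observed one would arise if group membership were reassigned at random, which is the core inferential move when schools cannot be randomly assigned to treatment.

```python
import random

def permutation_test(treated, control, n_perm=5000, seed=0):
    """Difference in mean outcomes plus a permutation p-value -- the
    basic logic of a quasi-experimental group comparison."""
    rng = random.Random(seed)
    observed = sum(treated) / len(treated) - sum(control) / len(control)
    pooled = treated + control
    n_t = len(treated)
    extreme = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)  # reassign "treatment" labels at random
        diff = (sum(pooled[:n_t]) / n_t
                - sum(pooled[n_t:]) / (len(pooled) - n_t))
        if abs(diff) >= abs(observed):
            extreme += 1
    return observed, extreme / n_perm

# Entirely hypothetical outcome scores for schools with and without
# health clinics; any resemblance to real data is accidental.
with_clinic = [72, 75, 71, 78, 74, 77, 73, 76]
without_clinic = [69, 70, 72, 68, 71, 67, 70, 69]
diff, p_value = permutation_test(with_clinic, without_clinic)
```

A small p-value here would support the structural hypothesis only provisionally, since without random assignment the groups may differ in unobserved ways; that is precisely the limitation of quasi-experiments noted above.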
It seems, then, that the methods and perspective of behavioral and experimental economics are indeed relevant to sociological research. Some of the premises of key sociological theories can be investigated experimentally, and doing so has the promise of further assessing and deepening the content of those sociological theories. Experiments can help to probe the forms of knowledge-formation, norm acquisition, and decision-making that real social actors experience. And with a little ingenuity, it seems possible to use experimental methods to evaluate some core hypotheses about the causal roles played by various kinds of “micro-” social structures.
It is of both intellectual and practical interest to understand how organizations function and how the actors within them choose the actions that they pursue. A common answer to these questions is to refer to the rules and incentives of the organization, and then to attempt to understand the actor’s choices through the lens of rational preference theory. However, it is now increasingly clear that organizations embody distinctive “cultures” that significantly affect the actions of the individuals who operate within their scope. Edgar Schein is a leading expert on the topic of organizational culture. Organizational culture, according to Schein, consists of a set of “basic assumptions about the correct way to perceive, think, feel, and behave, driven by (implicit and explicit) values, norms, and ideals” (Schein, 1990). Here is how he describes the concept in Organizational Culture and Leadership:
Culture is both a dynamic phenomenon that surrounds us at all times, being constantly enacted and created by our interactions with others and shaped by leadership behavior, and a set of structures, routines, rules, and norms that guide and constrain behavior. When one brings culture to the level of the organization and even down to groups within the organization, one can see clearly how culture is created, embedded, evolved, and ultimately manipulated, and, at the same time, how culture constrains, stabilizes, and provides structure and meaning to the group members. These dynamic processes of culture creation and management are the essence of leadership and make one realize that leadership and culture are two sides of the same coin. (3rd edition, p. 1)
According to Schein, there is a cognitive and affective component of action within an organization that has little to do with rational calculation of interests and more to do with how the actors frame their choices. The values and expectations of the organization help to shape the actions of the participants. And one crucial aspect of leaders, according to Schein, is the role they play in helping to shape the culture of the organizations they lead.
It is intriguing that several pressing organizational problems have been found to revolve around the culture of the organization within which behavior takes place. The prevalence of sexual and gender harassment appears to depend a great deal on the culture of respect and civility that an organization has embodied — or has failed to embody. The ways in which accidents occur in large industrial systems seems to depend in part on the culture of safety that has been established within the organization. And the incidence of corrupt and dishonest practices within businesses seems to be influenced by the culture of integrity that the organization has managed to create. In each instance experience seems to demonstrate that “good” culture leads to less socially harmful behavior, while “bad” culture leads to more such behavior.
Consider first the prominence that the idea of safety culture has come to play in the nuclear industry after Three Mile Island and Chernobyl. Here are a few passages from a review document authored by the Advisory Committee on Reactor Safeguards (link).
There also seems to be a general agreement in the nuclear community on the elements of safety culture. Elements commonly included at the organization level are senior management commitment to safety, organizational effectiveness, effective communications, organizational learning, and a working environment that rewards identifying safety issues. Elements commonly identified at the individual level include personal accountability, questioning attitude, and procedural adherence. Financial health of the organization and the impact of regulatory bodies are occasionally identified as external factors potentially affecting safety culture.
The working paper goes on to consider two issues: has research validated the causal relationship between safety culture and safe performance? And should the NRC create regulatory requirements aimed at observing and enhancing the safety culture in a nuclear plant? They note that current safety statistics do not permit measurement of the association between safety culture and safe performance, but that experience in the industry suggests that the answers to both questions are probably affirmative:
On the other hand, even at the current level of industry maturity, we are confronted with events such as the recent reactor vessel head corrosion identified so belatedly at the Davis-Besse Nuclear Power Plant. Problems subsequently identified in other programmatic areas suggest that these may not be isolated events, but the result of a generally degraded plant safety culture. The head degradation was so severe that a major accident could have resulted and was possibly imminent. If, indeed, the true cause of such an event proves to be degradation of the facility’s safety culture, is it acceptable that the reactor oversight program has to wait for an event of such significance to occur before its true root cause, degraded culture, is identified? This event seems to make the case for the need to better understand the issues driving the culture of nuclear power plants and to strive to identify effective performance indicators of resulting latent conditions that would provide leading, rather than lagging, indications of future plant problems. (7-8)
Researchers in the area of sexual harassment have devoted quite a bit of attention to the topic of workplace culture as well. This theme is emphasized in the National Academy study on sexual and gender harassment (link); the authors make the point that gender harassment is chiefly aimed at expressing disrespect towards the target rather than sexual exploitation. This has an important implication for institutional change. An institution that creates a strong core set of values emphasizing civility and respect is less conducive to gender harassment. They summarize this analysis in the statement of findings as well:
Organizational climate is, by far, the greatest predictor of the occurrence of sexual harassment, and ameliorating it can prevent people from sexually harassing others. A person more likely to engage in harassing behaviors is significantly less likely to do so in an environment that does not support harassing behaviors and/or has strong, clear, transparent consequences for these behaviors. (50)
Ben Walsh is representative of this approach. Here is the abstract of a research article by Walsh, Lee, Jensen, McGonagle, and Samnani on workplace incivility (link):
Scholars have called for research on the antecedents of mistreatment in organizations such as workplace incivility, as well as the theoretical mechanisms that explain their linkage. To address this call, the present study draws upon social information processing and social cognitive theories to investigate the relationship between positive leader behaviors—those associated with charismatic leadership and ethical leadership—and workers’ experiences of workplace incivility through their perceptions of norms for respect. Relationships were separately examined in two field studies using multi-source data (employees and coworkers in study 1, employees and supervisors in study 2). Results suggest that charismatic leadership (study 1) and ethical leadership (study 2) are negatively related to employee experiences of workplace incivility through employee perceptions of norms for respect. Norms for respect appear to operate as a mediating mechanism through which positive forms of leadership may negatively relate to workplace incivility. The paper concludes with a discussion of implications for organizations regarding leader behaviors that foster norms for respect and curb uncivil behaviors at work.
David Hess, an expert on corporate corruption, takes a similar approach to the problem of corruption and bribery by officials of multinational corporations (link). Hess argues that bribery often has to do with organizational culture and individual behavior, and that effective steps to reduce the incidence of bribery must proceed on the basis of an adequate analysis of both culture and behavior. And he links this issue to fundamental problems in the area of corporate social responsibility.
Corporations must combat corruption. By allowing their employees to pay bribes they are contributing to a system that prevents the realization of basic human rights in many countries. Ensuring that employees do not pay bribes is not accomplished by simply adopting a compliance and ethics program, however. This essay provided a brief overview of why otherwise good employees pay bribes in the wrong organizational environment, and what corporations must focus on to prevent those situations from arising. In short, preventing bribe payments must be treated as an ethical issue, not just a legal compliance issue, and the corporation must actively manage its corporate culture to ensure it supports the ethical behavior of employees.
As this passage emphasizes, Hess believes that controlling corrupt practices requires changing incentives within the corporation while equally changing the ethical culture of the corporation; he believes that the ethical culture of a company can have effects on the degree to which employees engage in bribery and other corrupt practices.
What these examples have in common (and other examples are available as well) is that intangible features of the work environment are likely to influence the behavior of the actors in that environment, and thereby affect the favorable and unfavorable outcomes of the organization’s functioning as well. Moreover, if we take the lead offered by Schein and work on the assumption that leaders can influence culture through their advocacy for the values that the organization embodies, then leadership has a core responsibility to facilitate a work culture that embodies these favorable outcomes. Work culture can be cultivated to encourage safety and to discourage bad outcomes like sexual harassment and corruption.
We think of artifacts as being “functional” in a specific sense: their characteristics are well designed and adjusted for their “intended” use. Sometimes this is because of the explicit design process through which they were created, and sometimes it is the result of a long period of small adjustments by artisan-producers and users who recognize a potential improvement in shape, material, or internal workings that would lead to superior performance. Jon Elster described these processes in his groundbreaking 1983 book, Explaining Technical Change: A Case Study in the Philosophy of Science.
Here is how I described the gradual process of refinement of technical practice with respect to artisanal winegrowing in a 2009 post (link):
First, consider the social reality of a practice like wine-making. Pre-modern artisanal wine makers possess an ensemble of techniques through which they grow grapes and transform them into wine. These ensembles are complex and developed; different wine “traditions” handle the tasks of cultivation and fermentation differently, and the results are different as well (fine and ordinary burgundies, the sweet gewurztraminers of Alsace versus Germany). The novice artisan doesn’t reinvent the art of winemaking; instead, he/she learns the techniques and traditions of the elders. But at the same time, the artisan wine maker may also introduce innovations into his/her practice — a wrinkle in the cultivation techniques, a different timing in the fermentation process, the introduction of a novel ingredient into the mix.
Over time the art of grape cultivation and wine fermentation improves.
But in a way this expectation of “artifact functionality” is too simple and direct. In the development of a technology or technical practice there are multiple actors who are in a position to influence the development of the outcome, and they often have divergent interests. These differences of interest may lead to substantial differences in performance for the technology or technique. Technologies reflect social interests, and this is as evident in the history of technology as it is in the current world of high tech. In the winemaking case, for example, landlords may have interests that favor dense planting, whereas the wine maker may favor more sparse planting because of the superior taste this pattern creates in the grape. More generally, the owner’s interest in sales and profitability exerts a pressure on the characteristics of the product that runs contrary to the interest of the artisan-producer who gives primacy to the quality of the product, and both may have interests that are somewhat inconsistent with the broader social good.
Imagine the situation that would result if a grain harvesting machine were continually redesigned by the profit-seeking landowner and the agricultural workers. Innovations that are favorable to enhancing profits may be harmful to the safety and welfare of agricultural workers, and vice versa. So we might imagine a see-saw of technological development, as the landowner and the worker alternately gain more influence over the development of the technology.
As an undergraduate at the University of Illinois in the late 1960s I heard the radical political scientist Michael Parenti tell just such a story about his father’s struggle to maintain artisanal quality in the Italian bread he baked in New York City in the 1950s. Here is an online version of the story (link). Michael Parenti’s story begins like this:
Years ago, my father drove a delivery truck for the Italian bakery owned by his uncle Torino. When Zi Torino returned to Italy in 1956, my father took over the entire business. The bread he made was the same bread that had been made in Gravina, Italy, for generations. After a whole day standing, it was fresh as ever, the crust having grown hard and crisp while the inside remained soft, solid, and moist. People used to say that our bread was a meal in itself….
Pressure from low-cost commercial bread companies forced his father into more and more cost-saving adulteration of the bread. And the story ends badly …
But no matter what he did, things became more difficult. Some of our old family customers complained about the change in the quality of the bread and began to drop their accounts. And a couple of the big stores decided it was more profitable to carry the commercial brands.
Not long after, my father disbanded the bakery and went to work driving a cab for one of the big taxi fleets in New York City. In all the years that followed, he never mentioned the bread business again.
Parenti’s message to activist students in the 1960s was stark: this is the logic of capitalism at work.
Of course market pressures do not always lead to the eventual ruin of the products we buy; there is also an economic incentive created by consumers who favor higher performance and more features that leads businesses to improve their products. So the dynamic that ruined Michael Parenti’s father’s bread is only one direction that market competition can take. The crucial point is this: there is nothing in the development of technology and technique that guarantees outcomes that are more and more favorable for the public.
An increasingly pressing consequence of climate change is the rising threat of flood in coastal and riverine communities. And yet a combination of Federal and local policies has created land-use incentives that have led to increasing development in flood plains since the major floods of the 1990s and 2000s (Mississippi River 1993, Hurricane Katrina 2005, Hurricane Sandy 2012, …), with the result that economic losses from flooding have risen sharply. Many of those costs are borne by taxpayers through Federal disaster relief and subsidies to the Federal flood insurance program.
Christine Klein and Sandra Zellmer provide a highly detailed and useful review of these issues in their brilliant SMU Law Review article, “Mississippi River Stories: Lessons from a Century of Unnatural Disasters” (link). These arguments are developed more fully in their 2014 book Mississippi River Tragedies: A Century of Unnatural Disaster. Klein and Zellmer believe that current flood insurance policies and disaster assistance policies at the federal level continue to support perverse incentives for developers and homeowners and need to be changed. Projects and development within 100-year flood plains need to be subject to mandatory flood insurance coverage; flood insurance policies should be rated by degree of risk; and government units should have the legal ability to prohibit development in flood plains. Here are their central recommendations for future Federal policy reform:
Substantive requirements for watershed planning and management would effectuate the Progressive Era objective underlying the original Flood Control Act of 1928: treating the river and its floodplain as an integrated unit from source to mouth, “systematically and consistently,” with coordination of navigation, flood control, irrigation, hydropower, and ecosystem services. To accomplish this objective, the proposed organic act must embrace five basic principles:
(1) Adopt sustainable, ecologically resilient standards and objectives;
(2) Employ comprehensive environmental analysis of individual and cumulative effects of floodplain construction (including wetlands fill);
(3) Enhance federal leadership and competency by providing the Corps with primary responsibility for flood control measures, cabined by clear standards, continuing monitoring responsibilities, and oversight through probing judicial review, and supported by a secure, non-partisan funding source;
(4) Stop wetlands losses and restore damaged floodplains by re-establishing natural areas that are essential for floodwater retention; and
(5) Recognize that land and water policies are inextricably linked and plan for both open space and appropriate land use in the floodplain. (1535-36)
Here is Klein and Zellmer’s description of the US government’s response to flood catastrophes in the 1920s:
Flood control was the most pressing issue before the Seventieth Congress, which sat from 1927 to 1929. Congressional members quickly recognized that the problems were two-fold. First, Congressman Edward Denison of Illinois criticized the absence of federal leadership: “the Federal Government has allowed the people. . . to follow their own course and build their own levees as they choose and where they choose until the action of the people of one State has thrown the waters back upon the people of another State, and vice versa.” Moreover, as Congressman Robert Crosser of Ohio noted, the federal government’s “levees only” policy–a “monumental blunder”–was not the right sort of federal guidance. (1482-83)
In passing the Flood Control Act of 1928, congressional members were influenced by Progressive Era objectives. Comprehensive planning and multiple-use management were hallmarks of the time. The goal was nothing less than a unified, planned society. In the early 1900s, many federal agencies, including the Bureau of Reclamation and the U.S. Geological Survey, had agreed that each river must be treated as an integrated unit from source to mouth. Rivers were to be developed “systematically and consistently,” with coordination of navigation, flood control, irrigation, and hydro-power. But the Corps of Engineers refused to join the movement toward watershed planning, instead preferring to conduct river management in a piecemeal fashion for the benefit of myriad local interests. (1484)
But the Federal flood policies adopted in the 1920s created perverse incentives that persist to the present:
Only a few decades after the 1927 flood, the Mississippi River rose up out of its banks once again, teaching a new lesson: federal structural responses plus disaster relief payouts had incentivized ever more daring incursions into the floodplain. The floodwater evaded federal efforts to control it with engineered structures, and those same structures prevented the river from finding its natural retention areas–wetlands, oxbows, and meanders–that had previously provided safe storage for floodwater. The resulting damage to affected areas was increased by orders of magnitude. The federal response to this lesson was the adoption of a nationwide flood insurance program intended to discourage unwise floodplain development and to limit the need for disaster relief. Both lessons are detailed in this section. (1486)
Paradoxically, navigational structures and floodplain constriction by levees, highway embankments, and development projects exacerbated the flood damage all along the rivers in 1951 and 1952. Flood-control engineering works not only enhanced the danger of floods, but actually contributed to higher flood losses. Flood losses were, in turn, used to justify more extensive control structures, creating a vicious cycle of ever-increasing flood losses and control structures. The mid-century floods demonstrated the need for additional risk-management measures. (1489)
Only five years after the program was enacted, Gilbert White’s admonition was validated. Congress found that flood losses were continuing to increase due to the accelerating development of floodplains. Ironically, both federal flood control infrastructure and the availability of federal flood insurance were at fault. To address the problem, Congress passed the Flood Disaster Protection Act of 1973, which made federal assistance for construction in flood hazard areas, including loans from federally insured banks, contingent upon the purchase of flood insurance, which is only made available to participating communities. (1491)
But development and building in the floodplains of the rivers of the United States has continued and even accelerated since the 1990s.

Government policy comes into this set of disasters at several levels. First, climate policy: the evidence has been clear for at least two decades that the human production of greenhouse gases is creating rapid climate change, including rising temperatures in the atmosphere and oceans, severe storms, and rising ocean levels. A fundamental responsibility of government is to regulate and direct activities that create public harms, and the US government has failed abjectly to change the policy environment in ways that substantially reduce the production of CO2 and other greenhouse gases. Second, as Klein and Zellmer document, the policies adopted by the US government in the early part of the twentieth century to prevent major flood disasters were ill conceived. The efforts by the US government and regional governments to control flooding through levees, reservoirs, dams, and other infrastructure interventions have failed, and have probably made the problems of flooding along major US rivers worse. Third, human activities in flood plains (residences, businesses, hotels and resorts) have worsened the severity of the consequences of floods, elevating the cost in lives and property through reckless development in flood zones. Governments have failed to discourage or prevent these forms of development, and the consequences have proven to be extreme (and worsening).
It is evident that storms, floods, and sea-level rise will be vastly more destructive in the decades to come. Here is a projection of the effects on the Florida coastline after a sustained period of sea-level rise resulting from a two-degree Celsius rise in global temperature (link):
We seem to have passed the point where it will be possible to avoid catastrophic warming. Our governments need to take strong actions now to ameliorate the severity of global warming, and to prepare us for the damage when it inevitably comes.
Hegel on the Master-Slave relation

195. However, the feeling of absolute power as such, and in the particularities of service, is only dissolution in itself, and, although the fear of the lord is the beginning of wisdom, in that fear consciousness is what it is that is for it itself, but it is not being-for-itself. However, through work, this servile consciousness comes round to itself. In the moment corresponding to desire in the master’s consciousness, the aspect of the non-essential relation to the thing seemed to fall to the lot of the servant, as the thing there retained its self-sufficiency. Desire has reserved to itself the pure negating of the object, and, as a result, it has reserved to itself that unmixed feeling for its own self. However, for that reason, this satisfaction is itself only a vanishing, for it lacks the objective aspect, or stable existence. In contrast, work is desire held in check, it is vanishing staved off, or: work cultivates and educates. The negative relation to the object becomes the form of the object; it becomes something that endures because it is just for the laborer himself that the object has self-sufficiency. This negative mediating middle, this formative doing, is at the same time singularity, or the pure being-for-itself of consciousness, which in the work external to it now enters into the element of lasting. Thus, by those means, the working consciousness comes to an intuition of self-sufficient being as its own self. 196. However, what the formative activity means is not only that the serving consciousness as pure being-for-itself becomes, to itself, an existing being within that formative activity. It also has the negative meaning of the first moment, that of fear. For in forming the thing, his own negativity, or his being-for-itself, only as a result becomes an object to himself in that he sublates the opposed existing form.
However, this objective negative is precisely the alien essence before which he trembled, but now he destroys this alien negative and posits himself as such a negative within the element of continuance. He thereby becomes for himself an existing-being-for-itself. Being-for-itself in the master is to the servant an other, or it is only for him. In fear, being-for-itself is in its own self. In culturally formative activity, being-for-itself becomes for him his own being-for-itself, and he attains the consciousness that he himself is in and for himself. As a result, the form, by being posited as external, becomes to him not something other than himself, for his pure being-for-itself is that very form, which to him therein becomes the truth. Therefore, through this retrieval, he comes to acquire through himself a mind of his own, and he does this precisely in the work in which there had seemed to be only some outsider’s mind. – For this reflection, the two moments of fear and service, as well as the moments of culturally formative activity are both necessary, and both are necessary in a universal way. Without the discipline of service and obedience, fear is mired in formality and does not diffuse itself over the conscious actuality of existence. Without culturally formative activity, fear remains inward and mute, and consciousness will not become for it [consciousness] itself. If consciousness engages in formative activity without that first, absolute fear, then it has a mind of its own which is only vanity, for its form, or its negativity, is not negativity in itself, and his formative activity thus cannot to himself give him the consciousness of himself as consciousness of the essence. If he has not been tried and tested by absolute fear but only by a few anxieties, then the negative essence will have remained an externality to himself, and his substance will not have been infected all the way through by it.
While not each and every one of the ways in which his natural consciousness was brought to fulfillment was shaken to the core, he is still attached in himself to determinate being. His having a mind of his own is then only stubbornness, a freedom that remains bogged down within the bounds of servility. To the servile consciousness, pure form can as little become the essence as can the pure form – when it is taken as extending itself beyond the singular individual – be a universal culturally formative activity, an absolute concept. Rather, the form is a skill which, while it has dominance over some things, has dominance over neither the universal power nor the entire objective essence. (Hegel, Phenomenology, 115-116)
Kojève’s interpretation of Hegel
Here are the primary passages that represent the heart of Kojève’s interpretation of this section.
Work, on the other hand, is repressed Desire, an arrested passing phase; or, in other words, it forms-and-educates. Work transforms the World and civilizes, educates, Man. The man who wants to work — or who must work — must repress the instinct that drives him “to consume” “immediately” the “raw” object. And the Slave can work for the Master — that is, for another than himself — only by repressing his own desires. Hence he transcends himself by working — or perhaps better, he educates himself, he “cultivates” and “sublimates” his instincts by repressing them. On the other hand, he does not destroy the thing as it is given. He postpones the destruction of the thing by first transforming it through work; he prepares it for consumption — that is to say, he “forms” it. In his work, he transforms things and transforms himself at the same time: he forms things and the World by transforming himself, by educating himself; and he educates himself, he forms himself, by transforming things and the World. Thus, the negative-or-negating relation to the object becomes a form of this object and gains permanence, precisely because, for the worker, the object has autonomy…. The product of work is the worker’s production. It is the realization of his project, of his idea; hence, it is he that is realized in and by this product, and consequently he contemplates himself when he contemplates it…. Therefore, it is by work, and only by work, that man realizes himself objectively as man. Only after producing an artificial object is man himself really and objectively more than and different from a natural being; and only in this real and objective product does he become truly conscious of his subjective human reality. (Kojève, 24-25)
The Master can never detach himself from the World in which he lives, and if this World perishes, he perishes with it. Only the Slave can transcend the given world (which is subjugated by the Master) and not perish. Only the Slave can transform the World that forms him and fixes him in slavery and create a World that he has formed in which he will be free. And the Slave achieves this only through forced and terrified work carried out in the Master’s service. To be sure, this work by itself does not free him. But in transforming the World by this work, the Slave transforms himself too, and thus creates the new objective conditions that permit him to take up once more the liberating Fight for recognition that he refused in the beginning for fear of death. And thus in the long run, all slavish work realizes not the Master’s will, but the will — at first unconscious — of the Slave, who — finally — succeeds where the Master — necessarily — fails. Therefore, it is indeed originally dependent, serving, and slavish Consciousness that in the end realizes and reveals the ideal of autonomous Self-Consciousness and is thus its “truth.” (Kojève, 29-30)
However, to understand the edifice of universal history and the process of its construction, one must know the materials that were used to construct it. These materials are men. To know what History is, one must therefore know what Man who realizes it is. Most certainly, man is something quite different from a brick. In the first place, if we want to compare universal history to the construction of an edifice, we must point out that men are not only the bricks that are used in the construction; they are also the masons who build it and the architects who conceive the plan for it, a plan, moreover, which is progressively elaborated during the construction itself. Furthermore, even as “brick,” man is essentially different from a material brick: even the human brick changes during the construction, just as the human mason and the human architect do. Nevertheless, there is something in Man, in every man, that makes him suited to participate–passively or actively–in the realization of universal history. At the beginning of this History, which ends finally in absolute Knowledge, there are, so to speak, the necessary and sufficient conditions. And Hegel studies these conditions in the first four chapters of the Phenomenology.
Finally, Man is not only the material, the builder, and the architect of the historical edifice. He is also the one for whom this edifice is constructed: he lives in it, he sees and understands it, he describes and criticizes it. There is a whole category of men who do not actively participate in the historical construction and who are content to live in the constructed edifice and to talk about it. These men, who live somehow “above the battle,” who are content to talk about things that they do not create by their Action, are Intellectuals who produce intellectuals’ ideologies, which they take for philosophy (and pass off as such). Hegel describes and criticizes these ideologies in Chapter V. (32-33)
The central ideas here are these:
Work transforms and educates the worker.
Work requires the delay of consumption.
Work transforms the world and the environment.
The self-creation of the human being through work is essential to his or her reality as a human being.
By merely directing and commanding work, the master fails to engage in self-creation.
The master cannot be truly free.
Human beings create history through their creative labor.
Human beings create and transform themselves through labor.
History is human-centered. History is “subject” as well as “object”.
Those who merely think and reflect upon history are sterile and contribute nothing to the course of history.
These comments add up to a substantive theory of the human being in the world, one that emphasizes creativity, transformation, and self-creation. It stands in stark contrast to the liberal-utilitarian view of human nature, held by Adam Smith and Jeremy Bentham, as consumer and rational optimizer over a given set of choices; instead, on Kojève’s (and Hegel’s) view, the human being becomes fully human through creative engagement with the natural world, through labor.
It is worth noting that although Kojève was a philosopher, he was not primarily an academic. Instead, he was a high-placed civil servant and statesman in the French state, a man whose thinking and actions were intended to create a new path for France. He is credited as one of the early theorists of the European Union.
Kojève’s account of labor and freedom is, of course, influenced by his own immersion in the writings of the early Marx; so the philosophy of labor, freedom, and self-creation articulated here is neither pure Hegel nor pure Marx. We might say that it is pure Kojève.
Jeff Love’s biography of Kojève, The Black Circle: A Life of Alexandre Kojève, is also of interest for its emphasis on the Russian roots of Kojève’s thought. Love confirms the importance of the richer theory of human freedom and self-realization offered in Kojève’s account, and notes a parallel with themes in nineteenth-century Russian literature.
Kojève’s critique of self-interest merits renewal in a day when consumer capitalism and the reign of self-interest are hardly in question, either implicitly or explicitly, and where the key precincts of critique have been hobbled by their own reliance on elements of the modern conception of the human being as the free historical individual that have not been sufficiently clarified. Kojève’s thought is thus anodyne: far from being “philosophically” mad or the learned jocularity of a jaded, extravagant genius, it expresses a probing inquiry into the nature of human being that returns us to questions that reach down to the roots of the free historical individual. Moreover, it extends a critique of self-interest deeply rooted in Russian thought, and Kojève does so, no doubt with trenchant irony, in the very capital of the modern bourgeoisie decried violently by Dostoevsky in his Winter Notes on Summer Impressions.
(Here is an interesting reflection on Kojève as philosopher by Stanley Rosen; link.)