What the boss wants to hear …

According to David Halberstam in his outstanding history of the war in Vietnam, The Best and the Brightest, a prime cause of disastrous decision-making by Presidents Kennedy and Johnson was an institutional imperative in the Defense Department to come up with a set of facts that conformed to what the President wanted to hear. Robert McNamara and McGeorge Bundy were among the highest-level miscreants in Halberstam’s account; they were determined to craft an assessment of the situation on the ground in Vietnam that conformed best with their strategic advice to the President.

Ironically, a very similar dynamic led to one of modern China’s greatest disasters, the Great Leap Forward famine of 1959. The Great Helmsman was certain that collective agriculture would be vastly more productive than private agriculture; and following the collectivization of agriculture, party officials in many provinces obliged this assumption by reporting inflated grain statistics throughout 1958 and 1959. The result was a famine that led to at least twenty million excess deaths during a two-year period as the central state shifted resources away from agriculture (Frank Dikötter, Mao’s Great Famine: The History of China’s Most Devastating Catastrophe, 1958-62).

More mundane examples are available as well. When information about possible sexual harassment in a given department is suppressed because “it won’t look good for the organization” and “the boss will be unhappy”, the organization is on a collision course with serious problems. When concerns about product safety or reliability are suppressed within the organization for similar reasons, the results can be equally damaging, to consumers and to the corporation itself. General Motors, Volkswagen, and Michigan State University all seem to have suffered from these deficiencies of organizational behavior. This is a serious cause of organizational mistakes and failures. It is impossible to make wise decisions — individual or collective — without accurate and truthful information from the field. And yet the knowledge of higher-level executives depends upon the truthful and full reporting of subordinates, who sometimes have career incentives that work against honesty.

So how can this unhappy situation be avoided? Part of the answer has to do with the behavior of the leaders themselves. It is important for leaders to explicitly and implicitly invite the truth — whether it is good news or bad news. Subordinates must be encouraged to be forthcoming and truthful; and bearers of bad news must not be subject to retaliation. Boards of directors, both private and public, need to make clear their own expectations on this score as well: that they expect leading executives to invite and welcome truthful reporting, and that they expect individuals throughout the organization to provide truthful reporting. A culture of honesty and transparency is a powerful antidote to the disease of fabrications to please the boss.

Anonymous hotlines and formal protection of whistle-blowers are other institutional arrangements that lead to greater honesty and transparency within an organization. These avenues have the advantage of being largely outside the control of the upper executives, and therefore can serve as a somewhat independent check on dishonest reporting.

A reliable practice of accountability is also a deterrent to dishonest or partial reporting within an organization. The truth eventually comes out — whether about sexual harassment, about hidden defects in a product, or about workplace safety failures. When boards of directors and organizational policies make it clear that there will be negative consequences for dishonest behavior, this gives an ongoing incentive of prudence for individuals to honor their duties of honesty within the organization.

This topic falls within the broader question of how individual behavior throughout an organization has the potential for giving rise to important failures that harm the public and harm the organization itself. 


Regulatory failure

When we think of the issues of health and safety that exist in a modern complex economy, it is impossible to imagine that these social goods will be produced in sufficient quantity and quality by market forces alone. Safety and health hazards are typically regarded as “externalities” by private companies — if they can be “dumped” on the public without cost, this is good for the profitability of the company. And state regulation is the appropriate remedy for this tendency of a market-based economy to chronically produce hazards and harms, whether in the form of environmental pollution, unsafe foods and drugs, or unsafe industrial processes. David Moss and John Cisternino’s New Perspectives on Regulation provides some genuinely important perspectives on the role and effectiveness of government regulation in an epoch which has been shaped by virulent efforts to reduce or eliminate regulations on private activity. This volume is a report from the Tobin Project.

It is poignant to read the optimism that the editors and contributors have — in 2009 — about the resurgence of support for government regulation. The financial crisis of 2008 had stimulated a vigorous round of regulation of financial institutions, and most of the contributors took this as a harbinger of a fresh public support for regulation more generally. Of course events have shown this confidence to be sadly mistaken; the dismantling of Federal regulatory regimes by the Trump administration threatens to take the country back to the period described by Upton Sinclair in the early part of the prior century. But what this demonstrates is the great importance of the Tobin Project. We need to build a public understanding and consensus around the unavoidable necessity of effective and pervasive regulatory regimes in environment, health, product safety, and industrial safety.

Here is how Mitchell Weiss, Executive Director of the Tobin Project, describes the project culminating in this volume:

To this end, in the fall of 2008 the Tobin Project approached leading scholars in the social sciences with an unusual request: we asked them to think about the topic of economic regulation and share key insights from their fields in a manner that would be accessible to both policymakers and the public. Because we were concerned that a conventional literature survey might obscure as much as it revealed, we asked instead that the writers provide a broad sketch of the most promising research in their fields pertaining to regulation; that they identify guiding principles for policymakers wherever possible; that they animate these principles with concrete policy proposals; and, in general, that they keep academic language and footnotes to a minimum. (5)

The lead essay is provided by Joseph Stiglitz, who looks more closely than previous decades of economists had done at the real consequences of market failure. Stiglitz puts the point about market failure very crisply:

Only under certain ideal circumstances may individuals, acting on their own, obtain “pareto efficient” outcomes, that is, situations in which no one can be made better off without making another worse off. These individuals involved must be rational and well informed, and must operate in competitive marketplaces that encompass a full range of insurance and credit markets. In the absence of these ideal circumstances, there exist government interventions that can potentially increase societal efficiency and/or equity. (11)
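Stiglitz’s notion of Pareto efficiency can be stated formally; the notation below is the standard textbook formulation, not Stiglitz’s own:

```latex
% An allocation x is Pareto efficient iff no feasible alternative x'
% raises someone's utility without lowering anyone else's:
x \text{ is Pareto efficient} \iff
  \nexists\, x' \in X :\;
  u_i(x') \ge u_i(x) \;\; \forall i
  \;\text{ and }\;
  u_j(x') > u_j(x) \;\text{ for some } j
```

The “ideal circumstances” Stiglitz mentions (full information, competition, complete insurance and credit markets) are exactly the conditions under which decentralized choice reaches such an allocation without intervention.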

And regulation is unpopular — with the businesses, landowners, and other powerful agents whose actions are constrained.

By its nature, a regulation restricts an individual or firm from doing what it otherwise would have done. Those whose behavior is so restricted may complain about, say, their loss of profits and potential adverse effects on innovation. But the purpose of government intervention is to address potential consequences that go beyond the parties directly involved, in situations in which private profit is not a good measure of social impact. Appropriate regulation may even advance welfare-enhancing innovations. (13)

Stiglitz pays attention to the pervasive problem of “regulatory capture”:

The current system has made regulatory capture too easy. The voices of those who have benefited from lax regulation are strong; the perspectives of the investment community have been well represented. Among those whose perspectives need to be better represented are the laborers whose jobs would be lost by macro-mismanagement, and the pension holders whose pension funds would be eviscerated by excessive risk taking.

One of the arguments for a financial products safety commission, which would assess the efficacy and risks of new products and ascertain appropriate usage, is that it would have a clear mandate, and be staffed by people whose only concern would be protecting the safety and efficacy of the products being sold. It would be focused on the interests of the ordinary consumer and investors, not the interests of the financial institutions selling the products. (18)

It is very interesting to read Stiglitz’s essay with attention to the economic focus he offers. His examples all come from the financial industry — the risk at hand in 2008-2009. But the arguments apply equally profoundly to manufacturing, the pharmaceutical and food industries, energy industries, farming and ranching, and the for-profit education sector. At the same time the institutional details are different, and an essay on this subject with a focus on nuclear or chemical plants would probably identify a different set of institutional barriers to effective regulation.

Also particularly interesting is the contribution by Michael Barr, Eldar Shafir, and Sendhil Mullainathan on how behavioral perspectives on “rational action” can lead to more effective regulatory regimes. This essay pays close attention to the findings of experimental economics and behavioral economics, and the deviations from “pure economic rationality” that are pervasive in ordinary economic decision making. These features of decision-making are likely to be relevant to the effectiveness of a regulatory regime as well. Further, it suggests important areas of consumer behavior that are particularly subject to exploitative practices by financial companies — creating a new need for regulation of these kinds of practices. Here is how they summarize their approach:

We propose a different approach to regulation. Whereas the classical perspective assumes that people generally know what is important and knowable, plan with insight and patience, and carry out their plans with wisdom and self-control, the central gist of the behavioral perspective is that people often fail to know and understand things that matter; that they misperceive, misallocate, and fail to carry out their intended plans; and that the context in which people function has great impact on their behavior, and, consequently, merits careful attention and constructive work. In our framework, successful regulation requires integrating this richer view of human behavior with our understanding of markets. Firms will operate on the contour defined by this psychology and will respond strategically to regulations. As we describe above, because firms have a great deal of latitude in issue framing, product design, and so on, they have the capacity to affect behavior and circumvent or pervert regulatory constraints. Ironically, firms’ capacity to do so is enhanced by their interaction with “behavioral” consumers (as opposed to the hypothetically rational actors of neoclassical economic theory), since so many of the things a regulator would find very hard to control (for example, frames, design, complexity, etc.) can greatly influence consumers’ behavior. The challenge of behaviorally informed regulation, therefore, is to be well designed and insightful both about human behavior and about the behaviors that firms are likely to exhibit in response to both consumer behavior and regulation. (55)

The contributions to this volume are very suggestive with regard to the issues of product safety, manufacturing safety, food and drug safety, and the like which constitute the larger core of the need for regulatory regimes. And the challenges faced in the areas of financial regulation discussed here are likely to be found to be illuminating in other sectors as well.

 

Gaining compliance

Organizations always involve numerous staff members whose behavior has the potential for creating significant risk for individuals and the organization but who are only loosely supervised. This situation unavoidably raises principal-agent problems. Let’s assume that the great majority of staff members are motivated by good intentions and ethical standards. That still leaves a small number of individuals whose behavior is neither ethical nor well intentioned. What arrangements can an organization put in place to prevent bad behavior and protect individuals and the integrity of the organization?

For certain kinds of bad behavior there are well understood institutional arrangements that work well to detect and deter the wrong actions. This is especially true for business transactions, purchasing, control of cash, expense reporting and reimbursement, and other financial processes within the organization. The audit and accounting functions within almost every sophisticated organization permit a reasonably high level of confidence in the likelihood of detection of fraud, theft, and misreporting. This doesn’t mean that corrupt financial behavior does not occur; but audits make it much more difficult to succeed in persistent dishonest behavior. So an organization with an effective audit function is likely to have a reasonably high level of compliance in the areas where standard audits can be effectively conducted.

A second kind of compliance effort has to do with the culture and practice of observer reporting of misbehavior. Compliance hotlines allow individuals who have observed (or suspected) bad behavior to report that behavior to responsible agents who are obligated to investigate these allegations. Policies that require reporting of certain kinds of bad behavior to responsible officers of the organization — sexual harassment, racial discrimination, or fraudulent actions, for example — should have the effect of revealing some kinds of misbehavior, and deterring others from engaging in bad behavior. So a culture and expectation of reporting is helpful in controlling bad behavior.

A third approach that some organizations take to compliance is to place a great deal of emphasis on the moral culture of the organization — shared values, professional duty, and role responsibilities. Leaders can support and facilitate a culture of voluntary adherence to the values and policies of the organization, so that virtually all members of the organization fall in the “well-intentioned” category. The thrust of this approach is to make large efforts at eliciting voluntary good behavior. Business professor David Hess has done a substantial amount of research on these final two topics (link, link).

Each of these organizational mechanisms has some efficacy. But unfortunately they do not suffice to create an environment where we can be highly confident that serious forms of misconduct do not occur. In particular, reporting and culture are only partially efficacious when it comes to private and covert behavior like sexual assault, bullying, and discriminatory speech and behavior in the workplace. This leads to an important question: are there more intrusive mechanisms of supervision and observation that would permit organizations to discover patterns of misconduct even if they remain unreported by observers and victims? Are there better ways for an organization to ensure that no one is subject to the harmful actions of a predator or harasser?

A more active strategy for an organization committed to eliminating sexual assault is to attempt to predict the environments where inappropriate interpersonal behavior is possible and to redesign the setting so the behavior is substantially less likely. For example, a hospital may require that any physical examinations of minors must be conducted in the presence of a chaperone or other health professional. A school of music or art may require that after-hours private lessons are conducted in semi-public locations. These rules would deprive a potential predator of the seclusion needed for the bad behavior. And the practitioner who is observed violating the rule would then be suspect and subject to further investigation and disciplinary action.

Here is perhaps a farfetched idea: a “behavior audit” that is periodically performed in settings where inappropriate covert behavior is possible. Here we might imagine a process in which a random set of people are periodically selected for interview who might have been in a position to have been subject to inappropriate behavior. These individuals would then be interviewed with an eye to helping to surface possible negative or harmful experiences that they have had. This process might be carried out for groups of patients, students, athletes, performers, or auditioners in the entertainment industry. And the goal would be to uncover traces of the kinds of behavior involving sexual harassment and assault that are at the heart of recent revelations in a myriad of industries and organizations. The results of such an audit would occasionally reveal a pattern of previously unknown behavior requiring additional investigation, while the more frequent results would be negative. This process would lead to a higher level of confidence that the organization has reasonably good knowledge of the frequency and scope of bad behavior and a better system for putting in place a plan of remediation.
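The sampling step of such a “behavior audit” is straightforward to operationalize. Here is a minimal sketch in Python; the function and parameter names are illustrative, not drawn from any real compliance system, and the identifiers stand in for whatever records the organization actually keeps:

```python
import random

def select_audit_sample(population, sample_size, seed=None):
    """Randomly select individuals for a periodic behavior-audit interview.

    population: list of identifiers (e.g., patient or student IDs)
    sample_size: number of interviewees to draw
    seed: optional seed so the draw can be reproduced for record-keeping
    """
    if sample_size > len(population):
        raise ValueError("sample size exceeds population")
    rng = random.Random(seed)
    # Sampling without replacement: no one is interviewed twice per cycle
    return rng.sample(population, sample_size)

# Example: draw 5 of the 200 patients seen by a given practitioner this quarter
patients = [f"patient-{i:03d}" for i in range(200)]
interviewees = select_audit_sample(patients, 5, seed=42)
print(interviewees)
```

Randomized selection matters here for the same reason it matters in financial audits: because no one can predict who will be interviewed, a would-be predator cannot be confident that misconduct will stay invisible.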

All of these organizational strategies serve fundamentally as attempts to solve principal-agent problems within the organization. The principals of the organization have expectations about the norms that ought to govern behavior within the organization. These mechanisms are intended to increase the likelihood that there is conformance between the principal’s expectations and the agent’s behavior. And, when they fail, several of these mechanisms are intended to make it more likely that bad behavior is identified and corrected.

(Here is an earlier post treating scientific misconduct as a principal-agent problem; link.)

Corruption and institutional design

Robert Klitgaard is an insightful expert on the institutional causes of corruption in various social arrangements. His 1988 book, Controlling Corruption, laid out several case studies in detail, demonstrating specific features of institutional design that either encouraged or discouraged corrupt behavior by social and political actors.

More recently Klitgaard prepared a major report for the OECD on the topic of corruption and development assistance (2015; link). This working paper is worth reading in detail for anyone interested in understanding the dysfunctional origins of corruption as an institutional fact. Here is an early statement of the kinds of institutional facts that lead to higher levels of corruption:

Corruption is a crime of calculation. Information and incentives alter patterns of corruption. Processes with strong monopoly power, wide discretion for officials and weak accountability are prone to corruption. (7)

Corruption can go beyond bribery to include nepotism, neglect of duty and favouritism. Corrupt acts can involve third parties outside the organisation (in transactions with clients and citizens, such as extortion and bribery) or be internal to an organisation (theft, embezzlement, some types of fraud). Corruption can occur in government, business, civil society organisations and international agencies. Each of these varieties has the dimension of scale, from episodic to systemic. (18)

Here is an early definition of corruption that Klitgaard offers:

Corruption is a term of many meanings, but at the broadest level, corruption is the misuse of office for unofficial ends. Office is a position of duty, or should be; the office-holder is supposed to put the interests of the institution and the people first. In its most pernicious forms, systemic corruption creates the shells of modern institutions, full of official ranks and rules but “institutions” in inverted commas only. V.S. Naipaul, the Trinidad-born Nobel Prize winner, once noted that underdevelopment is characterised by a duplicitous emphasis on honorific titles and simultaneously the abuse of those titles: judges who love to be called “your honour” even as they accept bribes, civil servants who are uncivil and serve themselves. (18)

The bulk of Klitgaard’s report is devoted to outlining mechanisms through which governments, international agencies, and donor agencies can attempt to initiate effective reform processes leading to lower levels of corruption. There are two theoretical foundations underlying the recommendations, one having to do with the internal factors that enhance or reduce corruption and the other having to do with a theory of effective institutional change. The internal theory is couched as a piece of algebra: corruption is the result of monopoly power plus official discretion minus accountability (37). So effective interventions should be designed around reducing monopoly power and official discretion while increasing accountability.
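Klitgaard’s algebra can be written out explicitly; the symbolic shorthand is mine, but the formula itself is the one paraphrased in the report:

```latex
% Klitgaard's corruption formula (Controlling Corruption, 1988)
\text{Corruption} \;=\; \text{Monopoly} \;+\; \text{Discretion} \;-\; \text{Accountability}
% i.e. C = M + D - A: reforms work by shrinking M and D and enlarging A
```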

The premise about reform process that Klitgaard favors involves what he refers to as “convening” — assembling working groups of influential and knowledgeable stakeholders in a given country and setting them the task of addressing corruption in the country. Examples and case studies include the Philippines, Colombia, Georgia, and Indonesia. Here is a high-level description of what he has in mind:

The recommended process – referred to in this paper as convening – invites development assistance providers to share international data, case studies and theory, and invites national leaders from recipient countries to provide local knowledge and creative problem-solving skills. (5)

Klitgaard spends a fair amount of time on the problem of measuring corruption at the national level. He refers to several international indices that are relevant: Transparency International’s Corruption Perceptions Index, the World Economic Forum’s Global Competitiveness Index, the Global Integrity index, and the International Finance Corporation’s ranking of nations in terms of “ease of doing business” (11).

What this report does not attempt to do is to address specific institutional arrangements in order to discover the propensities for corrupt behavior that they create. This is the strength of Klitgaard’s earlier book, where he looks at alternative forms of social or political arrangements for policing or collecting taxes. In this report there is none of that micro detail. What specific institutional arrangements can be designed that have the effect of reducing official monopoly power and discretion, or the effect of increasing official accountability? Implicitly Klitgaard suggests that these are questions best posed to the experts who participate in the national convening on corruption, because they have the best local knowledge of government and business practices. But here are a few mechanisms that Klitgaard specifically highlights: punish major offenders, pick visible, low-hanging fruit, bring in new leaders and reformers, coordinate government institutions, involve officials, and mobilize citizens and the business community (chapter 5).

A more micro perspective on international corruption is provided by a recent study by David Hess, “Combating Corruption in International Business: The Big Questions” (link). Hess focuses on the Foreign Corrupt Practices Act in the United States, and he asks the question, why do large corporations pay bribes when this is clearly illegal under the FCPA? Moreover, given that FCPA has the power to assess very large fines against corporations that violate its strictures, how can violation be a rational strategy? Hess considers the case of Siemens, which was fined over $1.5 billion in 2008 for repeated acts of bribery in the pursuit of contracts (3). He considers two theories of corporate bribing: a cost-benefit analysis showing that the practice of bribing leads to higher returns, and the “rogue employee” view, according to which the corporation is unable to control the actions of its occasionally unscrupulous employees. On the latter view, bribery is essentially a principal-agent problem.

Hess takes the position that bribery often has to do with organizational culture and individual behavior, and that effective steps to reduce the incidence of bribery must proceed on the basis of an adequate analysis of both culture and behavior. And he links this issue to fundamental problems in the area of corporate social responsibility.

Corporations must combat corruption. By allowing their employees to pay bribes they are contributing to a system that prevents the realization of basic human rights in many countries. Ensuring that employees do not pay bribes is not accomplished by simply adopting a compliance and ethics program, however. This essay provided a brief overview of why otherwise good employees pay bribes in the wrong organizational environment, and what corporations must focus on to prevent those situations from arising. In short, preventing bribe payments must be treated as an ethical issue, not just a legal compliance issue, and the corporation must actively manage its corporate culture to ensure it supports the ethical behavior of employees.

As this passage emphasizes, Hess believes that controlling corrupt practices requires changing incentives within the corporation while equally changing the ethical culture of the corporation; he believes that the ethical culture of a company can have effects on the degree to which employees engage in bribery and other corrupt practices.

The study of corruption is an ideal case for the general topic of institutional dysfunction. And, as many countries have demonstrated, it is remarkably difficult to alter the pattern of corrupt behavior in a large, complex society.

Institutional design for democracies


How can we design practical, effective, and fair institutions for making the basic decisions that are needed within a democratic government? This is, of course, one of the oldest questions in democratic theory; but it is also a recent concern of Jon Elster’s. Under this rubric we can investigate, for example, the ways a legislature sets its agenda and votes or the ways constitutional principles function to secure citizens’ rights. Fundamentally we want to create institutions that reach good outcomes through a set of decision-making processes that minimize the workings of bias, self-serving behavior, and special interests.

Elster takes up these sorts of considerations in Securities Against Misrule: Juries, Assemblies, Elections. Here Elster concentrates on themes expressed by Jeremy Bentham, a philosopher whom Elster regards as being badly underrated. One part of Elster’s goal here is to recapture some of these overlooked insights. But he also wants to contribute to the substantive issue: what features of institutional design increase the likelihood of political outcomes that largely conform to the best interests of society (whatever those are)?

Elster’s treatment of Bentham is focused on a small number of theoretical issues in public decision theory. Bentham’s core goal was to work out some features of institutional design that would make corruption and misuse of power least likely in an electoral democracy. “In Bentham’s view, the object of institutional design is security against misrule, or the prevention of mischief—the removal of obstacles that will thwart the realization of the greatest good for the greatest number” (1).

Elster endeavors to explicate Bentham’s ideas on this topic; but he is also interested in forwarding the argument in his own terms. “My purpose is to consider procedural accounts of good institutional design” (5). And further: “I shall be concerned with removing — or blocking the effect of — known obstacles to good decision making” (17). So we can look at this book as being both a contribution to the history of thought and a substantive, rigorous contribution to contemporary debates about institutional design. 

The obstacles that interfere with “good decision making” in the area of public choice are numerous: bad sheep in the process (domineering individuals or spoilers; 18); the effects of strategic behavior by participants; cyclical voting outcomes (Arrow paradox); dictatorship (institutional arrangements that permit one actor to determine the outcome); sensitivity of the collective outcome to the order of the agenda; procedural indeterminacy; power differentials; capture of the process by special interests; deadlock through requirements of super majorities; and information asymmetries among participants.
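The “cyclical voting outcomes (Arrow paradox)” entry in this list can be made concrete with the classic Condorcet cycle. A minimal sketch in Python — the three-voter profile is hypothetical, chosen precisely to exhibit the cycle:

```python
# Three voters with the classic cyclic preference profile over options A, B, C.
# Each ballot ranks options from most preferred to least preferred.
ballots = [
    ("A", "B", "C"),
    ("B", "C", "A"),
    ("C", "A", "B"),
]

def majority_prefers(x, y, ballots):
    """True if a strict majority of voters rank x above y."""
    wins = sum(1 for b in ballots if b.index(x) < b.index(y))
    return wins > len(ballots) / 2

# Pairwise majority comparisons form a cycle: A beats B, B beats C, C beats A.
print(majority_prefers("A", "B", ballots))  # True
print(majority_prefers("B", "C", ballots))  # True
print(majority_prefers("C", "A", ballots))  # True
```

Because the majority relation is cyclic, whichever option is voted on last can win — which is exactly why the sensitivity of outcomes to the order of the agenda appears as a separate obstacle in the list above.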

 
One family of mechanisms that Elster considers in some detail involves ignorance, secrecy, and publicity (chapter 2). Consider an example. Suppose a hospital aims to increase patient health and reduce costs by pre-selecting preferred medications for a small group of diagnoses. And suppose it assigns this task to a formal committee of physicians and other health professionals. The rules that define membership, deliberation, and decision-making of this committee need to be established. The desiderata for the functioning of the committee are clear. We want members to deliberate impersonally and neutrally, and to favor or disfavor candidate medications based on efficacy and cost. It will help to prevent bias if we block committee members from having knowledge of how the choice of X or Y will influence their own incomes. And publicity about the questions referred to the committee, and the resolutions those questions have received, can also induce committee members to decide the issue on the basis of objective costs and benefits (rather than private self-interest). This is the purpose of “sunshine laws” about public deliberations and contracts.
 
Elster looks in detail at alternative designs that have been implemented in the use of jury trials to assess guilt or innocence (chapter 2). How are jurors selected? What information is provided to the jury, and what information is withheld? Are the identities of jurors known to defendants? Choices that are made in each of these design areas are pertinent to a variety of sources of distortion of outcome: racial bias, self interest on the part of the juror, or intimidation of the jury by confederates of the defendant.
 
The fundamental point that Elster takes from Bentham is that institutions should not be considered in terms of their ideal functioning, but in terms of how they will function when populated by ordinary people subject to a range of bad motivations (self-interest, prejudice, bias in favor of certain groups, …). This is the point of “security against misrule” — to find mechanisms that obviate the workings of venality, bias, and self-interest on the part of the participants. In a sense, this is a return to problems of imperfect rationality that interested Elster early in his career; but this time these problems are raised in the context of collective rationality.
 

Appearance and reality in public life

So what kind of democracy do we have?  Do our institutions do a great job of establishing the public interest over the medium term, or have our institutions been captured by private interests, leaving essentially no real power in the hands of citizens?

The way it is supposed to work, according to Civics 101:

  • Elected officials faithfully consider proposed legislation, based on their expressed political values, the interests of their constituents, and their perception of the best long-term interest of the polity.
  • Decisions are made in public view.
  • Legislative debates turn on the public presentation of reasons in favor of or against proposed legislation, invoking only rational assessment of likely consequences, fitness of proposed legislation to the long-term best interests of the polity, and consistency with existing law and constitution.
  • Agencies use experts to faithfully create regulations that implement legislation in ways that are consistent with legislative intent, grounded in rational study of relevant scientific findings, and impartially applied without regard to persons or specific private interests.
  • Lobbyists are able to influence legislation and regulation only through compelling rational arguments based on cost-benefit analysis, legitimate expression of a given set of affected interests, and public knowledge of their advocacy.

This sounds pretty much like the way Rousseau would have expected the legislative process to work within the ideal polity; legislation enacts the “general will”.

The not-so-ideal case:

  • Elected officials give excessive importance to the impact their positions will have on the voters back home — thereby paying less attention to the facts and consequences for the public good of various legislative initiatives.
  • Elected officials sometimes permit themselves to be influenced by campaign contributions and other personal advantages from industries and other private interests, thereby supporting or opposing initiatives for reasons other than the overall goodness or badness of the legislation for the public good.
  • Regulatory agencies are influenced by industry “experts” in writing regulations, with the result that regulatory regimes are tilted towards the private interests of the regulated industries rather than neutrally protecting public health and safety.
  • Lobbyists have substantial access to legislators and regulators, with the result that they are able to move the dial in their favored direction.

We might describe this scenario as the pluralism scenario, along the lines of Robert Dahl’s theories of democracy.  Various interests contend through the use of various legal tools of influence, and the resulting set of laws, policies, and regulations represents a rough-and-ready balance among the many interested parties in a complex society.  Private interests have weight in this scenario, but they don’t determine the outcomes.

The nightmare scenario for democracy:

  • Elected officials have no sincere adherence to the public good; they pursue their own private and political interests through all the powers available to them. (Senator Jim Bunning’s unembarrassed willingness to block extension of unemployment legislation for narrow personal and political reasons falls in this category.)
  • Elected officials are sometimes overtly corruptible, accepting significant gifts in exchange for official performance. 
  • Elected officials are intimidated by the power of private interests (corporations) to fund electoral opposition to their re-election.  (The Supreme Court decision on corporate free speech makes this much more likely.)
  • Regulatory agencies are dominated by the industries they regulate; independent commissioners are forced out of office; and regulations are toothless when it comes to environmental protection, wilderness protection, health and safety in the workplace, and food safety.
  • Lobbyists for special interests and corporations have almost unrestricted access to legislators and regulators, and are generally able to achieve their goals.

This is the nightmare scenario if one cares about democracy, because it implies that the apparatus of government is essentially controlled by private interests rather than the common good and the broad interests of society as a whole.  It isn’t “pluralism”, because there are many important social interests not represented in this system in any meaningful way: poor people, non-unionized workers, people without health insurance, inner-city youth, the environment, people exposed to toxic waste, …

The fact that healthcare reform, regulation of CO2 emissions, and significant reform of the financial system have all been essentially blocked in the current legislative process seems to point to one of these scenarios; and it isn’t the first or the second.

I’ll quote an idea used in the previous posting to suggest one possible way forward for our democracy: a movement towards substantially greater participatory democracy in this country.  Archon Fung and Erik Olin Wright address the future of our democracy in Deepening Democracy: Institutional Innovations in Empowered Participatory Governance.  Here is how they set the stage for their analysis:

As the tasks of the state have become more complex and the size of polities larger and more heterogeneous, the institutional forms of liberal democracy developed in the nineteenth century — representative democracy plus techno-bureaucratic administration — seem increasingly ill suited to the novel problems we face in the twenty-first century.  “Democracy” as a way of organizing the state has come to be narrowly identified with territorially based competitive elections of political leadership for legislative and executive offices.  Yet, increasingly, this mechanism of political representation seems ineffective in accomplishing the central ideals of democratic politics: facilitating active political involvement of the citizenry, forging political consensus through dialogue, devising and implementing public policies that ground a productive economy and healthy society, and, in more radical egalitarian versions of the democratic idea, assuring that all citizens benefit from the nation’s wealth. (3)

It is an interesting question to consider whether a participatory process surrounding the issue of healthcare reform would have led to a more satisfactory outcome.  Given the raucous, aggressive, and uncivil disruption of town-hall meetings that occurred last summer around this issue, it is hard to be too optimistic about this approach either.

Business interests and democracy

The central ideal of democracy is the notion that citizens can express their political and policy preferences through political institutions, and that the policies selected will reflect those preferences. We also expect that elected officials will act ethically in support of the best interests of the public. This is their public trust.

The anti-democratic possibility is that popular debates and expressions of preference are only a sham, and that secretive, powerful actors are able to secure their will in most circumstances. And in contemporary circumstances, that sounds a lot like corporations and business lobbying organizations. (Here is an earlier post on a report about corrupt behavior at the Department of the Interior.)

The January Supreme Court decision affirming the status of corporations as persons, and therefore entitled to unfettered rights of free speech, is the most extreme expression of the power of business, corporations, and money. As distinguished law professor Ronald Dworkin argues in the New York Review of Books (link), this decision dramatically increases the ability of corporations to influence elections and decisions in their favor — vastly disproportionately to citizens’ organizations. And, as Dworkin points out, corporations don’t need to exercise this right frequently in order to have enormous impact on candidates and issues. The mere threat of a well-financed media campaign against key representatives will suffice to sway their behavior.

There are too many examples of pernicious influence of business interests on public policy. Take a useful policy that many states and cities have tested, pretrial release programs. It appears that the public interest has been defeated by … the bail bondsmen. NPR ran a story on the pretrial release program in Broward County, Florida (link). The program was successful, with a high rate of court appearances and annual savings of $20 million for the county. But this program cost the bail bondsmen business. They hired a lobbyist, and in the dead of night the county commission scaled back the program. Here is how the “industry” describes the issue (link).  It is a pretty shocking story:

According to campaign records, Book [the lobbyist] … and the rest of Broward’s bondsmen spread almost $23,000 across the council in the year before the bill was passed. Fifteen bondsmen cut checks worth more than $5,000 to commissioner and now-county Mayor Ken Keechl just five days before the vote.

Keechl and several other commissioners declined NPR’s repeated requests for an interview. At the meeting last January, they said they were concerned that Broward’s pretrial program cost more than other counties’ programs, and they vigorously denied that campaign contributions played any role.

Book had his work cut out for him. Broward’s own county attorney wrote a memo warning commissioners that cutting back pretrial could be unconstitutional. But Book worked behind the scenes.

He met with commissioners, and according to county records, he had unusual access. That’s because at the same time he was hired by the bondsmen to lobby commissioners, he was also hired by the commissioners to be their lobbyist. (transcript from NPR report)

The story makes the sequence pretty clear: through campaign contributions and influence over commissioners’ votes, the bondsmen prevailed in abandoning a policy that was unmistakably in the public interest.  The commission acted in deference to the narrow financial interests of a business group; campaign contributions by that group played a decisive role; and an overburdened county government was denied a tool that was good public policy from every point of view.  And similar efforts are taking place in many cities.  So where is the public’s interest?

Or take the largest issues we face today in national politics — cap-and-trade policy, healthcare reform, and the nation’s food system. The influence of large financial interests in each of these areas is perfectly visible. Energy companies, coal companies, insurance companies and trade associations, and large food companies and restaurant chains pretty much run the show. Regulations are written in deference to their interests, legislation conforms to their needs and demands, and elected officials calibrate their actions to the winds of campaign contributions. And the Supreme Court reverses a century of precedent and accords to corporations and unions the rights of freedom of expression enjoyed by individual citizens. So the influence of financially powerful corporations and industry groups will become even greater.

It would be deeply interesting if we had a sort of “influence compass” that would allow us to measure the net deviation created by the private interests of companies and industries across a number of policy areas. How far from the due north of the public’s interest are we when it comes to —

  • Environmental protection
  • Banking regulation
  • Insurance regulation
  • Energy policy
  • Cost-effective military procurement
  • Urban land use policy 
  • Airline safety
  • Licensing of public resources such as gas and coal leases

Of course the metaphor of “north” doesn’t really work here, since there is no purely objective definition of the public good in any of these areas. That is the purpose of open democratic debate about policy issues — what are the facts, what do we want to achieve, and what are the most effective ways of achieving our ends? But when private interests can influence decision makers to adopt X because it is good for the profits of industry Y — in spite of the clear public interest in doing Z — then we have anti-democratic distortion of the process.

Where are the democratic checks on this exercise of power? A first line of defense is the set of regulations most governments and agencies have concerning conflict of interest and lobbying. These safeguards obviously don’t work; no one who pays attention would seriously think that agencies and governments are uninfluenced by gifts, contributions, promises of future benefits, and the blandishments of lobbyists. And these influences range from slight deviations to gross corruption.  Moreover, influence doesn’t need to be corrupt in order to be anti-democratic.  If an energy company gets a privileged opportunity to make the case for “clean coal” behind closed doors, this may represent a legitimate set of partial arguments.  The problem is that experts representing the public are not given the same opportunity.

A related strategy is publicity: requiring that decision-making agencies make their deliberations and decision-making processes transparent and visible to the public. Let the public know who is influencing the debate, and perhaps this will deter decision-makers from favoring an important set of private interests. Then-Vice-President Cheney’s refusal to make public the list of companies involved in consultations to the National Energy Policy Development Group (link) is an instructive example; it is very natural to suspect that the recommendations put forward by the NEPDG reflected the specific business concerns of an unknown set of energy companies and lobbyists (link). So greater publicity of process can be a tool in enhancing the fit between policy and the public’s interests. (Here are earlier posts on the capacity of publicity to serve as a check on bad organizational behavior (post, post).)

Another line of defense is the independent press and media. Our newspapers and magazines have historically had the resources and the mission to track down the influence of private interests on the formulation of legislation, regulation, and policy. Bill Moyers is a great example (link); consider his recent story on the role of campaign contributions in the election of judges (link). But the resources are disappearing and the cheerleaders at Fox News are gaining influence by the month. So relying on the investigative powers of an independent media looks like an increasingly long-odds bet.

So we have our work cut out for us to validate the main premise of democracy: that the interests of the public will be served faithfully by government without significant distortion by private business interests.

(Here is a recent post on C. Wright Mills’ analysis of power elites and the influence accorded to corporations in the United States.)

Scientific misconduct as a principal-agent problem


How does an organization assure that its agents perform their duties truthfully and faithfully? We have ample evidence of the other kind of performance — theft, misappropriation, lies, fraud, diversion of assets for personal use, and a variety of deceptive accounting schemes. And we have whole professions devoted to detecting and punishing these various forms of dishonesty — accountants, investigative reporters, management consultants, insurance experts, prosecutors and their investigators. And yet dishonest behavior is common, in business, finance, government, and even the food industry. (See several earlier postings for discussions of the issues of corruption and trust in society.)

Here I’m especially interested in a particular kind of activity — scientific and medical research. Consider a short but sobering list of scientific and medical frauds in the past fifty years: Cyril Burt’s intelligence studies, Dr. Hwang Woo-suk’s stem cell cloning fraud, the Anjan Kumar Banerjee case in Britain, the MMR vaccine-autism case, a spate of recent cases in China, and numerous other examples. And consider recent reports that a percentage of scientific photos in leading publications had been photoshopped in ways that favored the researcher’s findings (link). (Here are some comments by Edward Tufte on the issue of scientific imaging, and here are some journal guidelines from the Council of Science Editors attempting to regulate the issue.) Plainly, fraud and misconduct sometimes occur within the institutions of scientific and medical research. And each case has consequences — for citizens, for patients, and for the future course of research.

Here is how the editor of Family Practice describes the problem of research misconduct in a review of Fraud and Misconduct in Medical Research (third edition):

Fraud and misconduct are, it seems, endemic in scientific research. Even Galileo, Newton and Mendel appear to have fudged some of their results. From palaeontology to nanotechnology, scientific fraud reappears with alarming regularity. The Office of Research Integrity in the USA investigated 127 serious allegations of scientific fraud last year. The reasons for conducting fraudulent research and misrepresenting research in scientific publications are complex. The pressures to publish and to achieve career progression and promotion and the lure of fame and money may all play a part, but deeper forces often seem to be at work.

How important are fraud and misconduct in primary care research? As far as Family Practice goes, mercifully rare, as I pointed out in a recent editorial. Sadly, however, there are examples, all along the continuum from the beginning of a clinical trial to submission of a final manuscript, of dishonesty and deceit in general practice and primary care research. Patients have been invented to increase numbers (and profits) in clinical trials, ethical guidance on consent and confidentiality have been breached, and ‘salami’ and duplicate publication crop up from time to time.

The problem is particularly acute in the area of scientific and medical research because the public at large has very little ability to independently evaluate the validity of a research finding, let alone validate the integrity of the research. And this extends in large part to science and medicine journalists as well, since they are rarely given access to the underlying records and data for a study.

The stakes are high — dishonest research can cost lives or delay legitimate research, not to speak of the cost of supporting the fraudulent research in the first place. The temptations for researchers are large as well — funding from drug and device makers, the incentives and pressures of career advancement, and pure vanity, to name several. And we know that instances of fraud and other forms of serious scientific misconduct continue to occur.

So, thinking of this topic as an organizational problem — what measures can be taken to minimize the incidence of fraud and misconduct in scientific research?

One way of describing the situation is as a gigantic principal-agent problem. (Khalid Abdalla provides a simple explanation of the principal-agent problem here.) It falls within the scope of the more general challenge of motivating, managing, and supervising highly skilled and independent professionals. The “agent” is the individual researcher or research team. And the “principal” may be construed at a range of levels: society at large, the Federal government, the NIH, the research institute, or the department chair. But it seems likely that the problem is most tractable if we focus attention on the more proximate relationships — the NIH, the research institute, and the researcher.

So this is a good problem to consider from the point of view of institutional design and complex interactive social behavior. We know what kind of behavior we want; the problem is to create the institutional settings and motivational processes through which the desired behavior is encouraged and the undesired behavior is detected and punished.

One response from the research institutions (research universities, institutes, and medical schools) is to emphasize training programs in scientific professional ethics, to more deeply instill the values of strict scientific integrity in each researcher and each institution. The hope here is that pervasive attention to the importance of scientific integrity will have the effect of reducing the incidence of misconduct. A second approach, from universities, research organizations, and journals, is to increase oversight and internal controls surrounding scientific fraud. One example — some journals require that the statistical analysis of results be performed by a qualified, independent, academic statistician. Strict requirements governing conflicts of interest are another institutional response. And a third approach from institutions such as the NIH and NSF is to ratchet up the consequences of misconduct. The United States Office of Research Integrity (link) has a number of training and enforcement programs designed to minimize scientific misconduct. The British government has set up a similar organization to combat research fraud, the UK Research Integrity Office (link). Individuals found culpable will be denied access to research funds, effectively halting their scientific careers, and criminal prosecution is possible as well. So the sanctions for misconduct are significant. (Here’s an egregious example leading to criminal prosecution.)

And, of course, the first and last line of defense against scientific misconduct is the fundamental requirement of peer review. Scientific journals use expert peers to evaluate the research to be considered for publication, and universities turn to expert peers when they consider scientists for promotion and tenure. Both processes create a strong likelihood of detecting fraud if it exists. Who is better qualified to detect a potentially fraudulent research finding than a researcher in the same field?

But is all of this sufficient? It’s unclear. The most favorable interpretation would be the judgment that this combination of motivational factors and local and global institutional constraints will contain the problem to an acceptable level. But is there empirical evidence for this optimism? Or is misconduct becoming more widespread over time? The efforts to deepen researchers’ attachment to a code of research integrity are certainly positive — but what about the small percentage of people who are not motivated by an internal compass? Greater internal controls are certainly a good idea — but they are surely less effective in the area of research than accounting controls are in the financial arena. Oversight is just more difficult to achieve in the area of scientific research. (And of course we all know how porous those controls are in the financial sector — witness Enron and other accounting frauds.) And if the likelihood of detection is low, then the threat of punishment is weakened. So the measures mentioned here have serious limitations in likely effectiveness.
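The deterrence logic in the last point can be made concrete with a back-of-the-envelope expected-value calculation. The numbers below are purely hypothetical assumptions chosen for illustration, not empirical estimates of benefits, penalties, or detection rates:

```python
# Back-of-the-envelope deterrence arithmetic. All quantities are
# hypothetical units, chosen only to illustrate the shape of the problem.

def expected_payoff(benefit, detection_prob, penalty):
    """Expected net payoff of misconduct: the gain from the fraud
    minus the probability-weighted cost of being caught."""
    return benefit - detection_prob * penalty

# Suppose a fabricated result is worth 100 units (grants, promotion)
# and the sanction, if caught, costs a career-ending 1,000 units.
benefit, penalty = 100, 1000

for p in (0.01, 0.05, 0.10, 0.20):
    print(f"detection probability {p:.2f}: "
          f"expected payoff {expected_payoff(benefit, p, penalty):+.0f}")
```

On these invented numbers, even a penalty ten times the gain fails to deter in expectation until the probability of detection reaches the ratio of benefit to penalty (here, one in ten). Raising the detection rate matters at least as much as raising the penalty.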

Brian Deer is one of Britain’s leading journalists covering medical research (website). His work in the Sunday Times of London established the medical fraud underlying the spurious claim that MMR vaccine causes autism mentioned above. Following a recent public lecture to a medical audience he was asked the question, how can we get a handle on frauds like these? And his answer was blunt: with snap inspections, investigative policing, and serious penalties. In his perception, the stakes are too high to leave the matter to professional ethics.

It perhaps goes without saying that the vast majority of scientific researchers are honest investigators who are guided by the advancement of science and medicine. But it is also apparent that there are a small number of researchers of whom these statements are not true. And the problem confronting the organizations of scientific research is a hard one: how to create the institutional structures where misconduct is unlikely to occur and where misconduct is most likely to be detected when it does.

There is one other connection that strikes me as important, and it is a connection to the philosophy of science. It is an article of faith for philosophers of science that the scientific enterprise is truth-enhancing, in this sense: the community of researchers follows a set of institutionally embodied processes that are well designed to enhance the comprehensiveness and veridicality of our theories and to weed out false theories. Our theories get better through the empirical and logical testing that occurs as a result of these socially embodied procedures of science. But if the corrupting influences mentioned above are indeed common, then the confidence we have in the epistemic value of the procedures of science takes a big hit. And this is worrisome news indeed.

Public versus hidden faces of organizations



Think of a range of complex organizations and institutions — police departments, zoning boards, corporations, security agencies, and so on indefinitely. These organizations all have missions, personnel, constituencies, and policies and practices. They all do various things — they affect individuals in society and they bring about significant social effects. And, in each case there are at least three aspects of their realities — the ways they publicly present themselves, the ways their behaviors and effects are perceived by the public, and the usually unobservable reality of how they actually behave. Usually the public persona of the institution is benign, fair, and public-spirited. But how close is this public persona to the truth? In many of our basic institutions, the answer seems to be, not very. We are daily confronted with cases of official corruption, corporations that abuse their power, legislators who take advantage of insider status, and the like. So how can we conceptualize the task of getting a reasonably accurate perception of the hidden workings of our major institutions and organizations?

First, let’s consider whether it is possible to specify a minimum charter of good organizational behavior in a democratic society. This would be a partial answer to a part of our question: what defines the conditions of a socially acceptable and publicly defensible organization? Consider these aspirations —

  • The organization should have goals that are compatible with enhancing the public good.
  • The organization should have appropriate policies about behavior towards employees and the public.
  • The organization should genuinely incorporate a commitment to compliance with law and regulation.
  • The organization should embody a faithful commitment to exerting its efforts on behalf of its stated mission and stakeholders.
  • The organization should be committed to transparency and accountability.

Bad business practices and corruption can often be traced to a violation of one or more of these principles. The most offensive practices by powerful organizations — predatory behavior, asset stripping, the use of coercion and threat to achieve organizational goals, fraud, deception, illegal behavior, toxic waste dumping, evasion of regulations, and bribery — all fall within the categories identified here.

So how are we to determine whether our existing organizations and institutions satisfy these minimal conditions? We might imagine a routine “scan” of major institutions and organizations that asks a small set of questions along these lines:

  • What are the real operational goals and priorities of the organization?
  • What are the operational policies that govern corporate action?
  • How do agents of the organization actually treat members of the public in carrying out their tasks?
  • To what extent are there discrepancies between policy and practice?
  • To what extent do powerful leaders and managers use their positions to favor their own private interests? (conflict of interest)
  • To what extent do business crimes occur — accounting fraud, investor deception, evasion of regulations for health and safety?
  • And, most generally, to what extent is there a discrepancy between the official story about the organization and its actual practices?

It is very easy to think of examples of bad organizational behavior illustrating each of these questions — waste management companies fronting for organized crime groups, pharmaceutical companies producing defective generic drugs, police officers accepting bribes from speeding drivers, mining companies hiring “security workers” to evict “squatters.” And it would be a very interesting exercise to try to provide brief but accurate answers to each of these questions for a number of organizations. Based on the answers to questions like these that we are able to establish, we could then make an effort to answer the question of how great a discrepancy there is between the benign public persona of major institutions and their actual workings.

In theory we might say that answering these questions is no more difficult than putting a man on the moon — costly but straightforward. However, as was said twenty years ago in the context of anti-ballistic missile technology, the difference is that the moon doesn’t fight back. Organizations — particularly large governmental and corporate organizations — are very adept at covering their tracks, concealing bad behavior, and re-telling the story in their own interests. So the investigative challenge is a huge one — we might speculate that corruption multiplies geometrically, while investigative capacity multiplies arithmetically (a sort of Malthusian theory of misbehavior). Any given abuse can be uncovered in the New Yorker or on the 6 o’clock news — but bad behavior outstrips investigative resources.
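The Malthusian metaphor can be sketched numerically. The growth rates below are invented purely to illustrate the shape of the dynamic, not to estimate anything about actual corruption or investigative budgets:

```python
# A toy rendering of the "Malthusian theory of misbehavior": abuses
# compound geometrically while investigative capacity grows only
# arithmetically. All starting values and rates are hypothetical.

def coverage_over_time(periods, abuses=100.0, capacity=100.0,
                       abuse_growth=1.10, capacity_step=5.0):
    """Return, for each period, the fraction of abuses that
    investigators have the capacity to examine (capped at 1.0)."""
    series = []
    for _ in range(periods):
        series.append(min(1.0, capacity / abuses))
        abuses *= abuse_growth      # geometric: 10% more abuses each period
        capacity += capacity_step   # arithmetic: 5 more units of capacity
    return series

ratios = coverage_over_time(30)
print(f"coverage at start: {ratios[0]:.2f}; "
      f"after 30 periods: {ratios[-1]:.2f}")
```

On these assumptions, investigators begin with full coverage and end up able to examine only about 15 percent of abuses. The particular numbers are arbitrary; the point is structural — a geometric rate eventually outruns any arithmetic one, whatever the starting values.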

So the task of understanding this aspect of modern society amounts to finding effective ways of shining a light on the real practices and priorities of important organizations and institutions. And the practical interest we have in controlling bad organizations — controlling corruption, ensuring good environmental and labor practices, eliminating coercion and violence — comes down to the challenge of enhancing the ability of democracies to investigate, regulate, and publicize the standards and outcomes of behavior that are required.

(Earlier posts have addressed aspects of this issue, including comments on corruption and publicity.)

Trust and corruption

The collapse of a major skyscraper crane in New York City last month led to a surprising result: the arrest of the city’s chief crane inspector on charges of bribery. (See the New York Times story here.) (The story indicates that the facts surrounding the charges are unrelated to this particular crane collapse.) Several weeks earlier, a Congressional committee heard testimony from three F.A.A. inspectors to the effect that the agency had permitted Southwest Airlines to fly uninspected planes (story), and some attributed this lapse to too cozy a relationship between the F.A.A. and the airline industry:

The F.A.A.’s watchdog role, to many Democrats in Congress who now oversee airline regulators, grew toothless. “We had drifted a little bit too much toward the over-closeness and coziness between regulator and regulated,” said H. Clayton Foushee Jr., a former F.A.A. official who led a recent inquiry by Mr. Oberstar’s committee. (story)

The basic systems of a complex society depend upon the good-faith commitment of providers to give top priority to safety, health, and quality, but they also depend upon regulation, inspection, and certification. Caveat emptor doesn’t work when it comes to airline travel or working in a skyscraper; we simply have to trust that the airliner or the building is built and maintained to a high level of safety standards. The food we eat, the restaurants we patronize, the airlines and railroads we travel on, and the buildings we live and work in (and send our children to) provide complex products for our use that we can’t independently evaluate. Instead, we are obliged to trust the providers — the builders, the airline companies and their pilots and mechanics, the restaurant operators — and the regulatory and inspection regimes that are intended to provide an assurance of quality, safety, and health.

And yet there are two imperatives that work against public health and safety in most modern societies: the private incentive that the provider has to cut corners, and the perennial temptation of corruption that is inherent within a regulatory process. On the providers’ side, there is a constant business incentive to lower costs by substituting inferior ingredients or materials, tolerating less-than-sanitary conditions in the back-of-restaurant areas, or skimping on necessary maintenance of inherently dangerous systems. And on the regulatory side, there is the omnipresent possibility of collusion between inspectors and providers. Inspectors have it in their power to impose costs or savings on providers; so providers have an economic interest in making payments to inspectors to avoid those costs. (See Robert Klitgaard’s fascinating book, Controlling Corruption, for a political scientist’s analysis of this problem.)

In a purely laissez-faire environment we would expect there to be recurring instances of health and safety disasters in food production, building construction, transportation, and the healthcare system; this seems to be the logical result of a purely cost- and profit-driven system of production. (This seems to be what lies at the heart of the Chinese pet food and toy product scandals of several months ago, and it was at the heart of the food industries chronicled by Upton Sinclair a century ago in this country.)

But an inadequate system of regulation and enforcement seems equally likely to lead to health and safety crises for society, if inspection regimes are inadequate or if inspectors are corrupt. The two stories about inspection mentioned above point to different ways in which a regulatory system can go wrong: individual inspectors can be corrupted, or honest inspectors can be improperly managed by their regulatory organization. And, of course, there is a third possibility as well: the regulatory system may be fully honest and well-managed but wholly insufficient to its task in terms of the resources and personnel devoted to it.

These two tendencies appear to be producing major social problems in China today. The Chinese public has little confidence in building standards, even in the major civil engineering projects the country has undertaken in the past ten years (CNN story, BBC story); there is widespread concern about corruption in many aspects of ordinary life; and there is growing concern among consumers about the safety of the system of food production, public water sources, and pharmaceuticals (story). (The anger and anguish expressed by parents whose children were lost in collapsed schools in Sichuan appear to derive from these kinds of mistrust.) So one of China’s major challenges for the coming years is to create credible, effective, and trusted regulatory regimes for the areas of public life that most directly affect health and safety.

But the stories mentioned above don’t have to do with China, or India, or Brazil; they have to do with the United States. We have lived through a period of determined deregulation since 1980, and have been subjected to a political ideology that minimized and demeaned the role of government in protecting the health and safety of the public — in banking no less than air safety. It seems very pressing for us now to ask ourselves: how effective are the systems of regulation and inspection that we have in our key industries — food, pharmaceuticals, hospitals, transportation, and construction? How much confidence can we have in the basic health and safety features of these fundamental social goods? And what sorts of institutional reforms do we need to undertake?
