An existential philosophy of technology

Ours is a technological culture, at least in the quarter of the world's countries that enjoy a high degree of economic affluence. Cell phones, computers, autonomous vehicles, CT scan machines, communications satellites, nuclear power reactors, artificial DNA, artificial intelligence bots, drone swarms, fiber optic data networks — we live in an environment that depends unavoidably upon complex, scientifically advanced, and mostly reliable artifacts that go well beyond the comprehension of most consumers and citizens. We often do not understand how they work. But more than that, we do not understand how they affect us in our social, personal, and philosophical lives. We are different kinds of persons than those who came before us, it often seems, because of the sea of technological capabilities in which we swim. We think about our lives differently, and we relate to the social world around us differently.

How can we begin investigating the question of how technology affects the conduct of a “good life”? Is there such a thing as an “existential” philosophy of technology — that is, having to do with the meaning of the lives of human beings in the concrete historical and technological circumstances in which we now find ourselves? This suggests that we need to consider a particularly deep question: in what ways does advanced technology facilitate the good human life, and in what ways does it frustrate and block the good human life? Does advanced technology facilitate and encourage the development of full human beings, and lives that are lived well, or does it interfere with these outcomes?

We are immediately drawn to a familiar philosophical question: What is a good life, lived well? This has been a central question for philosophers since Aristotle and Epicurus, Kant and Kierkegaard, Sartre and Camus. But let’s try to answer it in a paragraph. Let’s postulate that there are a handful of characteristics that are associated with a genuinely valuable human life. These might include the individual’s realization of a capacity for self-rule, creativity, compassion for others, reflectiveness, and an ability to grow and develop. This suggests that we start from the conception of a full life of freedom and development offered by Amartya Sen in Development as Freedom and the list of capabilities offered by Martha Nussbaum in Creating Capabilities: The Human Development Approach — capacities for life, health, imagination, emotions, practical reason, affiliation with others, and self-respect. And we might say that a “life lived well” is one in which the person has lived with integrity, justice, and compassion in developing and fulfilling his or her fundamental capacities. Finally, we might say that a society that enables the development of each of these capabilities in all its citizens is a good society.

Now look at the other end of the issue — what are some of the enhancements to human living that are enabled by modern technologies? There are several obvious candidates. One might say that technology facilitates learning and the acquisition of knowledge; technology can facilitate health (by finding cures and preventions of disease; and by enhancing nutrition, shelter, and other necessities of daily life); technology can facilitate human interaction (through the forms of communication and transportation enabled by modern technology); technology can enhance compassion by acquainting us with the vivid life experiences of others. So technology is sometimes life-enhancing and fulfilling of some of our most fundamental needs and capabilities.

How might Dostoevsky, Dos Passos, Baldwin, or Whitman have adjusted their life plans if confronted by our technological culture? We would hope that their imagination and passion for discovering the human in the ordinary would not have been overwhelmed by an iPhone, a Twitter feed, and a web browser. We would like to suppose that their insights and talents would have survived and flourished, that poetry, philosophy, and literature would still have emerged, and that compassion and commitment would have found their place even in this alternative world.

But the negative side of technology for human wellbeing is also easy to find. We might say that technology encourages excessive materialism; it draws us away from real interactions with other human beings; it promotes a life consisting of a series of entertaining moments rather than meaningful interactions; and it squelches independence, creativity, and moral focus. So the omnipresence of technologies does not ensure that human beings will live well and fully, by the standards of Aristotle, Epicurus, or Montaigne.

In fact, there is a particularly bleak possibility concerning the lives that advanced everyday technology perhaps encourages: our technological culture encourages us to pursue lives that are primarily oriented towards material satisfaction, entertainment, and toys. This sounds a bit like a form of addiction or substance abuse. We might say that the ambient cultural imperatives of acquiring the latest iPhone, the fastest internet streaming connection, or a Tesla are created by the technological culture that we inhabit, and that these motivations are ultimately unworthy of a fully developed human life. Lucretius, Socrates, and Montaigne would scoff.

It is clear that technology has the power to distort our motives, goals and values. But perhaps with equal justice one might say that this is a life world created by capitalism rather than technology — a culture that encourages and elicits personal motivations that are “consumerist” and ultimately empty of real human value, a culture that depersonalizes social ties and trivializes human relationships based on trust, loyalty, love, or compassion. This is indeed the critique offered by the philosophers of the Frankfurt School — that capitalism depends upon a life world of crass materialism and impoverished social and personal values. And we can say with some exactness how capitalism reshapes humanity and culture in its own image: through the machinations of advertising, strategic corporate communications, and the honoring of acquisitiveness and material wealth (link). It is good business to create an environment where people want more and more of the gadgets that technological capitalism can provide.

So what is a solution for people who worry about the shallowness and vapidity of this kind of technological materialism? We might say that an antidote to excessive materialism and technology fetishism is a fairly simple maxim that each person can strive to embrace: aim to identify and pursue the things that genuinely matter in life, not the glittering objects of short-term entertainment and satisfaction. Be temperate, reflective, and purposive in one’s life pursuits. Decide what values are of the greatest importance, and make use of technology to further those values, rather than as an end in itself. Let technology be a tool for creativity and commitment, not an end in itself. Be selective and deliberate in one’s use of technology, rather than being the hapless consumer of the latest and shiniest. Create a life that matters.

Responsible innovation and the philosophy of technology

Several posts here have focused on the philosophy of technology (link, link, link, link). A simple definition of the philosophy of technology might go along these lines:

Technology may be defined broadly as the sum of a set of tools, machines, and practical skills available at a given time in a given culture through which human needs and interests are satisfied and the interplay of power and conflict furthered. The philosophy of technology offers an interdisciplinary approach to better understanding the role of technology in society and human life. The field raises critical questions about the ways that technology intertwines with human life and the workings of society. Do human beings control technology? For whose benefit? What role does technology play in human wellbeing and freedom? What role does technology play in the exercise of power? Can we control technology? What issues of ethics and social justice are raised by various technologies? How can citizens within a democracy best ensure that the technologies we choose will lead to better human outcomes and expanded capacities in the future?

One of the issues that arises in this field is the question of whether there are ethical principles that should govern the development and implementation of new technologies. (This issue is discussed further in an earlier post; link.)

One principle of technology ethics seems clear: policies and regulations are needed to protect the future health and safety of the public. This is the same principle that serves as the ethical basis of government regulation of current activities, justifying coercive rules that prevent pollution, toxic effects, fires, radiation exposure, and other clear harms affecting the health and safety of the public.

Another principle might be understood as exhortatory rather than compulsory: the general recommendation that private actors should pursue technologies that make some positive contribution to human welfare. This principle is plainly less universal and obligatory than the “avoid harm” principle; many technologies are chosen because their inventors believe they will entertain, amuse, or otherwise please members of the public, and will thereby permit generation of profits. (Here is a discussion of the value of entertainment; link.)

A more nuanced exhortation is the idea that inventors and companies should subject their technology and product innovation research to broad principles of sustainability. Given that large technological change can potentially have very large environmental and collective effects, we might think that companies and inventors should pay attention to the large challenges our society faces, now and in the foreseeable future: addiction, obesity, CO2 production, plastic waste, erosion of privacy, spread of racist politics, fresh water depletion, and information disparities, to name several.

These principles fall within the general zone of the ethics of corporate social responsibility. Many companies pay lip service to the social-benefits principle and the sustainability principle, though it is difficult to see evidence of the effectiveness of this motivation. Business interests often seem to trump concerns for positive social effects and sustainability — for example, in the pharmaceutical industry and its involvement in the opioid crisis (link).

It is in the context of these reflections about the ethics of technology that I was interested to learn of an academic and policy field in Europe called “responsible innovation”. This is a network of academics, government officials, foundations, and non-profit organizations working together to try to induce more directionality in technological change (innovation). René von Schomberg and Jonathan Hankins’s recently published volume International Handbook on Responsible Innovation: A Global Resource offers an in-depth exposition of the thinking, research, and policy advocacy that this network has accumulated. A key actor in the advancement of this field has been the Bassetti Foundation (link) in Milan, which has made the topic of responsible innovation central to its mission for several decades. The Journal of Responsible Innovation provides a look at continuing research in this field.

The primary locus of discussion and application in the field of RRI has been within the EU. There is not much evidence of involvement by United States actors in this movement, though the Virtual Institute of Responsible Innovation at Arizona State University has received support from the US National Science Foundation (link).

Von Schomberg describes the scope and purpose of the RRI field in these terms:

Responsible Research and Innovation is a transparent, interactive process by which societal actors and innovators become mutually responsive to each other with a view to the (ethical) acceptability, sustainability and societal desirability of the innovation process and its marketable products (in order to allow a proper embedding of scientific and technological advances in our society). (2)

The definition of this field overlaps quite a bit with the philosophy and ethics of technology, but the two are not synonymous. For one thing, the explicit goal of RRI is to help provide direction to the social, governmental, and business processes driving innovation. And for another, the idea of innovation is not exactly the same as “technology change”. There are social and business innovations that fall within the scope of the effort — for example, new forms of corporate management or new kinds of financial instruments — but that do not count as technological innovations.

Von Schomberg has been a leading thinker within this field, and his contributions have helped to set the agenda for the movement. In his contribution to the volume he identifies six deficits in current innovation policy in Europe (all drawn from chapter two of the volume):

  1. Exclusive focus on risk and safety issues concerning new technologies under governmental regulations
  2. Market deficits in delivering on societally desirable innovations
  3. Aligning innovations with broadly shared public values and expectations
  4. A focus on the responsible development of technology and technological potentials rather than on responsible innovations
  5. A lack of open research systems and open scholarship as a necessary, but not sufficient condition for responsible innovation
  6. Lack of foresight and anticipative governance for the alternative shaping of innovation in sectors

Each of these statements involves very complex ideas about society-government-corporate relationships, and we may well come to judge that some of the recommendations made by von Schomberg are more convincing than others. But the clarity of this statement of the priorities and concerns of the RRI movement is enormously valuable as a way of advancing debate on the issues.

The examples that von Schomberg and other contributors discuss largely have to do with large innovations that have sparked significant public discussion and opposition — nuclear power, GMO foods, nanotechnology-based products. These examples focus attention on the later stages of scientific and technological development, when a technology reaches the point of being introduced to the public. But much technological innovation takes place at a much more mundane level — consumer electronics and software, enhancements of solar technology, improvements in electric vehicle technology, and digital personal assistants (Alexa, Siri), to name a few.

A defining feature of the RRI field is the explicit view that innovation is not inherently good or desirable (for example, in the contribution by Luc Soete in the volume). Contrary to the assumptions of many government economic policy experts, the RRI network is unified in criticism of the idea that innovation is always or usually productive of economic growth and employment growth. These observers argue instead that the public should have a role in deciding which technological options ought to be pursued, and which should not.

In reading the programmatic statements of purpose offered in the volume, it sometimes seems that there is a tendency to exaggerate the degree to which scientific and technological innovation is (or should be) a directed and collectively controlled process. The movement seems to undervalue the important role that creativity and invention play in human freedom and fulfillment. It is an important moral fact that individuals have extensive liberties concerning the ways in which they use their talents, and the presumption needs to be in favor of their right to do so without coercive interference. Much of what goes on in the search for new ideas, processes, and products falls properly on the side of liberty rather than socially regulated activity, and the proper relation of social policy to these activities seems to be one of respect for the freedom and creativity of the innovator rather than a prescriptive and controlling one. (Of course some regulation and oversight is needed, based on assessments of risk and harm; but von Schomberg and others dismiss this moral principle as too limited.)

It sometimes seems as though the contributors slide too quickly from the field of government-funded research and development (where the public has a plain interest in “directing” the research at some level), to the whole ecology of innovation and discovery, whether public, corporate, or academic. As noted above, von Schomberg considers the governmental focus on harm and safety to be the “first deficit” — in other words, an insufficient basis for “guiding innovation”. In contrast, he wants to see public mechanisms tasked with “redirecting” technology innovations and industries. However, much innovation is the result of private initiative and funding, and it seems that this field appropriately falls outside of prescription by government (beyond normal harm-based regulatory oversight). Von Schomberg uses the phrase “a proper embedding of scientific and technological advances in society”; but this seems to be a worrisome overreach, in that it seems to imply that all scientific and technology research should be guided and curated by a collective political process.

This suggests that a more specific description of the goals of the movement would be helpful. Here is one possible specification:

  • Require government agencies to justify the funding and incentives that they offer in support of technology innovation based on an informed assessment of the public’s preferences;
  • Urge corporations to adopt standards to govern their own internal innovation investments to conform to acknowledged public concerns (environmental sustainability, positive contributions to health and safety of citizens and consumers, …);
  • Urge scientists and researchers to engage in public discussion of their priorities in scientific and technological research;
  • Create venues for open and public discussion of major technological choices facing society in the current century, leading to more articulate understanding of priorities and risks.

There is an interesting parallel here with the Japanese government’s efforts in the 1980s to guide investment and research and development resources into the highest priority fields to advance the Japanese economy. The US National Research Council study, 21st Century Innovation Systems for Japan and the United States: Lessons from a Decade of Change: Report of a Symposium (2009) (link), provides an excellent review of the strategies adopted by the United States and Japan in their efforts to stimulate technology innovation in chip production and high-end computers from the 1960s to the 1990s. These efforts were guided entirely by the aim of maintaining commercial and economic advantage in the global marketplace. Jason Owen-Smith addresses the question of the role of US research universities as sites of technological research in Research Universities and the Public Good: Discovery for an Uncertain Future (link).

The “responsible research and innovation” (RRI) movement in Europe is a robust effort to pose the question, how can public values be infused into the processes of technology innovation that have such a massive potential effect on public welfare? It would seem that a major aim of the RRI network is to help to inform and motivate commitments by corporations to principles of responsible innovation within their definitions of corporate social responsibility, which is unmistakably needed. It is worthwhile for U.S. policy experts and technology ethicists alike to pay attention to these debates in Europe, and the International Handbook on Responsible Innovation is an excellent place to begin.

Ethical principles for assessing new technologies

Technologies and technology systems have deep and pervasive effects on the human beings who live within their reach. How do normative principles and principles of social and political justice apply to technology? Is there such a thing as “the ethics of technology”?

There is a reasonably active literature on questions that sound a lot like these. (See, for example, the contributions included in Winston and Edelbach, eds., Society, Ethics, and Technology.) But all too often the focus narrows too quickly to ethical issues raised by a particular example of contemporary technology — genetic engineering, human cloning, encryption, surveillance, and privacy, artificial intelligence, autonomous vehicles, and so forth. These are important questions; but it is also possible to ask more general questions as well, about the normative space within which technology, private activity, government action, and the public live together. What principles allow us to judge the overall justice, fairness, and legitimacy of a given technology or technology system?

There is an overriding fact about technology that needs to be considered in every discussion of the ethics of technology. It is a basic principle of liberal democracy that individual freedom and liberty should be respected. Individuals should have the right to act and create as they choose, subject to something like Mill’s harm principle. The harm principle holds that liberty should be restricted only when the activity in question imposes harm on other individuals. Applied to the topic of technology innovation, we can derive a strong principle of “liberty of innovation and creation” — individuals (and their organizations, such as business firms) should have a presumptive right to create new technologies constrained only by something like the harm principle.

Often we want to go beyond this basic principle of liberty to ask what the good and bad of technology might be. Why is technological innovation a good thing, all things considered? And what considerations should we keep in mind as we consider legitimate regulations or limitations on technology?

Consider three large principles that have emerged in other areas of social and political ethics as a basis for judging the legitimacy and fairness of a given set of social arrangements:

A. Technologies should contribute to some form of human good, some activity or outcome that is desired by human beings — health, education, enjoyment, pleasure, sociality, friendship, fitness, spirituality, …

B. Technologies ought to be consistent with the fullest development of the human capabilities and freedoms of the individuals whom they affect. [Or stronger: “promote the fullest development …”]

C. Technologies ought to have population effects that are fair, equal, and just.

The first principle attempts to address the question, “What is technology good for? What is the substantive moral good that is served by technology development?” The basic idea is that human beings have wants and needs, and contributing to their ability to fulfill these wants is itself a good thing (if in so doing other greater harms are not created as well). This principle captures what is right about utilitarianism and hedonism — the inherent value of human happiness and satisfaction. This means that entertainment and enjoyment are legitimate goals of technology development.

The second principle links technology to the “highest good” of human wellbeing — the full development of human capabilities and freedoms. As is evident, the principle offered here derives from Amartya Sen’s theory of capabilities and functionings, expressed in Development as Freedom. This principle recalls Mill’s distinction between higher and lower pleasures:

Mill always insisted that the ultimate test of his own doctrine was utility, but for him the idea of the greatest happiness of the greatest number included qualitative judgements about different levels or kinds of human happiness. Pushpin was not as good as poetry; only Pushkin was…. Cultivation of one’s own individuality should be the goal of human existence. (J.S. McClelland, A History of Western Political Thought: 454)

The third principle addresses the question of fairness and equity. Thinking about justice has evolved a great deal in the past fifty years, and one thing that emerges clearly is the intimate connection between injustice and invidious discrimination — even if unintended. Social institutions that arbitrarily assign significantly different opportunities and life outcomes to individuals based on characteristics such as race, gender, income, neighborhood, or religion are unfair and unjust, and need to be reformed. This approach derives as much from current discussions of racial health disparities as it does from philosophical theories along the lines of Rawls and Sen.

On these principles a given technology can be criticized, first, if it makes no positive contribution to the things that make people happy or satisfied; second, if it has the effect of stunting the development of human capabilities and freedoms; and third, if it has discriminatory effects on quality of life across the population it affects.

One important puzzle facing the ethics of technology is a question about the intended audience of such a discussion. We are compelled to ask, to whom is a philosophical discussion of the normative principles that ought to govern our thinking about technology aimed? Whose choices, actions, and norms are we attempting to influence? There appear to be several possible answers to this question.

Corporate ethics. Entrepreneurs and corporate boards and executives have an ethical responsibility to consider the impact of the technologies that they introduce into the market. If we believe that codes of corporate ethics have any real effect on corporate decision-making, then we need to have a basis in normative philosophy for a relevant set of principles that should guide business decision-making about the creation and implementation of new technologies by businesses. A current example is the use of facial recognition for the purpose of marketing or store security; does a company have a moral obligation to consider the negative social effects it may be promoting by adopting such a technology?

Governments and regulators. Government has an overriding responsibility to preserve and enhance the public good and to minimize harmful effects of private activities. This is the fundamental justification for government regulation of industry. Since various technologies have the potential of creating harms for some segments of the public, it is legitimate for government to enact regulatory systems to prevent reckless or unreasonable levels of risk. Government also has a responsibility for ensuring a fair and just environment for all citizens, and for enacting policies that serve to eliminate inequalities based on discriminatory social institutions. So here too governments have a role in regulating technologies, and a careful study of the normative principles that should govern our thinking about the fairness and justice of technologies is relevant to this process of government decision-making as well.

Public interest advocacy groups. One way in which important social issues can be debated and sometimes resolved is through the work of well-organized advocacy groups such as the Union of Concerned Scientists, the Sierra Club, or Greenpeace. Organizations like these are in a position to argue in favor of or against a variety of social changes, and raising concerns about specific kinds of technologies certainly falls within this scope. There are only a small number of grounds for this kind of advocacy: the innovation will harm the public, the innovation will create unacceptable hidden costs, or the innovation raises unacceptable risks of unjust treatment of various groups. In order to make the latter kind of argument, the advocacy group needs to be able to articulate a clear and justified argument for its position about “unjust treatment”.

The public. Citizens themselves have an interest in being able to make normative judgments about new technologies as they arise. “This technology looks as though it will improve life for everyone and should be favored; that technology looks as though it will create invidious and discriminatory sets of winners and losers and should be carefully regulated.” But for citizens to have a basis for making judgments like these, they need to have a normative framework within which to think and reason about the social role of technology. Public discussion of the ethical principles underlying the legitimacy and justice of technology innovations will deepen and refine these normative frameworks.

Considered as proposed here, the topic of “ethics of technology” is part of a broad theory of social and political philosophy more generally. It invokes some of our best reasoning about what constitutes the human good (fulfillment of capabilities and freedoms) and about what constitutes a fair social system (elimination of invidious discrimination in the effects of social institutions on segments of population). Only when we have settled these foundational questions are we able to turn to the more specific issues often discussed under the rubric of the ethics of technology.

The functionality of artifacts

We think of artifacts as being “functional” in a specific sense: their characteristics are well designed and adjusted for their “intended” use. Sometimes this is because of the explicit design process through which they were created, and sometimes it is the result of a long period of small adjustments by artisan-producers and users who recognize a potential improvement in shape, material, or internal workings that would lead to superior performance. Jon Elster described these processes in his groundbreaking 1983 book, Explaining Technical Change: A Case Study in the Philosophy of Science.

Here is how I described the gradual process of refinement of technical practice with respect to artisanal winegrowing in a 2009 post (link):

First, consider the social reality of a practice like wine-making. Pre-modern artisanal wine makers possess an ensemble of techniques through which they grow grapes and transform them into wine. These ensembles are complex and developed; different wine “traditions” handle the tasks of cultivation and fermentation differently, and the results are different as well (fine and ordinary burgundies, the sweet gewurztraminers of Alsace versus Germany). The novice artisan doesn’t reinvent the art of winemaking; instead, he/she learns the techniques and traditions of the elders. But at the same time, the artisan wine maker may also introduce innovations into his/her practice — a wrinkle in the cultivation techniques, a different timing in the fermentation process, the introduction of a novel ingredient into the mix.

Over time the art of grape cultivation and wine fermentation improves.

But in a way this expectation of “artifact functionality” is too simple and direct. In the development of a technology or technical practice there are multiple actors who are in a position to influence the development of the outcome, and they often have divergent interests. These differences of interests may lead to substantial differences in performance for the technology or technique. Technologies reflect social interests, and this is as evident in the history of technology as it is in the current world of high tech. In the winemaking case, for example, landlords may have interests that favor dense planting, whereas the wine maker may favor more sparse planting because of the superior taste this pattern creates in the grape. More generally, the owner’s interest in sales and profitability exerts a pressure on the characteristics of the product that runs contrary to the interest of the artisan-producer who gives primacy to the quality of the product, and both may have interests that are somewhat inconsistent with the broader social good.

Imagine the situation that would result if a grain harvesting machine were continually redesigned by both the profit-seeking landowner and the agricultural workers. Innovations that are favorable to enhancing profits may be harmful to the safety and welfare of agricultural workers, and vice versa. So we might imagine a see-saw of technological development, as first the landowner and then the worker gains more influence over the development of the technology.

As an undergraduate at the University of Illinois in the late 1960s I heard the radical political scientist Michael Parenti tell just such a story about his father’s struggle to maintain artisanal quality in the Italian bread he baked in New York City in the 1950s. Here is an online version of the story (link). Michael Parenti’s story begins like this:

Years ago, my father drove a delivery truck for the Italian bakery owned by his uncle Torino. When Zi Torino returned to Italy in 1956, my father took over the entire business. The bread he made was the same bread that had been made in Gravina, Italy, for generations. After a whole day standing, it was fresh as ever, the crust having grown hard and crisp while the inside remained soft, solid, and moist. People used to say that our bread was a meal in itself…. 

Pressure from low-cost commercial bread companies forced his father into more and more cost-saving adulteration of the bread. And the story ends badly:

But no matter what he did, things became more difficult. Some of our old family customers complained about the change in the quality of the bread and began to drop their accounts. And a couple of the big stores decided it was more profitable to carry the commercial brands.

Not long after, my father disbanded the bakery and went to work driving a cab for one of the big taxi fleets in New York City. In all the years that followed, he never mentioned the bread business again.

Parenti’s message to activist students in the 1960s was stark: this is the logic of capitalism at work.

Of course market pressures do not always lead to the eventual ruin of the products we buy; there is also an economic incentive, created by consumers who favor higher performance and more features, that leads businesses to improve their products. So the dynamic that ruined Michael Parenti’s father’s bread is only one direction that market competition can take. The crucial point is this: there is nothing in the development of technology and technique that guarantees outcomes that are more and more favorable for the public.

Hegel on labor and freedom

Hegel provided a powerful conception of human beings in the world and a rich conception of freedom. Key to that conception is the idea of self-creation through labor. Hegel had an “aesthetic” conception of labor: human beings confront the raw given of nature and transform it through intelligent effort into things they imagine will satisfy their needs and desires.

Alexandre Kojève’s reading of Hegel is especially clear on Hegel’s conception of labor and freedom. This is provided in Kojève’s analysis of the Master-Slave section of Hegel’s Phenomenology in his Introduction to the Reading of Hegel. The key idea is expressed in these terms:

The product of work is the worker’s production. It is the realization of his project, of his idea; hence, it is he that is realized in and by this product, and consequently he contemplates himself when he contemplates it…. Therefore, it is by work, and only by work, that man realizes himself objectively as man. (Kojève, Introduction to the Reading of Hegel)

It seems to me that this framework of thought provides an interesting basis for a philosophy of technology as well. We might think of technology as collective and distributed labor: the processes through which human beings and organizations, through intelligence and initiative, collectively transform the world around them to better satisfy human needs. Labor and technology are emancipating and self-creating; they help to embody the conditions of freedom.

However, this assessment is only one side of the issue. Technologies are created for a range of reasons by a heterogeneous collection of actors: generating profits, buttressing power relations, serving corporate and political interests. It is true that new technologies often serve to extend the powers of the human beings who use them, or to satisfy their needs and wants more fully and efficiently. Profit motives and the market help to ensure that this is true to some extent; technologies and products need to be “desired” if they are to be sold and to generate profits for the businesses that produce them. But given the conflicts of interest that exist in human society, technologies also serve to extend the capacity of some individuals and groups to wield power over others.

This means that there is a dark side to labor and technology as well. There is the labor of un-freedom. Not all labor allows the worker to fulfill him- or herself through free exercise of talents. Instead the wage laborer is regulated by the time clock and the logic of cost reduction. This constitutes Marx’s most fundamental critique of capitalism, as a system of alienation and exploitation of the worker as a human being. Here are a few paragraphs on alienated labor from Marx’s Economic and Philosophical Manuscripts:

The worker becomes all the poorer the more wealth he produces, the more his production increases in power and size. The worker becomes an ever cheaper commodity the more commodities he creates. The devaluation of the world of men is in direct proportion to the increasing value of the world of things. Labor produces not only commodities; it produces itself and the worker as a commodity – and this at the same rate at which it produces commodities in general.

This fact expresses merely that the object which labor produces – labor’s product – confronts it as something alien, as a power independent of the producer. The product of labor is labor which has been embodied in an object, which has become material: it is the objectification of labor. Labor’s realization is its objectification. Under these economic conditions this realization of labor appears as loss of realization for the workers; objectification as loss of the object and bondage to it; appropriation as estrangement, as alienation.

So much does the labor’s realization appear as loss of realization that the worker loses realization to the point of starving to death. So much does objectification appear as loss of the object that the worker is robbed of the objects most necessary not only for his life but for his work. Indeed, labor itself becomes an object which he can obtain only with the greatest effort and with the most irregular interruptions. So much does the appropriation of the object appear as estrangement that the more objects the worker produces the less he can possess and the more he falls under the sway of his product, capital.

All these consequences are implied in the statement that the worker is related to the product of labor as to an alien object. For on this premise it is clear that the more the worker spends himself, the more powerful becomes the alien world of objects which he creates over and against himself, the poorer he himself – his inner world – becomes, the less belongs to him as his own. It is the same in religion. The more man puts into God, the less he retains in himself. The worker puts his life into the object; but now his life no longer belongs to him but to the object. Hence, the greater this activity, the more the worker lacks objects. Whatever the product of his labor is, he is not. Therefore, the greater this product, the less is he himself. The alienation of the worker in his product means not only that his labor becomes an object, an external existence, but that it exists outside him, independently, as something alien to him, and that it becomes a power on its own confronting him. It means that the life which he has conferred on the object confronts him as something hostile and alien.

So does labor fulfill freedom or create alienation? Likewise, does technology emancipate and fulfill us, or does it enthrall and disempower us? Marx’s answer to the first question is that it does both, depending on the social relations within which it is defined, managed, and controlled.

It would seem that we can answer the second question for ourselves, in much the same terms. Technology both extends freedom and constricts it. It is indeed true that technology can extend human freedom and realize human capacities. The use of technology and science in agriculture means that only a small percentage of people in advanced countries are farmers, and those who remain enjoy a high standard of living compared to peasants of the past. Communication and transportation technologies create new possibilities for education, personal development, and self-expression. The enhancements to economic productivity created by technological advances have permitted a huge increase in the wellbeing of ordinary people in the past century — a fact that permits us to pursue the things we care about more freely. But new technologies also can be used to control people, to monitor their thoughts and actions, and to wage war against them. More insidiously, new technologies may “alienate” us in new ways — make us less social, less creative, and less independent of mind and thought.

So it seems clear on its face that technology is both favorable to the expansion of freedom and the exercise of human capacities, and unfavorable. It is the social relations through which technology is exercised and controlled that make the primary difference in which effect is more prominent.

Turing’s journey

A recent post comments on the value of biography as a source of insight into history and thought. Currently I am reading Andrew Hodges’ Alan Turing: The Enigma (1983), which I am finding fascinating both for its portrayal of the evolution of a brilliant and unconventional mathematician and for the honest effort Hodges makes to describe Turing’s sexual evolution and the tragedy in which it eventuated. Hodges makes a serious effort to give the reader some understanding of Turing’s important contributions, including his enormously important “computable numbers” paper. (Here is a nice discussion of computability in the Stanford Encyclopedia of Philosophy; link.) The book also offers a reasonably technical account of the Enigma code-breaking process.

Hilbert’s mathematical imagination plays an important role in Turing’s development. Hilbert’s speculation that every mathematical statement would turn out to be provable or refutable turned out to be wrong: Gödel demonstrated the incompleteness of arithmetic, and Turing’s computable numbers paper (along with Church’s work) established the undecidability of the Entscheidungsproblem. But it was Hilbert’s formulation of the idea that permitted the precise and conclusive refutations that came later. (Here is Richard Zach’s account in the Stanford Encyclopedia of Philosophy of Hilbert’s program; link.)

And then there were the machines. I had always thought of the Turing machine as a pure thought experiment designed to give specific meaning to the idea of computability. It has been eye-opening to learn of the innovative and path-breaking work that Turing did at Bletchley Park, Bell Labs, and elsewhere in developing real computational machines. Turing’s development of real computing machines and his invention of the activity of “programming” (“construction of tables”) show his contributions to digital computing to be much more concrete and technical than I had previously understood. His work late in the war on the difficult problem of encrypting speech for secure telephone conversation was also very interesting and innovative. Further, his understanding of the priority of creating a technology that would support “random access memory” was especially prescient. Here is Hodges’ summary of Turing’s view in 1947:

Considering the storage problem, he listed every form of discrete store that he and Don Bayley had thought of, including film, plugboards, wheels, relays, paper tape, punched cards, magnetic tape, and ‘cerebral cortex’, each with an estimate, in some cases obviously fanciful, of access time, and of the number of digits that could be stored per pound sterling. At one extreme, the storage could all be on electronic valves, giving access within a microsecond, but this would be prohibitively expensive. As he put it in his 1947 elaboration, ‘To store the content of an ordinary novel by such means would cost many millions of pounds.’ It was necessary to make a trade-off between cost and speed of access. He agreed with von Neumann, who in the EDVAC report had referred to the future possibility of developing a special ‘Iconoscope’ or television screen, for storing digits in the form of a pattern of spots. This he described as ‘much the most hopeful scheme, for economy combined with speed.’ (403)

These contributions are no doubt well known by experts on the history of computing. But for me it was eye-opening to learn how directly Turing was involved in the design and implementation of various automatic computing engines, including the British ACE machine itself at the National Physical Laboratory (link). Here is Turing’s description of the evolution of his thinking on this topic, extracted from a lecture in 1947:

Some years ago I was researching on what might now be described as an investigation of the theoretical possibilities and limitations of digital computing machines. I considered a type of machine which had a central mechanism and an infinite memory which was contained on an infinite tape. This type of machine appeared to be sufficiently general. One of my conclusions was that the idea of a ‘rule of thumb’ process and a ‘machine process’ were synonymous. The expression ‘machine process’ of course means one which could be carried out by the type of machine I was considering…. Machines such as the ACE may be regarded as practical versions of this same type of machine. There is at least a very close analogy. (399)

At the same time his clear logical understanding of the implications of a universal computing machine was genuinely visionary. He was evangelical in his advocacy of the goal of creating a machine with a minimalist and simple architecture where all the complexity and specificity of the use of the machine derives from its instructions (programming), not its specialized hardware.
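The idea that all of the machine’s complexity lives in its instruction table rather than in its hardware can be made concrete with a toy interpreter. The sketch below is a modern illustration in Python, not Turing’s own formalism; the function names and the binary-increment table are my own constructions.

```python
# A minimal sketch of a Turing machine: a fixed, simple mechanism whose
# entire behavior is determined by its instruction table (the "program").

def run_turing_machine(table, tape, state="start", pos=0, max_steps=10_000):
    """Execute the machine described by `table`:
    {(state, symbol): (new_symbol, move, new_state)}, move in {-1, 0, +1}.
    The mechanism below never changes; only the table does."""
    tape = dict(enumerate(tape))  # sparse tape; blank cells read as '_'
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(pos, "_")
        new_symbol, move, state = table[(state, symbol)]
        tape[pos] = new_symbol
        pos += move
    return "".join(tape[i] for i in sorted(tape)).strip("_")

# An instruction table for binary increment: scan right to the end of the
# number, then add 1 with a carry propagating leftward.
increment = {
    ("start", "0"): ("0", +1, "start"),
    ("start", "1"): ("1", +1, "start"),
    ("start", "_"): ("_", -1, "carry"),
    ("carry", "1"): ("0", -1, "carry"),
    ("carry", "0"): ("1", -1, "done"),
    ("carry", "_"): ("1", -1, "done"),
    ("done", "0"): ("0", -1, "done"),
    ("done", "1"): ("1", -1, "done"),
    ("done", "_"): ("_", 0, "halt"),
}

print(run_turing_machine(increment, "1011"))  # 1011 + 1 = 1100
```

The `run_turing_machine` mechanism stays fixed; swapping in a different table yields an entirely different computation, which is exactly the architectural point Turing was pressing.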

Also interesting is the fact that Turing had a literary impulse (not often exercised), and wrote at least one semi-autobiographical short story about a sexual encounter. Only a few pages survive. Here is a paragraph quoted by Hodges:

Alec had been working rather hard until two or three weeks before. It was about interplanetary travel. Alec had always been rather keen on such crackpot problems, but although he rather liked to let himself go rather wildly to newspapermen or on the Third Programme when he got the chance, when he wrote for technically trained readers, his work was quite sound, or had been when he was younger. This last paper was real good stuff, better than he’d done since his mid twenties when he had introduced the idea which is now becoming known as ‘Pryce’s buoy’. Alec always felt a glow of pride when this phrase was used. The rather obvious double-entendre rather pleased him too. He always liked to parade his homosexuality, and in suitable company Alec could pretend that the word was spelt without the ‘u’. It was quite some time now since he had ‘had’ anyone, in fact not since he had met that soldier in Paris last summer. Now that his paper was finished he might justifiably consider that he had earned another gay man, and he knew where he might find one who might be suitable. (564)

The passage is striking for several reasons; but most obviously, it brings together the two leading themes of his life, his scientific imagination and his sexuality.

This biography of Turing reinforces for me the value of the genre more generally. It gives the reader a better understanding of the important developments in mathematics and computing that Turing achieved; it presents a vivid view of the high stakes of the secret conflict in which Turing played a crucial part, the use of cryptographic advances to defeat the Nazi submarine threat; and it offers personal insight into the singular individual who developed into such a world-changing logician, engineer, and scientist.

Worker-owned enterprises as a social solution

image: Mondragon headquarters, Arrasate-Mondragon, Spain

Consider some of the most intractable problems we face in contemporary society: rising inequalities between rich and poor, rapid degradation of the environment, loss of control of their lives by the majority of citizens. It might be observed that these problems are the result of a classic conundrum that Marx identified 150 years ago: the separation of society into owners of the means of production and owners of labor power that capitalism depends upon has a logic that leads to bad outcomes. Marx referred to these bad outcomes as “immiseration”. The label isn’t completely accurate because it implies that workers are materially worse off from decade to decade. But what it gets right is the fact of “relative immiseration” — the fact that in almost all dimensions of quality of life the bottom 50% of the population in contemporary capitalism lags further and further from the quality of life enjoyed by the top 10%. And this kind of immiseration is getting worse. 

A particularly urgent contemporary version of these problems is the increasing pace of automation in various fields, leading to a dramatic reduction in the demand for labor. Intelligent machines replace human workers.

The central insight of Marx’s diagnosis of capitalism is couched in terms of property and power. There is a logic to private ownership of the means of production that predictably leads to certain kinds of outcomes, dynamics that Marx outlined in Capital in fine detail: impersonalization of work relations, squeezing of wages and benefits, replacement of labor with machines, and — Marx’s ultimate accusation — the creation of periodic crises. Marx anticipated crises of over-production and under-consumption; financial crises; and, if we layer in subsequent thinkers like Lenin, crises of war and imperialism.

At various times in the past century or two social reformers have looked to cooperatives and worker-owned enterprises as a solution for the problems of immiseration created by capitalism. Workers create value through their labor; they understand the technical processes of production; and it makes sense for them to share in the profits created through ownership of the enterprise. (A contemporary example is the Mondragon group of cooperatives in the Basque region of Spain.) The reasoning is that if workers own a share of the means of production, and if they organize the labor process through some kind of democratic organization, then we might predict that workers’ lives would be better, there would be less inequality, and people would have more control over the major institutions affecting their lives — including the workplace. Stephen Marglin’s 1974 article “What do bosses do?” lays out the logic of private versus worker ownership of enterprises (link). Marglin’s The Dismal Science: How Thinking Like an Economist Undermines Community explores the topic of worker ownership and management from the point of view of reinvigorating the bonds of community in contemporary society.

The logic is pretty clear. When an enterprise is owned by private individuals, their interest is in organizing the enterprise in such a way as to maximize private profits. This means choosing products that will find a large market at a favorable price, organizing the process efficiently, and reducing costs in inputs and labor. Further, the private owner has full authority to organize the labor process in ways that disempower workers. (Think Fordism versus the Volvo team-based production system.) This implies a downward pressure on wages and a preference for labor-saving technology, and it implies a more authoritarian workplace. So capitalist management implies stagnant wages, stagnant demand for labor, rising inequalities, and disagreeable conditions of work. 

When workers own the enterprise the incentives work differently. Workers have an interest in efficiency because their incomes are determined by the overall efficiency of the enterprise. Further, they have a wealth of practical and technical knowledge about production that promises to enhance effectiveness of the production process. Workers will deploy their resources and knowledge intelligently to bring products to the market. And they will organize the labor process in such a way that conforms to the ideal of humanly satisfying work.

The effect of worker-owned enterprises on economic inequalities is complicated. Within the firm the situation is fairly clear: the range of inequalities of income within the firm will depend on a democratic process, and this process will put a brake on excessive salary and wage differentials. And all members of the enterprise are owners; so wealth inequalities are reduced as well. In a mixed economy of private and worker-owned firms, however, the inequalities that exist will depend on both sectors; and the dynamics leading to extensive inequalities in today’s world would be found in the mixed economy as well. Moreover, some high-income sectors like finance seem ill suited to being organized as worker-owned enterprises. So it is unclear whether the creation of a meaningful sector of worker-owned enterprises would have a measurable effect on overall wage and wealth inequalities.

There are several ways in which cooperatives might fail as an instrument for progressive reform. First, it might be the case that cooperative management is inherently less efficient, effective, or innovative than capitalist management; so the returns to workers would potentially be lower in an inefficient cooperative than in a highly efficient capitalist enterprise. Marglin’s arguments in “What do bosses do?” give reasons to doubt this concern as a general feature of cooperatives; he argues that private management does not generally beat worker management at efficiency and innovation. Second, it might be that cooperatives are feasible at a small and medium scale of enterprise, but not feasible for large enterprises like a steel company or IBM. Greater size might magnify the difficulties of coordination and decision-making that are evident in even medium-size worker-owned enterprises. Third, it might be argued that cooperatives themselves are labor-expelling: cooperative members may have an economic incentive to refrain from adding workers in order to keep their own income and wealth shares higher. It would only make economic sense to add a worker when the marginal product of the next worker is greater than the average product; whereas a private owner will add workers whenever the marginal product is greater than the wage. So an economy in which there is a high proportion of worker-owned cooperatives may produce a high rate of unemployment among non-members. Finally, worker-owned enterprises will need access to capital; but this means that an uncontrollable portion of the surplus will flow out of the enterprise to the financial sector — itself a major cause of current rising inequalities. Profits will be jointly owned; but interest and finance costs will flow out of the enterprise to privately owned financial institutions.
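The contrast between the two hiring rules can be made vivid with a small numeric sketch. The production function and wage below are purely illustrative assumptions, not drawn from any data; only the comparative logic matters.

```python
# An illustrative comparison of cooperative vs. capitalist hiring rules.
# The production function and the wage are hypothetical assumptions.

def output(n):
    # rising, then diminishing, returns to labor (hypothetical)
    return 50 * n**2 - n**3

def cooperative_size(max_n=60):
    # members choose the workforce that maximizes income per member
    # (equivalently: stop adding workers once the marginal product
    # of the next entrant falls below the average product)
    return max(range(1, max_n + 1), key=lambda n: output(n) / n)

def capitalist_size(wage, max_n=60):
    # the owner chooses the workforce that maximizes profit, i.e. hires
    # as long as the marginal product exceeds the wage
    return max(range(max_n + 1), key=lambda n: output(n) - wage * n)

print(cooperative_size())    # 25 members: average product peaks here
print(capitalist_size(200))  # 31 workers: marginal product falls to the wage
```

Under these assumed numbers the cooperative settles at 25 members, where income per member peaks, while the capitalist firm employs 31 workers at a wage of 200, which illustrates the labor-expelling tendency described above.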

And what about automation? Would worker-owned cooperatives invest in substantial labor-replacing automation? Here there are several different scenarios to consider. The key economic fact is that automation reduces per-unit cost. This implies that in a situation of fixed market demand, automation of an enterprise requires either a reduction of the wage or a reduction of the size of the workforce. There appear to be only a few ways out of this box. If it is possible to expand the market for the product at a lower unit price, then an equal number of workers can be employed at an equal or higher individual return. If the market cannot be expanded sufficiently, then the enterprise must either shrink its workforce or accept falling per-worker returns; since the enterprise is democratically organized, neither choice is palatable.
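The box described above can be put into back-of-envelope arithmetic. All of the numbers here (demand, prices, productivity) are hypothetical, chosen only to display the structure of the choice.

```python
# A hypothetical back-of-envelope model of a cooperative facing automation.
# All figures are illustrative assumptions, not data.

def members_needed(demand, units_per_member):
    # workforce required to produce the quantity the market absorbs
    return demand // units_per_member

def income_per_member(demand, price, materials_per_unit, members):
    # surplus over material costs, shared equally among cooperative members
    return demand * (price - materials_per_unit) / members

# Before automation: 10,000 units sold at 5.0; each member produces 100 units
w0 = members_needed(10_000, 100)              # 100 members
y0 = income_per_member(10_000, 5.0, 2.0, w0)  # 300.0 per member

# Automation doubles productivity per member. With demand fixed, the
# cooperative either sheds half its members or gains nothing per member:
w1 = members_needed(10_000, 200)              # 50 members suffice
y1 = income_per_member(10_000, 5.0, 2.0, w1)  # 600.0 for those who remain
y1_all = income_per_member(10_000, 5.0, 2.0, w0)  # 300.0 if all 100 stay

# The escape hatch: a lower price expands the market enough to absorb everyone
w2 = members_needed(20_000, 200)              # back to 100 members
y2 = income_per_member(20_000, 4.0, 2.0, w2)  # 400.0 per member
```

With fixed demand, the cooperative chooses between half the membership at double the income or full membership with no gain; only the expanded-market scenario keeps everyone employed at a higher return.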

Worker management has implications for automation in a different way as well. Private owners will select forms of automation based solely on their overall effect on private profits; whereas worker-owned firms will select a form of automation taking the value of a satisfying workplace into account. So we can expect that the pathway of technical change and automation would be different in worker-owned firms than in privately owned firms.

In short, the economic and institutional realities of worker-owned enterprises are not entirely clear. But the concept is promising enough, and there are enough successful real-world examples, to encourage progressive thinkers to reconsider this form of economic organization.

(Here are several earlier posts on issues of institutional design that confront worker-owned enterprises (link, link). Noam Chomsky talks about the value of worker-owned cooperatives within capitalism here; link. And here is an interesting article by Henry Hansmann on the economics of worker-owned firms in the Yale Law Journal; link.)


Designing and managing large technologies

What is involved in designing, implementing, coordinating, and managing the deployment of a large new technology system in a real social, political, and organizational environment? Here I am thinking of projects like the development of the SAGE early warning system, the Affordable Care Act, or the introduction of nuclear power into the civilian power industry.

Tom Hughes described several such projects in Rescuing Prometheus: Four Monumental Projects That Changed the Modern World. Here is how he describes his focus in that book:

Telling the story of this ongoing creation since 1945 carries us into a human-built world far more complex than that populated earlier by heroic inventors such as Thomas Edison and by firms such as the Ford Motor Company. Post-World War II cultural history of technology and science introduces us to system builders and the military-industrial-university complex. Our focus will be on massive research and development projects rather than on the invention and development of individual machines, devices, and processes. In short, we shall be dealing with collective creative endeavors that have produced the communications, information, transportation, and defense systems that structure our world and shape the way we live our lives. (kl 76)

The emphasis here is on size, complexity, and multi-dimensionality. The projects that Hughes describes include the SAGE air defense system, the Atlas ICBM, Boston’s Central Artery/Tunnel project, and the development of ARPANET. Here is an encapsulated description of the SAGE process:

The history of the SAGE Project contains a number of features that became commonplace in the development of large-scale technologies. Transdisciplinary committees, summer study groups, mission-oriented laboratories, government agencies, private corporations, and systems-engineering organizations were involved in the creation of SAGE. More than providing an example of system building from heterogeneous technical and organizational components, the project showed the world how a digital computer could function as a real-time information-processing center for a complex command and control system. SAGE demonstrated that computers could be more than arithmetic calculators, that they could function as automated control centers for industrial as well as military processes. (kl 285)

Mega-projects like these require coordinated efforts in multiple areas — technical and engineering challenges, business and financial issues, regulatory issues, and numerous other areas where innovation, discovery, and implementation are required. In order to be successful, the organization needs to make realistic judgments about questions for which there can be no certainty — the future development of technology, the needs and preferences of future businesses and consumers, and the pricing structure that will exist for the goods and services of the industry in the future. And because circumstances change over time, the process needs to be able to adapt to important new elements in the planning environment.

There are multiple dimensions of projects like these. There is the problem of establishing the fundamental specifications of the project — capacity, quality, functionality. There is the problem of coordinating the efforts of a very large team of geographically dispersed scientists and engineers, whose work is deployed across various parts of the problem. There is the problem of fitting the cost and scope of the project into the budgetary envelope that exists for it. And there is the problem of adapting to changing circumstances during the period of development and implementation — new technology choices, new economic circumstances, significant changes in demand or social need for the product, large shifts in the costs of inputs into the technology. Obstacles in any of these diverse areas can lead to impairment or failure of the project.

Most of the cases mentioned here involve engineering projects sponsored by the government or the military. And the complexities of these cases are instructive. But there are equally complex cases that are implemented in a private corporate environment — for example, the development of next-generation space vehicles by SpaceX. And the same issues of planning, coordination, and oversight arise in the private sector as well.

The most obvious thing to note in projects like these — and many other contemporary projects of similar scope — is that they require large teams of people with widely different areas of expertise and an ability to collaborate across disciplines. So a key part of leadership and management is to solve the problem of securing coordination around an overall plan across the numerous groups; updating plans in face of changing circumstances; and ensuring that the work products of the several groups are compatible with each other. Moreover, there is the perennial challenge of creating arrangements and incentives in the work environment — laboratory, design office, budget division, logistics planning — that stimulate the participants to high-level creativity and achievement.

This topic is of interest for practical reasons — as a society we need to be confident in the effectiveness and responsiveness of the planning and development that goes into large projects like these. But it is also of interest for a deeper reason: the challenge of attributing rational planning and action to a very large and distributed organization at all. When an individual scientist or engineer leads a laboratory focused on a particular set of research problems, it is possible for that individual (with assistance from the program and lab managers hired for the effort) to keep the important scientific and logistical details in mind. It is an individual effort. But the projects described here are sufficiently complex that there is no individual leader who has the whole plan in mind. Instead, the “organizational intentionality” is embodied in the working committees, communications processes, and assessment mechanisms that have been established.

It is interesting to consider how students, both undergraduate and graduate, can come to have a better appreciation of the organizational challenges raised by large projects like these. Almost by definition, study of these problem areas in a traditional university curriculum proceeds from the point of view of a specialized discipline — accounting, electrical engineering, environmental policy. But the view provided from a single discipline is insufficient to give the student a rich understanding of the complexity of the real-world problems associated with projects like these. It is tempting to think that advanced courses for engineering and management students could be devised that make extensive use of detailed case studies, as well as simulation tools, allowing students to gain a more adequate understanding of what is needed to organize and implement a large new system. And interestingly enough, this is a place where the skills of humanists and social scientists are perhaps even more essential than the expertise of technology and management specialists. Historians and sociologists have a great deal to add to a student's understanding of these complex, messy processes.

Social construction of technical knowledge

After there was the sociology of knowledge (link), before there was a new sociology of knowledge (link), and more or less simultaneously with science and technology studies (link), there was Paul Rabinow’s excellent ethnography of the invention of the key tool in recombinant DNA research — PCR (polymerase chain reaction). Rabinow’s monograph Making PCR: A Story of Biotechnology appeared in 1996, after the first fifteen years of the revolution in biotechnology, and it provides a profound narrative of the intertwinings of theoretical science, applied bench work, and material economic interests, leading to substantial but socially imprinted discoveries and the development of a powerful new technology. Here is how Rabinow frames the research:

Making PCR

is an ethnographic account of the invention of PCR, the polymerase chain reaction (arguably the exemplary biotechnological invention to date), the milieu in which that invention took place (Cetus Corporation during the 1980s), and the key actors (scientists, technicians, and business people) who shaped the technology and the milieu and who were, in turn, shaped by them. (1)

This book focuses on the emergence of biotechnology, circa 1980, as a distinctive configuration of scientific, technical, cultural, social, economic, political, and legal elements, each of which had its own separate trajectory over the preceding decades. It examines the “style of life” or form of “life regulation” fashioned by the young scientists who chose to work in this new industry rather than pursue promising careers in the university world…. In sum, it shows how a contingently assembled practice emerged, composed of distinctive subjects, the site in which they worked, and the object they invented. (2)

There are several noteworthy features of these very exact descriptions of Rabinow’s purposes. The work is ethnographic; it proceeds through careful observation, interaction, and documentation of the intentionality and practices of the participants in the process. It is focused on actors of different kinds — scientists, lab technicians, lawyers, business executives, and others — whose interests, practices, and goals are distinctly different from each other’s. It is interested in accounting for how the “object” (PCR) came about, without any implication of technological or scientific inevitability. It highlights both contingency and heterogeneity in the process. The process of invention and development was a meandering one (contingency), and it involved a large group of heterogeneous influences (scientific, cultural, economic, …).

Legal issues come into this account because the fundamental questions — what is PCR and who invented it? — cannot be answered in narrowly technical or scientific terms. Instead, it was necessary to go through a process of practical bench-based development and patent law before both questions could finally be answered.
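Whatever the patent answer, the chemical idea behind PCR is arithmetically simple: each thermal cycle of the reaction roughly doubles the number of copies of the target DNA segment, so amplification is exponential in the number of cycles. A minimal sketch of that arithmetic (the per-cycle efficiency parameter is an illustrative assumption, not something from Rabinow's account):

```python
def pcr_copies(initial_copies: int, cycles: int, efficiency: float = 1.0) -> float:
    """Idealized PCR amplification: each cycle multiplies the copy count
    by (1 + efficiency). efficiency = 1.0 models perfect doubling;
    real reactions fall somewhat short of this."""
    return initial_copies * (1 + efficiency) ** cycles

# A single template molecule after 30 perfect cycles:
print(pcr_copies(1, 30))  # → 1073741824.0 copies (2**30, roughly a billion)
```

The exponential growth is why a trace amount of DNA becomes a chemically workable quantity after a few dozen cycles — and why perfecting the cycle chemistry at the bench mattered so much.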

A key part of Rabinow’s ethnographic finding is that the social configuration and setting of the Cetus laboratory was itself integral to the process leading to the successful development of PCR. Hierarchy is common in traditional scientific research settings (universities) — senior scientists at the top, junior technicians at the bottom. But Cetus had developed a local culture that was relatively un-hierarchical, and Rabinow believes this cultural feature was crucial to the success of the undertaking.

Cetus’s organizational structure was less hierarchical and more interdisciplinary than that found in either corporate pharmaceutical or academic institutions. In a very short time younger scientists could take over major control of projects; there was neither the extended postdoc and tenure probationary period nor time-consuming academic activities such as committees, teaching, and advising to divert them from full-time research. (36)

And later:

Cetus had been run with a high degree of organizational flexibility during its first decade. The advantages of such flexibility were a generally good working environment and a large degree of autonomy for the scientists. The disadvantages were a continuing lack of overall direction that resulted in a dispersal of both financial and human resources and in continuing financial losses. (143)

A critical part of the successful development of PCR techniques in Rabinow’s account was the highly skilled bench work of a group of lab technicians within the company (116 ff.). Ph.D. scientists and non-Ph.D. lab technicians collaborated well throughout the extended period during which the chemistry of PCR needed to be perfected; and Rabinow’s suggestion is that neither group by itself could have succeeded.

So some key ingredients in this story are familiar from the current wisdom of tech companies like Google and Facebook: let talented people follow their curiosity; use space (physical and social) to elicit strong positive collaboration; and don’t try to over-manage the process through a rigid authority structure.

But as Rabinow points out, the work at Cetus was not an anarchic process of smart people discovering things. Priorities were established to govern research directions, and there were sustained efforts to align research productivity with revenue growth (almost always unsuccessful, it must be said). Here is Rabinow’s concluding observation about the company and the knowledge environment:

Within a very short span of time some curious and wonderful reversals, orthogonal movements, began happening: the concept itself became an experimental system; the experimental system became a technique; the techniques became concepts. These rapidly developing variations and mutually referential changes of level were integrated into a research milieu, first at Cetus, then in other places, then, soon, in very many other places. These places began to resemble each other because people were building them to do so, but were often not identical. (169).

And, as other knowledge-intensive businesses from VisiCalc to Xerox to HP to Microsoft to Google have discovered, there is no magic formula for joining technical and scientific research to business success.

Marx’s thinking about technology


It sometimes seems as though there isn’t much new to say about Marx and his theories. But, as with any rich and prolific thinker, that’s not actually true. Two articles featured in the Routledge Great Economists series (link) are genuinely interesting. Both are deeply scholarly treatments of interesting aspects of the development of Marx’s thinking, and each sheds new light on the influences and thought processes through which some of Marx’s key ideas took shape. I will consider one of those articles here and leave the second, a consideration of Marx’s relationship to the physiocrats, for a future post.

Regina Roth’s “Marx on technical change in the critical edition” (link) is a tour de force of Marx scholarship. There are two aspects of this work that I found particularly worthwhile. The first is a detailed “map” of the work that has been done since the early twentieth century to curate and collate Marx’s documents and notes. This was an especially important effort because Marx himself rarely brought his work to publishable form; he wrote thousands of pages of notes and documents in preparation for many related lines of thought, and not all of those problem areas have been developed in the published corpus of Marx’s writings. Roth demonstrates a truly impressive grasp of the thousands of pages of materials included in the Marx-Engels-Gesamtausgabe (MEGA) and Marx-Engels Collected Works (MECW) collections, and she does an outstanding job of tracing several important lines of thought through published and unpublished materials. She notes that the MEGA collection is remarkably rich:

A second point I want to stress is that the MEGA offers more material than other editions, not only regarding the manuscripts mentioned above but also with other types of written material. If we look at the material gathered in the MEGA we find examples of several different levels of communication. We may think of manuscripts on a first level as witnessing the communication between the author with himself and with his potential readers. On a second level, his letters give us notice of what he talked about to the people around him. And, on a third level, there is the vast part of his legacy that documents Marx’s discourse with authors of his time: his excerpts, the books he read and his collections of newspaper cuttings. (1231)

Here is a table in which Roth correlates several important economic manuscripts in the two collections.

Careful study of these thousands of pages of manuscripts and notes is crucial, Roth implies, if we are to have a nuanced view of the evolution and logic of Marx’s thought.

They show, first of all, that Marx was never content with what he had written: he started five drafts of his first chapter, and added four fragments to the same subject, each of them with numerous changes within each text. (1228)

And study of these many versions, notes, and emendations shows something else as well: a very serious effort on Marx’s part to get his thinking right. He was not searching out the most persuasive or the simplest versions of some of his critical thoughts about capitalism; instead, he was trying to piece together the economic logic of this social-economic system in a way that made sense given the analytical tools at his disposal. Marx was not the dogmatic figure that he is sometimes portrayed to be.

There are many surprises in Roth’s study. The falling rate of profit? That’s Engels’ editorial summation rather than Marx’s finished conclusion! By comparing Marx’s original manuscripts with the posthumous published version of volume 3 of Capital, she finds that “Engels inserted the following sentence in the printed version [of Capital vol. 3]: ‘But in reality […] the rate of profit will fall in the long run’ ” (1233). In several important aspects she finds that Engels the editor was more definitive about the long-term tendencies of capitalism than Marx the author was willing to be. For example:

Therefore, Engels continued, this capitalist mode of production ‘is becoming senile and has further and further outlived its epoch.’ Marx did not give such a clear opinion with a view to the future of capitalism, at least not in Capital. (1233)

She notes also that Engels was anxious about Marx’s unwillingness to bring his rewriting and reconsideration of key theses to a close:

Shortly before the publication of Volume I of Capital, Engels worried: ‘I had really begun to suspect from one or two phrases in your last letter that you had again reached an unexpected turning-point which might prolong everything indefinitely.’ (1247)

The other important aspect of this article — the substantive goal of the piece — is Roth’s effort to reconstruct the development of Marx’s thinking about technology and technology change, the ways that capitalism interacts with technology, and the effects that Marx expected to emerge out of this complicated set of processes. But this requires careful study of the full corpus, not simply the contents of the published works.

To understand Marx’s views on technical change, his whole legacy, which is also comprised of numerous drafts, excerpts, letters, and so forth, must be considered. (1224)

In fact, the unpublished corpus has much more substantial commentary on technology and technical change than do the published works. “In Capital terms such as technical progress, technical change or simply technology turn up rarely” (1241).

Roth finds that Marx had a sustained interest in “the machinery question” — essentially, the history of mechanical invention and the role that machines play in the economic system of capitalism. He studied and annotated the writings of Peter Gaskell, Andrew Ure, and Charles Babbage, as well as many other writers on the technical details of industrial and mining practices; Roth mentions Robert Willis and James Nasmyth in particular.

The economic importance of technical change in Marx’s system lies in the fact that it presents the capitalist with the possibility of increasing “relative surplus value” by raising the productivity of labor (1241). But because technical innovation is generally capital-intensive (increasing the proportion of constant capital to variable capital, or labor), it tends to bring about a falling rate of profit (offset, as Roth demonstrates, by specific counteracting forces). So the capitalist is always under pressure to prop up the rate of profit, and more intensive exploitation of labor is one of the means available.
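The logic Roth reconstructs can be put in Marx's own accounting terms. With constant capital c, variable capital v, and surplus value s, the rate of profit is r = s / (c + v). If the rate of surplus value s/v is held fixed while mechanization raises the "organic composition" c/v, r must fall. A minimal sketch with purely illustrative numbers (not figures from Marx or Roth):

```python
def rate_of_profit(c: float, v: float, s: float) -> float:
    """Marx's rate of profit: surplus value over total capital advanced,
    r = s / (c + v), where c is constant capital (machinery, materials)
    and v is variable capital (wages)."""
    return s / (c + v)

# Hold the rate of surplus value s/v at 100% while mechanization
# raises the organic composition c/v from 2 to 4:
low_mech = rate_of_profit(c=200, v=100, s=100)    # 100/300 ≈ 0.333
high_mech = rate_of_profit(c=400, v=100, s=100)   # 100/500 = 0.200
```

The arithmetic makes the tendency visible, and it also makes clear why the counteracting forces matter: anything that raises s/v (more intensive exploitation of labor, for instance) can offset the effect of a rising c/v.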

In the discussion in the General Council [of the IWMA], Marx argued that machines had effects that turned out to be the opposite of what was expected: they prolonged the working day instead of shortening it; the proportion of women and children working in mechanized industries increased; labourers suffered from a growing intensity of labour and became more dependent on capitalists because they did not own the means of production any more …. (1246)

So technology change and capitalism are deeply intertwined; and there is nothing emancipatory about technology change in itself.
