The soft side of critical realism

Critical realism has appealed to a range of sociologists and political scientists, in part because of the legitimacy it lends to the study of social structures and organizations. However, many of the things sociologists study are not “things” at all, but rather subjective features of social experience — mental frameworks, identities, ideologies, value systems, knowledge frameworks. Is it possible to be a critical realist about “subjective” social experience and formations of consciousness? Here I want to argue in favor of a CR treatment of subjective experience and thought.

First, let’s recall what it means to be realist about something. It means to take a cognitive stance towards the formation that treats it as being independent from the concepts we use to categorize it. It is to postulate that there are facts about the formation that are independent from our perceptions of it or the ways we conceptualize it. It is to attribute to the formation a degree of solidity in the world, a set of characteristics that can be empirically investigated and that have causal powers in the world. It is to negate the slogan, “all that is solid melts into air” with regard to these kinds of formations. “Real” does not mean “tangible” or “material”; it means independent, persistent, and causal.  

 
So to be realist about values, cognitive frameworks, practices, or paradigms is to assert that these assemblages of mental attitudes and features have social instantiation, that they persist over time, and that they have causal powers within the social realm. By this definition, mental frameworks are perfectly real. They have visible social foundations — concrete institutions and practices through which they are transmitted and reproduced. And they have clear causal powers within the social realm.
A few examples will help make this clear.
Consider first the assemblage of beliefs, attitudes, and behavioral repertoires that constitute the race regime in a particular time and place. Children and adults from different racial groups in a region have internalized a set of ideas and behaviors about each other that are inflected by race and gender. These beliefs, norms, and attitudes can be investigated through a variety of means, including surveys and ethnographic observation. Through their behaviors and interactions with each other, individuals gain practice in their mastery of the regime, and they influence outcomes and future behaviors. They transmit and reproduce features of the race regime to peers and children. There is a self-reinforcing discipline to such an assemblage of attitudes and behaviors which shapes the behaviors and expectations of others, both internally and coercively. This formation has causal effects on the local society in which it exists, and it is independent from the ideas we have about it. It is, by these criteria, a real part of local society. (It is also a variable and heterogeneous reality, across time and space.) We can trace the sociological foundations of the formation within the population, the institutional arrangements through which minds and behaviors are shaped. And we can identify many social effects of specific features of regimes like this. (Here is an earlier post on the race regime of Jim Crow; link, link.)
 
Here is a second useful example — a knowledge and practice system like Six Sigma. This is a bundle of ideas about business management. It involves some fairly specific doctrines and technical practices. There are training institutions through which individuals become expert at Six Sigma. And there is a distributed group of expert practitioners across a number of companies, consulting firms, and universities who possess highly similar sets of knowledge, judgment, and perception.  This is a knowledge and practice community, with specific and identifiable causal consequences. 
 
These are two concrete examples. Many others could be offered — working-class solidarity, bourgeois modes of dress and manners, the social attitudes and behaviors of French businessmen, the norms of Islamic charity, the Protestant Ethic, Midwestern modesty.
So, indeed, it is entirely legitimate to be a critical realist about mental frameworks. Moreover, the realist who abjures the study of such frameworks as social realities is doomed to offer explanations with mysterious gaps. He or she will find large historical anomalies, where available structural causes fail to account for important historical outcomes.
Consider Marx and Engels’ words in the Communist Manifesto:

All fixed, fast-frozen relations, with their train of ancient and venerable prejudices and opinions, are swept away, all new-formed ones become antiquated before they can ossify. All that is solid melts into air, all that is holy is profaned, and man is at last compelled to face with sober senses his real conditions of life, and his relations with his kind.

This is an interesting riff on social reality, capturing both change and persistence, appearance and reality. A similar point of view is expressed in Marx’s theory of the fetishism of commodities: beliefs exist, they have social origins, and it is possible to demystify them on occasion by uncovering the distortions they convey of real underlying social relations. 
There is one more perplexing twist here for realists. Both structures and features of consciousness are real in their social manifestations. However, one goal of critical philosophy is to show how the mental structures of a given class or gender are in fact false consciousness. It is a true fact that British citizens in 1871 had certain ideas about the workings of contemporary capitalism. But it is an important function of critical theory to demonstrate that those beliefs were wrong, and to more accurately account for the underlying social relations they attempt to describe. And it is important to discover the mechanisms through which those false beliefs came into existence.

So critical realism must both identify real structures of thought in society and demystify these thought systems when they systematically falsify the underlying social reality. Decoding the social realities of patriarchy, racism, and religious bigotry is itself a key task for a critical social sciences.

Dave Elder-Vass is one of the few critical realists who have devoted attention to the reality of a subjective social thing, a system of norms. In The Causal Power of Social Structures: Emergence, Structure and Agency he tries to show how the idea of a “norm circle” helps explicate the objectivity, persistence, and reality of a socially embodied norm system. Here is an earlier post on E-V’s work (link).

 
 

Mechanisms according to analytical sociology

One of the distinguishing characteristics of analytical sociology is its insistence on the idea of causal mechanisms as the core component of explanation. Like post-positivists in other traditions, AS theorists specifically reject the covering-law model of explanation and argue for a “realist” understanding of causal relations and powers: a causal relationship between x and y exists only insofar as there exist one or more causal mechanisms generating y given the occurrence of x. Peter Hedström puts the point this way in Dissecting the Social:

A social mechanism, as defined here, is a constellation of entities and activities that are linked to one another in such a way that they regularly bring about a particular type of outcome. (kl 181)

A basic characteristic of all explanations is that they provide plausible causal accounts for why events happen, why something changes over time, or why states or events co-vary in time or space. (kl 207)

The core idea behind the mechanism approach is that we explain not by evoking universal laws, or by identifying statistically relevant factors, but by specifying mechanisms that show how phenomena are brought about. (kl 334)

A social mechanism, as here defined, describes a constellation of entities and activities that are organized such that they regularly bring about a particular type of outcome. (kl 342)

So far so good. But AS adds another requirement about causal mechanisms in the social realm that is less convincing: that the only real or credible mechanisms are those involving the actions of individual actors. In other words, causal action in the social world takes place solely at the micro level. This assumption is substantial, non-trivial, and seemingly dogmatic.

Sociological theories typically seek to explain social outcomes such as inequalities, typical behaviours of individuals in different social settings, and social norms. In such theories individuals are the core entities and their actions are the core activities that bring about the social-level phenomena that one seeks to explain. (kl 356)

Although the explanatory focus of sociological theory is on social entities, an important thrust of the analytical approach is that actors and actions are the core entities and activities of the mechanisms explaining such phenomena. (kl 383)

The theory should also explain action in intentional terms. This means that we should explain an action by reference to the future state it was intended to bring about. Intentional explanations are important for sociological theory because, unlike causalist explanations of the behaviourist or statistical kind, they make the act ‘understandable’ in the Weberian sense of the term. (kl 476)

Here is a table in which Hedström classifies different kinds of social mechanisms; significantly, all are at the level of actors and their mental states.

The problem with this “action-level” requirement on the nature of social mechanisms is that it rules out as a matter of methodology that there could be social causal processes that involve factors at higher social levels — organizations, norms, or institutions, for example. (For that matter, it also rules out the possibility that some individual actions might take place in a way that is inaccessible to conscious knowledge — for example, impulse, emotion, or habit.) And yet it is common in sociology to offer social explanations invoking causal properties of things at precisely these “meso” levels of the social world. For example:

Each of these represents a fairly ordinary statement of social causation in which a primary causal factor is an organization, an institutional arrangement, or a normative system.

It is true, of course, that such entities depend on the actions and minds of individuals. This is the thrust of ontological individualism (link, link): the social world ultimately depends on individuals in relation to each other and in relation to the modes of social formation through which their knowledge and action principles have been developed. But explanatory or methodological individualism does not follow from the truth of ontological individualism, any more than biological reductionism follows from the truth of physicalism. Instead, it is legitimate to attribute stable causal properties to meso-level social entities and to invoke those entities in legitimate social-causal explanations. Earlier arguments for meso-level causal mechanisms can be found here, here, and here.

This point about “micro-level dogmatism” leads me to believe that analytical sociology is unnecessarily rigid when it comes to causal processes in the social realm. Moreover, this rigidity leads it to be unreceptive to many approaches to sociology that are perfectly legitimate and insightful. It is as if someone proposed to offer a science of cooking but would only countenance statements at the level of organic chemistry. Such an approach would preclude the possibility of distinguishing different cuisines on the basis of the palette of spices and flavors that they use. By analogy, the many approaches to sociological research that proceed on the basis of an analysis of the workings of mid-level social entities and influences are excluded by the strictures of analytical sociology. Not all social research needs to take the form of the discovery of microfoundations, and reductionism is not the only scientifically legitimate strategy for explanation.

(The photo above of a moment from the Deepwater Horizon disaster is relevant to this topic, because useful accident analysis needs to invoke the features of organization that led to a disaster as well as the individual actions that produced the particular chain of events leading to the disaster. Here is an earlier post that explores this feature of safety engineering; link.)

Moral limits on war

World War II raised great issues of morality in the conduct of war. These were practical issues during the war, because that conflict approached “total war” — the use of all means against all targets to defeat the enemy. So the moral questions could not be evaded: are there compelling reasons of moral principle that make certain tactics in war completely unacceptable, no matter how efficacious they might be said to be?

As Michael Walzer made clear in Just and Unjust Wars: A Moral Argument with Historical Illustrations in 1977, we can approach two rather different kinds of questions when we inquire about the morality of war. First, we can ask whether a given decision to go to war is morally justified given its reasons and purposes. This brings us into the domain of the theory of just war: self-defense against aggression, and perhaps prevention of large-scale crimes against humanity. And second, we can ask whether the strategies and tactics chosen are morally permissible. This forces us to think about the moral distinction between combatant and non-combatant, the culpable and the innocent, and possibly the idea of military necessity. The principle of double effect comes into play here — the idea that unintended but predictable civilian casualties may be permissible if the intended target is a legitimate military target, and the unintended harms are not disproportionate to the value of the intended target.

We should also notice that there are two ways of approaching both issues — one on the basis of existing international law and treaty, and the other on the basis of moral theory. The first treats the morality of war as primarily a matter of convention, while the latter treats it as an expression of valued moral principles. There is some correspondence between the two approaches, since laws and treaties seek to embody shared norms about warfare. And there are moral reasons why states should keep their agreements, irrespective of the content. But the rationales of the two approaches are different.

Finally, there are two different kinds of reasons why a people or a government might care about the morality of its conduct of war. The first is prudential: “if we use this instrument, then others may use it against us in the future”. The convention outlawing the use of poison gas may fall in this category. So it may be argued that the conventions limiting the conduct of war are beneficial to all sides, even when there is a short-term advantage in violating the convention. The second is a matter of moral principle: “if we use this instrument, we will be violating fundamental normative ideals that are crucial to us as individuals and as a people”. This is a Kantian version of the morality of war: there are at least some issues that cannot be resolved based solely on consequences, but rather must be resolved on the basis of underlying moral principles and prohibitions. So executing hostages or prisoners of war is always and absolutely wrong, no matter what military advantages might ensue. Preserving the lives and well-being of innocents seems to be an unconditional moral duty in war. But likewise, torture is always wrong, not only because it is imprudent, but because it is fundamentally incompatible with treating people in our power in a way that reflects their fundamental human dignity.

The means of war-making chosen by the German military during World War II were egregious — for example, shooting hostages, murdering prisoners, performing medical experiments on prisoners, and unrestrained strategic bombing of London. But hard issues arose on the side of the alliance that fought against German aggression as well. Particularly hard cases during World War II were the campaigns of “strategic bombing” against cities in Germany and Japan, including the firebombing of Dresden and Tokyo. These decisions were taken in the context of fairly clear data showing that strategic bombing did not substantially impair the enemy’s ability to wage war industrially, and in the context of the fact that its primary victims were innocent civilians. Did the Allies make a serious moral mistake by making use of this tactic? Did innocent children and non-combatant adults pay the price in these most horrible ways of the decision to incinerate cities? Did civilian leaders fail to exercise sufficient control to prevent their generals from inflicting pet theories like the presumed efficacy of strategic bombing on whole urban populations?

 
And how about the decision to use atomic bombs against Hiroshima and Nagasaki? Were these decisions morally justified by the rationale that was offered — that they compelled surrender by Japan and thereby avoided tens of thousands of combatant deaths ensuing from invasion? Were two bombs necessary, or was the attack on Nagasaki literally a case of overkill? Did the United States make a fateful moral error in deciding to use atomic bombs to attack cities and the thousands of non-combatants who lived there?

These kinds of questions may seem quaint and obsolete in a time of drone strikes, cyber warfare, and renewed nuclear posturing. But they are not. As citizens we have responsibility for the acts of war undertaken by our governments. We need to be clear and insistent in maintaining that the use of the instruments of war requires powerful moral justification, and that there are morally profound reasons for demanding that war tactics respect the rights and lives of the innocent. War, we must never forget, is horrible.

Geoffrey Robertson’s Crimes Against Humanity: The Struggle for Global Justice poses these questions with particular pointedness. Also of interest is John Mearsheimer’s Conventional Deterrence.

The atomic bomb

Richard Rhodes’ history of the development of the atomic bomb, The Making of the Atomic Bomb, is now thirty years old. The book is crucial reading for anyone who has the slightest anxiety about the tightly linked, high-stakes world we live in in the twenty-first century. The narrative Rhodes provides of the scientific and technical history of the era is outstanding. But there are other elements of the story that deserve close thought and reflection as well.

One is the question of the role of scientists in policy and strategy decision making before and during World War II. Physicists like Bohr, Szilard, Teller, and Oppenheimer played crucial roles in the science, but they also played important roles in the formulation of wartime policy and strategy as well. Were they qualified for these roles? Does being a brilliant scientist carry over to being an astute and wise advisor when it comes to the large policy issues of the war and international policies to follow? And if not the scientists, then who? At least a certain number of senior policy advisors to the Roosevelt administration, international politics experts all, seem to have badly dropped the ball during the war — in ignoring the genocidal attacks on Europe’s Jewish population, for example. Can we expect wisdom and foresight from scientists when it comes to politics, or are they as blinkered as the rest of us on average?

A second and related issue is the moral question: do scientists have any moral responsibilities when it comes to the use, intended or otherwise, of the technologies they spawn? A particularly eye-opening part of the story Rhodes tells is the research undertaken within the Manhattan Project about the possible use of radioactive material as a poisonous weapon of war against civilians on a large scale. The topic seems to have arisen as a result of speculation about how the Germans might use radioactive materials against civilians in Great Britain and the United States. Samuel Goudsmit, scientific director of the US military team responsible for investigating German progress towards an atomic bomb following the Normandy invasion, refers to this concern in his account of the mission in Alsos (7). According to Rhodes, the idea was first raised within the Manhattan Project by Fermi in 1943, and was realistically considered by Groves and Oppenheimer. This seems like a clear case: no scientist should engage in research like this, research aimed at discovering the means of the mass poisoning of half a million civilians.

Leo Szilard played an exceptional role in the history of the quest for developing atomic weapons (link). He more than other physicists foresaw the implications of the possibility of nuclear fission as a foundation for a radically new kind of weapon, and his fear of German mastery of this technology made him a persistent and ultimately successful advocate for a major research and industrial effort towards creating the bomb. His recruitment of Albert Einstein as the author of a letter to President Roosevelt underlining the seriousness of the threat and the importance of establishing a full scale effort made a substantial difference in the outcome. Szilard was entirely engaged in efforts to influence policy, based on his understanding of the physics of nuclear fission; he was convinced very early that a fission bomb was possible, and he was deeply concerned that German physicists would succeed in time to permit the Nazis to use such a weapon against Great Britain and the United States. Szilard was a physicist who also offered advice and influence on the statesmen who conducted war policy in Great Britain and the United States.

Niels Bohr is an excellent example to consider with respect to both large questions (link). He was, of course, one of the most brilliant and innovative physicists of his generation, recognized with the Nobel Prize in 1922. He was also a man of remarkable moral courage, remaining in Copenhagen long after prudence would have dictated emigration to Britain or the United States. He was more articulate and outspoken than most scientists of the time about the moral responsibilities the physicists undertook through their research on atomic energy and the bomb. He was farsighted about the implications for the future of warfare created by a successful implementation of an atomic or thermonuclear bomb. Finally, he is exceptional, on a par with Einstein, in his advocacy of a specific approach to international relations in the atomic age, and was able to meet with both Roosevelt and Churchill to make his case. His basic view was that the knowledge of fission could not be suppressed, and that the Allies would be best served in the long run by sharing their atomic knowledge with the USSR and working towards an enforceable non-proliferation agreement. The meeting with Churchill went particularly badly, with Churchill eventually maintaining that Bohr should be detained as a security risk.

Here is the memorandum that Bohr wrote to President Roosevelt in 1944 (link). Bohr makes the case for public sharing of the scientific and technical knowledge each nation has gained about nuclear weapons, and the establishment of a regime among nations that precludes the development and proliferation of nuclear weapons. Here are a few key paragraphs from his memorandum to Roosevelt:

Indeed, it would appear that only when the question is raised among the united nations as to what concessions the various powers are prepared to make as their contribution to an adequate control arrangement, will it be possible for any one of the partners to assure himself of the sincerity of the intentions of the others.

Of course, the responsible statesmen alone can have insight as to the actual political possibilities. It would, however, seem most fortunate that the expectations for a future harmonious international co-operation, which have found unanimous expressions from all sides within the united nations, so remarkably correspond to the unique opportunities which, unknown to the public, have been created by the advancement of science.

These thoughts are not put forward in the spirit of high-minded idealism; they are intended to serve as sober, fact-based guides to a more secure future. So it is worth considering: do the facts about international behavior justify the recommendations?

In fact the world has settled on a hybrid set of approaches: the doctrine of deterrence based on mutual assured destruction, and a set of international institutions to which nations are signatories, intended to prevent or slow the proliferation of nuclear weapons. Another brilliant thinker and 2005 Nobel Prize winner, Thomas Schelling, provided the analysis that expresses the current theory of deterrence in his 1966 book Arms and Influence (link).

So who is closer to the truth when it comes to projecting the behavior of partially rational states and their governing apparatuses? My view is that the author of Micromotives and Macrobehavior has the more astute understanding of the logic of disaggregated collective action and the ways that a set of independent strategies aggregate to the level of organizational or state-level behavior. Schelling’s analysis of the logic of deterrence and the quasi-stability that it creates is compelling — perhaps more so than Bohr’s vision, which depends at critical points on voluntary compliance.
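Schelling’s central point in that book, that mild and unremarkable individual preferences can aggregate into a macro-level pattern no individual intended, can be sketched with a toy one-dimensional version of his famous segregation model. The threshold, the swap rule, and the numbers below are illustrative assumptions of mine, not Schelling’s own specification.

```python
import random

def is_unhappy(line, i, threshold):
    """An agent is unhappy if fewer than `threshold` of its immediate
    neighbours share its type."""
    neighbours = [line[k] for k in (i - 1, i + 1) if 0 <= k < len(line)]
    like = sum(1 for n in neighbours if n == line[i])
    return bool(neighbours) and like / len(neighbours) < threshold

def clustering(line):
    """Fraction of adjacent pairs with matching types (0 = perfectly mixed)."""
    pairs = list(zip(line, line[1:]))
    return sum(a == b for a, b in pairs) / len(pairs)

def schelling_1d(line, threshold=0.34, max_rounds=100, seed=0):
    """Each round, every agent flagged unhappy at the start of the round
    trades places with a randomly chosen agent (a crude 'move').  With
    threshold=0.34 an agent tolerates one unlike neighbour out of two:
    a mild preference against isolation, not a taste for segregation."""
    rng = random.Random(seed)
    line = list(line)
    for _ in range(max_rounds):
        unhappy = [i for i in range(len(line)) if is_unhappy(line, i, threshold)]
        if not unhappy:
            break
        for i in unhappy:
            j = rng.randrange(len(line))
            line[i], line[j] = line[j], line[i]
    return line

# A perfectly integrated line of 40 agents...
mixed = ['A', 'B'] * 20
print(clustering(mixed))   # 0.0: no same-type neighbours at all
settled = schelling_1d(mixed)
print(clustering(settled))  # well above zero: clusters have formed
```

Even though no agent objects to being in the minority among its neighbours, only to complete isolation, the line ends up visibly clustered: micromotives aggregating into macrobehavior.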

This judgment receives support from international relations scholars of the following generation as well. For example, in an extensive article published in 1981 (link) Kenneth Waltz argues that nuclear weapons have helped to make international peace more stable, and his argument turns entirely on the rational-choice basis of the theory of deterrence:

What will a world populated by a larger number of nuclear states look like? I have drawn a picture of such a world that accords with experience throughout the nuclear age. Those who dread a world with more nuclear states do little more than assert that more is worse and claim without substantiation that new nuclear states will be less responsible and less capable of self-control than the old ones have been. They express fears that many felt when they imagined how a nuclear China would behave. Such fears have proved unfounded as nuclear weapons have slowly spread. I have found many reasons for believing that with more nuclear states the world will have a promising future. I have reached this unusual conclusion for six main reasons.

First, international politics is a self-help system, and in such systems the principal parties do most to determine their own fate, the fate of other parties, and the fate of the system. This will continue to be so, with the United States and the Soviet Union filling their customary roles. For the United States and the Soviet Union to achieve nuclear maturity and to show this by behaving sensibly is more important than preventing the spread of nuclear weapons.

Second, given the massive numbers of American and Russian warheads, and given the impossibility of one side destroying enough of the other side’s missiles to make a retaliatory strike bearable, the balance of terror is indestructible. What can lesser states do to disrupt the nuclear equilibrium if even the mighty efforts of the United States and the Soviet Union cannot shake it? The international equilibrium will endure. (concluding section)

The logic of the rationality of cooperation, and the constant possibility of defection, seems to undermine the possibility of the kind of quasi-voluntary nuclear regime that Bohr hoped for — one based on unenforceable agreements about the development and use of nuclear weapons. The incentives in favor of defection are too great.
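The defection problem facing Bohr’s scheme has the familiar prisoner’s-dilemma structure, and it can be made concrete with a toy payoff table. The numbers below are illustrative assumptions of my own, chosen only to satisfy the standard ordering temptation > reward > punishment > sucker; they are not drawn from Schelling or Waltz.

```python
# Row player's payoff for each (own move, rival's move) pair.
# Ordering: temptation (5) > reward (3) > punishment (1) > sucker (0).
PAYOFF = {
    ('abstain', 'abstain'): 3,  # reward: stable mutual restraint
    ('abstain', 'build'):   0,  # sucker: disarmed facing an armed rival
    ('build',   'abstain'): 5,  # temptation: unilateral nuclear advantage
    ('build',   'build'):   1,  # punishment: costly arms race
}
MOVES = ('abstain', 'build')

def best_reply(rival_move):
    """The move that maximizes my payoff given the rival's move."""
    return max(MOVES, key=lambda m: PAYOFF[(m, rival_move)])

def dominant_strategy():
    """A move that is strictly best against every rival move, if one exists."""
    for m in MOVES:
        if all(PAYOFF[(m, r)] > PAYOFF[(alt, r)]
               for r in MOVES for alt in MOVES if alt != m):
            return m
    return None

print(best_reply('abstain'))   # build
print(best_reply('build'))     # build
print(dominant_strategy())     # build
```

Because “build” strictly dominates, mutual building is the unique equilibrium of an unenforceable agreement, even though both states prefer mutual abstention. That is exactly the incentive gap that, on the deterrence theorists’ reading, dooms a purely voluntary regime.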

So this seems to be a case where a great physicist has a less than compelling theory of how an international system of nations might work. And if the theory is unreliable, then so are the policy recommendations that follow from it.

Discovering the nucleus

In the past year or so I’ve been reading a handful of fascinating biographies and histories involving the evolution of early twentieth-century physics, paying attention to the individuals, the institutions, and the ideas that contributed to the making of post-classical physics. The primary focus is on the theory of the atom and the nucleus, and the emergence of the theory of quantum mechanics. The major figures who have come into this complex narrative include Dirac, Bohr, Heisenberg, von Neumann, Fermi, Rutherford, Blackett, Bethe, and Feynman, along with dozens of other mathematicians and physicists. Institutions and cities played a key role in this story — Manchester, Copenhagen, Cambridge, Göttingen, Budapest, Princeton, Berkeley, Ithaca, Chicago. And of course written throughout this story is the rise of Nazism, World War II, and the race for the atomic bomb. This is a crucially important period in the history of science, and the physics that was created between 1900 and 1960 has fundamentally changed our view of the natural world.

       

One level of interest for me in doing this reading is the math and physics themselves. As a high school student I was fascinated with physics. I learned some of the basics of the story of modern physics before I went to college — the ideas of special relativity theory, the hydrogen spectrum lines, the double-slit experiments, the puzzles of radiation and the atom leading to the formulation of the quantum theory of electromagnetic radiation, the discoveries of superconductivity and lasers. In college I became a physics and mathematics major at the University of Illinois, though I stayed with physics only through the end of the first two years of course work (electricity and magnetism, theoretical and applied mechanics, several chemistry courses, real analysis, advanced differential equations). (Significantly for the recent reading I’ve been doing, I switched from physics to philosophy while I was taking the junior-level quantum mechanics course.) I completed a mathematics major, along with a philosophy degree, and did a PhD in philosophy because I felt philosophy offered a broader intellectual platform on questions that mattered.

 
So I’ve always felt I had a decent layman’s understanding of the questions and issues driving modern physics. One interesting result of reading all this historical material about the period of 1910-1935, however, is that I’ve realized what large holes there are in my mental map of the topics, both in the physics and the math. And it is genuinely interesting to realize that there are deeply fascinating questions in this terrain which I haven’t really got an inkling about. It is energizing to know that it is entirely possible to open up new areas of knowledge and inquiry for oneself. 
 
Of enduring interest in this story is the impression that emerges of amazingly rapid progress in physics in these few decades, with major discoveries and new mathematical methods emerging in weeks and months rather than decades and centuries. The intellectual pace in places like Copenhagen, Princeton, and Göttingen was staggering, and scientists like Bohr, von Neumann, and Heisenberg genuinely astonish the reader with the fertility of their scientific abilities. Moreover, the theories and mathematical formulations that emerged had amazingly precise and unexpected predictive consequences. Physical theory and experimentation reached a fantastic degree of synergy together. 
 
The institutions of research that developed through this period are fascinating as well. The Cavendish lab at Cambridge, the Institute for Advanced Study at Princeton, the Niels Bohr Institute in Copenhagen, the math and physics centers at Göttingen, and the many conferences and journals of the period facilitated rapid progress of atomic and nuclear physics. The USSR doesn’t come into the story as fully as one would like, and it is intriguing to speculate about the degree to which Stalinist dogmatism interfered with the development of Soviet physics.
 
I also find fascinating in retrospect the relations that seem to exist between physics and the philosophy of science in the twentieth century. In philosophy we tend to think that the discipline of the philosophy of science in its twentieth-century development was too dependent on physics. That is probably true. But it seems that the physics in question was more often classical physics and thermodynamics, not modern mathematical physics. Carnap, for example, gives no serious attention to developments in the theory of quantum mechanics in his lectures, Philosophical Foundations of Physics. The philosophy of the Vienna Circle could have reflected relativity theory and quantum mechanics, but it didn’t to any significant degree. Instead, the achievements of nineteenth-century physics seem to have dominated the thinking of Carnap, Schlick, and Popper. The post-positivist philosophers Kuhn, Hanson, and Feyerabend refer to some of the discoveries of twentieth-century physics, but their works don’t add up to a new foundation for the philosophy of science. Since the 1960s there has been a robust field of philosophy of physics, and the focus of this field has been on quantum mechanics; but the field has had only limited impact on the philosophy of science more broadly. (Here is a guide to the philosophy of physics provided to philosophy graduate students at Princeton; link.)

On the other hand, quantum mechanics itself seems to have been excessively influenced by a hyper version of positivism and verificationism. Heisenberg in particular seems to have favored a purely instrumentalist and verificationist interpretation of quantum mechanics — the idea that the mathematics of quantum mechanics serves solely to summarize the results of experiment and observation, not to allow for true statements about unobservables. On this interpretation the theory is anti-realist through and through.

I suppose that there are two rather different ways of reading the history of twentieth-century physics. One is that quantum mechanics and relativity theory demonstrate that the physical world is incomprehensibly different from our ordinary Euclidean and Kantian ideas about ordinary-sized objects — with the implication that we can’t really understand the most fundamental level of the physical world. Ordinary experience and relativistic quantum-mechanical reality are just fundamentally incommensurable. But the other way of reading this history of physics is to marvel at the amount of new insight and clarity that physics has brought to our understanding of the subatomic world, in spite of the puzzles and anomalies that seem to remain. Mathematical physical theory made possible observation, measurement, and technological use of the microstructure of the world in ways that the ancients could not have imagined. I am inclined towards the latter view.

It is also sobering for a philosopher of social science to realize that there is nothing comparable to this history in the history of the social sciences. There is no comparable period where fundamental and enduring new insights into the underlying nature of the social world became possible to a degree comparable to this development of our understanding of the physical world. In my view as a philosopher of social science, that is perfectly understandable; the social world is not like the physical world. Social knowledge depends on fairly humdrum discoveries about actors, motives, and constraints. But the comparison ought to make us humble even as we explore new theoretical ideas in sociology and political science.

If I were asked to recommend only one out of all these books for a first read, it would be David Cassidy’s Heisenberg volume, Beyond Uncertainty. Cassidy makes sense of the physics in a serious but not fully technical way, and he raises important questions about Heisenberg the man, including his role in the German search for the atomic bomb. Also valuable is Richard Rhodes’ book, The Making of the Atomic Bomb: 25th Anniversary Edition.

How organizations adapt

Organizations do things; they depend upon the coordinated efforts of numerous individuals; and they exist in environments that affect their ongoing success or failure. Moreover, organizations are to some extent plastic: the practices and rules that make them up can change over time. Sometimes these changes happen as the result of deliberate design choices by individuals inside or outside the organization; so a manager may alter the rules through which decisions are made about hiring new staff in order to improve the quality of work. And sometimes they happen through gradual processes over time that no one is specifically aware of. The question arises, then, whether organizations evolve toward higher functioning based on signals from the environments in which they live, or whether organizational change is stochastic, lacking any gradient toward more effective functioning. Do changes within an organization add up over time to improved functioning? What kinds of social mechanisms might bring about such an outcome?

One way of addressing this topic is to consider organizations as mid-level social entities that are potentially capable of adaptation and learning. An organization has identifiable internal processes of functioning as well as a delineated boundary of activity. It has a degree of control over its functioning. And it is situated in an environment that signals differential success/failure through a variety of means (profitability, success in gaining adherents, improvement in market share, number of patents issued, …). So the environment responds favorably or unfavorably, and change occurs.

Is there anything in this specification of the structure, composition, and environmental location of an organization that suggests the possibility or likelihood of adaptation over time in the direction of improvement of some measure of organizational success? Do institutions and organizations get better as a result of their interactions with their environments and their internal structure and actors?

There are a few possible social mechanisms that would support the possibility of adaptation towards higher functioning. One is the fact that purposive agents are involved in maintaining and changing institutional practices. Those agents are capable of perceiving inefficiencies and potential gains from innovation, and are sometimes in a position to introduce appropriate innovations. This is true at various levels within an organization, from the supervisor of a custodial staff to a vice president for marketing to a CEO. If the incentives presented to these agents are aligned with the important needs of the organization, then we can expect that they will introduce innovations that enhance functioning. So one mechanism through which we might expect that organizations will get better over time is the fact that some agents within an organization have the knowledge and power necessary to enact changes that will improve performance, and they sometimes have an interest in doing so. In other words, there is a degree of intelligent intentionality within an organization that might work in favor of enhancement.

This line of thought should not be over-emphasized, however, because there are competing forces and interests within most organizations. Previous posts have focused on current organizational theory based on the idea of a “strategic action field” of insiders and outsiders who determine the activities of the organization (Fligstein and McAdam, Crozier; link, link). This framework suggests that the structure and functioning of an organization is not wholly determined by a single intelligent actor (“the founder”), but is rather the temporally extended result of interactions among actors in the pursuit of diverse aims. This heterogeneity of purposive actions by actors within an institution means that the direction of change is indeterminate; it is possible that the coalitions that form will bring about positive change, but the reverse is possible as well.

And in fact, many authors and participants have pointed out that it is often not the case that agents’ interests are aligned with the priorities and needs of the organization. Jack Knight offers a persuasive critique of the idea that organizations and institutions tend to increase in their ability to provide collective benefits in Institutions and Social Conflict. CEOs who have a financial interest in a rapid stock price increase may take steps that worsen functioning for short-term market gain; supervisors may avoid work-flow innovations because they don’t want the headache of an extended change process; vice presidents may deny information to other divisions in order to enhance appreciation of the efforts of their own division. Here is a short description from Knight’s book of the way that institutional adjustment occurs as a result of conflict among players of unequal powers:

Individual bargaining is resolved by the commitments of those who enjoy a relative advantage in substantive resources. Through a series of interactions with various members of the group, actors with similar resources establish a pattern of successful action in a particular type of interaction. As others recognize that they are interacting with one of the actors who possess these resources, they adjust their strategies to achieve their best outcome given the anticipated commitments of others. Over time rational actors continue to adjust their strategies until an equilibrium is reached. As this becomes recognized as the socially expected combination of equilibrium strategies, a self-enforcing social institution is established. (Knight, 143)
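Knight’s mechanism can be rendered as a toy model. The sketch below is my own illustration, not Knight’s; the payoffs, demand levels, and actor types are all invented for the example. Two types of actors repeatedly play a divide-the-pie bargaining game. Resource-rich actors suffer less from disagreement and are assumed to have already established a pattern of aggressive demands; each side then best-responds to the demands it has observed from the other, and the strategies settle into a stable, unequal convention.

```python
from collections import defaultdict

PIE = 10                    # size of the "pie" divided in each encounter
DEMANDS = [3, 5, 7]         # shares an actor may insist on
FALLBACK = {"rich": 2.0, "poor": 0.0}   # payoff when demands are incompatible

def best_demand(my_type, opp_counts):
    """Best response to the observed frequencies of the other side's demands."""
    total = sum(opp_counts.values())
    def expected(d):
        return sum((d if d + od <= PIE else FALLBACK[my_type]) * n
                   for od, n in opp_counts.items()) / total
    return max(DEMANDS, key=expected)

def simulate(rounds=200):
    # seen[t] tallies the demands made so far by actors of type t. The
    # resource-rich are assumed to have already established a pattern of
    # aggressive demands ("a pattern of successful action", in Knight's words).
    seen = {"rich": defaultdict(int, {7: 1}),
            "poor": defaultdict(int, {3: 1, 5: 1, 7: 1})}
    for _ in range(rounds):
        d_rich = best_demand("rich", seen["poor"])
        d_poor = best_demand("poor", seen["rich"])
        seen["rich"][d_rich] += 1
        seen["poor"][d_poor] += 1
    return d_rich, d_poor

print(simulate())  # → (7, 3): a stable, unequal division
```

The 7/3 split is self-enforcing in just Knight’s sense: given what each side expects of the other, neither gains by unilaterally changing its demand, so the unequal convention persists without any central enforcement.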

A very different possible mechanism is unit selection, where more successful innovations or firms survive and less successful innovations and firms fail. This is the premise of the evolutionary theory of the firm (Nelson and Winter, An Evolutionary Theory of Economic Change). In a competitive market, firms with low internal efficiency will have a difficult time competing on price with more efficient firms, so these low-efficiency firms will fail at a higher rate. Here the question of “units of selection” arises: is it firms over which selection operates, or is it lower-level innovations that are the object of selection?
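The selection mechanism can likewise be sketched in a few lines of code. This is a deliberately crude illustration of differential survival, not Nelson and Winter’s actual model; the cost range, floor, and copying-error size are arbitrary. Firms carry a fixed unit cost; each period the market eliminates firms with above-average costs, and entrants imitate surviving firms with small copying errors.

```python
import random

random.seed(42)   # reproducibility of the toy run
MIN_COST = 1.0    # assumed technological floor on unit cost

def evolve(n_firms=50, periods=100):
    """Differential survival: high-cost firms exit; entrants imitate survivors."""
    costs = [random.uniform(1.0, 2.0) for _ in range(n_firms)]
    for _ in range(periods):
        threshold = sum(costs) / len(costs)      # the market weeds out the worst
        survivors = [c for c in costs if c <= threshold]
        # entrants copy a surviving firm's routines, with small copying error
        entrants = [max(MIN_COST, random.choice(survivors) + random.gauss(0, 0.02))
                    for _ in range(n_firms - len(survivors))]
        costs = survivors + entrants
    return sum(costs) / len(costs)

print(round(evolve(), 2))  # mean unit cost falls from ~1.5 toward the 1.0 floor
```

Notice that what persists through the imitation step is the cost level, a stand-in for the firm’s routines; this is one way of seeing why Nelson and Winter treat routines, rather than firms, as the underlying unit on which selection operates.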

Geoffrey Hodgson provides a thoughtful review of this set of theories here, part of what he calls “competence-based theories of the firm”. Here is Hodgson’s diagram of the relationships that exist among several different approaches to the study of the firm.

The market mechanism does not work very well as a selection mechanism for some important categories of organizations — government agencies, legislative systems, or non-profit organizations. This is so because the criterion of selection is “profitability / efficiency within a competitive market”; and government and non-profit organizations are not significantly subject to the workings of a market.

In short, the answer to the fundamental question here is mixed. There are factors that unquestionably work to enhance effectiveness in an organization. But these factors are weak and defeasible, and the countervailing factors (internal conflict, divided interests of actors, slackness of the corporate marketplace) leave open the possibility that institutions change without evolving in any consistent direction. And the glaring dysfunctions that have afflicted many organizations, both corporate and governmental, make this conclusion even more persuasive. Perhaps what demands explanation is the rare case where an organization achieves a high level of effectiveness and consistency in its actions, rather than the many cases that come to mind of dysfunctional organizational activity.

(The examples of organizational dysfunction that come to mind are many — the failures of nuclear regulation of the civilian nuclear industry (Perrow, The Next Catastrophe: Reducing Our Vulnerabilities to Natural, Industrial, and Terrorist Disasters); the failure of US anti-submarine warfare in World War II (Cohen, Military Misfortunes: The Anatomy of Failure in War); and the failure of chemical companies to ensure safe operations of their plants (Shrivastava, Bhopal: Anatomy of Crisis). Here is an earlier post that addresses some of these examples; link. And here are several earlier posts on the topic of institutional change and organizational behavior; link, link.)

Divided …

Why is part of the American electoral system so susceptible to right-wing populist appeals, often highlighting themes of racism and intergroup hostility? Doug McAdam and Karina Kloos address the causes of the radical swing to the right of the Republican Party in Deeply Divided: Racial Politics and Social Movements in Postwar America. Here is the key issue the book attempts to resolve:

If the general public does not share the extreme partisan views of the political elites and party activists and, more to the point, is increasingly dismayed and disgusted by the resulting polarization and institutional paralysis that have followed from those views, how has the GOP managed to move so far to the right without being punished by the voters? Our answer — already telegraphed above — is that over the past half century social movements have increasingly challenged, and occasionally supplanted, parties as the dominant mobilizing logic and organizing vehicle of American politics. (Kindle location 303-307). 

Not surprisingly given McAdam’s long history in the social movements research field, McAdam and Kloos argue that social movements are commonly relevant to electoral and party politics; they suggest that the period of relatively high consensus around the moderate middle (1940s and 1950s) was exceptional precisely because of the absence of powerful social movements during these decades. But during more typical periods, national electoral politics are influenced by both political parties and diffuse social movements; and the dynamics of the latter can have complex effects on the behavior and orientation of the former.

McAdam and Kloos argue that the social movements associated with the 1960s Civil Rights movement and its opposite, the white segregationist movement, put in motion a political dynamic that pushed each party off of its “median voter” platform, with the Republican Party moving increasingly in the direction of white supremacy and preservation of white privilege.

More accurately, it is the story of not one, but two parallel movements, the revitalized civil rights movement of the early 1960s and the powerful segregationist countermovement, that quickly developed in response to the black freedom struggle. (kl 1220)

The dynamics of grassroots social movements are thought to explain how positions that are unpalatable to the broad electorate nonetheless become committed platforms within the parties. (This also seems to explain the GOP preoccupation with “voter fraud” and their efforts at restricting voting rights for people of color.) The primary processes adopted by the parties after the 1968 Democratic convention gave a powerful advantage to highly committed social activists, even if they do not represent the majority of a party’s members.

This historical analysis gives an indication of an even more basic political factor in American politics: the polarizing issues that surround race and the struggle for racial equality. The Civil Rights movement of the 1950s and 1960s was a widespread mobilization of large numbers of ordinary citizens in support of equal rights for African Americans in terms of voting, residence, occupation, and education. Leaders like Ralph Abernathy or Julian Bond (or of course, Martin Luther King, Jr.) and organizations like the NAACP and SNCC were effective in their call to action for ordinary people to take visible actions to support greater equality through legal means. This movement had some success in pushing the Democratic Party towards greater advocacy of reforms promoting racial justice. And the political backlash against the Democratic Party following the enactment of civil rights legislation spawned its own grassroots mobilizations of people and associations who objected to these forms of racial progress. And lest we imagine that progressive steps in the struggle for racial justice largely derived from the Democratic Party, the authors remind us that a great deal of the support for civil rights legislation came from liberal Republicans:

The textbook account also errs in typically depicting the Democrats as the movement’s staunch ally. What is missed in this account is the lengths to which all Democratic presidents—at least from Roosevelt to Kennedy—went to placate the white South and accommodate the party’s Dixiecrat wing. (kl 411)

The important point is that as long as the progressive racial views of northern liberal Democrats were held in check and tacit support for Jim Crow remained the guiding—if unofficial—policy of the party, the South remained solidly and reliably in the Democratic column. (kl 1301)

So M&K are right — issues and interests provide a basis for mobilization within social movements, and social movements in turn influence the evolution of party politics.

But their account suggests a more complicated causal story of the evolution of American electoral politics as well. M&K make the point convincingly that the dynamics of party competition by themselves do not suffice to explain the evolution of US politics to the right, towards an ever more polarized relationship within a divided electorate. They succeed in showing that social movements of varying stripes played a key causal role in shaping party politics themselves. So explaining American electoral politics requires analysis of both parties and movements. But they also inadvertently make another point: that there are underlying structural features of American political psychology that explain much of the dynamics of both movements and parties, and these are the facts of racial division and the increasingly steep inequalities of income and wealth that divide Americans. So structural facts about race and class in American society play the most fundamental role in explaining the movements and alliances that have led us to our current situation. Social movements are an important intervening variable, but pervasive features of inequality in American society are even more fundamental.

Or to put the point more simply: we are divided politically because we are divided structurally by inequalities of access, property, opportunity, and outcome; and the mechanisms of electoral politics are mobilized to challenge and defend the systems that maintain these inequalities.

Designing and managing large technologies

What is involved in designing, implementing, coordinating, and managing the deployment of a large new technology system in a real social, political, and organizational environment? Here I am thinking of projects like the development of the SAGE early warning system, the Affordable Care Act, or the introduction of nuclear power into the civilian power industry.

Tom Hughes described several such projects in Rescuing Prometheus: Four Monumental Projects That Changed the Modern World. Here is how he describes his focus in that book:

Telling the story of this ongoing creation since 1945 carries us into a human-built world far more complex than that populated earlier by heroic inventors such as Thomas Edison and by firms such as the Ford Motor Company. Post-World War II cultural history of technology and science introduces us to system builders and the military-industrial-university complex. Our focus will be on massive research and development projects rather than on the invention and development of individual machines, devices, and processes. In short, we shall be dealing with collective creative endeavors that have produced the communications, information, transportation, and defense systems that structure our world and shape the way we live our lives. (kl 76)

The emphasis here is on size, complexity, and multi-dimensionality. The projects that Hughes describes include the SAGE air defense system, the Atlas ICBM, Boston’s Central Artery/Tunnel project, and the development of ARPANET. Here is an encapsulated description of the SAGE process:

The history of the SAGE Project contains a number of features that became commonplace in the development of large-scale technologies. Transdisciplinary committees, summer study groups, mission-oriented laboratories, government agencies, private corporations, and systems-engineering organizations were involved in the creation of SAGE. More than providing an example of system building from heterogeneous technical and organizational components, the project showed the world how a digital computer could function as a real-time information-processing center for a complex command and control system. SAGE demonstrated that computers could be more than arithmetic calculators, that they could function as automated control centers for industrial as well as military processes. (kl 285)

Mega-projects like these require coordinated efforts in multiple areas — technical and engineering challenges, business and financial issues, regulatory issues, and numerous other areas where innovation, discovery, and implementation are required. In order to be successful, the organization needs to make realistic judgments about questions for which there can be no certainty — the future development of technology, the needs and preferences of future businesses and consumers, and the pricing structure that will exist for the goods and services of the industry in the future. And because circumstances change over time, the process needs to be able to adapt to important new elements in the planning environment.

There are multiple dimensions of projects like these. There is the problem of establishing the fundamental specifications of the project — capacity, quality, functionality. There is the problem of coordinating the efforts of a very large team of geographically dispersed scientists and engineers, whose work is deployed across various parts of the problem. There is the problem of fitting the cost and scope of the project into the budgetary envelope that exists for it. And there is the problem of adapting to changing circumstances during the period of development and implementation — new technology choices, new economic circumstances, significant changes in demand or social need for the product, large shifts in the costs of inputs into the technology. Obstacles in any of these diverse areas can lead to impairment or failure of the project.

Most of the cases mentioned here involve engineering projects sponsored by the government or the military. And the complexities of these cases are instructive. But there are equally complex cases that are implemented in a private corporate environment — for example, the development of next-generation space vehicles by SpaceX. And the same issues of planning, coordination, and oversight arise in the private sector as well.

The most obvious thing to note in projects like these — and many other contemporary projects of similar scope — is that they require large teams of people with widely different areas of expertise and an ability to collaborate across disciplines. So a key part of leadership and management is to solve the problem of securing coordination around an overall plan across the numerous groups; updating plans in the face of changing circumstances; and ensuring that the work products of the several groups are compatible with each other. Moreover, there is the perennial challenge of creating arrangements and incentives in the work environment — laboratory, design office, budget division, logistics planning — that stimulate the participants to high-level creativity and achievement.

This topic is of interest for practical reasons — as a society we need to be confident in the effectiveness and responsiveness of the planning and development that goes into large projects like these. But it is also of interest for a deeper reason: the challenge of attributing rational planning and action to a very large and distributed organization at all. When an individual scientist or engineer leads a laboratory focused on a particular set of research problems, it is possible for that individual (with assistance from the program and lab managers hired for the effort) to keep the important scientific and logistical details in mind. It is an individual effort. But the projects described here are sufficiently complex that there is no individual leader who has the whole plan in mind. Instead, the “organizational intentionality” is embodied in the working committees, communications processes, and assessment mechanisms that have been established.

It is interesting to consider how students, both undergraduate and graduate, can come to have a better appreciation of the organizational challenges raised by large projects like these. Almost by definition, study of these problem areas in a traditional university curriculum proceeds from the point of view of a specialized discipline — accounting, electrical engineering, environmental policy. But the view provided from a discipline is insufficient to give the student a rich understanding of the complexity of the real-world problems associated with projects like these. It is tempting to think that advanced courses for engineering and management students could be devised making extensive use of detailed case studies as well as simulation tools that would allow students to gain a more adequate understanding of what is needed to organize and implement a large new system. And interestingly enough, this is a place where the skills of humanists and social scientists are perhaps even more essential than the expertise of technology and management specialists. Historians and sociologists have a great deal to add to a student’s understanding of these complex, messy processes.

Ideologies, policies, and social complexity

 

The approach to social and historical research that I favor is one that pays attention to the heterogeneity and contingency of social processes. It advises that social and historical researchers should disaggregate the large patterns they start with and try to identify the multiple underlying mechanisms, causes, motivations, movements, and contingencies that came together to create higher-level outcomes. Social research needs to focus on the micro- or meso-level processes that combined to create the macro world that interests us. The theory of assemblages fits this intellectual standpoint very well, since it emphasizes contingency and heterogeneity all the way down. The diagram above was chosen to give a visual impression of the complexity and interconnectedness of factors and causes that are associated with this approach to the social world.

According to the premises of this approach, we are not well served by imagining that there are simple, large-scale forces that drive the outcomes in history. Examples of overly simplified explanations like these include:

  • Onerous conditions of the Treaty of Versailles caused the collapse of the Weimar Republic.
  • The Chinese Revolution succeeded because of post-Qing exploitation of the peasants.
  • The Industrial Revolution occurred in England because of the vitality of English science.

Instead, each of these large outcomes is the result of a large number of underlying processes, motivations, social movements, and contingencies that defy simple summary. To understand the Mediterranean world over the sweep of time, we need the detailed and granular research of a Fernand Braudel rather than the simplified ideas of Johann Heinrich von Thünen in the economic geography of central place theory.

In situations of this degree of underlying complexity, it is pointless to ask for a simple answer to the question, “what caused outcome X?” So the Great Depression wasn’t the outcome of capital’s search for profits; it was instead the complex product of interacting forms of private business activity, financial institutions, government action, legislation, war, and multiple other forces that conjoined to create a massive and persistent economic depression.

This approach has solid intellectual and ontological foundations. This is pretty much how the social world works. But this ontological vision about the nature of the social world is hard to reconcile with the large intellectual frameworks on the left and on the right that are used to diagnose our times and sometimes to prescribe solutions to the problems identified.

An ideologue is a thinker who seeks to subsume the sweep of history or current events under an overarching narrative with simple explanatory premises and interpretive schemes. The ideologue wants to portray history as the unfolding of a simple set of forces or drivers — whether markets, classes, divine purposes, or philosophies. And the ideologue is eager to force the facts into the terms of the narrative, and to erase inconvenient facts that appear to conflict with the narrative.

Consider Lenin, von Hayek, and Ronald Reagan. Each had a simplified mental framework that postulated a set of ideas about how the world worked. For Lenin it was expressed in a few paragraphs about class, the economic structure of capitalism, and the direction of history. For von Hayek it was the idea that free economic activity within idealized markets leads to the best possible outcomes for the whole of society. For Reagan it was a combination of von Hayek and the simplified notions of realpolitik associated with Kennan, Morgenthau, or Kissinger.

There are two problems for these kinds of approaches to understanding the social world. First is the indifference ideologues express to the role of facts and empirical validation in their thinking. This is an epistemic shortcoming. But second, and equally problematic, is their insistence on representing the social world as a fundamentally simple process, with a few driving forces whose impact can be forecast. This is an ontological shortcoming. The social world is not simple, and there are not a small number of dominant forces whose effects overshadow the myriad of other socially relevant processes and events that make up a given situation.

Ideologues are pernicious for serious historians, since they denigrate careful efforts to discover how various events actually unfolded, in favor of the demands of a particular interpretation of history. It is not possible to gain adequate or insightful historical knowledge from within the framework of a rigid and dogmatic ideology. But even more harmful are policy makers driven by ideologies. An ideological policy maker is an actor who takes the simplistic assumptions of an ideology and attempts to formulate policy interventions based on those assumptions. Ideology-based policies are harmful, of course, because the world has its own properties independent from our theories, and interventions based on false hypotheses about how the world works are unlikely to bring about their intended results. Policies need to be driven by theories that are fact-based and approximately true. And policy makers and officials need to be rejected when they flout science and fact-based inquiry in favor of pet theories and ideologies.

A hard question that this line of thought poses and that I have not addressed here is whether policies can be formulated at all within the context of a fundamentally heterogeneous and contingent world. It might be argued that policy formation requires fairly simple cause-and-effect relationships in order to justify the idea of an intervention; and complexity makes it unlikely that such relationships exist. I believe policies can be formulated within this ontological framework; but I agree that the case must be made. A few earlier posts are relevant to this topic (link, link, link, link, link).

SSHA 2017 Call for Papers


SSHA CALL FOR PAPERS
Macrohistorical Dynamics Network
42nd Annual Meeting of the Social Science History Association
Montréal, Québec, Canada
2-5 November 2017
Submission Deadline: 3 March 2017

Changing Social Connections in Time and Space
 
Please consider participating in the Macrohistorical Dynamics (MHD) panels of the 42nd annual meeting of the Social Science History Association, November 2-5, 2017 in Montréal. For more information on the meeting as well as the call for proposals, please refer to the SSHA website at www.ssha.org. Here is the SSHA call for proposals (link).

The deadline for paper and/or panel submissions is March 3, 2017.
 
In recognition of Canada’s policy of official bilingualism, SSHA will accept paper presentations in either English or French for our meeting in Montreal. Sessions may be monolingual English, monolingual French, or bilingual English/French. Session organizers must clearly indicate which language(s) will be spoken at their session, and paper submitters must indicate if their paper will be delivered in French. All paper abstracts must be submitted with an English version, regardless of the language in which the paper will be presented. Please contact the Program Committee co-chair Barry Eidlin (barry.eidlin@mcgill.ca) with any questions regarding conference language policies.
 
The thematic topic of the annual meeting is “Changing Social Connections in Time and Space” – a theme that works very well with the research interests of many of the scholars involved in the Macrohistorical Dynamics network.

Macrohistorical Dynamics (MHD) is an interdisciplinary social science research field that focuses on problems of large-scale, comparative historical inquiry. Contributors to the field have addressed a wide variety of problem areas, including macro- and historical sociology; comparative history; world history; world-systems analysis; the comparative study of civilizations; the philosophy of history; and studies of long-term socio-ecological, technological, demographic, cultural, and political trends and transformations. The Macrohistorical Dynamics network brings a rigorous perspective to bear on questions having to do with "large" history.

The list of MHD panel themes for 2017 is open, and we encourage you to submit proposals for panel themes or individual paper topics.

The MHD network will be able to host at least six panels in 2017 and will also be able to place additional papers through co-sponsorship with other networks (for example, with History/Methods, Politics, Culture, State-Society, Historical Geography, etc.).

SSHA requests that submissions be made by means of its web conference management system. Paper title, brief abstract, and contact information should be submitted at http://www.ssha.org, where the general SSHA 2017 call for papers is also available. (If you haven't used the system previously you will need to create an account, which is a very simple process.) The direct link for submissions is now open (link).

 
NOTE: There is an SSHA rule concerning book sessions. For a book session to proceed, the author (or at least one of multiple authors) MUST be present. Proposals for book sessions should only be submitted if there is high confidence that the author will be able to travel to Montréal November 2-5, 2017.

SSHA has set up a mechanism for networks to share papers, so even if you have a solo paper, send the idea along. It is possible and useful to identify a paper not only with the MHD network but also with other co-sponsoring networks, for example, Theory/Methods, Historical Geography, Politics, Culture, Economics, etc. Co-sponsored panels and papers are encouraged by the SSHA Program Committee as a means of broadening the visibility of the various networks.

Feel free to contact the co-chairs of the Macrohistorical Dynamics network for further information.

Prof. Daniel Little
University of Michigan-Dearborn
delittle@umich.edu
 
Prof. Peter Perdue
Department of History
Yale University
peter.c.perdue@yale.edu

Prof. James Lee
School of Humanities and Social Science
Hong Kong University of Science and Technology
jqljzl@gmail.com
 
