Public opinion research raises many difficult questions. (See an earlier post on this topic.) We would like to know what Americans are thinking about current circumstances and issues; we’d like to know how those attitudes differ across social groups; and we’d like to have a basis for attempting to explain changes in attitudes over time. There is a vast amount of survey research underway at any given time in the United States. For example, the Pew Research Center (link) and the Roper Center (link) provide substantial survey research on social and political attitudes in the United States. But what does it really tell us?
There are, of course, the normal statistical questions about the interpretation of results: what is a representative sample? What is the margin of error? What is the variance of the population around a given topic? But these are the easy questions; there are fairly specific statistical answers to them, no different in kind from analogous questions about quality sampling methods in a manufacturing process.
But the harder questions are conceptual. What is the social fact that is being reported when a study finds that “41 percent of Americans believe that they are better off than their parents”? What are we trying to learn when we sample a population of 250 million people with a survey of a set of topically organized questions about perceptions, values, or beliefs?
Let’s start constructively. We can suppose, to begin, that each person has a set of values, beliefs, or attitudes on a range of subjects. And let’s say that we are interested in measuring some of these attitudes through a survey using questions based on a Likert scale (discussion); the respondent is asked to rate his or her level of agreement with each statement on a five-point scale. Survey responses will display a distribution of answers for the topics on the survey questionnaire. We can describe this distribution in statistical terms; for example, we might find that the mean value of “trust/mistrust my elected officials” for the population is 3.5 with a standard deviation of 0.8. And we would probably try to group a set of questions around a single attribute (e.g., “social conservative”), and then examine the profile of individuals and groups according to their responses to these grouped questions.
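The steps just described can be sketched in a few lines of code. This is a minimal illustration with synthetic data, not real survey results; the item wording and the four-item “scale” are hypothetical.

```python
import random
import statistics

random.seed(0)

# Synthetic 5-point Likert responses (1 = strongly disagree ... 5 = strongly agree)
# for a single hypothetical item, e.g. "I trust my elected officials".
responses = [random.choice([1, 2, 3, 4, 5]) for _ in range(1000)]

mean = statistics.mean(responses)
sd = statistics.pstdev(responses)
print(f"mean = {mean:.2f}, sd = {sd:.2f}")

# Grouping several related items into one attitude scale (e.g. "social
# conservatism"): a respondent's scale score is the average of their ratings
# on the grouped items.
items_per_respondent = [
    [random.choice([1, 2, 3, 4, 5]) for _ in range(4)]  # four related items
    for _ in range(1000)
]
scale_scores = [statistics.mean(items) for items in items_per_respondent]
print(f"scale mean = {statistics.mean(scale_scores):.2f}")
```

Averaging grouped items is only the simplest way to build a scale score; real instruments would also check that the items actually hang together (e.g., with a reliability coefficient).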
But here is the hard question: what do these descriptive statistics really tell us about the population? At bottom, they tell us that if we were to draw an individual at random and ask the question, the respondent’s rating would most likely fall within one standard deviation of the population mean (roughly two-thirds of the time, if responses were approximately normally distributed). If the mean for the question is 3.5 and the standard deviation is 0.8, this implies a range from 2.7 to 4.3, roughly from “slightly disapprove” to “strongly approve”. So this hypothetical study doesn’t tell us very much: given the underlying variation of attitudes, a randomly chosen person may fall anywhere from mildly negative to strongly positive. In other words, in a fairly specific sense there is no single answer to the question, “what do Americans think?”, because there is so much heterogeneity of attitude and opinion across the full population.
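A small numerical example makes the point about heterogeneity concrete. The two populations below are hypothetical; they are constructed so that a summary like “Americans rate this item 3.0 on average” is true of both, even though they describe opposite opinion climates.

```python
import statistics

# Two hypothetical populations with the SAME mean rating (3.0) on a
# five-point item, but very different opinion structures.
consensus = [3] * 100             # everyone is neutral
polarized = [1] * 50 + [5] * 50   # half strongly disagree, half strongly agree

for name, pop in [("consensus", consensus), ("polarized", polarized)]:
    print(name, statistics.mean(pop), statistics.pstdev(pop))

# The mean alone cannot distinguish the two cases; the standard deviation
# (0.0 vs 2.0) at least signals the heterogeneity hiding behind the average.
```

This is why reporting a population mean without some measure of spread (or, better, the whole distribution) can badly misrepresent what “Americans think”.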
We get more information if we are able to discover sub-groups that show significantly less variance around a given attitudinal position. Groupings that may be significantly associated with attitudes include: region (northeast, south, midwest, west coast); race/ethnicity; gender; occupational status; education level; income level; age; immigration status; and so forth. And when we break down the data by groups, we may find results like these: men and women have different levels of support for capital punishment; blacks, latinos, and whites have different levels of trust in their elected officials; or well-educated and poorly-educated people differ significantly in their attitudes towards immigration policy. The population distribution is then a mixture of the sub-group distributions: the population-weighted combination of the distributions of attitudes within the composite groups of the society.
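The mixture idea can be simulated directly. In this sketch the two sub-groups and their attitude levels are invented for illustration: each group is fairly tight around its own mean, while the pooled population looks much more dispersed because it combines the two clusters.

```python
import random
import statistics

random.seed(1)

# Draw hypothetical Likert ratings around a group-specific mean,
# clipped to the 1-5 response range.
def draw(mu, sigma, n):
    return [min(5, max(1, round(random.gauss(mu, sigma)))) for _ in range(n)]

groups = {
    "group_a": draw(2.0, 0.5, 400),   # leans "disagree", low spread
    "group_b": draw(4.0, 0.5, 600),   # leans "agree", low spread
}

# The population distribution is the mixture of the sub-group distributions.
population = [r for g in groups.values() for r in g]

for name, g in groups.items():
    print(name, round(statistics.mean(g), 2), round(statistics.pstdev(g), 2))
print("population", round(statistics.mean(population), 2),
      round(statistics.pstdev(population), 2))
```

The pooled standard deviation exceeds either within-group standard deviation: the extra spread comes from the distance between the group means, not from disagreement inside either group.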
This differentiation of results by group tells us that the attitudes are not randomly distributed across the population, but are significantly associated with group membership. And this poses a significant sociological problem for research: what explains the differences across sub-groups? How are region, gender, race, religion, age, income, or occupation relevant to the formation of attitudes and beliefs, so that members of groups defined by these characteristics resemble one another more closely than the general population does?
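One standard way to quantify “associated with group membership” is to decompose the total variance of ratings into a between-group and a within-group component; the between-group share (eta squared, the quantity behind a one-way ANOVA) measures how much of the variation group membership accounts for. The ratings below are made up for illustration.

```python
import statistics

# Hypothetical ratings on one item, broken down by group membership.
groups = {
    "men":   [2, 2, 3, 3, 2, 3, 2, 2],
    "women": [4, 4, 5, 4, 3, 4, 5, 4],
}

all_ratings = [r for g in groups.values() for r in g]
grand_mean = statistics.mean(all_ratings)

# Total sum of squares splits into between-group and within-group parts.
ss_between = sum(len(g) * (statistics.mean(g) - grand_mean) ** 2
                 for g in groups.values())
ss_total = sum((r - grand_mean) ** 2 for r in all_ratings)

eta_squared = ss_between / ss_total  # share of variance explained by group
print(f"eta^2 = {eta_squared:.2f}")  # prints eta^2 = 0.72
```

An eta squared near zero would mean attitudes are essentially unrelated to the grouping; a large value, as in this toy example, is exactly the pattern that raises the sociological question of why the grouping matters.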
Now we can take a stab at answering the question we began with: what do Americans think? Our studies may have allowed us to say that there are a few topics where the full population thinks roughly the same thing: there are no significant differences in the distributions of responses across sub-groups for these topics. (“It is important for the recession to come to an end as early as possible”; “It is important for our country to invest in the education of children”.) More commonly, though, there is probably a much wider set of topics where we do not find uniformity across groups; instead, African-Americans may offer one distribution of responses and Arab-Americans a significantly different distribution, or midwesterners and southerners may differ. In this case we can’t say “Americans think X,” but only “sub-groups A, B, C, … have significantly different attitudes with respect to this topic.” And maybe this is an important direction for future research: how to incorporate the representation of differentiation and variation across a population into the interpretation of public opinion research. Are there better ways of visualizing the population and the modes of variation of attitude and belief that it embodies?
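As a very modest gesture toward that closing question, even a crude “small multiples” display, one histogram per sub-group, preserves the within-group variation that a single population mean erases. The group names and response data here are invented.

```python
from collections import Counter

# Hypothetical response distributions for two sub-groups on one item.
distributions = {
    "midwest": [1, 2, 2, 3, 3, 3, 2, 3, 4, 2],
    "south":   [3, 4, 4, 5, 4, 3, 5, 4, 4, 5],
}

# One text histogram per group: each row is a rating category (1-5),
# each '#' is one respondent who chose that rating.
for name, ratings in distributions.items():
    counts = Counter(ratings)
    print(name)
    for rating in range(1, 6):
        print(f"  {rating}: {'#' * counts.get(rating, 0)}")
```

Real visualization work would use proper plotting, but the principle is the same: show the shape of each sub-group’s distribution side by side rather than collapsing everything into “what Americans think”.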