The Trouble with Polling

Karlyn Bowman

Summer 2018

As political pundits and the general public prepare for the 2018 midterm elections this fall, it's a safe bet that pollsters will undergo fresh scrutiny. Questions are still being raised about their performance in the 2016 presidential election, and the results from some major 2017 contests did little to allay those concerns. Few polls, for example, predicted the size of Virginia governor Ralph Northam's nine-point victory last November. And in the special election last December for a U.S. Senate seat in Alabama, the final polls ranged from a nine-point victory for Republican Roy Moore to a 10-point victory for Democrat Doug Jones. Jones won by one and a half percentage points.

The problems of election polling aren't limited to the U.S. The list of recent high-profile misses includes the elections in Britain and Israel in 2015, the Brexit referendum in June 2016, and the referendum on the Colombian peace deal with the Revolutionary Armed Forces of Colombia later that year. In France, pollsters gave presidential candidate Emmanuel Macron a substantial lead in the 2017 runoff. But according to CNN's Harry Enten, not one of the 18 polls conducted in the final two weeks of the campaign predicted his landslide 32-point margin of victory. At 10 points, the gap between the two-week polling average and the actual result of the 2017 French election, Enten noted, was greater than it had been in any of the eight previous presidential-election runoffs. All the final polls in the 2017 British general election, in which Prime Minister Theresa May's Conservative Party lost its parliamentary majority, had the Conservatives ahead, though their margins differed significantly. And last November, polls overstated the vote of the first-round winner in Chile's presidential contest.

These results demonstrate weaknesses that experts across the industry are working to address. On our shores, the highly respected American Association for Public Opinion Research (AAPOR) conducted a thorough post-mortem on the 2016 polls. Another exhaustive analysis by Will Jennings and Christopher Wlezien — which included more than 30,000 polls from 351 general elections in 45 countries, covering the years between 1942 and 2017 — acknowledged that the industry faces significant challenges, but concluded that there was "no evidence to support the claims of a crisis in the accuracy of polling."

But there are other problems with the polls that go unnoticed by blue-ribbon commissions examining outcomes and methodology. The polling business has changed significantly in recent decades: There are more entrants, and there is more competition. Moreover, pollsters are deeply dependent on media coverage. Whether these developments have improved polling and deepened our understanding of the public is debatable. The issues the AAPOR committee examined are real, but other, more subtle changes reveal a chasm between pollsters and the public they observe, posing a threat to the credibility and usefulness of polls.

TAKING ACCOUNT

Six months after the 2016 U.S. presidential election, AAPOR's committee of election-polling analysts and practitioners released their findings, declaring that the national polls were "among the most accurate in estimating the popular vote since 1936," while the state polls had a "historically bad year." The committee's work, which began in the spring of 2016 on the primary-election polls, cited a number of reasons for the polling problems, including late decisions by voters in key states who broke disproportionately for Donald Trump, and the over-representation in polls of college graduates who leaned toward Hillary Clinton. They believed that evidence for a "shy Trump" effect — voters who were reluctant to tell pollsters that they supported the eventual president — was mixed at best. They acknowledged that changes in turnout from previous elections could have played a role; a Census Bureau report that came out just days after the AAPOR post-mortem pointed to, among other things, a consequential decline in the black turnout rate and a stabilization of the white voting rate, both of which clearly played a role in the 2016 outcome. AAPOR rejected the idea of a left-leaning partisan bias affecting poll results, noting that the general-election polls in 2000 and 2012 underestimated support for the Democratic presidential candidate.

The AAPOR analysts also noted that the estimates of aggregators such as FiveThirtyEight, RealClearPolitics, and the Huffington Post, which predicted a strong chance of a Clinton win, colored perceptions of the pollsters. The AAPOR committee stressed that surveys are a snapshot in time and not probability estimates of the kind that aggregators make. This important distinction seems to be largely lost in public commentary these days. Probability forecasts are becoming more popular in election analysis, and they were a regular feature of the 2016 election coverage.
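
To see why this distinction matters, consider a minimal illustration of my own (a simplified sketch, not the method of any particular aggregator): under a simple normal model, a modest lead in the polling average translates into a lopsided-sounding win probability.

\[
P(\text{win}) = \Phi\!\left(\frac{\hat{m}}{\sigma}\right), \qquad \text{e.g., } \hat{m} = 4 \text{ points}, \; \sigma = 4 \text{ points} \;\Longrightarrow\; P(\text{win}) = \Phi(1) \approx 0.84
\]

Here \(\hat{m}\) is the candidate's margin in the polling average, \(\sigma\) is the assumed uncertainty in that margin, and \(\Phi\) is the standard normal distribution function; the numbers are hypothetical. The point is that an "84% chance of winning" is a statement about a model's uncertainty, not a finding that 84% of voters back the candidate, and an ordinary polling error of four points wipes out the lead entirely.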

Beyond the AAPOR self-examination (the Brits did their own extensive post-mortem after Brexit), several polling organizations have been publicly addressing the methodological challenges to their work and the business as a whole. After Gallup's final pre-election poll in 2012 predicted that Republican presidential candidate Mitt Romney would win the popular vote narrowly, the polling organization completed an exhaustive examination of its practices and released it to the public. In January 2018, Gallup reported that the average response rate for its social-series surveys in 2017 was 7%, down from 28% in 1997. As Gallup points out, low response rates "do not pose a risk to data integrity," but they do mean that conducting good surveys takes more time and is more expensive.

In a May 2017 report on methods, the Pew Research Center similarly noted that response rates have plateaued in the past four years at 9% (down from 36% in 1997). Pew believes it is still possible to conduct good polls despite declining participation; its research shows "that the bias introduced into survey data by current levels of participation is limited in scope." Low-response telephone polls "track well" with high-response surveys conducted in person, both on measures of party affiliation, political ideology, and religion and across a range of lifestyle, health, and demographic factors. Still, it is an open question whether other pollsters in the public domain have the deep pockets to do the kind of work that Gallup and Pew do.

POLLING CRITICS

Survey research should not be judged solely on the results of election polls, but these polls are, to quote the late Andrew Kohut of Pew, one of the giants of the profession, the "most visible expression of the validity of the survey research method." And today, election polls (and polls more broadly) are subjected to withering criticism from many quarters.

Let's start with the public's verdict. In 2011, a comprehensive assessment in the polling field's academic journal, Public Opinion Quarterly, described a "markedly negative shift in attitudes toward public opinion researchers and polls across several dimensions between the mid-1990s and the first decade of the 2000s." It is hard to imagine that impressions have since improved. In election years, Pew asks people to grade the pollsters on the way they conduct themselves in the campaign. In its 2016 post-election poll, 21% of registered voters gave the pollsters an "A" or "B" — half the support pollsters received when Pew asked the identical question in 1988. A quarter gave the pollsters a "C," 21% a "D," and 30% a failing grade. Additionally, a July 2017 NPR/PBS NewsHour/Marist poll asked people how much they trusted different American institutions. Thirty-five percent trusted public-opinion polls a great deal or a good amount, a rating very similar to the 37% who gave that response about the Trump administration. The polls fared slightly better than the media (30%) and Congress (29%), but far worse than the intelligence community and the courts (both 60%).

And then, of course, there are the politicians' criticisms. Trump's tweets about the polls are only the most recent. A few days before his inauguration, the president-elect tweeted about polls showing him with historically low approval ratings: "The same people who did the phony election polls, and were so wrong, are now doing approval rating polls. They are rigged just like before." While it is a time-honored tradition for politicians and presidents to criticize the polls when they don't like the results, Trump's tweets about "fake polls" to his more than 40 million followers come at a time when many public pollsters are feeling vulnerable.

To make matters worse, more than one phony poll has been discovered recently. In the highly competitive special election this March for a U.S. House district in Pennsylvania, Timothy Blumenthal, the author of a fake poll attributed to "Blumenthal Research Daily," said he made it up in 15 minutes, piecing it together on a "rather sloppy google doc" in "an obvious troll" attempt. Blumenthal said that "almost nobody" fell for the poll, though that is hardly comforting.

Criticism of election polling also comes from within campaigns. In a New York Times op-ed published days before the 2016 election, Jim Messina, President Obama's 2012 campaign manager, wrote that "[t]he best campaigns don't bother with national polls." He said the Obama campaign compiled data from competitive states but did "no national tracking polls." Painting with a very broad brush, he concluded, "I've come to hate public polling, period." Many campaign strategists, aware of the importance of states in our electoral system, were astonished to learn that the Clinton campaign conducted, in the words of Democratic pollster Stan Greenberg, "no state polls in the final three weeks of the general election," relying instead on data analytics (collecting massive amounts of data from many sources to find patterns and rate voters in order to optimize contact with them). This censure of the Clinton campaign's top brass (Greenberg labeled it "malpractice") is not a criticism of the polls themselves, but it shows how traditional polls were regarded in a key campaign. Another reminder of the persistent criticism of election polling appeared in a New York Times story on November 6, 2017, with the headline, "After a Tough 2016, Many Pollsters Haven't Changed Anything."

But criticism of election polling shouldn't indict the whole business. Anna Greenberg and Jeremy Rosner, partners at the highly regarded Democratic firm Greenberg Quinlan Rosner Research, made the case recently for old-school polling when it is done well, with "hard listening" and significant investment. They worry that many firms and clients are not willing to pay the cost of collecting high-quality data, and are moving to data analytics instead. The demise in 2016 of the highly regarded Field Poll in California (founded in 1947), after its multinational corporate sponsor pulled the plug, may be a harbinger of things to come for high-quality research.

Other aspects of the polling business are also facing scrutiny: Standard surveys, for example, may be becoming a smaller and less meaningful part of market research. Advertising Age (now known as Ad Age) has been tracking big marketing companies' shift away from traditional surveys for a long time. In a 2011 panel discussion at the Advertising Research Foundation conference, Joan Lewis, who presided over Procter & Gamble's $350 million annual research budget as the company's global-consumer and market-knowledge officer, noted that P&G would continue to use survey research — even though it would likely be less important as modern social-media tools enable businesses to learn more about their customers. Then came the 2016 election results, which sent shock waves through the market-research industry. In the New York Times, Sarah Hofstetter, chairwoman of the digital-strategy agency 360i, said the outcome of the election was a "wake-up call" that data do not always "give you the full picture" of people, their actions, and their intentions. These conclusions, Hofstetter said, call into question "the rules of market research," a field that, much like traditional political campaigns, usually relies on national surveys and focus groups. These widespread criticisms from inside and outside the industry present a challenge to pollsters going forward.

BUSINESS MODELS

Determining how many pollsters regularly conduct surveys these days can be a difficult task. For its 2016 election analysis, the AAPOR committee reached out to 46 different polling organizations; only half responded to its requests.

Some pollsters have been in business for decades. Gallup, the Opinion Research Corporation, and the Roper Organization began their work in the 1930s and '40s. ABC, CBS, and NBC's polling efforts began in the 1970s and '80s, and soon they had print partners: the New York Times, the Washington Post, and the Wall Street Journal, respectively. CNN began polling in the 1980s. Pew's work started as a research project for the Times Mirror Company, but became the Pew Research Center for the People and the Press in 1996. Fox News began its polling effort with a bipartisan team in 2000.

The five major networks, together with the Associated Press, conduct the Election Day exit poll through a consortium called the National Election Pool, but Fox and AP are withdrawing from this expensive and valuable exercise, and it is not clear what that will mean for the remaining members of the consortium after 2018. (The remaining pool members — ABC, CBS, CNN, and NBC — plan to conduct a national exit poll and some state exit surveys in 2018.) There are a number of longstanding university polls, including the Rutgers-Eagleton Poll (founded in 1971) and the Marist Poll (founded in 1978). Universities such as Quinnipiac, Suffolk, and Monmouth also field regular national polls. In 2008, more than 40 academic survey-research groups formed the Association of Academic Survey Research Organizations.

Other long-running surveys that have made immense contributions to knowledge about public opinion include the University of Michigan's American National Election Studies (started in 1948) and its Surveys of Consumers (started in 1946). NORC at the University of Chicago was founded in 1941 and started fielding its highly regarded General Social Survey in the early 1970s. UCLA has surveyed college freshmen for more than 50 years, and the Chicago Council on Global Affairs began tracking foreign-policy attitudes in the early 1970s. Gallup and Phi Delta Kappa began conducting yearly surveys of attitudes about schooling in 1969, although in recent years PDK has partnered with Langer Research Associates. In addition to polling regularly on health care, the Kaiser Family Foundation has conducted more than 30 polls with the Washington Post on a variety of topics. 

Joining these old-time pollsters are relative newcomers that spew out data on a regular basis: YouGov, Survey Monkey, and Morning Consult provide reams of data collected online almost weekly. Given the expense of traditional telephone/cell-phone surveys, many pollsters are moving to online surveys; in March of this year, Kos Media and its website, the Daily Kos, began using daily online tracking polls for a number of popular survey questions. While there are questions about methodology, the online polls often drive media reporting on the public mood. 

While criticism of methodology and outcomes may be lessening the use of surveys, the changing composition of the field is also upending the polling business itself. In today's fast-changing news environment, pollsters (even those without major media partners) compete to attract media attention for their latest survey. This isn't new. In an essay in Public Opinion Quarterly from 1980, the late pollster and professor Irving Crespi argued that "[t]opics that can be expected to create front-page headlines dominate, leading to a spasmodic coverage of the agenda of public concerns." He also noted that media-sponsored polls on the front pages of major publications limit coverage of more important long-term trends. While Crespi was writing about media-sponsored polls, even pollsters without media partners practice "pack-polling" in terms of the issues they cover. The codependent media-polling relationship thus accounts for some of the self-inflicted problems with the polls that often go undetected by polling analysts and the general public. 

POLLING PROBLEMS

The reliance of pollsters on the media, whether they are sponsored by those outlets or simply seeking coverage, has engendered other significant, but less noticed, problems for the polling industry. First is the issue of "short-termism": Pollsters ask questions about a controversial news event to secure coverage, only to move on to the next topic, making it difficult to determine how public attitudes are changing over time.

On February 3, 2017, HuffPost Pollster, a site that tracks polls, analyzed seven surveys conducted within days of President Trump's announcement of a travel ban, which suspended refugee admissions to the U.S. and temporarily barred entrants from some Muslim-majority countries. Three of the polls found the public approving of the ban, while four found it reacting negatively. HuffPost Pollster noted correctly that the wording of questions on a sensitive topic and different survey methods can affect results; additionally, initial reactions aren't always settled ones. Other pollsters also conducted surveys in the early weeks after the president's executive order, but their efforts petered out when the ban ceased to dominate the front page.

Collecting this large cache of data on an issue of public concern was valuable, but it was troubling that the initial results conflicted as much as they did. An even greater concern is that most pollsters soon dropped the subject completely in favor of new topics as fresh controversies took over the news cycle. One could make the same point about polls in many other areas. In previous eras, most pollsters steered their own course. They have always been sensitive to the news, and new questions will always arise, but consistent polling on the same subjects establishes baselines of public opinion. These baselines allow for comparison and perspective as the public reacts to a particular event. But if pollsters ask about a subject only at the height of a controversy, as many tend to do now, caution should be exercised in reporting the results. Yet most present their findings as settled — even when they may be far from it.

During the Occupy Wall Street protests, for instance, pollsters asked questions about capitalism, income inequality, and similar topics, yielding valuable insights. But once Occupy petered out, only a few pollsters returned to those broader questions. It is difficult to know how fixed attitudes are on a given subject if pollsters only measure them when the topic is front-page news. Income inequality hasn't gone away, and knowing whether people's views have changed or stabilized is worthwhile. Similarly, several pollsters were in the field for months after the 2008 financial crisis, when many Americans were worried that our financial system could collapse. The 10th anniversary of the crash is this year, but only a handful of pollsters in the public domain have asked about how secure the financial sector is now.

When Michael Brown was killed in August 2014 in Ferguson, Missouri, pollsters rushed into the field with questions about race relations and police practices. They were able to refer back to questions asked during previous policing controversies in Los Angeles and New York City, and they captured important changes. But after the initial rush of activity, questions about race relations and the police mostly disappeared. In December 2016, the Cato Institute published survey results on public attitudes toward police tactics; the next month, Pew released a substantial survey on the attitudes of police officers themselves; and Gallup released new data last summer on public confidence in the police, which provided useful updates. But these were the exceptions — until the violent confrontation last August between white nationalists and counter-protesters in Charlottesville, Virginia, prompted a new surge of opinion polling. This is yet another example of pollsters swooping in when something is hot, then dropping the subject.

The same phenomenon can be seen with polling about sexual harassment. During the Supreme Court confirmation hearings for Clarence Thomas in 1991, when he faced accusations of sexual harassment from former colleague Anita Hill, pollsters asked questions about sexual harassment. Until the issue came roaring back in late 2017 with the #MeToo movement, questions were still asked, but they were frequently about political scandals (like those involving President Clinton, Senator Robert Packwood, and presidential candidate Herman Cain). Few identical questions were asked over this span that could give us a sense of how American women have experienced harassment.

A second problem with a polling industry dependent on the media is that pollsters tend to focus too much on political questions and the obsessions of Washington insiders, rather than the concerns of ordinary Americans and other important topics and long-term trends. Polls today are infused with Washington's preoccupations. For example, long before Hope Hicks became White House communications director in September 2017, one pollster had been asking almost weekly whether Americans had a positive or negative impression of her. Did the public really know or care about her? One gets the impression from some of these polls that they aren't being done for the public or to learn about the public, but instead to satisfy inside-the-Beltway political junkies.

Between January 1 and November 6, 2016, according to the Roper Center's archive, pollsters asked people whether they had a favorable or unfavorable opinion of Hillary Clinton, and separately, of Donald Trump, roughly 70 times. (The Roper Center at Cornell University houses the largest archive of survey data from major pollsters in the United States; the center does not archive internet-only surveys such as those conducted by YouGov.) It was obviously important to know that both candidates were (and still are) regarded negatively. But the endless repetition of the question seemed to be driven by the need to solicit media attention — to have the latest poll on the two unattractive candidates — rather than by a desire to help observers understand the electorate. Such repetition also crowds out other important issues: For example, only 11 questions in the Roper Center's archive between January and November 2016 asked people to compare how Trump and Clinton would handle taxes. The same criticisms can be made of the frequent polls on job approval. As Peter Hart, the dean of Democratic pollsters, put it in 1993, "It's like pulling a plant up every three days and looking at its roots to see how it is growing."

More polling about the everyday lives of Americans would likely yield more insight about the state of our nation than our present obsessive focus on politics. In the early years of polling, pollsters regularly asked people questions about daily life — questions such as whether they had been more than 1,000 miles from home, received a speeding ticket, won a prize in a contest or played the lottery, been robbed, spent a night in jail, given up smoking or drinking, or taken out a mortgage. Pollsters asked about bedtimes and whether a family ate dinner together, all alongside political questions. Reading the Roper Center/Fortune and Gallup poll commentary from decades ago, one is struck by the tremendous respect these pollsters had for the general public. Yet questions in the public domain about routines and behaviors — ordinary life — are rarer these days.

For decades, three polling organizations provided in-depth profiles of public attitudes toward work. They revealed that most people are highly satisfied with their jobs: They like their co-workers a lot, and their levels of satisfaction in areas as different as vacation time, on-the-job stress, and employer flexibility have been high and stable. Each organization used slightly different wording, and their varied approaches gave us unique insights about a central part of American life. Today, only Gallup is updating these trends about employment for public release. Stability and satisfaction don't appear to be news; nor are long-term trends in areas such as this.

Likewise with patriotism, another topic that few pollsters ask about these days. Here again, the results have been stable for some time, perhaps explaining pollsters' lack of interest. But if our democracy is endangered, as some academics have recently claimed, isn't it important to know if and how Americans have changed their minds on this topic?

The third problem is that pollsters too often accentuate Americans' political polarization, depicting the nation as deeply divided. Many measures certainly indicate that the country is sharply split politically, but even some of the most careful pollsters hype their results. Unlike the denizens of Washington, most people aren't consumed by politics, so the deep partisan divisions these polls unearth do not necessarily mean we are hopelessly divided in our daily lives. For all the talk of a deeply divided country, most pre-election polls showed that only around 10% of Americans had had a major argument, or had ended or lost a friendship, because of the 2016 campaign. We don't have much historical data on this question, so it is hard to know if the 2016 results are significantly different from what people would have said in earlier campaigns.

One recent set of questions, widely covered by the media, asks people whether they would be displeased (1960 wording), somewhat or very upset (2008), or somewhat or very unhappy (2010), if their son or daughter were to marry someone with a different party affiliation. The answers show substantial change from 1960, although fewer than half of Republicans or Democrats in the 2010 measure said they would be unhappy. (The research would be more persuasive if the wording of the questions had been identical.) A 2014 survey found that roughly equal percentages of Democrats (15%) and Republicans (17%) said they would be unhappy about bringing someone from the other party into their family. Most said it wouldn't matter, but the media hyped the results. The pollsters can't be blamed when the media over-interpret careful work in areas such as social distance. But the pollsters need to attract attention for their work, and hyping results gets it.

In the past, the well-documented negativity bias that dominates the media did not affect the polls, but it certainly does now, as evidenced by the relentless pursuit of scandals in Democratic and Republican administrations alike. Already, for example, three pollsters have started asking regularly about impeaching Trump. The question is a fair one, but it's a shame that it crowds out other, potentially more useful subjects.

The fourth change I've observed in polling, not directly related to media coverage, is that pollsters have been much more likely in recent decades to ask questions involving emotions. Data from Cornell's Roper Center reveal that, in the 40 years between 1935 and 1975, pollsters asked a total of seven questions using the words "anger" or "angry." In the 40 years between 1975 and 2015, by contrast, they asked more than 1,000. Another instructive example concerns questions about lying. In polls conducted between 1935 and 1975, the words "lie" or "lying" were used four times; between 1975 and 2015, they appeared in more than 700 questions. Only one question using the word was asked during Watergate, and it was a question about Richard Nixon's aides, John Ehrlichman and H. R. Haldeman, not Nixon himself (though there were questions about his truthfulness). And there are hardly any political-compassion questions at all in the Roper Center's archive before the late 1970s. Has American society become more emotional, and are the polls simply reflecting that change? Perhaps, but it is also possible that using more emotionally loaded questions has distorted perceptions of what Americans are really thinking. How you word questions about levels of anger can produce very different results.

The fifth and final problem with polling is the implicit bias of pollsters. Many conservatives see overt bias in the polls, but I don't find much evidence of this. Most pollsters are highly dedicated professionals who don't appear to be particularly political or ideological. But, like all of us, they are products of the milieu they inhabit, and this affects the type of questions they ask. For years, pollsters asked the public whether we needed stricter regulation of guns. Then a pollster asked whether more gun laws would make a difference. Many people said they wouldn't (though this sentiment may be changing now). The newer question casts the issue of gun violence in a different light, but it doesn't appear to be a perspective that most pollsters had previously considered. Similarly, although there are dozens of questions about abortion each year, there are only a handful of questions in the public domain that ask if abortion is murder or if it ends a human life.

To take another example, a major pollster completed another substantial survey on climate change this year, confirming that there are divisions between Republicans and Democrats on the subject. But the extensive poll only touched lightly on whether new technology could mitigate its worst effects, or whether adaptation strategies might help to address the problem. In these areas, there were modest differences among partisans. None of this is deliberate; it simply reflects the educated milieu in which many survey-research professionals live and work. 

REGAINING TRUST

Pollsters deserve praise for their efforts to address the methodological challenges they face, but it remains an open question whether the concerns about traditional surveys can be overcome, and whether new techniques or improved old ones can address the problems that have been identified. These debates will continue as pollsters review their performance and experiment with new methods; in the meantime, they are prompting healthy and substantive self-analysis. Improving polling techniques will be vital, especially because data analytics offers a tempting alternative to polls. Despite its growth and promise, data analytics can only ever reveal so much about public opinion; it simply can't capture the public's voice the way that polls do.

Surveys are an invaluable way to understand America and how the views of its citizens have changed over time. The importance of polling makes the problems with the industry — the obvious problems and the subtler ones — all the more troubling. As the media-saturated polling environment encourages pollsters to flit from controversy to controversy, asking the questions that garner attention and front-page headlines, they neglect important issues and long-term trends. In overlooking the concerns of ordinary Americans and overemphasizing their partisan divisions, pollsters fail to perform one of their most important roles. Polls are intended to serve the public and help us better understand the views of Americans, but many in the industry seem to have lost touch with the people they are supposed to poll. If pollsters are going to regain the confidence of the public and the rest of their critics, they must start by refocusing on the issues and values that matter most to Americans.

Karlyn Bowman is a senior fellow at the American Enterprise Institute. She thanks Heather Sims for her assistance.

