At least 10 Republican presidential candidates will take part in a crucial televised debate Thursday night, their participation determined by the results of a handful of public opinion polls.
It’s likely those polls are wrong.
“Everyone has been way off, all over the world,” said Marcus Leach, a political consultant in Kansas City. “It’s become very difficult to properly model an election because of polling industry failures.”
Pollster Cliff Zukin, writing in The New York Times: “Election polling is in near crisis.”
Complaining about the accuracy of public opinion surveys is an old habit in the United States, of course. Polls have been wrong before, sometimes spectacularly so. Trailing candidates routinely grouse.
But many campaign experts and academics believe cellphones and a reluctant public have combined to cast new, deepening shadows over the tools they use to understand what the nation wants.
And the problem extends beyond campaigns. Polls are now deeply integrated into other public choices — on health care, education, criminal justice, immigration. Polls tell us what Americans think about race relations, or a treaty with Iran, or Obamacare. Data collected through random phone surveys are used by scientists, educators and policymakers.
If candidate polling is inaccurate, opinion polls, and the decisions they lead to, may also be wrong.
“The electorate is becoming harder and harder to reach,” explained Titus Bond, president of Remington Research Group, a Kansas City-based polling firm. “Public polls are worse than they used to be.”
Polling companies and some politicians say the concern is exaggerated despite recent failures in Kansas and elsewhere. Those mistakes make headlines, they say, but most polls still accurately predict election results and voters’ views.
Nevertheless, in a world awash in smartphones, frustrated citizens and bewildering, polarized politics, pollsters are re-examining their techniques and conclusions, often on the fly.
“It’s an interesting and challenging time for the polling industry,” said Mollyann Brodie, president of the American Association for Public Opinion Research.
Costly and difficult
Poll takers once traveled from door to door, sampling voters’ views on candidates and campaigns.
Their findings were occasionally accurate and usually interesting, but they had little value beyond entertainment. That began to change in the 1920s and 1930s, when advertisers and marketers began combining sophisticated research techniques with politics. Familiar names like George Gallup, Elmo Roper and later Lou Harris began forming polling companies and studying ways to improve public opinion research.
There were still problems. The Literary Digest confidently predicted Kansan Alf Landon would defeat Franklin Roosevelt in 1936. In 1948, some polling companies said Harry Truman would lose to Tom Dewey by double digits.
Those failures focused attention on the need for more accuracy and dependability in polls. Slowly, polling firms discarded door-to-door surveys in favor of random, telephone-based, demographically balanced polling techniques. Computers and automated dialing made information-gathering relatively inexpensive, leading to more polls and polling firms through the end of the 20th century.
Hundreds of firms now conduct public opinion polls.
But the era of cheap, reliable telephone-based polls began to crumble as cellphones became common. Federal rules require that cellphone numbers be dialed by hand rather than by machine, and that left pollsters with a difficult choice: leave cellphone users out of their sample or include them at a much higher cost per call.
Today most popular national polls include cellphone users because most people use cellphones. But adding them has driven up the cost of a rigorous national or state-based poll to $50,000 or more, a price few news or information organizations want to pay repeatedly over an election cycle.
Campaigns and political parties are more willing to spend what it takes for more accurate results, but those polls are usually closely held.
Pollsters can reduce their costs by cutting the number of people they call.
“You have national polls now with 500 people in them,” Bond noted.
Pollsters can also use automated robocalls to landline phones instead of live interviewers.
Those strategies, though, can lead to less accurate results.
“All polls are not created equal,” said Peter Brown, assistant director of the Quinnipiac University Poll in Connecticut. “It has to do with money.”
Paying for a highly accurate poll is further complicated by the growing reluctance of many people to answer questions about their preferences. Bond said he sometimes makes 60,000 phone calls to obtain 1,000 usable responses, a response rate under 2 percent. Other pollsters report response rates in the low single digits.
Moreover, the people who agree to answer survey questions about politics tend to be more politically active. Using their answers can lead to misleading conclusions as well.
“The relatively few people who respond to polls may not be representative of the majority who don’t,” public opinion and statistics analyst Nate Silver wrote in June.
These new challenges come on top of the traditional issues all pollsters must address. The way questions are asked can skew results, for example, or the order in which queries are posed. Some respondents may lie. Some may change their minds — the predictive power of poll results starts to expire a few hours after those results are published.
And all polls carry a built-in margin of error, not because of any particular polling technique but because they rely on a random sample rather than the full population.
No survey can be 100 percent accurate.
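That sampling margin of error follows directly from the size of the poll. A minimal sketch of the standard formula, using the 500-person sample size Bond mentions and the 1,000-response polls he aims for (the formula is standard; the sample sizes are taken from the article, and a 50-50 split is assumed as the worst case):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Sampling margin of error for a proportion p at 95% confidence.

    p=0.5 is the worst case (widest interval); z=1.96 is the
    standard 95%-confidence multiplier.
    """
    return z * math.sqrt(p * (1 - p) / n)

# A 500-person poll carries roughly a +/-4.4-point margin;
# a 1,000-person poll narrows that to about +/-3.1 points.
print(round(100 * margin_of_error(500), 1))   # 4.4
print(round(100 * margin_of_error(1000), 1))  # 3.1
```

Shrinking the sample from 1,000 to 500 respondents, as cost-cutting polls do, widens the error band by more than a point in each direction.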
Those problems have led some GOP campaigns to complain they’re being unfairly shut out of Thursday’s prime-time Fox News debate. The network says its first-of-the-season exchange will be restricted to the top 10 Republican candidates in a polling average, but as of Friday it had not said which polls it would use or where each candidate stands.
“A national poll is a lousy way, in my view, to determine who should be on the stage,” Sen. Lindsey Graham said in mid-July. “And I quite frankly resent it.”
Fox is also expected to position candidates on the stage in relation to their poll numbers. Donald Trump, who leads in most polls, will apparently stand in the middle of the stage — and the TV screen.
Those left out of the prime-time event can debate earlier in the day, in a forum mocked by some pundits as the “undercard” or “kiddie table.” Yet some soon-to-be second-tier candidates have complained — accurately — that the gap between the last candidate in and the first one out is within any single poll’s margin of error.
That’s why candidates seeking a place in the Republican debate have raised their profiles in recent days, confident that headlines and cable TV appearances will give them a small bump in the polls. Such “gaming” of polls is a common tactic, and a nuisance for pollsters.
Polling researchers have responded to these challenges by weighting the results to repair expected discrepancies. But manipulating raw results can introduce further error — and frustrate the public, which may think pollsters are showing their own biases.
At the same time, raw numbers aren’t very helpful either.
“Polling isn’t purely science,” Bond said. “There’s a lot of art involved in it as well.”
Wrong on Kansas
Last October, independent Greg Orman led Sen. Pat Roberts in several Kansas opinion polls. The lead was narrow but consistent: Of the 19 public opinion polls taken between August and the end of the campaign, Orman was ahead in 13 and tied in another.
Roberts won by almost 11 points — a landslide. Roberts’ supporters voted. Orman’s did not.
“Who is going to really turn out?” asked Jim Jonas, Orman’s campaign manager and a veteran of several campaigns, explaining his problem with polls.
“People fib to pollsters all the time about their plans to actually vote and their intensity levels of support.”
The gap between polls and reality was large in Kansas in 2014 — poll averages also predicted Paul Davis would beat Gov. Sam Brownback — but the state was hardly unique. Polls badly misjudged last year’s governor’s race in Maryland and the Senate race in Virginia as well.
And the trend has continued in 2015. Pollsters badly missed electoral predictions in Great Britain, Israel and Greece.
Analysts say the best approach in an era of substandard public polls is to average results over time. That can reduce the impact of outlier surveys and may yield a more accurate picture of the election.
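The averaging idea itself is simple: pool the most recent surveys so no single outlier dominates. A minimal sketch, using invented poll numbers:

```python
# A minimal rolling average of poll results. The numbers are invented
# for illustration; each entry is (days_before_election, support_pct),
# listed oldest first.
polls = [(30, 44.0), (21, 46.5), (14, 43.0), (7, 45.5), (2, 47.0)]

def rolling_average(polls, window=3):
    """Average the `window` most recent polls (the last entries)."""
    recent = [pct for _, pct in polls[-window:]]
    return sum(recent) / len(recent)

print(round(rolling_average(polls), 2))  # 45.17
```

The trade-off the Maryland race exposed is visible here: the average blends the late 47-point reading with older, lower numbers, so a genuine last-minute surge is muted rather than captured.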
Yet averages present problems too. Democrat Anthony Brown had a huge lead in the Maryland governor’s race until the final poll, when Republican Larry Hogan surged ahead. Hogan won — and the final poll caught the last-minute wave in his direction that poll averages could not.
Indeed, much of the polling problem in 2014 reflected an anti-Republican bias: Almost every GOP candidate outperformed his or her poll numbers.
Analysts are not sure why. Some think pollsters felt subtle pressure to make their predictions align with those in other polls, creating a survey “echo chamber” that reinforced bad data. Others believe conservatives may be less willing to participate in polls conducted by mainstream media sources they distrust, yielding left-leaning results.
Yet Democrats outperformed public polling in 2012, suggesting technique, not ideology, is the problem.
There are other ways pollsters might obtain more accurate results at a low cost. Some think Internet-based polling is the path forward, although digital questionnaires present their own set of concerns.
Pollsters and politicians say it’s highly unlikely that surveys will simply disappear. Polls are too embedded in the nation’s decision-making fabric. Polls must be made better, experts say, not discarded.
“Good polls are important,” said Russell Renka, a retired political science professor and author of a major paper on polling techniques. “They’re powerful and informative. We rely on them all the time, in the business world and the political world.”
And, for all their problems, polls remain entertaining for campaigns, candidates, reporters and pundits. Friday morning, all will be arguing over the results of the first presidential debate of the year.
Who won? Let’s see what the polls say.