Back in 1962, Elmo Roper, a pioneer in public opinion polling, identified a problem in his field. “A preference for certainty over doubt, for the plausible over the proved, for drama over accuracy, for hunch and intuition over the hard-to-assemble facts, is a common human tendency,” he wrote.
Fifty-five years later, Nate Silver — today’s political-statistics guru — has the same complaint.
“There’s a strong desire for a narrative, and a lot of groupthink,” the founder of FiveThirtyEight told me last week.
I had called him up to ask why anyone should ever trust polls again. After all, last year, we were told that Hillary Clinton would be a shoo-in over Donald Trump. And when that turned out to be spectacularly wrong, there was hand-wringing throughout the land, and a call to substitute shoe-leather reporting for cold numbers.
Then a year passed, and the biggest election in the country, the Virginia gubernatorial race, approached. Once again relying on public opinion polls, media reports told us that Republican Ed Gillespie had closed an early gap and might well defeat Democrat Ralph Northam.
“Close polls set Democrats’ teeth on edge,” wrote Politico in one take. The Washington Post described the race as “neck and neck.”
“A lot of smart reporters and commentators, people I respect, were talking about Northam blowing it,” said Tom Bonier, chief executive of the polling firm TargetSmart. “The idea was that once again Democrats would be sunk by identity politics.”
Then came election night, and Northam not only won but won big. Voters might reasonably feel they were once again led astray.
Silver, not for the first time, argues that the numbers themselves are not to blame. In aggregate, and allowing for margins of error, they were reasonably accurate.
It’s the interpretation by journalists — particularly the pundit class — that’s to blame, he says. For one thing, they often don’t use the aggregation of the various polls, but rather a single one, often the most recent. And they interpret them to create a dramatic — preferably surprising — horse-race story.
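The difference Silver describes can be shown with a toy calculation. The numbers below are hypothetical, not actual Virginia polls; the point is only that a story built on the single most recent survey can diverge from the average of all of them:

```python
import statistics

# Hypothetical final-week polls for one race, expressed as the
# Democrat's lead in percentage points (negative would mean trailing).
polls = [3.0, 9.0, 5.0, 7.0, 4.0]

latest_poll = polls[-1]             # what a single-poll story reports
aggregate = statistics.mean(polls)  # what an average of polls reports

print(f"Latest poll alone:    D+{latest_poll:.1f}")
print(f"Average of all polls: D+{aggregate:.1f}")
```

Here the most recent survey shows a tightening race (D+4.0) while the aggregate still shows a comfortable lead (D+5.6), which is roughly the dynamic Silver faults pundits for missing.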
So the last-days narrative went like this: “Gillespie has come from behind and could easily end up winning.”
No one wanted to repeat 2016’s failure to predict accurately, so most overcorrected in the other direction, favoring Gillespie’s chances.
Bonier also sees fault in the nature of polling itself.
“Every poll rests on a prediction of turnout,” he told me. And turnout is notoriously hard to predict. So polls simply can’t be seen as a precise road map — more like a blurry mirror of a single moment.
Then there’s the fact that a lot of news consumers, and too often journalists, don’t know how polls work or how to interpret them.
That was never clearer than during last year’s campaign, when various news organizations were depicting the probability of Hillary Clinton winning. Some of the models put that probability, even in the final days, at well over 80 percent.
Silver’s site was among the most cautious. By the last days of the campaign, he had increased Trump’s chances to about 30 percent.
Pro-Clinton voters found this reassuring, Silver said, but they shouldn’t have.
“How would you feel if you were about to board a flight and you were told the plane had a 30 percent chance of crashing?” Silver asked.
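Silver’s point is that a 30 percent chance is not remote — it comes up almost one time in three. A quick simulation (illustrative only, not Silver’s model) makes that concrete:

```python
import random

random.seed(0)  # fixed seed so the sketch is reproducible

trials = 100_000
# Count how often an event with a 30% probability actually occurs.
hits = sum(random.random() < 0.30 for _ in range(trials))

print(f"A 30% event occurred in {hits / trials:.1%} of {trials:,} trials")
```

Run enough trials and the observed frequency settles near 30 percent — nearly one flight in three, by Silver’s analogy, which is why the number should not have been reassuring.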
What’s the answer? It’s not to substitute anecdotal reporting for numbers-based reporting. Both have a role in painting a fairly accurate picture, and they work well in tandem.
But lose the groupthink, says Silver. Let go of the need for a surprising narrative. Provide more context and more help interpreting the numbers, which are always an imperfect measure.
As Bonier puts it: “Pollsters and prognosticators — and I would include the media — need to do a better job presenting the uncertainty.”
But, as Elmo Roper noted half a century ago, human nature suggests that won’t come easy.