Norman Ornstein and Alan Abramowitz warn against over-interpreting poll fluctuations:
In this highly charged election, it’s no surprise that the news media see every poll like an addict sees a new fix. That is especially true of polls that show large and unexpected changes. Those polls get intense coverage and analysis, adding to their presumed validity.
The problem is that the polls that make the news are also the ones most likely to be wrong.
Well put. Don’t chase the goddamn noise. We discussed this point on the sister blog the other day.
But this new op-ed by Ornstein and Abramowitz goes further by picking apart problems in recent outlier polls:
Take the Reuters/Ipsos survey. It showed huge shifts during a time when there were no major events. There is a robust scholarship, using sophisticated panel surveys, that demonstrates remarkable stability in voter preferences, especially in times of intense partisan preferences and tribal political identities. The chances that the shifts seen in these polls are real and not artifacts of sample design and polling flaws? Close to zero.
What about the neck-and-neck race described in the NBC/Survey Monkey poll? A deeper dig shows that 28 percent of Latinos in this survey support Mr. Trump. If the candidate were a conventional Republican like Mitt Romney or George W. Bush, that wouldn’t raise eyebrows. But most other surveys have shown Mr. Trump eking out 10 to 12 percent among Latino voters.
There’s only one place where I disagree with Ornstein and Abramowitz. They write:
Part of the problem stems from the polling process itself. Getting reliable samples of voters is increasingly expensive and difficult, particularly as Americans go all-cellular. Response rates have plummeted to 9 percent or less. . . . With low response rates and other issues, pollsters try to massage their data to reflect the population as a whole, weighting their samples by age, race and sex.
So far so good, but then they say:
But that makes polling far more of an art than a science, and some surveys build in distortions, having too many Democrats or Republicans, or too many or too few minorities. If polling these days is an art, there are a lot of mediocre or bad artists.
What’s my problem with that paragraph? First off, I don’t like the “more of an art than a science” framing. Science is an art! Second, and most relevant in this context, the adjustment of a sample to the population is a scientific process! Suppose a chemist is calculating energy release in an experiment and has to subtract off the energy emitted by a heat source. That’s science—it’s taking raw data and adjusting it to estimate what you want to learn. And that’s what we do when we do survey adjustment (for example, here). Yes, you can do this adjustment badly or with bias, just as you can introduce sloppy or biased adjustments in a chemistry experiment. But it’s still science.
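To make the point concrete, here’s a minimal sketch of the kind of adjustment being described: post-stratification, where each demographic group’s response is reweighted from its sample share to its population share. The numbers and group definitions below are entirely made up for illustration; real survey adjustment uses many more cells and more careful modeling.

```python
# Hypothetical raw sample: respondents by age group, with each group's
# observed support rate for a candidate. All numbers are invented.
sample = {
    "18-44": {"n": 300, "support": 0.40},
    "45+":   {"n": 700, "support": 0.60},
}

# Hypothetical population shares for the same groups (e.g., from census data).
population_share = {"18-44": 0.45, "45+": 0.55}

total_n = sum(g["n"] for g in sample.values())

# Unadjusted estimate: each group weighted by its share of the sample.
raw_estimate = sum(g["n"] / total_n * g["support"] for g in sample.values())

# Adjusted estimate: each group reweighted to its population share.
adjusted_estimate = sum(
    population_share[k] * g["support"] for k, g in sample.items()
)

print(f"raw: {raw_estimate:.3f}, adjusted: {adjusted_estimate:.3f}")
# → raw: 0.540, adjusted: 0.510
```

Here the sample over-represents older voters, so the raw estimate is pulled toward their higher support rate; reweighting to the population shares corrects for that. The arithmetic is mechanical and checkable, which is the sense in which the adjustment is science, not just art.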
Anyway, I agree with the main points in their op-ed.
P.S. For more on polling biases in the 2016 campaign, see this thoughtful news article by Nate Cohn, “Is Traditional Polling Underselling Donald Trump’s True Strength?”
P.P.S. I would’ve posted this all on the sister blog where it’s a natural fit, but I couldn’t muster the energy to add paragraphs of background material. One pleasant thing about blogging here is that I can take it as your responsibility to figure out what I’ve written, not my responsibility to make it accessible to you. Indeed, I suspect that part of the fun of reading this blog for many people is that I don’t write down to you. I write at my own level and give you the chance to join in.
I’m planning to write a few books, though, so I’ll have to shift gears at some point. Damn. I’ve become so comfortable with this style. Good for me to get out of my comfort zone, I know. But still.