Yesterday I wrote about problems with the Electoral Integrity Project, a set of expert surveys intended to “evaluate the state of the world’s elections” but with some serious flaws, notably rating more than half of the U.S. states in 2016 as having lower integrity than Cuba (!) and North Korea (!!!) in 2014.
I was distressed to learn that these shaky claims regarding electoral integrity have been promoted multiple times on the Monkey Cage, a blog with which I am associated. Here, for example, is that notorious map showing North Korea as having “moderate” electoral integrity in 2014.
The post featuring North Korea has the following note:
The map identifies North Korea and Cuba as having moderate quality elections. The full report online gives details on how to interpret this. It does not mean that these countries are electoral or liberal democracies. The indicators measure expert perceptions of the quality of an election based on multiple criteria derived from international standards.
It’s good to recognize the problem, but the above note isn’t nearly enough. When you have a measure that makes no sense in some cases, the appropriate response is not to just restate that you’re measuring “expert perceptions of the quality of an election” but to figure out what exactly went wrong! Recall that in this case, North Korea was rated as above 50 on every one of the “multiple criteria” given in their report. You can say “expert perceptions” and “international standards” as many times as you want and it doesn’t resolve this one.
When you find a bug in your code, you shouldn’t just exclude the case that doesn’t work, you should try to track down the problem.
More recently, the Electoral Integrity Index was featured in this Monkey Cage post entitled, “Why don’t more Americans vote? Maybe because they don’t trust U.S. elections,” by Pippa Norris, Holly Ann Garnett and Max Grömping, who concluded with the statement that “the U.S. ranks 52nd out of 153 countries worldwide in the 2016 Perceptions of Electoral Integrity index, and at the bottom of equivalent Western democracies.”
That post also featured a correlational analysis—states with higher measured electoral integrity also, on average, had higher voter turnout—and gave this an entirely unpersuasive causal interpretation (“electoral integrity has an effect [on turnout] as well”). It’s just terrible to say that; it runs contrary to basic principles of social science.
At this point, I wouldn’t be surprised if word processors such as Microsoft Word and Google Docs could even have a Social Science Mode that would find unsupported causal claims in your text (search for “cause,” “effect,” and a few other words) and highlight them in red.
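To be concrete, here’s a minimal sketch of what such a “Social Science Mode” checker might look like. This is purely illustrative—the function name, the word list, and the sample text are all my own invention, and a real tool would of course need far more linguistic sophistication than a keyword search:

```python
import re

# Words and phrases that often signal an (unsupported) causal claim.
# This list is a hypothetical starting point, not an endorsed standard.
CAUSAL_WORDS = re.compile(
    r"\b(cause[sd]?|effects?|impacts?|leads? to)\b", re.IGNORECASE
)

def flag_causal_claims(text):
    """Return the sentences in `text` that contain causal-claim language."""
    sentences = re.split(r"(?<=[.!?])\s+", text)
    return [s for s in sentences if CAUSAL_WORDS.search(s)]

sample = ("Turnout correlates with integrity scores. "
          "Electoral integrity has an effect on turnout as well.")
print(flag_causal_claims(sample))
# → ['Electoral integrity has an effect on turnout as well.']
```

A real “Social Science Mode” would then highlight the flagged sentences in red, leaving it to the author to decide whether the causal language is actually warranted by the design.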
Just to be clear: I’m not saying that this sort of work could or should be excluded from the Monkey Cage. Norris et al. are studying an important topic, even if their methods are seriously flawed. But it is disturbing that we’ve been presenting their work entirely uncritically.
The Monkey Cage is one of the few public faces of political science, and when we feature work claiming that North Korea has moderate electoral integrity, or that subliminal smiley faces have huge effects on political attitudes, or that there are large numbers of votes cast by non-citizens, we’re discrediting our own field as well as polluting the public discourse. So, yes, let’s present controversial and preliminary work: if a well-respected survey gives results that don’t make sense, this is a fine topic for the Monkey Cage. It would just be best to express such claims in the spirit of scientific speculation rather than as scientific fact.
The next step is to correct our errors and learn from them. For example, after a blogger pointed out implausible estimates in my election maps, I went back, figured out what I’d been doing wrong, and posted an update. After the Monkey Cage published that post on non-citizen voting, our editors added the following note:
The post occasioned three rebuttals (here, here, and here) as well as a response from the authors. Subsequently, another peer-reviewed article argued that the findings reported in this post (and affiliated article) were biased and that the authors’ data do not provide evidence of non-citizen voting in U.S. elections.
And after reading my criticisms of her work, Pippa Norris posted a long note with some details on her project.
That’s the way to go forward.