All the things that don’t make it into the news

I got buzzed last week by a couple of NY journalists about this recent political science fraud case. My responses were pretty undramatic so I don’t think they made their way into the news stories.

Which is fine. As a reader of the news, I like to see excitement so it’s fair enough that reporters run with the juicy quotes.

The most recent exchange went like this:

Journalist:

Able to comment on LaCour’s response? I’m trying to put together a comprehensive response to his argument by Monday or Tuesday, and could use a statistician’s input. . . .

My snappy reply:

Someone pointed this out to me the other day and here’s what I wrote:

My favorite bit in LaCour’s document was this:

Instead, I raffled Apple computers, tablets, and iPods to survey respondents as incentives. I located some of the receipts, Link here. Some of the raffle prizes were purchased for a previous experiment I conducted.

The receipts are for a few computers and peripherals he bought over a 2-year period. But what’s really funny is the last bit. He did a “previous experiment” in which he promised to raffle off electronics, and then he just kept the prizes himself to use in experiment #2?? That ain’t cool.

And what’s with this R output at 7 decimal places??? I’m having less and less respect for this guy.

In all seriousness, I don’t see that any of us should be wasting any time looking at his numbers at this point. The only reason to believe that any of these numbers represent real data is from LaCour’s words, and he’s already demonstrated repeatedly that he is willing to lie about just about anything. Even in his document he admitted tons of lies. Just, for example, his statement that he did a raffle for an earlier experiment and then kept the prizes. I believe this guy on data about as much as I believe Tony Blair on weapons of mass destruction.

That’s a pretty good line and I’m thinking maybe it’ll make it into the published article.

But I thought you might be interested in what didn’t make it into the news. Consider this a small strike against selection bias. These came in last week.

Journalist:

Apparently there was an anonymous post on PoliSciRumors.com way back in December which included some R or STATA output highlighting the weirdly high test-retest reliability on the feelings thermometer . . . What I don’t know is exactly what to make of it. Is it fair to say that, if in fact someone had posted output on the test-retest reliability issue in December or January, and that post had stayed up, it would have been relatively easy for stats-savvy folks to take the ball and run from there?

I replied blandly: I have not looked at these data myself, but if there indeed is a problem with them, then I imagine the biggest hurdle is not the analysis, so much as imagining there might be fraud in the first place. It’s hard for me to say because, as I wrote on the Monkey Cage a few months ago, I didn’t know what to think about these results in any case. It makes sense that researchers who cared more about this topic were the ones to look into the data in detail and find the problems.
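For readers wondering what the test-retest check actually involves, here’s a minimal sketch in Python with simulated data. Everything here is invented for illustration (these are not LaCour’s numbers): the idea is just that a 0–100 feelings thermometer asked months apart should correlate well below 1, because genuine re-measurement adds fresh noise each wave, so a near-perfect over-time correlation is a red flag.

```python
# Minimal sketch with simulated (hypothetical) data of the kind of
# test-retest check the anonymous poster reportedly ran.
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Plausible panel data: a stable underlying attitude plus independent
# measurement noise at each survey wave.
attitude = rng.normal(50, 20, n)
wave1 = np.clip(attitude + rng.normal(0, 10, n), 0, 100)
wave2 = np.clip(attitude + rng.normal(0, 10, n), 0, 100)

# Suspicious data: "wave 2" is just wave 1 plus a tiny perturbation,
# as if the second wave were manufactured from the first.
wave2_suspicious = np.clip(wave1 + rng.normal(0, 1, n), 0, 100)

r_plausible = np.corrcoef(wave1, wave2)[0, 1]
r_suspicious = np.corrcoef(wave1, wave2_suspicious)[0, 1]
print(f"plausible panel:   r = {r_plausible:.2f}")
print(f"suspicious panel:  r = {r_suspicious:.2f}")
```

With these made-up noise levels the plausible panel lands around r ≈ 0.8 while the manufactured one sits near 1.0 — which is the basic shape of the anomaly the poster flagged. The analysis itself is a one-liner; as I said above, the hard part is thinking to run it with fraud in mind.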

Journalist:

You’ve probably heard about the recent study about voters’ attitudes on gay marriage that is probably fraudulent. . . . What’s getting less attention is a similar study that the same author, graduate student Michael LaCour, conducted about people’s opinions on abortion. That study was not published, but he presented the data at a conference — see PDF attached.

I’m hoping that a statistical expert might look over this abortion data and see if there are any irregularities. This data, though not published by a journal, has received a good deal of press, and so we think it’s worth scrutinizing.

To which I replied: I took a look. LaCour is a good writer, I’ll give him that! And he knows how to make pretty graphs. I’m reading the paper . . . he says he recruited people from the voter file but he doesn’t say how he did it. I assume it’s by mail, because the voter file has addresses? He reports a 13% recruitment rate which sounds high for a recruitment by mail. But, who knows, maybe that’s possible.

In short, there’s not enough information in the paper you sent me to determine if the data have irregularities—at least, nothing that I can see. Given the problems with his other study, though, I would see no reason to believe this one.

OK, so now you’ve seen a couple of out-takes from the news. Consider yourself an insider.

17 thoughts on “All the things that don’t make it into the news”

    • Apropos one of the themes of this blog. Note that Martin, cited above, suggests that LaCour engaged in some unattributed cutting and pasting. Martin wrote:
      “Appendix A contains a lengthy verbatim excerpt from Gentzkow and Shapiro
      (2010) that is not identified as such. Compare Figures 3 and 4.”

      Bob

  1. I suspect what LaCour meant when he said he purchased the prizes for a previous experiment is that he bought some iPads with research funds, used them as part of an experiment (had people punch in responses on them, etc.), then raffled off the ‘nearly new, barely used’ iPads. Or something of the sort. I have colleagues who regularly end up with free electronics because of corporate purchases for a specific use; these electronics are used once or twice, then handed out for people to use/have afterward.

    That’s the impression I got from his statement, anyway, not that he purchased prizes for another raffle which he didn’t end up handing out. Purchasing for an experiment versus purchasing for a raffle.

    • Bill:

      Yes, that seems like a reasonable idea on Broockman’s part to post his analyses anonymously. Too bad someone didn’t tell LaCour and Green about that posting back then, so they could’ve retracted their article a few months earlier.

      • The email exchanges released by LaCour (if not themselves faked) show that he and Green were aware of the anonymous post at the time it was made. LaCour of course didn’t move to retract, and Green did not at the time suspect fraud, though he questioned LaCour about the high over-time correlation, to which LaCour provided an “innocent explanation”.

  2. Can I go off on a tangent to point out that not only did Blair and the UN believe Iraq had weapons of mass destruction, but so did Saddam. The mendacity of his underlings blindsided everyone.

    This stuff is well documented and it’s tedious to see false narratives being peddled as cheap shots.

    • The problem was the category “WMD” was invented for this purpose. The UN suspected Iraq still had some chem weapons somewhere. No one thought bio weapons. No one thought nukes. “WMD” was coined to blur the three in the public mind.

  3. You nailed it with this sentence

    > In short, there’s not enough information in the paper you sent me to determine if the data have irregularities

    unfortunately, it applies to a lot of published papers!
