The quals and the quants

After I recently criticized Gregg Easterbrook for assigning Obama an implausible 90+% chance of beating Mitt Romney, some commenters thought I was being too critical, that I should cut Easterbrook some slack because he was just speaking metaphorically.

In other words, Easterbrook is a “qual.” He uses numbers in his writing because that’s what everyone is supposed to do nowadays, but he doesn’t intend those numbers to be taken literally. Similarly, he presumably didn’t really mean it when he wrote that Scott Brown and Elizabeth Warren “couldn’t be more different — personally or politically.” And he had no problem typing that Obama’s approval rating was 23% because, to him, “23%” is just another word for “low.”

He’s a qual, that’s all.

Similarly, Samantha Power was just being a qual when she wrote the meaningful-sounding but actually empty statement, “Since 1968, with the single exception of the election of George W. Bush in 2000, Americans have chosen Republican presidents in times of perceived danger and Democrats in times of relative calm.” (See here for my deconstruction.)

What really irritates me is when quals are overconfident about throwing around numbers that don’t make sense. I have no problem with qualitative research and qualitative thinking—I’m a big fan of George Orwell and Steven Rhoads, and, for that matter, this post, along with much of my blogging, has no quantitative content—but I don’t like it when quals make strong statements that are essentially quantitative without any attempt to evaluate them as such. I get irritated in the same way that Bill James gets annoyed by baseball writers who disparage statistics and then turn around and celebrate some player who had flashy stats one year in a hitter’s park.

I get particularly annoyed when such people speak with an air of authority. That’s why I dinged Michael Barone, longtime editor of the legendary Almanac of American Politics, for choking on Colorado voting statistics, and it’s why I felt the need to scream when political theorist David Runciman wrote, “But viewed in retrospect, it is clear that it has been quite predictable.” If only Gregg Easterbrook or Samantha Power or any of these other people would just say they don’t know! But no, that’s not an option. Everybody has to be an expert.

P.S. Quants can make mistakes too! Often the error comes after the statistical analysis ends and when story time begins. Recall the notorious claim (by an economist, no less!) that “a raise won’t make you work harder.”

P.P.S. As noted above, I think there’s a lot of great qualitative research being done. What I don’t like is the Easterbrook-like attitude that numbers can be thrown around without consequences, and what I was exploring in my post above was the impression that many people seem to have that what Easterbrook was doing was just fine. If Easterbrook wants to give his subjective impressions, or if he wants to do a Thomas Friedman and interview some taxi drivers, that’s fine. Nobody told him he had to make up numbers.

14 thoughts on “The quals and the quants”

  1. As someone who does qualitative research, IMO, what they are doing is in no way qualitative, except in the sense that they are not being quantitative. Qualitative thinking and research is not really an opposite of quantitative thinking and research; they’re both just different ways of thinking and researching about topics. What I’d call what they’re doing is bad quantitative thinking, otherwise known as the MSU method (Making Sh*t Up), charitably known as guessing or even more charitably as educated guessing, which has nothing to do with qualitative research or thinking.

  2. So, whenever someone uses numbers, but uses them poorly, they are a qual?

    What a conclusion, well done Mr. Gelman. What should we call someone who makes such terrible conclusions?

    Good qualitative research is very precise with numbers, naturally. It has to be, if it wants to be good research. Sigh.

  3. In contrast (to Gelman), people regularly profess to be shocked when I tell them I don’t naturally equate strong belief in a claim with a number like .90.* I mean, if he was prepared to bet accordingly, doesn’t it follow that that’s his subjective probability? For a similar reason, I don’t see why a subjective prior that really indicated subjective opinion (riso priors, in my Dec 11 blogpost), and satisfied Bayesian coherency requirements, would fail to be warranted (let alone be “terrible,” as Berger alleges some might be)–for a subjective Bayesian. Or rather, some non-Bayesian criterion seems to be operating.
    *(Even if I did, it wouldn’t be a posterior probability.)

    • Mayo:

      I don’t think .9 was Easterbrook’s subjective probability, nor do I think he’ll be betting on it anytime soon. I think it was just a rhetorical flourish on his part. My problem is that if you look at his essay carefully, it all falls apart. He makes a bunch of statements that don’t make sense and are in contradiction to what everybody else believes. With a bit of quantitative understanding, I think he could’ve realized the problems and re-thought his ideas.

      Similarly, I think if Samantha Power had taken her own words seriously in the article quoted above, she could’ve realized how little sense she was making. Instead, though, she just let the words flow without carefully examining their meaning. I think that’s how our students are trained in high school and college, to be able to smoothly write something that has the form of a logical argument, without actually being logical. Quantitative thinking can be a way out of that trap; I find it’s harder to b.s. when I take my numbers seriously. The numbers have a logic of their own.

  4. I think this inappropriate usage of numbers is more equivalent to cursing than anything else.

    We all know people who use various 4 letter terms for emphasis. We don’t actually want the Cubs to be literally damned by the Almighty when they blow another game, for example. I think Easterbrook is just using various numbers in the same way. They aren’t to be taken literally — they are just thrown in there for emphasis.

    This results in huge confusion in cases like Easterbrook’s, since he’s not twittering this stuff to friends but is a widely read journalist.

    • Z:

      What bothered me about Easterbrook’s and Power’s statements was not that they were sloppy but that they were wrong, or at least unsupported by data.

      Easterbrook claimed that Obama would (a) almost certainly beat Gingrich but (b) would be better off facing Romney. I don’t see any reason to believe this, and I think a bit of quantitative reasoning might have helped Easterbrook see how off base he was.

      Power made a meaningless statement with quantitative implications. Again, I’d like to think that, had she taken her own words more seriously, she could’ve moved to a deeper level on this.

      In both cases, they’re not simply exaggerating something true (e.g., “Michael Jordan was 10 times better than Wilt Chamberlain ever was”); rather, they’re using the forms of logical reasoning to make statements that are false (or, at best, highly debatable).

  5. To paraphrase Andrew: “Similarly, I think if Andrew Gelman had taken his own words seriously in the article quoted above, he could’ve realized how little sense he was making.”

    Seriously Andrew. Could you please explain how the heck you can transform Easterbrook’s mistake into a qual-quan discussion? If one of my first-year (business school) students tried to put the above “argument” forward, I’d fail them.

    • Anonymous:

      As noted in the blog post above, I am trying to explore how it is that people give Easterbrook, Power, etc., a break when they throw around meaningless and wrong claims. I think it’s the soft bigotry of low expectations: E and P are “quals,” so nobody expects their statements to make sense; they only have to sound good. I don’t think it would be so horrible for your first-year business students to think about this! Maybe you could give them an assignment where they have to make up some numbers, Easterbrook-style, or claim a pattern in data, Power-style, without ever looking anything up. Once they realize it can be done, maybe this will inspire them to closer reading more generally.

      This sort of critical reasoning can be done with statistics books too.

  6. I get the sense that you deliberately try to misunderstand my post. The point is very simple – they may be poor qual “researchers,” but how does it make sense to turn it into a qual-quan discussion? I could pick a horrible quan researcher and do exactly the same. It would be very unproductive and would in no way highlight any of the important aspects of doing either qual or quan research. Maybe you just want to talk about how qual researchers are understood – meaning you’re not talking about qual research but about how qual research is perceived by the public/journalists. Different issue, as I am sure you already knew.

    Btw., concerning first-year business students: of course they learn such stuff. But they also learn how to generalize, e.g., not to let a poor (qual?) document be a reference point for qual research. It’s poor reasoning and unproductive. I can state it in a logical equation, if that helps?

    (I am qual and quan, btw).

    • Anonymous:

      No, I’m not trying to misunderstand you. The only time I deliberately misunderstand people is when they’re really rude; then I often like to give a completely straight response as if they were being reasonable. But you’re not being rude at all!

      To clarify: I am not accusing either Easterbrook or Power of being researchers (at least not when they were writing the items quoted above). But I think they both would’ve benefited from taking their quantitative statements more seriously, and I disagree with the various commenters above who think they should get a pass because they’re just being metaphorical or whatever.

  7. I don’t think they should get a pass. I do think that they’re using system 1 while pretending to be using system 2 (I’m using the terminology I just discovered in the new book by Kahneman). And it’s good that you show their errors. I just don’t think that defends their numbers. As you said, they thought in qualitative ways and then put down numbers as if they had thought in a quantitative way. But that’s what we do all the time (use system 1 but think that it’s system 2 thinking).

    Happy new year,

    Manoel

  8. I came across a question you can probably answer. By how much can a presidential candidate win the popular vote but still lose the election? Not some inconceivable result but a realistic measure please.

  9. I think it’s important to distinguish between logical and rigorous research (which can be qualitative, quantitative, or mixed) and people throwing around opinions without any rigorous research methodology (of any kind) underlying them. Unfortunately journalism has largely deteriorated into the latter.

  10. Based on the title I thought this was going to be your thoughts on ethnography. I believe there is a rhetorical / personality schism among academics: “the rhetoric of hard-headed empiricism”, as you said in http://statmodeling.stat.columbia.edu/2011/07/descriptive_sta/ — “facts and numbers” folks, versus whatever counts as good ethnography (I don’t know, I come from the numbers half of the academic divide).

    If you’ve already blogged elsewhere your thoughts as a statistician on ethnography (good versus bad, culture wars, whatever), I apologise for not finding it. If you haven’t, I’d love to hear your thoughts on the subject.

Comments are closed.