Buggy-whip update

On 12 Aug I sent the following message to Michael Link, president of the American Association for Public Opinion Research.  (I could not find Link’s email on the AAPOR webpage, but I did some googling and found an email address for him at nielsen.com):

Dear Dr. Link:

A colleague pointed me to a statement released under your name criticizing non-probability opt-in surveys, in which you wrote that “these methods have little grounding in theory.”  Can you please explain what you mean here?  As a statistician and political scientist who has worked in survey research for over twenty years, I cannot make sense of this statement. I’ve written about this here:
http://statmodeling.stat.columbia.edu/2014/08/06/president-american-association-buggy-whip-manufacturers-takes-strong-stand-internal-combustion-engine-argues-called-automobile-little-grounding-theory/
and here (with my colleague David Rothschild):
http://www.washingtonpost.com/blogs/monkey-cage/wp/2014/08/04/modern-polling-requires-both-sampling-and-adjustment/
but I thought it would make sense to ask you directly what you are getting at.  The problem I have is that, when probability samples have 90% nonresponse rates, it’s not clear that they have any “grounding in theory” either.  But in your statement you seem to be implying that traditional polls (such as those conducted by Gallup) have some “grounding in theory” that non-probability sampled polls (such as those conducted by YouGov) do not.  And I can’t see where you’re getting that from.  Some references to the relevant theory would help, perhaps.

Thanks much.

Yours
Andrew Gelman

I received no response.  If any of you know Michael Link, perhaps you could contact him directly?

I get frustrated when people don’t respond to my queries.  Just to be clear, I’m not saying that Link has any duty or obligation to respond to me:  I’ve done some service for AAPOR on occasion but I’m not a member, and I’m sure he has better things to do at work than to respond to requests for references from statisticians.

On the other hand, Link did stick his neck out and make a strong claim about the theory of survey sampling, so I assume he’d be interested in either backing up his claim with evidence, or backing down from his claim, now that it’s been questioned by an economist and a statistician who work in survey research.

Unless I hear further, I’ll have to assume that Link does not actually have any evidence to back up his claims regarding the theoretical grounding of various survey methods, and I’ll have to continue to assume that the official statement which he signed, badmouthing non-probability opt-in surveys, is just a collection of fine words with no theoretical grounding.

I’m happy to be corrected, though, so if anyone can contact Michael Link or whoever wrote that statement and find out what they meant, I’d be interested in hearing what they have to say.

That’s the scholarly way:  we consider our claims critically and back them up with evidence; we don’t just do drive-by criticism.

P.S. I see from this news article by Paul Voosen that Link “regrets some of the language chosen for the letter. It was meant as a caution to the public—especially news outlets—and not as a condemnation of the research, he says. ‘Maybe the statement could have been a little clearer.'”  I’m not sure exactly what he means by this, since I don’t see that he’s released an updated statement, but perhaps “Maybe the statement could have been a little clearer” is bureaucrat-ese for “Whoops—we made some false statements here.” Or maybe he really does have some research on “grounding in theory” that he can share with us. We’ll see. The comment box remains open, or he could just send me an email.

P.P.S. Commenter G. H. posts some links to reports from 2011 and 2012 on problems with online surveys. I don’t see anything useful in these papers on “theoretical grounding” (yes, the first paper linked by G. H., from Langer Research Associates, says that opt-in surveys “operate outside the realm of inferential statistics, meaning there is no theoretical basis on which to conclude that they produce valid and reliable estimates of broader public attitudes or behavior,” but it’s hard to make much of this criticism, relative to telephone surveys, in an era where the latter are subject to 91% nonresponse rates), but they do present some evidence from 2007-2011 on empirical inaccuracies of estimates from opt-in online surveys. I do think that, in doing such surveys, it can be necessary to put in some effort to adjust using poststratification. I don’t see this as an issue of “theoretical grounding” but it is a real practical concern. And of course if traditional telephone surveys had no problems we’d all be continuing to use them. But they do have problems, and we’ve been doing a lot of research on survey adjustment, and . . . I still see no evidence regarding the point about “theoretical grounding.”
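To make the adjustment point concrete, here is a minimal sketch of poststratification, with invented cell counts and population shares used purely for illustration (none of these numbers come from any real survey). The idea is the same whether the sample comes from random-digit dialing or an opt-in panel: estimate the outcome within each demographic cell, then average the cell estimates using known population shares (e.g., from the census) rather than the raw sample composition.

```python
# Minimal poststratification sketch (all numbers invented for illustration).
# Idea: estimate the outcome within each demographic cell, then reweight
# the cell estimates by known population shares instead of by the
# (possibly unrepresentative) sample shares.

# Hypothetical cells: age group x education.
cells = {
    ("18-34", "no college"): {"n": 40,  "yes": 14, "pop_share": 0.20},
    ("18-34", "college"):    {"n": 160, "yes": 88, "pop_share": 0.12},
    ("35+",   "no college"): {"n": 120, "yes": 30, "pop_share": 0.40},
    ("35+",   "college"):    {"n": 180, "yes": 90, "pop_share": 0.28},
}

# Raw (unadjusted) estimate: the overall sample proportion of "yes".
total_n = sum(c["n"] for c in cells.values())
raw_est = sum(c["yes"] for c in cells.values()) / total_n

# Poststratified estimate: population-share-weighted average of the
# within-cell proportions.
ps_est = sum(c["pop_share"] * c["yes"] / c["n"] for c in cells.values())

print(f"raw estimate:            {raw_est:.3f}")   # 0.444
print(f"poststratified estimate: {ps_est:.3f}")    # 0.376
```

In practice the cells are much finer and the within-cell estimates much noisier, so one moves from raw cell means to regularized models (multilevel regression and poststratification), but the weighting logic above is what the adjustment boils down to.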

14 thoughts on “Buggy-whip update”

  1. Andrew: You and the readers of this blog know that Mr. Link made an uninformed comment, perhaps mendacious. “Talking out of his ass,” you said, and we all agreed. Even Mr. Link must know by now that he was talking out of his ass and should not have been. You know there are no references. You know there is no such “theory”. We all know. What are you expecting from him? Do you really want to know, or do you just want an admission of ass-talking?

    • Hernan:

      I don’t know exactly what Link had in mind when he wrote what he wrote. But if he really has no justification for his claims, I think there’s a lot he should do. Recall that he did not write that letter as an individual; he wrote it in his position as the president of the American Association for Public Opinion Research, a respected professional organization. If he indeed has no support for his original statement, I think he should issue a new statement saying that he was in error and apologizing for it, and AAPOR should give this new statement equal publicity to his earlier statement. Otherwise, what’s the point of such statements? Does the AAPOR president just get to mouth off about anything?

      Recall the commutative property of reputations. Link used AAPOR’s reputation to lend credibility to his questionable statement about survey practices. But, to the extent that this statement is recognized to be bogus, it in turn decreases the credibility of AAPOR.

      I’ve always had a lot of respect for AAPOR but now I think of it as the organization that puts out misinformed statements about survey sampling. Not a great reputation for an organization devoted to public opinion research!

      Look at it another way.

      David Brooks’s job is to sell newspapers. If he wants to write provocative but false statements, and the New York Times wants to print them, ultimately that’s their business decision. It makes me realize that the Times does not care about the accuracy of what’s printed on their op-ed page, and that’s a useful thing for me to know. The op-ed page is known for contentious opinions, and now I also recognize that NYT columnists have no duty to stick to the facts.

      Ray Keene answers to no one but Ray Keene. If British newspapers want to print his recycled material, that’s up to them, and the publications that Keene is stealing from, and maybe the courts.

      And, in this case, Michael Link is speaking for the AAPOR. As a public opinion researcher myself (although not, I admit, an AAPOR member), I don’t like to see the AAPOR spreading mistakes. Link could rectify this any time he wants by issuing a new statement, correcting his errors and apologizing. It’s no big deal. I make mistakes all the time. And when people point out my errors, I engage with them. I don’t issue an incendiary statement and then hide.

  2. Pingback: Briefly | Stats Chat

  3. Dear Andrew!

    /Start Fun:

    While it is a testament to your following the way of the scholar that you add information as you discover it, it seems to me – a mere novice in the secret arts discussed in this blog – that this post also began life as a way of venting frustration in a drive-by way. One of the few skills I like to share with my students is the absolute necessity of first doing the research (and a few other arcane scientific incantations) and only at the end of this process going public…

    But let me show you by example what I found in 10 minutes’ worth of searching – done by a novice in your field – so I might be off-topic; I leave the evaluation to the masters:

    * http://www.langerresearch.com/uploads/Langer_Research_Briefing_Paper-Opt-in_Online_Panels.pdf

    “Opt-in online surveys fall short in empirical testing as well as theoretically. A series of academic studies in the past decade have found inaccurate estimates, wide variability across providers and inconsistent relationships among variables in data produced using this method. (See, for example, Malhotra & Krosnick, 2007; Pasek & Krosnick, 2010; and Yeager et al., 2011.)”

    * http://tinyurl.com/m9e3q9j

    The SAGE Handbook of Online Research Methods, on opt-in surveys

    * http://www.academia.edu/3872987/Accuracy_of_Web_Survey_Data_The_State_Of_Research_on_Factual_Questions_in_Surveys

    Some background for novices like me.

    (…)

    /End Fun:

    I really understand where this is coming from, but just as plagiarism is one of your passions, driving home the point of “research first” is one of mine – and any of my students could find your excellent blog.

  4. Pingback: HUFFPOLLSTER: Most Americans Think Torture Is Sometimes Justified — LiberalVoice

  5. Dr. Gelman,

    I came across a 2010 AAPOR report on online panels, “Research Synthesis: AAPOR Report on Online Panels” [Public Opinion Quarterly, Vol. 74, No. 4, Winter 2010, pp. 711–781]. Its findings may, in part, support the spirit of Dr. Link’s claims. The first finding in the conclusion section states:

    “Researchers should avoid nonprobability online panels when one of the research objectives is to accurately estimate population values. There currently is no generally accepted theoretical basis from which to claim that survey results using samples from nonprobability online panels are projectable to the general population.”

    Hope this helps.

    • Imo:

      See P.P.S. above. It’s hard for me to make much of this criticism, relative to telephone surveys, in an era where the latter are subject to 91% nonresponse rates. I do think that, in doing any surveys of human populations, whether from probability sampling or otherwise, and whether online or otherwise, it can be necessary to put in some effort to adjust using poststratification. I don’t see this as an issue of “theoretical grounding” but it is a real practical concern, and I wouldn’t have minded if Link had written something like that in his statement.

      As far as I’m concerned, Link could feel free to take the hard line of not trusting anything that’s not a random sample—but then he’d have to throw away all those telephone polls that have high nonresponse rates. It would be a funny position for the president of AAPOR to take, but that’s his call. My problem is with his implication that there is a theoretical basis for imputing those missing 91%, which is somehow different from the theoretical basis for how we perform inference from opt-in online panels.

      Don’t get me wrong—I think random sampling is great. But it’s only part of the picture. If Link wants to talk theory, I’d love to talk theory. There are lots of difficult, important problems in survey adjustment, and hiding behind random-digit-dialing isn’t going to help you—just ask Gallup about 2012.
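      To make the nonresponse point concrete, here is a small simulation with invented response propensities (roughly a 9% response rate, so 91% nonresponse). When the propensity to respond is correlated with the opinion being measured, the unadjusted estimate from the “probability sample” is biased, and classical sampling theory alone does not repair it.

      ```python
      # Illustrative simulation, made-up numbers only: with ~91% nonresponse,
      # the respondents are effectively a self-selected subset, and any
      # correlation between responding and the outcome biases the raw estimate.
      import random

      random.seed(1)
      N = 1_000_000
      population = [random.random() < 0.50 for _ in range(N)]  # true "yes" rate: 50%

      def responds(yes: bool) -> bool:
          # Hypothetical response propensities: ~9% overall, a bit higher
          # among "yes" people (11% vs. 7%).
          return random.random() < (0.11 if yes else 0.07)

      respondents = [y for y in population if responds(y)]
      print(f"response rate:       {len(respondents) / N:.1%}")                 # about 9%
      print(f"unadjusted estimate: {sum(respondents) / len(respondents):.3f}")  # about 0.61, not 0.50
      ```

      The fix, here as with an opt-in panel, is to model who responds (weighting, poststratification, or imputation of the missing 91%), and that step goes beyond the textbook theory of random sampling.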

  6. You have to remember that AAPOR is not an academic organization as much as it is an organization representing business interests, and they are in the business of expensive probability surveys.

    • Gg:

      Maybe so, to some extent. But not entirely. I know people who are involved in AAPOR who are not selling anything. And, for that matter, I have no reason to believe that Michael Link is selling anything either. He just seems to be (a) misinformed and (b) uninterested in learning more about the topics he is writing about.

  7. Pingback: Sociological improvisations
