Too much backscratching and happy talk: Junk science gets to share in the reputation of respected universities

Nick Stevenson writes:

I agree that it’s disappointing that so many publications that pride themselves on the quality of their journalism – NYTimes, WashPo, Slate, Vox – ran with the EIP’s work, but does the fault really lie with them? This work has been promoted on the conference circuit for years by a full professor at Harvard Kennedy School. According to the EIP’s website: “From July 2012 to December 2016, work on electoral integrity by staff and visiting scholars has generated ten books, 34 peer-reviewed journal articles and edited book chapters, eleven policy reports, many electronic datasets, and two special issues of scholarly journals (Electoral Studies and the Electoral Law Journal). In addition, three further books are currently in press and forthcoming with major university publishers during 2017. These outputs have attracted more than 250 scholarly citations to EIP publications, as well as hundreds of articles in the international news media.”

Suppose you were a journalist or editor who felt dubious about running the North Carolina story – but saw Harvard Kennedy School, 34 peer-reviewed articles, 250 citations. Is it the journalist’s fault if the study subsequently falls apart? Isn’t the bigger story the half-decade-long failure of the section of the political science community that just applauded politely when the EIP people gave their talks?

Stevenson points to this opinion article by David French saying something similar. As French puts it:

The EIP has glittering credentials. Harvard University’s Pippa Norris directs the program, and its research team is “based at” Harvard’s Kennedy School of Government and the University of Sydney. In other words, if you combine a report that hits Republicans (especially the hated North Carolina Republicans) as authoritarian monsters with the academic “H bomb” — Harvard — then you’ve got journalistic gold.

They have a point. The funny thing is, I do like some of Norris’s work. I cited her work with Inglehart in my Red State Blue State book. When it comes to media exposure, though, yeah, I feel like a lot of scholars just think that all exposure is good exposure. Reynolds and Norris were annoyed with me after my post, but they didn’t seem to be complaining about all the positive publicity they’d been getting.

Her North Korea map was in the Monkey Cage! And the Monkey Cage post even mentioned the North Korea result—as a real finding, not as a reason to discard the estimates for that country. There does seem to be a tendency for academics to look away politely when seeing this sort of thing; I didn’t look at that North Korea thing carefully until all that North Carolina publicity grabbed my attention.

By the way, when I see results of my own that make no sense, I check carefully! As here.

Anyway, yeah, the Harvard name—and, more generally, the passivity of the academic community—has gotta be part of the story of how the “North Carolina is not a democracy” thing got taken so seriously. All this backscratching and happy talk, it’s a problem. Remember, Harvard’s the university that released the press release with the astoundingly clueless claim that “the replication rate in psychology is quite high—indeed, it is statistically indistinguishable from 100%.”

Another example is Cornell University, whose prestige lent credibility to the much publicized low quality data-salad efforts of Daryl Bem and Brian Wansink. And let’s throw in Columbia, as we have Dr. Oz and this guy.

No institution is perfect—the point is not to slam Harvard, Cornell, and Columbia as a whole just because they harbor some knaves and fools. Rather, I’m highlighting the challenge for journalists and academics in navigating this landscape. Backscratching and happy talk may help keep your institution’s reputation afloat in the short term, but at a cost of burning your future credibility.

24 Comments

  1. Keith O'Rourke says:

    > see results of my own that make no sense

    “Doubt is an art which has to be acquired with difficulty.” CS Peirce

  2. Bob says:

    Another big problem is that when senior “gatekeepers” get behind a project, few of the assistants/students want to challenge it lest their careers suffer. Easier to cite and move on. I think it shows again how peer review is often at the bottom of the broken incentive system, with influential reviewers holding the keys to the kingdom with little accountability.

    • Anonymous says:

      “I think it shows again how peer review is often at the bottom of the broken incentive system, with influential reviewers holding the keys to the kingdom with little accountability.”

      I agree. That’s why I find it worrying that so-called solutions to current problems are being promoted that heavily and unnecessarily rely on peer review.

      I took a closer look at “Registered Reports” (https://osf.io/8mpji/wiki/home/) and I found that several of the published ones do not seem to provide the reader access to any pre-registration information.

      This made me wonder: why are they even called “Registered Reports”, what does “registered” imply here, and why is this information not available to the reader? And if anything is actually registered, it looks like the burden of making the registration useful falls entirely into the hands of reviewers, which seems problematic, and totally unnecessary to me.

      In fact you could argue that in these cases “Registered Reports” have managed to take away most of what is good and useful about (pre-)registration: possible accountability, and transparency.

      • Ben Prytherch says:

        Thank you for pointing this out. Considering how many people are saying that pre-registration is no substitute for registered reports, I hope your challenge will be answered.

        • Anonymous says:

          “I hope your challenge will be answered.”

          I have tried to get answers several times, but the people behind “Registered Reports” simply ignore them. E.g. see most recent comments on 2 posts here:

          http://neurochambers.blogspot.nl/2012/10/changing-culture-of-scientific.html

          http://neurochambers.blogspot.nl/2016/08/registered-reports-for-qualitative.html#comment-form

          • Brian Nosek says:

            Thanks for pointing this out. Andrew alerted me to your comment. It is definitely best practice to have the registration for an RR in a public registry. We are initiating two evaluation studies of RRs now and will incorporate investigation of whether the preregistrations are findable and accessible. Here are two of the big test cases in which they are all available:

            https://elifesciences.org/collections/9b1e83d1/reproducibility-project-cancer-biology — RP:CB publishes the RRs as papers and then the final study as a separate paper called a Replication Study.

            The proof-of-concept for RRs was a 2014 special issue of Social Psychology that included 15 Registered Reports. All of them have the approved protocols on OSF along with all the data, materials, and supplementary analyses: https://osf.io/hxeza/

            • Anonymous says:

              Thank you Andrew! Of course it takes an established researcher to get attention (e.g. see http://blogs.plos.org/absolutely-maybe/2017/08/29/bias-in-open-science-advocacy-the-case-of-article-badges-for-data-sharing/), this is the world of psychology and the “in-crowd” after all.

              “We are initiating two evaluation studies of RRs now and will incorporate investigation of whether the preregistrations are findable and accessible”

              That sounds super fancy, but how about simply mandating putting a link to the pre-registration in the paper? Other ways of making this information “findable” or “accessible” (e.g. put it online somewhere as “supplementary materials”) seem super sub-optimal because when the paper is shared it will not include a link to the crucial pre-registration information when reading the actual paper. This all seems like common sense to me. I don’t get why you are making this so complicated (?).

              Of course, it also seems like common sense for “Registered Reports” to at least have people declare in their pre-registration that they did not already look at the data (or something like that), this is of course important for transparency and (possible) accountability. Has this even been asked for “Registered Reports” that have been published? And if so, why is this not available to the reader?

              In light of the latter for instance, it is remarkable that “Comprehensive Results in Social Psychology” seems to have already published a “Registered Report” of a study that has been submitted elsewhere before. This implies there was no pre-registration at all where the authors had to indicate that they did not look at the data before they submitted their work as a so-called “Registered Report” (where peer-review is supposed to happen before executing the research). See: https://www.facebook.com/groups/psychmap/permalink/377583222618606/

              Perhaps you can come up with a special “Registered Report” template on the OSF, which includes crucial and important things like indicating that you did not look at the data before submitting the stage 1 proposal, and including all measures and the analysis plan (introduction and methods), etc. Of course you could/should mandate that every journal that publishes “Registered Reports” uses that template (which could be written up and frozen after stage 1 peer review and subsequently included in stage 2 peer review and the final paper). Anyway, this is not rocket science, you get the idea.

              In my opinion, you guys have completely messed up “Registered Reports” in ways that seem so incomprehensibly weird that I wonder how and why this has happened. On top of that, you have completely ignored all the criticism expressed in several places at several times. It seems to me you don’t want any help whatsoever, at least not from someone not in the psychology “in-crowd”. So, forgive me and with all due respect, but I am not buying into your reaction (political correctness, non-substantial, empty) here now, nor do I trust that you will fix any of this “Registered Report” mess in a sensible manner.

              Thank you Andrew once again!

              I’m done with this BS. I’m gonna watch my favorite movie now: “Good Will Hunting”. It has one of my favorite scenes of all time (academically related even): https://www.youtube.com/watch?v=azM6xSTT2I0

              • Chris Chambers says:

                Hi Anonymous

                Thanks for commenting. You sound angry and perhaps have given up on this conversation, but in case you do stop by again – and for the benefit of other readers who may be interested in this aspect of Registered Reports – I’d like to add something to what Brian has said.

                First, I want to apologise for never responding to the comment you left on my blog last year in which you raised this concern. This was purely down to my poor organisational skills and the fact that I was stretched very thin at the time, but I did read it and it had an impact that I will explain below. I agree with your point (and have long believed) that requiring the provisionally accepted protocol to be made freely and publicly accessible is an important and often missing feature of Registered Reports in their current form.

                To explain why this is, I need to run through some of the historical context behind the introduction of RRs. The format began life in 2013 in two journals, more or less at the same time: Cortex and Perspectives on Psychological Science. I was one of the main drivers of the Cortex initiative. Cortex publishes research in cognitive neuroscience and neuropsychology, historically very conservative disciplines for which an open science initiative like RRs was an extraordinary leap into the unknown, one that many on the editorial board felt would be a catastrophic mistake – in fact it is a strange quirk of history that a journal like Cortex was the first to do this (mostly down to the progressive vision of Sergio Della Sala, the chief editor, and several associate editors). We did a lot of consultation before the launch and two recurring messages that came back from the community were: (1) researchers in these fields were not interested in reading study protocols that didn’t contain data (hence the journal should not publish them), and (2) there was a prevailing fear that authors could be scooped if the protocol was published (anywhere) prior to the completed work appearing in the journal. Now in an ideal world, researchers would still register their protocols publicly after provisional acceptance, and for those worried about scooping, they could do this with the option of doing so under an embargo that would be required to be released at the point of Stage 2 submission, with a link placed in the paper. However, at the time I wasn’t aware of a registry that offered this kind of sliding feature. There may well have been one and I missed it.

                There were also concerns raised that requiring additional bureaucracy would deter authors (I was less concerned about this, but you have to realise that this was a complete unknown for us — for all we knew, the format itself would be so unattractive already to the field that we would get zero submissions…or a hundred a week…we had no idea, so we aimed to make it feel as similar as possible to publishing a regular article).

                Based on this, we decided to proceed at Cortex without requiring public registration of the in principle accepted (IPA’d) protocol. Instead we set up a team of five editors (including me) who would oversee submissions, collectively (with the reviewers) ensuring that any changes to the protocol at Stage 2 were transparently flagged as part of the Stage 2 submission and review process (as tracked changes in the resubmitted manuscript they were always obvious, and authors were required to inform the editorial board during the study, and PRIOR to Stage 2 submission, of any protocol deviations because they needed to be approved in advance). Of course this is less transparent than making the protocol public *as well*. Was that a mistake? I’ll leave others to be the judge. I’m sure you will say it is. All I can say is that we appeared to make the format reasonably attractive to a conservative discipline and received submissions right from the start. Had we required publication of the IPA’d protocol (without embargo) my instinct is that we would have received fewer submissions due to fears about scooping. We will never know.

                From there, other journals began emulating the Cortex policy (many quite literally) and none added the public registration feature. You have to understand that Brian and I (and the COS) do not dictate the RR policy at most of the 73 journals that currently offer the format. We merely advise and provide materials to help journals do so. Any of the adopting journals could have proactively added a public registration feature but not all did, even though I mentioned it often (though there are exceptions like eLife, as Brian points out above). As time went on, I admit this issue faded from my mind among all the other issues faced with adapting and optimising RRs (keeping in mind also that for most of us at the forefront of this initiative it is not our day job – we are mostly doing it unpaid in our own time).

                Fast forward to earlier this year, and your comment on my blog in January brought the issue back into focus and I have been thinking again about it. Given the momentum RRs have now achieved, I believe we can now safely push this added transparency as a requirement without compromising uptake by authors or journals, particularly now that the OSF offers embargoed preregistration, which I’m not sure it did back in 2013. We have therefore made several changes and we have plans to address this further as follows:

                1) Since this summer we have updated the template author guidelines that most adopting journals use (https://osf.io/pukzy/) to include the following requirement:

                “Stage 1 cover letter must include a statement confirming that, following Stage 1 in principle acceptance, the authors agree to register their approved protocol on the Open Science Framework (https://osf.io/) or other recognised repository, either publicly or under private embargo until submission of the Stage 2 manuscript.”

                “Stage 2 manuscript must contain a link to the approved Stage 1 protocol on the Open Science Framework or other recognised repository. The cover letter should state the page number in the manuscript that lists the URL (to facilitate ready checking).”

                2) This change is being implemented in two RR formats launching later this month, and I recommended it at BMC Medicine in August (who agreed and have already launched accordingly). I will also be updating the policy at Cortex, European Journal of Neuroscience, and Royal Society Open Science where I edit RRs. We will also be adding a new column to our features policy spreadsheet to indicate which journals require public registration (https://docs.google.com/spreadsheets/d/1D4_k-8C_UENTRtbPzXfhjEyu3BfLxdOsn9j-otrO870/edit#gid=0)

                3) We plan to create a central OSF Stage 1 RR registry that contains, or links to, all IPA’d Stage 1 protocols.

                4) We plan to approach all current adopting journals to encourage them to update their policy to require public registration, if they don’t already do so (I am confident most will agree).

                5) For journals that don’t require public registration at the point of IPA, we plan to approach the authors of already published Stage 2 RRs – authors specifically who didn’t archive their protocols voluntarily (as some do anyway) – and ask them to grant us permission to obtain their IPA’d protocols from the journal and deposit them in a central OSF registry. These links will then be added if possible to the Zotero entries and we may also add a Pubmed Commons comment below each article linking to their registry entry.

                We’re open to additional suggestions as well and any direct offers of assistance.

                Chris Chambers

                PS. in relation to your comment “This implies there was no pre-registration at all where the authors had to indicate that they did not look at the data before they submitted their work”. No it does not: there was and always is a preregistration for an RR, it’s just (often) not a public one. Despite it being not public, it is rigorous in a way that conventional unreviewed registration is not, and strategies such as you mention above are not possible without committing fraud. See the FAQ “Can’t authors cheat the Registered Reports model by ‘pre-registering’ a protocol for a study they have already completed?” on left menu ‘FAQ’ tab at https://cos.io/rr/

              • Andrew says:

                Anon: Thanks for raising these issues.

                Chris, Brian: Thanks for going to the trouble to reply in depth.

              • Anonymous says:

                Dear Mr. Chambers,

                Thank you for your reply! This is actually substantial, and provides some answers. As a side-note: I pre-registered the hypothesis that the “Registered Reports” people would only reply to these types of questions when someone influential would give it attention (e.g. as is proven here in this thread, see above). I could link to this pre-registration here so you could check it to see if I am not lying or deceiving but in this case I think it’s best to just say that I did pre-register, and showed it to my friends. So it’s “peer-reviewed” in a rigorous way that conventional unreviewed registration is not, and you can simply trust me and this process.

                One last thing, you wrote: “in relation to your comment “This implies there was no pre-registration at all where the authors had to indicate that they did not look at the data before they submitted their work”. No it does not: there was and always is a preregistration for an RR, it’s just (often) not a public one. Despite it being not public, it is rigorous in a way that conventional unreviewed registration is not, and strategies such as you mention above are not possible without committing fraud.”

                I reason it will depend on what this “pre-registration” implies concerning the fraud thing. Like I wrote above, it seems common sense to at least include a statement like “I did not perform the research yet” and/or “I did not look at the data yet” so there is actual clarity about this crucial aspect of “Registered Reports” and (possible) accountability. I took a look at the “Registered Reports” Mr. Nosek linked to. I assume this is such a “registration”: https://osf.io/q6mbp/. I can’t find any statement about not looking at the data and/or not having performed the research yet; it’s just an introduction + method.

                When I quickly glanced over the updated template (https://osf.io/pukzy/) you refer to, I can’t find anything about these crucial types of statements. These types of statements could simply be added in the list of requirements for Stage 1 review, but should of course also be part of the publicly shared pre-registration itself (so there can be no doubt/there is true accountability). As I said above, it seems like common sense to me that these types of questions are included in a “pre-registration”, and are of course made public. If these types of questions/statements are not included in the pre-registration, couldn’t I simply submit my already performed research as a “Registered Report” without technically committing fraud? (e.g. how would this hold up in court).

                Anyway, as an outsider I would summarize “Registered Reports” as it exists now, as a format which has effectively managed to completely get rid of 2 crucial aspects of “pre-registration”: (possible) accountability and transparency. The fact that this format has been, and is being promoted as a “solution” to all the problems of psychological science worries me. The fact that the “open science” people are uncritically going along with this, and that it takes an outsider like me to bring up these issues, worries me most though…

              • Brian Nosek says:

                [This is actually a reply to the comment at the bottom of the thread, but the “reply to this comment” is gone down there, perhaps we have hit the nesting limit.]

                Thanks for the follow-up feedback. I agree that those statements are important parts of preregistration as affirmative statements about credibility of the preregistration. We didn’t have them in all of the preregistration options on OSF, but variations of those items are standard practice for most of the registration forms that we host now (see, for example, the form for the Prereg Challenge: http://osf.io/prereg/).

                Continuing feedback is most welcome. RRs are still new and there is not sufficient evidence to know their impact on the credibility of evidence that they produce compared to “standard practice”. There will surely be lots of iterative improvements to the model to maximize its quality in assessing whether it is a useful contribution to reforms for scientific communication and practice.

  3. Alton says:

    Related is how new students are being taught. I just went through orientation in my master’s program at Carnegie Mellon. One of the professors cited Duckworth and Cuddy in his presentation about the importance of soft skills. If the professors feed us junk science, how do we even know? It’s hard to not be passive when graduate education doesn’t emphasize the quality of studies.

    • Andrew says:

      Alton:

      Wow: a CMU professor is citing the work of Duckworth and Cuddy . . . in 2017? That’s pretty sad. Perhaps they could follow up with tips based on the work of Bem, Kanazawa, and Hauser.

      • Claire says:

        Today I came across a snippet in the news that retracted articles are cited 10% less than comparable articles. The journalist concluded that peer review and the ‘system’ are working. I was wondering about the 90%…

    • Martha (Smith) says:

      Thanks for speaking up. I hope you can continue to do so.

      I find it particularly disturbing that Cuddy and Duckworth were mentioned in a presentation about “soft skills”, since there are many other types of soft skills that are not flashy, but very important to good science — for example, a statistical consultant or collaborator needs the soft skills of being able to listen to (and ask questions of) the client/collaborator to be able to understand the problem at hand; and the soft skills of being able to explain intelligibly and respectfully why any changes need to be made, etc.

      • Martha (Smith) says:

        This is a philosophical/summary/opinion type article that has lots of references to research papers, but there remains the problem that if the research in those research papers is not sound, then the whole argument is a house of cards. They might be right, but their argument is not sound unless the research results it depends on are also sound — and I’m not inclined to read them all to decide that, particularly knowing that a lot of research in psychological science is not sound, but based on “that’s the way we’ve always done it” methodology that doesn’t stand up to careful scrutiny.

        • Carol says:

          Hi Martha (Smith),

          Yes, I quite agree. That’s the problem with Amy Cuddy claiming support for her power pose research on the basis of a review or meta-analysis of multiple replication studies. If those studies are not very good, then this isn’t support (garbage in, garbage out, as we used to say). And there’s no way to judge the quality of those studies except by reading them all and probably also obtaining the data and taking a close look.

          Carol

          • Keith O'Rourke says:

            > also obtaining the data
            That can be very difficult – what percentage of authors do promptly provide data in that field?

            • Carol says:

              Hi Keith O’Rourke,

              Yep, you are right. When I request data from an author, most of the time I get no reply whatsoever. A couple of times when I was refused the dataset or the study materials, I requested assistance from the author’s university IRB, and both times I then received what I requested. But those two cases were arduous and consumed a lot of my time. Obtaining datasets for all of the articles in a meta analysis is probably impossible.

              Carol
