Likelihood Ratio ≠ 1 Journal

Dan Kahan writes:

The basic idea . . . is to promote identification of study designs that scholars who disagree about a proposition would agree would generate evidence relevant to their competing conjectures—regardless of what studies based on such designs actually find. Articles proposing designs of this sort would be selected for publication and only then carried out by the proposing researchers, with funding from the journal, which would publish the results too.

Now I [Kahan] am aware of a set of real journals that have a similar motivation.

One is the Journal of Articles in Support of the Null Hypothesis, which, as its title implies, publishes papers reporting studies that fail to “reject” the null. Like JASNH, LR≠1J would try to offset the “file drawer” bias and similar bad consequences associated with the convention of publishing only findings that are “significant at p < 0.05.” But it would try to do more. By publishing studies that are deemed to have valid designs and that have not actually been performed yet, LR≠1J would seek to change the odd, sad professional sensibility favoring studies that confirm researchers' hypotheses. . . .

Some additional journals that likewise try (very sensibly) to promote recognition of studies that report unexpected, surprising, or controversial findings include Contradicting Results in Science; Journal of Serendipitous and Unexpected Results; and Journal of Negative Results in Biomedicine. These journals are very worthwhile, too, but still focus on results, not on the identification of designs whose validity would be recognized ex ante by reasonable people who disagree! I am also aware of the idea of setting up registries for study designs before the studies are carried out. See, e.g., this program. A great idea, certainly. But it doesn't seem realistic, since there is little incentive for people to register, even less incentive to report “nonfindings,” and no mechanism that steers researchers toward designs that disagreeing scholars would agree in advance will yield knowledge no matter what the resulting studies find. . . .

Papers describing the design and ones reporting the results will be published separately, and in sequence, to promote the success of LR≠1's sister journal, “Put Your Money Where Your Mouth Is, Mr./Ms. ‘That's Obvious,’” which will conduct on-line prediction markets for “experts” and others willing to bet on the outcome of pending LR≠1 studies. . . .

For comic relief, LR≠1J will also run a feature that publishes reviews of articles submitted to other journals that LR≠1J referees agree suggest the potential operation of one of the influences identified above.

More details at the link; also, Dan follows up here, where he writes:

LR ≠1J would (1) publish pre-study designs that (2) reviewers with opposing priors agree would generate evidence — regardless of the actual results — that warrants revising assessments of the relative likelihood of competing hypotheses. The journal would then (3) fund the study and, finally, (4) publish the results.

This procedure would generate the same benefits as “adversary collaboration” but without insisting that adversaries collaborate.
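To see why a likelihood ratio different from 1 is the natural criterion here, a minimal sketch of Bayesian updating in odds form may help (my own illustration, not from Kahan's post; the two reviewers and their priors are made up). Evidence with LR ≠ 1 shifts the beliefs of reviewers with opposing priors in the same direction, while evidence with LR = 1 shifts nobody's:

```python
# Minimal sketch: updating a probability by a likelihood ratio (odds form of
# Bayes' rule). If LR != 1, every reviewer's belief moves, whatever the prior;
# if LR = 1, nobody's belief moves (that design taught us nothing).

def update(prior_prob, likelihood_ratio):
    """Return the posterior probability of a hypothesis given prior and LR."""
    prior_odds = prior_prob / (1 - prior_prob)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

# Hypothetical reviewers: a proponent with prior 0.8, a skeptic with prior 0.2.
for lr in (1.0, 3.0, 1 / 3):
    print(f"LR = {lr:.2f}: "
          f"proponent 0.80 -> {update(0.8, lr):.2f}, "
          f"skeptic 0.20 -> {update(0.2, lr):.2f}")
```

The review criterion, on this reading, screens for designs whose possible outcomes all carry likelihood ratios different from 1, so that whichever way the data come out, both camps learn something.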

Rather than adding any new comments, I’ll just refer you to my two discussions (here and here) from last year of four other entries (by Brendan Nyhan, Larry Wasserman, Chris Said, and Niko Kriegeskorte) in the ever-popular genre of, Our Peer-Review System is in Trouble; How Can We Fix It? I stand by whatever I happened to have written on this when the question came up before.

And, if I could get all Dave Krantz-y for a moment, I’d suggest that this discussion could be improved on all sides (including my own) by starting with goals and going from there, rather than jumping straight into problems and potential solutions.

11 thoughts on “Likelihood Ratio ≠ 1 Journal”

  1. I agree with this type of publication option in general, but in practice I think it would struggle. Much like “research design” assignments that graduate students write for seminar courses, the designs proposed to the journal might look and sound great. But I’ve always found that my ‘great’ ideas — in design and analysis — tend to evolve a great deal once I actually see the data. I’m not advocating for data mining at all, but I also cannot think of any research designs that I’ve read that, start to finish, remained unchanged once data were present. Little issues that weren’t apparent in the design stage often manifest in the data. And that forces us to change our tools or analytic approach. So at the end of the day, you’re still publishing aspects of the analysis that didn’t show up in the “peer reviewed” design.

    I like the idea of publication based on the merit of the ideas. I’m not sure that this gets us there.

  2. These ideas have been kicked around in clinical research communities for many years.

    The Cochrane Collaboration regularly _publishes_ meta-analysis protocols before they are done, and there are clinical journals that will publish the protocols of funded studies that are about to proceed.

    Maybe the James Lind Library would be a place to start looking.

    As Brian Ripley used to say, statisticians don’t read the [appropriate] literature.

  3. Back when publishing, storage, and distribution were expensive, journals acted as curators. Their role was to publish so-called “interesting findings” in a distribution channel with limited carrying capacity.

    This system has to go. Open source online publishing is cheap, easy, scalable, and searchable. There is no need to curate the findings, only the scientific integrity. Put simply, review should be limited to:

    (a) Is the research question well posed? (Note that I did not even mention “interesting”; that is too subjective.)

    (b) Are the data and procedures used to address it sound (e.g., in accordance with minimal scientific standards as expressed in textbooks)?

    (c) Can it be replicated?

    If a manuscript meets all of these, then publish it, irrespective of the “finding”.

    Some may complain that this will make publishing too easy. That is fine with me. In a world where publishing is easy, people should not be measured so much by their number of publications as by the impact those publications have.

    I suppose many in my generation don’t read journals, and don’t subscribe to them. We search Google and other online databases.

  4. I am so tempted to submit a paper to the Journal of Serendipitous and Unexpected Results entitled ‘Raising Children.’

  5. I have thought about this a lot and once believed that opposing sides on an issue, e.g., the effectiveness of an intervention, might be brought together to negotiate conditions for a design that would produce dispositive results, however the results turned out. But I agree with the idea that no design gets implemented with precision, usually not even close, leaving gaps in the defensive line. And the capacity of “losers” to find fault after the fact cannot be exaggerated.
    Science will just have to creep and crawl along.

  6. “LR ≠1J would (1) publish pre-study designs that (2) reviewers with opposing priors agree would generate evidence — regardless of the actual results — that warrants revising assessments of the relative likelihood of competing hypotheses.”

    Okay, but in the social sciences, are there really that many honest disagreements?

    For example, is there any conceivable study design whose results would make pro-immigration spokesmen say: “Uh oh, I didn’t know that, maybe I’ve been too optimistic about the long run impact of illegal immigration? Maybe this ‘self-deportation’ idea isn’t so bad?”

    Right now there is supposedly a grand “debate” going on over immigration policy. But I hear Emma Lazarus’s poem being quoted a lot more than, say, Ortiz and Telles’ “Generations of Exclusion” study of the educational performance of Mexican-Americans through the fourth generation after immigration, which shows that Mexican immigrants, on average, don’t rise in society like Ellis Island immigrants. But who wants to hear that? Myth sells, so we get a lot of Ellis Island kitsch instead of discussion of the existing data. And over the last few months I’ve heard zero calls for more data on the subject of Mexican-American performance.

    Instead, “researching is racist” appears to be the conventional wisdom.

    • Well, I don’t know about the Mexican-American example you cite, but Gelman, Nameless, and I had the skill, fortitude, and bravery in the face of crushing peer pressure to answer the hard questions about Greenland:

      http://statmodeling.stat.columbia.edu/2013/01/16/americans-including-me-dont-know-much-about-other-countries/#comment-129254

      Oddly, if you read the Icelandic sagas, you immediately get the impression that the death/murder rate among these Vikings must have been extraordinarily high even by mid-20th-century European standards. According to the sagas, in a semi-permanent settlement, probably in Newfoundland, half the Vikings just up and murdered the other half one day because some woman wanted their boats or something.
