Revisiting “Is the scientific paper a fraud?”

Javier Benitez points us to this article from 2014 by Susan Howitt and Anna Wilson, which has the subtitle, “The way textbooks and scientific research articles are being used to teach undergraduate students could convey a misleading image of scientific research,” and begins:

In 1963, Peter Medawar gave a talk, Is the scientific paper a fraud?, in which he argued that scientific journal articles give a false impression of the real process of scientific discovery. In answering his question, he argued that, “The scientific paper in its orthodox form does embody a totally mistaken conception, even a travesty, of the nature of scientific thought.” His main concern was that the highly formalized structure gives only a sanitized version of how scientists come to a conclusion and that it leaves no room for authors to discuss the thought processes that led to the experiments.

Medawar explained that papers were presented to appear as if the scientists had no pre-conceived expectations about the outcome and that they followed an inductive process in a logical fashion. In fact, scientists do have expectations and their observations and analysis are made in light of those expectations. Although today’s scientific papers are increasingly presented as being hypothesis-driven, the underlying thought processes remain hidden; scientists appear to follow a logical and deductive process to test their idea and the results of these tests lead them to support or reject the hypothesis. However, even the trend toward more explicit framing of a hypothesis is often misleading, as hypotheses may be framed to explain a set of observations post hoc, suggesting a linear process that does not describe the actual discovery.

Howitt and Wilson continue:

There is, of course, a good reason why the scientific paper is highly formalized and structured. Its purpose is to communicate a finding and it is important to do this as clearly as possible. Even if the actual process of discovery had been messy, a good paper presents a logical argument, provides supporting evidence, and comes to a conclusion. The reader usually does not need or want to know about false starts, failed experiments, and changes of direction.

Fair enough. There’s a tension between full and accurate description of the scientific process, on one hand, and concise description of scientific findings, on the other. Howitt and Wilson talk about the relevance of this to science teaching: when students are given journal articles to read, they get a misleading impression of what science is actually about.

Here I want to go in a slightly different direction and talk about the ways in which the form of the conventional scientific paper has directly damaged science itself.

The trouble comes in when the article contains misrepresentations or flat-out lies, when the authors falsely or incompletely describe the processes of design, data collection, data processing, and data analysis. We’ve seen lots of examples of this in recent years.

Three related problems arise:

1. A scientific paper can mislead. People can read a paper, or see later popularizations of the work, and think that “science shows” something that science didn’t show. Within science, overconfidence in published claims can distort future research: lots of people can waste their time trying to track down an effect that was never there in the first place, or is too variable to measure using existing techniques.

2. Without a culture of transparency, there is an incentive to cheat. OK, a short-term incentive. Long-term, if your goal is scientific progress, cheating can just send you down the wrong track, or at best a random track. Cheating can get you publication and fame. There’s also an incentive for a sort of soft cheating, in which researchers pursue a strategy of active incompetence. Recall Clarke’s Law.

3. Scientific papers are typically written as triumphant success stories, and this can fool real-life adult scientists—not just students—leading them to expect the unrealistic, and to make it hard for them to learn from the data they do end up collecting.

P.S. One problem is that some people who present themselves as promoters or defenders of science will go around defending incompetent science, I assume because they think that science in general is a good thing, and so even the bad stuff deserves defending. What I think is that the people who do the bad science are exploiting this attitude.

To all the science promoters and defenders out there—yes, I’m looking at you, NPR, PNAS, Freakonomics, Gladwell, and the Cornell University Media Relations Office: Junk scientists are not your friends. By defending bad work, or dodging criticism of bad work, you’re doing science no favors. Criticism is an essential part of science. The purveyors of junk science are taking advantage of your boosterish attitude.

To put it another way: If you want to defend every scientific paper ever published—or every paper ever published in Science, Nature, Psychological Science, and PNAS, or whatever—fine. But then, on the same principle, you should be defending every Arxiv paper ever written, and for that matter every scrawling on every lab notebook ever prepared by anyone with a Ph.D. or with some Ivy League connection or whatever other rule you’re using to decide what’s worth giving the presumption of correctness. If you really think the replication rate in your field is “statistically indistinguishable from 100%”—or if you don’t really believe this, but you think you have to say it to defend your field from outsiders—then you’re the one being conned.

11 thoughts on “Revisiting ‘Is the scientific paper a fraud?’”

  1. 4. A scientific paper that states it tested a certain hypothesis may in fact not really have done this (optimally) (HARK-ing, p-hacking).

    5. The scientific papers that are published may reflect an unrepresentative sample of the total number of studies that have actually been performed by scientists (publication bias).

    (I wonder what would happen if I talk to a 10-year-old kid and say that I have a magical hat, and that when I put on that hat it allows me to predict which color of candy I will take out of a bag of candies without looking.

    They would probably not believe me and want me to show them, so I put on the hat and predict that I will take a green piece of candy out of the bag without looking. I pick a red one, but say “that one didn’t count, the next one will though,” and repeat that process until I get it right (publication bias?).

    Or I put on the hat, don’t say upfront which color it will be, pick out a green one, and then say I predicted it from the start (HARK-ing?).

    Or I put on the hat, say I will pick a red one, then pick a whole handful of pieces of candy out of the bag at once, look for any red ones, pick one up and show them the red one, and say I only picked out one piece of candy, which happened to be red like I predicted (p-hacking / making the p-value uninterpretable?).

    I reason that 10-year-old children would view these things as not really fair ways to decide whether I am indeed able to predict which color I will pick from the bag when I put on my magical hat.

    If this is an appropriate analogy for today’s science, where publication bias, HARK-ing, and p-hacking are present, it makes me wonder how and why on earth this could all have happened, as if these were just minor details in the scientific process…)
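
    A minimal simulation sketch of the two versions of the trick described above: the “that one didn’t count” retries (publication bias) and the “grab a handful” selection (p-hacking). The five-color bag, the retry limit, and the handful size are assumptions chosen only for illustration.

    ```python
    # Sketch of the candy-hat trick: how retries and selective reporting
    # turn a 1-in-5 guess into a near-certain "success."
    import random

    COLORS = ["red", "green", "blue", "yellow", "orange"]  # hypothetical bag

    def didnt_count(max_tries=20, rng=random):
        """Keep drawing until the predicted color comes up; report only that draw."""
        prediction = rng.choice(COLORS)
        for tries in range(1, max_tries + 1):
            if rng.choice(COLORS) == prediction:
                return tries  # the one draw that gets reported
        return None  # never reported, i.e. the file drawer

    def grab_a_handful(handful=5, rng=random):
        """Predict a color, grab several candies, claim success if any matches."""
        prediction = rng.choice(COLORS)
        drawn = [rng.choice(COLORS) for _ in range(handful)]
        return prediction in drawn

    if __name__ == "__main__":
        random.seed(1)
        n = 10_000
        persistence = sum(didnt_count() is not None for _ in range(n)) / n
        handful = sum(grab_a_handful() for _ in range(n)) / n
        print(f"'didn't count' success rate:   {persistence:.2%}")  # ~99%
        print(f"'grab a handful' success rate: {handful:.2%}")      # ~67%
    ```

    With five colors, grabbing five candies turns a 1-in-5 trick into roughly a 2-in-3 trick, and allowing up to 20 retries makes nearly every attempt “succeed.”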

  2. Excellent topic once again. I am in the process of perusing data given in a specific problem. I was actually befuddled by its format, its vague data description, and even the question presented to us. This befuddlement is especially troubling given that we as students are expected to rely on statistics and subject matter expertise. We are then essentially left paraphrasing what specific experts outline or elaborate in their research presentations, journal articles, or TED talks.

    The typical classroom/seminar environment understandably has its limitations, which is why I suspect I like the British higher education structure, whereby one can spend a few years just thinking about a particular subject and question of interest.

    It comes back to how and which curricula advance critical thinking for purposes of social science, statistics, legal, and medical research. I would speculate that in each of these areas, we don’t advance critical thinking to the extent that we may like or presume. Pedagogy is the issue. Not to mention the quality of teaching skills.

    No doubt pre-registrations, preprints, and blogs are venues for exploring research questions and analytic approaches. But we are biased by virtue of the prestige accorded to scientific papers.

    And now we are even cautioned against resorting to specific statistical tests. So we are being encouraged to think in alternative, perhaps more creative ways. But that takes a lot more time than is afforded to students and researchers in general. That is one reason why the Open Science movement is really a turning point in forging more honesty and transparency. Even then, making the best use of the tools afforded through the Open Science Framework remains an assiduous effort.

    But as I was pondering yesterday, there are millions of papers, of which we access a tiny percent. How do we overcome this challenge? The answer seems quite clear: larger collaborations will be required. It appears that some experts have embarked on meta-analytic reviews of the existing literature. John Ioannidis does note, though, that every five years or so some percentage of it disappears altogether.

    So at least we may be able to address some of the concerns expressed above through the Open Science Framework.

    • “It comes back to how and which curricula advance critical thinking for purposes of social science, statistics, legal, and medical research. I would speculate that in each of these areas, we don’t advance critical thinking to the extent that we may like or presume”

      Yes!!

      I am still puzzled by the fact that I myself did not receive any education in logical thinking/reasoning at university. This to me is incomprehensible, as I reason it is important for critical thinking, theory development and testing, scientific writing, debating, etc.

      There have, in my opinion, been (too) many possible recent examples of (sometimes even tenured) psychological scientists who seem to be incapable of sound reasoning.

      Next to the new emphasis on statistical and methodological education, there should also be a new emphasis on reasoning and logic. I have yet to read any recent papers about this…

      • Re: Next to the new emphasis on statistical and methodological education, there should also be a new emphasis on reasoning and logic. I have yet to read any recent papers about this.

        ———–

        I gather that some academics may assign a logic text in a statistics course. Even so, I think that such an assignment comes at too late a stage. Rather, a logic course should be made compulsory in the 1st year of college, or even in high school. However, that is not the only problem. It’s that we haven’t yet updated theory and practices in some social sciences, in particular psychology. Yet we continue to rely on attribution unnecessarily in decision making.

      • My grounding in logical reasoning was in high school geometry (unfortunately, that course is now often taught without that emphasis), and further practiced in undergraduate and graduate courses in mathematics. When I began to teach statistics, I became acutely aware of the difference between statistical inference and logical reasoning, and tried to point out the difference whenever possible. (Unfortunately, statistics textbooks all too often do not point out the difference.)

  3. I thought that this presentation raised even more questions than answers. I guess that is a good thing. However, if Miguel Hernan proposes that conventional statistical measures introduce or entail bias, the proclivity is to jump ahead and offer more advanced statistical methods that have not been introduced to a beginning statistics student.

    https://www.youtube.com/watch?v=MIBmdqE0tAM&t=3604s

    However, I do appreciate the point that Hernan makes: We are bred to give answers; but whether we learn to probe and ask the right questions is an equally or more important query.

  4. “Moreover, students need to learn that mistakes or false starts are not time wasted, but are an essential part of making progress.”

    They are an inevitable part of making progress AND time wasted. I would broadly suggest that the most productive people in the sciences are adept at avoiding long chases of false hypotheses. Maybe this attitude can be cultivated.

    For two examples:

    McElreath’s book for sure and likely BDA3 make points about creativity in model checking, which could be cast as finding flaws in your model before investing too much in it.

    In algebra through freshman calculus, just check your freakin’ answers. Get good at arithmetic to check your algebra; get good at derivatives to check your solutions to differential equations. Giving large partial credit for answers with minor arithmetic errors works against this; exercises about finding errors develop it explicitly.
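
    A tiny sketch of what “check your answers” can look like in practice; the particular problem, verifying that y(t) = 3*exp(2t) solves y' = 2y with y(0) = 3, is a made-up example, not one from the comment above.

    ```python
    # Check a claimed solution to a differential equation by plugging it back in:
    # verify numerically that y(t) = 3*exp(2t) satisfies y' = 2y and y(0) = 3.
    import math

    def y(t):
        return 3 * math.exp(2 * t)

    def dy_numeric(t, h=1e-6):
        # central-difference approximation to y'(t)
        return (y(t + h) - y(t - h)) / (2 * h)

    assert abs(y(0.0) - 3) < 1e-12, "initial condition fails"
    for t in [0.0, 0.5, 1.0]:
        assert abs(dy_numeric(t) - 2 * y(t)) < 1e-4, "y' = 2y fails at t = %g" % t
    print("checks passed: the claimed solution satisfies the equation")
    ```

    The model-checking point above could be seen as the same habit at a larger scale: plug the fitted model back in and see whether it reproduces what it is supposed to.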

    Silicon Valley has the Cult of Rapid Failure with similar apologetics. (I speculate that signalling membership in an in-group is the real story here, though.)

    • I venture that experienced epidemiologists probably have a full appreciation of just how incomplete the data are. As students, we are not sufficiently apprised of that concern; rather the opposite. More exercises in data collection might be beneficial in the earliest coursework.

      We are given some data and expected to apply statistical tests to it. Test results seem to substitute for substantive inferences, as has been noted most emphatically by Rex Kline of Concordia University. He has also conducted some research on the competencies of statistics professors, so maybe we should review it.

      The question is whether introductory-level statistics curricula have been updated to address current controversies in statistics and the social sciences. There seem to be ongoing gaps from one textbook to another. A handful of universities/academics constitute the thought leadership in statistics reform curricula, but it is not clear how widely their work is accessed.

  5. Viewpoint from an engineer: A model reflecting reality simply CANNOT be built from variables that cannot be precisely defined and accurately measured. Such models may give insights (for readers to test with their own logic), but any notion that they’re offering “truths” is just simplistic.

  6. I recently encountered issues (1) and (3) in talking with a real scientist friend about an area she’s not expert in (she’s a chemist), namely the study of the effect of public vs. private schools on educational outcomes. (Note, I’m not an expert in the specific studies in this research area either, but I am very, very aware of the research difficulties in statistical studies of this general type, thanks to a decade of discussing them on this blog.)

    She linked to some recent popular article in the WaPo about how private schools aren’t better than public schools because *SCIENCE PROVES IT*.

    https://www.washingtonpost.com/news/answer-sheet/wp/2018/07/26/no-private-schools-arent-better-at-educating-kids-than-public-schools-why-this-new-study-matters/?noredirect=on&utm_term=.ac84f710e68b

    Well, the science was an observational study of 1000 kids at 10 schools which “controlled for” socioeconomic demographics and discovered that this “control” completely eliminated all the observed improvements for private-school kids.

    There was, as far as I could see, *no* effort to causally identify anything (i.e., some kind of “natural experiment”). Furthermore, although she was totally impressed that 1000 students were involved, actually addressing the general question would require a sample size three orders of magnitude larger, because you’d want to sample widely across schools as well as have large sample sizes within schools… 1000 schools spread around the country * 1000 students per school gives on the order of a million kids required to deal with all the noise sources.
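
    A back-of-the-envelope sketch of why the number of schools matters as much as the raw student count; the intraclass correlation (ICC) used here is my own assumed value for illustration, not a number from the study.

    ```python
    # Kish design effect for cluster samples: DEFF = 1 + (m - 1) * ICC,
    # where m is the cluster (school) size and ICC is the share of outcome
    # variance sitting between schools rather than between students.
    def effective_sample_size(n_students, n_schools, icc):
        m = n_students / n_schools          # students per school
        deff = 1 + (m - 1) * icc            # variance inflation from clustering
        return n_students / deff

    # Hypothetical ICC of 0.2, i.e. school-level differences explain 20% of the variance:
    print(effective_sample_size(1000, 10, 0.2))          # ~48 "independent" students
    print(effective_sample_size(1_000_000, 1000, 0.2))   # ~4980 with 1000 schools
    ```

    Under that assumption, 1000 students packed into 10 schools behave like roughly 50 independent observations, whereas sampling many more schools is what actually buys information about the school-level comparison.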

    The point is, the researchers were fooling themselves and the reporters were fooling the public, and smart people were coming to the conclusion that IT’S BEEN PROVEN that private schools have no effect on kids’ education…

    Yeesh, this general problem is extremely large and damaging to getting “less wrong,” as Keith likes to say.
