“Cancer Research Is Broken”

Michael Oakes pointed me to this excellent news article by Daniel Engber, subtitled, “There’s a replication crisis in biomedicine—and no one even knows how deep it runs.”

Engber suggests that the replication problem in biomedical research is worse than the much-publicized replication problem in psychology.

One reason, which I didn’t see Engber discussing, is financial incentives. Psychology researchers typically don’t like criticism of their published work: no surprise, they’re people like everyone else, and their careers are at stake. But in biomedicine, it’s not just careers and reputations on the line, it’s big money. Lots more incentives to cheat, or to use sloppy methods that can be twisted to get that magic p-value, etc.

59 thoughts on ““Cancer Research Is Broken””

  1. I would agree this is why it remains broken: “The act of reproducing biomedical experiments—I mean, just attempting to obtain the same result—takes enormous time and money, far more than would be required for, say, studies in psychology. That makes it very hard to diagnose the problem of reproducibility in cancer research and understand its scope and symptoms. If we can’t easily test the literature for errors, then how are we supposed to fix it up?” (I was part of a much smaller joint replication effort, agreed to by the PIs, of ~10 studies in Oxford in 2001-3; it failed in that not even one could be assessed as replicable or not.)

    My suggestion would be random audits – but that’s unlikely to happen anytime soon.

    But I would not dismiss this: “That’s a “hackable problem,” says Silicon Valley pooh-bah Sean Parker.” It could work if they establish cross-checks and purposely redundant work (e.g. double data entry, double sub-study runs) and adopt good reproducible methods up front. The thing about random audits is that people know they might get caught for cutting corners or being sloppy, and the same would be true here: someone else in the group might separately be doing the same work (double-entry science, but only on a random sample).

    The FDA’s Sentinel system has done these sorts of things, and they seem to hope they can scale that up more generally to evidence generation: http://blogs.fda.gov/fdavoice/index.php/2016/04/what-we-mean-when-we-talk-about-evgen-part-i-laying-the-foundation-for-a-national-system-for-evidence-generation/?source=govdelivery&utm_medium=email&utm_source=govdelivery

    • One point that makes biomedical research harder is that, in matters concerning our health, we are willing to invest a lot even for a small potential effect.

      Even the whiff of a small effect can convince us to spend on a drug; an effect size that tiny would never pass the initial gate if it were, say, a slightly better catalyst or adhesive.

      Ergo, it’s not entirely surprising that there’s a crisis in Biomed. We are often knowingly chasing very small effects.

  2. Andrew,
    Can you elaborate on “But in biomedicine, it’s not just careers and reputations on the line, it’s big money”?

    Are you talking about money from big pharma? If so, I don’t think they are incentivized to encourage p-hacking since they ultimately need to perform a successful large double blind randomized trial to make a cent. And they waste lots of money following false leads. It was actually a big pharma company that published the Nature paper about low reproducibility rates.

    Or are you talking about the huge size of grants for biomedical research? While this type of research is more expensive to conduct, it’s not like the academic scientists make much more money than psychology professors, is it?

    I suppose biomedical researchers might have the incentive to publish a bunch of papers quickly and then land a cushy job for a pharma company, but is this what you were talking about?

    • Z:

      I didn’t have any specific source in mind, more like all the different ways in which money flows through biomedical research, including big grants (lots of bioscientists at universities have multimillion dollar labs), pharma (there have been some famous examples recently of statistical cheating, data being hidden, etc.), and the potential for big payoffs with patents. Overall the stakes can be high. There also are multicenter trials where the people running the trial at each center have a motivation to cheat.

      • You should normalize the stakes to the typical reward in the field. A researcher whose typical consulting revenue is $20,000/yr is as likely to cheat for a $10,000 extra payoff as a biomed researcher with proportionally larger monies at stake.

        The real correlation between incentives and cheating is likely very complicated.

    • >”they ultimately need to perform a successful large double blind randomized trial to make a cent”

      Even these don’t mean much (it is just high-class NHST after all). Drawing incorrect conclusions is just a matter of measuring the wrong thing (usually a questionable proxy), messing up the blinding, unbalanced attrition (your treatment made the sickest people drop out), trying enough different trials until a few in a row happen to be successful, etc. The only reason it *seems* to work is the expense, so people only spend the money if they believe the effect is likely to be large enough.

      • FDA routinely checks for all these things. I won’t say it’s perfect but you don’t get to pick your own proxy measurement, skip the missing=failure analysis, or screw up the blinding. Obviously there are other mistakes you could make (intentionally or not) or things the FDA could miss, but the bar is much higher than I think your comment implies.

        • >”FDA routinely checks for all these things.”

          How do you reconcile your optimism with the FDA’s own self-assessment (2007) that it was incapable of doing this:

          “1.2 Major Findings
          1.2.1 The FDA cannot fulfill its mission because its scientific base has eroded and its scientific organizational structure is weak
          1.2.2 The FDA cannot fulfill its mission because its scientific workforce does not have sufficient capacity and capability
          1.2.3 The FDA cannot fulfill its mission because its information technology (IT) infrastructure is inadequate”
          http://www.fda.gov/ohrms/dockets/ac/07/briefing/2007-4329b_02_01_FDA%20Report%20on%20Science%20and%20Technology.pdf

          Did the situation get resolved at the FDA? If so, what year? I honestly don’t know. What about the reports in a comment below and elsewhere that we are getting little and misleading info on what happens in these trials?

          http://statmodeling.stat.columbia.edu/2016/04/20/cancer-research-is-broken/#comment-270152
          http://www.nytimes.com/2013/06/30/business/breaking-the-seal-on-drug-research.html?_r=0

        • Government agencies that are lobbying for MORE funding? Gasp!

          Ok, sarcasm aside, you have to consider the incentives at play in such a report as well. They may be purposefully overstating things.

        • I’m not sure how lying to get money is the more favorable interpretation…

          Anyway, even when they do detect problems, those problems don’t often seem to make it into the literature. There are tons of stories about this if you search:

          “Fifty-seven published clinical trials were identified for which an FDA inspection of a trial site had found significant evidence of 1 or more of the following problems: falsification or submission of false information, 22 trials (39%); problems with adverse events reporting, 14 trials (25%); protocol violations, 42 trials (74%); inadequate or inaccurate recordkeeping, 35 trials (61%); failure to protect the safety of patients and/or issues with oversight or informed consent, 30 trials (53%); and violations not otherwise categorized, 20 trials (35%). Only 3 of the 78 publications (4%) that resulted from trials in which the FDA found significant violations mentioned the objectionable conditions or practices found during the inspection. No corrections, retractions, expressions of concern, or other comments acknowledging the key issues identified by the inspection were subsequently published.”
          http://www.ncbi.nlm.nih.gov/pubmed/25664866

          “Surrogate measures, which allow for shorter, smaller and cheaper clinical trials, have opened the gates to a steady stream of costly drugs of dubious value.
          “The whole paradigm is broken, and it is an unmitigated disaster,” said Peter F. Thall, a biostatistician at MD Anderson Cancer Center in Houston who designs clinical trials for cancer research.
          The system creates a veneer of innovation that hides a deeper problem, say Thall and other critics of this change in emphasis.
          By encouraging drug companies to focus on surrogate measures, the critics say, the FDA is undermining the development of drugs that actually will improve and prolong people’s lives.
          “We’ve spent billions of dollars on trials that should have never been done in the first place,” Thall said.”
          http://www.jsonline.com/watchdog/watchdogreports/fda-approves-cancer-drugs-without-proof-theyre-extending-lives-b99348000z1-280437692.html

        • I’m inclined to agree with Anoneuoid that the current system at the FDA is broken. (See for example the two bmj articles I linked to downthread.) I think the European Medicines Agency has similar problems.

          But I don’t see the FDA and EMA as necessarily being villains — their hands are tied by constraints imposed by Congress, presumably under the influence of drug companies.

          Also, to the best of my knowledge, the agencies have no power to see that accurate information is published.

        • Thanks for the links. I’m not surprised that the FDA and drug discovery etc are broken. But I’m not sure it’s a lack of “scientific capacity” or problems with the FDA’s “scientific workforce” or inadequate “information technology” etc (which I read as “we’re incompetent and can’t hire good people and can’t figure out how to use the massive amount of Free Software available”). I think it’s far more likely to be regulatory capture, political pressures, and bureaucratic fiefdom protections etc. Rocking the boat is bad for hundreds of middle managers.

        • >”I’m not surprised that the FDA and drug discovery etc are broken. But I’m not sure it’s a lack of “scientific capacity” or problems with the FDA’s “scientific workforce” or inadequate “information technology” etc…”

          I agree. However, what is the more generous interpretation?
          1) FDA honestly saying it cannot do its job
          2) FDA has a problem with an institutional culture of lying to get money

          I choose #1 without hesitation. I don’t understand how proposing #2 should mean there is less of a problem, but maybe that isn’t what you are saying.

          The claim being challenged was that the FDA can be relied upon to deal with clinical trials “measuring the wrong thing (usually a questionable proxy), messing up the blinding, unbalanced attrition (your treatment made the sickest people drop out), trying enough different trials until a few in a row happen to be successful, etc.”

        • Anoneuoid:

          I’ve got maybe the least generous interpretation:

          the FDA can’t do its job because of politics, regulatory capture, etc etc.

          the FDA is blaming that fact on lack of “scientific” resources and “information technology” so they can get more money.

        • Given these challenges to undertaking meta-analyses of published data (see, e.g., Tom Jefferson et al. of The Cochrane Collaboration, “Risk of bias in industry-funded oseltamivir trials: comparison of core reports versus full clinical study reports,” http://bmjopen.bmj.com/content/4/9/e005253.full), I suggested to one of the founders of the Cochrane Collaboration that they “close up shop”.

          They kindly replied “Because CSRs [reports regulatory agencies do get] aren’t available for most randomised trials I don’t think it’s likely that people will give up on doing their best to make sense of whatever data are available”

          So right now, given whatever flaws the FDA has, they do get CSRs, can audit those providing them, and can redo the analyses various ways, so they are in a far better position than anyone else to make sense of drug effects. Leveling the playing field for academics would be great!

          Now, Bob O’Neil was very candid about some of the problems at a recent JSM presentation (paraphrasing: when we originally got funding to build the biostats group, there were not enough good candidates, but we had to staff up then so as not to lose the positions, and we are still suffering from that). They have recently increased their statistical staff, and the Sentinel project looks promising for post-market assessments. (Also, in the links, Peter Thall was referring to cancer studies specifically, and unfortunately that’s a very desperate area of drug development.)

        • I reconcile it by the fact that these are different issues from the ones you raised.

          The original question and my response had very specific things that, even at my fair distance from actual trials, I’ve seen the FDA do. It’s not hard (or, importantly, resource intensive) to say that everyone doing this type of study must use survival as an endpoint, or that if that’s not feasible they’ll only accept this proxy, or to re-run the numbers counting all discontinued treatments as failures.[1] On the other hand, there are mistakes: in one case that pissed me off, a company buried a “disclosure”[2] as a footnote in a probably 20,000+ page application, then won in court when the FDA realized the impact and tried to nail the company. The FDA definitely fails at stuff, but that doesn’t mean it misses at the most basic levels.

          The internal FDA report you link to doesn’t seem to say anything like that either. The main theme of the stats section is “we need to be more innovative to keep up with the larger safety data sets we can now collect, and new realms of science,” which is pretty consistent with the “we need more money” goal (and doesn’t require lying, either). I don’t see self-flagellation about missing on the most basic level of study design. The stories about literature problems that you link to are a problem, but I didn’t comment on them and I honestly don’t even know what the FDA’s mandate is there. Do they even have statutory authority to influence private publications? They’ve lost on freedom-of-speech grounds in some vaguely similar cases where they did try to control research communications to medical professionals.

          Overall I don’t think of myself as optimistic about the FDA, though maybe I am in relative terms. I definitely think there are problems in biomedical research.

          [1] According to one veteran of many approvals I’ve worked with, the standard approach is that they’ll always do this, and if it doesn’t push your primary endpoint past the magic ‘0.05’ threshold they’ll generally defer to your classification.
          [2] About the potential impact of some relevant metabolite, IIRC. If the FDA had caught this in time, it could have made life painful, for bad reporting as well as for the underlying substance of the issue. They seem to have lost regulatory authority by not flagging it in the correct window, and then the court system bailed out the company.

        • I think the best way forward is to get away from talking about generalities. What recent FDA-monitored clinical trials that led to a drug approval have had their results published? Pick one and let us study it.

        • I don’t think we’re even having the same conversation. I was talking about a few specific statistical errors that are tough to get past the FDA; you propose that the best way to reach agreement is to see what got past the peer review process.

          It seems to me you read that the FDA has scientific deficiencies and thus attribute to it the same sort of deficiencies a bad journal would have. They have problems, but they are not necessarily the same ones. There are FDA statistical reviews available from the agency’s site, and other comments on applications, if you want to dig in. I won’t “study” them with you since I couldn’t add much to the medicine or the stats, although if you find one where the efficacy is solely due to unbalanced attrition I’d be interested in it.

        • mark k,

          I thought we were talking about this (quoted from above):

          ‘The claim being challenged was that the FDA can be relied upon to deal with clinical trials “measuring the wrong thing (usually a questionable proxy), messing up the blinding, unbalanced attrition (your treatment made the sickest people drop out), trying enough different trials until a few in a row happen to be successful, etc.”’

          I figured studying an example of what gets through should give us an idea of what errors get caught or not. I would put forth the effort, but understand that it is something that may require you to devote more time than you can spare.

          >”if you find one where the efficacy is solely due to unbalanced attrition I’d be interested in it”

          Of course it won’t be possible to convincingly attribute an effect solely to that (it is a vague explanation), but go ahead and pick one. Also, the problems I will point out have nothing to do with statistics beyond noting when the authors confuse stats for science.

    • My own anecdotal experience as a lab rat was that cancer labs have a ton of money, not just compared to psych labs but compared to other biotech/biomedical labs as well. All that money creates a huge research machine that requires a large and constant influx of more money, and that future money is dependent on past results. Even for scientists who aren’t personally getting rich, that creates a huge incentive for unethical research practices.

  3. I don’t even know how you would decide whether a study in the medical literature is reproducible. In the clinical literature, the methods sections are often brief and vague. One could easily imagine many ways in which the study was carried out that are meaningfully different but all consistent with what is reported. Journals impose severe limits on the lengths of articles. Those limits may have been adequate decades ago when studies were smaller and simpler, but in the current context they often leave authors with no choice but to elide salient information from the methods and from the results.

    The grant approval process at NIH also contributes to this. Several decades ago, the scientific portion of a research proposal was allowed up to 25 pages. This typically provided enough room that you could really set out what you planned to do in a concrete way, so that somebody could actually replicate it or audit it. Now the scientific sections are limited to 12 pages for a typical proposal (and only 6 pages for smaller grant mechanisms)–which, again, forces the omission of important content. If your study design is so simple that it can be described in enough detail to be replicable or auditable in 12 pages it’s probably too simplistic a study to be a serious candidate for NIH funding.

    This has another interesting side effect. Since reviewers cannot be given a full description of the methodology to evaluate, the reputation of the investigators looms larger in the evaluation, further intensifying the “publish dramatic results in a high-impact journal to stay viable” pressure.

    • An associated and worrisome issue is that, once a grant is approved and funded, investigators seem to feel little obligation to conduct and analyze the study as described in the approved proposal. There seems to be little consequence to taking the money and going off and doing something rather different from what was proposed. I have the sense that the methods descriptions in the proposals essentially serve to demonstrate the competence of the investigators to complete the study (trust us, we know what we’re doing), rather than being a contractual commitment to do what was stated.

  4. i don’t disagree about the financials (as someone tangentially related to the field), but i will add that the problem would be much simpler if it were just p-values. no, indeed! then you could maybe just look at some of the data and see, oh, it’s garbage. it’s worse than that.

    the issue is that the data (at least the stuff with reproducibility issues) is largely analog: western blots, cell images, biological reagents, things like that. these are the kinds of things where you can often squint and come up with a bunch of different possibilities. hell, the antibodies used in westerns will often cross-react with many proteins, meaning you don’t even know what you’re looking at a lot of the time. cell images are probably worse, i have less experience there.

    and the real digital data, the worst stuff, is the newest: high-throughput dna sequencing. the data are huge and the analysis can often be shown to mean whatever you want. in this instance, people do use p-values, because when you’ve got a gazillion data points you can get anything to be not only p<.05, but p<2.2E-16. so you’ve got big mounds of apparently statistically unassailable garbage (see the quick illustration after this comment).

    okay, end rant. sorry, couldn't keep my mouth shut.
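
    To make that last p-value point concrete, here is a minimal sketch in Python. The sample size and the 0.01-standard-deviation “effect” are made-up numbers chosen only to show how a practically negligible difference sails far past p < 2.2e-16 once n is huge:

        # Toy illustration: with huge n, a negligible effect gives an absurdly small p-value.
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(1)
        n = 5_000_000                 # made-up "gazillion" of data points per group
        effect = 0.01                 # 1% of a standard deviation: practically nothing

        control = rng.normal(0.0, 1.0, n)
        treated = rng.normal(effect, 1.0, n)

        t, p = stats.ttest_ind(treated, control)
        print(f"t = {t:.1f}, p = {p:.2e}")   # p lands far below 2.2e-16, yet the effect is trivial

    The estimated difference is still only about 0.01 standard deviations, i.e. nothing of practical importance; only the p-value looks impressive.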

  5. Let’s not forget one of the most important problems: the game-theory/feedback effect.

    For the most part, total grant funds are fixed. To get a grant you need to show a history of having done some kind of fancy work (ie. publish papers in fancy journals). If you start out planning to be careful and run proper controls, and look for alternative explanations, and basically do careful science, you will have several effects running against you for funding:

    1) Other people are not so careful and are more willing to hype. They compete for the slots in the journals.

    2) Being a careful scientist takes more time, so your publication rate will be lower even if you ignore the competition for journal slots. Number of publications and the journal fanciness matters FAR more than accuracy or carefulness because no one is policing accuracy or carefulness, certainly the reviewers are not effective.

    3) Not only does publishing beget funding, but also funding begets funding, so there is a monetary feedback effect for those willing to cut corners.

    4) Initial funds are limited, so there is a “death” process that eliminates careful labs that don’t get lucky or don’t hype things up.

    5) Using careful language and being careful in comparisons and controls begets more detailed criticism by reviewers and longer more expensive time-to-publication.

    By the end of this process you realize that the vast majority of people getting grants are going to be doing something questionable (whether it’s choice of what to work on being motivated more by feeding the machine than by discovering truly important things, choice of methods, lack of controls, lack of serious self-criticism, or in the extreme case just making stuff up like in the STAP stem cell debacle and several other high profile cases).
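
    Here is a toy simulation of those dynamics in Python. Every number in it (the publication rates, the funding bonus, the 20% cull) is an assumption invented for illustration; the only point is the direction of the selection pressure, namely that careful labs get squeezed out even if nobody is consciously cheating.

        # Toy model of the funding feedback loop described above; all parameters are invented.
        import numpy as np

        rng = np.random.default_rng(0)

        n_labs = 200
        careful = rng.permutation([True, False] * (n_labs // 2))   # half careful, half corner-cutting
        funding = np.ones(n_labs)                                   # everyone starts with one grant

        for cycle in range(31):
            # Corner-cutters publish faster (point 2): assumed 3 vs 1.5 papers per cycle.
            papers = rng.poisson(np.where(careful, 1.5, 3.0))
            # Funding begets funding (point 3): grant score mixes output with current funding.
            score = papers + 0.5 * funding
            # The worst-scoring ~20% of labs fold (point 4)...
            dead = score <= np.quantile(score, 0.2)
            survivors = np.flatnonzero(~dead)
            # ...and are replaced by new labs imitating a random surviving lab's strategy.
            careful[dead] = careful[rng.choice(survivors, size=int(dead.sum()))]
            funding[dead] = 1.0
            funding[~dead] = score[~dead] / score[~dead].mean()      # survivors' new funding shares
            if cycle % 10 == 0:
                print(f"cycle {cycle:2d}: fraction of careful labs = {careful.mean():.2f}")

    Under these invented numbers the careful fraction collapses within a few dozen grant cycles, and the same qualitative outcome should hold whenever corner-cutting reliably buys a higher score.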

    • Basically, instead of working on how to solve a biomedical problem, we are working on how to get into a journal?

      There’s too much focus on researchers. They will game the system the way you describe & in more ways.

      If the *fund givers* cared, we would fare better.

      • > There’s too much focus on researchers
        Who else is doing the research? And it’s how they do it, and especially how they report it, that makes for (here usually rather poor) research.

        I agree with Daniel, but you raise an important point (“They will game the system the way you describe & in more ways”). As an instance, the first use of published clinical trial quality score guidelines was _corner cutters_ learning what they needed to say they did in their trials (regardless of what they actually did) in order to be more likely to get published.

        That is why I think the process has to actually be checked (randomly audited or managed by a co-operative group that randomly redoes each other’s work).

        > If the *fund givers* cared
        But it was/is just too good for them not to swan (sit there and look good).

        They get a pile of money, ask researchers to submit proposals, ask other researchers to rank those proposals, fund the higher-ranked proposals until the money runs out (funding each a little less than is needed, so as to fund more projects), and bask in the glory of the self-reported, peer-reviewed success stories.

        It is (hopefully was) a wonderful gig!

        • Exactly. The “fund givers” don’t care as much whether a drug really improves cancer mortality or QALYs. Their immediate incentives are, as you put it, to “sit there and look good”.

          It’s the principal-agent problem at its worst because the “principal” is a diffuse, unorganized aggregation of voters, taxpayers, patients etc.

          The “agents” care more about their self interest than a better drug.

  6. This issue bothers me a lot. I work in clinical trials and I think there is a real problem of how we select interventions to evaluate in trials. Part of the reason so few trials find effective treatments may be that we are being misled by pre-clinical studies into thinking lots of interventions are worth pursuing when in fact they aren’t. I’ve had limited exposure to pre-clinical studies but those I have seen have been concerning from a statistical point of view.

    I should clarify that I’m talking mostly about academic clinical trials rather than pharma drug development.

    • Naive question:

      What is the nature of pre-clinical studies typically? The animal studies? Or stuff like drug docking, high throughput evaluations etc?

      • There are people better than me to answer this in detail but I’ll give it a whirl. The short answer is that it’s friggin’ complicated.

        – Animal studies are often the best options. But in many cases there are no validated animal models. (E.g., if you get a mouse to have a fibrotic liver by damaging it chemically, then heal the damage, does that say anything at all about human damage caused by a fatty diet or alcohol?)
        – Cell based tests can be used, again with varying degrees of confidence. They might tell you a lot in infectious diseases, enough to fool you in cancer, and be hopeless in something like some cardiovascular diseases.
        – A lot of times people believe in the biochemistry, so proving a drug inhibits or activates a certain enzyme will carry a lot of weight with decision makers. There’s a philosophical dispute about whether industry overdoes “target-based” (ie, biochemical) tests these days.
        – All of the above implicitly focuses on efficacy. Drugs need other properties too, like a good “ADMET” (absorption, distribution, metabolism, excretion and toxicity) profile, which are all a mix of animal & in vitro testing.
        – If by drug docking you mean computational models, and not binding, that is never close to enough on its own.
        – High throughput evaluations typically find leads, which are then optimized (ie, made 1000x more potent and able to be dosed orally) to become drugs. High throughput would, again, never be enough on its own.

      • Maybe preclinical isn’t quite the right term. All of the sorts of study we use to justify initiating randomised trials, really, which includes animal, in vitro, genetics sometimes, and physiological studies of real people (mostly observational), and observational clinical studies. I suspect similar problems apply to all of these. I don’t know too much about the lab-based world but what I have seen about these studies does worry me.

        • Thanks. So is it (a) that we are choosing the wrong interventions to take to clinical trial, or (b) that there just aren’t any good interventions being identified by the current pre-clinical efforts?

        • Simon:

          In 1997, I gave a talk on the challenges of doing meta-analysis of clinical trials to a group of statistical researchers, and afterwards Tom Louis (http://www.biostat.jhsph.edu/~tlouis/) commented that I had painted an overly bleak picture of clinical research.

          I really hoped he was right, but nothing since then has suggested the picture is any less bleak. Still, it can get better and seems poised to: see Martha’s link re use of clinical reports and my link to the FDA announcement (e.g. Sentinel data should help with the observational clinical studies).

        • Perhaps it’s again the fact that when it concerns things that will save or lengthen our lives, we just use a different yardstick.

          e.g. Even if we don’t have any promising pre-clinical ideas, we just cannot bring ourselves to wait. We then grab at straws and push whatever little we have into a clinical trial in a desperate, magical hope that we can yet get something worthwhile out of it.

          Fundamentally, we are irrational when it concerns self-preservation projects.

        • At the individual level, and when the money being spent is insurance money, I think you’re right. But I don’t think this is necessarily the case at the level of organizations, such as pharma companies or NIH. The NIH isn’t “grasping at straws trying to get just a tiny improvement”; it’s fundamentally, as you say, a principal-agent problem. At the level of pharma, what they see is that insurance money is available to whoever is “the best,” regardless of how big the margin is. If they can find a small-margin improvement they can capture the majority of the revenue stream.

          A “prize/challenge”-based funding system specifically for cancer might work a lot better. For the first 10 years after approval, 1/2 GDP/capita/yr for every QALY saved by your drug, or something like that. If the QALYs saved translate to money earned, people will maximize that metric, instead of searching for the minimum incremental improvement that allows them to capture the insurance revenue stream.
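
          As a back-of-the-envelope sketch of that payout rule, here it is in Python; both inputs are made-up round numbers, included only to show the scale of the incentive:

            # Back-of-the-envelope version of the proposed prize rule:
            # 1/2 GDP per capita, per QALY saved, per year, for the first 10 years after approval.
            # Both inputs are made-up round numbers, just to show the scale of the incentive.
            gdp_per_capita = 56_000          # rough US figure, dollars per year
            qalys_saved_per_year = 20_000    # hypothetical QALYs the drug saves each year
            years = 10

            annual_prize = 0.5 * gdp_per_capita * qalys_saved_per_year
            print(f"annual prize: ${annual_prize:,.0f}")          # $560,000,000 per year
            print(f"10-year total: ${annual_prize * years:,.0f}") # $5,600,000,000

          Under those made-up inputs the payout is on the order of half a billion dollars a year, so the reward would scale with measured benefit rather than with capturing the insurance revenue stream.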

  7. >>>> “If the *fund givers* cared, we would fare better.”


    …What ?? You don’t trust NIH bureaucrats to wisely spend other people’s (taxpayer) money?

    The core systemic problem is of course that the federal government is mismanaging things they have no constitutional authority to be involved in at all. Private organizations/persons would spend their own money much more wisely, and in productive competition with each other.

    The NIH dumps ~$32 BILLION annually as the world’s largest “medical research” agency.

    Taxpayer money is dumped through 50,000 grants to 300,000 “researchers” at 2,500+ universities, medical schools, and other entities worldwide.

    10% of NIH’s budget supports its own laboratory bureaucracy in Bethesda Maryland with 6,000 scientists.

    • Yes, yes, I could cheer the “Big Government = Bad” rant.

      But sadly, even if we shut down the NIH at a stroke, I don’t see any easy ways to do this better. Basically, we have a diffuse, unorganized set of people (taxpayers, citizens, patients, whatever) who want to pool their individually meager monies to find a (hard to verify) solution to a common malady. How do they execute this?

      You are simultaneously confronting a very difficult principal-agent problem, asymmetric information, expensive experiments, highly heterogeneous cohorts, noisy measurements & potentially small effect sizes.

      What solution do you propose?

    • Nobody in the private sector wants to spend billions of dollars on exploratory basic medical research. That’s why we have the government do it. The private sector swoops in when marketable products are found.

      “The NIH dumps ~$32 BILLION annually as the world’s largest “medical research” agency.”

      Is that supposed to be an absurdly large number? That’s about 5% of the size of the defense budget which, to be blunt, has at least $32 BILLION of waste in it.

      And what’s with the scare quotes around “medical research”? Are you contending that the NIH isn’t conducting/funding medical research?

      “Taxpayer money is dumped through 50,000 grants to 300,000 “researchers” at 2,500+ universities, medical schools, and other entities worldwide.”

      Aside from using the pejorative word “dumped” do you have a point?

      “10% of NIH’s budget supports its own laboratory bureaucracy in Bethesda Maryland with 6,000 scientists.”

      Well, the NIH main campus has laboratories, a grant-disbursing bureaucracy (distinct from the laboratories, which makes me wonder what a ‘laboratory bureaucracy’ is), a clinical center, an internationally renowned web site, and other features. Again, what is your point? 6,000 is a scary big number?

  8. Most medical research is conducted in academic institutions.
    Within these institutions you are judged by the number of publications you have, not by the quality of your publications – that would be too hard and too time consuming to assess. Simply the number of publications.

    When you apply for grants, the number of publications you have establishes your track record as a researcher.

    Publications are the currency in academia.

    And here’s the thing. To get a research paper published you only have to do one thing; keep submitting it to journals until it’s accepted. If, during this process, you get peer reviews that point out flaws in your study design or analysis, simply ignore them and keep submitting. Ultimately your paper will be accepted for publication. In medical research there will always be a journal that will accept it.

    That’s it, that’s all you have to do.

    • “Within these institutions you are judged by the number of publications you have, not by the quality of your publications – that would be too hard and too time consuming to assess. Simply the number of publications.”

      That’s too simplistic. Journal impact factor and the number of citations are also considered. Also the people running the departments have their own understanding of the respective fields and can make individual judgments about who produces high quality work and who doesn’t.

      The myth of the value of publication volume is one thing that hurts academia. A lot of crap papers get published because a lot of authors think it’s somehow beneficial to their careers to produce high volumes of crap papers. No, it’s not completely unimportant, but it’s far less important than you make it out to be.

      • I’m not sure of the good judgement of people running the Journals if they gladly publish stuff like the papers on Himmicanes, power poses, red=fertility etc.

        And it’s a very similar cohort that runs the Departments and the Journals.

  9. “Basically, we have a diffuse, unorganized set of people…”

    Yes, yes… it’s virtually impossible for human beings to voluntarily organize themselves into any productive large scale activity… without the firm hand of government politicians & bureaucrats guiding things.

    The American populace would have had no effective economic mechanism for feeding, clothing, and housing itself, researching/developing new methods and products, industrializing mass production/transportation/communications, etc. Total mystery how the American population so successfully prospered into the year 1900 without a vast central government directing and caring for that hapless nation of bumpkins.

    Certainly most all Americans care nothing about their health and would not organize for… nor voluntarily support any serious medical research. Thank God for the NIH.

    And the NIH has accomplished so much in the past century with the hundreds of billion$ it spent. Not a dime was wasted. The NIH website claims as some of its greatest achievements — “…the development of MRI, understanding of how viruses can cause cancer, insights into cholesterol control, and knowledge of how our brain processes visual information, among dozens of other advances.” Let the NIH critics contemplate that unassailable record of heroic, indirect accomplishment.
