“Seeding trials”: medical marketing disguised as science

Paul Alper points to this horrifying news article by Mary Chris Jaklevic, “how a medical device ‘seeding trial’ disguised marketing as science.”

I’d never heard of “seeding trials” before. Here’s Jaklevic:

As a new line of hip implants was about to be launched in 2000, a stunning email went out from the manufacturer’s marketing department. It described a “clinical research strategy” to pay orthopedic surgeons $400 for each patient they enrolled in a company-sponsored trial. . . . Ostensibly the trial was intended to measure how often liners of the Pinnacle Hip System, made by Johnson & Johnson’s DePuy subsidiary, stayed in place after five years. But according to a newly published review article [by Joan Steffen, Ella Fassler, Kevin Reardon, and David Egilman], the trial was really a scheme to gin up sales momentum under the guise of scientific research.

How did the scam work? Jaklevic explains:

The internal company email outlined a “strategy for collecting survivorship data on PINNACLE while maximizing our impact on the market.” It said the trial would include a large group of 40 surgeons in order to achieve “very fast patient enrollment” that would generate sales of 1,000 implants in a year.

While $345,000 would be paid to doctors, it said, “The sales revenue estimate for this study is $4.2 million.” . . .

“Seeding trials are one method by which drug or device companies can just pay physicians for using their products without calling it an actual bribe,” said Adriane Fugh-Berman MD, a professor of pharmacology and physiology at Georgetown University . . .

According to the paper, the trial generated millions in sales but yielded no valid research findings, although the company heavily manipulated the data it did collect to show a false 99.9% success rate that was used in promotional materials. . . .

Dayum. 99.9% success rate, huh? You don’t usually hear about that level of success; indeed the only examples I can think of offhand are old-time elections in the Soviet Union, and the replication rate as reported by the Harvard psychology department.

Here are some details:

J&J violated its own clinical research guidelines in manipulating data, delaying reports of adverse events and failing to follow parameters established for the study, such as not reporting results until all patients had been enrolled for five years and not retrospectively enrolling patients, it says. In some cases there were no patient consents. One surgeon continued to enroll patients in the trial and submit data even after his hospital’s review board refused to approve his participation in the trial.

Further, the company went to elaborate lengths to show a 99.9% success rate for five years, using sleights-of-hand such as not including certain types of device failures and hiding the fact that just 21 patients had been followed for a full five years.

Wait a minute. If there are only 21 patients, then the success rate could be 21/21 = 100%, or 20/21 = 95.2% . . . How do you get 99.9%?

Jaklevic continues:

Eventually, the bogus trial data was used as the “fundamental selling point” in Pinnacle marketing, providing physicians and patients with “a false sense of security,” the article says. The incredible near-perfect track record appeared in ads in medical journals, patient brochures, and ads in consumer publications such as the Ladies’ Home Journal and Golf Digest. . . .

Some of those ads featured an endorsement from Duke University basketball coach and hip implant recipient Mike Krzyzewski, even though Krzyzewski didn’t have Pinnacle implants. Krzyzewski’s osteoarthritis awareness promotions were covered uncritically by USA Today, the Florida Times-Union, and CBS News, only the latter of which mentioned he was paid.

No . . . not Coach K! All my illusions are shattered. Next you’re gonna tell me that Michael Jordan doesn’t really eat at McDonald’s?

P.S. Concern with “seeding trials” is not new. Alper wrote about the topic in 2011, citing a Wikipedia article that pointed to a journal article from 1996. But it keeps happening, I guess.

P.P.S. Full disclosure: I’ve been a paid consultant for pharmaceutical companies.

31 thoughts on ““Seeding trials”: medical marketing disguised as science”

  1. You don’t usually hear about that level of success; indeed the only examples I can think of offhand are old-time elections in the Soviet Union,….
    Egyptian presidential elections?

    I once got an r=1.0! Further inspection of the data suggested that I should not be regressing x1 on x1.

    • Then again, I was once hired to reverse-engineer a quality index. A review of the index’s brochure revealed that it was calculated from a specified set of attributes (with details on how they were measured), and the scoring process, though described in obfuscatory terms, was clearly a linear combination. I was quite pleased to get R squared = 1.0, and would have counted it a failure at anything less. (OK, I could give myself slack for rounding error perhaps.)
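      (To make that concrete, here’s a toy sketch with invented weights: if the index really is a fixed linear combination of its published attributes, ordinary least squares recovers the weights, with R² = 1 up to floating-point error.)

```python
# Toy reconstruction of a "quality index" that is secretly a linear
# combination of published attributes. All numbers here are invented.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))        # 50 products, 3 scored attributes
true_w = np.array([0.5, 0.3, 0.2])  # the index's hidden weights
y = X @ true_w                      # published index values

# Ordinary least squares recovers the weights exactly (no noise term).
w, *_ = np.linalg.lstsq(X, y, rcond=None)
ss_res = float(np.sum((y - X @ w) ** 2))
ss_tot = float(np.sum((y - y.mean()) ** 2))
r2 = 1 - ss_res / ss_tot
print(np.round(w, 3), round(r2, 6))
```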

  2. Wait a minute. If there are only 21 patients, then the success rate could be 21/21 = 100%, or 20/21 = 95.2% . . . How do you get 99.9%?

    Only 21 were followed for 5 years but maybe thousands were followed for 4 months or whatever. It wouldn’t be hard to get high success after very short times.

    • The 99.9% figure came with an asterisk: It was the output of a survival model for one component of the implant.

      There are a lot of mechanical parts that are much more reliable than chocolate cake recipes. It is not remarkable one way or the other to find one in a hip implant.
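      (A toy illustration of that point, with invented numbers and a hand-rolled Kaplan–Meier estimator: if thousands of patients are censored after a few months and only 21 reach five years, a couple of early failures still yield a “survival” estimate of about 99.9%.)

```python
# Invented numbers: how heavy early censoring can yield a ~99.9%
# five-year "survival" estimate even though only 21 patients were
# actually followed for the full five years.

def kaplan_meier(events):
    """Kaplan-Meier survival estimate at the last observed failure time.
    events: list of (time, failed) pairs; failed=False means censored."""
    surv = 1.0
    for t in sorted({ti for ti, failed in events if failed}):
        at_risk = sum(1 for ti, _ in events if ti >= t)
        failures = sum(1 for ti, failed in events if ti == t and failed)
        surv *= 1 - failures / at_risk
    return surv

cohort = (
    [(0.5, False)] * 2000   # 2,000 patients censored at 6 months
    + [(5.0, False)] * 21   # only 21 followed the full 5 years
    + [(0.3, True)] * 2     # 2 early failures
)
print(round(kaplan_meier(cohort), 4))  # -> 0.999
```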

  3. I hadn’t heard of the term ‘seeding trials’ but I had surmised in the early 90’s that this was one marketing strategy deployed.

    I think John Ioannidis will probably be viewed as a patron saint of the scientific enterprise due to his comprehensive exposés of the crises in scientific research. He will lead cutting-edge cross-disciplinary collaborations. I truly admire him and all those who venture into this challenging terrain.

      • Dzhaughn,

        I did not make the claim that it took John Ioannidis’ insights to understand this. But it is the case that not all thought leaders in epidemiology, statistics, or medicine have acknowledged the role and extent of conflicts of interest either.

        There was a turning point with the advent of the evidence-based movement back in the early 90s. The controversies about Null Hypothesis Significance Testing and the characterization of replication as a crisis preceded John Ioannidis’ seminal article, “Why Most Published Research Findings Are False.” It’s that John Ioannidis was able to communicate the crisis of confidence well, in understandable terms even to the public.

        I think that, by happenstance, there arose a broader crisis of confidence in decision making. So perhaps there were synergies there in drawing attention to the state of scientific research.

    • Well, without generalizing to all vaccines, it is the case that with regard to measles vaccine and autism risk (probably the most important of the vaccine controversies), there is ample data from non-pharmaceutical sources supporting the absence of an association.

        • Meh, I remember pointing out on this site at some point that there were a lot of problems with the measles vaccine. E.g., no measles vaccine blinded RCT (despite Merck claiming otherwise with false references), and all the evidence was of the form “vaccine introduced, measles diagnoses go down”.

          Then I mentioned that people used to have their children get infected with measles on purpose. Public health campaigns accompanying the introduction of the vaccine slowed/stopped this practice. At the same time the diagnostic criteria for measles became more strict, and new, more discriminative diagnostic tests were developed (these days ~9,995/10,000 suspected measles cases are diagnosed as something else with similar symptoms). And there are plenty of reports where doctors who happen to be blinded to vaccination status do often misdiagnose measles.

          All that could easily account for 90–99% of the “vaccine effect”. Also, measles is still hitting the news in the same seasonal cycle it has for at least a century (peak week 15, trough week 35).

          Then there is the danger of the “honeymoon period,” during which you have low incidence as susceptible people accumulate, leading eventually to an epidemic far worse than could otherwise have happened in an environment of continuous exposure. This was never taken into account in any cost-benefit analysis.

          Then there is the talking point that unvaccinated people are a danger to newborns since “it isn’t safe for them until they are a year old”, etc. No, it’s because newborns are protected by maternal antibodies. It’s pointless to vaccinate them since it doesn’t work (the maternal antibodies already protect the newborn and block the vaccine too). But there goes the main “think of the children” argument… and now you have a generation of mothers whose antibodies wane more quickly than if they had experienced a full infection, but the age of vaccination can’t be lowered due to the fear-inducing propaganda these mothers were exposed to.

          Afaict, no one really cares about the evidence when it comes to that topic. It is totally based on argument from authority and emotion. I’ll find the sources for all that if anyone is actually interested.

  4. Wait a minute. If there are only 21 patients, then the success rate could be 21/21 = 100%, or 20/21 = 95.2% . . . How do you get 99.9%?

    Only 21 patients were tracked for 5 years; they didn’t mention the 2000 patients who hadn’t had any failures in the first 2 weeks after installation.

  5. “You don’t usually hear about that level of success; indeed the only examples I can think of offhand are old-time elections in the Soviet Union, and the replication rate as reported by the Harvard psychology department.”

    While I thought it might eventually get old, I literally never get tired of mocking Gilbert et al.’s claim that the replication rate in psychology is “statistically indistinguishable from 100%.”

    • Anon:

      I remain disappointed, not so much that these esteemed Harvard profs made a statistical error—statistics is hard, after all, and these people are not statisticians, and I fear they are surrounded by yes-men who can’t or won’t tell them when they’re wrong—but that they would avoid opportunities to apologize and correct their errors. Really, what’s the point of being a tenured professor if you’re not interested in or willing to correct your mistakes?

  6. As long as we’re discussing seeding it should be noted that the author of the “review article” cited here is an expert witness for the plaintiffs in the hip implant litigation. His publishing habits, as a search of pubmed.gov will attest, appear strongly correlated with his testifying; whether it’s asbestos, hexavalent chromium, “popcorn lung”, benzene, beryllium, etc. (He is by the way particularly hostile to any suggestion that certain observational studies and the conclusions drawn from them rest upon dubious statistical methods and inferences.) Seeding the literature with “peer reviewed” articles that happily support your consulting career and savage the reputations of those who dare to question your opinions wouldn’t happen if scientific publishing worked as advertised.

    • I am the author of the paper underlying this blog post and these comments. Thanatos Savehn’s ad hominem attack is understandable, since he failed to provide even a single substantive criticism. It should be noted that this paper, like some of my other papers, was based on previously confidential documents. In this case the documents included case report forms (CRFs), which J&J/DePuy researchers altered. (A picture of an altered CRF is in the paper.) Much corporate research malfeasance goes unreported because journals do not review raw data or methods.

      I get access to this type of information in my consulting and I fight like the dickens to get it into the public domain. That is why many of my papers relate to litigation cases in which I have testified. I assure your readership that cross-examination, which goes on for days, weeks, or in some cases months, is far more rigorous than the peer review process.

  7. So to speak, all that glitters is not the gold standard. The gold standard in the medical world, respected by statisticians, medical practitioners and the public alike is the randomized clinical trial. Depending on what tribe of statistics one belongs to, the p-value, the effect size, or the posterior probability would determine whether or not a given treatment is superior to the others and/or a placebo.

    But just because something has the appearance of a randomized clinical trial does not mean it is one.

    From Wikipedia https://en.wikipedia.org/wiki/Seeding_trial:
    ——-
    The [seeding] trial is of an intervention with many competitors
    Use of a trial design unlikely to achieve its stated scientific aims (e.g., un-blinded, no control group, no placebo)
    Recruitment of physicians as trial investigators because they commonly prescribe similar medical interventions rather than for their scientific merit
    Disproportionately high payments to trial investigators for relatively little work
    Sponsorship is from a company’s sales or marketing budget rather than from research and development
    Little requirement for valid data collection

    Seeding trials are not illegal, but such practices are considered unethical. The obfuscation of true trial objectives (primarily marketing) prevents the proper establishment of informed consent for patient decisions. Additionally, trial physicians are not informed of the hidden trial objectives, which may include the physicians themselves being intended study subjects (such as in undisclosed evaluations of prescription practices). Seeding trials may also utilize inappropriate promotional rewards, which may exert undue influence or coerce desirable outcomes.
    —–

    • This type of “trial” could never be used in an application for FDA approval. Unfortunately, unlike drugs, devices such as hip implants are very loosely regulated. If a new device is substantially similar to any existing approved device, then approval of the new device follows a streamlined process requiring no evidence that the differences from existing devices are, in fact, not “substantial.”

      Perhaps this is a taste of what is to come in drugs as well if the current administration follows through on its intentions to loosen drug regulation, particularly with regard to the type of evidence required to support safety and effectiveness. If you want to see what the future would look like under such a regimen, just look back to the wild west snake-oil market that prevailed prior to the Pure Food and Drug Act.

      • Clyde Schechter wrote “This type of “trial” could never be used in an application for FDA approval. Unfortunately, unlike drugs, devices such as hip implants are very loosely regulated. If a new device is substantially similar to any existing approved device, then approval of the new device follows a streamlined process requiring no evidence that the differences from existing devices are, in fact, not ‘substantial.'”

        He is quite correct, and to see how dangerous implants are and how poorly regulated they are, read “The Danger Within Us” by Jeanne Lenzer. Her interview with Dave Davies is available at https://www.npr.org/2018/01/17/578562873/are-implanted-medical-devices-creating-a-danger-within-us

      • “If a new device is substantially similar to any existing approved device, then approval of the new device follows a streamlined process requiring no evidence that the differences from existing devices are, in fact, not “substantial.””

        This is really depressing!

        • Why is it (doubly :) ) depressing? Upon whom do you ask devices be tested before they’re tested in you or yours? Who are these people who are supposed to act as guinea pigs for us? Given that RCTs of devices function as little more than rituals to create the illusion of testiness, and responses once marketed vary as widely as critics of RCTs assert, why not just accept the fact that all implanted devices pose unknown risks? The FDA assesses similarity to something already in use. After that the manufacturer bears the risk (as the civil courts place the risk of defective products squarely on the manufacturer). Seems reasonable. Perhaps, ultimately, they’re all N-of-1 trials as our individual responses are all that really matter.

          And what of the consumer? I deposed a man who’d gone through multiple replacements of the old style hip implants. Each had lasted as long as expected. Now there was nothing left of the upper third of his femur and he was wheelchair bound. If he’d had the option he’d have tried the hopefully longer-lasting version as he might still be walking. If only we could live counterfactually … but then, I’d have to work for a living.

        • I think you’re reading things into my comment.

          A big part of my concern is that the people who participate in clinical trials of devices are not (to the best of my knowledge) fully informed of the risks they face, and the trials may indeed, as you say, “function as little more than rituals to create the illusion of testiness.” This is in itself depressing. Then to have a new device only required to have a “streamlined” approval process because it is asserted (with no evidence provided) to be substantially similar to an existing device, is just extending the ritual.

          What is needed is more detailed informed consent for participants in clinical trials, better running and analysis of those clinical trials, and follow-up after the trials to give better evaluations of the devices. In addition, “substantially the same” for a new device needs strong evidence. If such evidence is not provided, then the new device also needs rigorous testing in clinical trial(s), which need to be informed in part by follow-up on participants in earlier clinical trials of similar devices.

          In other words, more informed consent (both for participants in clinical trials and subsequent patients receiving the devices); better quality of running of clinical trials; intellectual honesty and best practices in analyzing them; and requiring quality replication before approval of devices are all needed.

      • “If a new device is substantially similar to any existing approved device, then approval of the new device follows a streamlined process requiring no evidence that the differences from existing devices are, in fact, not “substantial.””

        This is really depressing.

        • Sorry for the double posting — it didn’t look like it had posted either time; I didn’t think to look back again until I Saw Daniel Lakeland’s comment about posting problems in the China Discontinuity discussion.

  8. Ben Goldacre’s book “Bad Pharma” describes similar dynamics in great detail, though he uses the term “marketing trials”.

    There was a media report a couple of years back alleging that some companies appear to be using Clinicaltrials.gov to market their wares or treatments directly to patients in the guise of clinical trials.

  9. This report by Transparency International, Cochrane and TranspariMED lists multiple instances in which selective and misleading reporting of clinical trials have been used by pharma companies to inflate apparent effectiveness and conceal harms:
    https://docs.wixstatic.com/ugd/01f35d_def0082121a648529220e1d56df4b50a.pdf

    This report explores case studies in which such evidence distortion has led to patient deaths and waste of public health funds on a large scale:
    https://media.wix.com/ugd/01f35d_0f2955eb88e34c02b82d886c528efeb4.pdf

    Sadly, none of this is new to people working in the field of clinical trial transparency. Even more sadly, policy makers have yet to wake up to the potential public health and fiscal benefits of enacting and enforcing effective transparency rules for clinical trials.

    For example, the FDA has so far failed to impose a single fine on companies that break a 2007 law requiring (some) clinical trials to publicly post their results within a year. Wider benefits aside, collecting these fines could net the US taxpayer $400 million (and counting). See:
    https://fdaaa.trialstracker.net/

    More on this topic can be found at:
    https://www.transparimed.org/resources
