
Drug-funded profs push drugs

Someone who wishes to remain anonymous writes:

I just read a long ProPublica article that I think your blog commenters might be interested in. It’s from February, but was linked to by the Mad Biologist today. Here is a link to the article:

In short, it’s about a group of professors (mainly economists) who founded a consulting firm that works for many big pharma companies. They publish many peer-reviewed articles, op-eds, blog posts, etc., on the debate over high pharmaceutical prices, always coming to the conclusion that high prices are a net benefit (high prices -> more innovation -> better treatments in the future, versus poor people having no access to existing treatments today). They are also at best very inconsistent about disclosing their affiliations and funding.

One minor thing that struck me is the following passage, about their response to a statistical criticism of one of their articles:

The founders of Precision Health Economics defended their use of survival rates in a published response to the Dartmouth study, writing that they “welcome robust scientific debate that moves forward our understanding of the world” but that the research by their critics had “moved the debate backward.”

The debate here appears to be about lead-time bias: increased screening leads to earlier detection, which can increase survival rates without actually improving outcomes. So on its face it doesn’t seem like an outrageous criticism. If they have controlled for it appropriately, they should welcome a “robust debate” so they can convince their critics and gain more support for increasing drug prices! Of course I doubt they have any interest in actually having this debate. It seems similar to the responses you get from Wansink, Cuddy, or the countless other researchers promoting flawed studies who have been featured on your blog when they are confronted with valid criticism: sound reasonable, do nothing, and get let off the hook.
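The mechanics of lead-time bias can be sketched in a few lines of Python. This is a toy illustration with invented numbers, not a model of any actual study: every simulated patient dies at the same biological time whether or not the disease is detected early, yet measured five-year survival from diagnosis “improves” under earlier detection.

```python
import random

random.seed(0)

# Each patient's time from disease onset to death is fixed by biology
# and is completely unaffected by when the disease is detected.
N = 100_000
time_to_death = [random.uniform(2, 10) for _ in range(N)]  # years from onset

def five_year_survival(detect_delay):
    """Fraction of patients alive 5 years after diagnosis, where
    diagnosis occurs `detect_delay` years after disease onset."""
    survivors = sum(1 for t in time_to_death if t - detect_delay > 5)
    return survivors / N

late = five_year_survival(3.0)   # symptomatic detection, 3 years after onset
early = five_year_survival(0.5)  # screening detects 2.5 years sooner

print(f"5-year survival, late diagnosis:  {late:.1%}")
print(f"5-year survival, early diagnosis: {early:.1%}")
# Survival "improves" with screening even though no one lives a day longer.
```

The point of the sketch: survival is clocked from diagnosis, so moving diagnosis earlier inflates the statistic mechanically. Mortality rates, which are clocked from a fixed point, don’t have this problem, which is why critics prefer them in screening debates.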

This interests me because I consult for pharmaceutical companies. I don’t really have anything to add, but this sort of conflict of interest does seem like something to worry about.

We talk a lot on this blog about bad science that’s driven by some combination of careerism and naiveté. We shouldn’t forget about the possibility of flat-out corruption.


  1. Big Pharma has a long history of out-and-out corruption. See, for example: “GlaxoSmithKline fined $3bn after bribing doctors to increase drugs sales”. Here’s an excerpt:

    The company admitted corporate misconduct over the antidepressants Paxil and Wellbutrin and asthma drug Advair.

    Psychiatrists and their partners were flown to five-star hotels, on all-expenses-paid trips where speakers, paid up to $2,500 to attend, gave presentations on the drugs. They could enjoy diving, golf, fishing and other extra activities arranged by the company.

    GSK also paid for articles on its drugs to appear in medical journals and “independent” doctors were hired by the company to promote the treatments, according to court documents.

    Paxil – which was only approved for adults – was promoted as suitable for children and teenagers by the company despite trials that showed it was ineffective, according to prosecutors.

    This sort of thing is depressingly common in the industry.

  2. Jonathan (another one) says:

    This problem becomes a complete non-problem with transparency, and Lakdawalla says that explicitly in the article, and he’s right:
    “Conflicts are always a concern, which is why it is important to be transparent about study methods — that way they can be scrutinized and debated in the academic literature.”

    IMO, there’s nothing more to it than that. The problems develop when arguments come from *authority,* not data and methods. Then you have no way of judging whether or not an opinion has been bought. With transparency I care not one bit who is the paymaster or how much they are paying.

    • John says:

      Good point. You can’t eliminate conflicts of interest, but you can push for transparency.

      Everybody is paid from some source, and that can influence their judgment, whether the source is private or public.

    • ‘Jonathan (another one)’ wrote: “The problems develop when arguments come from *authority,* not data and methods.”

      A recent preprint at BioRxiv is a very nice example of a research paper without data and without a method. The authors of this preprint simply state: “Many people are discussing changes in publishing. We bring to this our diverse and collective experience of a number of traditional and start-up publishers as well as of developing infrastructures for open access and other publishing innovations.” That’s all. No data and no methods (and also no definitions).

      Source: (“Amending published articles: time to rethink retractions and corrections?”).

      • That essay is not, nor does it claim to be, a research paper.

        • Raghu,

          Thanks a lot for confirming that (“Amending published articles: time to rethink retractions and corrections?”) is not a research paper (“That essay is not, nor does it claim to be, a research paper.”, posted on 22 April 2017 at 1:21 PM).

          Note that bioRxiv’s website states: “BioRxiv accepts preprints of articles covering all aspects of research in the life sciences.” It also states:

          * “BioRxiv is for the distribution of preprints, which are complete but unpublished research articles. (…) Research articles reporting new, confirmatory, or contradictory findings may be posted. (…). BioRxiv does not permit the posting of news, product advertisements, teaching materials, policy statements, theses, dissertations, student projects, recipes and simple protocols.”

          * “Can I post a review article on bioRxiv? BioRxiv is intended for rapid sharing of new research. Some review articles contain new data/theory/analyses and may therefore be deemed appropriate. Reviews that primarily summarize existing knowledge are not appropriate and neither are term papers, textbook excerpts, and undergraduate dissertations.”

          * “Can I post methods or protocols on bioRxiv? Research articles summarizing new experimental or computational methods are appropriate and may include step-by-step protocols. Step-by-step protocols alone are not sufficient and must be placed in the context of a complete research article that includes elements such as introduction, results, and discussion.”

          On 19 April 2017 I sent the authors an extensive review of their manuscript, in which I state that it cannot be listed as a ‘research paper’. The authors have so far not responded.

  3. stuart says:

    /// “We shouldn’t forget about the possibility of flat-out corruption.”

    Seems a trivial point. Some people in any occupation put self-interest & money ahead of personal integrity. Nobody is surprised by that aspect of human nature. The economics, scientific, and academic professions are not presumed to be immune from this problem… although we generally expect much less of it than from used car salesmen and lawyers.

    Do you have a suggested remedy for crooked economists and academics?

  4. Dale Lehman says:

    This is a good follow-up to the recent post on how economists seem to “get it right.” Their methods may be more sophisticated than some other fields, but this is an area they don’t get right. I notice that this particular firm has no open data policy. They default to the policies of whatever journals their economists publish in. Nor do they provide data for their research briefs. They believe it is fine to influence policy without providing the data to others. While that practice can be defended (and I’m sure they would appeal to the proprietary nature of much of their data), that does not make the practice anything to be admired by other researchers and other fields. From my experience (as an economist), the economics profession will be among the last to practice open data policies (yes, the experimental economists are somewhat of an exception). There is a lot of money to be made by using your reputation, publishing in “good” journals, and keeping data private enough to prevent replication, re-examination, and alternative views.

    • jrc says:

      Dale – agreed in a sense. In another sense, compared to, say, Social Psychology, a huge fraction of Econ papers are written using publicly available data.

      Also, we should be clear to make distinctions between these USC profs (yeah, I’m picking on the LA-based ones, but whatever) doing research as USC profs, and doing “research” as employees/owners of Precision Health Economics.

      Neither of these comments should be construed as providing justification or exculpation for this research group. I literally (literally literally, not figuratively literally) have no idea about their research practices, but find the appearance of conflicts of interest real enough… fine, I just re-did my Mandatory Disclosure Training(tm) and there is nothing inherently wrong with the appearance of conflict of interest. Even if publishing work from consulting gigs in academic journals looks bad, it doesn’t automatically make the research unethical, and sometimes it might even contribute to our knowledge of the world.

      But pivoting to where I think you are right, and I do think there is truth in your comment – part of the interesting ethical question is: when does an academic researcher become completely inseparable from their moonlighting gig as a paid research consultant for industry? I don’t know – and again I have no insight at all regarding this group or these researchers. Supposing they can be kept separate, then I don’t think we have to hold the paid-consultant world to the same standards of transparency that we hold academics to. I, as an academic, should be held accountable to standards of reproducibility, transparency and honesty because a) the taxpayers pay my salary and fund my institution; and b) the point of the Academy as a research entity is to sustain knowledge and generate new insights. I’ll, for myself, even add c) I think that is the best way to push knowledge forward. But none of those reasons apply to paid consultants – they have a fiduciary duty to their firm/shareholders to make money, not a duty to society to uphold and generate knowledge.

      Now, if at some point the day-job and the moonlighting-job become inseparable…. then maybe things change a bit and you have to bring with you to your consulting-gig all the trappings of your academic-gig. Or just quit the professor job – I mean, if Andrew wanted to just make money, I’m sure Novartis could pay him better than Columbia.

      • jrkrideau says:

        Perhaps I am taking too simplistic a view of the issue and I am not an ethicist, but I strongly disagree with: “paid consultants – they have fiduciary duty to their firm/shareholders to make money, not a duty to society to uphold and generate knowledge”

        Excuse me if I am mistaken, but you seem to imply that a consultant is obliged to lie or spread false information at the behest of the client.

        The consultant does not necessarily have a duty to generate new knowledge, unless it is in the contract, but if the consultant does not have a duty “to uphold knowledge,” then the consultant is being paid to lie and to give the lies a “sciency” or “expert” gloss. The consultant is prostituting their professional reputation and standing to be a sock puppet for the client.

        There is Sir Henry Wotton’s aphorism, “An ambassador is an honest gentleman sent to lie abroad for the good of his country”. I am not sure, “A consultant is an “expert” sent to lie to increase the share-price” has quite the same ring.

        • I don’t think the consultant has a paid-duty to lie, but if the consultant comes up with something damaging to the client, it’s their duty to disclose it to the client, but not (necessarily) their duty to disclose it to the world.

          One might argue that whistleblowing is the gray area. If one is paid to consult, finds that the client is actively harming society, tells the client, and the client covers it up… there are ethical arguments about your duty to disclose the information. But these arguments are not special to consultants. A janitor who finds a note “dear CEO, I’ve discovered that our drug X is killing people” in the trash can has the same ethical issue.

          • jrkrideau says:

            Yes, the “(necessarily)” is the key issue. Failure of disclosure in “trivial” commercial/economic cases strikes me as a minor issue, especially if we have a situation of dueling experts.

            “If one is paid to consult, finds that the client is actively harming society, tells the client, and the client covers it up… there are ethical arguments about your duty to disclose the information”

            I, personally, see no gray area there: there is an ethical and moral responsibility to disclose the information, but I am open to arguments showing me wrong.

            The real issue in an actual disclosure situation, as in a whistleblowing situation for an employee (as opposed to a simple refusal to write the desired report or provide the requested testimony), is the damage and possible danger to oneself, family, and possibly colleagues. In some cases, the prospect of 25 years in prison for violating the local version of the Official Secrets Act may have an inhibiting effect.

  5. Speaking of flat-out corruption, how about 107 instances of suggesting reviewers who are actually the author’s sock puppets and then reviewing your own article?

    • Anoneuoid says:

      Some of those comments show a rather strange understanding of “science”:

      The authority of science is based on the trust that is vested in the peer review.
      The hypothesis that peer reviewed science is true science, is falsified if even only 1 example can be found in which the theory isn’t true.

      • Yeah, Slashdot is not exactly the place to read comments on nuanced issues outside of direct applications of technology, but it’s where I first saw this scandal hit, so it gets the credit.

        • Jack PQ says:

          I have seen several cases of “fake reviewers” reported, and what I don’t understand is this: as an editor myself, if I asked for reviewer suggestions (I don’t), I would require institutional email addresses. I would never send a paper out for review to an address I couldn’t verify. I understand that many researchers prefer gmail, or yahoo, or msn, or whatever, to their institutional email, but even so, you would have the institutional email, say, redirected through your more convenient email.

          The point is, if editors pay attention, this should not occur. This does not mean it’s OK for authors to recommend reviewers (I’m on the fence about it), just that it doesn’t have to be such an obvious problem. A more subtle problem is that authors recommend their friends, who are not fake reviewers, but kind reviewers.

  6. jrkrideau says:

    Ah yes, I have seen reference to this paper before. It seems to have come under some severe criticism; one blogger’s comments are here. He supplies a broken link to a more detailed discussion of the survival-versus-mortality issue (not referencing this paper), which appears to be this one on the Respectful Insolence blog.

  7. Malcolm says:

    > This problem becomes a complete non-problem with transparency

    While transparency beats non-disclosure, I vigorously disagree that conflicts of interest are neutralised by disclosure. Imbalances of power, wealth, resources, influence, or publicity still can be massive problems.

    For example, when the entire literature in a particular area consists of studies supported by a vested interest, what can you do? This is surprisingly common in medicine. Methodological shortcomings are likely arguable, and if the alternative is no evidence at all, even careful, well-intentioned practitioners will just go with the biased studies.
