You won’t be able to forget this one: Alleged data manipulation in NIH-funded Alzheimer’s study

Nick Menzies writes:

I thought you might be interested in this case in our local news in Boston.

This is a case of alleged data manipulation as part of a grant proposal, with the (former) lead statistician as the whistleblower. It is a very large grant, so the stakes are high, both in terms of reputation and money.

It seems that the courts have sided with the alleged data manipulators over the whistleblower.

I assume this does not look good for the statistician. Separate from the issue of whether data manipulation (or analytic searching) occurred, it makes clear that calling our malfeasance comes with huge professional risk.

The original link no longer works but a search yields the record of the case here. In a news report, Michael Macagnone writes:

Kenneth Jones has alleged that the lead author of the study used to justify the grants, Dr. Ronald Killiany, falsified data by remeasuring certain MRI scans. . . .

It was Jones’s responsibility as chief statistician in the 2001 study—an examination of whether brain scans could determine who would contract Alzheimer’s disease—to verify the reliability of data . . . he alleges he was fired from his position on the study for questioning the use of the information. . . .

I know nothing at all about this case.

21 thoughts on “You won’t be able to forget this one: Alleged data manipulation in NIH-funded Alzheimer’s study”

  1. Oh my. Marilyn Albert is a very big name in the field. I would love to know how these claims are approached in courts – what counts as evidence? What happens if evidence you’d like to examine is missing?

    Personally, I view scientific misconduct the same way I view sexual misconduct. Reporting it is altruistic, in that if you’re lucky your report will benefit society, but it will almost certainly not benefit you and stands a large chance of harming you. Escape (i.e., a job search) + whisper network is a lot safer. On the whole, I don’t trust the justice system enough to risk using it.

    • Something that wasn’t clear from the blog post – the case itself is actually quite old, and even the most recent appeal dates to spring 2015, I believe. Too old, in fact, for it to have been covered by Retraction Watch, which was the second place I went looking for more details (after DuckDuckGo).

    • Erin,

      In a repeated game, in the game-theoretic sense, that is a dangerous attitude to have, let alone publicize.

      In these situations you typically want to announce a grim-trigger strategy. Not sure rumors cut it.

      But then again, your solution is to stop playing the game, turning it into a series of one-shot games where your outcomes are driven by luck.

      Some ugly trade-offs here, depending on the likelihood and frequency of misconduct. Surely someone has done a formal analysis.

  2. Is this a typo?

    “it makes clear that calling our malfeasance comes with huge professional risk.”

    Should it not be

    “it makes clear that calling out malfeasance comes with huge professional risk.”

  3. Some notes:
    **********
    Confusion between precision and accuracy:
    >”In 1997, Killiany and another researcher, Dr. Teresa Gomez–Isla, developed a “protocol” to predictably locate and outline the EC
    […]
    The comparison yielded an inter-rater reliability measure, or Pearson coefficient, of 0.96, representing a very close match and indicating that two raters could predictably trace the EC and obtain consistent measurements.”
    http://caselaw.findlaw.com/us-1st-circuit/1694461.html
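
    A toy illustration of that precision-versus-accuracy point (a minimal sketch with made-up numbers; nothing below comes from the study): two raters who apply the same flawed operational definition can agree with each other almost perfectly while both being far from the true volumes, so a high inter-rater Pearson coefficient demonstrates precision, not accuracy.

```python
# Toy sketch: high inter-rater reliability (precision) without accuracy.
# All numbers are invented for illustration; nothing is from the study.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n = 50
true_volume = rng.normal(1000, 150, n)        # hypothetical "true" EC volumes

shared_bias = 0.7 * true_volume               # both raters follow the same flawed protocol
rater_a = shared_bias + rng.normal(0, 20, n)  # small independent tracing noise
rater_b = shared_bias + rng.normal(0, 20, n)

r_ab, _ = pearsonr(rater_a, rater_b)          # inter-rater reliability
error = np.mean(rater_a - true_volume)        # systematic departure from the truth

print(f"inter-rater Pearson r: {r_ab:.2f}")   # near 1: looks very "reliable"
print(f"mean error vs. truth:  {error:.0f}")  # large systematic error: not accurate
```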

    Killiany does something that is totally reasonable; anyone scientifically minded would want the measurements to be as accurate as possible:
    >”As he encountered anomalies and learned more about the EC, he reviewed his prior measurements. When a prior measurement seemed inaccurate, Killiany “would remeasure the area and reapply the operational definition, based on [an] increasing amount of information about measuring the structure on MRI.”

    However, this procedure was apparently incompatible with the default nil null hypothesis they chose. Then when the statistical test correctly detected the deviation from the null hypothesis, it became obvious that the link between the statistical hypothesis and research hypothesis was tenuous:
    >”Jones raised those concerns in a March 2001 meeting with Albert and informed her that a statistically significant relationship between the volume of a participant’s EC and her clinical dementia rating only existed when Killiany’s second set of measurements were used. By contrast, if Killiany’s original measurements were substituted, the relationship disappeared. Without Killiany’s remeasurements, no statistically significant relationship was apparent from the data.”

    A bit earlier, we see the usual “find a statistically significant difference, then wildly leap to the conclusion that your favorite idea is correct”:
    >”Based on Killiany’s second set of measurements, the study concluded that the volume of a subject’s EC could predict with 93% certainty whether a previously “questionable” participant with mild memory problems would become a “converter” and eventually develop Alzheimer’s disease.”
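
    A minimal sketch of that leap, again with hypothetical numbers: a modest group difference can produce a tiny p-value at a reasonable sample size while the best achievable individual-level classification accuracy stays near 60%, nowhere near a “93% certainty” prediction.

```python
# Toy sketch: a "statistically significant" group difference does not imply
# strong individual-level prediction. Hypothetical numbers throughout.
import numpy as np
from scipy.stats import ttest_ind, norm

rng = np.random.default_rng(1)
n = 200
converters = rng.normal(-0.5, 1, n)  # standardized EC volume, effect size d = 0.5
stable     = rng.normal( 0.0, 1, n)

t, p = ttest_ind(converters, stable)
print(f"t = {t:.1f}, p = {p:.1e}")   # comfortably "significant"

# With two unit-variance normal groups separated by d, the best single-threshold
# classifier has accuracy Phi(d / 2), about 0.60 here, far from 93%.
print(f"best achievable accuracy: {norm.cdf(0.5 / 2):.2f}")
```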

    Inaccurate reporting of the methods:
    >”That article also reported the inter-rater reliability rating of 0.96.”
    ************************

    This is all standard behavior, totally unsurprising. I highly doubt these people really understood basic things like what hypothesis was being tested, the importance of independent replication to the scientific method (and thus the high priority on accurate methods sections), etc. They really don’t realize anything is wrong with what they do. It isn’t fraud; it is mass confusion.

  4. To clarify, it was not really “the courts” that have sided with the alleged data manipulators, but rather the jury. Which raises the very real possibility that the alleged data manipulators were innocent of wrongdoing (after all, a plaintiff can allege whatever he or she wants in a lawsuit. Plenty of such allegations turn out to be iffy, at best).

  5. I’m confused: If the NIH was defrauded (“He [Jones] further alleged that Dr. Albert and Dr. Killiany violated federal regulations (43 CFR 50.103(c)(3)) by making false statements in the NIH grant application.”), why isn’t this suit filed by a public prosecutor?

    Why are Jones’ attorneys fighting this? Is this a civil suit? Why?

    • Under the False Claims Act and a practice called “qui tam,” a private party (“relator”) can file suit in the name of (“ex rel”) the United States. The fact that the DOJ did not intervene and take over the case indicates that the government did not think it was worth litigating.

  6. Somehow the tone of this post bothers me and seems to be biased against the researchers. In particular:

    Nick writes “…it makes clear that calling our malfeasance comes with huge professional risk….”

    We don’t know if the claim is unfounded, i.e. whether the accuser is delusional or vindictive. If the claim is false, the accuser should suffer adverse consequences. Certainly the researchers have been harmed by being accused.

    We do know that the jury was not convinced by the evidence, and I don’t see evidence that the government was convinced enough to prosecute this case.

    • At least from what I read, it did not sound like fraud. If you read the linked description, you will see that the defense was that these errors were due to incompetence rather than intent to commit fraud:

      “Jones’s argument appears premised on the conclusion that the jury was required to believe his theory of the case that Killiany’s remeasurements constituted a knowing and purposeful manipulation of the data, and that Albert turned a blind eye to that problem.”
      http://caselaw.findlaw.com/us-1st-circuit/1694461.html

      What is described here is a story repeated over and over and over.
      1) Researchers design an experiment that involves taking non-iid data samples. In this case, they retrace some entorhinal cortices depending on what was seen in later data.
      2) Researchers test the null hypothesis that they collected iid data samples.
      3) Researchers make some ridiculous claim after a statistical test identifies that their null hypothesis is false (which it was by design). In this case, that would be the claim that “the volume of a subject’s EC could predict with 93% certainty whether a previously ‘questionable’ participant with mild memory problems would become a ‘converter’ and eventually develop Alzheimer’s disease.” (A toy simulation of this pattern is sketched below.)
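
      A toy simulation of that three-step pattern (hypothetical numbers and a deliberately crude remeasurement rule; this illustrates the statistical point only, not what anyone in the study actually did):

```python
# Toy simulation of the three-step pattern above. Purely hypothetical numbers;
# this illustrates the statistical point, not what anyone actually did.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(2)
n = 60
converter = rng.integers(0, 2, n).astype(bool)  # who later develops dementia
ec_volume = rng.normal(1000, 100, n)            # step 1: volumes unrelated to outcome

# Step 2: "remeasure" tracings that look anomalous in light of the outcome,
# nudging them toward what the hypothesis expects (smaller EC in converters).
remeasured = ec_volume.copy()
looks_wrong = (converter & (ec_volume > 1000)) | (~converter & (ec_volume < 1000))
remeasured[looks_wrong] += np.where(converter[looks_wrong], -80.0, 80.0)

# Step 3: test the null hypothesis of no group difference.
p_orig = ttest_ind(ec_volume[converter], ec_volume[~converter]).pvalue
p_new  = ttest_ind(remeasured[converter], remeasured[~converter]).pvalue
print(f"original measurements: p = {p_orig:.3f}")  # typically no effect here
print(f"after 'remeasurement': p = {p_new:.4f}")   # a spurious "effect" appears
```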

      I had another post on this that didn’t appear; it may be in the spam filter.

  7. This comment has no direct bearing on the fraud case but, still, I can’t resist.

    The link that Andrew says doesn’t work has been fixed:

    http://ahrp.org/harvard-to-be-tried-for-alzheimers-research-fraud/

    It goes to a page that describes the fraud case. At the top of the page is a carousel of names and faces. My first impression was that it was an honor roll of medical researchers. Then I saw Andrew Wakefield flash across the screen. Wakefield, of course, was the author of the fraudulent study linking the MMR vaccine with autism:

    http://www.bmj.com/content/342/bmj.c7452

    http://briandeer.com/mmr/lancet-summary.htm

    So I switched to thinking that the carousel was a rogues’ gallery of medical villains.

    But then I had a closer look and realized that my first impression had been correct! It is an honor roll:

    http://ahrp.org/category/honor-roll/

    The entry for Wakefield contains a long anti-vaccine rant:

    http://ahrp.org/andrew-wakefield-md/

    Of course, none of this means that Kenneth Jones is wrong. But the site reporting on the controversy has to be considered unreliable.

      • Neither do I. I definitely don’t want to smear Jones by association with this site. He can’t control who decides to support him.

        It was just such a weird thing when I noticed it. I felt I had to comment.

        • Mike:

          Yes, definitely. If this group is so off-base as to be endorsing Andrew Wakefield, they should be called on it. I did a web search, and this organization doesn’t seem to get a lot of traction; they seem to spend most of their effort writing letters to the editor or posts on zero-circulation blogs. Kind of like one of those astroturf P.R. organizations, but without the big industry money to support their efforts.

  8. I worked for Marilyn Albert in the late 1980s. She would routinely change other doctors’ dictation so it would say what she wanted. I do not know what happened in this case, but having worked for her, she is not above it.
