More on the PACE (chronic fatigue syndrome study) scandal

Last week we reported on the push to get the data released from that controversial PACE study on chronic fatigue syndrome.

Julie Rehmeyer points to a news article with background on the story:

Patients rapidly discovered serious scientific problems with the 2011 Lancet paper. Despite these errors, the study, known as the PACE trial, went on to inform recommendations from such influential bodies as the Centers for Disease Control and Prevention, the Mayo Clinic, and the British National Health Service. . . .

But just days before the new study was released, on Oct. 21, the San Francisco journalist David Tuller published a major investigation exposing deep methodological flaws in the entire PACE trial that put its validity in serious doubt.

And this time, the new study has been met with intense criticism from outside the world of patients and advocates. On Friday, six researchers, including prominent scientists such as virologist Vincent Racaniello of Columbia University and geneticist Ronald Davis of Stanford University, released an open letter to the Lancet demanding an independent review of the PACE trial.

“The whole study is unbelievably amateur,” says Jonathan Edwards, a biomedical researcher at University College London who signed the letter. “The trial is useless.”

Rehmeyer reports on a lag between the scientific community and medical practice:

The PACE trial has exerted a strong influence on American physicians: If you ask your doctor about CFS, odds are good you’ll hear that cognitive behavioral therapy (the flavor of psychotherapy used in the trial) and exercise are the only proven treatments for CFS.

The American scientific research community, on the other hand, has rejected the psychiatric model that PACE epitomizes and is instead looking for physiological explanations for the disease.

And then the data story:

Starting in 2011, patients analyzing the study filed Freedom of Information Act requests to learn what the trial’s results would have been under the original protocol. Those were denied along with many other requests about the trial, some on the grounds that the requests were “vexatious.” The investigators said they considered the requests to be harassment.

And a garden of forking paths:

The study participants hadn’t significantly improved on any of the team’s chosen objective measures: They weren’t able to get back to work or get off welfare, they didn’t get more fit, and their ability to walk barely improved. Though the PACE researchers had chosen these measures at the start of the experiment, once they’d analyzed their data, they dismissed them as irrelevant or not objective after all. In addition, the patients researching the study found statistical errors, actions that might have pumped up the subjective ratings, measurement problems that allowed participants to deteriorate without being detected, conflicts of interest, and more.
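
To see why this kind of post hoc switching among outcome measures matters, here's a minimal simulation sketch. All numbers are invented for illustration (this is not the PACE data): with several noisy outcomes and a true treatment effect of exactly zero, reporting whichever outcome happens to look best inflates the false-positive rate far beyond the nominal 5%.

```python
# Toy "garden of forking paths" simulation: a null treatment, several
# outcome measures, and a post hoc choice of which outcome to report.
# All numbers are invented for illustration; nothing here is PACE data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_sims, n_per_arm, n_outcomes = 5000, 150, 6

false_positives = 0
for _ in range(n_sims):
    # Both arms are drawn from the same distribution on every outcome,
    # so the true treatment effect is zero across the board.
    treatment = rng.normal(0, 1, size=(n_per_arm, n_outcomes))
    control = rng.normal(0, 1, size=(n_per_arm, n_outcomes))
    pvals = [stats.ttest_ind(treatment[:, j], control[:, j]).pvalue
             for j in range(n_outcomes)]
    # The forking path: report only the most favorable outcome.
    if min(pvals) < 0.05:
        false_positives += 1

print("nominal alpha: 0.05")
print(f"false-positive rate with post hoc outcome choice: "
      f"{false_positives / n_sims:.2f}")  # roughly 0.26 with 6 outcomes
```

With six independent outcomes, the chance of at least one nominally significant result under the null is 1 − 0.95^6 ≈ 0.26, which is what the simulation recovers. Dropping prespecified objective measures after seeing the data opens exactly this kind of path.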

And here’s a quote from one of my colleagues:

“The Lancet needs to stop circling the wagons and be open,” says Bruce Levin, a biostatistician at Columbia University who signed the open letter. “One of the tenets of good science is transparency.”

Indeed.

44 thoughts on “More on the PACE (chronic fatigue syndrome study) scandal”

  1. Thank you, Andrew, for raising awareness of this issue. Professor Edwards is spot on when he says that the whole study is amateur. It is a scientific abomination that, instead of being mocked, has been promoted as a thing of beauty. It initially went unchallenged only because of the lack of scientific scrutiny of much ME/CFS research. Shoddy science must be challenged wherever it occurs, and hiding away data to avoid this happening is not acceptable.

    • Clark:

      Another issue, I fear, is the journal’s reputation. Journalists and policymakers are trained to believe things that appear in top journals. On the other hand, reputations change. Psychological Science used to be considered a top journal, and maybe it will be considered a top journal again, but right now it’s notorious for junk science. The American Sociological Review is considered the top journal in that field, but my own experience is that they refused to run a correction. Why? Because they don’t run corrections. That doesn’t give me so much confidence in the papers in that journal that I haven’t happened to look at. PPNAS has the himmicanes and hurricanes and other such studies. As for the Lancet . . . there’s this study and then there was that Iraq deaths paper from a few years back. These are the two papers that first come to mind when I hear “The Lancet.” Not such good news for the journal’s reputation.

      • That’s hardly specific to journals. Without using reputations it would be hard to make any decisions: e.g., what college to join, which professor to appoint to a study, which supplier to select, etc.

        Even the fact that reputations aren’t static is hardly unique to journals.

        And finally, two bad papers shouldn’t rationally lead you to adjust your opinion of the Lancet so drastically. If the Lancet is so crappy, what’s good? I think you are unduly prejudiced against some journals: the Lancet, Nature, Science, etc.

        There’s enough bad science to go around. The Lancet and its peers hardly have a monopoly on it. If there’s a journal systematically publishing far worse science than the average for academic publishing, it’s not the Lancet.

        • Rahul:

          My point is not that Lancet papers are worse than those in other journals. My concern is that Lancet papers are taken more seriously than they should be. Publishing a paper in the Lancet is fine. But then if the paper has problems, it has problems. At that point the authors shouldn’t try to hide behind the Lancet’s reputation, which seems to be what is happening. And, yes, if that happens enough, it should degrade the journal’s reputation. If a journal is not willing to rectify errors, that’s a problem no matter what the journal is.

        • My point is that there always will, and should, exist a reputational gradient across journals. People will take some more seriously than others. Hopefully, reputations change over time.

          About the rest of your comment, I agree.

      • I am under the impression that Psych Science is looked down upon by Andrew (with good reason), but not that it is considered a repository of junk science by psychologists in general. Is this incorrect? Is Psych Science widely considered notorious for junk science?

        • Shravan:

          I think Psychological Science is considered to have a mixture of good stuff and junk science. For a while it seemed that more than half the papers published in that journal were junk (see, for example, slide 15 here), but maybe things have improved since then. Even at its worst, though, Psychological Science was publishing some good stuff. With any journal, the occasional bad paper will sneak in, but for a while there Psych Science seemed to be out of control.

  2. The authors defined “the normal range” for physical function and fatigue to be roughly the same as that of patients with severe heart failure, without making any effort to prevent misunderstandings. These papers are full of nonsense such as this. I don’t know about the mediation analysis paper, but it will probably continue the trend.
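
    For concreteness, here is a minimal arithmetic sketch of that threshold problem, using the figures for the SF-36 physical function subscale (scored 0–100, higher = better functioning) that have been reported in published critiques of the trial. The numbers below are taken from those critiques, not computed from the raw data:

    ```python
    # "Normal range" arithmetic as described in published critiques of the
    # PACE trial; figures are as reported by critics, not derived here
    # from the trial's raw data.
    pop_mean = 84.0  # reported mean SF-36 physical function score in a
    pop_sd = 24.0    # population sample that included elderly/ill people
    normal_floor = pop_mean - pop_sd  # "normal range" = mean minus 1 SD
    entry_ceiling = 65.0              # trial ENTRY required a score <= 65

    print(f"'normal range' begins at {normal_floor:.0f}")  # 60
    print(f"trial entry required a score of {entry_ceiling:.0f} or lower")
    # So a participant could be disabled enough to enter the trial at 65,
    # decline to 60 during treatment, and still be counted as within the
    # "normal range" for physical function.
    print("entry criterion overlaps 'normal range':",
          entry_ceiling >= normal_floor)
    ```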

    My impression of all this is that the authors “optimized” their studies to persuade the casual reader, not shying away from violating methodological correctness and common sense, and simply hoping that nobody would notice. It’s disturbing that they have gotten as far as they have.

  3. Just one more comment: Journalist David Tuller did an amazing, in-depth investigation into the PACE trial, which is what finally (after nearly five years!) has brought the problems with the trial to general attention. It’s a devastating critique: http://www.virology.ws/2015/10/21/trial-by-error-i/

    His reporting on this has been an enormous act of service and generosity, and it looks like it may result in reduced suffering for 13 million chronic fatigue syndrome patients around the world.

      • There are some promising recent studies suggesting it may be an autoimmune disease for at least a subset of patients (rituximab trials, antibody findings, etc.).

        If you are interested in finding out what the state of the research is I would suggest reading the recent Institute of Medicine report. They did a very thorough job of reviewing and summarizing 9000 research papers on ME & CFS.
        (Their conclusions were in stark opposition to those of the PACE investigators.)

        The reality as to causation is that, as Ronald Davis put it, this is “the last major disease about which we know next to nothing.”
        This is due to a severe paucity of funding; the introduction in the eighties of a new name that trivialized the disease and put the focus on only one symptom; the introduction at the same time of various overly broad diagnostic criteria; and, last but definitely not least, the worldwide effect the PACE trial and its promotion have had on perceptions of the disease, convincing much of the world that it is partially if not wholly behavioural.
        E.g., Ian Lipkin is on record saying he was turned down for NIH grants twice by a reviewer who stated that the disease was psychosomatic.

        • The diagnostic criteria sounded extremely fuzzy to me:

          e.g.

          “A patient with ME/CFS will meet the criteria for fatigue, post-exertional malaise and/or fatigue, sleep dysfunction, and pain; have two or more neurological/cognitive manifestations and one or more symptoms from two of the categories of autonomic, neuroendocrine, and immune manifestations; and the illness persists for at least 6 months”

          What is the unifying feature here that makes it a distinct disease?

        • ME/CFS is a syndrome, which, as I understand it, means that it is a set of correlated symptoms not necessarily caused by some particular “distinct” disease. The unifying feature is being tired all the time for no identifiable reason; it is a diagnosis of exclusion in most cases.

        • “…being tired all the time…” is only one of the diagnostic symptoms. According to the Institute of Medicine’s comprehensive 2015 report, post-exertional malaise (PEM) is a cardinal symptom of ME/CFS and is required for making a diagnosis of ME/CFS (myalgic encephalomyelitis/chronic fatigue syndrome):

          http://iom.nationalacademies.org/~/media/Files/Report%20Files/2015/MECFS/MECFS_DiagnosticAlgorithm?_ga=1.26080125.1732285569.1451231697

          The PACE trial was flawed from the start because the participants were chosen using the Oxford criteria, which require only chronic fatigue. But chronic fatigue is a symptom of many illnesses besides ME/CFS. Because of this, many of the participants in the PACE trial did not in fact have ME/CFS.

        • The idea that the unifying feature of chronic fatigue syndrome is being tired all the time is a myth that originated in the misleading name created in the 1980s. The distinguishing and unifying feature is post-exertional malaise. PEM is the exacerbation of primarily flu-like symptoms, including flu-like exhaustion, after normal exertion. Even exertion as mild as taking a shower or walking across a room can generate severe flu-like symptoms in more severely affected patients. Mental exertion (the brain uses a large amount of energy) can also cause symptom exacerbation. The unique feature of the disease is that even mild exertion causes these symptoms, which suggests an energy-production disorder of unknown cause. This can be measured with a two-day cardiopulmonary exercise test (CPET). However, this test is potentially damaging for many patients and is therefore not recommended for routine diagnosis.

          The IOM report suggests diagnosis should require post-exertional malaise (not simply unexplained fatigue), cognitive dysfunction, and orthostatic intolerance. That is not “being fatigued all the time”. The list of symptoms common in CFS is much larger and can be found in the full IOM report or the International Consensus Criteria for Myalgic Encephalomyelitis.

      • I ask two questions, and it is never easy to get the answers (probably because they are “no”):

        1) Have there been any independent direct replication studies that report similar results? I.e., has the existence of a stable phenomenon been demonstrated? Are the experimental/observational conditions understood well enough to communicate them effectively?
        2) Have any precise a priori predictions been deduced from a theory and compared to new data? I.e., predictions regarding the form or size of the effect, given that the theory is correct.

        From this I gather very little is understood about human health.

        • Can you cite the specific evidence that has convinced you of one of these achievements? I mean the evidence that, if shown to be flawed, would make you begin doubting the claim.

          The devil is in the details; I would investigate at least the following: Did people live for decades with HIV before? Did chicken pox rates begin decreasing before the vaccine? Did the method of leukemia diagnosis change (e.g., it used to be by symptoms, now it is some genetic test)? Is some preventive intervention masking these disorders, e.g., by killing patients before they are diagnosed?

        • Anon:

          HIV is relatively new so I can’t be sure about this one. But the leukemia cure and the chicken pox, I’m pretty sure these are real. I mean, I don’t have the evidence for the moon landing or, for that matter, the existence of the islands of Fiji either, but I’m pretty sure they’re real too. I’ve seen arguments that modern medicine isn’t worth the money, or that most medical research doesn’t help people, and that might be true, but I don’t see any sense in arguing that there’s been no progress in medicine.

        • E.g., the maternal mortality rate: circa 1900, 850 deaths per 100,000 births.

          Circa 2005: fewer than 10 deaths per 100,000.

          Roughly a 100x improvement?

        • We would have to see exactly where these numbers come from. How accurate were birth rates in 1900? Has the definition of “live birth” changed? What about “maternal mortality”? For example, a quick search led me to this; just replace infant mortality with maternal:

          “The infant mortality rate is a ratio of all deaths in year 1 of life to the total number of live births as defined above. Completeness of birth registration is thus crucial to accuracy. In the United States today, almost all births take place in hospitals, making registration a relatively straightforward routine; early in the century, however, many, if not most, births took place in homes and were not officially registered. In those years, a new health officer in a rural area promptly learned that the quickest way to reduce his jurisdiction’s infant mortality figures was merely to increase birth registration!”

          Myron E. Wegman, “Infant Mortality in the 20th Century: Dramatic but Uneven Progress,” J. Nutr. 131(2):401S–408S, February 2001.
          http://jn.nutrition.org/content/131/2/401S.short

          My point is that your post doesn’t convince me of anything (not that I think it is wrong). We need to inspect the evidence. Unfortunately, I have learned it is a huge mistake to assume this has been done competently.

    • In 2011, soon after the PACE trial results were published, Professor Malcolm Hooper wrote a detailed letter of complaint to the Lancet pointing out the glaring flaws of the trial (including changes to the entry criteria, unreported data and dropped measures, and adverse events/reactions and serious deterioration).

      http://www.meactionuk.org.uk/COMPLAINT-to-Lancet-re-PACE.htm
      http://www.meactionuk.org.uk/Comments-on-PDW-letter-re-PACE.htm

      Many thanks to David Tuller, Julie Rehmeyer, and James Coyne for continuing the fight to get the data released so that an accurate analysis of the PACE trial can finally be made.

  4. Another person who has been critiquing the PACE study is Keith Laws (http://keithsneuroblog.blogspot.com/2015/11/song-for-siren.html), who earlier critiqued a meta-analysis of studies of CBT as a treatment for psychosis; see http://bjp.rcpsych.org/content/204/1/20.short. A blog entry (http://keithsneuroblog.blogspot.com/2015/05/science-politics-of-cbt-for-psychosis.html) gives a brief discussion and a link to an address to the British Psychological Society detailing, among other things, some of the difficulties in getting the critique published.

  5. >”Starting in 2011, patients analyzing the study filed Freedom of Information Act requests to learn what the trial’s results would have been under the original protocol. Those were denied along with many other requests about the trial, some on the grounds that the requests were “vexatious.” The investigators said they considered the requests to be harassment.”

    Reminds me of “replication bullies”; now we have “methodology harassers” and “open data vexationiators.” If it is not possible to check different analyses on these data, then the study should be ignored on those grounds: clearly there is something wrong with the data, or they have gone missing, etc.

    • On a related note, I tried to get access to data for this PNAS paper:

      “Resting-state functional connectivity predicts longitudinal change in autistic traits and adaptive functioning in autism”

      and was directed to a cool website (https://central.xnat.org/) to get access to the data. After filling out a form, I got an email saying:

      “We regret to inform you that your request to access the Resting-state functional connectivity predicts longitudinal change in autistic traits and adaptive functioning in autism project has been denied. Please consult the project manager for additional details at [email address of one of the authors].

      Proceed to the site to get started reviewing/using the data.”

      One of the authors did email me and said he needs to get permission from the participants to share the data and he is in the process of obtaining it. I am to contact him again in a few weeks. So I may yet be able to get it.

      I was surprised that experimental data are not completely anonymized and readily available, especially given that the reader is directed to a website for accessing the data.

      • The only solution for this is for journals and editors to put their foot down: no publication until the authors deposit a copy of the data with the journal.

        If there are permissions to be obtained, get them *before* the paper is accepted for publication.

        If you are a referee on an article, insist on a copy of the data. Insist on the data being uploaded to a repository even if you are not going to use it during reviewing. Refuse to review articles whose authors won’t play fair.

        But is there the will? Do journals and editors care? Do academics themselves care (and they are the editors and referees, after all)?

  6. I am rapidly coming to the conclusion that this has become a crisis. As the amount of data and the sophistication of the techniques increase exponentially, it is time to establish standards of open access to data and insist on them. I am not optimistic, but I think it is time to get more insistent (for those of us who feel this way). There are plenty of “open access” journals and policies, but they are often not followed. (I haven’t done a careful study, but most of the journals I frequent that have data-availability policies do not, in fact, have the data available; it is listed as “proprietary.”) At this point, I wonder if any study is capable of being replicated.

    Two recent attempts: one involved a potentially important paper about health care prices issued by the Health Care Cost Institute, a nonprofit with a mission to promote independent research. The paper in question raised some red flags for me. My efforts to contact the author, and eventually the Institute, resulted in a message that “at this time HCCI does not license its claims database either to commercial entities or to university researchers for one-off projects.”

    The other recent paper (from NBER) has data about negotiated automobile sale prices based on a very large database from the 2002–06 period. I have concerns about this paper as well; in an email, one of the authors somewhat clarified an issue the paper had misreported. I also asked about access to the data but was told that access is strictly prohibited (even for further research by the current authors). Of course, it is proprietary data, but, really, what secrets are revealed by 10-year-old data on automobile sale prices?

    The list goes on and on: there is almost no study that appears to be capable of replication. The problems are access to the data, incorrect analysis of the data, and misreporting of the results. I may easily be part of the second type of error; I am quite sure that my analysis skills are imperfect (to put it mildly). But I no longer have patience for the lack of access to data.

    The solution is twofold. Neither part is easy, but I believe these two changes would accomplish what is needed. First, decision makers (elected officials, regulators, etc.) should announce that studies that do not release their data will be accorded the appropriate consideration (i.e., close to none). Second, academics should start counting citations rather than publications. Releasing an interesting dataset (even if laboriously collected) should result in many citations. From my experience, citations are a far better measure of good research than the number of publications (yes, there are many citations of famous error-prone work, but I believe this usually reflects someone who at least worked on an important topic). This would prevent the hoarding of data to pad a resume and would promote data release as a way of increasing visibility.

    I am not optimistic about any of these changes, but I fear that until these two things are changed, conditions are not likely to improve.

      • I should probably add that the PACE trial is part of the “research evidence of benefit” and, since it was a very large, relatively recent trial, it presumably had a significant influence.

        As far as I can tell, that’s one of the central points of the patient community. If the PACE trial actually wasn’t “moderately effective”, as claimed, then the research evidence isn’t so clear after all.

    • No physician specialty has been willing to take responsibility for CFS. Patients are left in limbo without physicians willing or able to treat them. Some specialists treat individual symptoms of the disease without treating the disease as a whole. Those symptoms include, but are not limited to, thyroid dysfunction, sleep disorders, dysautonomia, immune abnormalities, and pain. Physicians informally specializing in medical treatment of CFS come from a variety of fields including internal medicine, infectious disease, and immunology.

      Immune abnormalities are common in patients who are tested for them, which is a small fraction of the patient population. However, since the abnormalities are not those well known in other diseases and therefore do not have established tests or treatment protocols, few immunologists are willing to treat these patients. Much more research is needed. Recent studies of the effectiveness of rituximab as a treatment for this condition suggest that most or all CFS patients are suffering from an unknown autoimmune disease or an infection residing in B cells.

    • There is no specialty for CFS because the disease mechanism isn’t understood, and there are likely multiple broadly similar illnesses behind the CFS label. There are no tests, and the research is severely underfunded (fortunately, this has been changing). It is not an exaggeration to say that we know next to nothing.

      In some parts of psychiatry there seems to be the thinking that if the biological basis of a disease is unknown, then it must surely mean that there is no biological basis and that the disease must therefore be psychosomatic (i.e., caused by behaviour and thoughts). This has led some psychiatrists to claim CFS as their turf and to try to cure it by correcting the unhealthy thoughts and behaviours with cognitive behaviour therapy and exercise. That’s how we got the PACE trial and similar studies.

      This line of thought is logically and scientifically flawed: logically, because absence of evidence is not evidence of absence; and scientifically, because a variety of abnormalities have been documented that are incompatible with the psychosomatic explanation (see the IOM report). Invoking psychosomatic explanations in the context of documented abnormalities is magical thinking with no scientific basis, similar to how personality traits were once claimed to cause cancer.

      The biomedical approach to CFS views it mainly as a problem of a dysfunctional immune system. It seems likely that CFS will be divided into at least a persistent-infection subset and an autoimmunity subset.

      1. Beyond Myalgic Encephalomyelitis/Chronic Fatigue Syndrome: Redefining an Illness. http://iom.nationalacademies.org/Reports/2015/ME-CFS.aspx

      • I’m a patient, and I wasn’t aware of the open data movement until James Coyne got involved in PACE.

        I believe that PACE and the other studies by the same group of psychiatrists are an excellent example of how bad science can harm patients. Patients consistently report being made worse by the CBT and GET approach advocated by the PACE authors (1). The patient community as a whole has also been harmed. That after 30 years we still know next to nothing about CFS is the result of a severe lack of funding, which in turn is a direct result of the view promoted by these psychiatrists that patients are just thinking incorrectly and resting too much, and could get better if only they wanted to.

        There is also the economic aspect of wasting money on CBT and GET, which increasingly look no more effective than a placebo.

        1. http://www.meassociation.org.uk/2015/05/23959/

        • If science were functioning properly in this area, then researchers would acknowledge that CBT and GET don’t work and move on to try other things. Yet these people have been promoting this approach since at least the early ’90s. And after all this time they came out with the PACE trial, and it turns out that the emperor has no clothes. The PACE trial cost £5 million, by the way, and the authors allegedly spent another £750k on legal fees to prevent anyone from seeing the data. This money could have been spent on useful projects such as the ME/CFS Severely Ill Big Data Study (1).

          1. http://www.openmedicinefoundation.org/mecfs-severely-ill-big-data-study/
