I read this front-page New York Times article and was immediately suspicious. Here’s the story (from reporter Gina Kolata):
Could exercise actually be bad for some healthy people? A well-known group of researchers, including one who helped write the scientific paper justifying national guidelines that promote exercise for all, say the answer may be a qualified yes.
By analyzing data from six rigorous exercise studies involving 1,687 people, the group found that about 10 percent actually got worse on at least one of the measures related to heart disease: blood pressure and levels of insulin, HDL cholesterol or triglycerides. About 7 percent got worse on at least two measures. And the researchers say they do not know why.
“It is bizarre,” said Claude Bouchard, lead author of the paper, published on Wednesday in the journal PLoS One . . .
Dr. Michael Lauer, director of the Division of Cardiovascular Sciences at the National Heart, Lung, and Blood Institute, the lead federal research institute on heart disease and strokes, was among the experts not involved in the provocative study who applauded it. “It is an interesting and well-done study,” he said.
What made me suspicious? Two things. First, I didn’t see why the researcher described it as “bizarre” that some people could get less healthy under an exercise regimen. Each person is an individual, and I would not be surprised at all to learn that a treatment that is effective for most people can hurt others. Once you accept the idea of a varying treatment effect, it’s natural enough to think that the effect could be negative for some people.
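To make that point concrete, here's a minimal simulation (my own illustration, with made-up numbers, not anything from the paper): if individual treatment effects are normally distributed around a beneficial mean, some fraction of true effects will still point the wrong way.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical numbers for illustration only: suppose exercise lowers
# systolic blood pressure by 5 mm Hg on average, but the true effect
# varies across individuals with a standard deviation of 8 mm Hg.
mean_effect, sd_effect = -5.0, 8.0
effects = rng.normal(mean_effect, sd_effect, size=100_000)

# Fraction of people whose true effect goes in the adverse direction
frac_adverse = np.mean(effects > 0)
print(f"share with a harmful true effect: {frac_adverse:.1%}")
```

With these invented numbers, roughly a quarter of the true effects are adverse even though the average effect is clearly beneficial. The exact share depends entirely on the assumed mean and spread; the point is only that a positive average effect and negative individual effects coexist quite naturally.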
The other thing that bugged me was this:
Dr. Bouchard stumbled upon the adverse exercise effects when he looked at data from his own study that examined genetics and responses to exercise. He noticed that about 8 percent seemed to be getting worse on at least one measure of heart disease risk.
But couldn’t they have been getting worse if they’d received the control instead? We all know that a simple before-after comparison doesn’t give you a causal effect (except under the strong and implausible assumption that there would be zero change under the control).
The news article continues:
Some experts, like Dr. Benjamin Levine, a cardiologist and professor of exercise sciences at the University of Texas Southwestern Medical Center, asked whether the adverse responses represented just random fluctuations in heart risk measures. Would the same proportion of people who did not exercise also get worse over the same periods of time? Or what about seasonal variations in things like cholesterol? Maybe the adverse effects just reflected the time of year when people entered the study.
But the investigators examined those hypotheses and found that they did not hold up.
Hmmm . . . now I’m curious. How did the investigators find that those claims “did not hold up”? I followed the link to the paper, but I didn’t find the promised explanation. Here’s what they had:
A fundamental question is whether there are individuals who experience one or several adverse responses (ARs) in terms of exercise-induced changes in common risk factors. . . . Data on a maximum of 1687 adults from six studies were available for analysis. . . . For the four traits studied, some subjects experienced changes in an opposite, unfavorable direction compared to the expected beneficial effects. . . . we have conservatively defined an AR as a response beyond 2×TE in a direction indicating a worsening of the risk factor. For the four traits in the present study, twice the value of TE ["the technical error (TE), defined as the within-subject standard deviation as derived from repeated measures"] would mean that ARs would be reached if the exercise training-induced increases are ≥10 mm Hg for SBP, ≥0.42 mmol/L for plasma TG, and ≥24 pmol/L for plasma FI or if there is a decrease of ≤0.12 mmol/L for HDL-C. . . .
OK, so they’re defining an adverse outcome as a before-after change of at least twice the within-subject standard deviation in the unfavorable direction. This is not what I would do, but let’s go with it. The question remains: can’t such a decline occur, even in the absence of an exercise regimen? What makes the researchers so sure that these declines are attributable to the treatment, rather than that these were people who were going to have problems in any case?
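As a back-of-the-envelope check on that worry, here's a quick simulation of the pure-noise case (my own, not from the paper, and with simplifying assumptions): if the before and after measurements each carry independent noise with standard deviation TE and there is no true change at all, the difference has standard deviation √2·TE, so a “beyond 2×TE” adverse response happens by chance with probability Φ(−√2), about 8% per trait.

```python
import numpy as np

rng = np.random.default_rng(1)
n, n_traits = 100_000, 4  # subjects; risk-factor traits (SBP, TG, FI, HDL-C)

# Null model: no real change at all; before and after are each measured
# with independent noise of standard deviation TE.
TE = 1.0
before = rng.normal(0, TE, size=(n, n_traits))
after = rng.normal(0, TE, size=(n, n_traits))
change = after - before  # sd = sqrt(2) * TE under independence

# "Adverse response": change beyond 2*TE in the unfavorable direction.
adverse = change > 2 * TE

print(f"per-trait adverse rate: {adverse[:, 0].mean():.1%}")  # ~ Phi(-sqrt(2)) ≈ 7.9%
print(f"adverse on >= 1 of 4 traits: {(adverse.sum(axis=1) >= 1).mean():.1%}")
print(f"adverse on >= 2 of 4 traits: {(adverse.sum(axis=1) >= 2).mean():.1%}")
```

Real risk factors are correlated across traits and there is real biological change over time, so these independence numbers aren’t directly comparable to the paper’s 10% and 7%; the simulation only shows that measurement noise alone, with no treatment effect whatsoever, manufactures “adverse responders” under this definition.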
I don’t see it. Here’s all that I could find:
The percentages of adverse responders for each trait for each study are depicted in Figure 2. It is remarkable that such cases were found in each study, even though the age and health status of the subjects were widely divergent and the exercise programs were quite heterogeneous.
I don’t see why this is so remarkable. As noted above, can’t some people just be getting better and some people getting worse?
My problems with the scientific paper and the news article are:
1. They make a big deal of the idea that exercise may increase heart risk, but it seems uncontroversial to me that an activity that helps most people could be harmful to some.
2. All they seem to measure are before-after changes, so how can they be so sure about attributing causality?
Am I missing something? (This is not a rhetorical question!)
This is not my area of research, and there could well be something crucial that I didn’t notice. If I am right that the study is hopelessly flawed, the question then arises as to how the expert from the National Heart, Lung, and Blood Institute got fooled. It’s not such a surprise for a statistically flawed article to appear in a scientific journal (that happens all the time), but I’d expect better from the New York Times. Not because NYT reporters know more than journal referees, but because reporters call other experts to get their take on it.
Given all this, I’ll reluctantly assign a high probability to the hypothesis that I’m missing something important here. Perhaps someone out there could help clarify the situation?