Does Benadryl make you senile? Challenges in research communication

Mark Tuttle points to a post, “Common anticholinergic drugs like Benadryl linked to increased dementia risk” by Beverly Merz, Executive Editor, Harvard Women’s Health Watch. Merz writes:

In a report published in JAMA Internal Medicine, researchers offer compelling evidence of a link between long-term use of anticholinergic medications like Benadryl and dementia. . . .

A team led by Shelley Gray, a pharmacist at the University of Washington’s School of Pharmacy, tracked nearly 3,500 men and women ages 65 and older who took part in Adult Changes in Thought (ACT), a long-term study conducted by the University of Washington and Group Health, a Seattle healthcare system. They used Group Health’s pharmacy records to determine all the drugs, both prescription and over-the-counter, that each participant took in the 10 years before starting the study. Participants’ health was tracked for an average of seven years. During that time, 800 of the volunteers developed dementia. When the researchers examined the use of anticholinergic drugs, they found that people who used these drugs were more likely to have developed dementia than those who didn’t use them. Moreover, dementia risk increased along with the cumulative dose. Taking an anticholinergic for the equivalent of three years or more was associated with a 54% higher dementia risk than taking the same dose for three months or less. . . .

Scary. But then I scroll down, and here’s the very first comment, from Joe (no last name):

Took a look at the study
The difference between the people in the non-user and heavy-user groups is massive: 3x higher on EVERY risk factor (stroke, obesity, etc.). Odd this would get so much traction with the press. Borderline irresponsible of Harvard to publish this on their blog. Nothing here even hints at causality

So whassup? I can click too . . . so let’s see what the study says.

They used Cox proportional hazards regression, adjusting for a bunch of background variables:

[Screenshot: the study’s table of covariates adjusted for in the model]

They excluded people where any of these covariates were missing.
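In practice a fit like this would be done with R’s survival package or Python’s lifelines; purely as a sketch of what the model is doing (on simulated data, nothing to do with the study’s), here is a minimal Cox partial-likelihood fit for a single binary exposure:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
exposed = rng.random(n) < 0.5
hazard = np.where(exposed, 2.0, 1.0)        # true hazard ratio of 2 for the exposed
t_event = rng.exponential(1.0 / hazard)     # exponential event times
t_cens = rng.exponential(2.0, n)            # independent censoring
time = np.minimum(t_event, t_cens)
event = t_event <= t_cens

order = np.argsort(-time)                   # sort by descending follow-up time
x = exposed[order].astype(float)
ev = event[order]

def neg_log_partial_lik(beta):
    lp = beta * x                           # linear predictor
    log_risk = np.logaddexp.accumulate(lp)  # log of sum(exp(lp)) over each risk set
    return -np.sum((lp - log_risk)[ev])     # sum over subjects with an observed event

# crude grid search instead of a proper optimizer, to keep this dependency-free
grid = np.linspace(-2.0, 2.0, 401)
beta_hat = grid[np.argmin([neg_log_partial_lik(b) for b in grid])]
print(np.exp(beta_hat))                     # estimated hazard ratio, near the true 2
```

With covariates, the linear predictor just gains more terms, which is why everything rides on the regression model being roughly right.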

And here are their results:

[Screenshot: the study’s main results table of adjusted hazard ratios by cumulative anticholinergic exposure]

Seems pretty clear. Although I guess they’re relying pretty heavily on their regression model. Maybe it would make sense to clean the data first by doing some matching so that you have treatment and control groups that are more similar, before running the regression.
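For example (a toy sketch with one invented confounder, nothing to do with the study’s actual data): match each exposed subject to the nearest unexposed subject on the confounder before comparing outcomes, and the confounded gap largely disappears:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4000
age = rng.normal(75, 6, n)                                    # invented confounder
treated = rng.random(n) < 1 / (1 + np.exp(-(age - 75) / 3))   # older people more often exposed
p_dem = np.clip(0.05 + 0.015 * (age - 65), 0, 1)              # dementia risk rises with age only
dementia = rng.random(n) < p_dem                              # the exposure itself does nothing here

# naive comparison is confounded by age
naive = dementia[treated].mean() - dementia[~treated].mean()

# 1:1 nearest-neighbor matching (with replacement) on age
t_idx = np.where(treated)[0]
c_idx = np.where(~treated)[0]
matched = c_idx[[np.argmin(np.abs(age[c_idx] - age[i])) for i in t_idx]]
adj = dementia[t_idx].mean() - dementia[matched].mean()
print(naive, adj)       # naive gap is sizable; matched gap is near zero
```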

Anyway, this all indicates some of the challenges of statistical communication.

For more, see this article by Natalie Smith with the provocative title, “Clinical Misinformation: The Case of Benadryl Causing Dementia,” and this article by Cynthia Fox with the opposite spin: “Strong Link Found Between Dementia, Common Anticholinergic Drugs.”

Tuttle writes:

For all the reasons this article speculates on, this could be true – these medications cause dementia.

But, as you know only too well there are so many confounding variables here – the simple one is that currently unknown precursors of dementia cause people to take these drugs.

I don’t really know what to say here. On one hand, yes, lots of potential confounders, also the usual issues of statistical uncertainties, garden of forking paths, etc. On the other hand, it does seem valuable for researchers to find out what is currently happening. The whole thing is a challenge, especially given people’s inclination to base their views on N=1 anecdotal evidence.

19 thoughts on “Does Benadryl make you senile? Challenges in research communication”

  1. A rough but good way to get a sense of the reliability of results like this is to repeat the analysis on lots of negative controls. That is, pick a bunch of drugs that you’re pretty sure don’t cause dementia and see how frequently this methodology says they do. Then pick a bunch of outcomes you’re pretty sure Benadryl does not cause and see how frequently this methodology says Benadryl does cause them. The OHDSI group has done this for lots of standard epidemiological methods and lots of drug outcome pairs and the results were pretty discouraging. A study like this that picks a bunch of confounders to control for is certainly evidence for a causal relationship, but weak evidence.
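As an illustration of the negative-control idea (a made-up toy simulation, not OHDSI’s actual procedure): give a hundred “drugs” zero true effect, let unmeasured frailty drive both drug use and dementia, and count how often a crude analysis flags an association:

```python
import numpy as np

rng = np.random.default_rng(2)
n, n_drugs = 5000, 100
frail = rng.random(n) < 0.3                      # unmeasured poor health
dementia = rng.random(n) < np.where(frail, 0.25, 0.10)

flagged = 0
for _ in range(n_drugs):
    # each "drug" has NO effect on dementia, but frail people take it more often
    use = rng.random(n) < np.where(frail, 0.4, 0.2)
    a = np.sum(use & dementia);  b = np.sum(use & ~dementia)
    c = np.sum(~use & dementia); d = np.sum(~use & ~dementia)
    log_or = np.log((a * d) / (b * c))           # crude log odds ratio
    se = np.sqrt(1/a + 1/b + 1/c + 1/d)          # its approximate standard error
    if log_or / se > 1.96:                       # "significant" harmful association
        flagged += 1
print(flagged / n_drugs)   # far above the nominal 2.5% false-positive rate
```

A method that survives this kind of stress test on known nulls earns much more trust than one calibrated only on its own assumptions.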

  2. More BS outta Harvard. First, as Joe pointed out, the higher exposure groups are very clearly inherently less healthy than the lower exposure groups, and there is simply no way to control for that in a statistical model. Note that even controlling for all known confounders does not necessarily reduce net confounding bias. Second, there’s this: “to determine all the drugs, both prescription and over-the-counter, that each participant took the 10 years before starting the study. Participants’ health was tracked for an average of seven years. During that time, 800 of the volunteers developed dementia. When the researchers examined the use of anticholinergic drugs”… From all drugs to anticholinergic drugs? Come on!

    • I don’t know if the study is any good, but if you think that they went on a fishing expedition and caught an unlucky class of drug you’re wrong. The objective of the study is “to examine whether cumulative anticholinergic use is associated with a higher risk for incident dementia.” The fact that they “used Group Health’s pharmacy records to determine all the drugs, both prescription and over-the-counter, that each participant took” is relevant because, according to the article, “the University of Washington study is the first to include nonprescription drugs.”

      • Off topic: My coming is waiting for moderation. I didn’t include any link, so I guess the filter things I’m trying to sell life-enhancing chemical compounds here.

        • I’m so ashamed of my typos that I have to correct myself: “Off topic: My comment is waiting for moderation. I didn’t include any link, so I guess the filter thinks I’m trying to sell life-enhancing chemical compounds here.”

  3. It seems one can comment on the paper on the website where the paper appears. Why don’t some of the statisticians weigh in in the comments section?

  4. Adding potential confounders *always* reduces the point estimate at every anticholinergic consumption level. Whenever I see this, my first thought is: if all the ones you could measure reduce the point estimate, isn’t it likely that the ones you had no data on do as well? That doesn’t mean there’s nothing there, but the best point estimate is almost surely smaller than the “adjusted” point estimate, and the unadjusted one is garbage.

  5. The best possible source for determining use of medication is worstpills.org. Its motto seems to be, “never use a medication that is less than 7 years on the market.” Benadryl was invented in the 1940s

    http://archive.boston.com/news/globe/obituaries/articles/2007/09/30/george_rieveschl_invented_benadryl/

    and thus, predates the birth of almost all of the contributors to this blog. Another virtue of its ancient lineage is that any generic variant is extremely inexpensive.

    • The thing about Benadryl is that even if it doesn’t cause senility (and there is an actual mechanistic model for how anticholinergic activity could cause senility), it sure as hell makes you useless. So you can either have allergies and suffer through life, or you can have fewer allergies but spend all day every day sleeping…..

      No, Fexofenadine is a blockbuster drug for a reason. And, by now, it’s been on the market longer than 7 years, so yay all good.

      Of course, if you’re taking the Benadryl *in order to sleep*, well… it still does a pretty shitty job there, because in my experience you tend not to wake up feeling rested. I’d be interested to know how individuals’ sleep cycles change under Benadryl (a time series of several days of sleep testing would be needed).

      • Sample of size one: I use Benadryl very occasionally and for me, it works fine either for allergies or for sleep. However, I am reluctant to extrapolate and now that I am over 80, perhaps I am already senile and don’t know it. By the way, where did I put the bottle?

  6. Putting aside the many other serious problems with this paper, the authors attempt to control for education—which is strongly associated with the outcomes of interest and many of the covariates—by including “years of education” as a control, and then in parentheses noting, “at least some college vs none.” So the control for education is simply a dummy for at least some college, obliterating all the variation above and below that threshold. Why do medical researchers so commonly do this to their data?

    Going back to at least Cochran 1968 (http://www.jstor.org/stable/2528036?seq=1#page_scan_tab_contents), it has been known that categorizing a continuous control biases the estimates of the effect of interest. The bias is particularly severe when the continuous variable is dichotomized. Every once in a while someone writes a paper in the medical literature suggesting that maybe people should stop doing this, yet the memo doesn’t seem to have been received (e.g., Royston et al., “Dichotomizing continuous predictors in multiple regression: a bad idea,” Statistics in Medicine 2006).

    I just ran a toy simulation in which the DGP has two moderately highly correlated continuous covariates, but one is dichotomized for estimation. The resulting bias in the estimate on the other variable is comparable to effect sizes in the study above. In other words, even if we assume the *only* problem with this study is that they should not have dichotomized years of education, just that easily avoided flaw could have spuriously created the apparent effect.
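A sketch of the kind of toy simulation the commenter describes (my own invented numbers, not theirs): the exposure has zero true effect and the outcome depends only on a continuous covariate, yet dichotomizing that covariate leaves enough residual confounding to manufacture an apparent exposure effect:

```python
import numpy as np

rng = np.random.default_rng(3)
n, r = 100_000, 0.6
educ = rng.normal(size=n)                                   # continuous confounder (e.g. education)
expo = r * educ + np.sqrt(1 - r**2) * rng.normal(size=n)    # exposure, correlated with educ
y = 1.0 * educ + rng.normal(size=n)                         # outcome driven by education ONLY

def expo_coef(control):
    # OLS of y on [1, expo, control]; return the coefficient on expo
    X = np.column_stack([np.ones(n), expo, control])
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

b_cont = expo_coef(educ)                           # continuous control: near 0, the truth
b_dich = expo_coef((educ > 0).astype(float))       # dichotomized control: spurious effect
print(b_cont, b_dich)
```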

    • Not disagreeing in principle, but education is a tricky variable, especially once you get above grade 10 in the US. You need to decide where to put GED and technical schools, what to do about people who take 6 years to graduate from college, and so on.

      • Couldn’t you just use “effective level of education”… ie. if you took 6 years to get out of college, you get credit for 4 years, if you got a GED you’re “like” a 12th grader… it’s imperfect but it’s probably a lot better than dichotomization!

        • Also, even breaking things into say 6 groups would probably be a lot better than dichotomization…

          group 1 = HS dropout or less
          group 2 = HS grad, no college
          group 3 = HS grad, some college
          group 4 = College grad
          group 5 = Some graduate school
          group 6 = Master’s or PhD degree

  7. Controlling for the effects of variables is inherently noisy. Supposing (to make it easier to think about it) that each correction is done separately, each of those adjustments has some noise to it, some confidence range. That noise combines with the noise from other adjustments so as to increase the overall noise of the “adjusted” data. Conceptually this is similar to the difference between two means having a greater variance than either of them individually.

    By the time you have “corrected” for say 15 or 20 variables, how much room is left for seeing an actual effect above the extra noise? At the least, papers should state what this combined uncertainty, caused by the adjustments, is.
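Under the commenter’s simplifying assumption that each adjustment contributes an independent error, the errors add in variance, so the combined uncertainty grows like sqrt(k). A toy illustration (the per-adjustment sd is invented):

```python
import numpy as np

rng = np.random.default_rng(4)
k = 15                    # number of adjusted-for variables
per_adj_sd = 0.05         # invented sd of each adjustment's error, on the log-hazard scale
total_err = rng.normal(0, per_adj_sd, size=(200_000, k)).sum(axis=1)
print(total_err.std())    # ~ per_adj_sd * sqrt(k), i.e. about 0.19, not 0.05
```

On the log scale, 0.19 is not far from log(1.54) ≈ 0.43, so under these invented numbers the adjustment noise alone is the same order as the reported effect.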
