Spring forward, fall back, drop dead?

[Screenshot: graphs from the paper showing daily heart attack counts around the daylight saving time changes]

Antonio Rinaldi points me to a press release describing a recent paper by Amneet Sandhu, Milan Seth, and Hitinder Gurm, where I got the above graphs (sorry about the resolution, that’s the best I could do).

Here’s the press release:

Data from the largest study of its kind in the U.S. reveal a 25 percent jump in the number of heart attacks occurring the Monday after we “spring forward” compared to other Mondays during the year – a trend that remained even after accounting for seasonal variations in these events. But the study showed the opposite effect is also true. Researchers found a 21 percent drop in the number of heart attacks on the Tuesday after returning to standard time in the fall when we gain an hour back.

Rinaldi thinks: “On Tuesday? No multiple comparisons here???”

The press release continues:

“What’s interesting is that the total number of heart attacks didn’t change the week after daylight saving time,” said Amneet Sandhu, M.D., cardiology fellow, University of Colorado in Denver, and lead investigator of the study. “But these events were much more frequent the Monday after the spring time change and then tapered off over the other days of the week. It may mean that people who are already vulnerable to heart disease may be at greater risk right after sudden time changes. . . . We know from previous studies that a lack of sleep can trigger heart attacks, but we don’t have a good understanding of why people are so sensitive to changes in sleep-wake cycles. Our study suggests that sudden, even small changes in sleep could have detrimental effects,” he said.

Rinaldi also found this news article:

The researchers found no difference in the total weekly number of PCIs performed in 2010 to 2012 during the spring time changes (week before, 661; week after, 654; P=.87) and fall time changes (week before, 610; week after, 652; P=.25).

However, the RR for MI was higher on the Monday after the spring time change compared with other Mondays (RR=1.24; 95% CI, 1.05-1.46) and lower on the Tuesday after the fall time change compared with other Tuesdays (RR=0.79; 95% CI, 0.62-0.99).

As with the story about extra births on Valentine’s Day and fewer on Halloween, I think the right way to go is to do an analysis of all the days of the year rather than picking just one or two weeks.

Analyzing all 365 days is more work, but hard work is what research is all about. I agree with Rinaldi that “the Tuesday after returning to standard time” is a funny comparison to pick out. It could make sense but, if so, I think it would show up in other weeks, not just after daylight savings.
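Here’s a minimal sketch of what that all-365-days analysis could look like, with simulated counts standing in for the real data (the baseline rate, seasonal amplitude, and day-of-week bump below are all invented for illustration):

```python
# A sketch of the all-365-days idea, assuming simulated data in place of the
# real daily AMI counts. Baseline rate, seasonal amplitude, and the weekday
# bump are invented for illustration.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
days = np.arange(365)
dow = days % 7  # crude day-of-week index

# Invented truth: smooth seasonal trend plus a small weekday effect.
log_rate = np.log(30) + 0.1 * np.sin(2 * np.pi * days / 365) + 0.05 * (dow < 5)
counts = rng.poisson(np.exp(log_rate))

# Poisson regression with seasonal harmonics and day-of-week dummies.
X = sm.add_constant(np.column_stack(
    [np.sin(2 * np.pi * days / 365), np.cos(2 * np.pi * days / 365),
     np.sin(4 * np.pi * days / 365), np.cos(4 * np.pi * days / 365)]
    + [(dow == k).astype(float) for k in range(1, 7)]))
fit = sm.GLM(counts, X, family=sm.families.Poisson()).fit()

# Standardized (Pearson) residuals: with every day modeled, the DST Mondays
# can be judged against the full distribution, not a hand-picked comparison.
z = (counts - fit.mu) / np.sqrt(fit.mu)
print("days with |z| > 3:", days[np.abs(z) > 3])
```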

14 thoughts on “Spring forward, fall back, drop dead?”

  1. If this sort of result is publishable, the entire journal enterprise needs help. Read no further than: “What’s interesting is that the total number of heart attacks didn’t change the week after daylight saving time.” What is the utility of prolonging someone’s life by < one day? (assuming that we can accept the “statistical” nonsense, which we can’t)

    • Kaiser: I disagree. If it really is true that the time change induces heart attacks in vulnerable people, then something ABOUT the time change is the cause. Whatever that cause is, be it hormones or autonomic neural/brain activity or the like, that cause is interesting, and could be useful to understand.

      So I agree, if we’re talking about an intervention like “eliminate daylight savings time” because it will extend people’s lives by 1 to 5 days… then useless, but if we’re talking about the first step in understanding causation, which could prove useful eventually for other purposes… then I think this is important. But, you’re right, it’s really just a very preliminary first step. And I’d love to see the “birthday effect” type model for heart attacks!

      • I didn’t get the Kaiser critique: Why would it only extend life by 1-5 days?

        Say it’s a stressor, providing a nudge; eliminating it removes one trigger, doesn’t it?

        I’ve absolutely no clue whether the effect is real; but if it is, I don’t get the claim about how we are “only prolonging someone’s life by one day”?

        • The total number in the week after didn’t change, so I think Kaiser is saying that the only effect was to move heart attacks that would have occurred, say, Tuesday through Saturday from those days to Monday. So 1 day is maybe being a little facetious, but it seems like we’re talking at least that order of magnitude (1-5 days).

        • Tellingly, the chart Andrew displayed above is not found in the published paper. It’s replaced by a modeled control, which means you have to read the paper to figure out if you like how they constructed it.

          Daniel: On your earlier comment, I still think this DST analysis is a distraction. If we agree that a 1-5 day shift in the timing of attacks is not a clinically important outcome, then there is nothing to investigate further. I am putting up a two-part discussion of this research, starting with this post.
          http://junkcharts.typepad.com/numbersruleyourworld/2014/06/another-pr-effort-to-scare-you-into-clicking.html

        • Kaiser, you’re thinking more like a statistician than a biologist. I’m interested in the core causes that form the trigger, because I think those core causes could also form triggers in other situations, like shift work, policing and other high-stress jobs, even heart attacks subsequent to, say, traumatic injury.

          If someone could get a good dataset and a good model and show convincingly that DST transitions trigger heart attacks (even ones which were “lurking” in the wings and would have happened anyway soon after) then that would be a signal to look for biological stressors and build a causal BIOLOGICAL model of how such stressors work.

          Is THIS study a good one? Sounds like NO. But I still think that a proper study would be informative for biologists.

        • By calling it a “trigger”, you are already assuming that there is an indirect causal relationship. How do you know it’s not completely spurious? In my work as a business statistician, I constantly work with observational data, and with reverse causation problems like this. There is no shortage of variables that one can correlate with the observed outcome. Just take a look at http://www.tylervigen.com/.
          There are lots of other analyses that can be done to solidify this claim. What about other states? There are countries that don’t have DST. There are probably places that switched from not having DST to having DST or vice versa.
          Again, I must point out that we are talking about a 1-5 day shift in the timing of attacks. The researchers are not claiming that DST affected overall disease.

    • I had originally thought the same, though if this were the case, wouldn’t the disturbance be seen on the Sunday rather than the Monday?

      I don’t have anything for the Tuesday story; that graph is just noise to me. I am not sure why they didn’t construct a story around the first Sunday in the time series, which looks like the stronger effect.

  2. “the total number of heart attacks didn’t change” — this probably doesn’t mean literally no change, but no ‘statistically significant’ change via a test. If you take one day of “effect” and dilute it with several days of “no effect”, the overall test is diluted as well.
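    For example, here is a toy simulation of that dilution (the baseline rate and the size of the Monday spike are invented, loosely echoing the numbers quoted in the post):

```python
# Toy simulation of the dilution: a Monday-only spike, tested day-vs-day and
# week-vs-week. Baseline (30 AMI/day) and spike size (7/day) are invented.
import numpy as np

rng = np.random.default_rng(1)
base, bump, n_sims = 30, 7, 10_000

day_hits = week_hits = 0
for _ in range(n_sims):
    control = rng.poisson(base, size=7)                 # an ordinary week
    treated = rng.poisson([base + bump] + [base] * 6)   # spike on Monday only
    # Crude z-tests for a difference of two Poisson counts.
    z_day = (treated[0] - control[0]) / np.sqrt(treated[0] + control[0])
    z_week = (treated.sum() - control.sum()) / np.sqrt(treated.sum() + control.sum())
    day_hits += z_day > 1.645
    week_hits += z_week > 1.645

# The one-day comparison flags the spike more often than the weekly total,
# which is the sense in which the weekly test is "diluted."
print(f"Monday-only test detects spike: {day_hits / n_sims:.0%}")
print(f"weekly-total test detects spike: {week_hits / n_sims:.0%}")
```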

    I find Rinaldi’s question about multiple comparisons interesting. This is obviously a situation in which you would have to look at some data first, just to see if it’s worth collecting more. So you have ample opportunity to come up with a specific hypothesis, and then confirm it (or not) with the additional data.

  3. The paper upon which the presentation is based can be found at the open-access journal http://openheart.bmj.com/content/1/1/e000019.full

    From the last paragraph in the *methods and definitions* section: “No adjustments were made for multiple comparisons, and the analysis is intended to be exploratory in nature. As such, nominally significant results should be interpreted as hypothesis generating, rather than confirmatory evidence.” In the results, the authors draw attention to two dates (after summing across 3 years) that show “statistically significant” deviations from the regression model.

    If one finds an outlier when doing exploratory investigations, normally the next step is to do a host of follow-up studies to determine whether it is statistical “noise,” a bug in the code or a real effect. The fact that the paper discusses interpretations of these outliers in terms of impact of DST change-over on physiology without addressing the multiple comparison problem is mind-boggling.
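    To put a rough number on the multiple comparison problem: a toy null simulation (with an assumed baseline rate and no DST effect anywhere) of testing all 365 days shows how many “significant” days turn up by chance alone:

```python
# Toy null simulation: 365 days of pure Poisson noise (assumed baseline of
# 30 AMI/day, no DST effect anywhere), each day tested against the true mean.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
base = 30.0
counts = rng.poisson(base, size=365)

# Two-sided Poisson tail probabilities for each day's count.
p_hi = 1 - stats.poisson.cdf(counts - 1, base)  # P(X >= observed)
p_lo = stats.poisson.cdf(counts, base)          # P(X <= observed)
p = 2 * np.minimum(p_hi, p_lo)

# Roughly 365 * 0.05 ~ 18 days come out "significant" with no effect at all
# (somewhat fewer in practice because the Poisson is discrete).
print("false positives at alpha = 0.05:", int((p < 0.05).sum()))
```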

    • It’s not so much mind-boggling as sad — particularly that they express some awareness of the multiple comparison problem, but ignore its relevance for their study.

      • Well, it was my own first-blush reaction.

        The very first sentence in the abstract states: “Prior research has shown a transient increase in the incidence of acute myocardial infarction (AMI) after daylight savings time (DST) in the spring as well as a decrease in AMI after returning to standard time in the fall.” This means that the researchers already were testing a specific hypothesis, so any claim that they need not account for multiple comparisons because they are just generating potential hypotheses is defective.

        — What is the uncertainty in the modeled AMI counts for each day? Is it just counting uncertainty from a Poisson distribution?
        — The effect size is quoted at 7-8 AMI/day. What is the uncertainty in this estimate?
        — How does this compare to recognized risk factors?

        And it’s not as if doctors aren’t interested in the answers to these questions…
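        On the first question, here is a back-of-envelope Poisson check. The baseline of roughly 31 AMI/day is not from the paper; it is inferred from the quoted RR of 1.24 and the quoted 7-8 AMI/day effect, so treat it as an assumption:

```python
# Back-of-envelope for the counting-uncertainty question. The ~31/day baseline
# is NOT from the paper: it is inferred from the quoted RR (1.24) and the
# quoted excess (7-8 AMI/day), so treat it as an assumption.
import math

effect = 7.5            # midpoint of the quoted 7-8 excess AMI/day
base = effect / 0.24    # implied baseline if RR = 1.24 (~31/day)

sd_day = math.sqrt(base)  # Poisson sd of one day's count
print(f"implied baseline: ~{base:.0f}/day, one-day Poisson sd: ~{sd_day:.1f}")
print(f"one Monday's excess: ~{effect / sd_day:.1f} sd")
print(f"three Mondays pooled: ~{3 * effect / math.sqrt(3 * base):.1f} sd")
```

        That the pooled excess comes out near 2 sd is at least roughly consistent with the reported CI (1.05-1.46) only just excluding 1.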
