Archive of posts filed under the Public Health category.

Type M errors studied in the wild

Brendan Nyhan points to this article, “Very large treatment effects in randomised trials as an empirical marker to indicate whether subsequent trials are necessary: meta-epidemiological assessment,” by Myura Nagendran, Tiago Pereira, Grace Kiew, Douglas Altman, Mahiben Maruthappu, John Ioannidis, and Peter McCulloch. From the abstract: Objective To examine whether a very large effect (VLE; defined […]
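
The excerpt stops at the abstract’s objective, but the “Type M error” idea in the post title can be illustrated with a minimal simulation sketch (mine, not from the post or the paper; the true effect and standard error below are made up for illustration): when power is low, the estimates that happen to reach statistical significance exaggerate the true effect, which is one reason very large effects in small early trials tend to shrink in subsequent trials.

```r
# Minimal sketch (not from the post): Type M (magnitude) error under low power.
set.seed(123)
true_effect <- 0.2                               # assumed true effect, for illustration only
se <- 0.5                                        # assumed standard error of the estimate
est <- rnorm(1e5, mean = true_effect, sd = se)   # replications of the study's estimate
signif <- abs(est) > 1.96 * se                   # replications that reach p < 0.05
mean(signif)                                     # power: roughly 0.07 under these assumptions
mean(abs(est[signif])) / true_effect             # exaggeration ratio (Type M error): roughly 6
```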

Causal inference using data from a non-representative sample

Dan Gibbons writes: I have been looking at using synthetic control estimates for estimating the effects of healthcare policies, particularly because for, say, county-level data the nontreated comparison units one would use in, say, a difference-in-differences estimator or quantile DID estimator (if one didn’t want to use the mean) are not especially clear. However, given […]

We were unfair to traditional pollsters

A couple days ago, Slate ran an article by David Rothschild and myself, “We Need to Move Beyond Election-Focused Polling,” in which we wrote about various aspects of the future of opinion surveys. One aspect of this article was misleading. We wrote: And instead of zeroing in on elections, we should think of polling and […]

Rosenbaum (1999): Choice as an Alternative to Control in Observational Studies

Winston Lin wrote in a blog comment earlier this year: Paul Rosenbaum’s 1999 paper “Choice as an Alternative to Control in Observational Studies” is really thoughtful and well-written. The comments and rejoinder include an interesting exchange between Manski and Rosenbaum on external validity and the role of theories. And here it is. Rosenbaum begins: In […]

All cause and breast cancer specific mortality, by assignment to mammography or control

Paul Alper writes: You might be interested in the robocall my wife received today from our Medicare Advantage organization (UCARE Minnesota). The robocall informed us that mammograms saved lives and were available free of charge as part of her health insurance. No mention of recent studies criticizing mammography regarding false positives, harms of biopsies, etc. […]

What are best practices for observational studies?

Mark Samuel Tuttle writes: Just returned from the annual meeting of the American Medical Informatics Association (AMIA); in attendance were many from Columbia. One subtext of conversations I had with the powers that be in the field is the LACK of Best Practices for Observational Studies. They all agree that however difficult they are that […]

“Mainstream medicine has its own share of unnecessary and unhelpful treatments”

I have a story and then a question. The story Susan Perry (link sent by Paul Alper) writes: Earlier this week, I [Perry] highlighted two articles that exposed the dubious history, medical ineffectiveness and potential health dangers of popular alternative “therapies.” Well, the same can be said of many mainstream conventional medical practices, as investigative […]

Publish your raw data and your speculations, then let other people do the analysis: track and field edition

There seems to be an expectation in science that the people who gather a dataset should also be the ones who analyze it. But often that doesn’t make sense: what it takes to gather relevant data has little to do with what it takes to perform a reasonable analysis. Indeed, the imperatives of analysis can […]

The Pandora Principle in statistics — and its malign converse, the ostrich

The Pandora Principle is that once you’ve considered a possible interaction or bias or confounder, you can’t un-think it. The malign converse is when people realize this and then design their studies to avoid putting themselves in a position where they have to consider some potentially important factor. For example, suppose you’re considering some policy […]

“This finding did not reach statistical significance, but it indicates a 94.6% probability that statins were responsible for the symptoms.”

Charles Jackson writes: The attached item from JAMA, which I came across in my doctor’s waiting room, contains the statements: Nineteen of 203 patients treated with statins and 10 of 217 patients treated with placebo met the study definition of myalgia (9.4% vs 4.6%; P = .054). This finding did not reach statistical significance, but […]
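
The arithmetic in the quoted passage is easy to check, and the check shows where the headline “94.6% probability” comes from. Here is a minimal sketch in R (mine, not from the JAMA item or from the post):

```r
# Sketch (not from the JAMA item): reproduce the quoted comparison of myalgia rates.
x <- c(19, 10)                     # myalgia cases: statins, placebo
n <- c(203, 217)                   # patients per arm
round(100 * x / n, 1)              # 9.4 vs 4.6 (percent)
prop.test(x, n, correct = FALSE)   # two-proportion test; p-value comes out near the reported .054
1 - 0.054                          # 0.946 -- the "94.6% probability" is just one minus the p-value,
                                   # which is not the probability that statins caused the symptoms
```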

It’s hard to know what to say about an observational comparison that doesn’t control for key differences between treatment and control groups, chili pepper edition

Jonathan Falk points to this article and writes: Thoughts? I would have liked to have seen the data matched on age, rather than simply using age in a Cox regression, since I suspect that’s what’s really going on here. The non-chili eaters were much older, and I suspect that the failure to interact age, or […]
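
For readers who want to see the modeling distinction Falk is drawing, here is a minimal sketch in R with simulated data (the variables and numbers are hypothetical, not from the study): an additive Cox model adjusts for age but forces a single chili coefficient at every age, whereas an interaction lets the chili association vary with age, which is closer in spirit to comparing chili eaters and non-eaters of similar ages.

```r
# Minimal sketch with simulated data; not the study's analysis.
library(survival)
set.seed(1)
n      <- 2000
age    <- rnorm(n, 55, 12)
chili  <- rbinom(n, 1, plogis(-(age - 55) / 10))   # chili eaters skew younger, as Falk suspects
rate   <- exp(-8 + 0.09 * age)                     # mortality driven by age, not chili, in this fake data
time   <- rexp(n, rate)
status <- as.integer(time < 10)                    # administrative censoring at 10 years
time   <- pmin(time, 10)
d <- data.frame(time, status, chili, age)

fit_additive <- coxph(Surv(time, status) ~ chili + age, data = d)  # one chili coefficient for all ages
fit_interact <- coxph(Surv(time, status) ~ chili * age, data = d)  # chili association allowed to vary with age
anova(fit_additive, fit_interact)                                  # likelihood-ratio comparison
```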

“Explaining recent mortality trends among younger and middle-aged White Americans”

Kevin Lewis sends along this paper by Ryan Masters, Andrea Tilstra, and Daniel Simon, who write: Recent research has suggested that increases in mortality among middle-aged US Whites are being driven by suicides and poisonings from alcohol and drug use. Increases in these ‘despair’ deaths have been argued to reflect a cohort-based epidemic of pain […]

How to design future studies of systemic exertion intolerance disease (chronic fatigue syndrome)?

Someone named Ramsey writes on behalf of a self-managed support community of 100+ systemic exertion intolerance disease (SEID) patients. He read my recent article on the topic and had a question regarding the following excerpt: For conditions like S.E.I.D., then, the better approach may be to gather data from people suffering “in the wild,” combining […]

Hey—here are some tools in R and Stan for designing more effective clinical trials! How cool is that?

In statistical work, design and data analysis are often considered separately. Sometimes we do all sorts of modeling and planning in the design stage, only to analyze data using simple comparisons. Other times, we design our studies casually, even thoughtlessly, and then try to salvage what we can using elaborate data analyses. It would be […]
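
The specific R and Stan tools the post points to are not named in this excerpt, but the general idea of treating design and analysis together can be illustrated with a design-by-simulation sketch (mine, with made-up effect size and sample size): simulate the trial you plan to run, apply the analysis you plan to report, and see what power, and what kind of estimates, that combination would actually deliver.

```r
# Generic design-by-simulation sketch; not the tools from the post.
set.seed(42)
sim_trial <- function(n_per_arm, true_effect, sd = 1) {
  y_control   <- rnorm(n_per_arm, 0, sd)
  y_treatment <- rnorm(n_per_arm, true_effect, sd)
  t.test(y_treatment, y_control)                        # the simple comparison we plan to report
}

sims <- replicate(2000, {
  tt <- sim_trial(n_per_arm = 50, true_effect = 0.2)    # assumed design values, for illustration only
  c(estimate = unname(tt$estimate[1] - tt$estimate[2]), # treatment minus control
    p = tt$p.value)
})

mean(sims["p", ] < 0.05)                                # power of the planned analysis (low here)
mean(sims["estimate", sims["p", ] < 0.05])              # average "significant" estimate: well above the assumed 0.2
```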

Clinical trials are broken. Here’s why.

Someone emailed me with some thoughts on systemic exertion intolerance disease, in particular, controversies regarding the PACE trial, which evaluated psychological interventions for this condition or, should I say, set of conditions. I responded as follows: At one point I had the thought of doing a big investigative project on this, formally interviewing a bunch […]

Further criticism of social scientists and journalists jumping to conclusions based on mortality trends

[cat picture] So. We’ve been having some discussion regarding reports of the purported increase in mortality rates among middle-aged white people in America. The news media have mostly spun a simple narrative of struggling working-class whites, but there’s more to the story. Some people have pointed me to some contributions from various sources: In “The […]

You can read two versions of this review essay on systemic exertion intolerance disease (chronic fatigue syndrome)

Julie Rehmeyer wrote a book, “Through the Shadowlands: A Science Writer’s Odyssey into an Illness Science Doesn’t Understand,” and my review appeared in the online New Yorker, much shortened and edited, and given the title, “A memoir of chronic fatigue illustrates the failures of medical research.” My original was titled, “Systemic exertion intolerance disease: The […]

Maternal death rate problems in North Carolina

Somebody named Jerrod writes: I thought you might find this article [“Black moms die in childbirth 3 times as often as white moms. Except in North Carolina,” by Julia Belluz] interesting as it relates to some of your interests in health data and combines it with bad analysis and framing. My beef with the article: […]

Bayesian, but not Bayesian enough

Will Moir writes: This short New York Times article on a study published in BMJ might be of interest to you and your blog community, both in terms of how the media reports science and also the use of Bayesian vs frequentist statistics in the study itself. Here is the short summary from the news […]

Problems with the jargon “statistically significant” and “clinically significant”

Someone writes: After listening to your EconTalk episode a few weeks ago, I have a question about interpreting treatment effect magnitudes, effect sizes, SDs, etc. I studied Econ/Math undergrad and worked at a social science research institution in health policy as a research assistant, so I have a good amount of background. At the institution […]