Archive of posts filed under the Miscellaneous Statistics category.

What’s Wrong with “Evidence-Based Medicine” and How Can We Do Better? (My talk at the University of Michigan Friday 2pm)

Tomorrow (Fri 9 Feb) 2pm at the NCRC Research Auditorium (Building 10) at the University of Michigan: What’s Wrong with “Evidence-Based Medicine” and How Can We Do Better? Andrew Gelman, Department of Statistics and Department of Political Science, Columbia University. “Evidence-based medicine” sounds like a good idea, but it can run into problems when the […]

354 possible control groups; what to do?

Jonas Cederlöf writes: I’m a PhD student in economics at Stockholm University and a frequent reader of your blog. I have for a long time followed your quest in trying to bring attention to p-hacking and multiple comparison problems in research. I’m now myself faced with the aforementioned problem and want to at the very […]
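To see why 354 candidate comparisons is a worry even before any analysis is run, here is a quick fake-data sketch (all numbers except the 354 are hypothetical placeholders, not from Cederlöf’s setting): when there is no true effect anywhere, scanning that many control-group comparisons for p < 0.05 will almost always turn up a “significant” result.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_comparisons = 354   # number of candidate control groups, from the post title
n_per_group = 200     # hypothetical sample size per group
n_sims = 200          # number of simulated "studies" in which nothing is going on

any_significant = 0
for _ in range(n_sims):
    treated = rng.normal(0.0, 1.0, n_per_group)   # treated outcomes, null world
    p_min = min(
        stats.ttest_ind(treated, rng.normal(0.0, 1.0, n_per_group)).pvalue
        for _ in range(n_comparisons)
    )
    any_significant += p_min < 0.05

print("Share of null studies with at least one p < 0.05 comparison:",
      any_significant / n_sims)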

Methodological terrorism. For reals. (How to deal with “what we don’t know” in missing-data imputation.)

Kevin Lewis points us to this paper, by Aaron Safer-Lichtenstein, Gary LaFree, and Thomas Loughran, on the methodology of terrorism studies. This is about as close to actual “methodological terrorism” as we’re ever gonna see here. The linked article begins: Although the empirical and analytical study of terrorism has grown dramatically in the past decade and […]

p=0.24: “Modest improvements” if you want to believe it, a null finding if you don’t.

David Allison sends along this juxtaposition: Press Release: “A large-scale effort to reduce childhood obesity in two low-income Massachusetts communities resulted in some modest improvements among schoolchildren over a relatively short period of time…” Study: “Overall, we did not observe a significant decrease in the percent of students with obesity from baseline to post intervention […]
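A quick way to see how the same result can support both headlines is to back out the interval estimate. The numbers below are hypothetical placeholders chosen only so that the two-sided p-value comes out near 0.24; they are not the study’s data.

from scipy import stats

estimate = -1.2   # hypothetical change in percent of students with obesity
se = 1.02         # hypothetical standard error chosen so that p comes out near 0.24

z = estimate / se
p = 2 * stats.norm.sf(abs(z))
ci = (estimate - 1.96 * se, estimate + 1.96 * se)
print(f"z = {z:.2f}, p = {p:.2f}, 95% CI = ({ci[0]:.1f}, {ci[1]:.1f})")
# The interval runs from a clear improvement to a slight worsening, so the data
# are consistent with "modest improvement" and with "no effect" alike.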

Snappy Titles: Deterministic claims increase the probability of getting a paper published in a psychology journal

A junior psychology researcher who would like to remain anonymous writes: I wanted to pass along something I found to be of interest today as a proponent of pre-registration. Here is a recent article from Social Psychological and Personality Science. I was interested by the pre-registered study. Here is the pre-registration for Study 1. The […]

Geoff Norman: Is science a special kind of storytelling?

Javier Benítez points to this article by epidemiologist Geoff Norman, who writes: The nature of science was summarized beautifully by a Stanford professor of science education, Mary Budd Rowe, who said that: Science is a special kind of story-telling with no right or wrong answers. Just better and better stories. Benítez writes that he doesn’t […]

My suggested project for the MIT Better Science Ideathon: assessing the reasonableness of missing-data imputations.

Leo Celi writes: We are 3 months away from the MIT Better Science Ideathon on April 23. We would like to request your help with mentoring a team or 2 during the ideathon. During the ideathon, teams discuss a specific issue (lack of focus on reproducibility across the majority of journals) or problem that arose from […]

Another bivariate multivariate dependence measure!

Joshua Vogelstein writes: Since you’ve posted much on various independence test papers (e.g., Reshef et al., and then the Simon & Tibshirani criticism, and then their back and forth), I thought perhaps you’d post this one as well. Tibshirani pointed out that distance correlation (Dcorr) was recommended; we proved that our oracle multiscale generalized correlation (MGC, […]
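For readers who haven’t met these statistics, here is a minimal NumPy sketch of the (biased-sample) distance correlation, the Dcorr statistic referenced above; MGC itself is a multiscale extension and is not reproduced here. The simulated data are mine, not from the paper.

import numpy as np
from scipy.spatial.distance import pdist, squareform

def double_center(d):
    # subtract row means and column means, add back the grand mean
    return d - d.mean(axis=0) - d.mean(axis=1, keepdims=True) + d.mean()

def distance_correlation(x, y):
    a = double_center(squareform(pdist(x.reshape(len(x), -1))))
    b = double_center(squareform(pdist(y.reshape(len(y), -1))))
    dcov2 = (a * b).mean()
    return np.sqrt(dcov2 / np.sqrt((a * a).mean() * (b * b).mean()))

rng = np.random.default_rng(1)
x = rng.normal(size=500)
y = x**2 + rng.normal(scale=0.5, size=500)   # nonlinear dependence, near-zero Pearson r

print("Pearson correlation: ", np.corrcoef(x, y)[0, 1])
print("Distance correlation:", distance_correlation(x, y))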

Bayes, statistics, and reproducibility: My talk at Rutgers 5pm on Mon 29 Jan 2018

In the weekly seminar on the Foundations of Probability in the Philosophy Department at Rutgers University, New Brunswick Campus, Miller Hall, 2nd floor seminar room: Bayes, statistics, and reproducibility. The two central ideas in the foundations of statistics—Bayesian inference and frequentist evaluation—both are defined in terms of replications. For a Bayesian, the replication comes in the […]

How to get a sense of Type M and Type S errors in neonatology, where trials are often very small? Try fake-data simulation!

Tim Disher read my paper with John Carlin, “Beyond Power Calculations: Assessing Type S (Sign) and Type M (Magnitude) Errors,” and followed up with a question: I am a doctoral student conducting research within the field of neonatology, where trials are often very small, and I have long suspected that many intervention effects are potentially […]
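For a sense of what such a calculation looks like, here is a minimal fake-data simulation in the spirit of the Gelman and Carlin retrodesign idea. The true effect and standard error below are hypothetical placeholders standing in for a small, noisy trial; they are not from Disher’s data.

import numpy as np

rng = np.random.default_rng(123)
true_effect = 0.1   # hypothetical true effect size
se = 0.35           # hypothetical standard error implied by a small trial
n_sims = 100_000

estimates = rng.normal(true_effect, se, n_sims)   # simulated replications of the study
significant = np.abs(estimates / se) > 1.96       # replications with "p < 0.05"

power = significant.mean()
type_s = (estimates[significant] * np.sign(true_effect) < 0).mean()
exaggeration = np.abs(estimates[significant]).mean() / abs(true_effect)

print(f"Power: {power:.2f}")
print(f"Type S error rate (wrong sign, given significance): {type_s:.2f}")
print(f"Type M error (exaggeration ratio), given significance: {exaggeration:.1f}")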

A Python program for multivariate missing-data imputation that works on large datasets!?

Alex Stenlake and Ranjit Lall write about a program they developed for imputing missing data: Strategies for analyzing missing data have become increasingly sophisticated in recent years, most notably with the growing popularity of the best-practice technique of multiple imputation. However, existing algorithms for implementing multiple imputation suffer from limited computational efficiency, scalability, and capacity […]
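To be clear, the sketch below is not Stenlake and Lall’s program; it is just a minimal illustration of the multiple-imputation workflow they are addressing, using scikit-learn’s experimental IterativeImputer on simulated data.

import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

rng = np.random.default_rng(0)
X = rng.multivariate_normal([0, 0, 0], np.eye(3) + 0.5, size=1000)
X[rng.random(X.shape) < 0.2] = np.nan   # hypothetical 20% missingness

m = 5  # number of completed datasets
completed = [
    IterativeImputer(sample_posterior=True, random_state=i).fit_transform(X)
    for i in range(m)
]

# Example of combining: the mean of the first variable across the m completed
# datasets, plus its between-imputation spread.
means = [c[:, 0].mean() for c in completed]
print("Across-imputation mean:", np.mean(means), " spread:", np.std(means))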

Benefits and limitations of randomized controlled trials: I agree with Deaton and Cartwright

My discussion of “Understanding and misunderstanding randomized controlled trials,” by Angus Deaton and Nancy Cartwright, for Social Science & Medicine: I agree with Deaton and Cartwright that randomized trials are often overrated. There is a strange form of reasoning we often see in science, which is the idea that a chain of reasoning is as […]

“However noble the goal, research findings should be reported accurately. Distortion of results often occurs not in the data presented but . . . in the abstract, discussion, secondary literature and press releases. Such distortion can lead to unsupported beliefs about what works for obesity treatment and prevention. Such unsupported beliefs may in turn adversely affect future research efforts and the decisions of lawmakers, clinicians and public health leaders.”

David Allison points us to this article by Bryan McComb, Alexis Frazier-Wood, John Dawson, and himself, “Drawing conclusions from within-group comparisons and selected subsets of data leads to unsubstantiated conclusions.” It’s a letter to the editor for the Australian and New Zealand Journal of Public Health, and it begins: [In the paper, “School-based systems change […]

Now, Andy did you hear about this one?

We drank a toast to innocence, we drank a toast to now. We tried to reach beyond the emptiness but neither one knew how. – Kiki and Herb Well I hope you all ended your 2017 with a bang.  Mine went out on a long-haul flight crying so hard at a French AIDS drama that […]

I’m with Errol: On flypaper, photography, science, and storytelling

[image of a cat going after an insect] I’ve been reading this amazing book, Believing is Seeing: Observations on the Mysteries of Photography, by Errol Morris, who, like John Waters, is a pathbreaking filmmaker who is also an excellent writer. I recommend this book, but what I want to talk about here is one particular […]

Some of our work from the past year

Our published papers are listed here in approximate reverse chronological order (including some unexpected items such as a review of a book on international relations), and our unpublished papers are here. (Many but not all of the unpublished papers will eventually end up in the “published” category.) No new books this year except for the […]

Forking paths plus lack of theory = No reason to believe any of this.

[image of a cat with a fork] Kevin Lewis points us to this paper which begins: We use a regression discontinuity design to estimate the causal effect of election to political office on natural lifespan. In contrast to previous findings of shortened lifespan among US presidents and other heads of state, we find that US […]
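For readers unfamiliar with the method, here is a minimal local-linear regression-discontinuity sketch on simulated data (nothing here comes from the paper; the true effect is set to zero). The point is partly to show where the forking paths enter: bandwidth, functional form, and sample restrictions are all researcher choices.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
margin = rng.uniform(-0.5, 0.5, 2000)           # vote margin; candidate wins if > 0
won = (margin > 0).astype(float)
lifespan = 78 + 0 * won + 5 * margin + rng.normal(0, 8, 2000)   # true effect of winning = 0

h = 0.1                                          # bandwidth: one of many researcher choices
keep = np.abs(margin) < h
X = sm.add_constant(np.column_stack([won[keep], margin[keep],
                                     won[keep] * margin[keep]]))
fit = sm.OLS(lifespan[keep], X).fit()
print("RD estimate of winning on lifespan:", round(fit.params[1], 2),
      "+/-", round(fit.bse[1], 2))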

A debate about robust standard errors: Perspective from an outsider

A colleague pointed me to a debate among some political science methodologists about robust standard errors, and I told him that the topic didn’t really interest me because I haven’t found a use for robust standard errors in my own work. My colleague urged me to look at the debate more carefully, though, so I […]

The failure of null hypothesis significance testing when studying incremental changes, and what to do about it

A few months ago I wrote a post, “Cage match: Null-hypothesis-significance-testing meets incrementalism. Nobody comes out alive.” I soon after turned it into an article, published in Personality and Social Psychology Bulletin, with the title given above and the following abstract: A standard mode of inference in social and behavioral science is to establish stylized […]

Walk a Crooked Mile

An academic researcher writes: I was wondering if you might have any insight or thoughts about a problem that has really been bothering me. I have taken a winding way through academia, and I am seriously considering a career shift that would allow me to do work that more directly translates to societal good and […]