Archive of entries posted by

Against Screening

Matthew Simonson writes: I have a question that may be of interest to your readers (and even if not, I’d love to hear your response). I’ve been analyzing a dataset of over 100 Middle Eastern political groups (MAROB) to see how these groups react to government repression. Observations are at the group-year level and include […]

“This is a weakness of our Bayesian Data Analysis book: We don’t have a lot of examples with informative priors.”

Roy Tamura writes: I am trying to implement a recommendation you made a few years ago. In my clinical trial of drug versus placebo, patients were stratified into two cohorts and randomized within strata. Time to event is the endpoint with the proportional hazards regression with strata and treatment as independent factors. There is evidence […]

A link between science and hype? Not always!

Neal Beck points us to this news article by Aaron Carroll, “A Link Between Alcohol and Cancer? It’s Not Nearly as Scary as It Seems.” Here’s Carroll: Citing evidence, the American Society of Clinical Oncology warned that even light drinking could increase the risk of cancer. . . . It acknowledges that the greatest risks […]

Here are the data and code for that study of Puerto Rico deaths

A study just came out, Mortality in Puerto Rico after Hurricane Maria, by Nishant Kishore et al.: Using a representative, stratified sample, we surveyed 3299 randomly chosen households across Puerto Rico to produce an independent estimate of all-cause mortality after the hurricane. Respondents were asked about displacement, infrastructure loss, and causes of death. We calculated […]

All Fools in a Circle

A graduate student in psychology writes: Grants do not fund you unless you have pilot data – and moreover – show some statistically significant finding in your N of 20 or 40 – in essence trying to convince the grant reviewers that there is “something there” worth them providing your lab lots of money to […]

“Not statistically significant” != 0, stents edition

Doug Helmreich writes: OK, I work at a company that is involved in stents, so I’m not unbiased, but… http://www3.imperial.ac.uk/newsandeventspggrp/imperialcollege/newssummary/news_2-11-2017-15-52-46 and especially https://www.nytimes.com/2017/11/02/health/heart-disease-stents.html The research design is pretty cool—placebo participants got a sham surgery with no stent implanted. The results show that people with the stent did have better metrics than those with just the […]
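The title's point ("not statistically significant" does not mean the effect is zero) can be illustrated with a minimal sketch. The numbers below are hypothetical, not from the stent trial itself: a between-group difference whose 95% interval includes zero also includes substantial positive effects.

```python
import numpy as np

# Hypothetical summary data (illustrative only, not the trial's numbers):
# mean improvement in exercise time (seconds), per group
mean_stent, mean_sham = 28.4, 11.8
sd, n = 65.0, 100  # assumed per-group SD and sample size

diff = mean_stent - mean_sham
se = sd * np.sqrt(2.0 / n)          # standard error of the difference
ci = (diff - 1.96 * se, diff + 1.96 * se)

print(f"difference: {diff:.1f} s, 95% CI: ({ci[0]:.1f}, {ci[1]:.1f})")
# The interval includes 0 ("not statistically significant") but also
# includes large positive effects -- the data do not demonstrate zero effect.
```

The interval spanning zero licenses only "the data are consistent with no effect," not "there is no effect."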

Some experiments are just too noisy to tell us much of anything at all: Political science edition

Sointu Leikas pointed us to this published research article, “Exposure to inequality affects support for redistribution.” Leikas writes that “it seems to be a really apt example of ‘researcher degrees of freedom.’” Here’s the abstract of the paper: As the world’s population grows more urban, encounters between members of different socioeconomic groups occur with greater […]

Kaleidoscope

Dale Lehman writes: This one’s on a topic you have blogged about often and one that I still think is under-appreciated: measurement. The Economist recently reported on this fascinating article about lightning strikes and their apparent sensitivity to shipping lanes and the associated pollution. I [Lehman] immediately wondered about whether there is a bias in […]

Tali Sharot responds to my comments on a recent op-ed

Yesterday I posted some comments on an op-ed by Tali Sharot and Cass Sunstein. Sharot sent the following response: I wanted to correct a few inaccuracies, which two of your commenters were quick to catch (Jeff and Dale). It seems you have 3 objections 1. “Participants did not learn about others’ opinions. There were […]

Three-warned is three-armed

Simon Gates writes: Here is a paper just published in JAMA, on correction for multiple testing, and the clinical trial it refers to (also, I’ve just noticed, relevant to yesterday’s post [this one, I think. — AG]). This sort of sequential testing (and non-testing) is quite common, for example in three-armed trials (not saying I […]

Comment of the year

From Jeff: “The decision to use mice for that study was terrible.” “Yeah, I know—and such small samples!” Sure, it’s only May. But I don’t think we’ll see anything better for a while, so I’m happy to give out the award right now.

X spotted in L’Aimant, par Lucas Harari

I have a long post in preparation with lots of B.D. reviews, but in the meantime I wanted to flag this one right now because (a) the book was excellent, with a solid story and beautiful art and design, and (b) one of the characters looks just like Christian Robert, in an appropriate mountainous setting. […]

Click here to find out how these 2 top researchers hyped their work in a NYT op-ed!

Gur Huberman pointed me to this NYT op-ed entitled “Would You Go to a Republican Doctor?”, written by two professors describing their own research, that begins as follows: Suppose you need to see a dermatologist. Your friend recommends a doctor, explaining that “she trained at the best hospital in the country and is regarded as […]

Write your congressmember to require researchers to publicly post their code?

Stephen Cranney writes: For the past couple of years I have had an ongoing question/concern . . . In my fields (sociology and demography) much if not most of the published research is based on publicly available datasets; consequently, replicability is literally a simple matter of sending or uploading a few kilobytes of code text. […]

Stan on TV

For reals. Billions, Season 3, Episode 9, 35:10.

“I admire the authors for simply admitting they made an error and stating clearly and without equivocation that their original conclusions were not substantiated.”

David Allison writes: I hope you will consider covering this in your blog. I admire the authors for simply admitting they made an error and stating clearly and without equivocation that their original conclusions were not substantiated. More attention to the confusing effects of regression to the mean is warranted, as is more praise for […]

The anthropic principle in statistics

The anthropic principle in physics states that we can derive certain properties of the world, or even the universe, based on the knowledge of our existence. The earth can’t be too hot or too cold, there needs to be oxygen and water, etc., which in turn implies certain things about our solar system, and so […]

David Bellos’s book on translation

Seeing as linguistics is on the agenda, I thought I’d mention this excellent book I just finished, “Is That a Fish in Your Ear,” by David Bellos. Bellos is a translator and scholar of French literature, and in his book he covers all sorts of topics. Nothing deep, but, as a non-expert on the topic, […]

The statistical significance filter leads to overoptimistic expectations of replicability

Shravan Vasishth, Daniela Mertzen, Lena Jäger, et al. write: Treating a result as publishable just because the p-value is less than 0.05 leads to overoptimistic expectations of replicability. These overoptimistic expectations arise due to Type M(agnitude) error: when underpowered studies yield significant results, effect size estimates are guaranteed to be exaggerated and noisy. These effects […]
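The Type M (magnitude) error described in the abstract is easy to see by simulation. Below is a minimal sketch (my own illustration, not the authors' code, with assumed effect size and sample size): when a study is underpowered, the subset of results that cross the significance threshold must overestimate the true effect, because only large-by-chance estimates clear the bar.

```python
import numpy as np

rng = np.random.default_rng(42)

true_effect = 0.2   # small true effect (standardized units), assumed
n = 20              # per-group sample size: badly underpowered
n_sims = 10_000

sig_estimates = []
for _ in range(n_sims):
    a = rng.normal(0.0, 1.0, n)            # control group
    b = rng.normal(true_effect, 1.0, n)    # treatment group
    diff = b.mean() - a.mean()
    se = np.sqrt(a.var(ddof=1) / n + b.var(ddof=1) / n)
    if abs(diff / se) > 1.96:              # "statistically significant"
        sig_estimates.append(diff)

power = len(sig_estimates) / n_sims
exaggeration = np.mean(np.abs(sig_estimates)) / true_effect
print(f"power: {power:.2f}")
print(f"exaggeration factor among significant results: ~{exaggeration:.1f}x")
```

With these settings the significance filter keeps only estimates several times larger than the true effect, which is exactly why a "significant" result from a small study predicts poor replication.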

What is “weight of evidence” in bureaucratese?

Martha Smith writes: