Archive of posts filed under the Miscellaneous Science category.

“I agree entirely that the way to go is to build some model of attitudes and how they’re affected by recent weather and to fit such a model to ‘thick’ data—rather than to zip in and try to grab statistically significant stylized facts about people’s cognitive illusions in this area.”

Angus Reynolds sent me a long email. I’ll share it in a moment but first here’s my reply: I don’t have much to say here, except that: 1. It’s nearly a year later but Christmas is coming again so here’s my post. 2. Yes, the effects of local weather on climate change attitudes do seem […]

When do we want evidence-based change? Not “after peer review”

Jonathan Falk sent me the above image in an email with subject line, “If this isn’t the picture for some future blog entry I’ll never forgive you.” This was a credible threat so here’s the post. But I don’t agree with that placard at all! Waiting for peer review is a bad idea for two […]

Please contribute to this list of the top 10 do’s and don’ts for doing better science

Demis Glasford does research in social psychology and asks: I was wondering if you had ever considered publishing a top ten ‘do’s/don’ts’ for those of us that are committed to doing better science, but don’t necessarily have the time to devote to all of these issues [of statistics and research methods]. Obviously, there is a […]

“Why bioRxiv can’t be the Central Service”

I followed this link to Jordan Anaya’s page and there to this post on biology preprint servers. Anyway, as a fan of preprint servers I appreciate Anaya’s point-by-point discussion of why one particular server, bioRxiv (which I’d never heard of before but I guess is popular in biology), can’t do what some people want it […]

BREAKING . . . . . . . PNAS updates its slogan!

I’m so happy about this, no joke. Here’s the story. For a while I’ve been getting annoyed by the junk science papers (for example, here, here, and here) that have been published by the Proceedings of the National Academy of Sciences under the editorship of Susan T. Fiske. I’ve taken to calling it PPNAS (“Prestigious proceedings […]

“Do statistical methods have an expiration date?” My talk at the University of Texas this Friday 2pm

Fri 6 Oct at the Seay Auditorium (room SEA 4.244): Do statistical methods have an expiration date? Andrew Gelman, Department of Statistics and Department of Political Science, Columbia University There is a statistical crisis in science, particularly in psychology where many celebrated findings have failed to replicate, and where careful analysis has revealed that many […]

Apply for the Earth Institute Postdoc at Columbia and work with us!

The Earth Institute at Columbia brings in several postdocs each year—it’s a two-year gig—and some of them have been statisticians (recently, Kenny Shirley, Leontine Alkema, Shira Mitchell, and Milad Kharratzadeh). We’re particularly interested in statisticians who have research interests in development and public health. It’s fine—not just fine, but ideal—if you are interested in statistical […]

Contribute to this pubpeer discussion!

Alex Gamma writes: I’d love to get feedback from you and/or the commenters on a behavioral economics/social neuroscience study from my university (Zürich). This would fit perfectly with yesterday’s “how to evaluate a paper” post. In fact, let’s have a little journal club, one with a twist! The twist is that […]

I am (somewhat) in agreement with Fritz Strack regarding replications

Fritz Strack read the recent paper of McShane, Gal, Robert, Tackett, and myself and pointed out that our message—abandon statistical significance, consider null hypothesis testing as just one among many pieces of evidence, recognize that all null hypotheses are false (at least in the fields where Strack and I do our research) and don’t use […]

Using black-box machine learning predictions as inputs to a Bayesian analysis

Following up on this discussion [Designing an animal-like brain: black-box “deep learning algorithms” to solve problems, with an (approximately) Bayesian “consciousness” or “executive functioning organ” that attempts to make sense of all these inferences], Mike Betancourt writes: I’m not sure AI (or machine learning) + Bayesian wrapper would address the points raised in the paper. […]

Type M errors in the wild—really the wild!

Jeremy Fox points me to this article, “Underappreciated problems of low replication in ecological field studies,” by Nathan Lemoine, Ava Hoffman, Andrew Felton, Lauren Baur, Francis Chaves, Jesse Gray, Qiang Yu, and Melinda Smith, who write: The cost and difficulty of manipulative field studies makes low statistical power a pervasive issue throughout most ecological subdisciplines. […]
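The Type M (magnitude) error idea behind that paper can be illustrated with a quick simulation. This is a minimal sketch with made-up numbers (a true effect of 0.2 and a standard error of 0.5, chosen to represent a low-powered study; neither value comes from the Lemoine et al. article): among the estimates that happen to reach statistical significance, the average magnitude greatly overstates the true effect.

```python
import numpy as np

rng = np.random.default_rng(42)

true_effect = 0.2   # assumed small true effect (illustrative)
se = 0.5            # standard error typical of a low-powered study (illustrative)
n_sims = 100_000

# Simulate many replications of the same noisy study
estimates = rng.normal(true_effect, se, n_sims)
significant = np.abs(estimates / se) > 1.96

# Type M error: among the "significant" results, how exaggerated
# is the estimated effect magnitude on average?
power = significant.mean()
exaggeration = np.mean(np.abs(estimates[significant])) / true_effect

print(f"power: {power:.2f}")
print(f"exaggeration ratio: {exaggeration:.1f}")
```

With these numbers, power is well under 10% and the significant estimates overstate the true effect severalfold, which is the pattern the paper argues is pervasive in low-replication ecological field studies.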

“How conditioning on post-treatment variables can ruin your experiment and what to do about it”

Brendan Nyhan writes: Thought this might be of interest – new paper with Jacob Montgomery and Michelle Torres, How conditioning on post-treatment variables can ruin your experiment and what to do about it. The post-treatment bias from dropout on Turk you just posted about is actually in my opinion a less severe problem than inadvertent […]
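The core problem in the paper's title can be seen in a toy simulation (my own setup, not one from the Montgomery, Nyhan, and Torres paper): treatment is randomized, but once you condition on a variable measured after treatment, such as keeping only respondents who didn't drop out, the treated and control groups are no longer comparable and the estimate is biased.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Hypothetical data-generating process (illustrative coefficients):
treat = rng.binomial(1, 0.5, n)                  # randomized treatment
mediator = 0.8 * treat + rng.normal(0, 1, n)     # post-treatment variable
outcome = treat + mediator + rng.normal(0, 1, n) # total effect = 1.0 + 0.8 = 1.8

# Simple difference in means recovers the total effect:
total = outcome[treat == 1].mean() - outcome[treat == 0].mean()

# Conditioning on the post-treatment variable (e.g. keeping only
# units with mediator > 0) breaks the randomization:
keep = mediator > 0
biased = (outcome[(treat == 1) & keep].mean()
          - outcome[(treat == 0) & keep].mean())

print(f"unconditional estimate: {total:.2f}")
print(f"after conditioning:     {biased:.2f}")
```

In this setup the unconditional comparison lands near the true total effect of 1.8, while the post-treatment-conditioned comparison is pulled well away from it, even though treatment was perfectly randomized.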

Bird fight! (Kroodsma vs. Podos)

Donald Kroodsma writes: Birdsong biologists interested in sexual selection and honest signalling have repeatedly reported confirmation, over more than a decade, of the biological significance of a scatterplot between trill rate and frequency bandwidth. This ‘performance hypothesis’ proposes that the closer a song plots to an upper bound on the graph, the more difficult the […]

Wolfram on Golomb

I was checking out Stephen Wolfram’s blog and found this excellent obituary of Solomon Golomb, the mathematician who invented the maximum-length linear-feedback shift register sequence, characterized by Wolfram as “probably the single most-used mathematical algorithm idea in history.” But Golomb is probably more famous for inventing polyominoes. The whole thing’s a good read, and it […]

Reproducing biological research is harder than you’d think

Mark Tuttle points us to this news article by Monya Baker and Elie Dolgin, which goes as follows: “Cancer reproducibility project releases first results: An open-science effort to replicate dozens of cancer-biology studies is off to a confusing start.” Purists will tell you that science is about what scientists don’t know, which is true but […]

Iceland education gene trend kangaroo

Someone who works in genetics writes: You may have seen the recent study in PNAS about genetic prediction of educational attainment in Iceland. The authors report in a very concerned fashion that every generation the attainment of education as predicted from genetics decreases by 0.1 standard deviations. This sounds bad. But consider that the University […]

Recently in the sister blog

This research is 60 years in the making: How “you” makes meaning “You” is one of the most common words in the English language. Although it typically refers to the person addressed (“How are you?”), “you” is also used to make timeless statements about people in general (“You win some, you lose some.”). Here, we […]

How to design future studies of systemic exercise intolerance disease (chronic fatigue syndrome)?

Someone named Ramsey writes on behalf of a self-managed support community of 100+ systemic exercise intolerance disease (SEID) patients. He read my recent article on the topic and had a question regarding the following excerpt: For conditions like S.E.I.D., then, the better approach may be to gather data from people suffering “in the wild,” combining […]

They want help designing a crowdsourcing data analysis project

Michael Feldman writes: My collaborators and I are doing research where we try to understand the reasons for the variability in data analysis (“the garden of forking paths”). Our goal is to understand the reasons why scientists make different decisions regarding their analyses and in doing so reach different results. In a project called “Crowdsourcing […]

“The Null Hypothesis Screening Fallacy”?

[non-cat picture] Rick Gerkin writes: A few months ago you posted your list of blog posts in draft stage and I noticed that “Humans Can Discriminate More than 1 Trillion Olfactory Stimuli. Not.” was still on that list. It was about some concerns I had about a paper in Science (http://science.sciencemag.org/content/343/6177/1370). After talking it through […]