As an applied statistician, I don’t do a lot of heavy math. I did prove a true theorem once (with the help of some collaborators), but that was nearly twenty years ago. Most of the time I walk along pretty familiar paths, just hoping that other people will do the mathematical work necessary for me […]


## Just Filling in the Bubbles

Collin Hitt writes: I study wrong answers, per your blog post today. My research focuses mostly on surveys of schoolchildren. I study the kids who appear to be just filling in the bubbles, who by accident actually reveal something of use for education researchers. Here’s his most recent paper, “Just Filling in the Bubbles: Using […]

## Asking the question is the most important step

In statistics, the glamour often comes to those who perform a challenging data analysis that extracts signal from noise, as in Aki Vehtari’s decomposition of the famous birthday data which led to the stunning graphs on the cover of BDA3. But, from a social-science point of view, the biggest credit has to go to whoever […]

## 3 postdoc opportunities you can’t miss—here in our group at Columbia! Apply NOW, don’t miss out!

Hey, just once, the Buzzfeed-style hype is appropriate. We have 3 amazing postdoc opportunities here, and you need to apply NOW. Here’s the deal: we’re working on some amazing projects. You know about Stan and associated exciting projects in computational statistics. There’s the virtual database query, which is the way I like to describe our […]

## Ta-Nehisi Coates, David Brooks, and the “Street Code” of Journalism

In my latest Daily Beast column, I decide to be charitable to the factually-challenged NYT columnist: From our perspective, Brooks’s refusal to admit error makes him look like a buffoon. But maybe we’re just judging him based on the norms of another culture. . . . From our perspective, Brooks spreading anti-Semitic false statistics in […]

## Click here to get FREE tix to my webinar with Brad Efron this Wednesday!

The Royal Statistical Society (U.K.) has organized a discussion of a new paper, Frequentist accuracy of Bayesian estimates, by Brad Efron. The discussion will be an online event (a “webinar”) on 21 Oct 2015 (that’s right, “Back to the Future Day”) at 11am eastern time (4pm in the U.K.). Brad will present, I’ll ask […]

## Hierarchical logistic regression in Stan: The untold story

Corey Yanofsky pointed me to a paper by Neal Beck, Estimating grouped data models with a binary dependent variable and fixed effects: What are the issues?, which begins: This article deals with a very simple issue: if we have grouped data with a binary dependent variable and want to include fixed effects (group specific intercepts) […]
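The setup Beck describes, grouped binary data with group-specific intercepts, can be sketched as a plain maximum-likelihood logistic regression with one dummy column per group. This is only an illustration of the model structure, not the Stan implementation the post discusses; the simulated data, parameter values, and the Newton/IRLS fitting loop are all assumptions for the sake of a runnable example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated grouped data: 3 groups, each with its own intercept
# (the "fixed effects"), plus one shared slope. All values are
# made up for illustration.
n_groups, n_per = 3, 500
alpha_true = np.array([-1.0, 0.0, 1.0])  # group-specific intercepts
beta_true = 0.8                          # shared slope

g = np.repeat(np.arange(n_groups), n_per)
x = rng.normal(size=n_groups * n_per)
logit = alpha_true[g] + beta_true * x
y = rng.random(g.size) < 1 / (1 + np.exp(-logit))

# Design matrix: one dummy column per group (no global intercept),
# then the shared predictor x.
X = np.column_stack([(g == k).astype(float) for k in range(n_groups)] + [x])

# Fit by Newton's method (IRLS) on the logistic log-likelihood;
# a handful of iterations is enough on well-behaved data like this.
w = np.zeros(X.shape[1])
for _ in range(25):
    p = 1 / (1 + np.exp(-X @ w))          # fitted probabilities
    W = p * (1 - p)                       # IRLS weights
    H = X.T @ (X * W[:, None])            # observed information
    w += np.linalg.solve(H, X.T @ (y - p))

alpha_hat, beta_hat = w[:n_groups], w[-1]
```

With enough observations per group the dummy-variable estimates recover the intercepts; the issues Beck's paper raises bite when groups are small or outcomes are rare within a group.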

## Mindset interventions are a scalable treatment for academic underachievement — or not?

Someone points me to this post by Scott Alexander, criticizing the work of psychology researcher Carol Dweck. Alexander looks carefully at an article, “Mindset Interventions Are A Scalable Treatment For Academic Underachievement,” by David Paunesku, Gregory Walton, Carissa Romero, Eric Smith, David Yeager, and Carol Dweck, and he finds the following: Among ordinary students, the […]

## How to use lasso etc. in political science?

Tom Swartz writes: I am a graduate student at Oxford with a background in economics and on the side am teaching myself more statistics and machine learning. I’ve been following your blog for some time and recently came across this post on lasso. In particular, the more I read about the machine learning community, the […]

## Low-power pose

“The samples were collected in privacy, using passive drool procedures, and frozen immediately.” Anna Dreber sends along a paper, “Assessing the Robustness of Power Posing: No Effect on Hormones and Risk Tolerance in a Large Sample of Men and Women,” which she published in Psychological Science with coauthors Eva Ranehill, Magnus Johannesson, Susanne Leiberg, Sunhae […]

## What was the worst statistical communication experience you’ve ever had?

In one of the jitts for our statistical communication class we asked, “What was the worst statistical communication experience you’ve ever had?” And here were the responses (which I’m sharing with permission from the students): Not sure if this counts, but I used to work with a public health researcher who published a journal article […]

## “I do not agree with the view that being convinced an effect is real relieves a researcher from statistically testing it.”

Florian Wickelmaier writes: I’m writing to tell you about my experiences with another instance of “the difference between significant and not significant.” In a lab course, I came across a paper by Costa et al. [Cognition 130 (2) (2014) 236-254, http://dx.doi.org/10.1016/j.cognition.2013.11.010]. In several experiments, they compare the effects in two two-by-two tables by comparing the […]

## “The frequentist case against the significance test”

Richard Morey writes: I suspect that like me, many people didn’t get a whole lot of detail about Neyman’s objections to the significance test in their statistical education besides “Neyman thought power is important”. Given the recent debate about significance testing, I have gone back to Neyman’s papers and tried to summarize, for the modern […]

## Unreplicable

Leonid Schneider writes: I am a cell biologist turned science journalist after 13 years in academia. Despite my many years of experience as a scientist, I shamefully admit to being largely incompetent in statistics. My request to you is as follows: A soon-to-be-published psychology study set out to reproduce 100 randomly picked earlier studies and […]

## Yes.

Reflecting on the recent psychology replication study (see also here), journalist Megan McArdle writes an excellent column on why we fall for bogus research: The problem is not individual research papers, or even the field of psychology. It’s the way that academic culture filters papers, and the way that the larger society gets its results. […]

## P-values and statistical practice

What is a p-value in practice? The p-value is a measure of discrepancy of the fit of a model or “null hypothesis” H to data y. In theory the p-value is a continuous measure of evidence, but in practice it is typically trichotomized approximately into strong evidence, weak evidence, and no evidence (these can also […]
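The definition above, a p-value as a measure of discrepancy between a null model H and data y, can be sketched by simulation: generate replicate data under H and ask how often the replicates are at least as discrepant as what was observed. The toy data, the choice of |sample mean| as the discrepancy measure, and the 0.01/0.05 cutoffs for the trichotomy are all assumptions for illustration, not from the post.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy observed data y; the null hypothesis H says y ~ Normal(0, 1).
y = rng.normal(loc=0.4, scale=1.0, size=50)
t_obs = abs(y.mean())  # discrepancy measure: |sample mean|

# Simulation-based p-value: the fraction of datasets generated
# under H that are at least as discrepant as the observed data.
t_null = np.abs(rng.normal(0.0, 1.0, size=(10_000, 50)).mean(axis=1))
p_value = (t_null >= t_obs).mean()

# The conventional trichotomy the post describes (thresholds here
# are the usual 0.01/0.05 conventions, assumed for illustration):
if p_value < 0.01:
    evidence = "strong"
elif p_value < 0.05:
    evidence = "weak"
else:
    evidence = "no"
```

The point of the post is precisely that this continuous measure gets collapsed into three coarse bins in practice, so small changes in p near a threshold flip the verbal label even though the evidence barely changes.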

## To understand the replication crisis, imagine a world in which everything was published.

John Snow points me to this post by psychology researcher Lisa Feldman Barrett who reacted to the recent news on the non-replication of many psychology studies with a contrarian, upbeat take, entitled “Psychology Is Not in Crisis.” Here’s Barrett: An initiative called the Reproducibility Project at the University of Virginia recently reran 100 psychology experiments […]