Rick Desper writes: I face some tough career choices. I have a background in mathematical modeling (got my Ph.D. in math from Rutgers back in the late ’90s) and spent several years working in the field of bioinformatics/computational biology (its name varies from place to place). I’ve worked on problems in modeling cancer progression and […]

**Miscellaneous Statistics** category.

## *Stan Case Studies* launches

There’s a new section of the Stan web site, with case studies meant to illustrate statistical methodologies, classes of models, application areas, statistical computation, and Stan programming. The first ten or so are up, including a grab bag of education models from Daniel Furr at U.C. Berkeley: Hierarchical Two-Parameter Logistic Item Response […]

## Swimsuit special: “A pure Bayesian or pure non-Bayesian is not forever doomed to use out-of-date methods, but at any given time the purist will be missing some of the most effective current techniques.”

Joshua Vogelstein points me to this paper by Gerd Gigerenzer and Julian Marewski, who write: The idol of a universal method for scientific inference has been worshipped since the “inference revolution” of the 1950s. Because no such method has ever been found, surrogates have been created, most notably the quest for significant p values. This […]

## Statistics is like basketball, or knitting

I had a recent exchange with a news reporter regarding one of those silly psychology studies. I took a look at the article in question—this time it wasn’t published in Psychological Science or PPNAS so it didn’t get saturation publicity—and indeed it was bad, laughably bad. They didn’t just have the garden of forking paths, […]

## Good advice can do you bad

Here are some examples of good, solid, reasonable statistical advice which can lead people astray. Example 1 Good advice: Statistical significance is not the same as practical significance. How it can mislead: People get the impression that a statistically significant result is more impressive if it’s larger in magnitude. Why it’s misleading: See this classic […]
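The "how it can mislead" point in Example 1 has a well-known mechanism behind it: when a study is noisy, the estimates that clear the significance threshold systematically overstate the true effect (the type M, or magnitude, error). A small simulation (my own illustrative sketch, with made-up numbers, not from the linked post) shows it:

```python
import random

random.seed(1)

true_effect = 0.2   # small true effect
se = 0.5            # noisy study: standard error much larger than the effect
sims = 100_000

significant = []
for _ in range(sims):
    estimate = random.gauss(true_effect, se)
    if abs(estimate) > 1.96 * se:       # "statistically significant" at the 5% level
        significant.append(estimate)

# Among the significant results, the average magnitude is several times the
# true effect, so a bigger significant estimate mostly reflects more noise.
exaggeration = (sum(abs(e) for e in significant) / len(significant)) / true_effect
print(exaggeration)
```

With these (arbitrary) settings the significant estimates average roughly five to six times the true effect, which is why a larger significant result is not more impressive.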

## Le Menu Dit : a translation app

This post is by Phil Price. “Le Menu Dit” is an iPhone app that some friends and I wrote, which translates restaurant menus from French into English. (The name is French for “The Menu Says.”) The friends are Nathan Addy and another excellent programmer who would like to remain nameless for now. Here’s how the […]

## Bruised and battered, I couldn’t tell what I felt. I was ungeneralizable to myself.

One more rep. The new thing you just have to read, if you’re following the recent back-and-forth on replication in psychology, is this post at Retraction Watch in which Nosek et al. respond to criticisms from Gilbert et al. regarding the famous replication project. Gilbert et al. claimed that many of the replications in the […]

## No, this post is not 30 days early: *Psychological Science* backs away from null hypothesis significance testing

A few people pointed me to this editorial by D. Stephen Lindsay, the new editor of Psychological Science, a journal that in recent years has been notorious for publishing (and, even more notoriously, promoting) click-bait unreplicable dead-on-arrival noise-mining tea-leaf-reading research papers. It was getting so bad for a while that they’d be publishing multiple such studies […]

## Fundamental difficulty of inference for a ratio when the denominator could be positive or negative

I happened to come across this post from 2011, which in turn is based on thoughts of mine from about 1993. It’s important and most of you probably haven’t seen it, so here it is again: Ratio estimates are common in statistics. In survey sampling, the ratio estimate is when you use y/x to estimate […]
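The difficulty is easy to see in a simulation (a sketch of my own, with made-up numbers, not taken from the post): when the denominator's sampling distribution puts appreciable mass near zero, the ratio y/x blows up and its spread becomes essentially meaningless.

```python
import random
import statistics

random.seed(2)

def ratio_draws(mu_x, n=10_000):
    # Draws of y/x where y ~ N(1, 1) and x ~ N(mu_x, 1)
    return [random.gauss(1, 1) / random.gauss(mu_x, 1) for _ in range(n)]

safe = ratio_draws(mu_x=10)    # denominator safely away from zero
risky = ratio_draws(mu_x=0.5)  # denominator can be positive or negative

# Occasional tiny |x| values in the second case produce enormous ratios,
# so the spread explodes even though each draw "looks like" a ratio estimate.
print(statistics.stdev(safe), statistics.stdev(risky))
```

The first standard deviation is modest; the second is dominated by a few near-zero denominators, which is the heart of the inferential problem the post discusses.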

## In general, hypothesis testing is overrated and hypothesis generation is underrated, so it’s fine for these data to be collected with exploration in mind.

In preparation for writing this news article, Kelly Servick asked me what I thought about the Kavli HUMAN Project (see here and here). Here’s what I wrote: The general idea of gathering comprehensive data seems reasonable to me. I’ve often made the point that careful data collection and measurement are important. Data analysis is the […]

## Forking paths vs. six quick regression tips

Bill Harris writes: I know you’re on a blog delay, but I’d like to vote to raise the odds that my question in a comment to http://andrewgelman.com/2015/09/15/even-though-its-published-in-a-top-psychology-journal-she-still-doesnt-believe-it/ gets discussed, in case it’s not in your queue. It’s likely just my simple misunderstanding, but I’ve sensed two bits of contradictory advice in your writing: fit one complete model all at […]

## What’s the difference between randomness and uncertainty?

Julia Galef mentioned “meta-uncertainty,” and how to characterize the difference between a 50% credence about a coin flip coming up heads, vs. a 50% credence about something like advanced AI being invented this century. I wrote: Yes, I’ve written about this probability thing. The way to distinguish these two scenarios is to embed each of […]
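One standard way to make the distinction concrete (a sketch of the usual move, not necessarily the exact formulation in the linked post) is to represent each 50% credence as a distribution over the underlying probability and watch how evidence moves it:

```python
# Both credences start at 50%, but the coin's 50% is a tight distribution around
# 0.5 while the "advanced AI this century" 50% spreads over many possible
# underlying chances. Treating evidence as Beta-Bernoulli updates (an
# illustrative device, not a claim about how to model AI forecasts) shows that
# the same data barely moves the first credence and sharply moves the second.

def posterior_mean(alpha, beta, successes, failures):
    # Beta(alpha, beta) prior updated with Bernoulli observations
    return (alpha + successes) / (alpha + beta + successes + failures)

coin = (500, 500)  # strong prior concentrated near 0.5
ai = (1, 1)        # flat prior: mean 0.5, but highly uncertain

# The same data for both: 8 "successes" in 10 trials
print(posterior_mean(*coin, 8, 2))  # stays very close to 0.5
print(posterior_mean(*ai, 8, 2))    # jumps to 0.75
```

The two credences agree before the data and diverge after, which is exactly the "meta-uncertainty" being asked about: the difference lives in the distribution over the probability, not in the 50% itself.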

## The Notorious N.H.S.T. presents: Mo P-values Mo Problems

Alain Content writes: I am a psycholinguist who teaches statistics (and also sometimes publishes in Psych Sci). I am writing because as I am preparing for some future lessons, I fall back on a very basic question which has been worrying me for some time, related to the reasoning underlying NHST [null hypothesis significance testing]. […]

## The time-reversal heuristic—a new way to think about a published finding that is followed up by a large, preregistered replication (in context of Amy Cuddy’s claims about power pose)

[Note to busy readers: If you’re sick of power pose, there’s still something of general interest in this post; scroll down to the section on the time-reversal heuristic. I really like that idea.] Someone pointed me to this discussion on Facebook in which Amy Cuddy expresses displeasure with my recent criticism (with Kaiser Fung) of […]

## 2 new reasons not to trust published p-values: You won’t believe what this rogue economist has to say.

Political scientist Anselm Rink points me to this paper by economist Alwyn Young which is entitled, “Channelling Fisher: Randomization Tests and the Statistical Insignificance of Seemingly Significant Experimental Results,” and begins, I [Young] follow R.A. Fisher’s The Design of Experiments, using randomization statistical inference to test the null hypothesis of no treatment effect in a […]
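The mechanics of Fisher-style randomization inference are simple to sketch (a generic illustration with made-up data, not Young's code): recompute the treatment-control difference under random relabelings of the units, and see what fraction is as extreme as the observed difference.

```python
import random

random.seed(3)

# Made-up outcomes for a tiny two-arm experiment
treated = [5.1, 6.0, 4.8, 7.2, 5.9]
control = [4.2, 5.0, 4.9, 4.4, 5.3]

observed = sum(treated) / len(treated) - sum(control) / len(control)

pooled = treated + control
n_t = len(treated)
reps = 20_000
count = 0
for _ in range(reps):
    random.shuffle(pooled)  # randomly relabel which units are "treated"
    diff = sum(pooled[:n_t]) / n_t - sum(pooled[n_t:]) / (len(pooled) - n_t)
    if abs(diff) >= abs(observed):  # two-sided: at least as extreme as observed
        count += 1

# Randomization p-value: share of relabelings as extreme as the actual data
p_value = count / reps
print(p_value)
```

Under the sharp null of no treatment effect for any unit, the relabeling distribution is exact (up to Monte Carlo error), which is the sense in which Young's randomization tests need no distributional assumptions.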

## Paxil: What went wrong?

Dale Lehman points us to this news article by Paul Basken on a study by Joanna Le Noury, John Nardo, David Healy, Jon Jureidini, Melissa Raven, Catalin Tufanaru, and Elia Abi-Jaoude that investigated what went wrong in the notorious study by Martin Keller et al. of the GlaxoSmithKline drug Paxil. Lots of ethical issues here, […]

## Read this to change your entire perspective on statistics: Why inversion of hypothesis tests is not a general procedure for creating uncertainty intervals

Dave Choi writes: A reviewer has pointed me something that you wrote in your blog on inverting test statistics. Specifically, the reviewer is interested in what can happen if the test can reject the entire assumed family of models, and has asked me to consider discussing whether it applies to a paper that I am […]