Archive of posts filed under the Miscellaneous Statistics category.

How has my advice to psychology researchers changed since 2013?

Four years ago, in a post entitled, “How can statisticians help psychologists do their research better?”, I gave the following recommendations to researchers: – Analyze all your data. – Present all your comparisons. – Make your data public. And, for journal editors, I wrote, “if a paper is nothing special, you don’t have to publish […]

Static sensitivity analysis

After this discussion, I pointed Ryan Giordano, Tamara Broderick, and Michael Jordan to Figure 4 of this paper with Bois and Jiang as an example of “static sensitivity analysis.” I’ve never really followed up on this idea but I think it could be useful for many problems. Giordano replied: Here’s a copy of Basu’s robustness […]

This company wants to hire people who can program in R or Python and do statistical modeling in Stan

Doug Puett writes: I am a 2012 QMSS [Columbia University Quantitative Methods in Social Sciences] grad who is currently trying to build a Data Science/Quantitative UX team, and was hoping for some advice. I am finding myself having a hard time finding people who are really interested in understanding people and who especially are excited […]

Some natural solutions to the p-value communication problem—and why they won’t work.

John Carlin and I write: It is well known that even experienced scientists routinely misinterpret p-values in all sorts of ways, including confusion of statistical and practical significance, treating non-rejection as acceptance of the null hypothesis, and interpreting the p-value as some sort of replication probability or as the posterior probability that the null hypothesis […]

#NotAll4YearOlds

I think there’s something wrong with this op-ed by developmental psychologist Alison Gopnik, “4-year-olds don’t act like Trump,” which begins, The analogy is pervasive among his critics: Donald Trump is like a child. . . . But the analogy is profoundly wrong, and it’s unfair to children. The scientific developmental research of the past 30 […]

Hotel room aliases of the statisticians

Barry Petchesky writes: Below you’ll find a room list found before Game 1 at the Four Seasons in Houston (right across from the arena), where the Thunder were staying for their first-round series against the Rockets. We didn’t run it then because we didn’t want Rockets fans pulling the fire alarm or making late-night calls […]

Taking Data Journalism Seriously

This is a bit of a followup to our recent review of “Everybody Lies.” While writing the review I searched the blog for mentions of Seth Stephens-Davidowitz, and I came across this post from last year, concerning a claim made by author J. D. Vance that “the middle part of America is more religious than […]

Accounting for variation and uncertainty

[cat picture] Yesterday I gave a list of the questions they’re asking me when I speak at the Journal of Accounting Research Conference. All kidding aside, I think that a conference of accountants is the perfect setting for a discussion of research integrity, as accounting is all about setting up institutions to enable trust. […]

A completely reasonable-sounding statement with which I strongly disagree

From a couple years ago: In the context of a listserv discussion about replication in psychology experiments, someone wrote: The current best estimate of the effect size is somewhere in between the original study and the replication’s reported value. This conciliatory, split-the-difference statement sounds reasonable, and it might well represent good politics in the context […]

7th graders trained to avoid Pizzagate-style data exploration—but is the training too rigid?

[cat picture] Laura Kapitula writes: I wanted to share a cute story that gave me a bit of hope. My daughter who is in 7th grade was doing her science project. She had designed an experiment comparing lemon batteries to potato batteries, a 2×4 design with lemons or potatoes as one factor and number of […]

What hypothesis testing is all about. (Hint: It’s not what you think.)

From 2015: The conventional view: Hyp testing is all about rejection. The idea is that if you reject the null hyp at the 5% level, you have a win, you have learned that a certain null model is false and science has progressed, either in the glamorous “scientific revolution” sense that you’ve rejected a central […]

The statistical crisis in science: How is it relevant to clinical neuropsychology?

[cat picture] Hilde Geurts and I write: There is currently increased attention to the statistical (and replication) crisis in science. Biomedicine and social psychology have been at the heart of this crisis, but similar problems are evident in a wide range of fields. We discuss three examples of replication challenges from the field of social […]

The Bolt from the Blue

Lionel Hertzog writes: In the method section of a recent Nature article in my field of research (diversity-ecosystem function) one can read the following: The inclusion of many predictors in statistical models increases the chance of type I error (false positives). To account for this we used a Bernoulli process to detect false discovery rates, […]

“The earth is flat (p > 0.05): Significance thresholds and the crisis of unreplicable research”

Valentin Amrhein, Fränzi Korner-Nievergelt, and Tobias Roth write: The widespread use of ‘statistical significance’ as a license for making a claim of a scientific finding leads to considerable distortion of the scientific process. We review why degrading p-values into ‘significant’ and ‘nonsignificant’ contributes to making studies irreproducible, or to making them seem irreproducible. A major […]

“Data sleaze: Uber and beyond”

Interesting discussion from Kaiser Fung. I don’t have anything to add here; it’s just a good statistics topic. Scroll through Kaiser’s blog for more: “Dispute over analysis of school quality and home prices shows social science is hard”; “My pre-existing United boycott, and some musing on randomness and fairness”; etc.

Using prior knowledge in frequentist tests

Christian Bartels sent along this paper, which he described as an attempt to use informative priors for frequentist test statistics. I replied: I’ve not tried to follow the details but this reminds me of our paper on posterior predictive checks. People think of this as very Bayesian but my original idea when doing this research […]

The next Lancet retraction? [“Subcortical brain volume differences in participants with attention deficit hyperactivity disorder in children and adults”]

[cat picture] Someone who prefers to remain anonymous asks for my thoughts on this post by Michael Corrigan and Robert Whitaker, “Lancet Psychiatry Needs to Retract the ADHD-Enigma Study: Authors’ conclusion that individuals with ADHD have smaller brains is belied by their own data,” which begins: Lancet Psychiatry, a UK-based medical journal, recently published a […]

Teaching Statistics: A Bag of Tricks (second edition)

Hey! Deb Nolan and I finished the second edition of our book, Teaching Statistics: A Bag of Tricks. You can pre-order it here. I love love love this book. As William Goldman would say, it’s the “good parts version”: all the fun stuff without the standard boring examples (counting colors of M&M’s, etc.). Great stuff […]

My proposal for JASA: “Journal” = review reports + editors’ recommendations + links to the original paper and updates + post-publication comments

[cat picture] Whenever they’ve asked me to edit a statistics journal, I say no thank you because I think I can make more of a contribution through this blog. I’ve said no enough times that they’ve stopped asking me. But I’ve had an idea for a while and now I want to do it. I think […]

My talk this Friday in the Machine Learning in Finance workshop

[cat picture] This is kinda weird because I don’t know anything about machine learning in finance. I guess the assumption is that statistical ideas are not domain specific. Anyway, here it is: What can we learn from data? Andrew Gelman, Department of Statistics and Department of Political Science, Columbia University The standard framework for statistical […]