## Is Rigor Contagious?

Much of the theory and practice of statistics and econometrics is characterized by a toxic mixture of rigor and sloppiness. Methods are justified based on seemingly pure principles that can’t survive reality. Examples of these principles include random sampling, unbiased estimation, hypothesis testing, Bayesian inference, and causal identification. Examples of uncomfortable reality […]

**Miscellaneous Statistics** category.

## He wants to know what book to read to learn statistics

Tim Gilmour writes: I’m an early 40s guy in Los Angeles, and I’m sort of sending myself back to school, specifically in statistics — not taking classes, just working through things on my own. Though I haven’t really used math much since undergrad, a number of my personal interests (primarily epistemology) would be much better […]

## Unethical behavior vs. being a bad guy

I happened to come across this article and it reminded me of the general point that it’s possible to behave unethically without being a “bad guy.” The story in question involves some scientists who did some experiments about thirty years ago on the biological effects of low-frequency magnetic fields. They published their results in a […]

## Vine regression?

Jeremy Neufeld writes: I’m an undergraduate student at the University of Maryland and I was recently referred to this paper (Vine Regression, by Roger Cooke, Harry Joe, and Bo Chang, along with an accompanying summary blog post by the main author) as potentially useful in policy analysis. With the big claims it makes, I am not […]

## Measurement error and the replication crisis

Alison McCook from Retraction Watch interviewed Eric Loken and me regarding our recent article, “Measurement error and the replication crisis.” We talked about why traditional statistics are often counterproductive to research in the human sciences. Here’s the interview: Retraction Watch: Your article focuses on the “noise” that’s present in research studies. What is “noise” and […]

## Theoretical statistics is the theory of applied statistics: how to think about what we do (My talk at the University of Michigan this Friday 3pm)

Theoretical statistics is the theory of applied statistics: how to think about what we do

Andrew Gelman, Department of Statistics and Department of Political Science, Columbia University

Working scientists and engineers commonly feel that philosophy is a waste of time. But theoretical and philosophical principles can guide practice, so it makes sense for us to […]

## The “What does not kill my statistical significance makes it stronger” fallacy

As anyone who’s designed a study and gathered data can tell you, getting statistical significance is difficult. Lots of our best ideas don’t pan out, and even if a hypothesis seems to be supported by the data, the magic “p less than .05” can be elusive. And we also know that noisy data and small […]
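The fallacy is easy to see in a quick simulation: when the true effect is small relative to the noise, the only estimates that clear the p < .05 bar are the ones that came out large by chance, so the significance filter itself guarantees exaggeration. A minimal sketch (the true effect size and standard error below are hypothetical, chosen only to illustrate the mechanism):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: a small true effect measured with lots of noise.
true_effect = 0.1   # assumed true effect size
se = 0.5            # standard error of each study's estimate
n_studies = 100_000

# Each study reports an estimate = true effect + sampling noise.
estimates = rng.normal(true_effect, se, size=n_studies)
z = estimates / se
significant = np.abs(z) > 1.96  # the "p < .05" filter

print("share of studies reaching significance:", significant.mean())
print("mean estimate, all studies:            ", estimates.mean())
print("mean |estimate|, significant only:     ", np.abs(estimates[significant]).mean())
```

With these numbers, only estimates exceeding about 0.98 in absolute value can reach significance, so every “significant” result overstates the assumed true effect of 0.1 by roughly a factor of ten; surviving the noise made the published estimates worse, not stronger.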

## Long Shot

Frank Harrell doesn’t like p-values: In my [Frank’s] opinion, null hypothesis testing and p-values have done significant harm to science. The purpose of this note is to catalog the many problems caused by p-values. As readers post new problems in their comments, more will be incorporated into the list, so this is a work in […]

## No guru, no method, no teacher, Just you and I and nature . . . in the garden. Of forking paths.

Here’s a quote: Instead of focusing on theory, the focus is on asking and answering practical research questions. It sounds eminently reasonable, yet in context I think it’s completely wrong. I will explain. But first some background. Junk science and statistics They say that hard cases make bad law. But bad research can make good […]

## How to attack human rights and the U.S. economy at the same time

I received this email from a postdoc in a technical field: As you might have heard, Trump signed an executive order today issuing a 30-day total suspension of visas and other immigration benefits for the citizens of Iran and six other countries. For my wife and me, this means that our visas are suspended; we […]

## Absence of evidence is evidence of alcohol?

Arho Toikka writes: I ran across what I feel is a pretty peculiar use of statistical significance and p-values, and thought I’d send you a message and see if you find it interesting too or if I’m just confused about something: I read a news story about a study that showed that previous studies on […]

## “Statistical heartburn: An attempt to digest four pizza publications from the Cornell Food and Brand Lab”

Tim van der Zee, Jordan Anaya, and Nicholas Brown posted this very detailed criticism of four papers published by food researcher and business school professor Brian Wansink. The papers are all in obscure journals and became notorious only after Wansink blogged about them in the context of some advice he was giving to graduate students. […]

## Historical critiques of psychology research methods

David Lockhart writes: I found these two papers – in, of all places, the presentation that Emil Kirkegaard and John Fuerst are giving in London this weekend, which they claim is preventing them from responding to the can of worms they have opened by publishing a large, non-anonymized database of OKCupid dating profiles. This seems […]

## Looking for rigor in all the wrong places

My talk in the upcoming conference on Inference from Non Probability Samples, 16-17 Mar in Paris: Looking for rigor in all the wrong places What do the following ideas and practices have in common: unbiased estimation, statistical significance, insistence on random sampling, and avoidance of prior information? All have been embraced as ways of enforcing […]

## “Estimating trends in mortality for the bottom quartile, we found little evidence that survival probabilities declined dramatically.”

Last year there was much discussion here and elsewhere about a paper by Anne Case and Angus Deaton, who noticed that death rates for non-Hispanic white Americans aged 45-54 had been roughly flat since 1999, even while the death rates for this age category had been declining steadily in other countries and among nonwhite Americans. […]

## Frank Harrell statistics blog!

Frank Harrell, author of an influential book on regression modeling and currently both a biostatistics professor and a statistician at the Food and Drug Administration, has started a blog. He sums up “some of his personal philosophy of statistics” here: Statistics needs to be fully integrated into research; experimental design is all important Don’t be […]

## Problems with “incremental validity” or more generally in interpreting more than one regression coefficient at a time

Kevin Lewis points us to this interesting paper by Jacob Westfall and Tal Yarkoni entitled, “Statistically Controlling for Confounding Constructs Is Harder than You Think.” Westfall and Yarkoni write: A common goal of statistical analysis in the social sciences is to draw inferences about the relative contributions of different variables to some outcome variable. When […]
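Westfall and Yarkoni’s point can be illustrated with a small simulation: if the confounding construct is measured with error, “controlling” for the measured version removes only part of the confounding, and the other predictor’s coefficient absorbs the rest. A minimal sketch (the data-generating process and all coefficients below are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Hypothetical data-generating process: the outcome y depends only on the
# construct c; the predictor x is correlated with c but has no direct effect.
c = rng.normal(size=n)
x = 0.7 * c + rng.normal(scale=0.7, size=n)
y = 1.0 * c + rng.normal(size=n)

# In practice we observe only a noisy proxy of the confounding construct.
c_obs = c + rng.normal(scale=1.0, size=n)  # reliability of about 0.5

# OLS of y on x while "controlling" for the noisy proxy.
X = np.column_stack([np.ones(n), x, c_obs])
beta = np.linalg.lstsq(X, y, rcond=None)[0]
print("coefficient on x after controlling for the noisy proxy:", beta[1])
```

Here x has no direct effect on y, yet its estimated coefficient comes out around 0.5, because the proxy removes only about half of the confounding; a researcher reading the coefficients at face value would wrongly conclude that x has incremental validity beyond the construct.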