A correspondent writes:

I thought you might enjoy this…

I’m refereeing a paper that basically looks at whether survey responses on a particular topic vary when the question is asked in two different ways. In the main results table they split the sample along several relevant dimensions (education, marital status, religion, etc.). I give them credit for showing all the results, but only one differential is statistically significant at 5%, and of course they focus the interpretation on that one. In my initial report, I asked whether they either had tried or would try correcting for multiple hypothesis testing. I just received their response:

“We agree with the referee, but we do not think it is possible given that we really do not have enough power.”

So they left it as is and don’t discuss the issue at all in the revision!

As is often the case, this is an example where I suspect we’d be better off had p-values never been invented.

“But if we did that we would not find an effect…”

+1

JMR: The Journal of Meaningless Results.

alpha is the expected value of p

I’m curious how this would be different without p-values.

Authors: We investigated each subgroup using Bayesian models with flat priors. One of the subgroups had a 95% posterior interval that didn’t include zero. … insert interpretation …

Referee: Did you try fitting one model incorporating all subgroups with informative priors?

Authors: We agree with the referee, but with our sample size informative priors would likely shrink all estimates toward 0.
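The shrinkage the fictional authors fear is easy to sketch in the simplest normal-normal case (all numbers here are made up): with a N(0, tau²) prior and a noisy estimate with standard error se, the posterior mean is the raw estimate scaled by tau² / (tau² + se²), so underpowered subgroups get pulled hard toward zero.

```python
def posterior_mean(estimate: float, se: float, tau: float) -> float:
    """Posterior mean for a N(0, tau^2) prior and a N(estimate, se^2) likelihood."""
    shrinkage = tau**2 / (tau**2 + se**2)  # fraction of the raw estimate retained
    return shrinkage * estimate


if __name__ == "__main__":
    # Underpowered subgroup: standard error large relative to the prior scale.
    print(posterior_mean(estimate=2.0, se=1.0, tau=0.5))  # 0.4 (heavy shrinkage)
    # Well-powered subgroup: same raw estimate, much smaller standard error.
    print(posterior_mean(estimate=2.0, se=0.2, tau=0.5))  # mild shrinkage
```

The same raw estimate survives almost intact when the data are informative and nearly vanishes when they are not, which is exactly the behavior the dialogue is joking about.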

Referee:

+1

The problems with p-values are not just with p-values.

In my experience (as a reviewer) there is a very substantial chance that the editor will be satisfied with the authors’ response.

The experience that bothered me the most was a paper whose authors noted that the posterior seemed surprising but explained that they had put a lot of effort into checking MCMC convergence.

Me: Why not plot the likelihood?

Authors: When we plotted the likelihood, as suggested, it was clear why the posterior was like that (plot attached). However, we decided not to add it to the paper (or offer to make it available by request).

Editor: That’s fine.
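The referee's check can be sketched generically (a toy normal model with made-up data, not the paper's analysis): evaluate the log-likelihood on a grid of parameter values and see where it peaks. If the likelihood is flat or peaks somewhere other than the posterior summary, the prior is doing the work, and that is worth showing.

```python
import math


def normal_loglik(theta: float, data: list[float], sigma: float = 1.0) -> float:
    """Log-likelihood of a N(theta, sigma^2) model for the observed data."""
    return sum(-0.5 * math.log(2 * math.pi * sigma**2)
               - (x - theta) ** 2 / (2 * sigma**2) for x in data)


if __name__ == "__main__":
    data = [0.8, 1.1, 1.4, 0.9]  # made-up observations
    grid = [i / 100 for i in range(-100, 301)]  # theta from -1.00 to 3.00
    mle = max(grid, key=lambda t: normal_loglik(t, data))
    print(f"likelihood peaks near theta = {mle:.2f}")  # ~ the sample mean
```

Plotting `normal_loglik` over the grid (rather than just printing the peak) is the referee's actual suggestion; the point is that the computation costs a few lines, so there is little excuse for leaving the plot out.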

Why would authors want to leave out a plot that makes it clear what is actually happening in a statistical analysis?

Why would an editor let them get away with it?