Dan Goldstein made this amusing graph:

in discussing this paper.

Posted by Andrew on 13 December 2006, 12:00 am



Taking a difference in the significance of effects as evidence of a difference between the effects is a very common mistake among clinical research trainees – one that, hopefully, is corrected early in their careers.
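The mistake Keith describes can be made concrete with a small numerical sketch (the estimates and standard errors below are illustrative, not from any actual study): one effect clears p < .05 and the other does not, yet a direct test of the difference between the two effects is nowhere near significant.

```python
from math import sqrt
from statistics import NormalDist

def z_and_p(est, se):
    """Two-sided z-test: return the z-score and p-value for est/se."""
    z = est / se
    p = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p

# Hypothetical effect estimates with standard errors (illustrative only)
est_a, se_a = 25.0, 10.0   # study A: z = 2.5, "significant"
est_b, se_b = 10.0, 10.0   # study B: z = 1.0, "not significant"

# The comparison that actually matters: the difference between effects
diff = est_a - est_b
se_diff = sqrt(se_a**2 + se_b**2)  # assumes the two estimates are independent

z_a, p_a = z_and_p(est_a, se_a)
z_b, p_b = z_and_p(est_b, se_b)
z_d, p_d = z_and_p(diff, se_diff)

print(f"A:     z={z_a:.2f}, p={p_a:.3f}")  # below .05
print(f"B:     z={z_b:.2f}, p={p_b:.3f}")  # above .05
print(f"A - B: z={z_d:.2f}, p={p_d:.3f}")  # also above .05
```

Here study A is "significant" and study B is "not significant," but the test of A − B gives p ≈ .29 – so the data provide little evidence that the two effects differ at all.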

The comment in the paper that "we are pretty sure the article would not have been published had the results not been statistically significant" points to an arguably larger inference problem with little hope of practical resolution. (Some have gone so far as to suggest that, if such is the case, such studies should be neither funded nor published.)

Keith

Bob Rosenthal and a colleague made exactly this point empirically about 40 years ago, showing that social scientists (and now lawyers, judges, legislators, etc.) simply don't credit a finding unless there's a ".05" lying around.

Robert Rosenthal & J. Gaito, The Interpretation of Levels of Significance by Psychologists, 55 J. PSYCHOL. 33 (1963); Robert Rosenthal & J. Gaito, Further Evidence for the Cliff Effect in the Interpretation of Levels of Significance, 15 PSYCHOL. REP. 570 (1964).

Depressing.

Thanks for the reference, Jeremy.

I forgot to attach this one with my post – Sterling, T.D. (1959). Publication decisions and their possible effects on inferences drawn from tests of significance – or vice versa. Journal of the American Statistical Association, 54, 30-34.

(The topic was suggested to Sterling by R.A. Fisher.)

Keith

In grad school I thought I'd heard there was a Jacob Cohen quote to the effect of "Surely god loves p=.06 as much as god loves p=.05" – I've done some searching and found nothing…

Not that I believed the person telling me, really, well…maybe…

The quote is from Rosnow & Rosenthal (1989, p. 1277):

"We are not interested in the logic itself, nor will we argue for replacing the .05 alpha with another level of alpha, but at this point in our discussion we only wish to emphasize that dichotomous significance testing has no ontological basis. That is, we want to underscore that, surely, God loves the .06 nearly as much as the .05. Can there be any doubt that God views the strength of evidence for or against the null as a fairly continuous function of the magnitude of p?"

Rosnow, R.L. & Rosenthal, R. (1989). Statistical procedures and the justification of knowledge in psychological science. American Psychologist, 44, 1276-1284.

Thom