[I]t is unacceptably easy to publish “statistically significant” evidence consistent with any hypothesis.
The culprit is a construct we refer to as researcher degrees of freedom. In the course of collecting and analyzing data, researchers have many decisions to make: Should more data be collected? Should some observations be excluded? Which conditions should be combined and which ones compared? Which control variables should be considered? Should specific measures be combined or transformed or both?
It is rare, and sometimes impractical, for researchers to make all these decisions beforehand. Rather, it is common (and accepted practice) for researchers to explore various analytic alternatives, to search for a combination that yields “statistical significance,” and to then report only what “worked.” The problem, of course, is that the likelihood of at least one (of many) analyses producing a falsely positive finding at the 5% level is necessarily greater than 5%.
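The quote's closing claim is easy to verify numerically. A minimal sketch, assuming the analyses are independent tests each run at the 5% level (real researcher degrees of freedom are typically correlated, so the exact numbers differ, but the rate still exceeds 5%):

```python
def familywise_error_rate(k, alpha=0.05):
    """Probability of at least one false positive across k independent
    tests, each with significance level alpha."""
    return 1 - (1 - alpha) ** k

# How quickly "at least one significant result" inflates with k analyses:
for k in (1, 2, 5, 10, 20):
    print(f"{k:2d} analyses -> {familywise_error_rate(k):.1%}")
```

Even five alternative analyses push the chance of a spurious "significant" finding past 22%, and twenty push it past 64%.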
Another excellent link via Yalda Afshar. Other choice quotes: “Everything reported here actually happened” and “Author order is alphabetical, controlling for father’s age (reverse-coded)”.
I [Malecki] would rank author guidelines №s 5 & 6 higher in the order.