Masanao sends this one in, under the heading, “another incident of misunderstood p-value”:
Warren Davies, a positive psychology MSc student at UEL, provides the latest in our ongoing series of guest features for students. Warren has just released a Psychology Study Guide, which covers information on statistics, research methods and study skills for psychology students.
Despite the myriad rules and procedures of science, some research findings are pure flukes. Perhaps you’re testing a new drug, and by chance alone, a large number of people spontaneously get better. The better your study is conducted, the lower the chance that your result was a fluke – but still, there is always a certain probability that it was.
Statistical significance testing gives you an idea of what this probability is.
In science we’re always testing hypotheses. We never conduct a study to ‘see what happens’, because there’s always at least one way to make any useless set of data look important. We take a risk; we put our idea on the line and expose it to potential refutation. Therefore, all statistical tests in psychology test the possibility that the hypothesis is correct, versus the possibility that it isn’t.
I like the BPS Research Digest, but one more item like this and I’ll have to take them off the blogroll. This is ridiculous! I don’t blame Warren Davies; it’s all too common for someone teaching statistics to (a) make a mistake and (b) not realize it. But I do blame the editors of the website for getting a non-expert to emit wrong information. One thing that any research psychologist should know is that statistics is tricky. I hate to see this sort of mistake (saying that statistical significance is a measure of the probability that the null hypothesis is true) being given the official endorsement of the British Psychological Society.
P.S. To any confused readers out there: The p-value is the probability of seeing something as extreme as the data or more so, if the null hypothesis were true. In social science (and I think in psychology as well), the null hypothesis is almost certainly false, false, false, and you don’t need a p-value to tell you this. The p-value tells you the extent to which a certain aspect of your data is consistent with the null hypothesis. A lack of rejection doesn’t tell you that the null hyp is likely true; rather, it tells you that you don’t have enough data to reject the null hyp. For more more more on this, see for example this paper with David Weakliem which was written for a nontechnical audience.
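To make the correct definition concrete, here’s a minimal simulation sketch (my own illustration, not anything from the original post, assuming a simple one-sample test of a zero mean): the p-value is just the fraction of datasets generated *under the null* whose test statistic is at least as extreme as the one you observed. Note that the true mean in the “observed” data is deliberately nonzero, echoing the point that the null is essentially always false; the p-value only measures how consistent the data are with the null, not the probability that the null is true.

```python
import random
import statistics

random.seed(1)

n = 50

# "Observed" data: true mean is 0.1, so the null hypothesis (mean = 0)
# is, strictly speaking, false -- as it almost always is in practice.
data = [random.gauss(0.1, 1.0) for _ in range(n)]
observed_t = statistics.mean(data) / (statistics.stdev(data) / n ** 0.5)

# Simulate the null: draw datasets with true mean exactly 0 and count
# how often the statistic is at least as extreme as the observed one.
n_sims = 10_000
extreme = 0
for _ in range(n_sims):
    null_sample = [random.gauss(0.0, 1.0) for _ in range(n)]
    t = statistics.mean(null_sample) / (statistics.stdev(null_sample) / n ** 0.5)
    if abs(t) >= abs(observed_t):
        extreme += 1

# The p-value: Pr(data this extreme or more | null is true).
# It says nothing about Pr(null is true | data).
p_value = extreme / n_sims
print(p_value)
```

A small p-value here reflects sample size as much as effect size: rerun with a larger n and the same tiny true effect of 0.1 produces ever-smaller p-values, which is why a non-rejection mostly signals insufficient data rather than a true null.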
P.P.S. This “zombies” category is really coming in handy, huh?