Going Beyond the Book: Towards Critical Reading in Statistics Teaching

My article with the above title is appearing in the journal Teaching Statistics. Here’s the introduction:

We can improve our teaching of statistical examples from books by collecting further data, reading cited articles and performing further data analysis. This should not come as a surprise, but what might be new is the realization of how close to the surface these research opportunities are: even influential and celebrated books can have examples where more can be learned with a small amount of additional effort.

We discuss three examples that have arisen in our own teaching: an introductory textbook that motivated us to think more carefully about categorical and continuous variables; a book for the lay reader that misreported a study of menstruation and accidents; and a monograph on the foundations of probability that over-interpreted statistically insignificant fluctuations in sex ratios.

And here’s the conclusion:

Individually, these examples are of little importance. After all, one does not go to a statistics textbook to learn about handedness, menstruation, and sex ratios. It is striking, however, that the very first examples I looked at in the Zeisel and von Mises books – the examples with interesting data patterns – collapsed upon further inspection. In the Zeisel example, we went to the secondary source and found that his sketch was not actually a graph of any data, and that he in fact misinterpreted the results of the study. In the von Mises example, we reanalysed the data and found his result to be not statistically significant, thus casting doubt on his already doubtful story about ethnic differences in sex ratios. In the Utts and Heckard example, we were inspired to collect data on handedness and look at survey questions on religious attendance to find underlying continuous structures.

You can do it yourself!

These are examples that I’ve encountered during the past twenty years of teaching. The real message I want to send, though, is that you can do it yourself. Anything you read, you can check: for example, this implausible (and, indeed, false) claim by a public health expert that “Consumption [of chicken] in the US has increased . . . a hundredfold between 1934 and 1994.” (It actually increased by a factor of six.)

Textbooks are commonly written in an authoritative style, but that doesn’t mean everything in them is correct. You can learn a lot by going back to the original source of the data, and even running the occasional chi-squared test of your own!
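
For instance, here is a minimal sketch in Python of that kind of check, applied to a made-up comparison of sex ratios in two groups (the counts are illustrative, not the actual data from the von Mises example):

```python
# A minimal sketch of re-checking a claimed difference in sex ratios.
# The counts below are illustrative, not the actual von Mises data.
from scipy.stats import chi2_contingency

# Rows: two groups of births; columns: (boys, girls)
births = [[5120, 4880],
          [5060, 4940]]

chi2, p_value, dof, expected = chi2_contingency(births)
print(f"chi-squared = {chi2:.2f}, df = {dof}, p = {p_value:.2f}")
# With these numbers the p-value is large: the apparent difference in
# sex ratios is well within what chance fluctuation alone could produce.
```

A few lines like this are often enough to see whether a book’s striking pattern survives a routine significance check.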

6 thoughts on “Going Beyond the Book: Towards Critical Reading in Statistics Teaching”

  1. Hey Andrew, this is a great point. People should start compiling this information. It would help inform authors (who might not know of changes). It would also be interesting to crowdsource textbooks (like a Wikipedia). I doubt it would ever happen, but it would be cool nonetheless. It seems there has been no real innovation in textbook writing.

  2. Hi Andrew,

    I am one of the readers of your blog, and I have a question about one of your previous posts. I should have posted it below that entry, but I thought you might not follow your older posts.

    My question is about your post: Why we usually don’t worry about multiple hypothesis test

    Please correct me if I am wrong: it seems to me that the argument of that paper is that hierarchical Bayesian modelling reduces false positives by pulling the interval estimates toward each other, but it is not as conservative as a Bonferroni correction, so we can still hope for many more true positives (than with classical Bonferroni)? The argument was also that type S errors (sign errors) and type M errors (magnitude errors) are more important, is that right? That sounds to me like a different question, and I do not exactly follow why hierarchical Bayesian modelling is superior when one is more interested in type S and M errors. Is there any theoretical result about this?

    Also, my understanding is that the proposed method makes the estimation more robust by regularising it with a prior, right? But then one could argue that the prior introduces bias, right? (A small simulation sketch of the pooling idea follows this comment.)

    Thanks,
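
    For readers following this exchange, here is a minimal simulation sketch of the partial-pooling idea the question asks about, assuming a simple normal-normal hierarchical model with made-up parameters (this is not the model or the data from the paper):

    ```python
    # A minimal sketch of partial pooling in a normal-normal hierarchical
    # model; the parameters are made up, not taken from the paper.
    import numpy as np

    rng = np.random.default_rng(0)
    J, tau, sigma = 50_000, 1.0, 1.0  # groups, sd of true effects, noise sd

    theta = rng.normal(0, tau, J)      # true group effects
    y = rng.normal(theta, sigma)       # raw, unpooled estimates

    # Posterior under the normal-normal model: shrink each raw estimate
    # toward the grand mean (0 here).
    shrink = tau**2 / (tau**2 + sigma**2)
    post_mean = shrink * y
    post_sd = np.sqrt(shrink) * sigma

    def type_s_rate(est, se, truth):
        # Among estimates whose 95% interval excludes zero, how often
        # does the estimate have the wrong sign?
        claimed = np.abs(est) > 1.96 * se
        return np.mean(np.sign(est[claimed]) != np.sign(truth[claimed]))

    print("raw type S rate:   ", type_s_rate(y, sigma, theta))
    print("pooled type S rate:", type_s_rate(post_mean, post_sd, theta))
    # Pooling pulls noisy estimates toward zero, so fewer of them clear
    # the significance bar, and those that do are less often sign errors,
    # without the blanket penalty of a Bonferroni correction.
    ```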

  3. You can learn a lot by going back to the original source of the data, and even running the occasional chi-squared test of your own!

    I thought we weren’t supposed to use chi-square tests anymore.

  4. Is that second Zeisel/Dalton example really appropriate as a mixture model? The mixture model is just an artifact of the coding. We’re dealing with a cycle, and the “zero point” is a bit arbitrary.

    Imagine this is the real bimodal data in the article (I don’t have the data, so the numbers are illustrative):

    1-4: 10
    5-8: 4
    9-12: 3
    13-16: 2
    17-20: 5
    21-24: 2
    25-28: 10

    We can just as correctly code this as the following unimodal data (a short recoding sketch follows at the end of this comment):

    -11 to -8: 5
    -7 to -4: 2
    -3 to 0: 10
    1 to 4: 10
    5 to 8: 4
    9 to 12: 3
    13 to 16: 2
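
    Here is the recoding sketch promised above, using the comment’s illustrative counts (not real data):

    ```python
    # A minimal sketch of the recoding above: the same day-of-cycle counts
    # look bimodal or unimodal depending on where the 28-day cycle is cut.
    # The counts are the comment's illustrative numbers, not real data.

    # Bin start day -> count, for 4-day bins of a 28-day cycle (days 1..28)
    counts = {1: 10, 5: 4, 9: 3, 13: 2, 17: 5, 21: 2, 25: 10}

    # Original coding: days 1..28, with peaks in the first and last bins
    print("original:", [(f"{s} to {s + 3}", n) for s, n in sorted(counts.items())])

    # Recode days 17..28 as -11..0, i.e. measure time relative to a zero
    # point inside the cycle instead of an arbitrary day-1 cut.
    recoded = {(s - 28 if s >= 17 else s): n for s, n in counts.items()}
    print("recoded: ", [(f"{s} to {s + 3}", n) for s, n in sorted(recoded.items())])
    # The recoded bins run from -11 to 16 and show a single peak around 0:
    # the "bimodality" was an artifact of where the zero point was placed.
    ```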

  5. You can see some discussions of highly misleading examples in a statistics textbook in my list of errors in the 3rd (1999) edition of Moore & McCabe’s Introduction to the Practice of Statistics.

  6. One of the things I find really annoying about textbooks is when you get book after book re-analysing the same set of ‘classic’ data. I realise that maybe the data are not the point of the book, but would it really stretch the authors to find new data in the literature (which is abundant), or even to generate some of their own?
