I’m speaking at the Electronic Conference on Teaching Statistics on Mon 16 May at 11am.
I’ve given many remote talks but this is the first time I’ve spoken at an all-electronic conference. It will be a challenge. In a live talk, everyone’s just sitting in the room staring at you, but in an electronic conference everyone will be reading their email and surfing the web. So the bar for “replacement level” (as they say in baseball) is a lot higher.
At the very least, I have to be more lively than my own writing, or people will just tune me out and start reading old blog entries.
Here’s my title and abstract:
Changing everything at once: Student-centered learning, computerized practice exercises, evaluation of student progress, and a modern syllabus to create a completely new introductory statistics course
It should be possible to improve the much-despised introductory statistics course in several ways: (1) altering the classroom experience toward active learning, (2) using adaptive software to drill students with questions at their level, repeating until students attain proficiency in key skills, and (3) administering standardized pre-tests and post-tests, both for measuring individual students’ progress and for comparing the effectiveness of different instructors and different teaching strategies. All these ideas are well established in the education literature but do not seem to be part of the usual statistics course. We would like to implement all these changes in the context of (4) a restructuring of the course content, replacing hypothesis testing, p-values, and the notorious “sampling distribution of the sample mean” with ideas closer to what we see as good statistical practice. We will discuss our struggles in this endeavor. This work is joint with Eric Loken.
I’m planning to start as follows:
An important characteristic of a good scientist is the capacity to be upset, to recognize anomalies for what they are, and to track them down and figure out what in our understanding is lacking. This sort of unsettled-ness—an unwillingness to sweep concerns under the rug, a scrupulousness about acknowledging one’s uncertainty—is, I would argue, particularly important for a statistician.
For a teacher, maybe not. Some of the most effective teachers I’ve known do not push and push; rather, they have a clean understanding of the world which they can convey to their students.
What about researchers like myself who also teach? What about textbook writers? We need to walk the line, to present a clear structure for students to learn, while acknowledging the dragons that lurk just outside the borders of our well-mapped territory. And this balance is particularly difficult in statistics, a practice full of approximations and judicious choices that cannot easily be codified.
As I said, I strongly believe that each of you needs to cultivate your capacity for being upset. And, toward this goal, I’d like to begin today’s talk by upsetting as many of you as possible.
I have soooo many different ways to upset you. I’d like to upset you from all directions. I could upset you with a simple example demonstrating the serious, serious failings of textbook Bayesian inference (and, yes, I include our textbook here as one of the failures). I could upset you by reminding you that we preach the virtues of controlled experimentation yet do not follow any such protocol when evaluating any aspects of our own work. Or I could upset you by arguing—convincingly, I think—that many of the worst misconceptions of statistical practitioners arise not from a lack of statistical education but because the field of statistics has conveyed some of its pernicious messages all too well. But today I’ll do my best to upset you in another way . . .