Someone writes in:
I was wondering if you had a chance to see the commentary by the Stockwells on blended learning strategies that was recently published in Cell and received quite a nice write-up from Columbia; it's currently featured on Columbia's webpage.
In fact, I was a student in Prof. Stockwell’s Biochemistry class last year, and a participant in this study, which was why I was so surprised that it ended up in Cell and received the attention that it did.
I was part of the textbook group, for which he assigned over 30 pages of dense textbook reading (which would probably have taken multiple hours to fully digest, and was 2-3 times what he'd assign for a typical class), so I'm sure the video was much more closely tailored to the material he covered in class and ultimately quizzed everyone on. Moreover, in his interview Stockwell claims that he'll "use video lectures and assign them in advance" rather than relying exclusively on a textbook, so it was surprising that in their commentary they write:
We also compared the exam scores of students in the textbook versus video preparation groups but found no statistically significant difference in this relatively modest sample size, despite the trend toward higher scores in the group that received the video assignment.
Perhaps the most reasonable explanation is that no one watched the video or did the textbook reading for a class that wasn't going to be covered on any of the exams? What's even more confusing to me is that they admit the sample sizes of the textbook/video groups were "modest," yet are readily able to draw conclusions about which of the 4 arms provides the most effective model for learning, when each arm had half as many participants as these two larger groups! I'm not sure if that's just confirmation bias, or if the results are truly significant. I'm also not sure if the figure in the paper is mislabeled, since Groups 2 and 3 in panel A are different from what's used in panel D (see above).
Do you have any thoughts on the statistical power of such a study?
I know the above seems a little bit like I have an axe to grind, but it seemed to me like the conclusions of this experiment were quite a reach, especially for such a short study with so few participants, and I was wondering what someone with more expertise in experimental design than I have thought.
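To put a rough number on the power question: here's a back-of-the-envelope sketch of the minimum detectable effect for a two-group comparison. The group size of 40 is purely a hypothetical stand-in, since the paper's actual counts aren't quoted here; the formula is the standard two-sample z-test approximation with equal group sizes and unit variance.

```python
from statistics import NormalDist

def min_detectable_effect(n_per_group, alpha=0.05, power=0.8):
    """Smallest standardized effect a two-sample z-test can detect
    with the given power, assuming equal group sizes and unit variance."""
    z = NormalDist().inv_cdf
    return (z(1 - alpha / 2) + z(power)) * (2 / n_per_group) ** 0.5

# with a hypothetical 40 students per group, only effects of roughly
# 0.6 standard deviations or larger are reliably detectable
print(round(min_detectable_effect(40), 2))  # about 0.63
```

Halving the group size (as when splitting into four arms) pushes the minimum detectable effect up by a factor of sqrt(2), which is why conclusions about the individual arms are even shakier than conclusions about the two pooled groups.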
I had not heard about this study and don’t really have the time to look at it, but I’m posting it here in case any of you have any comments.
As to why Cell chose to publish it: This seems clear enough. Everybody knows that teaching is important and it's hard to get students to learn; we try lots of teaching strategies, but there are not a lot of controlled trials of teaching methods. So when there is such a study, and when it gives positive results with that magic "p less than .05," then, yeah, I'm not surprised it gets published in a top journal.
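One way to see what that magic "p less than .05" is worth in a small study: simulate two-group comparisons at a modest true effect and count how often the test clears the threshold. The numbers below (20 per group, a true effect of 0.2 sd) are illustrative assumptions, not figures from the paper.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def sim_power(n_per_group, effect_sd, alpha=0.05, n_sims=2000):
    """Fraction of simulated two-group comparisons reaching p < alpha,
    with normal outcomes and a true mean difference of effect_sd."""
    hits = 0
    for _ in range(n_sims):
        a = rng.normal(0.0, 1.0, n_per_group)
        b = rng.normal(effect_sd, 1.0, n_per_group)
        if stats.ttest_ind(a, b).pvalue < alpha:
            hits += 1
    return hits / n_sims

# a small true effect with small groups: power is low, so the
# comparisons that do reach p < .05 are mostly the lucky draws
print(sim_power(20, 0.2))
```

With power that low, a statistically significant result in such a design is nearly as likely to be a noise-driven overestimate as a real finding, which is the usual worry about significance-filtered results in small samples.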