Sandro Ambuehl writes:
As an avid reader of your blog, I thought you might like (to hate) the attached PNAS paper with the following findings: (i) sending two flyers about the importance of STEM fields to the parents of 181 kids improves ACT scores by 12 percentile points (intent-to-treat effect… a bit large, perhaps?) and (ii) ACT scores predict college science-course taking. The paper concludes that “these findings demonstrate that a motivational intervention with parents can have … downstream effects on STEM career pursuit 5 y later” (as if correlation were transitive…). To their credit, they do state, buried in the text, that “There were no significant direct intervention effects on post-high-school STEM career pursuit variables”.
You’ll correctly guess the editor.
My quick answer is that I neither love nor hate this paper. I get the impression that the authors are doing their best and are presenting their work clearly, and they just happen to be working within a framework in which effects are overestimated and overgeneralized.
The highlighting at the above link is from Ambuehl, I think. I agree that it's a bit much to think that handing out two brochures and a link to a website would be enough to raise mathematics and science ACT scores by about 12 percentile points. But we know that published point estimates are biased. We can take our Edlin factor and scale that down to, ummm, 3 percentile points? Even 3 percentile points isn't nothing. The mechanism suggested in the paper is that when the parents got the brochures and website link, the kids were more likely to take STEM (science, technology, engineering, and math) courses in high school, and it makes sense that they could learn something in these classes and then do better on the exam.
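To spell out that back-of-the-envelope shrinkage, here's a minimal sketch in a couple of lines of Python; the one-quarter Edlin factor is my own rough choice for illustration, not anything estimated from the paper:

published_effect = 12.0  # reported intent-to-treat effect, in percentile points
edlin_factor = 0.25      # assumed shrinkage for selection and publication bias
plausible_effect = published_effect * edlin_factor
print(plausible_effect)  # 3.0 percentile points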
I agree with Ambuehl that the logic of transitive correlations is in error. In particular, in the concluding sentence of the abstract, “Overall, these findings demonstrate that a motivational intervention with parents can have important effects on STEM preparation in high school, as well as downstream effects on STEM career pursuit 5 y later,” that last phrase does not seem supported by their data.
And, by the way, Ambuehl was right: I did guess the editor! Actually I think this paper is much much much better than the air-rage paper, the himmicanes paper, and the ages-ending-in-9 paper. This paper is just fine, almost. It's hard to get kids to take math and science classes, and for this group of students whose families were already enrolled in a longitudinal study, it turns out that sending brochures to their parents seems to have been an effective intervention. It wouldn't be so hard to mail such brochures to every parent of a high-school kid in the country, and if they all took one more math or science class and one fewer class in, ummm, driver's ed? U.S. history? Spanish? whatever? I guess that would be good, I dunno. I think they're drawing a lot of conclusions from just 181 kids in this study, and for the usual reasons I'm suspicious of a claim such as, "a modest intervention aimed at parents can produce significant changes in their children's academic choices." The problem is that there are so many modest interventions happening all the time, and they can't all have big effects.
Fundamentally I see this kind of thing as “engineering” rather than “science.” I don’t mean “engineering” in a bad way, not at all! These researchers are trying to figure out ways of getting kids to take more STEM classes, and here’s something they tried in this small group, and it seemed to work, so it’s good to share the information. It seems odd to me for this work to have appeared in a psychology journal (Psychological Science) and then a general science journal (PPNAS) rather than in some sort of education policy journal, but that’s just an artifact of our current decentralized system of research communication. I assume the results made their way into the What Works Clearinghouse so the relevant policymakers will know about it.
Overall I have the impression that many of the mistakes we see in statistical inference are created by the framing of research in terms of scientific discovery, in that researchers are pushed to make deterministic and overstated claims. But you can't really say PPNAS did anything particularly wrong in this particular case: they popularized a bit of workaday policy research which seems, at least at first glance, to be a solid piece of work, flawed more in its presentation than its execution. It's a little study, not a big deal. But not everything has to be a big deal.
P.S. Upon reflection, I think I was too generous in my above assessment. Or, to put it another way, no, I don't believe the published estimates. I think they're biased, and I'd expect that if someone were to try a controlled replication, the results would probably be smaller. Perhaps I was giving the paper a soft reception because it was in PPNAS: the soft bigotry of low expectations and all that. Also, just about every published paper in policy analysis uses estimates that are positively biased, so we shouldn't single out this particular article. It really is much better than the PPNAS classics on himmicanes, air rage, etc.