Alex Gamma sends along a recently published article by Carola Salvi, Irene Cristofori, Jordan Grafman, and Mark Beeman, along with the note:
This might be of interest to you, since it’s political science and smells bad.
From The Quarterly Journal of Experimental Psychology: Two groups of 22 college students each, identified as conservatives or liberals based on two 7-point Likert scales, were asked whether they solved a word-association task by insight or by analysis. The authors report a statistically significant group × solving-strategy interaction in an ANOVA (Fig. 2), and the findings are described as “providing novel evidence that political orientation is associated with problem-solving strategy.” This clearly warrants the paper’s title, “The politics of insight.”
I replied: N=44, huh?
To which Gamma wrote:
About the N=44, the authors point out that they matched students pairwise on their scores in the two Likert tasks, but I’m not sure that makes their conclusions more trustworthy. From the paper:
“Our final sample consisted of 22 conservatives who were matched with 22 liberal participants. For example, each participant who scored 7 on the conservatism scale and 1 on the liberalism scale was matched (on age and ethnicity) with another participant who scored 7 on the liberalism scale and 1 on the conservatism scale. Each participant who scored 7 on the conservatism scale and 2 on the liberalism scale was matched (on age and ethnicity) with another participant who scored 7 on the liberalism scale and 2 on the conservatism scale and so on. The final sample of 44 participants was balanced for political orientation and ethnicity.”
I see no reason to believe these results. That is, in a new study I have no particular expectation that these results would replicate. They could replicate—it’s possible—I just don’t find this evidence to be particularly strong.
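To put a number on why N=44 is so thin, here's a back-of-the-envelope power calculation (my own sketch, not anything from the paper), assuming a simple two-sample comparison with 22 per group, a two-sided 5% test, and 80% power:

```python
# Back-of-the-envelope: smallest standardized effect (Cohen's d) that a
# two-sample comparison with n = 22 per group can detect reliably.
# Assumptions (mine, not the paper's): two-sided alpha = 0.05, power = 0.80,
# equal variances, normal approximation.

import math

n_per_group = 22
z_alpha = 1.96   # two-sided 5% critical value
z_power = 0.84   # z for 80% power

# Standard error of a difference in standardized means
se_diff = math.sqrt(1 / n_per_group + 1 / n_per_group)

# Minimum detectable standardized effect
min_detectable_d = (z_alpha + z_power) * se_diff

print(f"SE of difference: {se_diff:.2f}")
print(f"Minimum detectable d: {min_detectable_d:.2f}")
```

This works out to d of roughly 0.85, a "large" effect by the usual conventions. So the design can reliably detect only effects much bigger than anything we'd plausibly expect for a subtle cognitive-style difference between political groups; anything that reaches significance here is likely to be a wildly inflated estimate of a small or null effect.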
From their Results section:
As I typically say when considering this sort of study, I think the researchers would be better off looking at all their results rather than sifting based on statistical significance.
Am I being too hard on these people?
But wait! you say. Isn’t almost every study plagued by forking paths? And I say, sure, that’s why I don’t take a p-value as evidence here. If you have some good evidence, fine. If all you have is a p-value and a connection to a vague theory, then no, I see no reason to take it seriously.
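The forking-paths point is easy to demonstrate by simulation. Here's a minimal sketch under assumptions of my own (a hypothetical setup, not the authors' actual analysis): 22 participants per group, each solving 30 problems, with no true group difference, and a researcher who can compare groups on all problems, on the first half, or on the second half, and report whichever comparison comes out significant:

```python
# Simulation: how a few forking paths inflate false positives under the null.
# Hypothetical setup (NOT the authors' actual analysis): 22 participants per
# group, each solving 30 problems; the outcome is each participant's fraction
# of problems solved "by insight". Both groups are identical, so every
# "significant" result is a false positive.

import math
import random
import statistics

random.seed(1)

N_PER_GROUP = 22   # matches the study's group size
N_PROBLEMS = 30    # hypothetical number of problems per participant
P_INSIGHT = 0.5    # null: everyone reports "insight" half the time

def z_stat(a, b):
    """Two-sample z statistic for a difference in means (unpooled SE)."""
    se = math.sqrt(statistics.variance(a) / len(a) +
                   statistics.variance(b) / len(b))
    return (statistics.mean(a) - statistics.mean(b)) / se

def one_experiment():
    """Return True if ANY of three analysis choices reaches |z| > 1.96."""
    g1 = [[random.random() < P_INSIGHT for _ in range(N_PROBLEMS)]
          for _ in range(N_PER_GROUP)]
    g2 = [[random.random() < P_INSIGHT for _ in range(N_PROBLEMS)]
          for _ in range(N_PER_GROUP)]
    rate = lambda probs: sum(probs) / len(probs)
    analyses = [
        lambda p: rate(p),                    # all problems
        lambda p: rate(p[:N_PROBLEMS // 2]),  # first half only
        lambda p: rate(p[N_PROBLEMS // 2:]),  # second half only
    ]
    for f in analyses:
        a = [f(p) for p in g1]
        b = [f(p) for p in g2]
        if abs(z_stat(a, b)) > 1.96:
            return True
    return False

n_sims = 2000
false_positive_rate = sum(one_experiment() for _ in range(n_sims)) / n_sims
print(f"False-positive rate with three forking paths: {false_positive_rate:.3f}")
```

Even with just three (correlated) analysis choices, the chance of finding at least one "significant" difference runs well above the nominal 5%, and real analyses typically have many more choices than that: which scale to use, which participants to exclude, which interaction to highlight. That's the sense in which a bare p-value from a flexible analysis tells you very little.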
And, just to say this one more time, I’m not recommending a preregistered replication here. If they want to, fine, but it would seem to me to be a waste of time.