Archive of posts filed under the Sociology category.

Where’d the $2500 come from?

Brad Buchsbaum writes: Sometimes I read the New York Times “Well” articles on science and health. It’s a mixed bag: sometimes it’s quite good and sometimes not. I came across this yesterday: What’s the Value of Exercise? $2,500 For people still struggling to make time for exercise, a new study offers a strong incentive: You’ll […]

Criminology corner: Type M error might explain Weisburd’s Paradox

[silly cartoon found by googling *cat burglar*] Torbjørn Skardhamar, Mikko Aaltonen, and I wrote this article to appear in the Journal of Quantitative Criminology: Simple calculations seem to show that larger studies should have higher statistical power, but empirical meta-analyses of published work in criminology have found zero or weak correlations between sample size and […]
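(A quick illustration of the Type M, or magnitude, error idea in play here. This is a minimal simulation sketch, not code from the paper, and the true effect size, standard error, and threshold below are all invented for illustration: when a noisy, underpowered study happens to cross the significance threshold, the reported estimate tends to exaggerate the true effect.)

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative assumptions: a small true effect measured with a lot of noise,
# as in an underpowered study.
true_effect = 0.1    # assumed true effect size
se = 0.25            # assumed standard error of each study's estimate
n_sims = 100_000

# Each simulated study reports a noisy estimate of the true effect.
estimates = rng.normal(true_effect, se, size=n_sims)

# Keep only the estimates that reach "statistical significance" (|z| > 1.96).
significant = estimates[np.abs(estimates) / se > 1.96]

power = significant.size / n_sims
exaggeration = np.mean(np.abs(significant)) / true_effect
print(f"Power: {power:.2f}")
print(f"Exaggeration ratio among significant results: {exaggeration:.1f}x")
```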

“Bombshell” statistical evidence for research misconduct, and what to do about it?

Someone pointed me to this post by Nick Brown discussing a recent article by John Carlisle regarding scientific misconduct. Here’s Brown: [Carlisle] claims that he has found statistical evidence that a surprisingly high proportion of randomised controlled trials (RCTs) contain data patterns that cannot have arisen by chance. . . . the implication is that […]
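(Not Carlisle’s actual procedure, just a minimal sketch of the general idea, with all numbers invented: under genuine randomization, p-values from baseline comparisons between trial arms should look roughly uniform, so a collection of baseline p-values that piles up near 0 or near 1 is the kind of pattern worth checking.)

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Under genuine randomization, p-values from baseline comparisons between
# trial arms should be roughly uniform on [0, 1]. A pile-up near 1
# (baselines "too similar") or near 0 is a red flag worth investigating.
def baseline_pvalues(n_trials=200, n_per_arm=50):
    pvals = []
    for _ in range(n_trials):
        a = rng.normal(0, 1, n_per_arm)  # arm A baseline measurements
        b = rng.normal(0, 1, n_per_arm)  # arm B baseline measurements
        pvals.append(stats.ttest_ind(a, b).pvalue)
    return np.array(pvals)

pvals = baseline_pvalues()
# Kolmogorov-Smirnov test of the p-values against the uniform distribution;
# with honestly randomized data this should not reject.
print(stats.kstest(pvals, "uniform"))
```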

All the things we have to do that we don’t really need to do: The social cost of junk science

I’ve been thinking a lot about junk science lately. Some people have said it’s counterproductive or rude of me to keep talking about the same few examples (actually I think we have about 15 or so examples that come up again and again), so let me just speak generically about the sort of scientific claim […]

The Other Side of the Night

Don Green points us to this quantitative/qualitative meta-analysis he did with Betsy Levy Paluck and Seth Green. The paper begins: This paper evaluates the state of contact hypothesis research from a policy perspective. Building on Pettigrew and Tropp’s (2006) influential meta-analysis, we assemble all intergroup contact studies that feature random assignment and delayed outcome measures, […]

PCI Statistics: A preprint review peer community in statistics

X informs me of a new effort, “Peer community in . . .”, which describes itself as “a free recommendation process of published and unpublished scientific papers.” So far this exists in only one field, Evolutionary Biology. But this looks like a great idea and I expect it will soon exist in statistics, political science, […]

How to think scientifically about scientists’ proposals for fixing science

I wrote this article for a sociology journal: Science is in crisis. Any doubt about this status has surely been dispelled by the loud assurances to the contrary by various authority figures who are deeply invested in the current system and have written things such as, “Psychology is not in crisis, contrary to popular […]

My review of Duncan Watts’s book, “Everything is Obvious (once you know the answer)”

We had some recent discussion of this book in the comments and so I thought I’d point you to my review from a few years ago. Lots to chew on in the book, and in the review.

“P-hacking” and the intention-to-cheat effect

I’m a big fan of the work of Uri Simonsohn and his collaborators, but I don’t like the term “p-hacking” because it can be taken to imply an intention to cheat. The image of p-hacking is of a researcher trying test after test on the data until reaching the magic “p less than .05.” But, […]
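(To see how this can happen without any intent to cheat, here’s a minimal simulation sketch; the sample sizes and number of interim looks are assumptions for illustration: with a true effect of exactly zero, re-running the test after each new batch of data and stopping at the first p less than .05 inflates the false-positive rate well past the nominal 5%.)

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Illustrative assumptions: true effect is exactly zero; we peek at the data
# after every 10 observations, up to a maximum of 100.
n_sims = 5_000
looks = range(10, 101, 10)

false_positives = 0
for _ in range(n_sims):
    data = rng.normal(0.0, 1.0, size=100)  # null is true by construction
    # "Optional stopping": run a one-sample t-test at each interim look
    # and declare success at the first p < .05.
    if any(stats.ttest_1samp(data[:n], 0.0).pvalue < 0.05 for n in looks):
        false_positives += 1

print(f"False-positive rate with optional stopping: {false_positives / n_sims:.3f}")
# A single fixed-n test would hold this near .05; repeated peeking
# inflates it well above that.
```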

“Everybody Lies” by Seth Stephens-Davidowitz

Seth Stephens-Davidowitz sent me his new book on learning from data. As is just about always the case for this sort of book, I’m a natural reviewer but I’m not really the intended audience. That’s why I gave Dan Ariely’s book to Juli Simon Thomas to review; I thought her perspective would be more relevant […]

Honesty and transparency are not enough

[cat picture] From a recent article, Honesty and transparency are not enough: This point . . . is important for two reasons. First, consider the practical consequences for a researcher who eagerly accepts the message of ethical and practical values of sharing and openness, but does not learn about the importance of data quality. He […]

“An anonymous tip”

[cat picture] I and a couple others received the following bizarre email: **’s research is just the tip of the iceberg. If you want to expose more serious flaws, look at research by his co-authors – ** at ** and ** at **. I won’t be checking this disposable e-mail address again. People send me […]

Discussion with Lee Jussim and Simine Vazire on eminence, junk science, and blind reviewing

Lee Jussim pointed me to the recent article in Psychological Science by Joseph Simmons and Uri Simonsohn, expanding on their blog post on flaws in the notorious power pose article. Jussim then commented: I [Jussim] think that Cuddy/Fiske world is slowly shrinking. I think your “What Has Happened Here…” post was: 1. A bit premature […]

A completely reasonable-sounding statement with which I strongly disagree

From a couple years ago: In the context of a listserv discussion about replication in psychology experiments, someone wrote: The current best estimate of the effect size is somewhere in between the original study and the replication’s reported value. This conciliatory, split-the-difference statement sounds reasonable, and it might well represent good politics in the context […]
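(A quick numerical sketch, with invented numbers, of one reason splitting the difference isn’t automatic: when the replication is much more precise than the original study, the standard fixed-effect, inverse-variance-weighted combination sits far from the midpoint and close to the replication.)

```python
# Minimal sketch of precision (inverse-variance) weighting; the estimates
# and standard errors below are invented for illustration.
orig_est, orig_se = 0.50, 0.20   # original study (assumed)
rep_est, rep_se = 0.05, 0.05     # larger, more precise replication (assumed)

# Weight each estimate by its precision (1 / variance).
w_orig = 1 / orig_se**2
w_rep = 1 / rep_se**2

combined = (w_orig * orig_est + w_rep * rep_est) / (w_orig + w_rep)
midpoint = (orig_est + rep_est) / 2

print(f"Split-the-difference estimate: {midpoint:.3f}")   # 0.275
print(f"Precision-weighted estimate:  {combined:.3f}")    # ~0.076
```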

7th graders trained to avoid Pizzagate-style data exploration—but is the training too rigid?

[cat picture] Laura Kapitula writes: I wanted to share a cute story that gave me a bit of hope. My daughter who is in 7th grade was doing her science project. She had designed an experiment comparing lemon batteries to potato batteries, a 2×4 design with lemons or potatoes as one factor and number of […]

Another perspective on peer review

Andrew Preston writes: I’m the co-founder and CEO of Publons.com. Our mission is to modernise peer review. We’ve developed a way to give researchers formal recognition for their peer review (i.e., evidence that can go on your CV) and have partnerships with over 1k top journals. I’ve just finished reading your rebuttal of Susan Fiske’s […]

A whole fleet of Wansinks: is “evidence-based design” a pseudoscience that’s supporting a trillion-dollar industry?

Following a recent post that mentioned […]

Drug-funded profs push drugs

Someone who wishes to remain anonymous writes: I just read a long ProPublica article that I think your blog commenters might be interested in. It’s from February, but was linked to by the Mad Biologist today (https://mikethemadbiologist.com/). Here is a link to the article: https://www.propublica.org/article/big-pharma-quietly-enlists-leading-professors-to-justify-1000-per-day-drugs In short, it’s about a group of professors (mainly economists) […]

My proposal for JASA: “Journal” = review reports + editors’ recommendations + links to the original paper and updates + post-publication comments

[cat picture] Whenever they’ve asked me to edit a statistics journal, I say no thank you because I think I can make more of a contribution through this blog. I’ve said no enough times that they’ve stopped asking me. But I’ve had an idea for a while and now I want to do it. I think […]

Reputational incentives and post-publication review: two (partial) solutions to the misinformation problem

So. There are erroneous analyses published in scientific journals and in the news. Here I’m not talking about outright propaganda, but about mistakes that happen to coincide with the preconceptions of their authors. We’ve seen lots of examples. Here are just a few: – Political scientist Larry Bartels is committed to a model of […]