Kanazawarama

Thomas Volscho writes: David Weakliem mentioned your blog posting on one of Kanazawa’s papers and its methodological shortcomings. I wrote a critique of one of his papers for The Sociological Quarterly and the editor gave him a chance to respond …

“When will AI be able to do scientific research both cheaper and better than us, thus effectively obsoleting humans?”

Alexey Guzey asks: How much have you thought about AI and when will AI be able to do scientific research both cheaper and better than us, thus effectively obsoleting humans? My first reply: I guess that AI can already do …

Statistical methods that only work if you don’t use them (more precisely, they only work well if you avoid using them in the cases where they will fail)

Here are a couple of examples. 1. Bayesian inference. You conduct an experiment to estimate a parameter theta. Your experiment produces an unbiased estimate theta_hat with standard error 1.0 (on some scale). Assume the experiment is clean enough that you’re ok …
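The excerpt sets up an unbiased estimate theta_hat with standard error 1.0; the standard Bayesian move is to combine it with a prior via the conjugate normal-normal update. A minimal sketch of that update, assuming a normal likelihood and a hypothetical N(0, 1) prior (the prior is my assumption, not part of the excerpt):

```python
import math

def posterior_normal(theta_hat, se, prior_mean, prior_sd):
    """Conjugate normal-normal update: combine an unbiased estimate
    theta_hat (with standard error se) with a normal prior.
    Returns the posterior mean and posterior standard deviation."""
    prec_data = 1.0 / se**2        # precision of the data estimate
    prec_prior = 1.0 / prior_sd**2  # precision of the prior
    post_var = 1.0 / (prec_data + prec_prior)
    post_mean = post_var * (prec_data * theta_hat + prec_prior * prior_mean)
    return post_mean, math.sqrt(post_var)

# With the excerpt's standard error of 1.0 and a hypothetical N(0, 1)
# prior, an estimate of 2.0 is shrunk halfway toward the prior mean:
mean, sd = posterior_normal(2.0, 1.0, 0.0, 1.0)
# mean = 1.0, sd = sqrt(0.5) ≈ 0.71
```

The shrinkage here is what makes the method's performance depend on the prior being appropriate: when the prior is badly chosen for the case at hand, the same update pulls the estimate toward the wrong place.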

Hey—check out this post from 2006 when I was mellow . . . actually too mellow, too nice, and too understanding

This was my second post on the work of the controversial sociologist Satoshi Kanazawa, author of noise-mining classics “Beautiful parents have more daughters,” “Engineers have more sons, nurses have more daughters,” “Violent men have more sons,” and so on. My …

In research as in negotiation: Be willing to walk away, don’t paint yourself into a corner, leave no hostages to fortune

There’s a saying in negotiation that the most powerful asset is the ability to walk away from the deal. Similarly, in science (or engineering, business decision making, etc.), you have to be willing to give up your favorite ideas. When …

The retraction paradox: Once you retract, you implicitly have to defend all the many things you haven’t yet retracted

Mark Palko points to this news article by Beth Skwarecki on Goop, “the Gwyneth Paltrow pseudoscience empire.” Here’s Skwarecki: When Goop publishes something weird or, worse, harmful, I often find myself wondering what are they thinking? Recently, on Jimmy Kimmel, …

“Bombshell” statistical evidence for research misconduct, and what to do about it?

Someone pointed me to this post by Nick Brown discussing a recent article by John Carlisle regarding scientific misconduct. Here’s Brown: [Carlisle] claims that he has found statistical evidence that a surprisingly high proportion of randomised controlled trials (RCTs) contain …

Reputational incentives and post-publication review: two (partial) solutions to the misinformation problem

So. There are erroneous analyses published in scientific journals and in the news. Here I’m not talking about outright propaganda, but about mistakes that happen to coincide with the preconceptions of their authors. We’ve seen lots of examples. Here …