
Further evidence that creativity and innovation are stimulated by college sports: Evidence from a big regression

Kevin Lewis sent along this paper from the Creativity Research Journal: “Further Evidence that Creativity and Innovation are Inhibited by Conservative Thinking: Analyses of the 2016 Presidential Election.” The investigation replicated and extended previous research showing a negative relationship between conservatism and creative accomplishment. Conservatism was estimated, as in previous research, from voting patterns. The […]

Chess records page

Chess records page (no, not on the first page, or the second page, or the third page, of a Google search of *chess records*). There’s lots of good stuff here, enough to fill much of a book if you so desire. As we’ve discussed, chess games are in the public domain so if you take […]

Getting the right uncertainties when fitting multilevel models

Cesare Aloisi writes: I am writing you regarding something I recently stumbled upon in your book Data Analysis Using Regression and Multilevel/Hierarchical Models which confused me, in hopes you could help me understand it. This book has been my reference guide for many years now, and I am extremely grateful for everything I learnt from […]

Air rage update

So. Marcus Crede, Carol Nickerson, and I published a letter in PPNAS criticizing the notorious “air rage” article. (Due to space limitations, our letter contained only a small subset of the many possible criticisms of that paper.) Our letter was called “Questionable association between front boarding and air rage.” The authors of the original paper, […]

“What we know and don’t know about the 2016 election—and beyond” (event at Columbia poli sci dept next Monday midday)

On Monday 25 Sep, 12:10-1:45pm, in the Playroom (707 International Affairs Bldg): “What we know and don’t know about the 2016 election—and beyond” (discussion led by Bob Shapiro, Bob Erikson, me, and other Columbia political science faculty)

It’s not enough to be a good person and to be conscientious. You also need good measurement. Cargo-cult science done very conscientiously doesn’t become good science, it just falls apart from its own contradictions.

Kevin Lewis points us to a biology/psychology paper that was a mix of reasonable null claims (on the order of, the data don’t give us enough information to say anything about XYZ) and some highly questionable noise mining supported by p-values and forking paths. The whole thing is just so sad. The researchers are aware […]

Using black-box machine learning predictions as inputs to a Bayesian analysis

Following up on this discussion [Designing an animal-like brain: black-box “deep learning algorithms” to solve problems, with an (approximately) Bayesian “consciousness” or “executive functioning organ” that attempts to make sense of all these inferences], Mike Betancourt writes: I’m not sure AI (or machine learning) + Bayesian wrapper would address the points raised in the paper. […]

p less than 0.00000000000000000000000000000000 . . . now that’s what I call evidence!

I read more carefully the news article linked to in the previous post, which describes a forking-pathed nightmare of a psychology study, the sort of thing that was routine practice back in 2010 or so but which we’ve mostly learned to at least try to avoid. Anyway, one thing I learned is that there’s something called “terror […]

As if the 2010s never happened

E. J. writes: I’m sure I’m not the first to send you this beauty. Actually, E. J., you’re the only one who sent me this! It’s a news article, “Can the fear of death instantly make you a better athlete?”, reporting on a psychology experiment: For the first study, 31 male undergraduates who liked basketball […]

Maybe this paper is a parody, maybe it’s a semibluff

Peter DeScioli writes: I was wondering if you saw this paper about people reading Harry Potter and then disliking Trump, attached. It seems to fit the shark attack genre. In this case, the issue seems to be judging causation from multiple regression with observational data, assuming that control variables are enough to narrow down to […]

Where does the discussion go?

Jorge Cimentada writes: In this article, Yascha Mounk is saying that political scientists have failed to predict unexpected political changes such as the Trump nomination and the sudden growth of populism in Europe, because, he argues, of the way we’re testing hypotheses. By that he means the quantitative aspect behind science discovery. He goes on […]

Type M errors in the wild—really the wild!

Jeremy Fox points me to this article, “Underappreciated problems of low replication in ecological field studies,” by Nathan Lemoine, Ava Hoffman, Andrew Felton, Lauren Baur, Francis Chaves, Jesse Gray, Qiang Yu, and Melinda Smith, who write: The cost and difficulty of manipulative field studies makes low statistical power a pervasive issue throughout most ecological subdisciplines. […]
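To see the issue the title names, here is a minimal simulation, with invented numbers rather than anything from the Lemoine et al. paper: when a study is underpowered, the estimates that happen to clear the significance threshold systematically exaggerate the true effect size, which is the “Type M” (magnitude) error.

```python
import numpy as np

# Hypothetical setup: a small true effect measured with a large standard
# error, i.e. a low-power study. All numbers are illustrative assumptions.
rng = np.random.default_rng(0)
true_effect = 0.2   # assumed true effect
se = 0.5            # assumed standard error of each study's estimate

# Simulate many replications of the same low-power study.
estimates = rng.normal(true_effect, se, size=100_000)

# Keep only the replications that reach p < 0.05 (two-sided z-test).
significant = np.abs(estimates) > 1.96 * se

# Average size of the "statistically significant" estimates, relative
# to the truth: this ratio is the expected exaggeration factor.
exaggeration = np.mean(np.abs(estimates[significant])) / true_effect
print(f"exaggeration ratio: {exaggeration:.1f}")
```

In this setup the significant estimates overstate the true effect severalfold, which is why a field built on low-replication studies can end up with a literature of inflated effect sizes.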

Type M errors studied in the wild

Brendan Nyhan points to this article, “Very large treatment effects in randomised trials as an empirical marker to indicate whether subsequent trials are necessary: meta-epidemiological assessment,” by Myura Nagendran, Tiago Pereira, Grace Kiew, Douglas Altman, Mahiben Maruthappu, John Ioannidis, and Peter McCulloch. From the abstract: Objective To examine whether a very large effect (VLE; defined […]

New Zealand election polling

Llewelyn Richards-Ward writes: Here is a forecaster apparently using a simulated (?Bayesian) approach and smoothing over a bunch of poll results in an attempt to guess the end result. I looked but couldn’t find his methodology but he is at University of Auckland, if you want to track him down… As a brief background, we […]

American Democracy and its Critics

I just happened to come across this article of mine from 2014: it’s a review published in the American Journal of Sociology of the book “American Democracy,” by Andrew Perrin. My review begins: Actually-existing democracy tends to have support in the middle of the political spectrum but is criticized on the two wings. I like […]

Causal inference using data from a non-representative sample

Dan Gibbons writes: I have been looking at using synthetic control estimates for estimating the effects of healthcare policies, particularly because for say county-level data the nontreated comparison units one would use in say a difference-in-differences estimator or quantile DID estimator (if one didn’t want to use the mean) are not especially clear. However, given […]

Job openings at online polling company!

Kyle Dropp of online polling firm Morning Consult says they are hiring a bunch of data scientists and software engineers at all levels: About Morning Consult: We are interviewing about 10,000 adults every day in the U.S. and ~20 countries, we have worked with 150+ Fortune 500 companies and industry associations and we are […]

Trial by combat, law school style

This story is hilarious. A 78-year-old law professor was told he can no longer teach a certain required course; this jeopardizes his current arrangement, where he is paid full time but teaches only one semester a year, so he’s suing his employer . . . Columbia Law School. The beautiful part of this story is how […]

“How conditioning on post-treatment variables can ruin your experiment and what to do about it”

Brendan Nyhan writes: Thought this might be of interest – new paper with Jacob Montgomery and Michelle Torres, How conditioning on post-treatment variables can ruin your experiment and what to do about it. The post-treatment bias from dropout on Turk you just posted about is actually in my opinion a less severe problem than inadvertent […]
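For readers new to the problem in the paper’s title, a minimal sketch with made-up numbers (not the Montgomery, Nyhan, and Torres setup): in a randomized experiment the simple treatment–control comparison is unbiased, but adjusting for a variable measured after treatment can badly bias the estimate, because the post-treatment variable is a collider between treatment and unobserved causes of the outcome.

```python
import numpy as np

# All quantities here are hypothetical, chosen so the bias is easy to see.
rng = np.random.default_rng(1)
n = 200_000
treatment = rng.integers(0, 2, n).astype(float)  # randomized 0/1 treatment
u = rng.normal(size=n)                           # unobserved factor affecting both M and Y
mediator = treatment + u + rng.normal(size=n)    # post-treatment variable M
y = treatment + u + rng.normal(size=n)           # outcome; true treatment effect = 1

def ols_coef(outcome, *cols):
    """Return OLS coefficients (intercept first) for outcome on the given columns."""
    X = np.column_stack([np.ones(n), *cols])
    return np.linalg.lstsq(X, outcome, rcond=None)[0]

naive = ols_coef(y, treatment)[1]               # randomization makes this ~1 (unbiased)
adjusted = ols_coef(y, treatment, mediator)[1]  # conditioning on M opens T -> M <- U -> Y
print(f"naive: {naive:.2f}, post-treatment adjusted: {adjusted:.2f}")
```

With these particular numbers the adjusted coefficient converges to about half the true effect, so “controlling for more stuff” actively ruins an otherwise clean experiment, which is exactly the point of the paper’s title.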

God, goons, and gays: 3 quick takes

Next open blog spots are in April but all these are topical so I thought I’d throw them down right now for ya. 1. Alex Durante writes: I noticed that this study on how Trump supporters respond to racial cues is getting some media play, notably over at Vox. I was wondering if you have […]