The blogroll

Chain Links

I encourage you to check out our linked blogs. Here’s what they’re all about:

Cognitive and Behavioral Science

BPS Research Digest: I haven’t been following this one recently, but it has lots of good links; I should probably check it more often. A couple of things bother me, though. The blog is sponsored by the British Psychological Society, which sounds pretty serious, but then they run things like advertising promotions sponsored by a textbook company and highlight iffy experimental claims. For example, in 2010 they ran a wholly uncritical post on the notorious Daryl Bem study that purported to find ESP. After being called on it in the comments, the blogger (Christian Jarrett) responded with, “The stats appear sound. . . . it’s a great study. Rigorously conducted” and even defended “the discussion of quantum physics in the paper.” To be fair, though, and as he points out in comments, Jarrett wrote of Bem’s study: “this isn’t proof of psi, far from it. Needs to be replicated. I like how Bem has used standard psychological tasks as a way to explore psi. Makes it easier for other labs to try to replicate.” Jarrett writes that he tries to “strike a balance between promotion and skepticism of new findings.” Fair enough.

Decision Science News: A mix of conference announcements and reports of new research. Here’s a typical example. I love this stuff; others might find it a bit technical. Also, this blog runs ads. I wonder how much the advertisers pay? I can’t imagine anyone would pay a niche blogger enough to make the ads worth it. I mean, sure, if an advertiser offered me enough money to hire a postdoc, I’d do it, but I can only imagine we’re talking really small amounts of money. A topic of discussion for Decision Science News, perhaps?

Language Log: Not much needs to be said here. This one’s a classic blog with lots of statistical content; it remains strong after all these years.

Seth Roberts: I disagree with him on climate change denial, Holocaust denial, etc. Still, he’s a pioneer of self-experimentation. I hope that the next generation of psychology or medical research involves an integration of informal experimentation with statistical controls.

The Hardest Science: Mostly revolves around reproducible research. It’s where I heard the story of the lamest, grudgingest, non-retraction retraction ever.

Cultural

Light Reading: She’s like me: she likes to write and has a lot of energy. I’m still wondering what she will think of Debutante Hill (I’ll lend her my copy).

Lists and Letters of Note: Great stuff but not much new material lately; he says he’s busy working on a book.

Love the Liberry: Amazingly enough, they keep coming up with good material.

Paperpools: Not much material lately. As it should be. We want Helen DeWitt to be writing novels, not blogging!

Research as a Second Language: Anti-charismatic self-help advice. The alternative to those omnipresent shouting, obnoxious internet gurus.

Streetsblog: Good stuff. Ideally this would all be in your daily newspaper. I don’t read it too often; if I did, I’d be too angry to think about anything else all day.

Sister Blogs

The Monkey Cage: Sometimes I simul-post, other times I’ll rant there and then link from here. (for example)

The Statistics Forum: I recently formulated the plan to fill it up with 365 stories, but so far I’ve only received a few. So maybe just a story a week? An official American Statistical Association blog seems like a good thing, but I don’t really know what to do with it.

Social and Political Science

Chris Blattman: International development, politics, economics, and policy.

Fivethirtyeight: Nate does a good job. I like how he can focus on whatever question he’s answering without getting overwhelmed. Here’s a good recent example.

Lane Kenworthy is a completely serious and reasonable person, just as his name would suggest.

Marginal Revolution: You’ve heard of these guys.

Monthly Labor Review: Direct links to research on things that matter. Good stuff.

Overcoming Bias: He recently wrote, “most people we know talk as if they hate, revile, and despise ads. They say ads are an evil destructive manipulative force that exists only because big bad firms run the world, and use ads to control us all.” I was surprised to hear that most people Robin Hanson knows talk that way, and this gives me a new perspective on why he writes the way he does. It’s gotta be frustrating, hanging around people who talk about big bad firms and evil destructive manipulative forces.

Rajiv Sethi: He only blogs a couple times a month, but he always has something interesting to say. (The opposite of this blog, I suppose.)

The Baby Name Wizard: The one and only, by the people who, among other things, debunked the myth that there’s something special about the word “orange.” But you can just skip directly to the Name Voyager.

U.S. Census Blog: Not the funnest thing out there to read, but it’s good that the people at the Census are doing this for us. When you need good data, the Census is there for you.

Statistics and Machine Learning

Bob Carpenter: He wrote Stan.

Chance News: The original statistics blog.

Christian Robert: People who used to do theoretical statistics, now do computational statistics. This is a good thing.

Cosma Shalizi: He has an odd retro style and enough of a mix of common sense and knowledge of philosophy that I asked him to collaborate on the paper that became this. His set of interests and frustrations seems to overlap a lot with mine, except that he doesn’t really ride a bike, and I’m sure there are some big parts of his life that don’t map to anything in mine.

Deborah Mayo: I learned about her through Shalizi. Mayo believes in learning through model checking, just like Jaynes (and me). Her blog features long comment threads and contributions from the likes of Stephen Senn.

John Cook: Like Tyler Cowen, a guy who does a lot of things but is best known for his blogging. He throws in some applied math and numerical analysis along with the statistics.

Kaiser Fung: Fun to read and utterly sensible. Among many other things, he offered a good probabilistic summary of the Lance Armstrong story, well before it finally broke.

Larry Wasserman: His perspective on statistics is different from mine (for example, he defines p(a|b) = p(a,b)/p(b), whereas I define p(a,b)=p(a|b)p(b)), but it’s good that he can get his views out there. Research proceeds in many different ways, and if everyone agreed with me (or with any single perspective), the field of statistics would make a lot less progress.
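(The two definitions are, of course, equivalent; a toy numeric check, with a made-up joint distribution, shows that both factorizations give the same numbers:)

```python
# Toy joint distribution over (a, b); the numbers are made up for illustration.
p_ab = {("a1", "b1"): 0.1, ("a1", "b2"): 0.3,
        ("a2", "b1"): 0.2, ("a2", "b2"): 0.4}

# Marginal p(b), summing the joint over a.
p_b = {b: sum(p for (_, bb), p in p_ab.items() if bb == b)
       for b in ("b1", "b2")}

# One direction: define the conditional from the joint, p(a|b) = p(a,b)/p(b).
p_a_given_b = {(a, b): p_ab[(a, b)] / p_b[b] for (a, b) in p_ab}

# The other direction: rebuild the joint as p(a,b) = p(a|b) * p(b).
rebuilt = {(a, b): p_a_given_b[(a, b)] * p_b[b] for (a, b) in p_ab}

# Both factorizations agree term by term.
assert all(abs(rebuilt[k] - p_ab[k]) < 1e-12 for k in p_ab)
```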

Messy Matters: This one reads a bit more like a draft of a pop-science book than like a blog. The trouble is, there are already so many pop-science books about economics and data. They’ll have to come up with their own unique twist.

Nuit Blanche: Compressive sensing: that’s cool stuff! I’m impressed by these CS guys who can effortlessly throw around terabytes of data.

Observational Epidemiology: These guys are thoughtful and I admire the effort they put into their blogging. If they’d started blogging in 2003, they would’ve been on everyone’s blogroll.

Stats Blogs: A convenient compendium, with links back to the originals.

The Numbers Guy: Carl Bialik is one of the original data journalists. He, Felix Salmon, and Nate Silver have very similar profiles (as Bill James might say).

Visualization

Chartsnthings: This is the ultimate graphics blog. The New York Times graphics team presents some great data visualizations along with the stories behind them. I love this sort of insider’s perspective.

Eager Eyes: Graphics research.

Information Aesthetics: Seriously pretty.

Junk Charts: The nitty gritty. What to read if you want to make your own graphs better.

22 Comments

  1. Hi, I’m the Christian Jarrett who wrote the “notorious” BPS Research Digest blog post on Bem’s Psi study. I just wanted to post here my quote from the comments in full because you’ve edited it in a way that misses out my important notes of caution.

    I said: “My opinion is that it’s a great study. Rigorously conducted. Eloquently written (provides great intro to the field). But this isn’t proof of psi, far from it. Needs to be replicated. I like how Bem has used standard psychological tasks as a way to explore psi. Makes it easier for other labs to try to replicate.”

    I’d appreciate it if you updated your blog post to include my caveats!

    The findings were extraordinary but to many psychologists the methodology and stats did appear sound. And the study had certainly been written up in a way that made replications easy. This is in the context that many parapsychology studies are woeful, in methodology and presentation (hence I rarely choose to cover them on the Digest). There have since been lots of failed replications of the Bem study and importantly, for a time, I ensured I added links to these to the bottom of my blog post.

    Another thing – far from my Digest report on the study being “notorious” it was actually praised by the well-known skeptic Chris French as providing a concise account of the study and its claims.

    As the blog I write is published by the British Psychological Society, I try to strike a balance between promotion and skepticism of new findings. At times I have deliberately struck a highly skeptical tone, even while other blogs and news outlets have been more accepting. For example, check out this coverage of the End of History Illusion http://bps-research-digest.blogspot.co.uk/2013/01/the-end-of-history-illusion-illusion.html or this coverage of brain training: http://bps-research-digest.blogspot.co.uk/2013/02/working-memory-training-does-not-live.html Or even this recent blog post on the power failure across neuroscience: http://www.bps-research-digest.blogspot.co.uk/2013/04/serious-power-failure-threatens-entire.html

    However, I don’t aim to be so skeptical all the time, or to focus only on studies exposing controversies – not least because there are already blogs out there with that express purpose such as Neuroskeptic and Neurocritic.

    Last thing – why do you object to psychology text-book competitions? We have a lot of student readers and it’s a chance for them to get free books. Also, the set questions are usually chosen to generate discussion about the discipline. I can’t see why you’d object. If you’re not interested, can’t you just ignore them?

    • Andrew says:

      Christian:

      Good points, and I’ve altered the post to reflect them. I appreciate your going to the trouble to post this comment, and I also appreciate your blogging efforts (of course, or otherwise you wouldn’t be on the blogroll in the first place!).

      I objected to the textbook competition because I was uncomfortable with the mix of advertising and editorial content. Your headline describes the textbook as “cutting edge.” Is that really true? Do you really think that? The book could be just fine—I have no idea—but the setup just made me a bit uncomfortable. If it’s a book that you really love, then I’d feel better about what you’re doing. My guess was that you didn’t really have any opinion on the book, given that all I saw was a publisher’s blurb. I guess if you did the whole thing without the phrase “cutting edge” and the photo of the book cover, it wouldn’t bother me. This is not to say that I think you have any obligation to keep me happy—I’m just laying out my reasoning here. My problem was not that the post on the textbook was annoying but rather that the mixing of advertisement and editorial can reduce my trust overall.

      • Rahul says:

        You should convince your publisher to give out a few gratis copies on this blog. That’d be fun. I’ve always wanted to read your book on Bayesian statistics but have been too cheap to buy a copy (so far). :)

      • Hi Andrew, thanks for amending your post. The idea that you might think the Digest blog is in any way iffy or untrustworthy really pains me, hence my heartfelt response. I’ve reported on over 1400 studies since 2003 and hopefully most of the time I get it right. Your feedback about the book competitions is really useful. We don’t need to do these, so I will take your feelings about them seriously. By “cutting-edge” I simply meant “new” – I haven’t read the book – and hopefully it was clear the descriptive blurb was from the publishers.

        • Rahul says:

          I liked your coverage of Bem. Bem’s study was very well done and I’d never call it notorious.

          I’m no believer in ESP and my skepticism priors are so strong that one Bem wouldn’t make me change. The most important contribution of Bem was to make people retrospect about the holes in the current reasoning procedures and statistical analysis.

          On a matter I was less skeptical about maybe a Bem would make me change my views? Perhaps about the efficacy of an aspirin a day?

          • Thanks Rahul. In the initial version of his post, Andrew actually described my blog post as “notorious”, not the Bem study itself! An important thing to realise about the Bem study is that it produced these astonishing results using the kind of methods and statistics that are usually considered robust for many standard psychology experiments. What’s more, it provided multiple demonstrations of the supposed reverse effects. It was written lucidly and published by a respected academic in a respected, prestigious journal. For all these reasons it made a huge impact. As one of the first to break the news, I chose on my blog to say “this is what they did”, “this is what they found” – the findings were extraordinary (I was expressing my skepticism by the opening remark about the drinking water), but it looked like a bona fide psychology study. In the days, weeks and months to follow, some commentators took a far more detailed and sophisticated look at the stats and methods, finding flaws and weaknesses. Other people conducted failed replications (there were also some successes), all of which I linked to and reported on, both on the Digest blog and at The Psychologist magazine.

          • Andrew says:

            I think Christian categorizes the story in an accurate way. Bem’s study was not actually well done but it was presented in a style that many people found compelling. Careful examination of Bem’s paper revealed many serious problems.

            P.S. But I don’t think it’s correct to describe Bem’s article as “lucid,” at least not in a scientific way. The article was written in a way that allowed it to be published in a top journal and get serious media attention but it was not lucid in the scientific sense of describing exactly what he had done in a way that would make clear the problems of the study.

            • Rahul says:

              Considering that it got published in a top journal after all, maybe the thing Bem should make us do is alter our priors about the credibility of results that get published in elite journals?

              PS. Is there a good summary of the serious problems, flaws and weaknesses in his work? Where?

              I do admire Bem’s bravery though. He must have known the world will go over his work with a fine toothed comb and rip it apart.

              We should ask ourselves, had this been yet another paper documenting the ill effects of second hand smoke, or the virtues of red wine would we have given it the degree of scrutiny necessary to catch the Bem-like flaws of methodology?

  2. K? O'Rourke says:

    > Christian Robert: People who used to do theoretical statistics, now do computational statistics. This is a good thing.

    Insightful, or at least I think “now do computational statistics” is the right message.

  3. Brad Stiritz says:

    Andrew,

    Very interesting list of websites, thanks. If this isn’t an inappropriate question (given that you teach the subject at an institution), do you possibly have any site recommendations please for getting technical statistics questions answered? If Stack Exchange had a dedicated forum for Statistics (rather than questions just being lumped into Mathematics), that would sort of be the ideal I’m imagining.

    I simply don’t have the time to take any proper courses, but have devoted quite a bit of time & energy to studying statistics on my own. I come up frequently against tricky concepts / unclear wordings which I can’t figure out myself, in texts such as Rice’s “Mathematical Statistics & Data Analysis”. I’m sorry to say that talkstats.com has been disappointing on the very few specific questions I’ve searched out there & found postings on.

    I’ve also looked into online statistics tutors, but haven’t been very impressed by any of the many sites & tutors that come up in Google. Ideally, I’d like to find s/o with a very strong intuitive grasp of theoretical statistics who can explain the conceptual basis behind all the standard formulas. Young’s “Statistical Treatment of Experimental Data” is a great example of what I mean, as far as s/o taking the time to really explain what’s going on in each formula, rather than rushing on to cover the next half-dozen topics.

    Any suggestions would be greatly appreciated. Thanks for the effort you put into your blog.

  4. Brad Stiritz says:

    Jake: Thank you very much! I’d seen “Cross-Validated” in their list of forums, but didn’t understand the connection. It looks great, I really appreciate the reference.

  5. [...] StatModeling‘s blogroll andrewgelman.com/blogroll… [...]

  6. sd says:

    “When you need good data, the Census is there for you.” true. when they don’t have the data they’ll interpolate with iffy procedures from higher level aggregates, and charge you a few thousand dollars for the services of their dodgy statisticians (not to mention the cost in terms of countless hours wondering why your results look weird)…

    • Andrew says:

      Examples, please?

      • anon :P says:

        one example is business register statistics: the census purportedly offers birth/death/expansion/contraction rates of organizations by naics code and county (for purchase) — but turns out all these county-level rates/counts are merely calculated as some multiplicative fraction of density (which is the only real data point) with an added normally distributed random error; the multiplicative factor comes from the state level aggregate data (sometimes varying by naics sector–sometimes averaged across naics sectors!).
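        [If I understand the described procedure correctly, it amounts to something like the following sketch — all names, shares, and noise parameters here are hypothetical, not actual Census code:]

```python
import random

random.seed(1)

# Hypothetical inputs: a real state-level aggregate and real
# county-level density shares (the only genuine county data point).
state_total = 10_000.0
county_share = {"Adams": 0.5, "Brown": 0.3, "Clark": 0.2}

# The "county-level" counts are then just the state total allocated
# proportionally to density, plus normally distributed noise --
# no county-level measurement is actually involved.
imputed = {county: state_total * share + random.gauss(0.0, 50.0)
           for county, share in county_share.items()}
```

        [So a weird-looking county number may reflect nothing more than the noise term and the state-level multiplier.]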

        • Kaiser says:

          Ok, but do you have actual data to prove that the imputed data is wrong? Assuming that you want to know this data — that would be why you would ask for it — do you have a better source?
          Imputing is not a sin. In fact, it forces the person doing the imputation to think about what drives the values.
          There are dodgy statisticians everywhere, not just in government. For example, look at the size of the sample of Nielsen’s panel, and look at how much money they earn each year producing estimates to all kinds of detailed levels.

  7. [...] Gelman, Statistical Modeling, Causal Inference, and Social Science, The blogroll, here. Might be [...]

  8. zbicyclist says:

    1. Does the Chance news blog actually work? I never seem to find anything there.
    2. re “The Statistics Forum”. Here’s an idea. Why not post a weekly highlight post from another stat blog? You could likely get permission ahead of time from a lot of blogs you regularly read. Because you would take the ENTIRE blog post, you (or whoever did this) wouldn’t have to spend editing time carefully finding the best parts. You wouldn’t need to comment (hopefully the comments section would eventually become self-sustaining). Because you seem very widely read, you would likely not have to do any additional reading. Plus, this might provide good publicity for worthy bloggers who haven’t built up an audience.