What we need here is some peer review for statistical graphics

Under the heading, “Bad graph candidate,” Kevin Wright points to this article [link fixed], writing:

Some of the figures use the same line type for two different series.

More egregious are the confidence intervals that are constant width instead of increasing in width into the future.

Indeed. What’s even more embarrassing is that these graphs appeared in an article in the magazine Significance, sponsored by the American Statistical Association and the Royal Statistical Society.
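The point about interval width can be made concrete: under even the simplest forecasting model, a random walk, the h-step-ahead standard error grows like sqrt(h), so a correctly drawn confidence band must fan out as it extends into the future. A minimal sketch in Python (the random-walk model and the sigma value are illustrative assumptions, not taken from the article):

```python
import numpy as np

# For a random-walk forecast, the h-step-ahead standard error is
# sigma * sqrt(h), so the interval width must grow with the horizon.
sigma = 2.0                      # assumed innovation standard deviation
horizons = np.arange(1, 11)      # forecast 1 to 10 steps ahead
half_width = 1.96 * sigma * np.sqrt(horizons)  # 95% interval half-widths

# The widths increase monotonically; a constant-width band understates
# uncertainty at long horizons.
assert np.all(np.diff(half_width) > 0)
```

Plotting half_width above and below the point forecast gives the familiar fan shape; a constant-width band implies, wrongly, that a ten-step-ahead forecast is as certain as a one-step-ahead one.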

Perhaps every scientific journal could have a graphics editor whose job is to point out really horrible problems and require authors to make improvements.

The difficulty, as always, is that scientists write these articles for free and as a public service (publishing in Significance doesn’t pay, nor does it count as a publication in an academic record), so it might be difficult to get authors to fix their graphs. On the other hand, if an article is worth writing at all, it’s worth trying to convey conclusions clearly.

I’m not angry at the authors for publishing bad graphs—scientists typically don’t get training in how to construct or evaluate graphical displays; indeed, I’ve seen stuff just as bad in JASA and other top statistics journals—but it would be good to catch this stuff before it gets out for public consumption.

9 thoughts on “What we need here is some peer review for statistical graphics”

  1. Creating decent graphs usually takes longer than you plan. You’d think I’d have learned by now, but I’m always finding myself at 3am still improving a simple graph I started working on at midnight.

  2. I think the first point is actually not true. If you zoom in, you can see that although the two orange lines look very similar, the author used different shades of orange. The line at the bottom of the graph is a little darker than the other orange line.

    However, what I do not understand is what some of the axes mean. In figure three, there are three dimensions, but only two are labeled, and I was not able to find an explanation in the text….

    • No, the line types are the same for all figures where major and major/minor conflict proportions (or shares) are presented. It should be obvious that the combined category is always going to be greater, but that’s not reason enough to make the line types exactly the same.

  3. I bet that the graphs looked okay when the authors submitted the paper. At least in political science, the printing guidelines and rules at many journals turn nice, illustrative plots into a horrible mess. First one spends a lot of time making good plots, and then the publishing process undoes a lot of that work.

  4. Agree with B. Journals often want to change my figures for the worse. Sometimes they have horrible journal rules that go against all recommendations for presenting figures. And the most annoying thing is that they never give in if you want to keep your “better” graph. Sometimes reviewers are happy with some nice tweaks in a figure, but then you have to remove them to comply with journal rules.

  5. How about the chart with 4 or 5 lines and an indecipherable legend of the form “…(a) blah blah blah; (b)…,” so you have to have your eye and attention go back and forth between the legend and the chart, trying to figure out which line is A and which is B?

    Or people with 6, 7, or 8 lines, differentiated *only* by color (typically, you can differentiate about 4 lines by color; if you have more, you have to add a second differentiator, like dotted lines).

    As for the peace article, I mean, how can one take seriously something like Fig. 1? As they would say at http://www.thedailywtf.com, TRWTF is that anyone bothered to read this article in the first place.
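The rule of thumb in the last comment can be made mechanical: assign each series a distinct (color, linestyle) pair, so that once the handful of easily distinguishable colors runs out, the line style takes over as the second cue. A minimal Python sketch (the series names and the four-color limit are illustrative assumptions, not anyone's published method):

```python
# Assign each of 8 series a distinct (color, linestyle) pair.
# With roughly 4 distinguishable colors, linestyle supplies the second cue.
colors = ["C0", "C1", "C2", "C3"]       # matplotlib default-cycle color names
linestyles = ["-", "--", ":", "-."]

series_names = [f"series_{i}" for i in range(8)]  # hypothetical series labels
styles = [(c, ls) for ls in linestyles for c in colors]
assignment = dict(zip(series_names, styles))

# No two series share both color and linestyle.
assert len(set(assignment.values())) == len(series_names)
```

With matplotlib one would pass each pair as `plot(x, y, color=c, linestyle=ls)`: the first four series differ by color alone, and the next four reuse the colors but switch to dashed lines, so no reader has to resolve eight shades by eye.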

Comments are closed.