Am I too negative?

For background, you can start by reading my recent article, Is It Possible to Be an Ethicist Without Being Mean to People? and then a blog post, Quality over Quantity, by John Cook, who writes:

At one point [Ed] Tufte spoke more generally and more personally about pursuing quality over quantity. He said most papers are not worth reading and that he learned early on to concentrate on the great papers, maybe one in 500, that are worth reading and rereading rather than trying to “keep up with the literature.” He also explained how over time he has concentrated more on showcasing excellent work than on criticizing bad work. You can see this in the progression from his first book to his latest. (Criticizing bad work is important too, but you’ll have to read his early books to find more of that. He won’t spend as much time talking about it in his course.) That reminded me of Jesse Robbins’ line: “Don’t fight stupid. You are better than that. Make more awesome.”

This made me stop and think, given how much time I spend criticizing things. Indeed, like Tufte I’ve spent a lot of time criticizing chartjunk! I do think, though, that I and others have learned a lot from my criticisms. There’s some way in which good examples, as well as bad examples, can be helpful in developing and understanding general principles.

For example, consider graphics. As a former physics major, I’ve always used graphs as a matter of course (originally using pencil on graph paper and then moving to computers), and eventually I published several papers on graphics that had constructive, positive messages:

Let’s practice what we preach: turning tables into graphs (with Cristian Pasarica and Rahul Dodhia)

A Bayesian formulation of exploratory data analysis and goodness-of-fit testing

Exploratory data analysis for complex models

as well as many, many applied papers in which graphical analysis was central to the process of scientific discovery (in particular, see this paper (with Gary King) on why preelection polls are so variable and this paper (with Gary King) on the effects of redistricting).

The next phase of my writing on graphics accentuated the negative, with a series of blog posts over several years criticizing various published graphs. I do think this criticism was generally constructive (a typical post might point to a recent research article and make some suggestions of how to display the data or inferences more clearly) but it certainly had a negative feel—to the extent that complete strangers started sending me bad graphs to mock on the blog.

This phase peaked with a post of mine from 2009 (with followup here), slamming some popular infographics. These and subsequent posts sparked lots of discussion, and I was motivated to work with Antony Unwin on the article that eventually became Infovis and statistical graphics: Different goals, different looks, which was published with discussion in the Journal of Computational and Graphical Statistics. Between the initial post and the final appearance of the paper, my thinking changed, and I became much clearer on the idea that graphical displays have different sorts of goals. And I don’t think I could’ve gotten there without starting with criticism.

(Here’s a blog post from 2011 where I explain where I’m coming from on the graphics criticism. See also here for a slightly broader discussion of the difficulties of communication across different research perspectives.)

A similar pattern seems to be occurring in my recent series of criticisms of “Psychological Science”-style research papers. In this case, I’m part of an informal “club” of critics (Simonsohn, Francis, Ioannidis, Nosek, etc etc), but, again, it seems that criticism of bad work can be a helpful way of moving forward and thinking harder about how to do good work.

It’s funny, though. In my blog and in my talks, I talk about stuff I like and stuff I don’t like. But in my books, just about all my examples are positive. We have very few negative examples, really none at all that I can think of (except for some of the examples in the “lying with statistics” chapter in the Teaching Statistics book). This suggests that I’m doing something different in my books than in my blogs and lectures.

19 thoughts on “Am I too negative?”

  1. No.

    99% of researchers will work at whatever level everyone around them says is acceptable. If you tell them that a lame hypothesis combined with p-value hacking and n=23 coed samples isn’t acceptable, they’ll stop thinking of that as “science”.

  2. In personal relations, I like the evidence that positive reinforcement works better than negative, that showing what you do right is beneficial. (With the standard “I’m no expert on the material, of course” disclaimer.)

    But this blog isn’t a personal relation.

    The Tracey Ullman Show had a short skit with Tracey as an Australian tennis star and Dan Castellaneta as a shrink. She complains about her sex life with her husband: “when we were in Australia it was fine that it was wham-bam thank you ma’am, but now that I’m in the US I feel like I need a little more foreplay and affection”. Dan offers an offhand bit of reinforcement meant to encourage: “Well, you aren’t in Australia any more”. Tracey hears this, sits up and says, “Right, Doc, we aren’t in Australia any more”, thanks him and walks out.

    What is this blog? It’s not Australia.

    Now if you’re hypercritical in your personal life or with your students, that’s a different issue, one best confronted in the mirror.

    • John:

      I don’t disagree with you but I think there can be a place for caustic put-downs as well. Fun and entertainment are positive values in themselves.

      Also, tact is great but there’s value to honesty as well. Your link gives the following advice:

      How to compose a successful critical commentary:

      1. You should attempt to re-express your target’s position so clearly, vividly, and fairly that your target says, “Thanks, I wish I’d thought of putting it that way.”

      2. You should list any points of agreement (especially if they are not matters of general or widespread agreement).

      3. You should mention anything you have learned from your target.

      4. Only then are you permitted to say so much as a word of rebuttal or criticism.

      But sometimes, realistically, there are no points of agreement and we really have learned nothing from our target. Take the case of Daryl Bem, for example. What do you want me to say? “From your research, I’ve learned that ESP really exists”? The best I can say is, “From this episode, I’ve learned that a top psychology journal will publish a really bad paper. But, hey, the resulting discussion can be good for the field.” But if I say that, that’s just insulting. I think it’s better off just saying that I don’t like the work.

      I think this general topic (how to approach contentious discussion) is interesting, but perhaps there needs to be more separation between various goals, including honesty, personal gentleness, and tactics. I think the post you linked to is worth reading but I think it’s a bit too quick in claiming that the approach of charitable criticism is also “a sound psychological strategy.” The tough cases are when we don’t have much of anything charitable to say. Believe me, though, I try hard sometimes to give people the benefit of the doubt.

      • Well, there are times when it’s hard to be charitable. Nevertheless, this reality should not be used as an excuse to, for example, attack someone in an ad hominem manner simply for sport. The question is really about the reason behind your communication; are you trying to win a battle with a particular opponent and make him/her run crying into the washroom? Or are you trying to uncover truth and understanding for the benefit of others?

        To put it another way, some arguments are informative, while others are simply the intellectual equivalent of pro wrestling (or Fox news).

  3. Posting an improved modification of a graph you hate is less negative than just stating what you hate about a bad graph.

    IMO, that’s one way to become more constructive.

  4. definitely *not*.

    Criticizing what merits criticism is a public good. It guides and disciplines those whose conduct is subject to evaluation. It also heartens those who are committed to the standards that inform the criticism — and indeed helps them to sharpen their appreciation of what they are committed to. Lots of people benefit, and hopefully they are moved to reciprocate by criticizing when criticism is merited, too, rather than taking the easy course of just shaking their heads & averting their eyes.

    There are various “how to” & “matter of degree” issues. But I think you handle them as well as anyone.

    The main things are to be committed to fairness, including giving the targets of criticism and others who take issue with it a chance to respond, and to acknowledge when, on reflection, it seems to you that you got something wrong. You’ve got those covered.

  5. “[Dr. Tufte] also explained how over time he has concentrated more on showcasing excellent work than on criticizing bad work. You can see this in the progression from his first book to his latest.”

    Yes, but his first book was much more entertaining than his third book.

    Seriously, Tufte’s books got more boring the more he concentrated upon accentuating the positive.

  6. Great post! And this comes in a timely manner for me (see below), since at least in the industrial sphere I find it hard to hold my tongue. One thing I’ve discovered in this arena is that scientific challenge can be lost to perceived negativity, in less desirable or downright dramatic ways, depending on the particular culture associated with an industry focus. I remember reading a Feynman bio where he discusses the outcomes of his own criticisms, which he never intended to be personal. I think many of us are acutely aware of our fervor in challenging methods, intentions, and conclusions. But fewer of us (me included) consider the other’s sensibilities on the matter. Some of us (guilty as charged here as well) sometimes don’t feel like concentrating our energies to step with Tai Chi alacrity across personal eggshells. But that doesn’t always work out so well.

    In the tech world, this is where I’ve seen the greatest challenge in how I develop a criticism of something I view as intrinsically objective – never mind that I have a thorough and open disdain for political posturing over an otherwise technical matter. This particular industry, I have to say, is likely one of the most sensitive I have engaged: constructive criticism where I might say:

    “so yeah, this is an interesting finding, but this plot you’ve provided is misleading for reasons [x, y, z] and you used an improper method for sampling, and from these things you cannot soundly make such-and-such a conclusion (I recall using the phrase “a statistical no no” once)”

    has quite a number of times been met harshly with “he’s just a negative nancy” or, more commonly, outright dismissal. Am I being too negative here despite providing guidance to understand and course correct?

    As another current example of a real problem I am facing, I am being asked to develop and evaluate various approaches to anomalous signal detection for a product. What is considered ‘anomalous’, and more specifically what meaning may be tied to whatever the anomaly actually is, has not even been defined. Still, various ‘metrics’ have been arbitrarily selected and each one is to be processed independently. The current approach, employed prior to my engagement, is the use of Bollinger bands (see where I might be going with this? How about gehennem?) and some arbitrary rules to identify something as ‘anomalous’. Now, the request is, “try some other methods against this and come up with some measures with the team to see how each of the algorithms perform.”
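
    (For readers who haven’t run into them: Bollinger bands are just a rolling mean plus or minus some multiple of a rolling standard deviation, with anything falling outside the band declared “anomalous.” Below is a minimal sketch, in Python, of that kind of rule; the window length, the multiplier k, and the simulated data stream are all hypothetical illustrations, not taken from my actual project.)

      import numpy as np
      import pandas as pd

      def bollinger_flags(series, window=20, k=2.0):
          # Rolling mean and standard deviation over the trailing window.
          mid = series.rolling(window).mean()
          sd = series.rolling(window).std()
          upper, lower = mid + k * sd, mid - k * sd
          # "Anomalous" here just means "outside the band", i.e. exactly the
          # kind of arbitrary rule described above.
          return pd.DataFrame({"value": series,
                               "upper": upper,
                               "lower": lower,
                               "anomalous": (series > upper) | (series < lower)})

      # Hypothetical usage on a simulated noisy random walk:
      rng = np.random.default_rng(0)
      stream = pd.Series(np.cumsum(rng.normal(size=500)))
      flags = bollinger_flags(stream)
      print(flags["anomalous"].sum(), "of", len(stream), "points flagged")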

    In a recent project meeting, my reply was “if I understand you correctly, you want me to develop several alternative algorithms which detect signals of unknown definition (and whose patterns will vary with the data source, so each data stream would require testing/training), unknown meaning, with arbitrarily picked variables, and you want me to somehow validate and compare these by some measures of how they perform. Really? Is this how agile development works? So, no solid business objectives or success criteria, no real way to evaluate analytic success. I’m sorry, what is this product again?”. Yep, dismissed.

    I ask you, was I being too negative? It’s fair to say that I’ve begun to consider my alternatives regarding a mental health facility ;).

    In my past life in the medical research arena, I experienced just the opposite. Such criticisms and fired-up discussions were expected, and invited. I expected no less review and critique of my work than anyone would have expected of me. Those discussions, even ones that served a bit of humble pie, were extraordinarily beneficial to each of us (specifically to the R&D community, that is. The business was an entirely different sordid tale).

    So the social context of scientific criticism obviously can play a role (I think a large one) in how one may be perceived. There have been a number of blogs I’ve read recently suggesting that Silicon Valley tech culture may be apt to completely shun scientific criticism, as it is thought to hinder creativity. So, in some contexts, really *anything* we do could be viewed negatively, despite the pink fluffy bow we try to tie to our messages.

    But nowadays, given the rise in the cadence of ‘tech pseudoscience’, it’s difficult for me as a natural skeptic to remain still without issuing some byte of caustic sarcasm in the wake of blatantly obvious dreck. Wait…that was sorta negative.

  7. Phillip M.:

    Seems to me that in academia it is just the minority who really get the “science as a continual communal activity to get less wrong in important ways” idea, and so even in academia it is important to choose (be lucky in) who you work (interact) with.

    Most in academia seem to be just tribal, advancing their interests (enhancing their reputations) along with others they view as part of their tribe. Comments that threaten their plans/views are just ugly facts and challenges to be quickly slain, along with isolating the commentator.

    Industry/government is supposed to be tribal (with clearly designated chiefs) – isn’t it?

    > scientific criticism as it is thought to hinder creativity.
    It will create confusion that engenders creativity, but possibly unmanageable creativity.

    • If science were in a better state, then all this talk of positive constructive blah would make sense. But it’s not. It’s in much worse shape than even the strongest of insider critics like Gelman let on.

      In the current scenario people need to be told bluntly and unequivocally that they’re wasting time and money.

      The average academic will make $2-4 million over their career and contribute absolutely nothing to the world. They take $2-4 million of other people’s blood, sweat, and tears, but give nothing in return, all the while looking down on the lowly workers who provide their stuff as not being part of the “cognitive elite”.

      That’s how we get to this insane situation where the US funds the equivalent of 15-20 complete Manhattan projects every year, year after year, but with entire major fields making no real advances over decades, or even half a century.

      They need to be told this isn’t right. They need to be told it’s not acceptable. They need an earthquake big enough to shake themselves out of their pampered tenured complacency. And 95% of them need to be straight up fired. Gelman isn’t being too negative. He’s being too kind.

    • I don’t disagree with you when considering the outcomes of academic tribalism, though we might disagree as to its underpinnings and dynamics. In some respects, and this is coming from merely a grad student perspective, academia in my experience can tend to imitate corporate life. My hunch is that this is a trend that is unlikely to diminish in the foreseeable future. That’s merely the Weberian rationalization of institutions (quick disclaimer: I’m not a social scientist – stat/comp physics here, so everything I’m spewing from here forward may just be utter armchair bunk). These forces I feel are fairly simple to observe in that unwritten tablet of administrative commandments that non-tenured and newly tenured professors can speak to all too well:

      — Thou shalt publish n articles with impact factors no less than [x].
      — Thou shalt writeth grants which will bring in [$x] in research revenue hither to our fine institution.
      — Oh, as a subset of commandment 2, thou shalt pay thy alms to the Dean as a tithe from such grant funds.
      — Make any waves and thy consideration for full professorship might be placed on a pike along with other interesting items on your doorstep.
      — Please make a point to drop by the John Jacob Jingleheimer Schmidt professor of bovine existentialism next door. Her hire and presence alone helped us make a killing last FY.

      Branding oneself in academia, in my view, can be as much about external prodding as about personal intention. Though truly, given the nature of competition for positions, one can’t deny the need for and benefit of doing so.

      So tribalism in that sense, considering both personal and administrative priors, could be little more than a flocking phenomenon. If anything, I would think flocking behavior would be one way to examine tribalistic tendencies for aversions to criticism, for example, among peers.

      I don’t equate tribalism with clearly designated anything in industrial organization. Even here, flocking, I believe, can be quite readily abstracted as the predominant behavioral form. The extremes of org structure, I would suggest, can serve as examples – such as overly vertical hierarchies where names and titles may be clear but roles are not (promoting frequent re-org chaos; flocking is protective in some ways); or some of the newer, flatter, matrixed org structures, where sometimes neither title, role, nor formal pecking order is clearly defined in any fashion. Flocking tendencies in these orgs would seem to me more complex to follow. One interesting experiment wrt the latter would be the Valve company – somehow one of their employee handbooks made it to the web. Hunt down a copy if it still exists – interesting read.

      As for the idea that ‘creativity’ is hindered by scientific criticism: I don’t buy that. Creativity has often been wrought from the realization of a constraint. I agree with your thought on ‘unmanageable’. In industry, I generally refer to this as ‘vision without a plan OR a willingness to put the outputs of that vision to a test of their value’. Noting my current situation above, your thought fits this like a glove. Great point.

  8. Just on a personal level, I find good, fair, intelligent criticism of the research literature (and of journalism as in your commentary on David Brooks) very useful. I read the negative stuff you write (and that Tufte has written, and that others write) and try to incorporate it into my internal editor’s voice: When I start writing stuff up, I like to have as complete a sense as possible of the ways in which I may go wrong (as Feynman advises us, in science “the first principle is that you must not fool yourself”). So I read these dissections not with a sense of schadenfreude toward the poor soul who captures your attention, but asking myself, “Where does my own work suffer from the same faults?” These kinds of commentaries are very helpful and I hope you keep at it.

  9. Pingback: Friday links: community assembly vs. Go, Hurlbert vs. neuroscientists, and more | Dynamic Ecology

  10. Anyone can critique another’s work; not everyone can actually improve upon it. Criticism can be helpful, especially when it’s constructive and the reader understands what to do differently. I do agree, though, that focusing on positive examples is generally more useful.
