Are brilliant scientists less likely to cheat?

In this discussion of Allegra Goodman’s novel Intuition, Barry wrote, "brilliant people are at least as capable of being dishonest as ordinary people."  The novel is loosely based on some scientific fraud scandals from the 1980s. One of its central characters, a lab director, is portrayed as brilliant and a master of details, but she makes a mistake by brushing aside evidence of fraud by a postdoc in her lab.  One might describe the lab director’s behavior as "soft cheating" since, given the context of the novel, she had to have been deluding herself to ignore the clear evidence of a problem.

Anyway, the question here is:  are brilliant scientists at least as likely to cheat?  I have no systematic data on this and am not sure how to get this information.  One approach would be to randomly sample scientists, index them by some objective measure of "brilliance" (even something like asking their colleagues to rate their brilliance on a 1-10 scale and then taking averages would probably work), then do a thorough audit of their work to look for fraud, and then regress Pr(fraud) on brilliance.  This would work if the prevalence of cheating were high enough.  Another approach would be a case-control study of cheaters and non-cheaters, but the selection issues would seem to be huge here, since you’d be counting only the cheaters who got caught.  Data might also be available within colleges on the GPAs and SAT scores of students who were punished for cheating; we could compare these to the scores of the general population of students.  And there might be useful survey data of students, asking questions like "do you cheat" and "what’s your SAT score" or whatever.  I guess there might even be such a survey of scientists, but it seems harder to imagine they’d admit to cheating.
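Neither study exists as far as I know, but the selection problem in the case-control version can be illustrated with a small simulation. Everything in it is invented for illustration: the direction of the brilliance effect, the detection probabilities, and the sample size are assumptions, not claims about real scientists.

```python
import random
from math import exp

random.seed(42)

def sigmoid(x):
    return 1.0 / (1.0 + exp(-x))

N = 20000
low_audit, high_audit = [], []    # cheat indicators from a full audit, by brilliance half
low_caught, high_caught = [], []  # cheat-AND-caught indicators (what a case-control study sees)

for _ in range(N):
    latent = random.uniform(1, 10)
    # brilliance measured as the average of 5 noisy colleague ratings on a 1-10 scale
    ratings = [min(10, max(1, round(random.gauss(latent, 1)))) for _ in range(5)]
    brilliance = sum(ratings) / 5
    # assumption: cheating probability declines with brilliance
    p_cheat = sigmoid(-2.0 - 0.3 * (brilliance - 5.5))
    cheated = random.random() < p_cheat
    # assumption: brilliant cheaters are better at evading detection
    p_detect = max(0.05, 0.9 - 0.08 * brilliance)
    caught = cheated and random.random() < p_detect
    (low_audit if brilliance < 5.5 else high_audit).append(cheated)
    (low_caught if brilliance < 5.5 else high_caught).append(caught)

def rate(xs):
    return sum(xs) / len(xs)

print("full audit:  low %.3f  high %.3f" % (rate(low_audit), rate(high_audit)))
print("caught only: low %.3f  high %.3f" % (rate(low_caught), rate(high_caught)))
```

Under these assumptions the full audit recovers the true gradient, while the caught-only data exaggerate it: counting only caught cheaters makes brilliant scientists look even more honest than they are, because their cheating goes undetected.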

Arguments that brilliant scientists are more likely to cheat

Goodman makes the argument (through fictional example) in her book that brilliant scientists are more likely to be successful lab directors, thus under more pressure to keep getting grants (many mouths to feed), thus susceptible to soft cheating, at least.  Similarly, the cheating postdoc is described as so smart he never had to work hard in college, again under high expectations and cheating partly to maintain his reputation as the golden boy.  On the other side, a more ordinary "worker bee" type will not be expected to come up with a brilliant insight, and so won’t be under that pressure to cheat.

Another argument that brilliant scientists are more likely to cheat comes from some of the standard "overcoming bias" ideas: a brilliant person is more likely to have made daring correct conjectures in the past, so when he or she comes up with a new conjecture, he or she is more likely to believe in it and then fake the data.  (I’m assuming that scientific cheating of the interesting sort is along the lines of twisting the data to support a conclusion that you think is true.  If you don’t even think the hypothesis is true, there’s not much point to faking the evidence, since later scientists will overturn you anyway.  The motivation for cheating is that you’re sure you’re right, and so you overconfidently discard the cases that don’t support your case.)

Arguments that brilliant scientists are less likely to cheat

I’m half-convinced by the overconfidence argument above, but overall I suspect that brilliant scientists are more likely to be honest than less-brilliant scientists, at least in their own field of research. I say this partly because science is, to some extent, about communication, and transparency is helpful here. Also, as illustrated (fictionally) in Goodman’s book, fraud is often done to cover up unsuccessful research. If you’re brilliant, it’s likely that your research will be successful: even if you don’t achieve your big goals–even brilliant people will, and perhaps should, bite off more than they can chew–you should get some productive spinoffs, and a simple cost-benefit analysis suggests that cheating stands to lose you more than you’d gain.

Conversely, for a more mediocre scientist, cheating may be a roll of the dice, which, if it succeeds, can bring you to a plateau, and if it fails, you won’t be that much worse off than before–you don’t have such a big potential reputation to lose.  And if the stakes are low, the cheating might never be discovered:  you get the paper, the job, tenure or whatever, your findings are never replicated, and you move on.
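This asymmetry can be put as a toy expected-value calculation (every number here is hypothetical; the point is only that the size of the reputation at stake flips the sign of the bet):

```python
def ev_cheat(p_detect, gain, reputation):
    """Expected value of cheating: the payoff if undetected,
    minus the reputation lost if caught (hypothetical units)."""
    return (1 - p_detect) * gain - p_detect * reputation

# A brilliant scientist: a large reputation at stake, a modest extra gain.
print(ev_cheat(p_detect=0.3, gain=2, reputation=50))  # strongly negative

# A mediocre scientist: the same gain, little reputation to lose.
print(ev_cheat(p_detect=0.3, gain=2, reputation=1))   # positive: the roll of the dice pays
```

The same arithmetic appears in a commenter's P(success)/P(detect) framing below; the toy version just makes the "more to lose than to gain" point explicit.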

Thinking of honesty as a behavior rather than a character trait

The other thing is that it might make more sense to think of honesty as a behavior rather than a character trait. I’m pretty honest (I think), but that also makes me an unpracticed liar (and, unsurprisingly, a bad liar). So the smart move for me is not to lie–again, more to lose than to gain (in my estimated expected value). But if I worked in a profession where dishonesty–or, to put it more charitably, hiding the truth–was necessary, something involving negotiation or legal maneuvers or whatever, then I’d probably get better at lying, and then maybe I’d start doing more of it in other aspects of life.

Science seems to me like an area where lying isn’t generally very helpful, so I don’t see that the best scientists would be good or practiced liars.  The incentives, at least for the very best work, go the other way.

P.S.  Thanks to Robin Hanson for encouraging me to present arguments on both sides of the question.

6 Comments

  1. p-ter says:

    The other thing is that it might make more sense to think of honesty as a behavior rather than a character trait

    yes. human behavior is context-dependent. this is a simple fact that is often overlooked when people are morally outraged.

    my instinct is that "brilliant" people would be less likely to cheat, because ultimately they're not doing science to pay the bills or whatever, they're doing it because they're fascinated by what they do and interested in figuring it out. cheating, then, serves no purpose.

  2. Anonymous says:

    You can model this using two probabilities, P(success) and P(detect), together with the expected benefits and costs for each outcome.
    The decision to cheat can also substantially decrease the expected cost of producing the paper, and we can easily come up with a model for evaluating the decision to cheat or not.

    One can assume that cheating in theoretical research is almost impossible to get away with, so P(detect)~1. Then we have pure empirical studies, which can potentially be fabricated. In this case, P(detect) depends on the ease of replicating the study. If the data were collected from lengthy field studies and involve proprietary data, there is little chance of being caught and P(detect)->0, assuming that no future studies will try just to replicate the results of the earlier one. Similarly, if the results were generated using a complicated apparatus (e.g., a custom-built instrument in a physics lab, or a complicated custom-built software package), then again P(detect) is low.

    Finally, we have cases where empirical validation is used to validate a theory. This is a tricky case, and a brilliant scientist can be successful in "cheating by shortcuts". The cases of "soft cheating" that you refer to seem to fall under this category.

    A person who can examine his/her own cheating critically can substantially decrease P(detect) and substantially increase P(success). This can be achieved by fabricating experiments, making them seem realistic (with noise) and match "nicely" the results "predicted" by the theory. Not-so-brilliant scientists will fail miserably in such cases and will be exposed fast. Brilliant scientists know how to manage expectations and can avoid being caught for a long time. Many of the high-profile cheating cases involved such professors/researchers, producing results that the scientific community was eager to see.

  3. Andrew says:

    P-ter,

    Even if you're only doing science to figure things out, cheating can serve the purpose of disseminating your findings more widely. If I've made a great discovery and I'm sure I'm correct, and a little cheating will move it from publication in J. Obscure Diseases up to Nature, then cheating could indirectly advance my goal of figuring things out, by moving science in (what seems to me to be) the correct direction, as well as by getting me grant $ that allow me to do more research, etc.

    The flaw in the argument, of course, is that a scientist should not be so sure he or she is correct, and I'd hope that a brilliant scientist wouldn't make that mistake. (And, as I said, I didn't buy the "brilliance" of the lab director in that novel.)

  4. vasishth says:

    I have always found it hard to find a clear definition of cheating in science, and find grades of cheating even more difficult to define–at what point does soft cheating transition from acceptable to unacceptable? If there isn't such a point, why have the gradation?

    Peter Medawar's book/piece about the spotted mice is a pretty clear case, as is the Bell Labs (or was it some other lab?) guy (if you remember that story). I've had two students who have, without hesitation, suggested we just fake the data for their thesis, or mask the unhappy-making results. All clear instances of cheating.

    But what about the situation where you get numerous inconclusive results that are inconsistent with your favorite theory, and you do not publish them (or can't publish them even if you want to, because journals in general prefer "highly" significant effects)? Would you call that soft cheating? Or you get results that are inconsistent with your theory and you do not publish them because you do not believe them (I think Andrew mentioned this case above–maybe this is clearer).

    I think that a broader problem is that people absorb the view early in their careers that it's wrong to be wrong. You cannot flip-flop on a theoretical position based on what the data say; it's considered a mark of failure for a scientist to say something at point x in time and be shown wrong at point x+n in the future. One is expected to get married to a theory, to "have a position". I think this encourages the kind of soft cheating I am talking about.

    I don't know how many people do this kind of thing. Who would admit to it, even in a questionnaire?

  5. Igor Carron says:

    Andrew,

    Brilliant scientists still have to face peer review, which we all know to be random :-).

    Bernard Chazelle's take on zero-knowledge proofs
    ( The security of knowing nothing
    http://www.cs.princeton.edu/~chazelle/pubs/nature… )
    makes me think that the cheating cannot go on for a long time.

    "..The surprise is that, to enforce honesty
    among distrustful parties, randomness must
    be thrown into the mix…"

    Igor.

  6. Bill Gardner says:

    vasishth raises an important point: cheating is a vague term, in the sense of the paradox of the heap (how many grains of sand does it take to define a heap?).