This post does not mention Wegman

A correspondent writes:

Since you have commented on scientific fraud a lot, I wanted to give you an update on the Diederik Stapel case. I’d rather not see my name on the blog if you elaborate on this any further. It is long but worth the read, I guess.

I’ll first give you the horrible details which will fill you with a mixture of horror and stupefied amazement at Stapel’s behavior. Then I’ll share Stapel’s abject apology, which might make you feel sorry for the guy.

First the amazing story of how he perpetrated the fraud:

There has been an interim report delivered to the rector of Tilburg University.

Tilburg University is cooperating with the University of Amsterdam and the University of Groningen in this case. The results are pretty severe; I provide here a quick and literal translation of some comments by the chairman of the investigation committee. This report is publicly available on the university webpage (along with some other things of interest), but in Dutch:

What have we found? The size of the fraud is severe. So far we are sure that at least 30 articles made use of data fabricated by Stapel himself. These are publications in the best peer-reviewed journals in his field. We have serious suspicions of fraud in a few dozen other articles, some book chapters, and conference proceedings. This fraudulent practice has been going on for years, at least since 2004. Our first task, determining the size and time span of the fraud, will take much longer. We, along with colleagues from Groningen and Amsterdam, will investigate each of his ca. 130 articles, especially the data-analysis parts. The same goes for the 24 book chapters and the numerous proceedings.

But we already know the nature of the fraud. The main technique appears to be the following. Along with a junior researcher (a master’s student, a PhD student, or a postdoc), he would develop a theory with testable hypotheses, just between the two of them. Next they would prepare the experiments in excruciating detail: questionnaires, incentives, visual materials, etc. In many cases, the junior researcher needed to provide tables summarizing the expected outcomes of the experiments. This preparatory phase was intense and demanded a lot of time. Next came the execution phase. The experiments were typically conducted (allegedly) in schools or educational institutions throughout the country. The junior researcher provided the research materials to Stapel, and in many cases he would put these in the trunk of his car. Allegedly, Stapel would take these to the schools himself. He claimed that the experiments or surveys were administered in the schools. The experiments were allegedly carried out by ‘paid research assistants’, who, he claimed, entered the data and constructed the datasets. After a while the junior researchers received the data from Diederik Stapel. The junior researcher carried out the empirical analysis and wrote a first version of the paper or article. Our report contains different schemes, but the approach outlined above seems to be typical.

Not only do we find the size of the fraud astonishing (each day the commission discovered something more shocking), but even worse is the personal damage he has caused. He never considered the young people whom he mentored and guided while using them for his own prestige and glory. At this point we are only sure that 7 out of 21 PhD students (from Groningen and Tilburg) did not use fabricated data. All other dissertations used data which went through the hands of Diederik Stapel and can be considered suspect. We are talking about young, ambitious researchers at the beginning of their careers who have found out that their advisor lied to them. They can no longer be proud of their own dissertations and may have to scrap some articles (sometimes the majority) from their CVs. The investigating committees have found that the junior researchers had no knowledge of the fraud and no way of discovering it.

How did Stapel manage to pull off such massive fraud? The primary explanation seems to be his methodology, a combination of abuse of power and a subtle approach. The careful and thorough preparatory phase of each experiment did not leave any doubt that this was actual research. The research schedule took account of school holidays, and the schools (allegedly) only wanted to deal with Stapel himself. Stapel ‘rewarded’ schools (allegedly) with lectures or provided computers, beamers, etc. to the schools. This exclusive approach, he claimed, allowed continued cooperation; the schools would not appreciate junior researchers dropping in on their classes. In other instances, he would claim that the paper versions of the questionnaires were not available, as neither the schools nor Stapel could store them, etc.

Stapel had visible power. He was the top dog of the department, if not the faculty. He became department head and later on Dean. His influence was unquestioned. He was admired but also feared. When a collaborator would ask for questionnaires or other specifics, Stapel would point out that their close collaboration merited mutual trust. If needed, he would (in a subtle manner) point out that the collaboration was not guaranteed or that a PhD position was not 100% certain. Abuse of power seems to have been common.

The report goes on. But it is just amazing. The committee investigating the issue seems to be pretty straightforward. The rector indicated that he wants to see every detail resolved. Everyone is aware of the damage this may cause academic research (especially in the social sciences), but the best approach seems to be to investigate this thoroughly.
The report gives some (in my opinion convincing) reasons why Stapel seemed to be in this game on his own. Moreover, they make some recommendations.

I thought you might want to follow up on this case.

I am always worried that I coded something wrong in my research or that my data cleaning or outlier choices may be interpreted as fraud (that is why I keep extensive records of all my research and justify every step in my code). When you are confronted with such a lunatic, you feel awful.
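
As an aside, here is a minimal sketch of what that kind of record-keeping can look like in a cleaning script, assuming a hypothetical survey dataset with a 1-7 “score” column; the column names, cutoffs, and logging scheme are my own illustration, not the correspondent's actual code. The idea is simply that every cleaning decision is an explicit, commented rule, and each rule logs how many rows it affected, so the choices can be audited later.

```python
# Sketch of an auditable cleaning script: every decision is an explicit,
# commented rule, and each rule logs how many rows it affected.
# Column names, the 1-7 scale, and the example data are hypothetical.

import numpy as np
import pandas as pd

def clean(df: pd.DataFrame, log: list) -> pd.DataFrame:
    # Rule 1: drop rows with a missing outcome; they cannot enter the analysis.
    n0 = len(df)
    df = df.dropna(subset=["score"])
    log.append(f"dropped {n0 - len(df)} rows with missing 'score'")

    # Rule 2: responses outside the instrument's 1-7 range are data-entry
    # errors by definition of the scale, so they are removed (not winsorized).
    n1 = len(df)
    df = df[(df["score"] >= 1) & (df["score"] <= 7)]
    log.append(f"dropped {n1 - len(df)} rows with 'score' outside 1-7")
    return df

if __name__ == "__main__":
    raw = pd.DataFrame({"id": [1, 2, 3, 4, 5],
                        "score": [3.0, 5.0, np.nan, 9.0, 6.0]})
    audit_log = []
    cleaned = clean(raw, audit_log)
    print(cleaned)
    print("\n".join(audit_log))  # the audit trail: one line per cleaning rule
```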

Now a response (also translated from Dutch by my correspondent) by a spokesperson on behalf of Diederik Stapel:

The last couple of weeks I have been thinking about whether I should respond and, if so, what to say. It is difficult to find the proper words. The commission has spoken. And now I have to, and I want to, say something, although it is difficult to say the right thing.

I have failed as a scientist, as a researcher. I have adjusted my research data and faked research. Not once, but several times, and not for a brief moment but for a prolonged period. I realize that this behaviour has shocked and angered my colleagues and my profession, social psychology. I am ashamed and I regret this.

Science is a human endeavor, teamwork. I have enjoyed collaborating with numerous talented and motivated colleagues. I would like to stress that I have never informed them of my practices. I offer my sincere apologies to my colleagues, my phd students and the entire academic community. I realize the suffering and sadness I caused.

Social psychology is a large, solid, and important research field that offers beautiful, unique insights into human behaviour and therefore deserves proper attention. I have made the mistake of trying to shape the truth and present the world as somewhat nicer than it is. In modern science, the standards are high and the competition for scarce resources is enormous.

The last couple of years the pressure has gotten to me. I could not withstand the pressure to score, to publish, to be better. I wanted too much too soon. In a system with few checks, where people often work on their own, I took a wrong direction. I would like to stress that the mistakes I made were not out of self interest.

I realize that many questions are left unanswered. My current condition does not allow me to answer them. I’ll need to dig much deeper to figure out why this has happened, what pushed me to do this. I need help, and meanwhile I have already gotten some. I would like to leave it at this for now.

Given all the fraud in business, government, etc., it is perhaps no surprise that there is fraud in science too. There’s something about these cases: they always seem to look particularly sleazy when you get to the details. Glengarry Glen Ross it ain’t.

33 thoughts on “This post does not mention Wegman”

  1. I understand the pressures of science and academia. One can make a case that the scientific discipline’s obsession with positive results is really frustrating and unfair, and rewards people for doing the wrong things. However, I don’t find his mea culpa entirely sympathetic.

    “I would like to stress that the mistakes I made were not out of self interest.” – this really narrows the definition of self interest to a preposterously small space. If he didn’t want “too much too soon” for himself, then who did he want it for? Did he have a moral responsibility to provide the field with exciting, unproven hypotheses?

    “what pushed me to do this” – even now this sounds like he’s shirking responsibility. Sure, everyone is a product of what made them who they are, but an apology doesn’t seem like the smartest place to dance around issues of personal responsibility, and it just comes off as self-serving here.

    Not saying that situations like this can’t happen to reasonable people tempted by the wrong things; as Jonathan Schooler said in the New York Times, “It’s almost like everyone is on steroids, and to compete you have to take steroids as well.” I just think the tone of his apology is kind of off.

  2. If psychology researchers were more willing to share their data, such incidents might be less common. As a political science researcher who uses (mostly) publicly available data, I cannot imagine this happening in political science. I’m constantly surprised by how generous some political scientists are with their data – it bodes well for the discipline. Journals imposing requirements that datasets and replication code be uploaded along with published articles is a great thing as well. Replication becomes much easier for students and data sharing allows the application of many more minds to interesting problems.

    I’ve always wondered why psychologists are so reluctant to share their data. Sure, I appreciate the effort and time it takes to recruit participants for experiments, and that the discipline’s guild-based approach rewards “hoarding for future generations” rather than openness. However, since most of them receive federal funding (NSF, NIH, NIDA, etc.), shouldn’t there be a requirement that research funded by taxpayer money be made available (after removing all identifiable data, of course), at least in the US?

    • Tt:

      I agree with you 100%. Unfortunately, in scientific fields with human subjects it can be a real hassle to share data, at least in the U.S. Lots of annoying Institutional Review Board rules.

      • Andrew or anyone else, can you tell us what you would have looked for in the primary dataset that could have given Stapel away (and wasn’t already reported or calculable from the published papers, e.g. improbably large effect sizes)? And what would have been un-fakeable by someone like Stapel if data sharing & verification had already been the norm?

        Lots of people are saying the solution is that psychologists should upload and share primary datasets. I can see why that would be good for science in other ways, but I don’t see how that would have stopped Stapel or other deliberate frauds.

        • I agree 100% that Schooler’s data sharing solution would only have slowed Stapel down, Sanjay. A guy that pathological simply would have burned a few more midnight candles prior to uploading and sharing his bogus data sets. And we here in Austin miss you terribly, by the way!

    • Well, the problem is not solved by sharing data if the data was faked in the first place. I suppose it can be very hard to detect fake data, depending on the skill of the forger.

      What is needed are verifiable techniques of data collection, research protocols, etc.

  3. The abuse of graduate students is what stands out for me. He actually allowed them to use fake data for their dissertations. He wasted their time and energy on fabricated research, and he used intimidation to do it. Fraud happens, it’s unfortunate, and people who commit fraud deserve to lose their academic careers. But what are those former students supposed to do?

    I would like to see the apology acknowledge that he did more than succumb to the pressure to publish. He hurt less powerful people, and he did it over and over and over. That’s the most despicable part of this whole thing.

    “The junior researcher carried out the empirical analysis and wrote a first version of the paper or article.” Of course. Of course he was also making junior people do most of the work on top of everything. Gah.

    • I definitely agree with your point of view. Transparency and replication are useful only in part.
      The imbalance of power in academia, and junior researchers being blackmailed by whoever manages the resources, are chronic problems that make all this possible. Research is not a solitary adventure, as he claimed. But too often critique is silenced by power.
      While in this case producing fake data was somehow “punished”, the more structural conditions that made it possible are just taken for granted. Of course.

  4. Data sharing and statistical solutions are worthy corrections but they will not be enough. We will never be able to audit everything an investigator does, and even the most rigorous studies always have a chance of producing spurious results.

    We need to value and disseminate independent replications more than we do. Two possibilities are (1) once a journal publishes an empirical study, it should be obliged to publish exact replication attempts in an online supplement; and (2) when you find an article in a database, you should be able to find published replication attempts via a “replication attempted by” link similar to the “cited by” links that are becoming more common. I wrote about this in the context of the Bem ESP paper here: http://hardsci.wordpress.com/2011/05/10/how-should-journals-handle-replication-studies/

    • The whole problem is journals’ emphasis on being “theory advancing”.

      In political science, for example, AJPS does not accept unsolicited replications as a policy. Any replication should contribute substantial new theory.

      But how can we “advance theory” if what is published cannot be replicated? It strikes me we are building castles in the sand.

      • To me this says more about good chunks of quantitative scholarship in PoliSci. In many fields – certainly in IR and comparative, where they use country/year data – you can take pretty much any result and, by changing modeling assumptions and maybe adding a variable, significantly change the regression results. If AJPS allowed pure replication papers there would be no end of it.
        The redeeming thing about polisci is, though, that, as TT says above, authors are super-willing to share data and usually even .do or .log files (or R code). I’ve just had a paper accepted that introduces a new measure and replicates 3 papers and a book chapter in the paper and, for good measure, includes 3 more replications in an online appendix. The fact that we can do that with _relatively_ moderate amounts of work is really cool.

        • The argument that results are fragile and hence replication would result in a deluge of marginally interesting papers is, put bluntly, postage-stamp thin.

          Clearly editors would still have to exercise judgement as to what gets published. And yes, a paper that adds a squared term in one of the original paper’s regressions and kills significance is unlikely to merit publication — we might not have a good way to judge the specification.

          But a paper that replicates a policy-influential paper, finds serious clerical errors, plain-vanilla methodological flaws, etc., and thereby reverses results that are being cited widely probably merits at least a research note.

          For example, Burnside and Dollar had a *very* influential paper on foreign aid and growth that the World Bank used for PR to the max. Easterly then showed that adding one more year of data killed all the results.

    • Although not in writing, I’ve suggested an idea similar to your #1. In my opinion, when a journal publishes a paper it is putting its “stamp” on those results and thus should also be responsible for helping people figure out when the results do not replicate. The trick, as always, is to figure out whether non-replications are due to chance, experimenter error, or real and interesting moderating variables.

  5. What a loser! He didn’t do this out of self interest? He needs “to dig much deeper to figure out why this has happened”?

    Really? I may not have the “beautiful and unique insights into human behavior” that a social psychologist has, but I know why he did it. It was self interest.

    Incidentally, I wrote a blog post with a simple model explaining why conspiracies are rarely tried and usually fail. The very last point made seems especially relevant to this case. Here it is: http://www.entsophy.net/blog/?p=42

  6. I’m very happy your Dutch correspondent uses the word “beamer” for digital projector – (I knew it was called that in German, I wonder if in Dutch, too). “Digital projector” is a terribly clumsy term for a, by now, very common appliance. (There is of course the problem of the phonetically identical Beemer, but I don’t think chances of confusing the two are overly high).

  7. Prima facie contradictory:

    “I would like to stress that the mistakes I made were not out of self interest.”

    “In modern science, the standards are high and the competition for scarce resources is enormous.”

  8. It seems like one reason he got away with this for so long is that he did a fair amount of work in politically correct areas like exposing white racism and male sexism. There is so much demand for politically correct results that it’s not surprising that supply appeared to meet demand.

    • Steve:

      Yup. The same thing seems to have happened (in the other direction) with Kanazawa, that his claims got a lot of attention because they reinforced people’s stereotypes about men and women, blacks and whites, etc. Engineers have more sons, nurses have more daughters, etc etc. Some people just love this stuff and just don’t want to hear that the data aren’t there to support these speculations. But Stapel did a lot more work. Kanazawa just downloaded some data and ran some regressions. Stapel apparently had gone to the trouble of printing up fake surveys to stick in the trunk of his car, then he put in the effort to create fake datasets all by himself. For a guy who was too lazy to do real science, he seems to have been pretty industrious!

  9. It is coincidental that Gelman blogged on Stapel today (I have not kept up while traveling), because I was planning to post today in response to Gelman’s earlier post, to which Gelman pointed me when I inquired last week whether he knew of this case. I was surprised that in that post he was sympathetic to Stapel, considering that, in some sense, Stapel was just filling in what he already knew to be true! I concur with the previous poster (Sailor) on the role of political correctness in increasing gullibility.

  10. The report is also available in English:

    http://www.tilburguniversity.edu/nl/nieuws-en-agenda/commissie-levelt/interim-report.pdf

    Notice, by the way, that the actual method he used to fake data is described as very clumsy and easily detectable by the trained eye: bizarre correlations, means that are exactly identical in two supposedly different samples, etc. Indeed, this is how he was finally unmasked. (A rough sketch of this kind of screening check appears below.)

    So yes, forcing him to make his data publicly available would probably have closed this avenue of fraud for him.
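
    To make the point concrete, here is a minimal sketch of the kind of screening a reader with access to raw data could run, assuming a hypothetical dataset with a “condition” column and numeric response columns; the column names, thresholds, and toy “faked” data below are my own illustration, not the committee's actual procedure. It flags pairs of supposedly independent groups whose means are suspiciously identical, and variable pairs with implausibly high correlations.

    ```python
    # Sketch of two simple screens on a raw dataset: (1) pairs of supposedly
    # independent groups whose means agree to many decimal places, and
    # (2) variable pairs with implausibly high correlations.
    # Column names, thresholds, and the toy "faked" data are hypothetical.

    import numpy as np
    import pandas as pd

    def identical_means(df, value_col, group_col, tol=1e-9):
        """Return group pairs whose means on value_col agree to within tol."""
        means = df.groupby(group_col)[value_col].mean()
        groups = list(means.index)
        return [(a, b, float(means[a]))
                for i, a in enumerate(groups)
                for b in groups[i + 1:]
                if abs(means[a] - means[b]) < tol]

    def extreme_correlations(df, cols, threshold=0.95):
        """Return variable pairs whose absolute correlation exceeds threshold."""
        corr = df[cols].corr()
        return [(a, b, float(corr.loc[a, b]))
                for i, a in enumerate(cols)
                for b in cols[i + 1:]
                if abs(corr.loc[a, b]) > threshold]

    if __name__ == "__main__":
        # Toy "faked" dataset: the treatment scores are a copy of the control
        # scores, and a second variable is a linear function of the first.
        rng = np.random.default_rng(1)
        base = rng.normal(5.0, 1.0, 50)
        df = pd.DataFrame({
            "condition": ["control"] * 50 + ["treatment"] * 50,
            "score": np.concatenate([base, base]),
            "rating": np.concatenate([base, base]) * 2 + 1,
        })
        print(identical_means(df, "score", "condition"))      # flags copied means
        print(extreme_correlations(df, ["score", "rating"]))  # flags r = 1.0
    ```

    Of course, as noted above, a determined fraud could fabricate data that passes such checks; the point of the sketch is only that Stapel's particular fabrications were apparently crude enough to fail them.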

  11. Well, he is still perfectly in denial: “I would like to stress that the mistakes I made were not out of self interest.” In whose interest then?

    Over the last seven years the guy published on average two papers every month. Huh? Nobody suspected anything? If so, they deserved to be fooled.

  12. Pingback: Insecure researchers aren’t sharing their data « Statistical Modeling, Causal Inference, and Social Science

  13. Not even close to feeling sorry for him. Academics who crack because of some perceived pressure are losers; at worst they’ll end up in the private sector making tons of money if they have the kind of training that statisticians do, right?

Comments are closed.