Some thoughts on academic cheating, inspired by Frey, Wegman, Fischer, Hauser, Stapel

As regular readers of this blog are aware, I am fascinated by academic and scientific cheating and the excuses people give for it.

Bruno Frey and colleagues published a single article (with only minor variants) in five different major journals, and these articles did not cite each other. And there have been several other cases of his self-plagiarism (see this review from Olaf Storbeck). I do not mind the general practice of repeating oneself for different audiences—in the social sciences, we call this Arrow’s Theorem—but in this case Frey seems to have gone a bit too far. Blogger Economic Logic has looked into this and concluded that this sort of thing is common practice in “the context of the German(-speaking) academic environment,” and what sets Frey apart is not his self-plagiarism or even his brazenness but rather his practice of doing it in high-visibility journals. Economic Logic writes that “[Frey’s] contribution is pedagogical, he found a good and interesting way to explain something already present in the body of knowledge.” Textbook writers copy and rearrange their own and others’ examples all the time; it’s only when you aim for serious academic journals that it’s a problem.

One question that the econ blogger did not address is: why did all these top research journals publish a paper with no serious research content? Setting aside the self-plagiarism thing, everyone knows that publication in top econ journals is extremely competitive. Why would five different journals be interested in a fairly routine analysis of a small public dataset that has been analyzed many times before?

I don’t have a great answer to that one, except that the example may have seemed offbeat enough to be worthy of publication just for fun (and, unfortunately, none of the journal editors happened to know that they were publishing a variant of a standard example in introductory statistics books).

Ed Wegman is a prominent statistician (he’s received the Founders Award for service to the profession from the American Statistical Association) who has plagiarized in several articles and even a report for the U.S. Congress! And, as is often the case, the copy is typically worse than the original, sometimes introducing errors, other times simply rephrasing in a way that reveals a serious lack of understanding of the original material. There are various theories of what drove Wegman to steal, but I’ll go for my generic explanation: laziness, and the desire to simulate expertise or creativity where there is none.

The Frey and Wegman stories came out in their full glory a few months ago. I don’t know if Frey is giving public talks. But I was amazed to see, in the program of the Joint Statistical Meetings this past August, that Wegman was involved in two sessions! The first session (“The Human Cultural and Social Landscape”) was organized and chaired by Wegman and featured three speakers, all from Wegman’s department, including Yasmin Said, his coauthor on the paper that was retracted for plagiarism. In his other session, Wegman spoke on computational algorithms for order-restricted inference. The talk is described as a review, so plagiarism isn’t so much of an issue, I guess. Still, I wonder if he actually showed up to these sessions.

Frank Fischer is the political scientist who copied big blocks of text from others’ writings without attribution (also, like Frey and Wegman, about 70 years old at the time of being caught), in what looks at a distance to be another lazy attempt to simulate expertise without actually doing the work of digesting the stolen material. I asked a friend about this case the other day, and he said that to the best of his knowledge Fischer has not admitted doing anything wrong. Unlike Frey (who’s a bigshot in European academia) or Wegman (whose work is politically controversial), Fischer is enough of a nobody that he can apparently survive being called out for plagiarism with his career otherwise unaffected.

Marc Hauser is the recently retired (at the age of 51) Harvard psychologist who is working on a book, “Evilicious: Explaining Our Evolved Taste for Being Bad,” and also reportedly dabbled in a bit of unethical behavior himself involving questionable interpretation of research data. He was turned in by some of his research assistants, who didn’t like that he was being evasive and not letting others replicate his measurements.

I asked E. J. Wagenmakers what he thought about the Hauser case and he replied with an interesting explanation that is based on process rather than personality:

One of the problems is that the field of social psychology has become very competitive, and high-impact publications are only possible for results that are really surprising. Unfortunately, most surprising hypotheses are wrong. That is, unless you test them against data you’ve created yourself. There is a slippery slope here though; although very few researchers will go as far as to make up their own data, many will “torture the data until they confess”, and forget to mention that the results were obtained by torture….

This is a combination of the usual “competitive pressure” story with a more statistical argument about systematic overestimation arising from the statistical-significance filter.

Diederik Stapel is the subject of the most recent high-profile case of academic fraud. Wagenmakers writes:

He published about 100 articles, and in high-ranking journals too (Science being one of them). Turns out he was simply making up his data. He was caught because his grad students discovered that part of the data he gave them contained evidence of a copy-paste job. The extent to which all his work is contaminated (including that of his many PhD students, whom he often “gave” experimental results) is as yet unknown. Tilburg University has basically fired him.

Diederik Stapel was not just a productive researcher, but he also made appearances on Dutch TV shows. The scandal is all over the Dutch news. Oh, one of the courses he taught was on something like “Ethical behavior in research”, and one of his papers is about how power corrupts. It doesn’t get much more ironic than this. I should stress that the extent of the fraud is still unclear.

I’ve never done any research fraud myself, but I have to say I can see the appeal. The other day I was typing data from survey forms into a file for analysis, and I noticed that the data from some of the research participants didn’t go the way they were “supposed” to. I discussed this with my collaborator, who had a good explanation for each person based on what had happened in their lives recently. I could feel the real temptation to cheat and adjust the numbers to what I’d guess they should’ve been, absent the shocks, which were irrelevant to the study at hand. We didn’t cheat, of course, but it would’ve been so easy. There’s no way anyone would’ve checked, and it would’ve made the results much more convincing in a way that seems appropriate in the larger context of the research.

I can see how a scientist such as Hauser or Stapel could justify this sort of behavior in the name of scientific truth. Similarly, Wegman and Fischer probably felt that, in some deep sense, they really were experts in the fields they were plagiarizing. Sure, they hadn’t fully absorbed the literature, but they might have felt they were expert enough that they could understand it if necessary. As for Frey, my guess based on his many writings on academic publication ethics is that he feels that everybody does it, so he needs to play the game too.

39 thoughts on “Some thoughts on academic cheating, inspired by Frey, Wegman, Fischer, Hauser, Stapel”

  1. I salute those graduate students who turned in their professor. That took guts.

    Mindful of the saying “No good deed goes unpunished”, I wonder how this has affected their careers?

    • I agree! I think that would be so hard to do. I wonder if any other professor would even want to mentor them after that, for fear of being scrutinized themselves.

  2. Pingback: The importance of style in academic writing « Statistical Modeling, Causal Inference, and Social Science

  3. quoting noahpoah:

    > > We didn’t cheat, of course, but it would’ve been so easy.

    > Of course, you would say that even if you had cheated, wouldn’t you?

    No, he would not have.

    I’m guessing “noahpoah” is not a private investigator in the tradition of Sherlock Holmes, C. Auguste Dupin, etc. ;-)

  4. Have you ever watched The Squid and The Whale? Jesse Eisenberg performs and claims to have written a famous Pink Floyd song at a talent show. When he’s later asked why he did it, he responds “I felt that I could have written it, so the fact that it was already written was kind of a technicality.”

  5. The Hauser case seems to me to be more about the inherent difficulties of animal studies. Animals do a lot of complicated and/or random stuff, and it’s not obvious how to code their actions. If you code it one way, you get published and get tenure, but if you code it as just random, you get a one-way ticket to Directional State.

    Sometimes, professors hold themselves to high standards in animal studies, as with the famous Nim Chimpsky study of the 1970s, in which Herbert Terrace of Columbia announced that he had been wrong and Noam Chomsky was right: Terrace’s chimp wasn’t learning to use American Sign Language as well as he had expected. But Terrace still wound up the designated villain in the recent documentary “Project Nim,” even though he strikes me as a hero:

    http://takimag.com/article/chimp_bites_woman_talks_about_it#axzz1XnbUfnVU

    • Steve:

      Sure, animals do a lot of random stuff, but it’s still cheating if all the other major scientists report what they see the animals actually do, and you report what they should be doing. That’s why it’s a violation to not share the videotapes of the raw evidence.

      In general, precision is a virtue in science, and one of the challenges of scientific experimentation in general is to design your data collection to allow for precise measurements. That’s one way to build a reputation as a scientist and one reason it can be tempting to fake it.

    • The more basic issue is that the people coding the animal behavior should be blind to the experimental condition that they are coding. If coding is performed blind to condition then bad coding leads to no effect. You can only code behavior based on wishful thinking if you know what outcome you want in the trial you’re looking at. At least some of Hauser’s papers claim to use this method.

  6. I don’t get how the journals get away with this. I’ve been trying to get a project that I wrote as a conference paper extended to a journal article, and two journals have already rejected it because they don’t think it contains enough new material (I disagree, but they’re entitled to their opinions). Is it because of their stature that their work isn’t scrutinized as closely as a PhD student’s? Are the journals they’re submitting to that small?

    It just seems unfair that senior members of academia can bring shame to the idea of publishing while young researchers can’t get anything published because they’re not a famous name.

  7. I found it interesting that it should have something to do with German academia. You realise we have had a rather vivid discussion about several cases of plagiarism here in Germany? That the minister of defence admitted he did something wrong only after a long and troublesome phase of public denial?

    I could be ashamed working in German academia. But nevertheless, I don’t think self-plagiarism, or even plagiarism, is the problem here. Actually, it’s the (public) opinion that plagiarism is not really important, while academic titles are still worn like medals given to you by an ancient king. In addition, publish-or-perish and project-based research with three-year funding are our problem. Once, it was “Gut Ding will Weile haben” (good things take time). Since several decades of public funding, it’s “Der Teufel scheißt auf den größten Haufen” (the devil always shits on the biggest heap). ;)

    Actually, I am ashamed to work in such an environment.
    Any ideas where to find a better environment?

  8. Hi Andrew,
    What do you think about these papers by Bruno Frey:

    Economic Inquiry 2003, title and abstract:
    Are political economists selfish and indoctrinated? Evidence from a natural experiment.
    “Most professional economists believe that economists in general are more selfish than other people and that this increased selfishness is due to economics education. This article offers empirical evidence against this widely held belief. Using a unique data set about giving behavior in connection with two social funds at the University of Zurich, it is shown that economics education does not make people act more selfishly. Rather, this natural experiment suggests that the particular behavior of economists can be explained by a selection effect.”

    International Journal of the Economics of Business 2004 (does cite EI), title and abstract:
    Do Business Students Make Good Citizens?
    “Business students are often portrayed as behaving too egoistically. The critics call for more social responsibility and good citizenship behavior by business students. We present evidence of pro-social behavior of business students. With a large panel data set for real-life behavior at the University of Zurich, two specific hypotheses are tested: do selfish students select into business studies or does the training in business studies negatively indoctrinate students? The evidence points to a selection effect. Business education does not seem to change the citizenship behavior of business students.”

    European Journal of Law and Economics 2005 (does not cite the other papers), title and abstract:
    Selfish and Indoctrinated Economists?
    “Many people believe that economists in general are more selfish than other people and that this greater selfishness is due to economics education. This paper offers empirical evidence against this widely held belief. Using a unique data set on giving behaviour in connection with two social funds at the University of Zurich, it is shown that economics education does not make people act more selfishly. Rather, this natural experiment suggests that the particular behaviour of economists can be explained by a selection effect.”

      • Hi Andrew,
        I think so too. They were written by Frey and your colleague at Columbia, Stephan Meier. I am also not sure why their 2004 JEBO and AER papers do not cite each other. Is this common practice in economics?

        American Economic Review 2004 (does not cite JEBO 2004), title: “Social Comparisons and Pro-Social Behavior: Testing “Conditional Cooperation” in a Field Experiment”

        Journal of Economic Behavior & Organization 2004 (does not cite AER 2004), title: “Pro-social behavior in a natural setting”

        1 AER
        “Many important activities, such as charitable giving, voting, and paying taxes, are difficult to explain by the narrow self-interest hypothesis. In a large number of laboratory experiments, the self-interest hypothesis was rejected with respect to contributions to public goods (e.g., John O. Ledyard, 1995).”

        1 JEBO
        “Studies of important activities, such as charitable giving (e.g. Andreoni, 2002; Weisbrod, 1998), voting (e.g. Mueller, 2003), and tax paying (e.g. Slemrod, 1992; Andreoni et al., 1998), have convincingly argued that such actions cannot be explained by relying on the strict self-interest axiom. (…) The self-interest model has been clearly rejected in a great number of laboratory experiments (see Ledyard, 1995; Davis and Holt, 1993 for surveys).”

        2 AER
        “Recent theories on pro-social behavior focus on “conditional cooperation”: people are assumed to be more willing to contribute when others contribute. This behavior may be due to various motivational reasons, such as conformity, social norms, or reciprocity. According to the theory of conditional cooperation, higher contribution rates are observed when information is provided that many others contribute.”

        2 JEBO
        “According to the notion of ‘conditional cooperation’ people contribute to a public good dependent on the behavior of others. An individual dislikes being a ‘sucker’, being the only one who contributes to a public good while the others free-ride. The more a person believes that others cooperate, the greater is the probability that this person contributes too. As stated above, such social comparison can be due to various motivational mechanisms, such as a social norm to behave appropriately.”

        3 AER
        “Only a few laboratory experiments circumvent these problems and explicitly test conditional cooperation (e.g., Urs Fischbacher et al., 2001). These studies conclude that roughly 50 percent of people increase their contribution if others do so as well.”

        3 JEBO
        “In a recent standard public good experiment, for example, it was identified that, according to this definition, roughly 50 percent of the subjects are conditional cooperators, while a third of the subjects act as free riders (Fischbacher et al., 2001). According to this study, the observation that cooperation declines after repetition in public goods games is due to conditional cooperation: people adjust their contribution according to what others do, but give slightly less.”

        4 AER
        “Each semester, every student at the University of Zurich is asked to decide anonymously whether to contribute to two charitable funds”

        4 JEBO
        “Each semester, all the students at the University of Zurich have to decide whether or not they want to contribute to two official Social Funds in addition to the compulsory tuition”

        5 AER
        “They can make a voluntary donation of CHF 7 (about $4.20) to a fund that offers low-interest loans to students in financial difficulty and/or CHF 5 (about $3) to a fund supporting foreign students. They have the further option not to donate to either fund.”

        5 JEBO
        “the students are asked whether they want to give a specific amount of money (CHF 7.-, about US$ 4.20) to a Fund that offers cheap loans to students in financial difficulties and/or a specific amount of money (CHF 5.-, about US$ 3) to a second Fund supporting foreigners who study at the University of Zurich. Without their explicit consent (by marking a box), students do not contribute to any Fund at all.”

        6 AER
        “while experimental research in laboratories leads to many insights about human behavior, it is still unclear exactly how these results can be applied outside of the laboratory. Our field experiment enables this gap to be narrowed, while still controlling for relevant variables.”

        6 JEBO
        “The experimental evidence may teach us a lot about human behavior. However, it remains an open question how best these results can be applied outside the lab. This paper wants to fill this gap by testing behavioral theories in a naturally occurring situation, thus bringing back external validity to the test of pro-social behavior.”

        7 AER
        “We observe that the higher the expectation of the students about the average group behavior, the more likely it is that they contribute. Students expect, on average, 57 percent of their fellow students to contribute to both funds. They underestimate the actual contribution rate of 67 percent. The coefficient of correlation between the expressed expectations and the contribution to at least one fund is 0.34 (p <0.001).”

        7 JEBO
        “The results of our survey show that expectations about others correlate with the individual decision to contribute to the Social Funds. The coefficient of the correlation between the expressed expectation and the contribution to at least one Fund is 0.34. This correlation is quite large and statistically significant at a 99 percent level (F(1, 3168) = 415.47, p < 0.01).”

        8 AER
        “A change in expectations from 46 percent to 64 percent corresponds to a change in the probability of contributing by around 5.3 percentage points.”

        8 JEBO
        “An increase of the perceived cooperation of others by 10 percentage points increases the individual probability of contributing by 6 percentage points.”

        • Andre,

          You fail to make it clear, indeed make it look the other way around, that the JEBO paper preceded the AER one, May vs December 2004, and that the JEBO one cited an earlier working paper version.

          It is an embarrassment that neither I nor the referees were aware of the earlier work on the Titanic, including in some statistics books, but the version of the Titanic paper submitted to JEBO was the first of this recent bunch by Frey et al. to be submitted anywhere, although it came out a few months after the PNAS version due to referees asking for further work on the statistical testing.

        • Hi Barkley,
          I did not intend to make it look like AER preceded JEBO. But does that matter? Both AER and JEBO cite the same working paper but not each other, while AER seems like an updated version of JEBO, using a lot of the same information and presenting it in a slightly different way. I suspect that, when both versions of this paper were accepted, the editors of AER and JEBO were not aware that the paper was simultaneously submitted elsewhere.

          Some additional information:

          2b AER
          “For example, a positive correlation between expectations about the mean behavior of the reference group and one’s own behavior is consistent with conditional cooperation, but not conclusive, as causality is not clear. Behavior may influence expectations, and not the other way round.”
          2b JEBO
          “Evidence of ‘conditional cooperation’ is identified: when students expect others to contribute, they themselves tend to donate more. However, the direction of causality is not at all clear; one’s own willingness to donate may lead one to expect that others behave in the same way.”

          3b AER
          “To our knowledge, this paper is the first to go further and to test conditional cooperation in a field experiment.”
          3b JEBO
          “This study analyzes patterns of pro-social behavior outside the lab. Factors affecting pro-social behavior in a field setting are identified.”

          8b AER
          “To control for such heterogeneity, we estimate a conditional logit model with individual fixed-effects.”
          8b JEBO
          “We also use personal fixed-effects to control for unobservable heterogeneity. (…) (see the conditional logit model in Table 4).”

          8c AER
          “The coefficient of past behavior indicates the fraction of previous situations in which the subject decided to contribute. More than 50 percent of the students contributed in all previous situations. Around 10 percent never contributed to either of the two funds.”
          8c JEBO
          “Most of the students either always contribute or never contribute to one of the Funds. As we know from laboratory experiments, subjects basically tend to divide into two groups: one group who free-rides all the time and another group of subjects who does not. At the University of Zurich, almost 19 percent of the students who decided at least two times never contributed to the two Funds. On the other hand, the fact that about 49 percent of the students always contribute may be an indicator that students keep on contributing even after several rounds.”

        • Andre,

          As the AER does not list when a paper is submitted (at least I have not noticed it), it is impossible to know for sure if the JEBO and AER versions were submitted at the same time and were identical or not, although this is quite possible. I confess that when I saw the AER version appear later I was more amused than annoyed. It was not a paper that I viewed as super special for JEBO, whereas I put a lot of effort into the Titanic paper and made it the lead paper of not only an issue but a volume of the journal. Hence, I was more annoyed by the JEP piece than by the AER one.

          Another matter is that in 2004 I was less focused on these issues of plagiarism and self-plagiarism than I would come to be later. The reasons for that are explained in my essay “Tales from the Editors’ Crypt: Dealing with Accusations of Plagiarism True, Uncertain, and False,” available on my website at http://cob.jmu.edu/rosserjb . This has been distributed pretty widely, and a section in the middle deals with self-plagiarism. I sent a copy to Bruno Frey upon his request, and he quoted it in his own defense with Olaf Storbeck. I did say that self-plagiarism is not as bad as plagiarism, but I also said that it is still unethical conduct.

        • Barkley,
          Will you contact the then-editor of AER to compare the initial submissions and their submission dates? It might be important, since it involves a new co-author (Stephan Meier) and it is important to determine whether his name should be cleared.

          Andre

        • Andre,

          Well, I am out of the country and away from all my journals for this entire semester, so I do not even know who the editor of the AER was then. So I am certainly not going to contact anyone anytime soon.

          BTW, somebody on ejmr thinks I wrote the letter banning Frey from JEBO. It was written by my successor, Bill Neilson, but I support its content and think that he has handled himself admirably in this difficult situation, as has David Autor at the JEP.

  9. Pingback: The Stapel Case and Data Fabrication « ignorance and uncertainty

  10. Pingback: A Tale of Self-Plagiarism — A Critic of Publishers Proves a Prostitute Is As a Prostitute Does « The Scholarly Kitchen

  11. I think you really ought to look into the Hauser case even more. It’s not mere allegations of data massaging. He’s been found guilty of data fabrication. A whole other level.

  12. Pingback: Groundhog day in August? « Statistical Modeling, Causal Inference, and Social Science

  13. “part of the data he gave them contained evidence of a copy-paste job.”

    What astounds me isn’t so much that some people might do such a thing – but that supposedly smart people are so completely incompetent at it.

  14. Pingback: Another Wegman plagiarism copying-without-attribution, and further discussion of why scientists cheat « Statistical Modeling, Causal Inference, and Social Science

  15. Pingback: Twelfth Linkfest

  16. Pingback: This post does not mention Wegman « Statistical Modeling, Causal Inference, and Social Science

  17. I agree with Hilda that Hauser’s violations are more than just a little coding ambiguity. For example, in one paper there are data reported and displayed from a condition that was never run…

    Also, Stapel’s violations are far beyond any of what you discuss here. Over many years he fabricated data, telling his students and colleagues that the data came from a high school with which he had a special arrangement to collect data. The high school didn’t exist, and the data came from many hours of careful data set building to make it look just right and believable. His staunch refusal to let (even doctoral) students gain access to the participants eventually aroused their suspicion.

  18. Pingback: Hobo Kore Dojo » Blog Archive » Research Citation Lifespan – II

Comments are closed.