Should scientists be allowed to continue to play in the sandbox after they’ve pooped in it?

This picture is on topic (click to see the link), but I’d like it even if it weren’t! I think I’ll be illustrating all my posts for a while with adorable cat images. This is a lot more fun than those Buzzfeed-style headlines we were doing a few months ago. . . .

Anyway, back to the main topic. I got this email from Jonathan Kurtzman:

Thought you might be interested in this: researcher who had to retract 13 papers for image manipulation fraud, who is banned from funding in Germany where she worked, has received a grant from Cancer Research UK. CRUK relies on the school to review proposals, so …

The linked article, by Ivan Oransky and Adam Marcus, is entitled, “Do scientific fraudsters deserve a second chance?”

When put that way, sure, how can you deny someone a second chance? Indeed, I have worked with various people who have committed scientific misconduct. No cases of image manipulation that I know of, but some plagiarism, some publishing of results known to be erroneous or meaningless, and some misleading presentation of research in which negative results were buried. These people have continued to get funding, and it’s not like I called up funding agencies to blow the whistle.

On the other hand, if I were on a panel considering funding, would I give any money to someone who’d cheated in this way? Nope. The cost-benefit just doesn’t work out. Even honest, well-intentioned researchers do sloppy work all the time. If you have someone who comes in with the willingness to cheat, the game’s over before it began.

I’m not saying these people should starve. I hope there’s a useful way for them to contribute to society. I just wouldn’t want them working on any scientific project that I’m involved with. Maybe they could help out in some other way, for example making sure the lab equipment is kept up to date, or moving furniture, or operating the copy machine, or, I dunno, there must be lots of ways they could help. Just keep them away from the data!

14 thoughts on “Should scientists be allowed to continue to play in the sandbox after they’ve pooped in it?”

  1. I think we talked about this once before, in the case of LaCour. He tried to start a business centered around data visualization, which seems like a good use of his skills. I can’t find his website now; maybe the business tanked, as they so often do, or he found gainful employment doing the same thing. It seems like a good solution: he presumably still has access to data, but he has questionable ability and no incentive to fake any of it. He’s effectively barred from being a “scientist,” though, and probably from any field that requires a high level of integrity.

    I would support something like a 20-year ban; I think people can change over that timescale. Given the realities of the job market, that’s probably a lifetime ban for most (what 40- or 50-year-old is going to take an entry-level scientific position?), but an extremely dedicated individual might find a way back, and those are the people we’d likely want to continue doing science anyway.

  2. Hi Jacob,

    Michael J. LaCour is still on the Internet as Michael Jules (Jules being his middle name). He had a website: Michael Jules Data Visualization Developer (www.michaeljules.xyz) that now seems to be gone, but he has a github account (https://github.com/michaeljules). He had a LinkedIn account as Michael Jules, which seems to be gone. He still has a Facebook account as Michael Jules.

    I heard through the grapevine that UCLA required him to take a year off and then would allow him to continue his doctoral work; I have no idea whatsoever if this is true.

  3. Cancer Research UK ain’t got nothing on the Karolinska Institute in Stockholm: “The Karolinska University Hospital and the Karolinska Institute (KI) in Stockholm ignored warning signs when they hired surgeon Paolo Macchiarini in 2010, an independent panel concluded this week.”

    http://www.sciencemag.org/news/2016/09/panel-swedish-hospital-should-never-have-hired-star-surgeon

    “The first of the three transplant patients died 30 months after the procedure following severe complications from the synthetic trachea. The second patient died after four months from an unknown cause. The third patient suffered very severe complications that have required continuous hospital care since the transplant in 2012.”

    In this case, no, this potentially murderous self-glorifying hack should never be allowed to play in the sandbox again.

  4. The lady doth protest too much, methinks.

    …some publishing of results known to be erroneous or meaningless, and some misleading presentation of research in which negative results were buried. These people have continued to get funding, and it’s not like I called up funding agencies to blow the whistle…I just wouldn’t want them working on any scientific project that I’m involved with.

    Regarding Rivers, see http://statmodeling.stat.columbia.edu/2014/09/30/didnt-say/ (search for COLOGIT). Bartels has “buried” negative results (your work), though maybe you don’t work with Bartels. King’s EI has been shown to be pretty much the Goodman model (“Interestingly, Monte Carlo simulations indicate that the King model and OLS yield virtually identical estimates in the vast majority of cases”–Cho and Manski (2008)). The much worse problem with King, though, is his work on Social Security ( http://www.nytimes.com/2013/01/06/opinion/sunday/social-security-its-worse-than-you-think.html); but see https://outlook.office365.com/owa/?path=/attachmentlightbox, which Krugman refers to as “offering among other things one of the best examples I’ve ever seen of a brutally polite intellectual takedown” ( http://krugman.blogs.nytimes.com/2013/01/19/you-know-nothing-of-our-work/). While EI is just an example of hyping relatively meaningless results, the Social Security work will be used as ammunition to further reduce benefits for one of the more vulnerable groups in the US. The overall point is that once you took on the joint appointment in political science, you took a giant leap toward the type of research you purport to dislike (one wonders whether your frequent forays into criticizing academic social psychology are in fact a displacement of qualms you have about political science work). Anyway, come back from the dark side, and do statistics.

    • numeric,

      I’m a young political scientist interested in methodology. I see you’ve criticized a number of prominent methodologists’ work here, and it seems your criticisms are quite well developed. I’d of course love to avoid making the kinds of errors you’re pointing out.

      Is there a general type of error you think political scientists tend to make? What would you recommend so that I can try to improve my own scientific practice?

      Thanks for sharing your thoughts.

        • Numeric:

          I agree that lots of bad work is done in political science, and I write about some of this on the blog. One question that I’m not quite sure about is, where does one draw the line between incompetence and malfeasance? For example, Larry Bartels has done some interesting work and he’s made real contributions to our understanding of politics. I think his work on shark attacks and his promotion of the smiley-faces-and-immigration claim are in error, but I think they’re honest errors: Larry was doing his best but he didn’t have a full understanding of forking paths when writing about these things. At some point, though, if Larry keeps clinging to these ideas after they’ve been shot down, then I think that verges on irresponsible behavior on his part. Still, it’s nothing like the “pooping in the sandbox” behavior of Stapel, Hauser, LaCour, Wegman, or Dr. Anil Potti.

          Eric: I’m not sure about general errors. Political science is hard. It’s an observational, not an experimental, science, and measurements can be difficult. To start, I think researchers should be aware of the importance of connecting measurements closely to theory, and I think they should avoid the NHST framework that’s associated with so many problems. That’s one reason I’m wary of the whole “empirical implications of theoretical models” movement: It sounds rigorous, but when I actually see these research papers and PhD theses and grant proposals listing Hypothesis 1, Hypothesis 2A and 2B, etc., it never seems quite right to me.

        • Eric

          Here’s what you should do to be professionally successful as an academic political scientist:

          http://www.socsci.uci.edu/~bgrofman/Wuffle-Advice%20to%20Assistant%20Professor.pdf

          Here’s what you should do to improve your scientific practice:

          http://calteches.library.caltech.edu/51/2/CargoCult.htm

          The two approaches are mostly orthogonal.

          Andrew/Eric:

          The level of ignorance of basic statistical practices in political methodology is overwhelming, and it works both ways (ignorance on the part of the practitioner, or the practitioner playing on the ignorance of the political science community). For example, in “Heterogeneity in Models of Electoral Choice,” it is the ignorance of the political science audience not to know that random coefficients can’t be treated as fixed. In “A Solution to the Ecological Inference Problem,” Chapter 10 devotes its first 16 pages to an empirical analysis of a dataset with the EI method without once mentioning the Goodman estimate (of which EI is supposed to be an improvement); my suspicion is that it gives basically the same result as OLS, which is why it isn’t mentioned. Instead, section 10.3.2 ends with a paragraph claiming that “in an ordinary regression model, the best forecast of the dependent variable for given values of the explanatory variables is the fitted value” [in other words, X\hat{\beta}]. But the King model is not an ordinary regression model once one assumes a joint probability distribution for the explanatory variables, and there are well-known formulas for adjusting the fitted values under these assumptions (see https://en.wikipedia.org/wiki/Multivariate_normal_distribution under conditional distributions, or Altman et al. in “Ecological Inference: New Methodological Strategies”). Once again, ignorance. Ignorance is strength. Andrew will have to decide if these are “disqualifying” research practices, but my point (see below) is that once you’re in the swamp you’re going to get muck on you, and why get in the swamp unless there is a socially useful purpose (and personal aggrandizement isn’t that for me)?
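          The conditional-distribution formula being referenced above can be made concrete. This is not from the original comment; it is a minimal numeric sketch with invented means and covariances, just to show that when the variables are jointly normal, the best forecast of y given x comes from the conditional mean, not from an unconditional fitted value:

```python
import numpy as np

# Sketch of the multivariate-normal conditional-distribution formula:
#   E[y | x] = mu_y + (sigma_xy / sigma_xx) * (x - mu_x)
#   Var[y | x] = sigma_yy - sigma_xy^2 / sigma_xx
# All numbers below are invented for illustration.

rng = np.random.default_rng(0)

mu = np.array([1.0, 2.0])        # means of (x, y)
Sigma = np.array([[1.0, 0.8],    # joint covariance matrix of (x, y)
                  [0.8, 2.0]])

x0 = 2.5                         # an observed value of x

# Closed-form conditional mean and variance of y given x = x0:
cond_mean = mu[1] + Sigma[0, 1] / Sigma[0, 0] * (x0 - mu[0])
cond_var = Sigma[1, 1] - Sigma[0, 1] ** 2 / Sigma[0, 0]

# Monte Carlo check: draw from the joint distribution and look at
# the y-values whose x is near x0; their mean should approximate
# the closed-form conditional mean.
draws = rng.multivariate_normal(mu, Sigma, size=1_000_000)
near = draws[np.abs(draws[:, 0] - x0) < 0.01]
print(cond_mean, cond_var, near[:, 1].mean())
```

          The point is only that once a joint distribution is assumed for the explanatory variables, the appropriate forecast adjusts the fitted value using the covariance structure, exactly as in the formula above.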

          The sad fact, however, is that it is very difficult to use a statistical technique in political science to actually settle a question. It used to be that there were different models of vote choice (sociotropic, retrospective, issue space, etc.), and I originally wondered why these weren’t tested against each other by statistical means (there are tests–Cox’s non-nested hypothesis tests, for example). As I grew more mature I realized two things: first, the tests almost certainly would not provide clear, definitive proof of one theory over another; and more importantly, the practitioners of political science did not want such a comparison even if it were possible. Academic political scientists like to tell stories, and reality gets in the way of these stories (or, as Orwell put it in 1984, parison–his clever way of shortening “comparison”–is the enemy).

        • Numeric:

          Regarding King’s ecological regression book: I take full responsibility for the mathematics, the statistical model, the fitting of the model, and most of the decisions of what to graph—but the project was a division of labor in which Gary was responsible for choosing the problem, performing the literature review, finding the applications, and writing the words. And once he decided he wanted to attach his name alone to the project, I stopped working on it. So I refuse to accept blame for any mistakes in the final published product!

          I did write one paper on ecological regression (with Ansolabehere, Price, Park, and Minnite), and I kind of like that paper, but I haven’t thought much about the topic since.

        • Andrew,

          Sometimes when I hear about _connecting measurements closely to theory_, it’s then followed by a recommendation of EITM. Evidently that’s not what you have in mind. How would you recommend thinking about how to connect measurements closely with theory, in applied political science?

  5. Andrew

    Didn’t realize you were that involved in the project, but my point (and I acknowledge your lack of responsibility) was that Table 10.3 in King’s book, which gives the EI estimates of turnout by race, could also easily include the OLS estimates. Since the main point of the book was to show that EI was better than the Goodman model, it is misleading in the extreme not to include them. I’m tweaking you a little bit for the sin of the Pharisees regarding misleading research–I will say that every time I’ve given a presentation you were present at, your comments have been reasonable, and you’ve done a great service with this blog and your software efforts in general. I just think it’s a waste of your time to work on political science problems when your comparative advantage is statistics.

    So to Eric, a very good way to be an excellent methodologist is to include evidence that doesn’t support your thesis (this is the point of Feynman’s article), of which I’ve given some examples where I don’t think that has occurred. The problem then becomes that it’s very difficult to show anything if you adopt that philosophy, as political science data is very rarely definitive.
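    For readers unfamiliar with the Goodman estimate that EI is supposed to improve on: it is just OLS on district-level aggregates. The sketch below is not from the original thread; the data are simulated and the group labels are invented, purely to show the mechanics of fitting turnout t_i = beta_a*x_i + beta_b*(1 - x_i) by least squares with no intercept:

```python
import numpy as np

# Goodman ecological regression, a minimal sketch on simulated data.
# x_i is the share of group A in district i; t_i is overall turnout.
# The model assumes constant group-level turnout rates across districts:
#   t_i = beta_a * x_i + beta_b * (1 - x_i) + noise
rng = np.random.default_rng(1)

n = 200
x = rng.uniform(0.05, 0.95, size=n)      # group-A population shares
beta_a_true, beta_b_true = 0.45, 0.70    # "true" group turnout rates (invented)
t = beta_a_true * x + beta_b_true * (1 - x) + rng.normal(0, 0.02, size=n)

# OLS with design matrix [x, 1 - x] and no intercept:
X = np.column_stack([x, 1 - x])
beta_hat, *_ = np.linalg.lstsq(X, t, rcond=None)
print(beta_hat)  # estimated group turnout rates, roughly (0.45, 0.70)
```

    Reporting these OLS numbers alongside the EI estimates is exactly the kind of could-go-either-way evidence being asked for here: if the two columns agree, the reader learns that the fancier method added little.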

      • I got so caught up in the alligators I lost sight of draining the swamp. So I want to make an important point:

        A huge problem with academic political science is that it is so insular (recall the recent foofaraw over the quality of reviews–but clearly the problem is that there is no real market for the product, so there has to be a limit on the articles produced. I recall one APSA convention where I ran into a fuming colleague whose panel had drawn no attendees; the APSA solution was to limit the number of panels. But I digress).

        This insularity is a good thing, since the most consequential political science theory of the last 50 years is “broken windows.” That theory was seized upon by the right (and some on the left!) to unleash a wave of oppression on the African-American community whose results we are still reaping. Similarly, King’s work on Social Security will be used to cause harm to millions of Americans (recall Reinhart and Rogoff’s work on debt being quoted approvingly by Ryan). Both theses (the worse-than-you-think claim about Social Security, the 90% debt ceiling) are almost certainly not true, but have been and will be used (as with broken windows) to cause untold harm to millions.

        Anyway, to minimize societal harm, keep political scientists in their own little sandbox (at least until Flake defunds the whole operation–but really, given the harms outlined above, wouldn’t that be a societal good?).

    • I wish methodologists would ask themselves more often: “What evidence, if shown, would weaken or destroy my thesis?” and then prominently advertise the answer.
