Another disgraced primatologist . . . this time featuring “sympathetic dentists”

Shravan Vasishth points us to this news item from Luke Harding, “History of modern man unravels as German scholar is exposed as fraud”:

Other details of the professor’s life also appeared to crumble under scrutiny. Before he disappeared from the university’s campus last year, Prof Protsch told his students he had examined Hitler’s and Eva Braun’s bones.

He also boasted of having flats in New York, Florida and California, where, he claimed, he hung out with Arnold Schwarzenegger and Steffi Graf. . . . some of the 12,000 skeletons stored in the department’s “bone cellar” were missing their heads, apparently sold to friends of the professor in the US and sympathetic dentists.

To paraphrase a great scholar:

His resignation is a serious loss for Frankfurt University, and given the nature of the attack on him, for science generally.

I’ve heard he’s going to devote himself to work with at-risk youths.

45 thoughts on “Another disgraced primatologist . . . this time featuring “sympathetic dentists””

  1. If he was that hell-bent on making stuff up and inflating his abilities, why didn’t he do the decent thing and join a philosophy department? He’d still be a respected philo prof if he’d gone that route.

    • Anon:

      That’s just rude. I don’t like to delete comments, so I won’t delete this one. But let’s all try to contribute something useful here.

      To respond to your substance: Making stuff up is fine. Mark Twain made up a lot of stuff and he was great. The problem is that Protsch seems to have made up stuff that would only be interesting if it were true, and it wasn’t true.

        • Anon:

          Just try to use your best judgment. I don’t think philosophy professors routinely commit frauds. There’s a difference between doing work that might well be at best empty and at worst bogus (which I think is what you’re saying that many philosophy professors are doing) and flat-out fraud of the deny-everything and hide-the-data-before-I-get-caught variety.

        • No worries Anonymous. We never tire of you finding another excuse to repeat the same “joke” about how horrible philosophers are, no matter how irrelevant it is to the thread.

      • One might note that Mark Twain wrote a short story about the Cardiff Giant, an archaeological hoax of the late 19th century, in which the giant’s ghost appears to Twain. So Twain was making money off the fraud, by making up a story about it, in an honest way.

        The story (http://druglibrary.org/schaffer/general/twain/AGhostStory.htm) ends with these paragraphs:
        —————————————————————————————-

        “This transcends everything — everything that ever did occur! Why you poor blundering old fossil, you have had all your trouble for nothing — you have been haunting a PLASTER CAST of yourself — the real Cardiff Giant is in Albany!

        [Footnote by Twain: A fact. The original fraud was ingeniously and fraudfully duplicated, and exhibited in New York as the “only genuine” Cardiff Giant (to the unspeakable disgust of the owners of the real colossus) at the very same time that the latter was drawing crowds at a museum in Albany.]

        Confound it, don’t you know your own remains?”

        I never saw such an eloquent look of shame, of pitiable humiliation, overspread a countenance before.

        The Petrified Man rose slowly to his feet, and said:

        “Honestly, IS that true?”

        “As true as I am sitting here.”

        He took the pipe from his mouth and laid it on the mantel, then stood irresolute a moment (unconsciously, from old habit, thrusting his hands where his pantaloons pockets should have been, and meditatively dropping his chin on his breast), and finally said:

        “Well — I NEVER felt so absurd before. The Petrified Man has sold everybody else, and now the mean fraud has ended by selling its own ghost! My son, if there is any charity left in your heart for a poor friendless phantom like me, don’t let this get out. Think how YOU would feel if you had made such an ass of yourself.”

        I heard his stately tramp die away, step by step down the stairs and out into the deserted street, and felt sorry that he was gone, poor fellow — and sorrier still that he had carried off my red blanket and my bath tub.

        ———————————–

      • Andrew:

        Maybe you are overreacting? I mean, how is making fun of Philosophy Departments any worse than your routinely calling Nature / Science etc. “tabloid journals”?

        You have a not-so-nice view of Nature-Science; Anon has a not-so-complimentary view of Philosophy departments. De gustibus…

        To twist your own quote: “Philosophy Departments do, however, publish useful stuff every now and then, and this makes sense too. There’d be little reason to support a department if they only publish useless stuff. ”

        In any case, Anon is hardly the first person to question the relevance of the work done by contemporary philosophy departments.

        • Rahul:

          I agree there is no bright line, and I have no problem with commenters expressing distaste for the work done in philosophy departments (or in statistics or political science departments!). But I don’t think it’s accurate or helpful to liken empty research to fraud. That’s the part that seems rude to me.

        • Andrew:

          Ok, agreed that empty research isn’t identical to fraud.

          In a sense, empty research is worse than fraud though. How many $$ get wasted by empty research versus by outright, blatant fraud?

        • The remark about philosophers is rude because it’s empty, gratuitous, irrelevant, and mystifying. I doubt the commenter even knows what philosophy professors write articles about. In any case, it’s hard to respond, because in the context one has no idea what he’s objecting to in their work. If he’d been snide about other kinds of anthropologists, or about Germans, it would at least have been connected to the post’s topic, though still rude.

  2. Prof Terberger said: “At the end of the day it was about ambition.”

    So much for peer review and replication. If I read the article right, the field just took his word for it that his carbon-dating results were correct. And it has taken Terberger since 2001 to pin down the fraud.

    Perhaps it’s worth noting that Terberger is from a different discipline than von Zieten, and therefore would face less hostility from peers than someone in the same discipline.

    • Um, scientists in every field routinely take other scientists’ word for it in the sense that you mean. For instance, if somebody sequences a sample of DNA, others don’t routinely sequence the same sample themselves. And even with direct replication attempts, one still ordinarily trusts that the study being replicated was in fact done as described. And only some forms of misconduct can be detected by peer reviewers. It sounds like you think this case shows some kind of serious flaw in the normal workings of science? If so, why? I read the linked article and I didn’t see anything that suggests carelessness on the part of peer reviewers of von Zieten’s papers. What do you think the rest of the field should’ve done differently, and why?

      • Yes, “scientists in every field routinely take other scientists’ word for it”. But that’s my point. Peer review is not effective in catching fraud, and “replication” is less observed in science than we think. Even bicycle racing tests an A and a B sample to confirm doping, but did reviewers ever require von Zieten to have at least some of these fossils independently dated?

        (But, of course, I have neither read these papers nor do I have access to any of the peer review. Maybe they did require independent verification, and von Zieten figured out some way to circumvent this — as bicycle racers do)

        It’s not that everything needs to be done over and over, but these were key findings in a field historically known for the possibility of fraud (see Cardiff Giant and Piltdown Man).

        We think science is done by ethical people, but it’s just done by people.

        • +1

          This is egregious fraud, so it got caught and made headlines when it did.

          But there are lots of peccadilloes that happen in routine science and never get caught: e.g., dropping points from regression lines, fishing, cherry-picking, mentioning a calibration that was never actually done, et cetera.

        • > “scientists in every field routinely take other scientists’ word for it”.
          Sorry, that is not scientific inquiry – you are not supposed to take anyone’s word for anything – ever!

          That’s what it’s all about.

          It’s much easier in math, where mathematicians don’t take others’ word for it that they can prove/derive whatever – they redo it, bending over backwards to find mistakes.

          (One of the things I tried to make folks aware of in anything I was involved in is that everyone and everything will be checked, even if only a random sample of it. Nothing sanitizes like sunshine; nothing makes folks more on the ball, careful, and honest than knowing that bright sunshine might be on them.)

        • That’s fine but empirically what’s the situation?

          e.g., of all the articles published (say) last year in Poli. Sci., what fraction will anyone ever even attempt to replicate independently? Or even have their analysis verified by a third party?

          In practice, aren’t we effectively “taking other scientists’ word for it”?

        • It is called auditing – most countries’ revenue agencies have figured it out.

          (Some are concerned that with audits, many would decide not to do research and that would be bad. I think it would be good.)

        • Sorry Keith, you can’t just go “it’s called auditing” and imply that because national revenue agencies can do it, so can scientists in any and all circumstances. That’s like saying that, because we put a man on the moon, we should be able to build a computer out of cardboard (to borrow a line from an old Dilbert cartoon). You cannot argue that people can and should do X by pointing out that other people have done Y.

          So enlighten us: how *exactly* should the “auditing” process be conducted in physical anthropology? What should be “audited”, and how?

        • “Sorry, that is not scientific inquiry – you are not supposed to take anyone’s word for anything – ever!”

          Keith, you either badly misunderstood my comment or you’re taking a very strange position.

          In mathematics, not every mathematician checks every proof by every other mathematician. The few mathematicians who did the proof and checked it are trusted by all the others. Even those few mathematicians are fallible, individually and collectively.

          And while you say you check at least a random sample of the work done by others you work with, I have to trust the combined efforts of you and those you work with. Or are you suggesting that I should also check your work?

          And plenty of work in science just isn’t checkable in the manner you seem to want in order for it to be science. For instance, in my own lab, we count the number of protists in small samples of water taken from our culture vessels. Each sample is counted by a properly-trained student. Are you telling me that I’m not doing science unless I also count the same samples to double-check that the students’ counts were correct?

          Rather than silly grandstanding about how all work should always be checked by everyone else on pain of it not being considered science, I’d like to hear some sensible comments on the circumstances in which it makes sense to perform particular checks for particular sorts of errors, recognizing that no error-checking procedure is perfect and that all come with some sort of cost.

        • (You probably should get two or three students to do independent counts on each sample, at least until you’re confident you know the typical error rate.)

        • +1

          How do you know the student is “properly trained” unless you have some validation / calibration periodically?

          I’d definitely get some samples counted periodically by multiple students just to see how much variation there is. Or to identify any particularly bad counters. Or simply to identify any extraneous reasons (e.g. there’s confusion over what to classify as a protist) that might cause the process to go out of control.

          I find it hard to understand this open-loop attitude toward analysis that pervades academic labs.

        • @Corey and Rahul:

          Oh for pete’s sake. Of course their training includes us all counting some of the same samples to ensure we all get the same answer! That’s what I meant by “proper training”. But that’s not what Keith was arguing for–he was arguing for *much* more extensive and ongoing checking than that.

        • @Jeremy

          I don’t think a one-time initial cross-check would be enough. I’d sure like an ongoing check: a periodic, random, unannounced verification.

          People get sloppy. Skills drift. How extensive the checks should be depends on how critical your work is. Since a wrong count doesn’t kill anyone or blow up stuff, I guess you have a license to be lax if you want to.

          @Jeremy Fox, you seem to be taking this kinda personally. You offered a for-instance and posed a question, and I answered it. I’m not happy with data until I know the noise level, and as far as I can tell, you’re arguing against spending the effort to learn the noise level of the data your students are producing on an ongoing basis. In my last job, I expended a lot of effort (and saw a lot of other people’s efforts expended) on quality control for both data and analyses; in my current job, I am responsible for defining the QA and QC processes for my company’s data and analyses. I don’t mind telling you that I have more confidence in any group’s scientific findings if they’re generated in a context of ongoing QC testing with a QA audit trail; to me, your attitude seems cavalier.

          P.S. Keith and I get together for coffee every so often, so I have some insight into his views on science; for what it’s worth, I bet he’d be perfectly happy if you were able and willing to check his work.

        • I think the tension between the ideals of scientific work and the impossibility of really achieving them is a basic condition of science. This brief exchange brought together advocates both of the ideals and of respecting the practicalities that hamper achieving them, and as such is a very nice illustration of something that I think is very important to understand about science.

        • @Corey, Jeremy and Rahul:

          I do try to arrange for someone else to check at least some of my work whenever I can.
          My take is that what is found to be wrong counts as mistakes rather than professional errors – the latter only exist after the analysis is finalized.

          When I was in clinical research, there did seem to be two solitudes: those who said they hired good people and made it amply clear to them that they needed to be careful and diligent, and those who would check anyone’s work whenever resources allowed.

          So when the first group hears these suggestions about auditing their groups’ work, they feel they are being accused of not being trustworthy, and often respond: how would you like it if we did not trust you?

          Jeremy: I have no idea who you are or what group you fall in – these are just blog comments, hopefully sparking some to do better research. Here I will recount David Andrews’s comment about predicting an election outcome in Canada much better than other groups did. “Why do you think you did better?” – “Not so much the modelling approach, but we verified the results being phoned in before that data got into the prediction model. None of the other groups did any of that – they just entered what they thought they heard.”

        • @Jeremy: “I’d like to hear some sensible comments on the circumstances in which it makes sense to perform particular checks for particular sorts of errors, recognizing that no error-checking procedure is perfect and that all come with some sort of cost.”

          Answer: Bayes’ theorem, as in P(error | data, assumptions), plus a loss function. See also fraud detection.
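
          A minimal sketch of that idea in Python, with toy numbers purely for illustration (the probabilities and costs below are assumptions, not anything from this thread): compute the expected loss of skipping a check from P(error | data, assumptions), and check whenever that exceeds the cost of checking.

          def should_check(p_error, cost_of_check, loss_if_error_missed):
              # Expected loss of skipping the check, given the (hypothetical)
              # posterior probability that the result contains an error.
              expected_loss_if_skipped = p_error * loss_if_error_missed
              # Check whenever skipping is expected to cost more than checking.
              return expected_loss_if_skipped > cost_of_check

          # A surprising, high-stakes result: even a modest error probability
          # justifies an expensive re-check.
          print(should_check(p_error=0.05, cost_of_check=1000, loss_if_error_missed=100000))  # True

          # A routine, low-stakes measurement: probably not worth the cost.
          print(should_check(p_error=0.05, cost_of_check=1000, loss_if_error_missed=5000))    # False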

        • Agree – it’s a cost/benefit analysis question; there are costs and _type two_ error rates, but there can also be large benefits.

          1. In the smallish clinical trials I was involved in, doing the checking was likely less expensive than first doing a cost/benefit analysis.

          2. My prior was that carefulness and diligence would increase, by worthwhile amounts, when folks believed they would be audited, and what little data I have had on this over my career has steadily updated that prior upwards.

          And a personal example of the benefit is here: one notable example arose when staff at a teaching hospital evaluated the predictive value of some new measure to predict hospital mortality and it was dramatically successful. This made some members of senior management very worried and so they asked us to verify the findings. After a few difficult meetings, the researchers agreed to do double data entry, and it was discovered that a row slip had occurred and it was the next admitted patient’s measure that predicted the last admitted patient’s mortality. http://www.stat.columbia.edu/~gelman/research/published/GelmanORourkeBiostatistics.pdf

        • In a case such as your counting students: in the various parts of psychology that rely on human ratings or scoring, one would automatically check inter-rater reliability by having the various raters (the counting students here) rate common samples, to assure oneself that the raters are at least in the same ballpark.
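
          As a rough sketch of that kind of check (made-up counts, using only numpy; not data from anyone’s actual lab), one could quantify agreement between two counters on shared samples with a correlation and the typical relative disagreement:

          import numpy as np

          # Hypothetical protist counts from two students on the same eight water samples.
          student_a = np.array([112,  98, 130, 75, 140, 101, 88, 120])
          student_b = np.array([108, 105, 124, 80, 133,  97, 92, 118])

          # Do the two counters rank the samples the same way?
          r = np.corrcoef(student_a, student_b)[0, 1]

          # How far apart are the counts on average, relative to each sample's mean count?
          pair_means = (student_a + student_b) / 2
          mean_rel_diff = np.mean(np.abs(student_a - student_b) / pair_means)

          print(f"correlation between counters: {r:.3f}")
          print(f"mean relative disagreement:   {mean_rel_diff:.1%}")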

        • Thank you for clarifying, zbicyclist. I think there’s a sensible argument that particularly important findings should be subject to more error checking.

          Don’t know that a few famous isolated historical events like Cardiff Giant and Piltdown Man should cause us to want to do more error checking on all work in the entire field of physical anthropology. That’s like saying that some alchemists lied about being able to transmute lead into gold, so today all chemists should have their work done over and over.

          Re: why the reviewers didn’t require at least some of the fossils to be independently dated – it’s not my field, but I don’t think it’s routine for fossils or pieces of them to be radiocarbon dated by multiple labs, particularly not really unique and valuable fossils – I think at least in part for reasons of safekeeping. But I’m guessing; it’s not my field; I could be wrong.

        • Sounds like a good time for journals to start insisting on independent dating of any specimens referred to in articles.

        • The situation for dating things is actually kind of bizarre, but in a way that *decreases* the likelihood of frauds like this.

          Many people who find specimens send them to independent labs to get them dated (because many labs lack support for both field work and dating techniques). “Independent” here can refer to truly independent labs at other institutions, or to collaborators’ labs at other institutions or at the home institution.

          Protsch appears to have had his own dating lab, but from the linked article: “Prof Protsch…was unable to work his own carbon-dating machine.”

          The problems with independent dating are two-fold: of minor concern is the cost; of larger concern is that dating requires you to damage the specimen slightly. If you have a large number of similar specimens, damaging a few of them to get several dates is fine. But if you’ve only got a couple of specimens, there’s a calculation that has to go into whether you damage them multiple times *now*, or damage them lightly now and hope that, in the future, techniques improve to require less damage.

          Honestly, the most troubling thing about the article is that Protsch seems to assert that *he* owns the specimens. In general, anthropology/paleontology/zoology/etc. specimens need to be owned by an institution that allows free access to future workers (and almost every journal requires this). Institutional ownership means that if you publish a date or a measurement or whatever, some other worker can come in later and test your claim independently on the exact same specimen. This doesn’t protect against short-term, flat-out lying like Protsch pulled, but deposition in an institution with a free-access policy is exactly how these hoaxes get caught (see Piltdown Man, the Cardiff Giant, Archaeoraptor, and just about every other exposed fraud in these fields).

    • As a card-carrying evolutionary anthropologist, and one who teaches human evolution and the relevant fossil record, it’s worth pointing out that the finds in question were never taken too seriously and have had little role in debates over the last 20 years. You won’t find them mentioned in many textbooks. Terberger exaggerates the impact to make the myth-busting seem important.

      It’s a weird episode to be sure. But most of us in the field are like, “yeah, saw that coming”, or maybe just “who? which finds?”

      • Were his finds published in crappy, third rate journals? Or respectable ones?

        Those of you who “saw it coming”, had you done anything about it? Were there many complaints / calls for verification of this guy’s work over the last 20 years?

      • Thank you for commenting– we need knowledge of practices in the field.
        Since people were skeptical about his findings, why didn’t they try redoing the carbon dating? They could expect at least to find exaggeration and fudging, if not fraud. It seems like it would be an easy way to get a publication in the same journals he published in. Or did he not publish in worthwhile journals?
        It’s crazy of course to expect replication for every article published. Most scholars are honest, and most articles don’t create enough interest to be worth replicating. But even in math, if a proof is surprising, surely a few people check it for errors.
        One thing archaeology ought to do is require articles to note which lab did the dating, just as perhaps we should all note which statistics software we used. If it’s a disreputable lab, that would be a warning sign, especially if, as with some criminal evidence labs, we suddenly discover gross fraud.

  3. “They also discovered that some of the 12,000 skeletons stored in the department’s “bone cellar” were missing their heads, apparently sold to friends of the professor in the US and sympathetic dentists.”

    Never did trust dentists, especially the sympathetic type…

  4. “At the same time, German police began investigating the professor for fraud, following allegations that he had tried to sell the university’s 278 chimpanzee skulls for $70,000 to a US dealer.”

    Why are chimp skulls so cheap? That’s only about $250 per skull.
