How to think about an accelerating string of research successes?

While reading this post by Seth Frey on famous scientists who couldn’t let go of bad ideas, I followed a link to this post by David Gorski from 2010 entitled, “Luc Montagnier: The Nobel disease strikes again.” The quick story is that Montagnier endorsed some dubious theories. Here’s Gorski:

He only won the Nobel Prize in 2008, and it only took him two years to endorse homeopathy-like concepts. He’s also made a name for himself, such as it is, by appearing in the HIV/AIDS denialist film House of Numbers stating that HIV can be cleared naturally through nutrition and supplements. This he did after publishing a paper in a journal for which he himself is the editor . . .

But that’s just the beginning:

From there it only took Montagnier a few months more to turn his eye to applying that “knowledge” to autism . . . Unfortunately, the pseudoscience that Montagnier appears to have embraced with respect to autism is combined with a highly unethical study . . . The trial is sponsored by the Autism Treatment Trust (ATT) and the Autism Research Institute (ARI), both institutions that are–shall we say?–not exactly known for their scientific rigor. Apparently Montagnier has teamed up with a Dr. Corinne Skorupka, who is a DAN! practitioner from France . . . Whenever you see an “investigator” charge patients to undergo an experimental protocol, be very very wary. . . . here we have Montagnier and colleagues charging the parents of autistic children . . . Perhaps even worse than that, check out how badly designed this experimental protocol is . . . there are no convincing preclinical data . . . Based on an unsupported hypothesis that bacterial infections cause autism, Montagnier will be subjecting autistic children to blood draws and treatment with antibiotics. The former will cause unnecessary pain and suffering, and the latter has the potential to cause the complications that can occur due to long term antibiotic use over several months. . . . The study proposed is poorly designed even for a pilot study. There is no control group . . . Moreover, because the selection criteria for the study are not specified, there is no way of knowing how much selection bias might be operative there.

Gorski asks:

I’ve wondered how some Nobel Laureates, after having achieved so much at science, proving themselves at the highest levels by making fundamental contributions to our understanding of science that rate the highest honors, somehow end up embracing dubious science . . . or even outright pseudoscience . . . Does the fame go to their head? . . .

I’m guessing the story is a bit different, maybe not for the particular case of Montagnier but for this general “Nobel prize disease” thing—the pattern of celebrated scientists embracing wacky ideas. It’s not so much that these scientists get drunk on fame; rather, it’s that the prize attracted more attention to the wacky ideas they were susceptible to in the first place.

And then the feedback loop comes in. Scientist expresses wacky idea; then because of the Nobel prize, his wacky pronouncements get attention; scientist enjoys being in the limelight (maybe it’s been a bit disappointing after the Nobel publicity fades and his life is pretty much the same as always) so he makes more pronouncements; these pronouncements get more attention; scientist realizes that to continue to get in the news, he needs to make grander and grander claims; etc.

The apparent research progress comes in, faster and faster with stronger and stronger results

But I actually want to talk about something else, not the Nobel disease or anything like that, but the following pattern which I’ve seen from time to time.

The pattern goes like this: A researcher studies some topic, and after lots of effort and many false starts, he makes some progress. After that, progress comes faster and faster, and more and more research results come in.

This escalating pattern can arise legitimately: you develop a new tool and then find applications for it everywhere. For example, it took us a few years to write the Red State Blue State paper, but from there it only took a year to write the book, which had tons of empirical results.

Other times, though, it seems that what’s happening is escalating overconfidence, exacerbated by whatever echo chambers happen to be nearby. Luc Montagnier, for example, will have no problem finding yes-men, with that Nobel prize hanging in the corner. Another echo chamber is the science publication and grants system: if you have a track record of success, you’re likely to have figured out ways of presenting your results so they’re publishable and grant-worthy.

But the example I have in mind is my friend Seth Roberts, who spent about ten years on his self-treatment for depression and then a few more years developing his weight-loss method, writing it up, and becoming a bit of a culture hero. And then he started to let his ambition get ahead of him, using self-experimentation to conclude that eating a stick of butter a day improved his brain functioning, among other things. I’m not saying that Seth was wrong—who knows? Maybe eating a stick of butter a day does improve brain functioning—but I’m skeptical of the idea that he came up with some trick for scientific discovery, so that what took him 5 or 10 years in the past could now be done, routinely, every couple of months.

Beware the escalating pattern of research results.

P.S. Gorski’s post also has a Herbalife connection.

P.P.S. Frey’s post is interesting too, but does he really think that all those people on his list are “way way smarter than everyone I know”? Does Frey really not know anyone as smart as Trofim Lysenko?

P.P.P.S. I came across this other post where Frey remarks that he used to work for Marc “Evilicious” Hauser!

Frey’s Hauser-related post is interesting but he makes one common mistake when he defines exploratory data analysis as “what you do when you suspect there is something interesting in there but you don’t have a good idea of what it might be, so you don’t use a hypothesis.” No! Exploratory data analysis is all about finding the unexpected, which is defined (explicitly or implicitly) relative to the expected, that is, a hypothesis or model. See this paper from 2004 for further discussion of this point.

21 Comments

  1. Alex Chernavsky says:

    See this RationalWiki article about “Nobel Disease”. The page includes a long list of Nobel laureates who have made dubious claims and engaged in pseudoscientific pursuits.

    • Alex Chernavsky says:

      Oops, I see that the Seth Frey article already links to that same page. I should have read it first.

    • Andrew says:

      Alex:

      Sure, but I’m more interested in the general phenomenon. Seth Roberts did not have anything like a Nobel prize but he had that telltale escalating pattern of research results, which I think in his case meant that he had tricked himself into believing just about anything. He was kinda like Brad Bushman, with the difference that Bushman was operating within the scientific publishing system, whereas Roberts was operating alone. Bushman had developed a system by which he could regularly fool a bunch of journal editors (and also, I assume, himself), whereas Seth merely had to fool himself and some followers on the internet.

      • Andrew says:

        P.S. This is not to say that I think all of Seth’s (or Bushman’s) ideas are bad. I have no idea. I just think both of them created systems by which they could routinely manufacture apparent successes out of thin air.

  2. BR says:

    Giaever said this about his skepticism of global warming (in 2009): “We have heard many similar warnings about the acid rain 30 years ago and the ozone hole 10 years ago or deforestation but the humanity is still around. The ozone hole width has peaked in 1993.” Is there a concise phrase for “an argument that is claimed to be in favor of a proposition but actually is in favor of the opposite”?

  3. Keith O'Rourke says:

Nice point: “finding the unexpected, which is defined (explicitly or implicitly) relative to the expected, that is, a hypothesis or model” – the dog not barking in the night.

Another issue I think is the area of science in which the Nobel was awarded. If it’s an area where uncertainties and noise are not that important and they move into an area where that is critical – they may be unlikely to realize the important difference. That seemed to be the case with a newly appointed Chief of Science, who, when asked about the importance of uncertainties, replied that they were uncomfortable with uncertainties and thought it would be best if they were not discussed – except perhaps privately. Now, they had worked in a very basic science where repeatability is the norm, e.g. different pieces of the same metal always having the same properties, so their uncertainties mostly were about possible cheating or carelessness.

  4. Adede says:

So Montagnier won the Nobel for his work on HIV, but not much later endorsed HIV/AIDS denialism? I think that makes him unique for essentially recanting the thing that made him famous.

  5. psyoskeptic says:

I like Keith O’Rourke’s argument. I’d further add that a common trait among Nobel laureates is taking research risks and being lucky. The former is a character trait and the latter was random chance. Many people who rise to the pinnacle do so by random chance. They continue to take these research leaps into the abyss poorly backed by theory, and the prior rewards bias their interpretation.

I think this is really obvious in finance. Warren Buffett’s general research advice was followed in his time and for many decades after. It didn’t start with him and won’t end there. But someone had to do the best using that advice, and that was him. From what I’ve read I think that he recognizes that component, that his favourite companies ended up doing the best on average. But lots don’t realize it.

  6. Seth Frey says:

    I’m with you on your main point: a lot of social systems that are otherwise different from each other have in common that they are vulnerable to rich-get-richer effects: markets, politics, cultural phenomena, friends, and, yeah, science. I’ve never had a clear sense of what can be done about it in science.

    On your peripheral points, I’m glad you got something out of my articles. Sort of a relief that most concerns were about the other Seth. A few quick responses:
    — “Does Frey really not know anyone as smart as Trofim Lysenko?” I never met the guy. But yeah, fair enough.
    — Sure, in exploratory data analysis, the finding of the unexpected occurs relative to the expected. But I think this is a quibble, or more about the semantics of what a hypothesis is. What if I patch my definition of exploratory data analysis? “What you do when you suspect there is something interesting in there but you don’t have a good idea of what it might be, so you don’t use AN ARTICULABLE hypothesis BECAUSE IT WAS SEEING THE UNEXPECTED THAT HELPED YOU LATER ARTICULATE WHAT YOU’D EXPECTED .”

    To get all the promotion of me in one place, here are the articles that Andrew drew from:
    “Never too smart to be very wrong”: http://enfascination.com/weblog/post/410
    “White hat p-hacking”: http://enfascination.com/weblog/post/1775
    and here’s another that may be of interest:
“The unexpected importance of publishing unreplicable research”: http://enfascination.com/weblog/post/1344

  7. Paul Alper says:

    Luc Montagnier is well over 85 and perhaps that helps in explaining things. In many European countries the elderly have to pass stringent tests to continue driving a vehicle. Excellent drivers they may have been decades before, but insurance companies and motor vehicle departments realize that faculties deteriorate as the years accumulate. Maybe there needs to be a sunset law regarding pontificating.

  8. Anoneuoid says:

I looked into Luc Montagnier a few years ago. I remember his “crackpot” papers and his “HIV” papers seemed of equal quality; there was even a band they refer to in a blot of the first HIV paper that simply isn’t visible (probably because if they exposed it more you would see bands everywhere…). That makes me think what’s really going on is that people like to pick and choose the theories they like for whatever reasons, and whatever data is around is more of an additional ornamentation.

And I mean just look how many prizes have been given out for the “amyloid hypothesis” of Alzheimer’s, something that is probably wrong. Possibly wrong even to the point of being an actively harmful idea (if the amyloids can function as anti-bacterial/anti-viral substances).

  9. John Pellman says:

    > Other times, though, it seems that what’s happening is escalating overconfidence, exacerbated by whatever echo chambers happen to be nearby. Luc Montagnier, for example, will have no problem finding yes-men, with that Nobel prize hanging in the corner. Another echo chamber is the science publication and grants system: if you have a track record of success, you’re likely to have figured out ways of presenting your results so they’re publishable and grant-worthy.

    I have at various times in my time as a thinking being speculated that having a system that promotes pseudonymity over scholarly fame would be more beneficial for scientific advancement overall (i.e., if authorship for journals was managed more like pseudonymous remailers).

    See: https://twitter.com/jaipelai/status/900513036552859648

    • Anonymous says:

      “Another echo chamber is the science publication and grants system: if you have a track record of success, you’re likely to have figured out ways of presenting your results so they’re publishable and grant-worthy.”

      “I have at various times in my time as a thinking being speculated that having a system that promotes pseudonymity over scholarly fame would be more beneficial for scientific advancement overall (i.e., if authorship for journals was managed more like pseudonymous remailers).”

      Hmm, my 1st reaction is that i don’t think this is a good idea. I reason it is useful in science to know who wrote/said what, also in light of “rewarding” the “best” scientists. However, i reason this should be done based on scientific grounds, not whether you are friends with the author or something like that.

      I don’t see what’s so hard about simply making sure grant committees, editors, and reviewers are not aware of the author’s names. When i entered academia i actually expected this to always be the case, and i was pretty shocked to find out this was not (always) so. Imagine my shock when i found out some editors/journals actually allow authors to “suggest” their “expert” peer-reviewers.

      • John Pellman says:

        > I don’t see what’s so hard about simply making sure grant committees, editors, and reviewers are not aware of the author’s names.

        The issue with this is that because there are so few people working with specific research agendas, it’s not hard to re-identify anonymous submissions. If grant applications contained no pertinent details and one knew nothing other than that the submitter was in *Field X Broadly Construed* then sure, you might not be able to de-anonymize. But when the grant or article proposes experimentation with a procedure that you know *Big Name in the Field* discussed casually in passing at *Last Major Conference* with such detail that only *Big Name in the Field* could have proposed it, it becomes obvious that the grant is coming from *Big Name in the Field* and you better fund it or publish it to avoid his/her wrath. Basically, blindness in committee, editing, and reviewing assumes more isolation than is actually possible given the openness of scientific discourse.

        Pseudonymity would mitigate this by forcing researchers to focus more on long-term incentives (since their name wouldn’t be attached to their research for several years) rather that short-term gains, and could mitigate well-characterized sociological phenomena in science like the Matthew Effect, cliques, and cults of personality.

The main issue with pseudonymity is that it wouldn’t completely eliminate side-channel communications, like conferences, so it’s still subject to the same sorts of issues that blind submissions are. It’s impractical to ban conferences, although they could potentially be restructured to make pseudonym mapping less likely. I’m not sure that any form of conference restructuring would produce an event that wasn’t extremely awkward, however, and that wouldn’t eliminate other events where pseudonyms could be re-identified, such as private talks.

        • Anonymous says:

          1) “The issue with this is that because there are so few people working with specific research agendas, it’s not hard to re-identify anonymous submissions”

          I am not convinced this is correct, but more importantly it doesn’t matter in my reasoning. Even if one could look at the references used, and/or remember the idea from a recent conference, this does not imply that anonymous grant proposals and journals submissions are of no use.

          For instance, i could build on something X said at a recent conference, but have it be my own distinct idea. Or i could cite X a lot in my grant proposal or paper, but use that work to subsequently use my own ideas. In both cases, someone could guess that i was X, but in fact i was not.

          2) “Pseudonymity would mitigate this by forcing researchers to focus more on long-term incentives (since their name wouldn’t be attached to their research for several years) rather that short-term gains, and could mitigate well-characterized sociological phenomena in science like the Matthew Effect, cliques, and cults of personality.”

Your pseudonymity reasoning is unclear to me, perhaps because words like “incentives” mean nothing to me (e.g., what does this word even mean/imply? why should possible short-term incentives be different than long-term incentives?). These sociological phenomena can probably be dealt with just as well (or better) using anonymous submissions in my reasoning.

          To me, science has always involved non-anonymous credit and/or (immediate) connection of work and person. To me, this makes a lot of sense because i reason that in turn makes it possible to (immediately) check for possible conflicts of interest, to reward the people who do the best job, etc. If someone produces solid work, i think it makes scientific sense to know who that person was, and reward him/her, also in the short-term. Using one’s real name helps with that i reason.

          I am getting a bit worried that all this “changing incentives” narrative (that still makes little sense to me) is making scientists into non-individualistic robots who do what their masters tell them to do, all in the name of “collaboration” and “changing incentives”. In my reasoning, you have to always put the science 1st. That means, thinking about what, why, and how something is (potentially) scientifically “good” or “bad”.

          Individuals and individualism in science are not necessarily “bad”, in my reasoning they might be what is most important in science. Perhaps if everyone does their “idiosyncratic individual part”, the totality of these individual actors and actions is what results in the “best” science, and what “improves” science the most.

          • John Pellman says:

            > I am not convinced this is correct, but more importantly it doesn’t matter in my reasoning. Even if one could look at the references used, and/or remember the idea from a recent conference, this does not imply that anonymous grant proposals and journals submissions are of no use.
            > For instance, i could build on something X said at a recent conference, but have it be my own distinct idea. Or i could cite X a lot in my grant proposal or paper, but use that work to subsequently use my own ideas. In both cases, someone could guess that i was X, but in fact i was not.

It’s not just the content, it’s also writing style. Admittedly, writing style alone has a low-ish success rate (https://33bits.wordpress.com/2012/02/20/is-writing-style-sufficient-to-deanonymize-material-posted-online/) of 20% when analyzed algorithmically, but assuming that there are only a handful of researchers working on a specific topic (say 50 or so, which can happen in more niche sub-fields) and the reviewers are familiar with their writing styles from having read the literature, they can combine all this knowledge to develop a decent prediction of who the author is. Sure, there may be some misses—I wouldn’t be able to state how many since, as far as I know, there’s been no systematic study where researchers try to reidentify their colleagues’ papers. The only real work I can think of that tackles issues of anonymity / authorship and publication is Ceci and Peters (1988) [0], but they did something very different, and a lot of reviewers seemed to be angry when the results indicated that fame was being prioritized over substance.
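To make the writing-style point concrete, here is a toy sketch of stylometric attribution. This is not the method from the study linked above; it is a minimal illustration, with made-up author names and a deliberately tiny feature set: represent each text by the relative frequencies of common function words, then attribute an anonymous text to the known author whose profile is closest by cosine similarity.

```python
# Toy stylometric attribution: function-word frequencies + cosine similarity.
# A sketch only -- real stylometry uses far richer features (character n-grams,
# syntax, vocabulary richness) and proper classifiers.

import math
import re

# A small set of style-bearing function words (assumed for illustration).
FUNCTION_WORDS = ["the", "of", "and", "to", "a", "in", "that", "is",
                  "i", "it", "for", "not", "but", "with", "as", "this"]

def style_profile(text):
    """Relative frequency of each function word in the text."""
    words = re.findall(r"[a-z']+", text.lower())
    total = max(len(words), 1)
    return [words.count(w) / total for w in FUNCTION_WORDS]

def cosine(u, v):
    """Cosine similarity between two frequency vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def attribute(anonymous_text, corpora):
    """Guess which known author's writing is stylistically closest.

    corpora maps author name -> sample of that author's known writing.
    """
    anon = style_profile(anonymous_text)
    return max(corpora,
               key=lambda author: cosine(anon, style_profile(corpora[author])))
```

With even a caricatured corpus (one author leaning on “i” and “that”, another on “the” and “of”), the nearest-profile rule picks out the stylistically similar author, which is the intuition behind the 20% figure scaling up as reviewers add outside knowledge.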

My view on anonymity is primarily based on anecdotal experience with an anonymous bulletin board at my undergrad (around 2,000 students). The majority of the posts on the bulletin board had too little content to be directly traceable to most people, but I was able to identify the original posters for a few posts that had large amounts of context that I was able to combine with my mental priors (e.g., knowledge of events on campus like who was breaking up with whom, etc). I’ve also long suspected that in smaller courses I took in undergrad (seminars around 15 people) the “anonymous” faculty evaluations were easy to re-identify.

            I wouldn’t necessarily argue against trying anonymous grant proposals and submissions first, but my gut tells me that they are necessary but not sufficient for addressing issues in scholarly publishing.

            > Your pseudonymity reasoning is unclear to me, perhaps because words like “incentives” mean nothing to me (e..g what does this word even mean/imply? why should possible short-term incentives be different than long-term incentives?). These sociological phenomena can probably be dealt with just as well (or better) using anonymous submissions in my reasoning.

            The long-term incentives of science should be to develop an adequate representation of the natural world. Short-term incentives are fame-grabbing / excessively striving for name recognition, since they don’t directly influence our understanding of the world, but might influence grant decisions and satisfy individual needs (e.g., ego) rather than collective needs. Essentially, we need to discourage disinterestedness rather than Hollywood-like behavior. I’m sort of approaching this from Merton’s perspective [1] (though I admittedly have never actually read much of his work).

            > To me, science has always involved non-anonymous credit and/or (immediate) connection of work and person.

            William Sealy Gosset would like to have a word with you.

            > If someone produces solid work, i think it makes scientific sense to know who that person was, and reward him/her, also in the short-term. Using one’s real name helps with that i reason.

            I’m not arguing against ultimately giving attribution to someone for his/her work- I’m arguing for delaying the reward for a couple of years by separating the individual from the research content for a fixed time period / embargo. This should discourage poor behavior brought about by those who engage in social climbing behaviors while still ultimately producing gratification for people who need that kind of thing (although in theory the gratification should be the discovery itself regardless of who produced it; alas, however, we are human after all).

            > I am getting a bit worried that all this “changing incentives” narrative (that still makes little sense to me) is making scientists into non-individualistic robots who do what their masters tell them to do

            “[N]on-individualistic robots who do what their masters tell them to do” are exactly what many early career researchers, postdocs, and grad students (unfortunately) become when they work in the shadow of someone who’s accumulated a prestigious reputation. I highly doubt that Stapel’s postdocs were acting with high levels of liberty when they ran statistics on his phony data.

            > Individuals and individualism in science are not necessarily “bad”, in my reasoning they might be what is most important in science.

            The point of the “changing incentives” narrative isn’t to discourage individuality per se so much as to discourage bad actors. It’s kind of silly to suggest that donning a pseudonym suddenly forces you to become part of some Borg hive mind (especially if that pseudonym is directly linked to you). My reddit and IRC pseudos aren’t any less me for being different than my birth name (although for some folks pseudos do allow the expression of parts of their personalities that aren’t always obvious). It’s not even like there isn’t a precedent for pseudonymity as a way of tackling ideas based on substance rather than the personalities backing them- the Federalist Papers and Anti-Federalist Papers were all written pseudonymously expressly because their authors wanted ideas to be considered based on merit alone (or perhaps to avoid the wrath of the then irrelevant King George III).

            [0] https://www.cambridge.org/core/journals/behavioral-and-brain-sciences/article/peerreview-practices-of-psychological-journals-the-fate-of-published-articles-submitted-again/AFE650EB49A6B17992493DE5E49E4431

            [1] https://en.wikipedia.org/wiki/Robert_K._Merton#Sociology_of_science

            • Anonymous says:

              Thank you for the reply!

              1) You wrote “It’s not just the content, it’s also writing style.”

              Perhaps i am misunderstanding pseudonymous writing, but aren’t content and writing style possibly detectable in pseudonymous writing as well? Using no-name or “anonymous” in light of these possible issues is no different than using “Mr. X” or some other (possibly even changing) pseudonymous name i reason…

              2) You wrote: “The long-term incentives of science should be to develop an adequate representation of the natural world. Short-term incentives are fame-grabbing / excessively striving for name recognition, since they don’t directly influence our understanding of the world, but might influence grant decisions and satisfy individual needs (e.g., ego) rather than collective needs. “

Ah, thank you for more directly talking about things that could influence science (and not only using an indirect term like “the incentives”). Now, these sentences raise a few questions in my head. For instance:

              *Where can i find evidence that these possible short-term incentives are not the same as the long-term incentives?
              *Where can i find evidence for the reasoning that short-term incentives are possibly fame-grabbing/name-recognition?
              *Why is fame-grabbing/name-recognition possibly even an “incentive”? (i.c. how and why would that be rewarded and by whom and for what reason?)
              *Could it be that all this incentive-narrative is simply a possible selection-bias representation of academia and/or its “successful” members who have chosen to participate in the current system?
              *And if so, shouldn’t all this “incentive” talk really be directed towards hiring committees/universities/etc., and not to individual scientists (like it seems to me to predominantly be the case)?

              I think talking about “incentives” in general terms is sub-optimal because i think it is an indirect, and unclear, term for many things that a) could in turn stop you from really thinking about things, and b) could (therefore) lead you to coming up with a whole new set of “incentives” that may not even be good for science.

              A definition of “incentives” is “a thing that motivates or encourages someone to do something”. My personal motivation has always been to “positively contribute to science”, which has both been my short-term and long-term goal. So, in my reasoning there is nothing wrong with any of my “incentives”, nor do i have different short-term and long-term incentives regarding science.

              (Side note: i wrote a little something about “incentives” here: https://psyarxiv.com/5vsqw/)

              3) You wrote: “Essentially, we need to discourage disinterestedness rather than Hollywood-like behavior. I’m sort of approaching this from Merton’s perspective [1] (though I admittedly have never actually read much of his work).”

I assume you meant “(…) encourage disinterestedness rather than Hollywood-like behavior”. If so, i agree!

              (Side note: i wrote a little something about “Hollywood-like behavior” in science (related to awards) here: https://psyarxiv.com/pju9c/)

              4) You wrote: “The point of the “changing incentives” narrative isn’t to discourage individuality per se so much as to discourage bad actors.”

              &

              “William Sealy Gosset would like to have a word with you.”

              This makes little sense to me, but again i worry this is because i think using words like “incentives” in general is sub-optimal in many of these types of discussions (as i tried to make clear somewhere above). Anyway, if all this incentive talk is about discouraging “bad actors”, it could (and should in my reasoning) just as well be about encouraging “good actors”.

              I reason both can be tackled by implementing, and rewarding, things like open data, pre-registration, anonymous grant and paper submissions, judging the quality of papers not the amount, etc. Now in my reasoning, you would want to (focus on) reward scientists who produce solid, good work. For which i reason it is important that their work is non-pseudonymous.

You mentioned William Sealy Gosset in this discussion, and provided a link. Thanks for this interesting information!
I however fail to see how it relates to my general point that i reason science has always involved non-anonymous credit and/or (immediate) connection of work and person (and that this is important for science to work optimally).
A few pseudonymous examples do not provide evidence against my general point, especially in the case of Gosset, who wrote pseudonymously for very different reasons than the possible benefits of pseudonymity you alluded to (if i read and/or understood things correctly). Although i should also note that i myself have not provided evidence for my general point, and am merely conjecturing that this is the case (based on the overwhelming majority of published articles in the last decades i have read that have been non-anonymous/non-pseudonymous).

              5) You wrote: “It’s kind of silly to suggest that donning a pseudonym suddenly forces you to become part of some Borg hive mind”

Whether or not pseudonymous usage leads to hive mind behavior, or other possible “bad” behavior, is not really what i was trying to point to. To me, the pseudonymous proposal is in line with several other things i have come across recently in this whole “let’s change the incentives” and “let’s improve science” movement that worry me. They all seem to me to boil down to some weird version of science where groups and group-processes are somehow seen as being an improvement, and individuals and their roles are minimalized (e.g. see discussion here: http://andrewgelman.com/2018/05/08/what-killed-alchemy/#comment-727893).

              (Side note: i sometimes wonder if this is because social psychologists are possibly over-represented in this whole thing (?). I myself don’t like groups, and group-processes, very much and am more a fan of the individual so this all possibly catches my attention quickly).

              To me, groups and group-processes are what possibly largely got us into this mess:

              *Groups of scientists giving each other jobs because they are “friends” not because of their skills.
              *Groups of scientists citing each other to boost their citation numbers.
              *Groups of scientists reviewing each other’s work.
              *Groups of scientists not being critical of each other to possibly “protect” their common interests.
              *Groups of scientists giving each other awards.
              *Groups of scientists putting each other on boards/committees.
              *Etc.

              One of the best things about science, in my reasoning, is that a single person can point everyone else to some truth, fact, or solid reasoning even when hundreds or thousands of other scientists think otherwise. To me, it makes sense to know who that person is so we could possibly pay attention the next time they say or write something, and/or reward them (both of which i reason help (improve) science).

            • Martha (Smith) says:

              ” I’ve also long suspected that in smaller courses I took in undergrad (seminars around 15 people) the “anonymous” faculty evaluations were easy to re-identify.”

              Yes, it is sometimes easy to identify (or at least make very good guesses at) some students from their course/instructor evaluation written comments. The handwriting, or particular manners of speech, or reference to particular incidents are often clues. And then there was the student who always wrote in purple ink. (I assume she did not care about anonymity.)

            • Anonymous says:

              (Thank you for the (extensive) reply!

              I replied (extensively) yesterday, but my post seems to have not been posted yet. Perhaps i did something wrong and/or am missing something, but if i remember correctly someone else mentioned their post being delayed as well. My own other recent post seemed to me to also take more time than usual to show up. I hope my reply/post from yesterday will show up today, also because i can’t remember what i replied)
