What killed alchemy?

Here’s the answer according to David Wootton’s 2015 book, “The Invention of Science: A New History of the Scientific Revolution” (sent to me by Javier Benitez):

What killed alchemy was the insistence that experiments must be openly reported in publications which presented a clear account of what had happened, and they must then be replicated, preferably before independent witnesses. The alchemists had pursued a secret learning, convinced that only a few were fit to have knowledge of divine secrets and that the social order would collapse if gold ceased to be in short supply. Some parts of that learning could be taken over by the advocates of the new chemistry, but much of it had to be abandoned as incomprehensible and unreproducible. Esoteric knowledge was replaced by a new form of knowledge which depended both on publication and on public or semi-public performance. A closed society was replaced by an open one.

. . .

The demise of alchemy provides further evidence, if further evidence were needed, that what marks out modern science is not the conduct of experiments (alchemists conducted plenty of experiments), but the formation of a critical community capable of assessing discoveries and replicating results. Alchemy, as a clandestine enterprise, could never develop a community of the right sort. Popper was right to think that science can flourish only in an open society.

Replication yes, secret knowledge no.

This is consistent with my experience that secret peer review (referee reports, editorial decisions, copyright transfer forms, and all the other paraphernalia of the scientific publication process) has a lot of problems that can be resolved using open review (Pubpeer, blogs, open reviews, Arxiv, preregistration, the Open Science Framework, etc.).

Science should not be secret knowledge, and the scientific society should be an open society.

P.S. Benitez sends along a few more quotes:

Lakatos:

In this code of scientific honour modesty plays a greater role than in other codes. One must realise that one’s opponent, even if lagging badly behind, may still stage a comeback. No advantage for one side can ever be regarded as absolutely conclusive. There is never anything inevitable about the triumph of a programme. Also, there is never anything inevitable about its defeat. Thus pigheadedness, like modesty, has more ‘rational’ scope. The scores of the rival sides, however, must be recorded and publicly displayed at all times.

Now what constitutes a ‘discovery’, and especially a major discovery, depends on one’s methodology. For the inductivist, the most important discoveries are factual, and, indeed, such discoveries are frequently made simultaneously. For the falsificationist a major discovery consists in the discovery of a theory rather than of a fact. Once a theory is discovered (or rather invented), it becomes public property; and nothing is more obvious than that several people will test it simultaneously and make, simultaneously, (minor) factual discoveries. Also, a published theory is a challenge to devise higher-level, independently testable explanations… If one thinks of the history of science as of one of rival research programmes, then most simultaneous discoveries, theoretical or factual, are explained by the fact that research programmes being public property, many people work on them in different corners of the world, possibly not knowing of each other. However, really novel, major, revolutionary developments are rarely invented simultaneously. Some alleged simultaneous discoveries of novel programmes are seen as having been simultaneous discoveries only with false hindsight: in fact they are different discoveries, merged only later into a single one.

Imre Lakatos, History of Science and Its Rational Reconstructions, Proceedings of the Biennial Meeting of the Philosophy of Science Association, Vol. 1970 (1970), pp. 91-136

Kuhn:

Throughout his paper Lakatos refers to the importance in scientific decision-making of what he calls a “code of scientific honesty” or a “code of scientific honor”. [p. 92] When he distinguishes his position from the one to which he objects, he makes remarks like: “What one must not do is to deny [a research program’s] poor public record”, [p. 104] or “The scores of the rival sides… must be recorded and publicly displayed at all times.” [p. 105] Elsewhere he speaks of answering colleagues’ objections “by separating rational and irrational (or honest and dishonest) adherence to a degenerating research program.” [p. 105]

Lakatos’ views cannot, however, be distinguished from mine or anyone else’s in this way. On the contrary, he and I come closest at just these points. Who does he suppose believes that science could continue if scientists were dishonest? If I have been defending the irrational, it has not been by defending lies. In fact, Lakatos’ references to honesty, to a ‘public record’, or to a score that must be ‘recorded’ and ‘publicly displayed’ suggest that he too is thinking of theory-choice as a community activity which would be impossible unless public records of this sort were kept. When the individual may decide alone, nothing of the sort is needed. Finally, and most important, Lakatos’ emphasis on a code of honor carries him even further in the same direction, for a code consists of values not of rules, and values are intrinsically a community possession.

Thomas S. Kuhn, Notes on Lakatos, PSA: Proceedings of the Biennial Meeting of the Philosophy of Science Association, Vol. 1970 (1970), pp. 137-146

So you can laugh all you want to
But I’ve got my philosophy

37 Comments

  1. Anoneuoid says:

    This seems irreconcilable with the modern “thought” that replicating each others work is some kind of statistical idea/issue.

  2. Anonymous says:

    “This is consistent with my experience that secret peer review (referee reports, editorial decisions, copyright transfer forms, and all the other paraphernalia of the scientific publication process) has a lot of problems that can be resolved using open review (Pubpeer, blogs, open reviews, Arxiv, preregistration, the Open Science Framework, etc.).”

    Perhaps Merton fits nicely with your post as well: https://en.wikipedia.org/wiki/Mertonian_norms

    “The four Mertonian norms (often abbreviated as the CUDOS-norms) can be summarised as:

    communalism: all scientists should have common ownership of scientific goods (intellectual property), to promote collective collaboration; secrecy is the opposite of this norm.[3]

    universalism: scientific validity is independent of the sociopolitical status/personal attributes of its participants[4]

    disinterestedness: scientific institutions act for the benefit of a common scientific enterprise, rather than for the personal gain of individuals within them

    organized scepticism: scientific claims should be exposed to critical scrutiny before being accepted: both in methodology and institutional codes of conduct.[5]”

    I have been thinking more and more lately that a strong case can possibly be made that the journal/editor/peer-review model of science and/or scientific publishing of the last decades is actually 1) anti-scientific and 2) the cause of many of science’s problems:

    – it (sort of) keeps the information hidden from all except a select few (journal article paywall)

    – it (sort of) denies all from participating except a select few (is submitting to an “official” journal even possible as a non-academic? And if so, how successful has that been for those that tried?)

    – a select few (sort of) decide what constitutes a useful scientific contribution (journals, editors, peer-reviewers)

    – a select few (sort of) decide what the “rules” and “norms” of science are (e.g. journals, editors, peer-reviewers, researchers decided that not publishing null-results or p-hacking is all fine and good), when i reason that should be determined more independently (e.g. via ground rules/basic principles and values, etc.)

    I am more and more becoming a fan of entirely getting rid of journals/editors, using pre-prints instead, and possibly keeping peer review but only in the form of post-publication online comments. That way everyone can participate in science, everything is accessible to everyone, there is no blocking of certain papers/authors, no “reviewer 2” issues, etc. You would still get possibly useful comments via post-publication peer review. And if some researchers still value pre-publication peer review, they can just ask their friends or colleagues, who can then also become co-authors if their contribution has been big enough (imagine what that will do for the quality of comments).

    • Andrew says:

      Anon:

      I noticed this item from your list:

      Organized scepticism: scientific claims should be exposed to critical scrutiny before being accepted: both in methodology and institutional codes of conduct.

      This is interesting, in that I suspect that various credulous science boosters such as Gladwell, Fiske, Freakonomics, etc., would probably say that they do favor critical scrutiny etc., and that this is already done in the peer review process. It’s not! Peer review can be hyper-critical, but once you remember that (a) peer review is spotty, and (b) just about every paper gets published, then you realize that scientific claims are not really “exposed to critical scrutiny before being accepted.” Or, to put it another way, claims may be exposed to scrutiny, but authors are free to ignore that criticism.

      • Occasional reader says:

        In that apparently distant past, I think the assumption was that papers were being thoroughly read after publication, and that critical scrutiny included readers not citing (or otherwise endorsing) bad science.

        Today, people no longer seem to employ scrutiny themselves, but try to outsource this to reviewers, retractions, or PPPR platforms, so they can skip reading papers critically.

        At the end of this way of outsourcing one’s own thinking, we could finally develop a fully automated publication system, where new papers are generated automatically from anything available online (maybe with a valid DOI). This system will have important advantages, e.g. you can save the costs of all the people hanging around at universities, and you don’t even need a campus and all this any more. This science can be run on a single large computer, and of course someone can build a second one to produce ‘alternative science’ if needed. Try it out…

        • Anonymous says:

          “At the end of this way of outsourcing one’s own thinking (…)”
          “This science can be run on a single large computer (…)”

          This may already be well in progress via the “Psychological Science Accelerator”, where (if i understood things correctly) hundreds of labs have already signed up to perform someone else’s studies, and seem to find no problematic issues with (semi-) outsourcing data-collection and -analysis.

          Let’s hope there will be no new Diederik Stapel(s) at the beginning and end of all this outsourcing that’s involved with the study topic, design, gathering, and analyzing of data on behalf of dozens of labs around the world :) What could possibly go wrong…

          https://psyarxiv.com/785qu/

          https://twitter.com/psysciacc?lang=en

          https://twitter.com/lisadebruine/status/986894626870235137

          • Andrew says:

            Anon:

            “Psychological science accelerator,” huh? I’d think that psychological science does not need any acceleration. Rather, they should slow down a bit!

            • Yes, it should slow down. Heaven help us otherwise

            • Anonymous says:

              ““Psychological science accelerator,” huh? I’d think that psychological science does not need any acceleration. Rather, they should slow down a bit!”

              I am worried that the “Psychological Science Accelerator” will not even be accelerating anything. But perhaps this is because i still don’t quite understand what they plan to do. I read 300+ labs have signed up, but does that mean that a single study will be executed by 300 labs? Isn’t that wasting tons of resources?

              I read the pre-print to find answers but they do not seem to go into much detail about this matter.

              Next to multiple possible problematic issues i am already (fore-)seeing (e.g. there are now already 3 “assistant directors” and a few “committees” like it’s a big fancy firm), i am genuinely puzzled by the fact that 300+ labs have signed up and nobody seems to be asking questions (or providing answers) about issues like 1) possibly wasting way too many resources, 2) possibly putting way too much unnecessary power in the hands of a few people, and/or 3) whether this format is really the best way to accelerate anything…

          • Anonymous says:

            ““At the end of this way of outsourcing one’s own thinking (…)”
            “This science can be run on a single large computer (…)”

            This may already be well in progress via the “Psychological Science Accelerator”, where (if i understood things correctly) hundreds of labs have already signed up to perform someone else’s studies, and seem to find no problematic issues with (semi-) outsourcing data-collection and -analysis.”

            Ah, another possible step in this direction: “A career niche for replicators” (https://rolfzwaan.blogspot.nl/2018/05/a-career-niche-for-replicators.html)

            And another nod to the Psychological Science Accelerator by Zwaan in the comment-section: “The best use of replication will be in the context of original research. Collaborative efforts such as PsyAccellerator will be crucial in this endeavor.”

            I feel like i am missing some crucial information and/or view things so totally differently that it boggles my mind that everyone seems to (uncritically?) endorse the Psychological Science Accelerator.

            If i understood things correctly, their view of “collaboration” is having dozens of labs do what 1 researcher and/or 1 “voting committee” decides. Now we apparently even have some folks starting to talk about “a career for replicators”. I worry that before we know it, we will have a science where a small group of people decide what everyone else will be doing as research under the guise of “collaboration”. That’s not “collaborating” in my view, that’s more like “outsourcing” or “telling others what to do”.

            • Anonymous says:

              The more i read about the Psychological Science Accelerator the more worried i get. Here’s the latest: https://www.facebook.com/groups/psychmap/permalink/608023426241250/

              “Still working out the details, but we will have a two-stage data release plan for the Psychological Science Accelerator to allow others to explore in the first portion of the data set, complete a pre-reg, then test in the second portion. Hoping people find this useful.”

              So am i understanding things correctly here that the “Psychological Science Accelerator” will have:

              1) a small group of folks deciding what all other labs should replicate (that’s “collaboration” right?),
              2) have a single computer gather all data,
              3) have a single (or a few) person analyze the data,
              4) will not make all the data available so others can check these at the time of the results getting published (and widely talked/written about?), and
              5) possibly “nudge” folks into follow-up research on that specific study/topic (chosen by a small group of people?) by being all mysterious and keeping the 2nd part of the data a “secret”?

              I am so confused about this whole thing, but even more confused that every “open science” researcher on twitter and facebook and blogs seems to totally be fine with all of this…

              • Very good concerns

              • Keith O'Rourke says:

                They have yet to experience how collaborations have difficulty avoiding drift into groups dominated by key members advancing their colleagues’ career interests.

              • Anonymous says:

                “They have yet to experience how collaborations have difficulty avoiding drift into groups dominated by key members advancing their colleagues’ career interests.”

                I really don’t get it, and am getting really concerned. There are so many things that could go wrong in my opinion and reasoning with this “Psychological Science Accelerator”, and nobody seems to be talking about them.

                Here’s one i just thought of when writing this comment: the next thing you know these folks could be applying for grants and possibly giving themselves some nice salaries for all their “collaborative” and “guiding” efforts, thereby taking money away from the actual science. Basically they could form a whole new “level” in the research scheme, and, just like journals and university administrators, take away money and other resources from where they should go, while having way too much unnecessary influence.

                Perhaps when you shout “open science”, “collaboration”, and “science accelerator” enough people don’t really think about stuff anymore. I just don’t get it…

              • Anonymous says:

                Wow, just wow.

                If i understood things correctly, one of the “founding members” of StudySwap and the Psychological Science Accelerator has just received the “SIPS leadership award” at the 2018 Society for the Improvement of Psychological Science (SIPS) meeting.

                To me this is incomprehensible.

                Please correct me if i am wrong about the following:

                -StudySwap and the Psychological Science Accelerator have, as far as i know, resulted in at most a handful of papers

                -There seems to be little, if any, critical discourse, or investigation, with regard to whether they are beneficial for actually performing, and/or improving, psychological science. Nobody is even asking questions.

                -SIPS is supposed to be “different” but still somehow feels the need to hand out awards to individuals (i tried to express my issues with individual awards here https://psyarxiv.com/pju9c/) for something that has hardly led to any science, and hasn’t been critically examined, tested, or evaluated.

                -Someone received an individual award at a meeting of a society for the improvement of psychological science, probably 1) largely because of efforts to promote “collaborative” work, and 2) for being a “leader”, and actually accepted it. To me this whole thing is not just ironic, but pretty sad.

                It’s like the “Registered Report” debacle all over again (possibly see http://andrewgelman.com/2017/09/08/much-backscratching-happy-talk-junk-science-gets-share-reputation-respected-universities/#comment-560672)

                I am getting the feeling more and more that the “old boys network” is simply being replaced with an “open science network” where the same problems, and problematic group processes will be at work.

                The king is dead, long live the king.

                Same sh#t different day.

                I’m out of this whole “open science” thing. It’s no longer what i think it should be: https://www.youtube.com/watch?v=5FjWe31S_0g

              • Martha (Smith) says:

                The more things change, the more they stay the same ;~)

              • Without being at SIPS2018 it is hard to assess what in the way of theory and practice is being contributed. I hate to prejudge the content. I guess there is justifiable cynicism after some recent investigatory revelations in the field. But it will take a few years more to evaluate the results of Open Science Framework.

  3. Keith O'Rourke says:

    And some stuff from CS Peirce.

    “But what I mean by a ‘science’ (…) is the life devoted to the pursuit of truth according to the best known methods on the part of a group of men who understand one another’s ideas and works as no outsider can. It is not what they have already found out which makes their business a science; it is that they are pursuing a branch of truth according, I will not say, to the best methods, but according to the best methods that are known at the time. I do not call the solitary studies of a single man a science. It is only when a group of men, more or less in intercommunication, are aiding and stimulating one another by their understanding of a particular group of studies as outsiders cannot understand them, that I call their life a science. …” (Peirce: MS 1334, pp. 11-14, 1905).

    He put it like this once: we are all uniquely defined by how we are particularly wrong about the world. When we engage with others in inquiry there is an opportunity to learn how we are wrong from those who are not wrong in just the same ways we are. So coming together in science only affords opportunities to learn how you are wrong, never that you are right (that’s just a delusion). One has to be up for constant self-effacement.

    Not that everyone or even most actually are – “The theory here given rests on the supposition that the object of the investigation is the ascertainment of truth. When the investigation is made for the purpose of attaining personal distinction, the economics of the problem are entirely different. But that seems to be well enough understood by those engaged in that sort of investigation.” Peirce, C. S. (1879). Note on the theory of the economy of research.

    • Steve says:

      Thank you for this quote, I was going to find it, just as I saw your comment. There is another one (that I also can’t find), about how a logical (and scientific) person is ultimately just an honest person, a person that tries to challenge his own beliefs, admits his errors and uncertainty, and is clear about the kind of evidence that would prove him wrong. There is a remarkable parallel between what Peirce says about science and Feynman’s cargo cult speech. Ultimately, for Peirce science is not a specific type of knowledge or a set of methods or even a particular type of community, but a set of values that we all, if we are honest, can agree will more reliably lead to the truth. There probably was a time when alchemy was as scientific as physics is today, but it could not survive honest inquiry.

    • I am a little surprised that Peirce would suggest so narrow a view of the scientific deliberative process. I don’t doubt that conferring with other scientists is a worthy goal. There is something to be said for a solitary, serendipitous excursion into it. In other words, not all need be scientists to contribute to the scientific process.

      Nor do I agree that ‘coming together’ precludes being right. Why is it only a ‘delusion’?

      • Keith O'Rourke says:

        > solitary, serendipitous excursion into it.
        Fine, but it is not science until shared with others who are convinced and can replicate it.

        > precludes being right.
        Nothing to preclude – no one is ever right about the world (in all aspects).

    • Martha (Smith) says:

      Thanks to Keith and Steve for bringing in Peirce’s thoughts. (But I would be remiss if I did not emphasize that what they say needs to be stated in terms of “person”, etc. rather than “man”, etc.)

  4. Erin Jonaitis says:

    Hm. Is it possible to do science, in this view, under the constraints of (1) security clearances for classified information or (2) the desire to preserve trade secrets? If we can call (some) work done in this way scientific, is that only because of the large community of scientists working on related topics without those constraints? If the latter, can we say anything about the balance of open vs closed science that is required to make the closed scientific fields “work”?

    • Martha (Smith) says:

      Perhaps we need to introduce a term such as “quasi-science” or “fringe science” for work done under such constraints.

    • Thanatos Savehn says:

      Yep. You just have to publish it under the pen name “Student”. j/k However, imagine if instead of publishing a method whereby a practical man might come to a definite conclusion, Gosset had gone a step further and published: “Using this method on barley in different soils, experiencing different average annual rainfalls and growing at different elevations we were able to predict which fertilizer would produce the highest yield nine times out of ten.” Fisher might have trained his intellect on a different target and we might not be having these discussions.

  5. Tom Dietterich says:

    I see no reason why you can’t do science in secret. It will just be very slow and highly error-prone, because each of us has many weaknesses (biases, inferential blindness, flubs, etc.). Collaboration greatly increases the reliability of the enterprise.

  6. Kaiser says:

    Interesting that they specified: “they must then be replicated, preferably before independent witnesses.” This puts the onus on the original researchers to do the replication – in an open way. Contrast it with the attitude that (a) it’s a waste of time for the original researchers to do a replication or (b) the replicators do not have the skill to perform a replication. On (b), the original researchers should feel motivated to coach the replicators to make sure they will replicate the result.

    • Martha (Smith) says:

      “This puts the onus on the original researchers to do the replication – in an open way.”

      I don’t see how this necessarily follows from “they must then be replicated, preferably before independent witnesses.”

      • Kaiser says:

        I see your interpretation. It’s either that the first party does a replication in front of independent witnesses, or that a second party does a replication in front of independent witnesses. But the second party would already be independent, so that’s probably why I read it differently.
        In any case, I think the first party doing a replication in front of independent witnesses is acceptable. No?

        • Martha (Smith) says:

          The second party would be independent of the first party, but not of their own biases/flaws. So I think a second party replicating with third party (independent of both first and second parties) as witnesses would be strongest.

  7. Martha (Smith) says:

    Kudos to Lakatos for emphasizing the importance of modesty.

  8. David Whitlock says:

    I disagree.

    I am pretty sure it was lack of funding that killed off alchemy. If there was abundant funding, alchemy would be going strong today.

    Sort of like how climate change denial is going strong, race-based IQ difference research is going strong, DNA-based IQ research is going strong, Complementary, Alternative and Integrative health is going strong (there is even a federal center for it! the NCCIH), mainstream religions are going strong (and even some not-so-mainstream ones, like Scientology), homeopathy is going strong, chiropractic is going strong.

    If something can get enough funding to sustain the people practicing it, it will persist indefinitely.

    • Anonymous says:

      “I am pretty sure it was lack of funding that killed off alchemy. If there was abundant funding, alchemy would be going strong today.”

      I think this could be a (big) part of it yes.

      Look at psychology today, they are getting lots of money to (try and) clean up the mess they themselves created via funding for replications and “meta-scientific” work, and they don’t seem to question whether this is useful, and even seem to be proud of that.

      I don’t understand all this focus on replicating past, probably flawed, work. It’s possibly wasting tons of resources, and sort of reinforces the problematic system:

      1) it uses resources, and gives attention to, probably sub-optimal research and researchers,

      2) while at the same time reinforcing the “backwards” process of replicating studies after every single important thing concerning a scientific study has in large part already happened (gathering many citations, influencing much work, influencing policy, etc.)

      Also see J. C. Coyne, “Replication initiatives will not salvage the trustworthiness of psychology”:

      https://bmcpsychology.biomedcentral.com/articles/10.1186/s40359-016-0134-3

      In the meantime, i also think they are not really trying to stop this “backwards” process from happening in the future. Large scale replication projects will at best reset the system but then start the same cycle again. I tried to get attention for this possibly problematic issue, and provide a possible solution, here:

      https://groups.google.com/forum/#!topic/openscienceframework/JtabKEvqE44
