Capitalist science: The solution to the replication crisis?

Bruce Knuteson pointed me to this article, which begins:

The solution to science’s replication crisis is a new ecosystem in which scientists sell what they learn from their research. In each pairwise transaction, the information seller makes (loses) money if he turns out to be correct (incorrect). Responsibility for the determination of correctness is delegated, with appropriate incentives, to the information purchaser. Each transaction is brokered by a central exchange, which holds money from the anonymous information buyer and anonymous information seller in escrow, and which enforces a set of incentives facilitating the transfer of useful, bluntly honest information from the seller to the buyer. This new ecosystem, capitalist science, directly addresses socialist science’s replication crisis by explicitly rewarding accuracy and penalizing inaccuracy.
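To make the proposed payoff structure concrete, here is a minimal sketch of a single escrowed transaction; the class, the dollar amounts, and the settlement rule below are illustrative guesses on my part, not the actual design in the paper:

```python
# A minimal, hypothetical sketch of one escrowed buyer/seller transaction.
# Assumed payoff rule: if the seller's answer is later judged correct, the seller
# keeps the buyer's payment and recovers the posted stake; otherwise the buyer
# gets the payment back plus the forfeited stake.

from dataclasses import dataclass


@dataclass
class EscrowedTransaction:
    price: float  # paid by the information buyer (Q), held by the exchange
    stake: float  # posted by the information seller (A), held by the exchange

    def settle(self, answer_correct: bool) -> dict:
        """Release the escrowed funds once correctness has been determined."""
        if answer_correct:
            return {"seller_A": self.price + self.stake, "buyer_Q": 0.0}
        return {"seller_A": 0.0, "buyer_Q": self.price + self.stake}


# Example: Q pays $100 for an answer; A stakes $100 on being right.
t = EscrowedTransaction(price=100.0, stake=100.0)
print(t.settle(answer_correct=True))   # A is paid and recovers the stake
print(t.settle(answer_correct=False))  # Q is refunded and receives the forfeited stake
```

The point is simply that both parties have money riding on the same claim, with the exchange holding it until correctness is determined.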

The idea seems interesting to me, even though I don’t think it would quite work for my own research as my work tends to be interpretive and descriptive without many true/false claims. But it could perhaps work for others. Some effort is being made right now to set up prediction markets for scientific papers.

Knuteson replied:

Prediction markets have a few features that led me to make different design decisions. Two of note:
– Prices on prediction markets are public. The people I have spoken with in industry seem more willing to pay for information if the information they receive is not automatically made public.
– Prediction markets generally deal with true/false claims. People like being able to ask a broader set of questions.

A bit later, Knuteson wrote:

I read your post “Authority figures in psychology spread more happy talk, still don’t get the point . . .”

You may find this Physics World article interesting: Figuring out a handshake.

I fully agree with you that not all broken eggs can be made into omelets.

Also relevant is this paper where Eric Loken and I consider the idea of peer review as an attempted quality control system, and we discuss proposals such as prediction markets for improving scientific communication.

30 thoughts on “Capitalist science: The solution to the replication crisis?”

        • I looked at a half dozen of the questions on Kn-X, and it looks like they are very specific questions that arose in particular applications. It seems like a good format for that sort of thing and for testing the accuracy of a theory that could later be published. I haven’t read the paper, which probably addresses the issue of which scientific results should be public goods — it would take a fairly devoted capitalist to argue for no public goods, ever. Presumably Knuteson doesn’t believe that Maxwell’s equations, the laws of thermodynamics, or the structure of DNA should have remained behind a paywall forever.

  1. Good fun:

    Zamora Bonilla, Jesús P. “Scientific inference and the pursuit of fame: A contractarian approach.” Philosophy of Science 69.2 (2002): 300-323.

    Ferreira, José Luis, and Jesús Zamora-Bonilla. “An economic model of scientific rules.” Economics & Philosophy 22.2 (2006): 191-212.

    Zamora used to organize some brilliant summer rounds of Econ dialogues (mislabelled as a ‘summer school’) in San Sebastian, including on that sort of problem (after the 2002 paper). Not sure what came of them, regrettably…

  2. Is the only determiner of value correctness? I can easily produce study after study where the result is correct… but uninteresting. If you’re going to have a multiplier for difficulty and interestingness, you’ll have to be careful with their magnitude or you’ll end up right back where we are now.

  3. First, I really like the succinct no-nonsense summary of > 100 papers on the replication issue found in the intro. Second, there is no need for the arbiter to be a “central exchange”, or even involved unless there is a dispute. Software to do this already exists, eg: https://bitsquare.io/

    The part I am not getting is:

    “Every transaction includes a monetary incentive for Q to determine the accuracy of A’s answer and to back this determination with evidence deemed sufficient by an objective third party (X). These features directly address socialist science’s reproducibility crisis. These incentives, present in capitalist science, explicitly and directly reward accuracy and penalize inaccuracy.”

    Isn’t Q just going to do a second transaction with B and present the results to X? How else would this work? Also, it isn’t clear how A (and/or B) initiate this transaction with Q. Does Q put a proposal about some needed info somewhere, which is then bid on by A and B, who post their own counter-proposals (e.g. I’m thinking of freelance sites like upwork/elance)? That process may become quite a mess, with lots of low-quality bids.

    Another thing is that sometimes Q will have a question that is essentially unanswerable with the available info (I’m sure many people here are familiar with that). If X and Q are unable to ascertain this, it looks like we are back to square one where NHST reigns supreme.

    Anyway, I would certainly like to see a prototype of this, since I agree that no solution is likely to arise from within “the system” anytime soon.
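    For concreteness, here is a rough sketch of how I imagine the Q/A/B/X flow might work — the cross-check with a second seller, the stakes, and the arbitration rule are all my own guesses, not the mechanism described in the paper:

```python
# A hypothetical sketch (not the paper's actual protocol) of a disputed transaction:
# Q buys an answer from seller A, cross-checks it by buying a second answer from
# seller B, and if the two disagree, an arbiter X rules on which one is correct.
# Every name, price, and rule below is an illustrative placeholder.

from dataclasses import dataclass
from typing import Callable


@dataclass
class EscrowedAnswer:
    seller: str
    answer: str
    price: float  # paid by Q for this answer, held in escrow
    stake: float  # posted by the seller, held in escrow


def resolve(q_budget: float,
            first: EscrowedAnswer,
            second: EscrowedAnswer,
            arbiter: Callable[[str, str], str]) -> dict:
    """Return final payouts after Q cross-checks one answer against another."""
    payouts = {first.seller: 0.0, second.seller: 0.0, "Q": q_budget}
    payouts["Q"] -= first.price + second.price  # Q pays both prices into escrow

    if first.answer == second.answer:  # agreement: no dispute, both sellers are paid
        for sale in (first, second):
            payouts[sale.seller] += sale.price + sale.stake
        return payouts

    winner = arbiter(first.answer, second.answer)  # disagreement: X rules on correctness
    for sale in (first, second):
        if sale.answer == winner:
            payouts[sale.seller] += sale.price + sale.stake
        else:
            payouts["Q"] += sale.price + sale.stake  # loser's price refunded, stake forfeited to Q
    return payouts


# Example: A and B disagree; the (human) arbiter X sides with B.
a = EscrowedAnswer("A", "compound binds the target", price=100.0, stake=100.0)
b = EscrowedAnswer("B", "compound does not bind", price=100.0, stake=100.0)
print(resolve(q_budget=500.0, first=a, second=b, arbiter=lambda x, y: y))
```

    Of course this just pushes the low-quality-bid and unanswerable-question problems onto X, so it is only a starting point.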

      • Interesting – though I think “A la carte consulting” is much more apt than “Capitalist science”.

        Ideally in science you want full disclosure while really only hoping to clear some brush so that those after you can solve the real problems. Needs to be a co-operating community.

        For instance, as bad as least-publishable papers are – here there will be least-monetizable answers!

        Not to suggest a la carte consulting won’t help many do science – it’s just not going to be science.

      • Hi Bruce,

        I’m interested in the mechanics of this. I see the money transfers are handled through PayPal*, which I don’t think would be suitable for transfer of really expensive/worthwhile information (e.g. you run a study that costs $100K–$1,000K to get the answer). How do you imagine this could scale up to levels able to actually fund science projects?

        *In fact, with PayPal I am scared you will have to deal with chargeback scams, and since the information transfer is non-reversible you will have to promise to cover the sellers if they are to use your site. There is no way to “recover” the product once sent. To deal with this you will have to keep raising your fees, increase the intrusiveness of the sign-up process, etc.

  4. Honestly asking — wouldn’t *collaboration between researchers with distinctly different priors* be a good approach? That is, the researchers disagree about a hypothesis, but they agree in advance on what would be a good way to test it. They carry out the research side by side, review all the data together, and they each get their own subsection in the analysis section of the published paper. Wouldn’t that approach — let’s say, to testing psychotherapy modalities, or priming effects, or something about voter behavior — produce work that was FAR more credible and rigorously thought through than what we see today? I truly don’t understand why “competing labs” don’t reach out and collaborate instead.

  5. If you publicly fund research, you keep it a public good and compromise on the material incentives to do good work. What this guy proposes is basically patents. You get a higher material incentive to be correct; however, the knowledge is underutilized since the guy owning it has a monopoly. He is basically just another physicist ignoring the econ literature and reinventing the wheel.

    His general point on how to provide correct incentives for research has been discussed way better in http://www.nber.org/papers/w7716 and http://www.nber.org/papers/w7717. For the reproducibility crisis there is, for example, this proposal: http://www.nber.org/papers/w23335.

    • This from the third link is interesting “instead of sending this paper to a peer-reviewed journal, we make it available online as a working paper, but we commit never to submit it to a journal for publication. We instead offered co-authorship for a second, yet to be written, paper to other scholars willing to replicate our study.”

      This would be science as opposed to monopolistic answer provision regarding practical issues that may arise in trying to do science (clearing some practical obstacles).

      So it would take a couple of journals and perhaps some universities to provide incentives for such second-tier publishing…

  6. Paying for results will only assist science if you are paying for sound scientific results. Product development is a solid motivation for engineering-oriented science because if the science is not sound, the products fail. This can theoretically propagate back to the researchers if the causal chains and temporal links aren’t too sloppy. In cases where people are actually paying for what they want to hear, we replicate psychology.

  7. It seems to me that a major problem is that the researcher may starve to death or die of old age before the money leaves escrow. If I am reading the wiki correctly, it was only roughly 45 years from theory to confirmation of the Higgs boson. That is a long time to wait for a cheque.

    The heliocentric solar system theory is another one. If I understand the history of the theory/research, I admit in a very crude manner, it took a number of scholars to properly formulate the theory, going from, at least, Copernicus (mid 16th C) to Galileo and Kepler in the 17th C (and probably a number of researchers I don’t remember), and it really only could be confirmed in the 18th C when better observation techniques (and math?) were developed. Who, if anyone, gets the money?

    I can see the approach being, possibly, useful in late development work in some areas: product development leading to a patent, or pharmaceutical development. I have severe doubts about it working in most or all of basic research, where collaboration, not rivalry, is often required.

    I’d also refer you to Dorothy Bishop’s essay on negative results, http://deevybee.blogspot.ca/. I suppose some researchers would be happy to pay for negative results, but just how do you confirm a negative result?

    • > severe doubts about it working in most or all of basic research, where collaboration, not rivalry, is often required.
      I agree.

      Perhaps this quote from Peirce might help: “I [Peirce] do not call the solitary studies of a single man a science. It is only when a group of men, more or less in intercommunication, are aiding and stimulating one another by their understanding of a particular group of studies as outsiders cannot understand them, that I call their life a science.”

      • Thank you, I think it reinforces my position very nicely.

        The proposed system looks good for highly applied research or product development.

        I just don’t see it working for basic or intermediate research where a few hundred people, over perhaps 100 or 200 years, are contributing to the results.

        Goodness, in a litigious society such as the USA, the lawsuits about relative contributions could go on for decades. :)

  8. I’m in the pharma industry and I’m trying to think of the cases I know of where a key paper has been agonizingly, frustratingly wrong.

    I think, unsurprisingly, we haven’t spent a year on something we could disprove in ten days. We have spent something like a year on things we could disprove in six months. Maybe the market would have helped. But we might prefer to start early on the uncertain results rather than wait a while for the academics to advance them to a point of high probability.

    Given the rise of patents in universities and industry collaborations, there is already a big incentive to be correct. There’s no disincentive to be wrong, though. It’s an interesting idea, but would the market actually exist for penalties? If we (industry) are willing to move on things with a 5-10% chance of success, it seems like the academics, not us, are going to be the risk-averse ones in the transaction.

    FWIW in my experience the specific example of chemical matter A binding to target B would *not* take a long time to answer, but that’s seldom the issue. The problem in applying this category of academic work tends to be “does the observed effect of chemical matter A actually derive from its binding to target B.” (Other research has other classes of problems, of course.)

  9. This sounds like a proposal to create a huge number of new jobs for lawyers. For example, you could easily offer your paper via a shell company, and just have that fold if it turns out to be wrong. I am also convinced it would not work well for most of science.

    Look at how many CEOs and other highly important people just repeat the power pose claims uncritically, and would probably have paid up for it.

    One of the main problems in this paper is that it assumes the buyer is interested in the truth. That may often be the case, but there are situations where the buyer is not rational (see the power pose example). And there are other cases where the buyer has a rational self-interest that does not have to align with the truth.

    It would create a big incentive for scientists to (for example) offer highly priced “evidence” for smoking being non-harmful, which would be happily bought (and supported with more bad evidence) by tobacco companies. Or, to take the drug company example, they will not pay for research to develop a new drug (those are usually developed by buying small university spin-offs or biotechs). They will pay for research demonstrating that their product is more effective and/or safer than the competition, and they have an incentive to do so regardless of whether it is true.

    This proposal just adds additional incentives for crappy papers, namely abusing the system to make money in addition to reputation.

    Also, given that this seems to work only for applicable knowledge and not for basic research (or could you imagine paying Newton’s heirs forever for his laws?), it seems that patents already cover this adequately (unsurprisingly, patents are also a gold mine for lawyers).

  10. This whole business strikes me as irrelevant to what most people think of as science – at least *basic* science. It seems more relevant to “engineering” or “inventing,” and there are already ways to do this for money.

  11. > Prices on prediction markets are public. The people I have spoken with in industry seem more willing to pay for information if the information they receive is not automatically made public.

    If we’re serious about the idea of science as a public good, then I think the “prices on prediction markets are public” is the most important part!

    We already subsidize science. If people aren’t quite willing to pay for information when the information is automatically made public, then let’s just subsidize scientific prediction markets too. (And I say this as someone who almost always argues against subsidies.)

  12. This is one of the worst ideas I have ever heard. Capitalism is killing the planet and driving the human race to extinction. The very notion that it should be expanded and adopted elsewhere is insane.
