Time-reversal heuristic as randomization, and p < .05 as conflict of interest declaration

Alex Gamma writes:

Reading your blog recently has inspired two ideas which have in common that they analogize statistical concepts with non-statistical ones related to science:

The time-reversal heuristic as randomization: Pushing your idea further leads to the notion of randomizing the sequence of study “reporting.” Studies are produced sequentially, but consumers of science could be exposed to them in any permutation of that ordering, to study the effects of temporal priority on belief formation. Maybe there could be a useful application of this, but I don’t see it (yet).
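One could simulate this idea directly. The sketch below (mine, not Gamma’s) exposes simulated readers to random permutations of the same set of studies and applies a deliberately order-sensitive updating rule, so that final beliefs depend on which study happened to come first. The anchoring rule and the effect estimates are hypothetical assumptions, used only to illustrate the design.

```python
import random

def anchored_belief(estimates, anchor_weight=0.8):
    """Order-sensitive belief updating: each new estimate moves the
    current belief only part of the way, so earlier studies dominate.
    This anchoring rule is a hypothetical model of temporal priority."""
    belief = estimates[0]
    for e in estimates[1:]:
        belief = anchor_weight * belief + (1 - anchor_weight) * e
    return belief

random.seed(1)
# Hypothetical effect estimates; the first published study is inflated,
# as in the time-reversal discussions.
studies = [0.8, 0.1, 0.2, 0.15]

# Expose simulated consumers to random permutations of the same studies.
beliefs = []
for _ in range(1000):
    order = studies[:]
    random.shuffle(order)
    beliefs.append(anchored_belief(order))

# Under an order-sensitive rule, final beliefs vary across orderings.
print(min(beliefs), max(beliefs))
```

The spread between the minimum and maximum final belief measures how much temporal priority matters under the assumed updating rule; an order-insensitive rule (a plain average) would collapse that spread to zero.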

Declarations of conflicts of interest as p < .05: Conflicts of interest are largely treated like significant p-values: they are taken to indicate the absolute presence or absence of an effect (or, in this case, a bias), without nuance or context. The truth here is certainly dimensional rather than categorical, so one could argue this practice should be refined. But the utility function for such decisions may differ from the one governing the acceptance of scientific findings based on p-values, so it may well be better to err on the side of many more false positives. At least the question should be raised.

I guess he’s referring to this post and this one.


  1. Isn’t the declaration of conflict of interest, despite the name, better regarded as evidence of a potential conflict rather than as a conclusion that such a conflict exists? Knowing that there is a financial relationship is important evidence to consider when assessing the results of a study, and it is evidence that is otherwise difficult to obtain. Perhaps these should be labeled “declarations of potential conflict of interest.”

  2. The reason we have conflict of interest reporting is to provide a very clear, explicit rule that isn’t easily subject to bias. After all, the reason we have such a rule is that we suspect certain authors either genuinely, but mistakenly, believe themselves to be making an unbiased evaluation of the evidence despite their interest in the results, or are willing to lie about their research when they think they can get away with it.

    In both cases the benefit of a conflict of interest rule lies in its simplicity and clarity. Not only can you not let subtle bias affect the choice to disclose a conflict, but you can’t plausibly claim that a choice not to disclose such a conflict was a good-faith judgment call.

    Any attempt to add some kind of degree-of-conflict information to this will either be essentially useless or be sufficiently flexible to allow bias (or deliberate misrepresentation) to sneak into conflict of interest reporting itself.

    Consider even a very simple attempt to assign a numerical figure to a conflict of interest. Instead of merely revealing that one of the authors received funding from a company with a stake in the outcome, require that they disclose the fraction of their funding that comes from such a source. All of a sudden there are all sorts of difficult questions about what to report. For instance, what if their group received very little funding from such sources but shares expensive equipment with groups almost wholly funded in such a manner? What about future funding? One might very reasonably think that a vague comment by a potential funding source about future grants is too indefinite to include, but that is itself a judgment call subject to bias, and it is surely the excuse a dishonest researcher who expects to get a great deal of funding if they get the right results would use.

    Worse, even if you created some elaborate system of accounting rules, it wouldn’t really help with conflicts of interest. Incentives are a complicated matter and depend on all sorts of hard-to-quantify facts. The same funding arrangement suggests much less influence for an independently wealthy researcher who could easily replace any lost grant with one from another source than it would for a poor, struggling scientist. Better to have a simple rule about reporting and let the scientific community use the totality of the information it has about the individuals conducting the study to determine how much credibility to give it.
