Bayes in the research conversation

Charlie Williams writes:

As I get interested in Bayesian approaches to statistics, I have one question I wondered if you would find interesting to address at some point on the blog.

What does Bayesian work look like in action across a field? From experience, I have some feeling for how ongoing debates evolve (or not) with subsequent studies in response to earlier findings. I wonder if you know how this happens in practice when multiple researchers are using Bayesian approaches. How much are previous findings built into priors? How much advance comes from model improvement? And in a social science field where self-selection and self-interest play a role, how are improved “treatment” effects incorporated and evaluated?

I thought you might know of a field where actual back and forth has been carried out mostly in the context of Bayesian analysis or inference, and I thought it would be interesting to take a look at an example as I think about my own field.

My reply: I’ve seen Bayesian methods used for individual studies, and I’ve seen Bayesian meta-analysis (of course), but I can’t recall seeing an entire field of inquiry placed in a Bayesian perspective, with the posterior inference from an earlier study used as the prior for the next.
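A minimal sketch of what that posterior-as-prior chaining would look like in the simplest conjugate setting, a normal mean with known sampling variances; the effect sizes and standard errors here are invented purely for illustration:

```python
import numpy as np

def update_normal(prior_mean, prior_sd, est, se):
    """Conjugate update for a normal mean: prior N(prior_mean, prior_sd^2)
    combined with an estimate `est` whose sampling error is N(0, se^2)."""
    prior_prec = 1.0 / prior_sd**2
    data_prec = 1.0 / se**2
    post_prec = prior_prec + data_prec
    post_mean = (prior_prec * prior_mean + data_prec * est) / post_prec
    return post_mean, np.sqrt(1.0 / post_prec)

# Study 1: vague prior, observed effect 0.40 with standard error 0.20
m1, s1 = update_normal(prior_mean=0.0, prior_sd=10.0, est=0.40, se=0.20)

# Study 2: study 1's posterior becomes the prior
m2, s2 = update_normal(prior_mean=m1, prior_sd=s1, est=0.10, se=0.15)

print(f"after study 1: {m1:.3f} +/- {s1:.3f}")
print(f"after study 2: {m2:.3f} +/- {s2:.3f}")
```

In this toy setting the second study's prior is exactly the first study's posterior; the comments below give reasons why, in practice, one would want to add uncertainty before reusing it.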

7 thoughts on “Bayes in the research conversation”

  1. The lack of Bayesian updating tends to be a criticism of applied Bayes. In practice, strict updating doesn’t make sense because there are always differences in study design, sample bias, and investigator effects. So even if you’re using the result of a previous study, the prior should always have more uncertainty than the previous posterior distribution because of these things.

  2. In clinical trials, when using historical controls, priors are constructed from previous studies/trials. While this may not be a whole field of study, we have to be pretty comprehensive. However, one needs to proceed with caution, since the standard of care can change over time, or shifts in the control group can happen due to changes in enrollment criteria or simple shifts in the population.
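    A toy sketch of the discounting idea in the two comments above: before reusing a previous study’s posterior as a prior, widen it, here by crudely inflating its variance (more formal devices such as power priors or robust mixture priors are similar in spirit). The numbers are invented.

    ```python
    import numpy as np

    def discounted_prior(post_mean, post_sd, inflation=2.0):
        """Turn a previous study's posterior into a prior with extra
        uncertainty, allowing for design differences, drift in the
        standard of care, shifts in enrollment criteria, etc."""
        return post_mean, post_sd * np.sqrt(inflation)

    # Previous trial's (approximate) posterior for the control-arm mean
    prev_mean, prev_sd = 0.30, 0.03

    # Prior for the new trial: same centre, twice the variance
    prior_mean, prior_sd = discounted_prior(prev_mean, prev_sd, inflation=2.0)
    print(prior_mean, round(prior_sd, 4))   # 0.3 0.0424
    ```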

  3. We do this in cosmology to some extent: we’re trying to measure a set of parameters, and any given measurement may be informative about only a subspace of them. I say “subspace” because in practice an experiment will limit us to some part of the parameter space, but will often have very strong (sometimes perfect) degeneracies between the parameters we care about. So especially in that case we *need* to use informative priors from other experiments in order to actually get parameter limits.

    But because not all cosmologists are Bayesians, we even sometimes have to interpret frequentist results as if they were Bayesian. Dangerous, of course…

    There are some caveats, however: there is usually some effort to report (or at least allow calculation of) some prior-neutral statistics that allow recreation of the likelihood function [See your own “A Bayesian wants everybody else to be a non-Bayesian.” in Gelman, A. (2012), “Ethics and the statistical use of prior information.”]. Moreover, because we’re cosmologists, we’re studying the one and only universe. Hence, you can’t always combine final likelihoods when it’s two different measurements of the same sky.

    • “Moreover, because we’re cosmologists, we’re studying the one and only universe. Hence, you can’t always combine final likelihoods when it’s two different measurements of the same sky.”

      I don’t understand what this means. Bayesian methods are great for multiple measurements (with different likelihood models) of the same thing.

      • Hi,

        Sorry, I wasn’t at all clear.

        We often make measurements of the same quantity (e.g., the pattern of cosmic microwave background fluctuations) and then use that to infer the underlying cosmological parameters which determine the distribution governing the fluctuations. The correct Bayesian procedure is to model the two experiments as having the same underlying sky and different noise distributions. However, we usually don’t report our results in a way that makes that possible: what gets reported is usually just the posterior distribution for the cosmological parameters, marginalised over the underlying sky signal.

        In order to combine experimental results, you should only marginalise over nuisance parameters that are specific to your experiment, but in this case that would increase the dimensionality of the reported posterior from about 6 to millions (though of course this is an exaggeration for effect, since the structure of the full distribution can be compressed somewhat under various assumptions about the noise properties).
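    A toy numerical version of the degeneracy-breaking point above: two invented Gaussian “experiments” on a two-parameter space, each tight in one direction and essentially unconstrained in the other. Combining them (adding precision matrices, which is what using one experiment as the prior for the other amounts to in the Gaussian case) pins down both parameters.

    ```python
    import numpy as np

    def precision(sd_tight, sd_loose, angle):
        """Precision matrix for a 2-D Gaussian constraint with standard
        deviation sd_tight along one principal axis (rotated by angle)
        and sd_loose along the other."""
        c, s = np.cos(angle), np.sin(angle)
        R = np.array([[c, -s], [s, c]])
        D = np.diag([1.0 / sd_tight**2, 1.0 / sd_loose**2])
        return R @ D @ R.T

    # Two toy "experiments" on parameters (p1, p2), each with a strong
    # degeneracy (long, thin ellipse) in a different direction.
    mu_a = np.array([0.30, 0.70])
    P_a = precision(sd_tight=0.01, sd_loose=1.0, angle=0.0)
    mu_b = np.array([0.32, 0.68])
    P_b = precision(sd_tight=0.01, sd_loose=1.0, angle=np.pi / 2)

    # With Gaussian likelihoods, combining experiments means adding
    # precision matrices and precision-weighting the means.
    P_c = P_a + P_b
    mu_c = np.linalg.solve(P_c, P_a @ mu_a + P_b @ mu_b)
    sd_c = np.sqrt(np.diag(np.linalg.inv(P_c)))
    print(mu_c, sd_c)   # tight in both directions, unlike either alone
    ```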

  4. Sounds like what I’ve been trying to do. Is this what Charlie is looking for?

    David S. LeBauer, Dan Wang, Katherine T. Richter, Carl C. Davidson, and Michael C. Dietze. 2013. Facilitating feedbacks between field measurements and ecosystem models. Ecological Monographs 83:133–154. http://dx.doi.org/10.1890/12-0137.1
