Following up on yesterday’s post, here’s David Chudzicki’s story (with graphs and Stan/R code!) of how he fit a model for an increasing function (“isotonic regression”). Chudzicki writes:

This post will describe a way I came up with of fitting a function that’s constrained to be increasing, using Stan. If you want practical help, standard statistical approaches, or expert research, this isn’t the place for you (look up “isotonic regression” or “Bayesian isotonic regression” or David Dunson). This is the place for you if you want to read about how I thought about setting up a model, implemented the model in Stan, and created graphics to understand what was going on.
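Chudzicki's actual Stan code is in his post; as a minimal sketch of the underlying idea (in Python here rather than Stan, and with hypothetical names and values), one standard way to constrain a function to be increasing is to build it from exponentiated increments of unconstrained parameters — this is essentially the transform behind Stan's `ordered` type:

```python
import numpy as np

# A common monotonicity trick (an illustration, not Chudzicki's construction):
# unconstrained parameters are exponentiated and cumulatively summed,
# so the resulting function values are increasing for ANY raw input.
def increasing_function(raw, start=0.0):
    return start + np.cumsum(np.exp(raw))

raw = np.array([-1.0, 0.5, -2.0, 1.0])   # arbitrary unconstrained values
f = increasing_function(raw)
assert np.all(np.diff(f) > 0)            # monotonicity holds by construction
```

A sampler can then explore the `raw` parameters freely, with the increasing constraint satisfied automatically rather than enforced by rejection.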

The background is that a simple, natural-seeming uniform prior on the function values does not work so well—it’s a much stronger prior distribution than one might naively think, just one of those unexpected aspects of high-dimensional probability distributions. So Chudzicki sets up a more general family with a hyperparameter.
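To see why a flat prior on ordered values is stronger than it looks: sorting n independent Uniform(0,1) draws gives each sorted value a Beta order-statistic distribution, tightly concentrated around its rank. A quick simulation (the grid size n = 50 is an arbitrary choice for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50   # number of function values on the grid
draws = np.sort(rng.uniform(0, 1, size=(10_000, n)), axis=1)

# The k-th sorted value follows Beta(k, n + 1 - k): mean k/(n+1),
# with standard deviation shrinking like 1/sqrt(n) -- far from flat.
k = 25
print(draws[:, k - 1].mean())   # close to 25/51, about 0.49
print(draws[:, k - 1].std())    # roughly 0.07, tightly concentrated
```

So the “uniform” prior effectively pins each function value near its rank-implied position, which is exactly the kind of unintended informativeness the post is about.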

One thing I like about this example is that it’s *not* the latest research; it has a charming DIY flavor that might make you feel that you too can patch together a model in Stan to do what you need.

I’d love to see some comment on this paper, which so far I have not seen, anywhere, and which, frankly, is beyond me:

http://www.pnas.org/content/early/2013/10/28/1313476110.full.pdf?with-ds=yes

Thanks

There’s some discussion in a comment thread about an older p-value paper:

https://andrewgelman.com/2013/11/15/are-all-significant-p-values-created-equal/#comment-151450

My feeling (as expressed in those comments) is that this paper goes in the wrong direction: it proposes a new Bayesian metric for hypothesis testing, rather than advocating thoughtful statistical practice more generally (avoiding dichotomization, being careful about data snooping, etc.).

Thanks a lot Ben. Johnson does acknowledge that other, possibly larger, issues in reproducibility exist and that he is dealing only with those involving statistical philosophy/computation.