Gronau and Wagenmakers write: The bridgesampling package facilitates the computation of the marginal likelihood for a wide range of statistical models. For models implemented in Stan (such that the constants are retained), executing the code bridge_sampler(stanfit) automatically produces an estimate of the marginal likelihood. Full story is at the link.
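To give a feel for what bridge_sampler is estimating under the hood, here is a minimal sketch of the iterative bridge sampling estimator (Meng and Wong's scheme) in Python, on a toy conjugate-normal model where the marginal likelihood is known in closed form. This is an illustration of the general technique, not the package's actual implementation; the model, sample sizes, and helper norm_logpdf are all assumptions for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)

def norm_logpdf(x, m, s):
    return -0.5 * np.log(2 * np.pi * s**2) - (x - m)**2 / (2 * s**2)

# Toy conjugate model: theta ~ N(0,1), y | theta ~ N(theta,1), observed y = 1.0,
# so the marginal likelihood is available in closed form as N(y; 0, sqrt(2)).
y = 1.0
def log_q(theta):  # unnormalized posterior: prior * likelihood, constants kept
    return norm_logpdf(theta, 0.0, 1.0) + norm_logpdf(y, theta, 1.0)

# Exact posterior is N(y/2, sqrt(1/2)); in practice these would be MCMC draws.
post = rng.normal(y / 2, np.sqrt(0.5), size=20000)

# Proposal g: a normal matched to the posterior draws.
mu, sd = post.mean(), post.std()
prop = rng.normal(mu, sd, size=20000)

# Ratios q/g at posterior draws (l1) and at proposal draws (l2).
l1 = np.exp(log_q(post) - norm_logpdf(post, mu, sd))
l2 = np.exp(log_q(prop) - norm_logpdf(prop, mu, sd))
n1, n2 = len(post), len(prop)
s1, s2 = n1 / (n1 + n2), n2 / (n1 + n2)

# Iterative bridge estimate of the marginal likelihood.
r = 1.0
for _ in range(100):
    num = np.mean(l2 / (s1 * l2 + s2 * r))
    den = np.mean(1.0 / (s1 * l1 + s2 * r))
    r = num / den

exact = np.exp(norm_logpdf(y, 0.0, np.sqrt(2)))
print(r, exact)  # bridge estimate vs. analytic marginal likelihood
```

The "constants are retained" caveat in the quote matters here: log_q must include all normalizing constants of the prior and likelihood, or the estimate is off by exactly those dropped factors.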

**Bayesian Statistics** category.

## The Statistical Crisis in Science—and How to Move Forward (my talk next Monday 6pm at Columbia)

I’m speaking Mon 13 Nov, 6pm, at Low Library Rotunda at Columbia: The Statistical Crisis in Science—and How to Move Forward. Using examples ranging from elections to birthdays to policy analysis, Professor Andrew Gelman will discuss ways in which statistical methods have failed, leading to a replication crisis in much of science, as well as […]

## Why won’t you cheat with me?

But I got some ground rules I’ve found to be sound rules and you’re not the one I’m exempting. Nonetheless, I confess it’s tempting. – Jenny Toomey sings Franklin Bruno It turns out that I did something a little controversial in last week’s post. As these things always go, it wasn’t the thing I was […]

## Using Mister P to get population estimates from respondent driven sampling

From one of our exams: A researcher at Columbia University’s School of Social Work wanted to estimate the prevalence of drug abuse problems among American Indians (Native Americans) living in New York City. From the Census, it was estimated that about 30,000 Indians live in the city, and the researcher had a budget to interview […]
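The Mister P (multilevel regression and poststratification) idea in this exam problem can be sketched in a few lines: estimate the outcome rate within demographic cells of the sample, then reweight those cell estimates by the known census counts rather than by who happened to be reached. All numbers below are made up for illustration; a full MRP analysis would also partially pool the cell rates with a multilevel model rather than using raw proportions.

```python
import numpy as np

# Hypothetical poststratification cells (e.g., age group x sex), with made-up
# census counts summing to roughly the 30,000 figure from the exam problem.
census   = np.array([6000, 9000, 8000, 7000])   # population size per cell
sampled  = np.array([40, 25, 20, 15])           # respondents per cell (nonrepresentative)
positive = np.array([10, 5, 3, 3])              # respondents reporting the outcome

cell_rate = positive / sampled                  # per-cell estimate (MRP would partially pool these)
raw = positive.sum() / sampled.sum()            # unweighted sample proportion
mrp = (census * cell_rate).sum() / census.sum() # poststratified population estimate

print(raw, mrp)
```

The gap between the raw and poststratified estimates is the point: respondent-driven sampling over-recruits some cells, and weighting by census counts corrects for that to the extent the cells capture the relevant differences.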

## My 2 talks in Seattle this Wed and Thurs: “The Statistical Crisis in Science” and “Bayesian Workflow”

For the Data Science Seminar, Wed 25 Oct, 3:30pm in Physics and Astronomy Auditorium – A102: The Statistical Crisis in Science. Top journals routinely publish ridiculous, scientifically implausible claims, justified based on “p < 0.05.” And this in turn calls into question all sorts of more plausible, but not necessarily true, claims that are supported […]

## The Publicity Factory: How even serious research gets exaggerated by the process of scientific publication and reporting

The starting point is that we’ve seen a lot of talk about frivolous science, headline-bait such as the study that said that married women are more likely to vote for Mitt Romney when ovulating, or the study that said that girl-named hurricanes are more deadly than boy-named hurricanes, and at this point some of these […]

## The network of models and Bayesian workflow

This is important, it’s been something I’ve been thinking about for decades, it just came up in an email I wrote, and it’s refreshingly unrelated to recent topics of blog discussion, so I decided to just post it right now out of sequence (next slot on the queue is in May 2018). Right now, standard […]

## Why I think the top batting average will be higher than .311: Over-pooling of point predictions in Bayesian inference

In a post from 22 May 2017 entitled, “Who is Going to Win the Batting Crown?”, Jim Albert writes: At this point in the season, folks are interested in extreme stats and want to predict final season measures. On the morning of Saturday May 20, here are the leading batting averages: Justin Turner .379 Ryan […]
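The over-pooling point in the title can be demonstrated with a simulation: if you shrink each player's early-season average toward the league mean (partial pooling) and then report the maximum of those point estimates as your prediction for the batting crown, you systematically understate the eventual leader's average; simulating whole seasons from the posterior gives a higher, better-calibrated prediction. The league size, at-bat counts, and Beta(81, 219) prior below are all hypothetical numbers chosen for illustration, not figures from Albert's post.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical league: 50 regulars with true averages drawn from Beta(81, 219),
# 120 early-season at-bats observed, 380 remaining.
a, b, J = 81, 219, 50
ab_early, ab_rest = 120, 380
p = rng.beta(a, b, J)                      # true (unknown) abilities
hits = rng.binomial(ab_early, p)           # observed early-season hits

# Partial pooling: posterior mean of each player's ability under the beta prior.
post_mean = (hits + a) / (ab_early + a + b)

# Naive point prediction of the batting crown: max of the shrunk estimates.
point_pred = post_mean.max()

# Better: simulate full seasons from each player's posterior, take the leader each time.
sims = 4000
draws = rng.beta(hits[None] + a, ab_early - hits[None] + b, (sims, J))
rest = rng.binomial(ab_rest, draws)
final = (hits[None] + rest) / (ab_early + ab_rest)
sim_leader = final.max(axis=1).mean()

print(point_pred, sim_leader)  # the simulated leader's average exceeds the max point estimate
```

The max of the shrunk point estimates over-pools: it ignores both the spread of each player's posterior and the fact that the leader is a maximum over many noisy trajectories, which is why the simulated leader's average comes out higher.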

## No tradeoff between regularization and discovery

We had a couple recent discussions regarding questionable claims based on p-values extracted from forking paths, and in both cases (a study “trying large numbers of combinations of otherwise-unused drugs against a large number of untreatable illnesses,” and a salami-slicing exercise looking for public opinion changes in subgroups of the population), I recommended fitting a […]

## “Bayesian evidence synthesis”

Donny Williams writes: My colleagues and I have a paper recently accepted in the journal Psychological Science in which we “bang” on Bayes factors. We explicitly show how the Bayes factor varies according to tau (I thought you might find this interesting for yourself and your blog’s readers). There is also a very nice figure. […]

## Partial pooling with informative priors on the hierarchical variance parameters: The next frontier in multilevel modeling

Ed Vul writes: In the course of tinkering with someone else’s hairy dataset with a great many candidate explanatory variables (some of which are largely orthogonal factors, but the ones of most interest are competing “binning” schemes of the same latent elements), I wondered about the following “model selection” strategy, which you may have alluded […]

## Tenure-Track or Tenured Prof. in Machine Learning in Aalto, Finland

This job advertisement for a position at Aalto, Finland, is by Aki. We are looking for a professor to either further strengthen our strong research fields, with keywords including statistical machine learning, probabilistic modelling, Bayesian inference, kernel methods, computational statistics, or complementing them with deep learning. Collaboration with other fields is welcome, with local opportunities […]

## The house is stronger than the foundations

Oliver Maclaren writes: Regarding the whole ‘double use of data’ issue with posterior predictive checks [see here and, for a longer discussion, here], I just wanted to note that David Cox describes the ‘Fisherian reduction’ as (I’ve summarised slightly; see p. 24 of ‘Principles of Statistical Inference’) – Find the likelihood function – Reduce to […]

## I disagree with Tyler Cowen regarding a so-called lack of Bayesianism in religious belief

Tyler Cowen writes: I am frustrated by the lack of Bayesianism in most of the religious belief I observe. I’ve never met a believer who asserted: “I’m really not sure here. But I think Lutheranism is true with p = .018, and the next strongest contender comes in only at .014, so call me Lutheran.” […]

## What am I missing and what will this paper likely lead researchers to think and do?

This post is by Keith. In a previous post Ken Rice brought our attention to a recent paper he had published with Julian Higgins and Thomas Lumley (RHL). After I obtained access and read the paper, I made some critical comments regarding RHL which ended with “Or maybe I missed something.” This post will try to discern […]

## Should we worry about rigged priors? A long discussion.

Today’s discussion starts with Stuart Buck, who came across a post by John Cook linking to my post, “Bayesian statistics: What’s it all about?”. Cook wrote about the benefit of prior distributions in making assumptions explicit. Buck shared Cook’s post with Jon Baron, who wrote: My concern is that if researchers are systematically too optimistic […]