Archive of posts filed under the Bayesian Statistics category.

Expectation propagation as a way of life

Aki Vehtari, Pasi Jylänki, Christian Robert, Nicolas Chopin, John Cunningham, and I write: We revisit expectation propagation (EP) as a prototype for scalable algorithms that partition big datasets into many parts and analyze each part in parallel to perform inference of shared parameters. The algorithm should be particularly efficient for hierarchical models, for which the […]
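The idea of partitioning a big dataset and inferring shared parameters in parallel can be illustrated with a minimal sketch (not the paper's actual algorithm, which iterates EP site updates): for a normal model with known noise scale, each partition's likelihood contribution is exactly Gaussian, so combining the parts' natural parameters recovers full-data inference. All names and numbers below are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data: one shared parameter theta, known noise sd
theta_true, sigma = 2.0, 1.0
y = rng.normal(theta_true, sigma, size=1000)

# Partition the data into K parts, each analyzed separately
K = 10
parts = np.array_split(y, K)

# Prior N(0, 1) in natural parameters: (precision, precision * mean)
prior_prec, prior_pm = 1.0, 0.0

# Each part contributes a Gaussian "site" approximation to its partial
# likelihood; for this normal model the approximation is exact, so the
# combined posterior matches what full-data inference would give.
site_prec = np.array([len(p) / sigma**2 for p in parts])
site_pm = np.array([p.sum() / sigma**2 for p in parts])

post_prec = prior_prec + site_prec.sum()
post_mean = (prior_pm + site_pm.sum()) / post_prec
```

In the hierarchical setting the paper targets, the sites are non-Gaussian and EP refines them iteratively; the additive combination in natural parameters is the part this toy example shares with the real algorithm.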

Bayesian Cognitive Modeling Models Ported to Stan

Hats off to Martin Šmíra, who has finished porting the models from Michael Lee and Eric-Jan Wagenmakers’ book Bayesian Cognitive Modeling to Stan. Here they are: Bayesian Cognitive Modeling: Stan Example Models Martin managed to port 54 of the 57 models in the book and verified that the Stan code got the same answers as […]

A question about varying-intercept, varying-slope multilevel models for cross-national analysis

Sean de Hoon writes: In many cross-national comparative studies, mixed effects models are being used in which a number of slopes are fixed and the slopes of one or two variables of interest are allowed to vary across countries. The aim is often then to explain the varying slopes by referring to some country-level characteristic. […]

I (almost and inadvertently) followed Dan Kahan’s principles in my class today, and that was a good thing (would’ve even been more of a good thing had I realized what I was doing and done it better, but I think I will do better in the future, which has already happened by the time you read this; remember, the blog is on a nearly 2-month lag)

As you might recall, the Elizabeth K. Dollard Professor says that to explain a concept to an unbeliever, explain it conditionally. For example, if you want to talk evolution with a religious fundamentalist, don’t try to convince him or her that evolution is true; instead preface each explanation with, “According to the theory of evolution […]

“If you’re not using a proper, informative prior, you’re leaving money on the table.”

Well put, Rob Weiss. This is not to say that one must always use an informative prior; oftentimes it can make sense to throw away some information for reasons of convenience. But it’s good to remember that, if you do use a noninformative prior, you’re doing less than you could.
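The "money on the table" point can be made concrete with a conjugate toy example (my numbers, not Weiss's): with a flat prior on a normal mean, the posterior standard deviation is σ/√n; any proper normal prior adds precision and tightens it.

```python
import numpy as np

rng = np.random.default_rng(1)
sigma = 1.0
y = rng.normal(0.5, sigma, size=20)
n = len(y)

# Flat ("noninformative") prior: posterior is N(ybar, sigma^2 / n)
flat_mean, flat_sd = y.mean(), sigma / np.sqrt(n)

# Informative N(0, 1) prior: standard conjugate update
prior_mean, prior_sd = 0.0, 1.0
post_prec = 1 / prior_sd**2 + n / sigma**2
post_mean = (prior_mean / prior_sd**2 + n * y.mean() / sigma**2) / post_prec
post_sd = np.sqrt(1 / post_prec)

# post_sd is strictly smaller than flat_sd: the prior information
# you declined to use is exactly the precision left on the table.
```

The same logic holds for weakly informative priors: even mild prior precision buys some shrinkage and a tighter posterior.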

Soil Scientists Seeking Super Model

I (Bob) spent last weekend at Biosphere 2, collaborating with soil carbon biogeochemists on a “super model.” Model combination and expansion The biogeochemists (three sciences in one!) have developed hundreds of competing models, and the goal of the workshop was to kick off some projects on putting some of them together into wholes that are […]

Question about data mining bias in finance

Finance professor Ravi Sastry writes: Let’s say we have N vectors of data, {y_1,y_2,…,y_N}. Each is used as the dependent variable in a series of otherwise identical OLS regressions, yielding t-statistics on some parameter of interest, theta: {t_1,t_2,…,t_N}. The maximum t-stat is denoted t_n*, and the corresponding data are y_n*. These are reported publicly, as […]
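The selection effect Sastry describes is easy to see by simulation (a hedged sketch with invented numbers, not his actual setup): under the null, each regression's t-statistic is roughly standard normal, but the reported t_n* is the maximum over N tries, so the naive |t| > 1.96 cutoff rejects far more than 5% of the time.

```python
import numpy as np

rng = np.random.default_rng(2)
N, reps = 50, 2000  # N candidate dependent variables, reps simulations

# Under the null (theta = 0 for every y_n), simulate each regression's
# t-statistic as approximately standard normal and take the max |t|
# over the N candidates, mimicking reporting only t_n*.
max_t = np.abs(rng.standard_normal((reps, N))).max(axis=1)

# Fraction of simulations where the reported max t-stat clears the
# naive two-sided 5% cutoff, despite every true effect being zero
naive_reject = (max_t > 1.96).mean()
```

With N = 50 null strategies the expected rate is 1 − 0.95^50 ≈ 0.92, so the reported t-statistic is nearly guaranteed to look "significant"; this is the data-mining bias the question is about.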

“The Statistical Crisis in Science”: My talk in the psychology department Monday 17 Nov at noon

Monday 17 Nov at 12:10pm in Schermerhorn room 200B, Columbia University: Top journals in psychology routinely publish ridiculous, scientifically implausible claims, justified based on “p < 0.05.” And this in turn calls into question all sorts of more plausible, but not necessarily true, claims that are supported by this same sort of evidence. To put […]

The history of MRP highlights some differences between political science and epidemiology

Responding to a comment from Thomas Lumley (who asked why MRP estimates often seem to appear without any standard errors), I wrote: In political science, MRP always seems accompanied by uncertainty estimates. However, when lots of things are being displayed at once, it’s not always easy to show uncertainty, and in many cases I simply […]

“The Firth bias correction, penalization, and weakly informative priors: A case for log-F priors in logistic and related regressions”

Sander Greenland sent me this paper that he wrote with Mohammad Ali Mansournia, which discusses possible penalty functions for penalized maximum likelihood or, equivalently, possible prior distributions for Bayesian posterior mode estimation, in the context of logistic regression. Greenland and Mansournia write: We consider some questions that arise when considering alternative penalties . . . […]