Monday 2 Nov, 5-6:30pm at the Methodology Institute, LSE. No link to the seminar on the webpage, so I’ll give you the information here:

Why we (usually) don’t worry about multiple comparisons

Applied researchers often find themselves making statistical inferences in settings that would seem to require multiple comparisons adjustments. We challenge the Type I error paradigm that underlies these corrections. Moreover, we posit that the problem of multiple comparisons can disappear entirely when viewed from a hierarchical Bayesian perspective. We propose building multilevel models in the settings where multiple comparisons arise.

Multilevel models perform partial pooling (shifting estimates toward each other), whereas classical procedures typically keep the centers of intervals stationary, adjusting for multiple comparisons by making the intervals wider (or, equivalently, adjusting the $p$-values corresponding to intervals of fixed width). Thus, multilevel models address the multiple comparisons problem and also yield more efficient estimates, especially in settings with low group-level variation, which is where multiple comparisons are a particular concern.
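To make the contrast concrete, here is a minimal sketch of partial pooling under a normal hierarchical model (my own illustration, not the exact model from the paper): each group j has an observed mean y_j with standard error sigma_j, and the group effects are assumed drawn from N(mu, tau^2). The posterior mean for each group is a precision-weighted compromise between the group's own data and the shared mean mu, with noisier groups shrunk more. The function name and the numbers below are made up for illustration; in practice mu and tau would themselves be estimated from the data.

```python
import numpy as np

def partial_pool(y, sigma, mu, tau):
    """Posterior means of group effects in a normal hierarchical model.

    Each estimate is a precision-weighted average of the observed group
    mean y_j and the population mean mu: groups measured precisely
    (small sigma_j) stay close to their data; noisy groups are pulled
    toward mu.
    """
    y = np.asarray(y, dtype=float)
    sigma = np.asarray(sigma, dtype=float)
    # Weight on the data: data precision / (data precision + prior precision)
    w = (1.0 / sigma**2) / (1.0 / sigma**2 + 1.0 / tau**2)
    return w * y + (1.0 - w) * mu

# Hypothetical observed group means and standard errors
y = [2.8, 0.8, -0.3, 1.0]
sigma = [1.0, 1.0, 1.0, 1.0]
est = partial_pool(y, sigma, mu=1.0, tau=1.0)
print(est)
```

When sigma_j equals tau, each estimate lands halfway between y_j and mu, so the estimates move toward each other rather than the intervals merely widening; as tau shrinks (low group-level variation, exactly the setting where multiple comparisons worry people most), the pooling becomes stronger.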

This work is joint with Jennifer Hill and Masanao Yajima.

(Here’s a video version of a related talk that I gave at a meeting on statistics and neuroscience.)

P.S. My talk briefly touches upon some work done by a researcher at the London School of Economics!

P.P.S. I’m speaking at LSE on Tuesday also (on a different topic).

P.P.P.S. I’ll be speaking in London a couple more times later in the academic year; each of those talks will be on a different topic.

http://www2.lse.ac.uk/publicEvents/eventsSearch/e…