Scientific trends, fads, and subfields

Peter Woit has an interesting article, a review of Lee Smolin’s book “The Trouble With Physics,” where he discusses Smolin’s struggle to do interesting work amid the challenges of dealing with people in different subfields of physics (in particular, string theory). Smolin characterizes academic physics as “competitive, fashion-driven” and writes, “during the years I worked on string theory, I cared very much what the leaders of the community thought of my work. Just like an adolescent, I wanted to be accepted by those who were the most influential in my little circle.”

I can’t comment on the details of this since my physics education ended 20 years ago, but there are perhaps some similarities to statistics. First, though, the key differences:

1. Statistics is a lot easier than physics: easier to do, and easier to do research in. You really don’t need much training or experience at all to work at the frontiers of statistics.
2. There’s a bigger demand for statistics teachers than physics teachers. As a result, ambitious statistics Ph.D.’s who want faculty positions don’t (or shouldn’t) have to worry about being in a hot subfield. I mean, I wouldn’t recommend working on something really boring, but just about every area in statistics is close to some interesting open problems.

Now to the issues of trends, fads, and subfields. I remember going to the Bayesian conference in Spain in 1991 and being very upset, first, that nobody was interested in checking the fits of their statistical models and, second, that there was a general belief that there was something wrong or improper about checking model fit. The entire Bayesian community (with very few exceptions, most of whom seemed to be people I knew from grad school) seemed to have swallowed whole the idea of prior distributions as “personal probability” and had the attitude that you could elicit priors but you weren’t allowed to check them by comparing to data.

The field has made some progress since then, not so much through frontal attack (believe me, I’ve tried) as through a general diffusion of efforts into many different applications. Every once in a while, people applying Bayesian methods in a particular application area forget the (inappropriate) theory and check their model, sometimes by accident and sometimes on purpose. And then there are some people who we’ve successfully brainwashed via chapter 6 of our book. It’s still a struggle, though. And don’t get me started on things like Type 1 and Type 2 errors, which people are always yapping about but which, in my experience, don’t actually exist.
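Here’s roughly what I have in mind by “checking your model,” sketched in Python for a made-up example (a toy normal model with known variance and a flat prior; the data and the test statistic are invented purely for illustration): simulate replicated datasets from the posterior predictive distribution and see whether a statistic computed on the replications looks anything like the same statistic computed on the data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup, invented for illustration: y_i ~ Normal(theta, 1) with a flat
# prior on theta, so the posterior theta | y is Normal(ybar, 1/n).
y = rng.normal(loc=1.0, scale=1.0, size=50)
n, ybar = len(y), y.mean()

def T(data):
    # Test statistic; the sample maximum picks up misfit in the upper tail.
    return data.max()

n_sims = 4000
exceed = 0
for _ in range(n_sims):
    theta = rng.normal(ybar, 1 / np.sqrt(n))  # draw theta from its posterior
    y_rep = rng.normal(theta, 1.0, size=n)    # draw replicated data given theta
    exceed += T(y_rep) >= T(y)

# A posterior predictive p-value near 0 or 1 means the model cannot
# reproduce this feature of the data.
print("posterior predictive p-value:", exceed / n_sims)
```

The particular statistic doesn’t matter much; the point is that you compare the observed data to replications under the model, rather than treating the prior as beyond question.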

But here’s the point: for all my difficulties in working with the Bayesian statisticians, things have been even harder with the non-Bayesians, in that they will often simply ignore any idea that is presented in a Bayesian framework. (Just to be clear: this doesn’t always happen, and, again, there’s a lot more openness than there used to be. As people become more aware of the arbitrariness of many “classical” statistical solutions, especially for problems with complex data structures, there is more acceptance of Bayesian procedures as a way of getting reasonable answers, as Rubin anticipated in his 1984 Annals of Statistics paper.)

Anyway, I’d rather be disagreed with than ignored, and so I realize it makes sense to do much of my communication within the Bayesian community–that’s really the best option available. It’s also a matter of speaking the right language; for example, when I go to econometrics talks, I can follow what’s going on, but I usually have to maintain a simultaneous translation in my head, converting all the asymptotic statements to normal distributions and so forth. To communicate with those folks, I’m probably better off speaking in my own language as clearly as I can, validating my methods via interesting applications, and then hoping that some of them will reach over and take a look.

P.S. For more on why Bayes, see here and here.

2 thoughts on “Scientific trends, fads, and subfields”

  1. A question regarding Bayesian model validation. First, let me say that I'm one of your converts, and have been using summary statistics, such as a Bayesian analog of chi-square, from the posterior predictive distribution to measure a model's predictive validity with respect to the observed data. My question: is this still something that many Bayesians are opposed to doing, on the premise that you should be so attached to your personal prior(s) that whatever the model estimates is what you believe? Given the prevalence of hierarchical models these days, and the difficulty of developing informative priors for hyperparameters, I hope this is no longer the case.

  2. Dana,

    Nowadays, I wouldn't say that many Bayesians oppose model checking, but they often aren't working in a theoretical structure that allows model checking. They are not comfortable in a world in which y, y.rep, and theta have a joint distribution. For most Bayesians, model checking is like trying to scream using an alphabet with no vowels. They have nothing against checking the fit of a model, but it rarely seems to happen.
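    To make that joint distribution concrete, here is a small sketch of the chi-square-type discrepancy you describe (again with a made-up normal model, as in the sketch in the post above; nothing here is from any real analysis). The discrepancy depends on theta as well as on the data, which is exactly what requires y, y.rep, and theta to live in one joint distribution:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Made-up model, as in the post above: y_i ~ Normal(theta, 1), flat prior,
    # so theta | y is Normal(ybar, 1/n).  Nothing here is from a real analysis.
    y = rng.normal(1.0, 1.0, size=50)
    n, ybar = len(y), y.mean()

    def discrepancy(data, theta):
        # Chi-square-type discrepancy: sum of squared standardized residuals.
        # Unlike a plain test statistic, it depends on theta as well as data.
        return np.sum((data - theta) ** 2)

    n_sims = 4000
    exceed = 0
    for _ in range(n_sims):
        theta = rng.normal(ybar, 1 / np.sqrt(n))  # theta drawn given y
        y_rep = rng.normal(theta, 1.0, size=n)    # y.rep drawn given theta
        # at this point y, y.rep, and theta have exactly the joint
        # distribution referred to above
        exceed += discrepancy(y_rep, theta) >= discrepancy(y, theta)

    print("posterior predictive p-value:", exceed / n_sims)
    ```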

Comments are closed.