In a recent article in the New York Review of Books (see also here), Freeman Dyson writes,

Great scientists come in two varieties, which Isaiah Berlin, quoting the seventh-century-BC poet Archilochus, called foxes and hedgehogs. Foxes know many tricks, hedgehogs only one. Foxes are interested in everything, and move easily from one problem to another. Hedgehogs are interested only in a few problems which they consider fundamental, and stick with the same problems for years or decades. Most of the great discoveries are made by hedgehogs, most of the little discoveries by foxes. Science needs both hedgehogs and foxes for its healthy growth, hedgehogs to dig deep into the nature of things, foxes to explore the complicated details of our marvelous universe. Albert Einstein was a hedgehog; Richard Feynman was a fox.

This got me thinking about statisticians. I think we're almost all foxes! The leading statisticians over the years all seem to have worked on lots of problems. Even when they have, hedgehog-like, developed systematic ideas over the years, these have been developed in a series of applications. It seems to be part of the modern ethos of statistics that the expected path to discovery is through the dirt of applications.

I wonder if the profusion of foxes is related to statistics's position, compared to, say, physics, as a less "mature" science. In physics and mathematics, important problems can be easy to formulate but (a) extremely difficult to solve and (b) hard even to follow the current research on. It takes a hedgehog-like focus just to get close enough to the research frontier that you can consider trying to solve open problems. In contrast, in statistics, very little background is needed, not just to formulate open problems but also to acquire many of the tools needed to study them. I'm thinking here of problems such as how to include large numbers of interactions in a model. Much of the progress made by statisticians and computer scientists on this problem has been made in the context of particular applications.

Going through some great names of the past:

Laplace: possibly hedgehog-like in developing probability theory, but I think of him as foxlike in working on various social-statistics applications, such as surveys, that gave him the motivation needed to develop practical Bayesian methods.

Gauss: least-squares is a great achievement, but developed as a particular mathematical tool to solve some measurement error problems. In the context of his career, his statistical work is foxlike.

Galton: could be called a “hedgehog” for his obsession with regression, but I think of him as a fox with all his little examples.

Fisher: fox. Developed methods as needed. Developed theory as appropriate (or often inappropriate).

Pearson: the family of distributions smells like a hedgehog, but what's left of it, including chi-squared tests, looks like fox tracks.

Neyman: perhaps wanted to be a hedgehog but ultimately a fox, in that he made contributions to different problems of estimation and testing. I’d say the same of Wald and the other mid-century theorists: they might have wanted to be hedgehogs but there was no “theory of relativity” out there for them to discover, so they seem like foxes to me.

What about the leading statisticians of the twentieth century?

Cox: fox

Cochran: fox

Tukey: super-fox

Efron: fox

Rubin: fox. (You could call him a hedgehog for his idea that all statistics is missing data, but this is developed in a foxlike proliferation of examples.)

Chernoff: fox. Various big ideas but no single quest.

Hastie/Tibshirani/Friedman: fox

Some hedgehogs: Lindley. Donoho/Johnstone. Berger. Nelder.

OK, I'm sure I've missed a lot of important names here in this game of foxdar. But you get the point.

P.S. Tian noticed this article also.

I'm not clear on this, but you read like you're suggesting that applying similar methods to a variety of problems is fox-like. In contrast, I would suggest it's hedgehog-like: I've done stuff with hierarchical models on ecological communities, genetics, and sports results. But it's all hierarchical modelling, so it's all the same sort of stuff (I blame Andrew Thomas).

On the other hand, it strikes me that as applied statisticians we all _should_ be fox-like: we're going to be approached by people wanting answers to their problems, and applying the same techniques to everything will be a mistake. So, we have to be flexible and use (or in the case of the Greats, invent) a variety of techniques.

Thus endeth my sermon for today. I hope I'm not too prickly.

Bob

Bob,

I agree that applying one method to all problems is hedgehog-like. But that's not what I think research statisticians do. I've used hierarchical models in a lot of papers, but in one of my most important papers, the whole argument was pretty much carried by a series of graphs. Also, hierarchical modeling is a tool, and it's foxlike to develop new models for new problems. (Just as Feynman was a fox even though he was always using mathematics.)

In the examples of eminent statisticians given above, yes, they mostly have a desire to see the big picture, but if you look at what they've done that's had impact, I think you'll see foxness. For example, it's not like Efron thought up the bootstrap and then spent a career doing the bootstrap over and over. He uses it as a tool in developing new methods for new problems that cross his desk. Similarly, Cochran's work on observational studies was related to his work on experimental design and sample surveys, but really involved completely different methods (in this case, matching followed by regression).

In contrast, I see Jim Berger as a hedgehog because he's worked to put all of statistical inference into a decision-theoretic framework (following the incomplete (or unsuccessful, depending on how you look at it) example of Savage).

But I agree with your point that, as applied statisticians, we tend to be more effective with the methods we are expert on–and then, conversely, we tend to help out with problems for which our methods will work. So we can get into ruts (or, at least, I know I can). I wouldn't call this "hedgehog-like," though. I'd reserve "hedgehog" for a sustained quest such as Nelder's approach to understanding regression models and Anova (which I think is reflected a bit in R's lm formulas), or Donoho/Johnstone's ideas of minimax estimation, or Raftery's ideas of model averaging. This quest can come through theory (as with Donoho/Johnstone) or through applications (as with Raftery). But, to get back to my main point, I see most statistical innovations as being more foxlike and actually arising from specific examples (as, for example, Jun Liu and others have followed the statistical physicists and developed innovative computational methods in the context of particular models that they wanted to fit).

Which Rubin?

Bill,

<a href="http://scholar.google.com/scholar?hl=en&q=donald+rubin&um=1&ie=UTF-8&sa=N&tab=ws" rel="nofollow">This guy.</a>

Reading Herbert Simon has made me wary of these kinds of observations.

The example he gave was noticing that an ant took a complicated path across a beach – the ant was not a complicated ant – the complexity came from the terrain, which kept it from going in a straight line.

But more pertinently, Simon argued that people can't really tell you why they took the path they did. He argued you had to get them to verbalize out loud what was going through their minds as they went along the path – a method called protocol analysis.

So at the next conference, give each of them a couple of problems to look at, and see what happens.

If Berger always says "this is a good example for decision theory" – hedgehog.

If Efron says the first one is a good chance to try out A while the second is a good example for B – fox.

And if Rubin is not there just impute his responses (more than once!)

Keith