Geophysicist Discovers Modeling Error (in Economics)

Continuing “heckle the press” month here at the blog, I (Bob) found the following “discovery” a little overplayed by David H. Freedman, writing for Scientific American in an article and an accompanying blog post.

The article’s paywalled, but the blog entry isn’t. Apparently, a geophysicist named Jonathan Carter (good luck finding him on the web given only that information) found that when he simulated data from a complicated model, then fit the model back to the simulated data, he sometimes got parameter estimates quite different from the ones he had simulated with. What’s more, these differing estimates fit the data equally well but made different predictions on new data. Now we don’t know whether the model was identifiable, whether it had multiple local optima (i.e., multiple modes), how he fit the data, or really anything else, but it doesn’t really matter.
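We don’t have Carter’s details, but the phenomenon itself is easy to reproduce. Here’s a minimal sketch (my own toy example, not anything from the article): a sinusoid observed only at integer times is aliased, so a least-squares fit started from different places lands on different frequencies that match the observed points equally well yet disagree about unobserved times.

```python
# Toy illustration (not Carter's model): two fits to the same simulated data,
# started from different points, that fit equally well in-sample but make
# different predictions out of sample. The culprit here is aliasing: a
# sinusoid observed only at integer times can't pin down its frequency.
import numpy as np
from scipy.optimize import curve_fit

def model(t, a, b):
    return a * np.sin(b * t)

rng = np.random.default_rng(42)
t_obs = np.arange(0, 10)                                  # integer times only
y_obs = model(t_obs, 1.0, 1.0) + rng.normal(0, 0.01, t_obs.size)

# Same data, two starting points, two local optima.
fit_a, _ = curve_fit(model, t_obs, y_obs, p0=[1.0, 1.1])  # near b = 1
fit_b, _ = curve_fit(model, t_obs, y_obs, p0=[1.0, 7.2])  # near b = 1 + 2*pi

for name, p in [("fit_a", fit_a), ("fit_b", fit_b)]:
    rss = np.sum((y_obs - model(t_obs, *p)) ** 2)
    print(f"{name}: a = {p[0]:.3f}, b = {p[1]:.3f}, in-sample RSS = {rss:.5f}")

# Both fits reproduce the observed points, but at an unobserved time
# t = 0.5 they predict values of opposite sign.
print("prediction at t = 0.5:", model(0.5, *fit_a), "vs", model(0.5, *fit_b))
```

Whether Carter’s problem was non-identifiability, multiple modes, or something else, the moral is the same: a good in-sample fit doesn’t by itself pin down the parameters or the predictions.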

Reading the comments and article is a depressing exercise in the sociology of science, with clueless commenters tying this “discovery” to their own views on the banking industry, global warming, fractals, the dishonesty of scientists, the mystical un-modelability of behavior, Keynesianism, and anything else that seems to be on their mind. So I’m going to join the crowd and file it under “the press misunderstanding science”.

Cluefulness to the Rescue

I’ll maintain my optimism about humanity as a whole, though, because some of the commenters were on the right track.

Commenter number six, jayjacobugs, gets it right, pointing out that scientists should be doing sensitivity analyses (though I’d look at more than just “changing variables”).

And then commenter thirteen, LeighCaldwell, reminds the readers that economists are well aware of this problem.

The Good Old Days of Scientific American

It makes me sad that Scientific American now looks like The Onion.

At the risk of sounding like an old curmudgeon, I remember when Scientific American enlisted domain experts to write articles. I was envious that my boss at Bell Labs, Steve Levinson (along with Mark Liberman, of The Language Log), had written a Scientific American article on speech recognition. The article didn’t assume calculus, but it also didn’t assume readers were idiots waiting to be titillated by a “scandal.”

One of the things that got me interested in math and science was the mathematical games column of Martin Gardner, from which you could learn serious mathematical reasoning with nothing more than American middle-school math. Speaking of American math education, kids in the U.S. who like math should envy Christian Robert’s daughter, who is learning Monte Carlo integration in the equivalent of American tenth grade.
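For anyone whose tenth grade skipped it, the idea takes only a few lines. A minimal sketch (my toy example, not the lycée exercise): estimate the integral of exp(−x²) over [0, 1] by averaging the integrand at uniform random draws.

```python
# Monte Carlo integration in miniature: the integral of f over [0, 1] equals
# E[f(U)] for U ~ Uniform(0, 1), so a sample average of f at uniform draws
# estimates it, with a standard error that shrinks like 1/sqrt(n).
import numpy as np

def f(x):
    return np.exp(-x ** 2)       # no elementary antiderivative

rng = np.random.default_rng(123)
n = 100_000
u = rng.uniform(0.0, 1.0, n)
vals = f(u)
estimate = vals.mean()
std_error = vals.std(ddof=1) / np.sqrt(n)
print(f"integral of exp(-x^2) over [0, 1] ~ {estimate:.4f} +/- {std_error:.4f}")
# Exact value: sqrt(pi)/2 * erf(1) ~ 0.7468, well within the error bar.
```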

19 thoughts on “Geophysicist Discovers Modeling Error (in Economics)”

  1. At the risk of joining you on the porch in a rocking chair, I first learned about James-Stein estimators from the Scientific American article by Efron and Morris back in 1977, when I was still in college. It doesn’t seem to me that there’s much hope of seeing that sort of article again… I loved Martin Gardner too.

  2. Do you know what is killing the press? Blogs like this one. Readers with the background to understand issues at a deeper level will prefer to read what Gelman has to say over what any journalist has to say. In the end, the press has to lower its level to keep some readers…

    • I think that’s part of it, but it’s also that it’s hard to write punchy, entertaining text. People typically excel at either stats or letters, less commonly at both, and writing skill is what keeps the broader audience reading.

  3. I hate these kinds of articles that play to the general public’s misconception that any scientific results are claims of undisputed fact and it’s some kind of big scandal when that turns out not to be the case.

    There have been a couple of papers mining this well too. Publication bias exists!? Statistical significance testing gets abused!? Something is statistically significant yet not causal!?


  4. Pingback: Economist's View: links for 2011-10-28

  5. I like a googling challenge, so here goes: based on the info in the Scientific American piece, it seems the paper by Carter is at http://goo.gl/KA1RS – there are four authors, all from the Department of Earth Sciences and Engineering at Imperial College London. It's currently in press at the journal "Reliability Engineering and System Safety", according to the homepage of one of the authors (http://goo.gl/Yo3WS).

    • Carter does not use the word "finance" or "economics" in the paper, so it's David Freedman stretching the skin over that drum.

      I made some mocking comments in this thread, but I shouldn’t have. Carter noticed a problem that others have also noticed. So what if it’s not original? It validates that there is indeed a problem with calibration.
