Continuing “heckle the press” month here at the blog, I (Bob) found the following “discovery” a little overplayed by David H. Freedman, who was writing for Scientific American in the following article and blog post:
The article’s paywalled, but the blog entry isn’t. Apparently, a geophysicist named Jonathan Carter (good luck finding him on the web given only that information) found that when he simulated from a complicated model, then fit the model to the simulated data, he sometimes got different results. What’s more, these differing estimates fit the data equally well but made different predictions on new data. Now we don’t know if the model was identifiable, had different local optima (i.e., multiple modes), how he fit the data, or really anything, but it doesn’t really matter.
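We don't have Carter's model, but the phenomenon itself is easy to demonstrate. Here's a minimal toy sketch (my own example, not Carter's): a sine model sampled only at integer design points, where two different frequencies fit the observed data identically (aliasing) yet make different predictions at new inputs.

```python
import numpy as np

# Toy model: y = sin(w * x). Sampled only at integer x, the frequencies
# w and w + 2*pi are indistinguishable (aliasing), so both fit equally well.
x_train = np.arange(0, 10)       # observed design points
w_true = 0.7
y_train = np.sin(w_true * x_train)

w_alias = w_true + 2 * np.pi     # a second "mode" of the fit

def sse(w, x, y):
    """Sum of squared errors of the sine model with frequency w."""
    return np.sum((np.sin(w * x) - y) ** 2)

# Identical fit on the training points...
assert np.isclose(sse(w_true, x_train, y_train),
                  sse(w_alias, x_train, y_train))

# ...but very different predictions off the training grid.
x_new = 0.5
print(np.sin(w_true * x_new), np.sin(w_alias * x_new))
```

The point isn't this particular model, of course; it's that "fits the data equally well" says nothing about agreement on new data when the fitting problem is multimodal or non-identifiable.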
Reading the comments and article is a depressing exercise in the sociology of science, with clueless commenters tying this “discovery” to their own views on the banking industry, global warming, fractals, the dishonesty of scientists, the mystical un-modelability of behavior, Keynesianism, and anything else that seems to be on their mind. So I’m going to join the crowd and file it under “the press misunderstanding science”.
Cluefulness to the Rescue
I’ll maintain my optimism about humanity as a whole, though, because some of the commenters were on the right track.
Commenter number six, jayjacobugs, gets it right, pointing out that scientists should be doing sensitivity analyses (though I’d look at more than just “changing variables”).
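For concreteness, the simplest version of what the commenter is suggesting is a one-at-a-time sensitivity check: bump each input a little and see how much the output moves. A minimal sketch, using a made-up toy model (real sensitivity analysis would go well beyond this, as noted above):

```python
# One-at-a-time sensitivity check on a made-up toy model.
def model(a, b, c):
    return a * b + c ** 2

baseline = dict(a=1.0, b=2.0, c=3.0)
eps = 1e-3

for name in baseline:
    bumped = dict(baseline)
    bumped[name] += eps
    # Finite-difference estimate of the output's sensitivity to this input.
    sens = (model(**bumped) - model(**baseline)) / eps
    print(f"{name}: {sens:+.3f}")
```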
And then commenter thirteen, LeighCaldwell, reminds the readers that economists are well aware of this problem.
The Good Old Days of Scientific American
It makes me sad that Scientific American now looks like The Onion.
At the risk of sounding like an old curmudgeon, I remember when Scientific American enlisted domain experts to write articles. I was envious that my boss at Bell Labs, Steve Levinson (along with Mark Liberman, of The Language Log), had written a Scientific American article on speech recognition. The article didn’t assume calculus, but it also didn’t assume readers were idiots waiting to be titillated by a “scandal.”
One of the things that got me interested in math and science was the mathematical games column of Martin Gardner, from which you could learn serious mathematical reasoning with only American middle-school math classes. Speaking of American math education, kids in the U.S. who like math should envy Christian Robert‘s daughter, who is learning Monte Carlo integration in the equivalent of American tenth grade.
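And Monte Carlo integration really is tenth-grade-accessible. A minimal sketch (my own example): estimate the integral of x² over [0, 1] by averaging the integrand at uniformly random points — the true value is 1/3.

```python
import random

random.seed(0)

# Monte Carlo estimate of the integral of x**2 over [0, 1] (true value 1/3):
# draw uniform points and average the integrand at those points.
n = 100_000
estimate = sum(random.random() ** 2 for _ in range(n)) / n
print(estimate)  # close to 1/3
```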
Commenting on blogs so often feels like a lost cause; I’m glad that one time out of fifty, it helps to raise the tone. Thanks for noticing.
Just keep thinking about Locke & Demosthenes. People do notice. More so when comments have + and − buttons next to them.
At the risk of joining you on the porch in a rocking chair, I first learned about James-Stein estimators from the Scientific American article by Efron and Morris back in 1977, when I was still in college. It doesn’t seem to me that there’s much hope of seeing that sort of article again… I loved Martin Gardner too.
Pretty soon this blog is going to be nothing but (a) gripes about the press and (b) allusions to the imminent release of Stan.
Coincidentally, economists over at the Federal Reserve Bank of Cleveland just published a paper on a forecasting model for monetary policy. If anyone is interested in comparing their insights to the Scientific American piece, it’s at:
http://www.clevelandfed.org/research/workpaper/2011/wp1128.pdf
You should definitely check out the Daily Show clip on “science”.
http://www.thedailyshow.com/watch/wed-october-26-2011/weathering-fights—science—what-s-it-up-to-?xrs=playershare_fb
Do you know what is killing the press? Blogs like this one. Readers with the background to understand issues at a deeper level would rather read what Gelman has to say than any journalist. In the end, the press has to lower its level to keep some readers…
I think that’s part of it, but it’s also difficult to write punchy, entertaining text. People typically excel at either stats or letters, less commonly at both, and it’s writing skill that retains the broader audience.
I hate these kinds of articles that play to the general public’s misconception that any scientific results are claims of undisputed fact and it’s some kind of big scandal when that turns out not to be the case.
There have been a couple of papers mining this well too. Publication bias exists!? Statistical significance testing gets abused!? Something is statistically significant yet not causal!?
Hmm all my comments seem to be getting filtered…
… Multiple models can fit the same dataset?! And worst of all, calibration (inverse regression) doesn’t work?!!!
Good for a laugh: http://publish.uwo.ca/~mpolborn/calibration.pdf
I like a googling challenge, so here goes: based on the info in the Scientific American piece, it seems the paper by Carter is at http://goo.gl/KA1RS – there are four authors, all from the Department of Earth Sciences and Engineering at Imperial College London. It’s currently in press at the journal Reliability Engineering and System Safety, according to the homepage of one of the authors (http://goo.gl/Yo3WS).
Carter does not use the word “finance” nor “economics” in the paper, so it’s D Freedman stretching the skin over that drum.
I made some mocking comments in this thread, but I shouldn’t have. Carter noticed a problem that others have also noticed. So what if it’s not original? It validates that there is indeed a problem with calibration.
http://www3.imperial.ac.uk/people/j.n.carter
I suspect that the article is referring to this 2005 paper, or closely related work, discussing multimodality in estimating a geophysical model.
Blah blah, models are always wrong, but such models are just metaphors. If not everyone knows this, it’s because their Econ professors taught them wrong. Which is shameful.
Looks like the Scientific American writer used a 2005 paper to make a point about financial models. The paper appears to be:
Our Calibrated Model has No Predictive Value: An Example from the Petroleum Industry, J.N. Carter, P.J. Ballester, Z. Tavassoli and P.R. King
http://library.lanl.gov/cgi-bin/getdoc?event=SAMO2004&document=samo04-45.pdf