Stan Salthe pointed me to the article “Why Mathematical Models Just Don’t Add Up.” It’s something every quantitative modeler should read. Let me provide some snippets:

The predictions about artificial beaches involve obviously absurd models. The reason coastal engineers and geologists go through the futile exercise is that the federal government will not authorize the corps to build beaches without a calculation of the cost-benefit ratio, and that requires a prediction of the beaches’ durability. Although the model has no discernible basis in reality, it continues to be cited around the world because no other model even attempts to answer that important question. [...] In spite of the fact that qualitative models produce better results, our society as a whole remains overconfident about quantitative modeling. [...] We suggest applying the embarrassment test. If it would be embarrassing to state out loud a simplified version of a model’s parameters or processes, then the model cannot accurately portray the process. [...] A scientist who stated those assumptions in a public lecture would be hooted off the podium. But buried deep within a model, such absurdities are considered valid.

While the article is quite extreme in its derision of quantitative models, plugs the authors' own book, and leans on easy rhetoric by citing a handful of failures while ignoring the many successes, it is right that quantitative models are overrated in our society, especially in domains that involve complex systems. The myriad unrealistic and often silly assumptions are hidden beneath layers of obtuse mathematics.

Statistics and probability arose as attempts to deal with the failure of deterministic mathematical models, and Bayesian statistics is a further attempt to manage the uncertainty that comes from not knowing what the right model is. Moreover, a vague posterior is a clear signal that you don't have enough data to make predictions.
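To make that last point concrete, here is a minimal sketch (my own illustration, not from the article) using the standard conjugate Beta-Binomial setup: with only a handful of observations the posterior standard deviation stays wide, flagging that there is not enough data to support sharp predictions.

```python
# Sketch: posterior width as a "not enough data" signal.
# Beta-Binomial conjugate update; all numbers are illustrative.
import math

def beta_posterior_sd(successes, trials, a0=1.0, b0=1.0):
    """Posterior std dev of a Bernoulli rate under a Beta(a0, b0) prior."""
    a = a0 + successes
    b = b0 + (trials - successes)
    var = (a * b) / ((a + b) ** 2 * (a + b + 1))
    return math.sqrt(var)

small = beta_posterior_sd(3, 5)      # 5 observations: wide posterior
large = beta_posterior_sd(300, 500)  # 500 observations: narrow posterior
print(f"posterior sd with   5 obs: {small:.3f}")
print(f"posterior sd with 500 obs: {large:.3f}")
```

With 5 observations the posterior standard deviation is around 0.17; with 500 it shrinks below 0.03. A posterior that refuses to narrow is the model telling you to collect more data rather than to trust the point estimate.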

Someone once derided philosophers by saying that first they stir up the dust, and then they complain that they cannot see: they are taking too many things into consideration, and this prevents them from coming up with a working model that will predict anything. One does have to simplify to make any prediction, and philosophers are good at criticizing the simplifications. Finally, even false models are known to yield good results, as we are reminded by that old joke:

An engineer, a statistician, and a physicist went to the races one Saturday and laid their money down. Commiserating in the bar after the race, the engineer said, “I don’t understand why I lost all my money. I measured all the horses and calculated their strength and mechanical advantage and figured out how fast they could run. . . ”

The statistician interrupted him: “. . . but you didn’t take individual variations into account. I did a statistical analysis of their previous performances and bet on the horses with the highest probability of winning. . . ”

“. . . so if you’re so hot, why are you broke?” asked the engineer. But before the argument could grow, the physicist took out his pipe, and they got a glimpse of his well-fattened wallet. Obviously here was a man who knew something about horses. They both demanded to know his secret.

“Well,” he said, between puffs on the pipe, “first I assumed all the horses were identical, spherical, and in a vacuum. . . ”

I have to agree wholeheartedly, although I will differentiate math from stats. As a former mathematician (and now practicing stats/econometrician), I see plenty of mathematicians and physicists who come to Wall Street and specialize in overfitting.
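The overfitting complaint can be demonstrated in a few lines. This is a hedged, synthetic illustration (no real trading signal, data generated on the spot): a high-degree polynomial achieves a tiny in-sample error by fitting the noise, then does worse on fresh data drawn from the same process.

```python
# Sketch of overfitting: flexible model, small training set.
# The data-generating process is truly linear plus noise.
import numpy as np

rng = np.random.default_rng(1)

def make_data(n):
    x = rng.uniform(0, 1, n)
    return x, 1.0 + 2.0 * x + rng.normal(0, 0.2, n)

x_train, y_train = make_data(20)     # small sample, as in practice
x_test, y_test = make_data(1000)     # fresh data from the same process

for degree in (1, 9):
    coeffs = np.polyfit(x_train, y_train, degree)
    mse_in = float(np.mean((np.polyval(coeffs, x_train) - y_train) ** 2))
    mse_out = float(np.mean((np.polyval(coeffs, x_test) - y_test) ** 2))
    print(f"degree {degree}: in-sample MSE {mse_in:.4f}, out-of-sample MSE {mse_out:.4f}")
```

The degree-9 fit beats the linear fit in-sample by construction (its parameter space contains the line), but the gap between its in-sample and out-of-sample error is exactly the overfitting the comment complains about.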

And, far too often, they don't have a metric. I have no idea why BGM is preferred to HJM (except maybe that the formulas are nicer), or why both are preferred to Ho-Lee or a multivariate BK model, since nobody ever bothers to state a means of ranking them. Even doing maximum likelihood and statistical testing would give some hint, but it is not in the vocabulary. Bayes is light-years ahead.
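The mechanics of likelihood-based ranking are simple enough to sketch. This is not any of the rate models named above; it is a generic illustration on synthetic data, using AIC (a standard maximum-likelihood criterion) to rank candidate models of different complexity.

```python
# Sketch: ranking models by maximum likelihood via AIC.
# Gaussian errors; data are synthetic and truly linear.
import math
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 50)
y = 1.0 + 2.0 * x + rng.normal(0, 0.1, size=x.size)

def gaussian_aic(y, y_hat, k):
    """AIC for a Gaussian model with k mean parameters (+1 for sigma)."""
    n = y.size
    rss = float(np.sum((y - y_hat) ** 2))
    sigma2 = rss / n                                   # MLE of the noise variance
    loglik = -0.5 * n * (math.log(2 * math.pi * sigma2) + 1)
    return 2 * (k + 1) - 2 * loglik                    # penalty minus fit

for degree in (1, 5, 10):
    coeffs = np.polyfit(x, y, degree)
    y_hat = np.polyval(coeffs, x)
    print(f"degree {degree:2d}: AIC = {gaussian_aic(y, y_hat, degree + 1):8.1f}")
```

Lower AIC is better: extra parameters must buy enough log-likelihood to pay their penalty. Even a crude criterion like this gives the "means of ranking" that the comment says is missing from the vocabulary.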

So many investors will tell us that nothing too "smart" has ever worked.

All the "hard-core" scientists have forgotten parsimony. They tend to replace it with a misplaced arrogance.

"All models are wrong but some are useful."

Box, G.E.P., page 204 in "Robustness in the strategy of scientific model building," in Robustness in Statistics, R.L. Launer and G.N. Wilkinson, editors. Academic Press: New York, 1979.

This is an interesting point. In my area, psycholinguistics, there are more qualitative modelers than quantitative ones, and I have heard quite a few of the former criticizing the latter (a category into which I tend to fall most of the time). But the thing is, the qualitative modelers tend not to have the capability (I mean the technical grunt-work capability; they are very smart people, of course) to build quantitative models. So I find it hard to take such criticism seriously.

Furthermore, qualitative models (in psycholinguistics) can be, and often are, pretty silly too. Things magically happen and many crucial assumptions remain unstated, adding hidden degrees of freedom. It's easy to get away with these, and sometimes it even makes sense to do that, but not always. Sometimes it makes more sense to lay out the details, something that usually only quantitative or mathematical models can do.

"All models are wrong but some are useful."

But the key is realizing when your model is inadequate for addressing the question of interest.

The article itself, however, is unavailable unless you are willing to pay (which I am not).

Carl Wunsch wrote an interesting and rather critical review of the authors' book in American Scientist; see:

http://www.americanscientist.org/template/BookRev…

The trick is to use all the models, if you are able to; that's what theory is there for. No need to reinvent the wheel, as the Chinese believe: only complement, rather than, as is most often the case, "out with the old, in with the new."