Fooled by randomness

From 2006:

Nassim Taleb’s publisher sent me a copy of “Fooled by Randomness: The Hidden Role of Chance in Life and in the Markets” to review. It’s an important topic, and the book is written in a charming style—I’ll try to respond in kind, with some miscellaneous comments.

On the cover of the book is a blurb, “Named by Fortune one of the smartest books of all time.” But Taleb instructs us on pages 161-162 to ignore book reviews because of selection bias (the mediocre reviews don’t make it to the book cover).

Books vs. articles

I prefer writing books to writing journal articles because books are written for the reader (and also, in the case of textbooks, for the teacher), whereas articles are written for referees. Taleb definitely seems to be writing to the reader, not the referee. There is risk in book-writing, since in some ways referees are the ideal audience of experts, but I enjoy the freedom in book-writing of being able to say what I really think.

Variation and randomness

Taleb’s general points—about variation, randomness, and selection bias—will be familiar to statisticians and also to readers of social scientists and biologists such as Niall Ferguson, A.J.P. Taylor, Stephen Jay Gould, and Bill James, who have emphasized the roles of contingency and variation in creating the world we see.

Hyperbole?

On pages xliv-xlv, Taleb compares the “Utopian Vision, associated with Rousseau, Godwin, Condorcet, Thomas Paine, and conventional normative economists,” to the more realistic “Tragic Vision of humankind that believes in the existence of inherent limitations and flaws in the way we think and act,” associated with Karl Popper, Friedrich Hayek and Milton Friedman, Adam Smith, Herbert Simon, Amos Tversky, and others. He writes, “As an empiricist (actually a skeptical empiricist) I despise the moralizers beyond anything on this planet . . .”

Despise “beyond anything on this planet”?? Isn’t this a bit extreme? What about, for example, hit-and-run drivers? I despise them even more.

Correspondences

On page 39, Taleb quotes the maxim, “What is easy to conceive is clear to express / Words to say it would come effortlessly.” This reminds me of the duality in statistics between computation and model fit: better-fitting models tend to be easier to compute, and computational problems often signal modeling problems. (See here for my paper on this topic.)

Turing Test

On page 72, Taleb writes about the Turing test: “A computer can be said to be intelligent if it can (on average) fool a human into mistaking it for another human.” I don’t buy this. At the very least, the computer would have to fool me into thinking it’s another human. I don’t doubt that this can be done (maybe another 5-20 years, I dunno). But I wouldn’t use the “average person” as a judge. Average people can be fooled all the time. If you think I can be fooled easily, don’t use me as a judge, either. Use some experts.

Evaluations based on luck

I’m looking at my notes. Something in Taleb’s book, but I’m not sure what, reminded me of a pitfall in the analysis of algorithms that forecast elections. People have written books about this, “The Keys to the White House,” etc. Anyway, the past 50 years have seen four Presidential elections that have been, essentially (from any forecasting standpoint), ties: 1960, 1968, 1976, 2000. Any forecasting method should get no credit for forecasting the winner in any of these elections, and no blame for getting it wrong. Also in the past 50 years, there have been four Presidential elections that were landslides: 1956, 1964, 1972, 1984. (Perhaps you could also throw 1996 in there; obviously the distinction is not precise.) Any forecasting method had better get these right; otherwise it’s not to be taken seriously at all. What is left are 1980, 1988, 1992, 1996, 2004: only 5 actual test cases in 50 years! You have a 1/32 chance of getting them all right by guessing. This is not to say that forecasts are meaningless, just that a simple count of correct calls is too crude a summary to be useful.
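The 1/32 figure is just five independent coin flips; a minimal sketch of the arithmetic (treating each genuinely informative race as a 50/50 guess is an illustrative assumption, not a claim about any particular forecaster):

```python
# Five informative elections, each treated as a 50/50 call
# if the forecaster is just guessing (illustrative assumption).
n_informative = 5
p_all_correct = 0.5 ** n_informative
print(p_all_correct)  # 0.03125, i.e., 1/32
```

So a method that "called every winner" over this period may have demonstrated nothing more than one lucky run out of 32.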

Lotteries

I once talked with someone who wanted to write a book called Winners, interviewing a bunch of lottery winners. Actually Bruce Sacerdote and others have done statistical studies of lottery winners, using the lottery win as a randomly assigned treatment. But my response was to write a book called Losers, interviewing a bunch of randomly selected lottery players, almost all of whom, of course, would be net losers.

Finance and hedging

When I was in college I interviewed for a summer job for an insurance company. The interviewer told me that his boss “basically invented hedging.” He also was getting really excited about a scheme for moving profits around between different companies so that none of the money got taxed. It gave me a sour feeling, but in retrospect maybe he was just testing me out to see what my reaction would be.

Forecasts, uncertainty, and motivations

Taleb describes the overconfidence of many “experts.” Some people have a motivation to display certainty. For example, auto mechanics always seemed to me to be 100% sure of their diagnosis (“It’s the electrical system”), and then when they were wrong, it never seemed to bother them a bit. Setting aside possible fraudulence, I think they have a motivation to be certain, because we’re unlikely to follow their advice if they qualify it. In the other direction, academics like me perhaps have a motivation to overstate uncertainty, to avoid the potential loss in reputation from saying something stupid. But in practice, we seem to understate our uncertainty most of the time.

Some experts aren’t experts at all. I was once called by a TV network (one of the benefits of living in New York?) to be interviewed about the lottery. I’m no expert—I referred them to Clotfelter and Cook. Other times, I’ve seen statisticians quoted in the paper on subjects they know nothing about. Once, several years ago, a colleague came into my office and asked me what “sampling probability proportional to size” was. It turned out he was doing some consulting for the U.S. government. I was teaching a sampling class at the time, so I could help him out. But it was a little scary that he had been hired as a sampling expert. (And, yes, I’ve seen horrible statistical consulting in the private sector as well.)
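For readers who, like that colleague, haven’t seen the term: “sampling probability proportional to size” (PPS) means each unit’s chance of selection is proportional to some size measure. A minimal sketch of PPS sampling with replacement (the unit names and sizes here are hypothetical):

```python
import random

# Hypothetical units with a "size" measure (e.g., district populations).
sizes = {"A": 10, "B": 30, "C": 60}

units = list(sizes)
weights = [sizes[u] for u in units]  # selection weight proportional to size

# PPS with replacement: each draw selects a unit with probability
# size / total size, so unit C is drawn about 60% of the time.
random.seed(1)
sample = random.choices(units, weights=weights, k=5)
print(sample)
```

PPS without replacement, which real surveys typically use, is more involved; this sketch only illustrates the basic idea.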

Summary

A thought-provoking and also fun book. The statistics of low-probability events has long interested me, and the stuff about the financial world was all new to me. The related work of Mandelbrot discusses some of these ideas from a more technical perspective. (I became aware of Mandelbrot’s work on finance through this review by Donald MacKenzie.)

P.S.

Taleb is speaking this Friday at the Collective Dynamics Seminar.

Update (2014):

I thought Fooled by Randomness made Taleb into a big star, but then his followup effort, The Black Swan, really hit the big time. I reviewed The Black Swan here.

The Collective Dynamics Seminar unfortunately is no more; several years ago, Duncan Watts left Columbia to join Yahoo research (or, as I think he was contractually required to write, Yahoo! research). Now he and his colleagues (who are my collaborators too) work at Microsoft research, still in NYC.

9 Comments

  1. If you’re interested in the statistics of low-probability events, you’ve probably seen David J. Hand’s latest book, “The Improbability Principle.” I’ve always enjoyed Hand’s clear and elegant writing style, and this book was not an exception.

    Taleb’s first two books are great, but his boastful self-confidence puts me off sometimes. Considering that he constantly writes about the dangers of hidden uncertainties and cognitive biases, it surprises me that he’s so certain and aggressive about many things.

    The link in “statistics of low-probability events” is not working, by the way. It seems that you were trying to link to one of your papers.

  2. July says:

    The link to “related work of Mandelbrot” works no more.

  3. Rahul says:

    I nominate Taleb as one of the most overrated writers of the decade. Nothing like a serendipitous recession to bolster one’s creds.

  4. Jonathan says:

    My favorite stuff to laugh at are things like NFL mock drafts. You can’t model the process accurately; the supply inputs and demand calls vary year to year, so there’s no predictive power from the past. You can’t do more than guess at intentions, and it fails in game-theoretic terms because there is no “prize” or “best outcome” other than your actual selection, and you can’t evaluate how one choice affects other choices down the line because you only see the actual choices made, not the hidden processes (like if each team had to submit to the NFL what they would have picked on their own, even with a tree of choices at each selection). It’s my definition of an absolute waste of time … but very funny.

    In terms of what Taleb writes about, I find people still don’t get his implications: that active markets, meaning spaces with lots of money and turnover and thus change, will ferret out “opportunity” given sufficient time and will explore the profit potentials of these opportunities (even if it means ignoring and then returning to an “opportunity” when it is riper, etc.). In these spaces, trusting activity and thus risk/return will remain centered in the distribution becomes dumb because if you fit these various opportunities to the larger distribution of “all opportunities” or however you phrase that, you see some of them will generate tail effects at another level. It has to be that way because the various distributions aren’t going to line up perfectly on top of each other. That would be a miracle. What we saw with subprime or AltA or various insurance derivatives, etc. was a tail event in one distribution that wasn’t a tail event in another; it was actually likely some of the “opportunities” explored would have results whose likelihood would center around disaster or some hugely negative return or the equivalent. That is, multi-level modeling generates tail events in one distribution that aren’t tail events in another. People still have a tendency to think in flat terms.

    • Brad Stiritz says:

      Jonathan,

      >if you fit these various opportunities to the larger distribution of “all opportunities” or however you phrase that, you see some of them will generate tail effects at another level. It has to be that way because the various distributions aren’t going to line up perfectly on top of each other.. People still have a tendency to think in flat terms.

      Well, so perhaps I’m one of those people.. I’m trying to visualize exactly what you’re talking about. Are you quoting from Taleb? (I haven’t read any of his books) Are you saying the various distributions don’t map to the classic NHST paradigm? I would have thought that your “larger distribution of all opportunities” could be viewed as the parent null distribution of all possible trades / strategies & their individual returns, with return / Sharpe Ratio / etc. as the X-axis variable?

      >What we saw with subprime or AltA or various insurance derivatives, etc. was a tail event in one distribution that wasn’t a tail event in another..

      Yes, I can imagine that return metrics of some of those crisis-linked instruments fall on an alternate distribution (i.e., rejecting the null hypothesis) *on the same common X-axis* but could you please clarify / elaborate, or refer me to Taleb if appropriate?

      Thanks for your consideration..

  5. Eli Rabett says:

    US Presidential elections are actually 51 (50 states + DC) elections, so even in landslide years there are marginal cases, and in close elections there are landslides. The landscape has enough features to evaluate models.

    • Andrew says:

      Eli:

      I was referring specifically to evaluations of success based on predictions of the winner of the national election. If you use state-by-state vote shares, indeed there is lots of evidence in landslide elections as well.

  6. Kaiser says:

    Have been thinking quite a bit about evaluating forecasts lately and I totally agree with your paragraph about “evaluations based on luck” and in particular about separating easy cases from truly hard cases. We typically compute average accuracy but skill is often only shown in the difficult cases. I think this problem is solved if we take a relative measure, i.e. compare your model to the current best-in-class model, in which case, both models will perform well on the easy cases. Any comparison against the actual outcomes (and not relative to a competing model) is problematic I believe as it does not have an anchor.
