
As the boldest experiment in journalism history, you admit you made a mistake


The pre-NYT David Brooks liked to make fun of the NYT. Here’s one from 1997:

I’m not sure I’d like to be one of the people featured on the New York Times wedding page, but I know I’d like to be the father of one of them. Imagine how happy Stanley J. Kogan must have been, for example, when his daughter Jamie got into Yale. Then imagine his pride when Jamie made Phi Beta Kappa and graduated summa cum laude. . . . he must have enjoyed a gloat or two when his daughter put on that cap and gown.

And things only got better. Jamie breezed through Stanford Law School. And then she met a man—Thomas Arena—who appears to be exactly the sort of son-in-law that pediatric urologists dream about. . . .

These two awesome resumes collided at a wedding ceremony . . . It must have been one of the happiest days in Stanley J. Kogan’s life. The rest of us got to read about it on the New York Times wedding page.

Brooks is reputed to be Jewish himself, so I think it’s ok for him to mock Jewish people in print. The urologist bit . . . well, hey, I’m not above a bit of bathroom humor myself—and neither, for that matter, is the great Dave Barry—so I can hardly fault a columnist for finding a laugh where he can.

The interesting part, though, comes near the end of the column:

The members of the cognitive elite will work their way up into law partnerships or top jobs at the New York Times, but they probably won’t enter the billionaire ranks. The real wealth will go to the risk-taking entrepreneurs who grew up in middle- or lower-middle-class homes and got no help from their non-professional parents when they went off to college.

One of the fun things about revisiting old journalism is that we can check how the predictions turned out. So let’s examine the two claims above, 17 years later:

1. “The members of the cognitive elite . . . probably won’t enter the billionaire ranks.” Check. No problem there. Almost nobody is a billionaire, so, indeed, most people with graduate degrees who are featured in the NYT wedding section do not become billionaires.

2. “The real wealth will go to the risk-taking entrepreneurs who grew up in middle- or lower-middle-class homes and got no help from their non-professional parents when they went off to college.” Hmmm . . . I googled rich people and found this convenient Wikipedia list of members of the Forbes 400. Let’s go through them in order:

Bill Gates
Warren Buffett
Larry Ellison
Charles Koch
David H. Koch
Christy Walton
Jim Walton
Alice Walton
S. Robson Walton
Michael Bloomberg
Sheldon Adelson
Jeff Bezos
Larry Page
Sergey Brin
Forrest Mars, Jr.

Most of these had backgrounds far above the middle class. For example, of Gates, “His father was a prominent lawyer, and his mother served on the board of directors for First Interstate BancSystem and the United Way.” Here’s Buffett: “Buffett’s interest in the stock market and investing also dated to his childhood, to the days he spent in the customers’ lounge of a regional stock brokerage near the office of his father’s own brokerage company.” Koch: “After college, Koch started work at Arthur D. Little, Inc. In 1961 he moved back to Wichita to join his father’s business, Rock Island Oil & Refining Company.” And I don’t think I have to tell you about the backgrounds of the Waltons or Forrest Mars, Jr. Larry Page had more of a middle-class background but not the kind that David Brooks was looking for: “His father, Carl Page, earned a Ph.D. in computer science in . . . and is considered a pioneer in computer science and artificial intelligence. Both he and Page’s mother, Gloria, were computer science professors at Michigan State University.” And here’s Sergey Brin: “His father is a mathematics professor at the University of Maryland, and his mother a researcher at NASA’s Goddard Space Flight Center.” Damn! Foiled again. They might even have really violated Brooks’s rule and paid for Brin’s college education.
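Just to make the tally explicit, here’s a minimal sketch in R. The TRUE/FALSE labels are nothing official, just my reading of the Wikipedia bios quoted above, where TRUE means the person fits Brooks’s profile of a middle- or lower-middle-class upbringing with no professional-parent help:

```r
# Hand-coded tally of the Forbes top 15 against Brooks's prediction.
# The labels are judgment calls based on the Wikipedia bios above.
forbes15 <- data.frame(
  name = c("Bill Gates", "Warren Buffett", "Larry Ellison", "Charles Koch",
           "David H. Koch", "Christy Walton", "Jim Walton", "Alice Walton",
           "S. Robson Walton", "Michael Bloomberg", "Sheldon Adelson",
           "Jeff Bezos", "Larry Page", "Sergey Brin", "Forrest Mars, Jr."),
  fits_brooks = c(FALSE, FALSE, TRUE, FALSE, FALSE, FALSE, FALSE, FALSE,
                  FALSE, TRUE, TRUE, TRUE, FALSE, FALSE, FALSE)
)
sum(forbes15$fits_brooks)   # 4
mean(forbes15$fits_brooks)  # about 0.27
```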

That leaves us with Larry Ellison, Sheldon Adelson, Michael Bloomberg, and Jeff Bezos: 4 out of the Forbes 15. So, no, I think Brooks would’ve been more prescient had he written:

The real wealth will go to the heirs of rich people or to risk-taking entrepreneurs who grew up in rich or upper-class homes or who grew up middle class but got lots of help from their well-educated professional parents when they went off to college and graduate school.

But that wouldn’t have sounded as good. It would’ve been like admitting that the surf-and-turf at Red Lobster actually cost more than $20. As Sasha Issenberg reported back in 2006:

I went through some of the other instances where he [Brooks] made declarations that appeared insupportable. He accused me of being “too pedantic,” of “taking all of this too literally,” of “taking a joke and distorting it.” “That’s totally unethical,” he said.

This time, let me make it clear that I’m not saying that Brooks did any false reporting. He just made a prediction in 1997 that was way way off. I do think Brooks showed poor statistical or sociological judgment, though. To think that “the real wealth” will go to the children of the “middle- or lower-middle-class” who don’t even pay for their college education . . . that’s just naiveté or wishful thinking at best or political propaganda at worst.

Brooks follows up his claim with this bizarre (to me) bit of opinionizing:

The people on the New York Times wedding page won’t make $4 million a year like the guy who started a chain of erotic car washes. They’ll have to make do with, say, $1.2 million if they make partner of their law firms. Maybe even less. The cognitive elite have more status but less money than the millionaire entrepreneurs, and their choices as consumers reflect their unceasing desire to demonstrate their social superiority to people richer than themselves.

I honestly can’t figure out what he’s getting at here except that I think it’s a bit of “mood affiliation,” as Tyler Cowen might say. According to Brooks’s ideology (which he seems to have borrowed from Tom Wolfe), “the guy who started a chain of erotic car washes” is a good guy, and “the cognitive elite” are bad guys. One way you can see this is that the erotic car wash guy is delightfully unpretentious (he might, for example, have season tickets to the local football team and probably has a really big house and a bunch of cars and boats, and he probably eats a lot of fat steaks too), while the cognitive elite have an “unceasing desire to demonstrate their social superiority.” They’re probably Jewish, too, just like that unfortunate urologist from the first paragraph of Brooks’s article.

But the thing that puzzles me is . . . isn’t $1.2 million a year enough? I mean, sure, if this car wash guy really wants more more more, then he can go for it, why not. But it seems a bit rich to characterize a bigshot lawyer as being some sort of envious hater because he was satisfied to max out at only a million a year. I mean, that’s just sad. Really sad, if there are people out there who think they’re failures unless they make $4 million a year. There just aren’t that many slots in the world for people like that. If you have that attitude, you’re doomed to failure, statistically speaking.

Why bother?

The question always comes up when I write about these political journalists: why spend the time? Wouldn’t the world be better off if I were to put the equivalent effort into Stan, or EP, or Waic, or APC, or MRP, or various other statistical ideas that can really help people out?

Even if you agree with me that David Brooks is misguided, does it really help for me to dredge up a 17-year-old column? Better perhaps to let these things just sit, forgotten, for another 17 years.

My short and lazy answer is that I blog in part to let off steam. Better for me to just express my annoyance (even if, as in this case, it took me an hour to look up all those Wiki pages and write the post) than have it fester in my mind, distracting me from more important tasks.

My longer answer is: Red State Blue State. I do think that statistical misunderstandings can lead to political confusion. After all, if you really think that a good ticket for massive wealth is having lower-middle-class parents who won’t pay for college . . . well, that has some potential policy implications. But if you go with the facts and look at who the richest Americans really are and where they came from, that’s a different story.

Also, more generally, I wish people would revisit their pasts and correct their mistakes. I did it with lasso and I wish Brooks would do it here. What a great topic for his next NYT column: he could revisit this old article of his and explain where he went wrong, and how this could be a great learning experience. A lesson in humility, as it were.

I’ll make a deal with David Brooks: if you devote a column to this, I’ll devote a column to my false theorem—the paper my colleague and I published in 1993 that we had to retract because our so-called theorem was just wrong. I mean wrong wrong wrong, as in someone sent us a counterexample.

But I doubt Brooks will take me up on my offer, as I don’t think he ever ran a column on his mistake regarding the prices at Red Lobster, nor did he ever retract the “potentially ground-shifting” but false claims he publicized a while ago in his column.

So, even though I would think it would be excellent form, and in Brooks’s best interests, to correct his past errors, he doesn’t seem to think so himself. I find myself in the position of Albert Brooks in that famous scene in Lost in America in which he tries in vain to persuade the casino manager to give back all the money his wife just gambled away: “As the boldest experiment in advertising history, you give us our money back.”

Am I too negative?

For background, you can start by reading my recent article, Is It Possible to Be an Ethicist Without Being Mean to People? and then a blog post, Quality over Quantity, by John Cook, who writes:

At one point [Ed] Tufte spoke more generally and more personally about pursuing quality over quantity. He said most papers are not worth reading and that he learned early on to concentrate on the great papers, maybe one in 500, that are worth reading and rereading rather than trying to “keep up with the literature.” He also explained how over time he has concentrated more on showcasing excellent work than on criticizing bad work. You can see this in the progression from his first book to his latest. (Criticizing bad work is important too, but you’ll have to read his early books to find more of that. He won’t spend as much time talking about it in his course.) That reminded me of Jesse Robbins’ line: “Don’t fight stupid. You are better than that. Make more awesome.”

This made me stop and think, given how much time I spend criticizing things. Indeed, like Tufte I’ve spent a lot of time criticizing chartjunk! I do think, though, that I and others have learned a lot from my criticisms. There’s some way in which good examples, as well as bad examples, can be helpful in developing and understanding general principles.

For example, consider graphics. As a former physics major, I’ve always used graphs as a matter of course (originally using pencil on graph paper and then moving to computers), and eventually I published several papers on graphics that had constructive, positive messages:

Let’s practice what we preach: turning tables into graphs (with Cristian Pasarica and Rahul Dodhia)

A Bayesian formulation of exploratory data analysis and goodness-of-fit testing

Exploratory data analysis for complex models

as well as many, many applied papers in which graphical analysis was central to the process of scientific discovery (in particular, see this paper (with Gary King) on why preelection polls are so variable and this paper (with Gary King) on the effects of redistricting).

The next phase of my writing on graphics accentuated the negative, with a series of blog posts over several years criticizing various published graphs. I do think this criticism was generally constructive (a typical post might point to a recent research article and make some suggestions of how to display the data or inferences more clearly) but it certainly had a negative feel—to the extent that complete strangers started sending me bad graphs to mock on the blog.

This phase peaked with a post of mine from 2009 (with followup here), slamming some popular infographics. These and subsequent posts sparked lots of discussion, and I was motivated to work with Antony Unwin on the article that eventually became Infovis and statistical graphics: Different goals, different looks, published with discussion in the Journal of Computational and Graphical Statistics. Between the initial post and the final appearance of the paper, my thinking changed, and I became much clearer about the idea that graphical displays have different sorts of goals. And I don’t think I could’ve got there without starting with criticism.

(Here’s a blog post from 2011 where I explain where I’m coming from on the graphics criticism. See also here for a slightly broader discussion of the difficulties of communication across different research perspectives.)

A similar pattern seems to be occurring in my recent series of criticisms of “Psychological Science”-style research papers. In this case, I’m part of an informal “club” of critics (Simonsohn, Francis, Ioannidis, Nosek, etc etc), but, again, it seems that criticism of bad work can be a helpful way of moving forward and thinking harder about how to do good work.

It’s funny, though. In my blog and in my talks, I talk about stuff I like and stuff I don’t like. But in my books, just about all my examples are positive. We have very few negative examples, really none at all that I can think of (except for some of the examples in the “lying with statistics” chapter in the Teaching Statistics book). This suggests that I’m doing something different in my books than in my blogs and lectures.

Association for Psychological Science announces a new journal


The Association for Psychological Science, the leading organization of research psychologists, announced a long-awaited new journal, Speculations on Psychological Science. From the official APS press release:

Speculations on Psychological Science, the flagship journal of the Association for Psychological Science, will publish cutting-edge research articles, short reports, and research reports spanning the entire spectrum of the science of psychology. We anticipate that Speculations on Psychological Science will be the highest ranked empirical journal in psychology. We recognize that many of the most noteworthy published claims in psychology and related fields are not well supported by data, hence the need for a journal for the publication of such exciting speculations without misleading claims of certainty.

- Sigmund Watson, Prof. (Ret.) Miskatonic University, and editor-in-chief, Speculations on Psychological Science

I applaud this development. Indeed, I’ve been talking about such a new journal for a while now.

The most-cited statistics papers ever

Robert Grant has a list. I’ll just give the ones with more than 10,000 Google Scholar cites:

Cox (1972) Regression models and life-tables: 35,512 citations.

Dempster, Laird, Rubin (1977) Maximum likelihood from incomplete data via the EM algorithm: 34,988

Bland & Altman (1986) Statistical methods for assessing agreement between two methods of clinical measurement: 27,181

Geman & Geman (1984) Stochastic relaxation, Gibbs distributions, and the Bayesian restoration of images: 15,106

We can find some more by searching Google Scholar for familiar names and topics; thus:

Metropolis et al. (1953) Equation of state calculations by fast computing machines: 26,000

Benjamini and Hochberg (1995) Controlling the false discovery rate: a practical and powerful approach to multiple testing: 21,000

White (1980) A heteroskedasticity-consistent covariance matrix estimator and a direct test for heteroskedasticity: 18,000

Heckman (1977) Sample selection bias as a specification error: 17,000

Dickey and Fuller (1979) Distribution of the estimators for autoregressive time series with a unit root: 14,000

Cortes and Vapnik (1995) Support-vector networks: 13,000

Akaike (1973) Information theory and an extension of the maximum likelihood principle: 13,000

Liang and Zeger (1986) Longitudinal data analysis using generalized linear models: 11,000

Breiman (2001) Random forests: 11,000

Breiman (1996) Bagging predictors: 11,000

Newey and West (1986) A simple, positive semi-definite, heteroskedasticity and autocorrelation consistent covariance matrix: 11,000

Rosenbaum and Rubin (1983) The central role of the propensity score in observational studies for causal effects: 10,000

Granger (1969) Investigating causal relations by econometric models and cross-spectral methods: 10,000

Hausman (1978) Specification tests in econometrics: 10,000

And, the two winners, I’m sorry to say:

Baron and Kenny (1986) The moderator–mediator variable distinction in social psychological research: Conceptual, strategic, and statistical considerations: 42,000

Zadeh (1965) Fuzzy sets: 45,000

Ugh.

But I’m guessing there are some biggies I’m missing. I say this because Grant’s original list included one paper, by Bland and Altman, with over 27,000 cites, that I’d never heard of!

P.S. I agree with Grant that using Google Scholar favors newer papers. For example, Cooley and Tukey (1965), “An algorithm for the machine calculation of complex Fourier series,” does not make the list, amazingly enough, with only 9300 cites. And the hugely influential book by Snedecor and Cochran has very few cites, I guess cos nobody cites it anymore. And, of course, the most influential researchers such as Laplace, Gauss, Fisher, Neyman, Pearson, etc., don’t make the cut. If Pearson got a cite for every chi-squared test, Neyman for every rejection region, Fisher for every maximum-likelihood estimate, etc., their citations would run into the mid to high zillions each.

P.P.S. I wrote this post a few months ago so all the citations have gone up. For example, the fuzzy sets paper is now listed at 49,000, and Zadeh has a second paper, “Outline of a new approach to the analysis of complex systems and decision processes,” with 16,000 cites. He puts us all to shame. On the upside, Efron’s 1979 paper, “Bootstrap methods: another look at the jackknife,” has just pulled itself over the 10,000 cites mark. That’s good. Also, I just checked and Tibshirani’s paper on lasso is at 9873, so in the not too distant future it will make the list too.

On deck this week

Mon: The most-cited statistics papers ever

Tues: Association for Psychological Science announces a new journal

Wed: Am I too negative?

Thurs: As the boldest experiment in journalism history, you admit you made a mistake

Fri: The Notorious N.H.S.T. presents: Mo P-values Mo Problems

Sat: Bizarre academic spam

Sun: An old discussion of food deserts

Just gave a talk

I just gave a talk in Milan. Actually I was sitting at my desk; it was a Google+ hangout, which was a bit more convenient for me. The audience was a bunch of astronomers so I figured they could handle a satellite link. . . .

Anyway, the talk didn’t go so well. Two reasons: first, it’s just hard to get the connection with the audience without being able to see their faces. Next time I think I’ll try to get several people in the audience to open up their laptops and connect to the hangout, so that I can see a mosaic of faces instead of just a single image from the front of the room.

The second problem with the talk was the topic. I asked the people who invited me to choose a topic, and they picked Can we use Bayesian methods to resolve the current crisis of statistically-significant research findings that don’t hold up? But I don’t think this was right for this audience. I think that it would’ve been better to give them the Stan talk or the little data talk or the statistical graphics talk.

The moral of the story: I can solicit people’s input on what to speak on, but ultimately the choice is my responsibility.

Adjudicating between alternative interpretations of a statistical interaction?

Jacob Felson writes:

Say we have a statistically significant interaction in non-experimental data between two continuous predictors, X and Z, and it is unclear which variable is primarily a cause and which is primarily a moderator. One person might find it more plausible to think of X as a cause and Z as a moderator, and another person may think the reverse more plausible. My question then is whether there are any rules or heuristics you could recommend to help adjudicate between alternate perspectives on such an interaction term.

My reply:

I think in this setting, it would make sense to think about different interventions, some of which affect X, others of which affect Z, others of which affect both, and go from there. Rather than trying to isolate a single causal path, consider different cases of forward causal inference. My guess is that the different stories regarding moderators etc. could motivate different thought experiments (and, ultimately, different observational studies) regarding different potential interventions.

So I would not try to “adjudicate” between different stories; rather, I’d recognize that they could all be appropriate, just corresponding to different interventions. Also, all the above would hold even if there are only main effects, no interactions needed. And, for that matter, statistical significance would not be needed either for you to look at these questions.
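One way to see why “adjudicating” is the wrong frame: in the fitted model itself, the interaction is perfectly symmetric in X and Z, so the data can’t tell you which variable is the moderator. Here’s a minimal sketch with simulated data (all variable names and coefficient values are made up for illustration):

```r
# In lm(y ~ x * z), the interaction term is symmetric: the effect of x
# depends on z in exactly the way the effect of z depends on x.
set.seed(123)
n <- 500
x <- rnorm(n)
z <- rnorm(n)
y <- 1 + 0.5 * x + 0.3 * z + 0.4 * x * z + rnorm(n)
fit <- lm(y ~ x * z)
coef(fit)
# Implied slopes: dy/dx = 0.5 + 0.4 * z, and dy/dz = 0.3 + 0.4 * x.
# The same 0.4 plays both roles, so "cause vs. moderator" is not
# something the regression can adjudicate; it depends on the intervention.
```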

References (with code) for Bayesian hierarchical (multilevel) modeling and structural equation modeling

A student writes:

I am new to Bayesian methods. While I am reading your book, I have some questions for you. I am interested in doing Bayesian hierarchical (multi-level) linear regression (e.g., a random-intercept model) and Bayesian structural equation modeling (SEM)—for causality. Do you happen to know where I could find some articles in which the authors provide data with R and/or BUGS code, so that I could replicate them?

My reply: For Bayesian hierarchical (multi-level) linear regression and causal inference, see my book with Jennifer Hill. For Bayesian structural equation modeling, try google and you’ll find some good stuff. Also, I recommend Stan (http://mc-stan.org/) rather than Bugs.
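For a concrete starting point, here’s a minimal random-intercept sketch using rstanarm, an R interface to Stan; the data are simulated and all names are placeholders, so treat it as a template rather than a worked example from the book:

```r
library(rstanarm)  # fits Stan models with lme4-style formulas

# Simulate grouped data: 30 groups with varying intercepts
set.seed(42)
J <- 30; n_per <- 20
group <- rep(1:J, each = n_per)
a <- rnorm(J, 0, 1)                       # group-level intercepts
x <- rnorm(J * n_per)
y <- a[group] + 0.6 * x + rnorm(J * n_per, 0, 0.5)
df <- data.frame(y, x, group = factor(group))

# Bayesian random-intercept regression: varying intercept by group
fit <- stan_lmer(y ~ x + (1 | group), data = df)
print(fit)
```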

I agree with this comment

The anonymous commenter puts it well:

The problem is simple: the researchers are disproving always-false null hypotheses and taking this disproof as near proof that their theory is correct.
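A quick simulation shows what the commenter means: if a point null is even trivially false (as point nulls essentially always are with real data), a large enough sample will reject it nearly every time, and that rejection says nothing about whether the researcher’s substantive theory is right. A minimal sketch in R:

```r
# With a tiny true effect (the point null is false, but only just),
# large n rejects the null almost every time.
set.seed(1)
reject <- replicate(1000, {
  x <- rnorm(1e4, mean = 0.05)   # true mean 0.05; null says 0
  t.test(x)$p.value < 0.05
})
mean(reject)  # close to 1
```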

Creating a Lenin-style democracy

Mark Palko explains why a penalty for getting the wrong answer on a test (the SAT, which is used in college admissions and which appears in the famous 8 schools example) is not a “penalty for guessing”; the arithmetic is sketched below. Then the very next day he catches this from Todd Balf in the New York Times Magazine.
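To spell out the point (under the assumption of the old SAT scoring rule: five answer choices, one point for a correct answer, a 1/4-point deduction for a wrong one), the deduction exactly cancels the expected gain from blind guessing, so it is a correction for guessing rather than a penalty:

```r
# Expected score from blind guessing on a five-choice item, assuming the
# old SAT rule: +1 for a right answer, -1/4 for a wrong one.
p_right <- 1/5
p_right * 1 + (1 - p_right) * (-1/4)  # = 0: blind guessing is a wash
```

And if you can eliminate even one wrong choice, guessing among the rest has positive expected value: with four live options, the expectation is 1/4 - (3/4)(1/4) = 1/16 of a point.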