## Let’s try this: Instead of saying, “The probability is 75%,” say “There’s a 25% chance I’m wrong”

I recently wrote about the difficulty people have with probabilities, in this case the probability that Obama wins the election. If the probability is reported as 70%, people think Obama is going to win. Actually, though, it just means that Obama is predicted to get about 50.8% of the two-party vote, with an uncertainty of something like 2 percentage points. So, as I wrote, the election really is too close to call in the sense that the predicted vote margin is less than its uncertainty.
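To make that arithmetic concrete, here is a minimal sketch. The 50.8% and 2-point figures are from the paragraph above; treating the forecast error as normally distributed is my own illustrative assumption, not a claim about any particular model:

```python
from statistics import NormalDist

# Illustrative sketch: treat the forecast as a normal distribution
# over Obama's two-party vote share, using the numbers from the post.
mean_share = 50.8   # predicted two-party vote share, in percent
sd = 2.0            # rough forecast uncertainty, in percentage points

# Win probability = chance the actual share clears 50%
p_win = 1 - NormalDist(mean_share, sd).cdf(50.0)
print(round(p_win, 2))  # roughly 0.66 with these particular numbers
```

With these rough inputs the probability comes out in the mid-60s, the same ballpark as the reported 70%. The point stands either way: a predicted margin smaller than its own uncertainty still translates into a fairly confident-sounding win probability.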

But . . . when people see a number such as 70%, they tend to attribute too much certainty to it, especially when the estimated probability has increased from, say, 60%. How to get the point across? Commenter HS had what seems like a good suggestion:

Say that Obama will win, but there is a 25% chance (or whatever) that this prediction is wrong? Same point, just slightly different framing, but somehow, this seems far less incendiary.

I like that. Somehow a stated probability of 75% sounds all too sure, whereas saying that there’s a 25% chance of being wrong seems to better convey the uncertainty in the prediction.

1. Jonathan (a different one) says:

I carried out the following colloquy with the untrained statistician in my household yesterday:
(1) Nate Silver has said that Obama has an 80 percent chance of winning. If Romney wins, was he wrong? (Wife: Yes)
(2) If the weatherman says there’s a 20 percent chance of rain, and it rains, was the weatherman wrong? (Wife: No)
Hmmmmm.
(3) OK. Nate Silver says that Romney has a twenty percent chance of winning. If Romney wins, was Nate wrong? (Wife: No)
Do your answers to (1) and (3) seem odd to you? (Wife: No)

• Phil Koop says:

“Do your answers to (1) and (3) seem odd to you? (Wife: No)”

Therein lies the problem with the re-framing idea. You can restate your message until you elicit the reaction you want, but how do you know that reaction reflects any real understanding? I would say that if the equivalence between 80% sunny and 20% rainy cannot be recognized even in retrospect, the re-framing has failed.

• Bill Jefferys says:

That is an interesting story, Jonathan.

A comment on one of Nate’s recent blog entries, posted when Obama’s win probability was around 84%, put it this way (paraphrasing):

The probability that Romney wins (under those circumstances) is approximately equal to the probability that you will kill yourself in a game of Russian roulette, using a six-gun, about 1 chance in 6. If you were to be unlucky and kill yourself, would that mean that the probability is wrong?

People seem better able to relate to this question when the low-probability event is the one that is highlighted. I mentioned this to my class the other day.

2. Your articles focus on the popular vote, but the election prediction sites also look at the electoral vote. There, it has been shown not only that the chances are high that Obama will secure the 270 votes needed to win the election, but that the scenarios in which Romney would win would imply that the polls are systematically biased. Furthermore, there are very few routes to a win for Romney, and the most probable routes by a long shot have him losing the electoral vote. I haven’t crunched the numbers, but there seems to be much less uncertainty in this outcome than in the outcome of the popular vote. So I hope that this doesn’t get lost in your reframing of the debate.

As for being able to simultaneously state that the election is too close to call and that so-and-so has an X% chance of winning, I don’t think that Nate Silver or any of the other election prediction people have ever argued that it isn’t too close to call. They’re just saying that the odds favor Obama. And they do, regardless of how small a change in the margin a 10% change in the probability of winning corresponds to.

Next, while a change in the probability of Obama winning from 60% to 70% corresponds to a small change in the margin of victory, I think the more important question is what a change in the probability of Obama winning from, say, 30% to 70%, or even 10% to 90%, corresponds to. Why? Because that would give us a sense of what it means for the popular vote when we invert our beliefs about the odds of winning (30 to 70 being an inversion of Nate Silver’s prediction, which is actually 80/20 now, and 10 to 90 corresponding to what Sam Wang’s and Darryl Holman’s models have been saying for a while now).

As for the best way to state the probability, perhaps we could look at some of the research that cognitive psychologists have done on people’s perceptions of weather prediction. Sonia Savelli and colleagues have done interesting work on this. That said, I honestly don’t think the suggested change would overwhelm the partisan bias that drives people’s interpretation of the model results. Furthermore, I think the best way to change people’s perceptions of these models is to make them a mainstay of American politics.
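The inversion question raised above can be sketched under the same rough model used in the post. The 2-point uncertainty figure comes from the post; the normal-error assumption and the `implied_lead` helper are mine, for illustration only:

```python
from statistics import NormalDist

SD = 2.0  # rough uncertainty in the two-party vote share, in points (from the post)

def implied_lead(p_win, sd=SD):
    """Vote-share lead over 50% implied by a win probability,
    assuming normally distributed forecast error (an illustrative sketch)."""
    return NormalDist(0.0, sd).inv_cdf(p_win)

for p in (0.30, 0.70, 0.10, 0.90):
    print(f"{p:.0%} win probability ~ {implied_lead(p):+.2f}-point lead")
```

Under these assumptions, inverting 30/70 only moves the implied lead from about -1 to +1 point, while inverting 10/90 moves it from about -2.6 to +2.6 points; that is the sense in which large swings in win probability correspond to modest swings in the vote.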

• Andrew says:

Brash:

Yes, when I predict presidential elections I look at electoral vote also. The relevant point here is that a 1% swing in the predicted vote in each state could easily swing the election, and the uncertainty in these predictions is more than 1%.

3. Another idea from the Judgment and Decision Making literature would be to “unpack” the probabilities of “getting the forecast wrong” a bit more. This is a bit bizarre, but it turns out that disambiguating the various ways an event can turn out increases people’s perceptions that the event will occur. For example, instead of communicating the probability that Obama does not win, communicate “the probability that Romney wins more electoral college votes outright -OR- Romney wins after a recount -OR- Romney wins after an electoral college tie -OR- etc.”

This idea goes back to a paper by Fischhoff and colleagues from 1978 — the original demonstration (http://www.gwern.net/docs/1978-fischhoff.pdf) was with car mechanics.

• Andrew says:

Mark:

Yes, I used the car-fault-tree example in class a couple weeks ago!

Regarding the election example: Yes, when I speak or write on this (as in the first link in my post above), I talk about the different ways the forecast can be wrong: voters can decide at the last minute, polls miss a lot of people (the nonresponse rate grows every year), and survey adjustments are only approximate.

4. Steve Fenn @OptaHunt says:

Silver essentially did this on Friday. The concluding paragraph from his post titled Nov. 2: For Romney to Win, State Polls Must Be Statistically Biased:
“But the state polls may not be right. They could be biased. Based on the historical reliability of polls, we put the chance that they will be biased enough to elect Mr. Romney at 16 percent.”

5. Slugger says:

A large part of the problem is that people seem to misunderstand what the numbers are saying. A 70% chance of winning does not mean that the vote will be 70% for, 30% against. Many of the critics appear to believe that the prediction is for a 70/30 split in the popular vote or in the electoral college.
Part of the problem is also the problem of assessing the reliability of the predictors. If I get wet on a 20% probability day, at some point I start doubting the weatherman. While there are mathematical tools to help us quantify the reliability of predictions, these tools often do not match up with our gut feelings. If the 20% rain leads to me sharing an umbrella with a sweetie, I will be more favorably disposed to the weatherman than if the 20% rain ruins my favorite suit.
This disparity between the math and our guts is what makes casino owners rich.
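One standard tool for the reliability question raised above is the Brier score: the mean squared difference between forecast probabilities and what actually happened. The data below are made up purely for illustration:

```python
# Brier score sketch with hypothetical data: daily rain-probability
# forecasts vs. whether it actually rained (1 = it rained).
forecasts = [0.2, 0.2, 0.8, 0.6, 0.2]
outcomes  = [0,   1,   1,   1,   0]

brier = sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)
print(round(brier, 3))  # prints 0.184; lower is better, 0 is a perfect forecaster
```

A forecaster who always says 50% scores exactly 0.25, so anyone who beats that is at least adding some information, whatever our guts say about the ruined suit.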

6. Rick in Chicago says:

What about “specification error,” as econometricians and others like to phrase it? When I say that there’s a 25% chance that I’m wrong, it doesn’t (to my ears, at least) allow for any chance that I’ve specified the process incorrectly; it only allows that the process has some ‘noise’ in it.

I’ve not been closely following the arguments, but I suspect that some who disagree with Nate Silver think his model of the process is wrong, not that he can’t do the math. (I don’t see how you could derive insights into the process without making the same mistakes, but anyway…)

7. Entsophy says:

Andrew,

Your opinion was directly contradicted by Paul Krugman. If I’m reading him right, he’s calling you an idiot, or at least suggesting that you’re getting much stupider:

Andrew Gelman: “If the probability is reported as 70%, people think Obama is going to win. Actually, though, it just means that Obama is predicted to get about 50.8% of the two-party vote, with an uncertainty of something like 2 percentage points. So, as I wrote, the election really is too close to call in the sense that the predicted vote margin is less than its uncertainty”

Paul Krugman: “As Nate Silver (who has lately attracted a remarkable amount of hate — welcome to my world, Nate!) clearly explains, state polling currently points overwhelmingly to an Obama victory. It’s possible that the polls are systematically biased — and this bias has to encompass almost all the polls, since even Rasmussen is now showing Ohio tied. So Romney might yet win. But a knife-edge this really isn’t, and any reporting suggesting that it is makes you stupider.” http://krugman.blogs.nytimes.com/2012/11/03/reporting-that-makes-you-stupid/

So how do you feel about a Nobel Laureate directly stating that your blog post is making people stupider and insinuating that you wrote it because you’re either incompetent or dishonest? Please feel free to use colorful language.

Personally, I assume every statistic is biased (whether the bias is “systematic” or not is irrelevant since this election will never be repeated). The real question is just how big does the bias have to be in this case to change the outcome?

• Andrew says:

(1) These polls can be wrong. There are big problems with nonresponse. Exit polls are even worse, but because they will be the only numbers available for a while, everybody (including me) will be drawing all sorts of conclusions from them.

(2) Krugman doesn’t return my emails anymore. Neither does Nate. So I think they’re both too busy to make any judgments about my competence. Meanwhile, I’m spending most of my worktime finishing the third edition of my book, responding to blog comments as a distraction.

• Entsophy says:

Darn it. I was hoping it was going to be the academic equivalent of “Thunderdome” from that Mad Max movie.

• A. Zarkov says:

Krugman has become petulant as he approaches old age. He has a nasty tendency to label anyone who doesn’t see things his way as “stupid.” Too bad for him. He’s hurt his reputation a lot, and fewer and fewer professional people are taking him seriously. He has taken on the mantle of a Dr. Feelgood for the liberal readers of the New York Times. He’s best ignored at this point.

• Entsophy says:

In this case he’s being a complete hack and deserves to be schooled publicly by Gelman, but unfortunately it’s probably a mistake to ignore him completely. He reminds me a bit of Kepler. Kepler’s three laws of planetary motion supposedly take up a small part of his Astronomia nova, and he spent most of his professional time concerned with astrology. Nevertheless, in the right hands (Newton) these few nuggets were of major significance.

8. I don’t like it. There’s no chance of being wrong because you’re not making a point estimate.

• Andrew says:

Fair enough. Change it to: “There’s a 25% chance the outcome will go in the unexpected direction” or something like that.

9. A. Zarkov says:

With only two-to-one odds against Romney, few people would be surprised if he got elected. If the odds against him were 50 to 1, then virtually everyone would be surprised. When William F. Buckley ran for mayor of New York City, the press asked him what he would do first. “Ask for a recount” was his famous response. I think we need a surprise index of some sort, something along the lines of Weaver’s “Probability, Rarity, Interest, and Surprise.”

10. Lee Sechrest says:

A long time ago (at least it seems so to me), in the movie “Naked Gun,” the O.J. Simpson character was shot full of holes and was in an emergency room. The Leslie Nielsen character comes in and asks the doctor, “What are his chances, Doc?” The doctor replies, “Only 1 in 10, and only a 50/50 chance of that.”

And in the cartoon strip “Miss Peach,” the nasty little girl Marcia is berating poor dumb Arthur. She says something like, “Arthur your model for predicting the weather is stupid. You predicted sunshine and it is raining!” Arthur replies, “Well, maybe I predicted sunshine and it is stupidly raining.”

So there may be some defense for Nate Silver if he is wrong: this is a stupid election.