Over at the sister blog, they’re overinterpreting forecasts

Matthew Atkinson and Darin DeWitt write, “Economic forecasts suggest the presidential race should be a toss-up. So why aren’t Republicans doing better?”

Their question arises from a juxtaposition of two apparently discordant facts:

1. “PredictWise gives the Republicans a 35 percent chance of winning the White House.”

2. A particular forecasting model (one of many, many that are out there) predicts “The Democratic Party’s popular-vote margin is forecast to be only 0.1 percentage points. . . . a 51 percent probability that the Democratic Party wins the popular vote.”

Thus Atkinson and DeWitt conclude that “the Republican Party is underperforming this model’s prediction by 14 percentage points.” And they go on to explain why.

But I think they’re mistaken—not in their explanations, maybe, but in their implicit assumption that a difference between a 49% chance of winning from a forecast, and a 35% chance of winning from a prediction market, demands an explanation.

Why do I say this?

First, when you take one particular model as if it represents the forecast, you’re missing a lot of your uncertainty.

Second, you shouldn’t take the probability of a win as if it were an outcome in itself. The difference between a 65% chance of winning and a 51% chance of winning is not 14 percentage points in any real sense; it’s more like a difference of 1 or 2 percentage points of the vote. That is, the model predicts roughly a 50/50 vote split, and maybe the markets are predicting 52/48; that’s a 2-percentage-point difference, not 14 percentage points.
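
To make that mapping concrete, here’s a minimal back-of-the-envelope sketch (mine, not the model from the post and not PredictWise’s method): if the forecast for the Democratic two-party vote share is roughly normal, with an assumed standard deviation of 2 points picked purely for illustration, win probabilities translate back into expected vote shares like this.

```python
# Sketch: translate win probabilities into implied expected two-party vote
# shares, assuming a normal forecast for the Democratic share with an
# illustrative (made-up) standard deviation of 2 points.
from scipy.stats import norm

SD = 2.0  # assumed forecast SD for the Democratic two-party share, in points

def implied_vote_share(win_prob, sd=SD):
    """Expected Democratic vote share consistent with a given P(Dem wins)."""
    return 50 + sd * norm.ppf(win_prob)

for p in (0.51, 0.65):
    print(f"P(win) = {p:.0%}  ->  implied share ~ {implied_vote_share(p):.1f}%")

# Output (approximately):
#   P(win) = 51%  ->  implied share ~ 50.1%
#   P(win) = 65%  ->  implied share ~ 50.8%
```

Under this assumed spread, a 14-point gap in win probability corresponds to well under a point of vote share; with a larger assumed standard deviation it would be a point or two, which is the scale being talked about here.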

It’s not that Atkinson and DeWitt are wrong to be looking at discrepancies between different forecasts; I just think they’re overinterpreting what is essentially one data point. Forecasts are valuable, but different sources of information are never going to be completely aligned.

5 thoughts on “Over at the sister blog, they’re overinterpreting forecasts”

  1. The standard errors on the forecasting models (both coefficient and overall) are so large that they’re largely irrelevant (Palmquist had something in the APSA house organ about 20 years ago, but it was obvious then and it’s obvious now, particularly since elections from the ’60s and ’70s aren’t really comparable to current elections, so you have 8 observations or so). Using these forecasting models for anything except talking-head chit-chat is a waste of time (and academics shouldn’t be doing it, as it’s neither novel nor useful).

  2. What do you think the standard error is for predictions at this moment? I doubt it’s better than +/-10%. At that level of uncertainty you need a big shift in the estimate of the mean to get from 50 to 35. Like 5 percentage points (a rough version of this arithmetic is sketched after these comments).

  3. Great point. It also has to do with risk-neutral probabilities (https://en.wikipedia.org/wiki/Risk-neutral_measure).

    A 52% chance of winning need not map to 52% odds on a prediction market, and there’s no reason to expect it to.

    A forecast giving (R) a 49% chance of winning the popular vote, with a 95% C.I. for the vote share of, say, 49.2-50.1, could easily map to something like a ~30% chance of winning on a prediction market.
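
As a rough check on the arithmetic in comment 2, here is a sketch of how far a forecast mean would have to move to push a 50% win probability down to 35%. It assumes the vote-share forecast is normal, and the standard deviations below are illustrative guesses, not values from any actual model.

```python
# How much does the forecast mean have to shift to move P(win) from 50% to 35%,
# assuming a normal vote-share forecast? The SDs are illustrative guesses.
from scipy.stats import norm

def shift_needed(p_from, p_to, sd):
    """Points the forecast mean must move to change P(win) from p_from to p_to."""
    return sd * (norm.ppf(p_from) - norm.ppf(p_to))

for sd in (5.0, 10.0):
    d = shift_needed(0.50, 0.35, sd)
    print(f"SD = {sd:>4.1f} points: mean must drop by about {d:.1f} points")

# With an SD of 5 points the mean has to move by roughly 2 points;
# with an SD of 10 points, by roughly 4 points.
```

The exact number depends entirely on the assumed spread, but the direction of the point holds: with wide uncertainty, it takes a sizable move in the expected vote to change the win probability by 15 points.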
