Inference = data + model

A recent article on global warming reminded me of the difficulty of letting the data speak. William Nordhaus shows the following graph:

And then he writes:

One of the reasons that drawing conclusions on temperature trends is tricky is that the historical temperature series is highly volatile, as can be seen in the figure. The presence of short-term volatility requires looking at long-term trends. A useful analogy is the stock market. Suppose an analyst says that because real stock prices have declined over the last decade (which is true), it follows that there is no upward trend. Here again, an examination of the long-term data would quickly show this to be incorrect. The last decade of temperature and stock market data is not representative of the longer-term trends.

The finding that global temperatures are rising over the last century-plus is one of the most robust findings of climate science and statistics.

I see what he’s saying, but first, I don’t find the stock-market analogy to be useful at all—he’s just restating his claim in a different arena, one in which he does not have the laws of physics on his side. Second, the debate over this particular claim is not about what was happening over the last century, it’s about what’s been happening over the past ten years or so.

The (uncomfortable, perhaps) take-home message from the above graph is that it is consistent with a continuing rise in temperature, and it’s also consistent with a leveling-off since the year 2000.
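As a small illustration of that ambiguity, here is a minimal sketch (in Python, with a simulated series and made-up numbers, not the actual temperature record) fitting both readings to the same noisy data: a single continuing trend, and a trend that flattens after 2000.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated "temperature anomaly" series: a steady warming trend plus
# yearly noise. Illustrative numbers only, not the instrumental record.
years = np.arange(1900, 2012)
anomaly = 0.007 * (years - 1900) + rng.normal(0, 0.1, years.size)

# Model A: a single linear trend over the whole series.
X_rise = np.column_stack([np.ones_like(years), years - 1900])
# Model B: a linear rise up to 2000, flat afterwards.
X_flat = np.column_stack([np.ones_like(years), np.minimum(years, 2000) - 1900])

for name, X in [("continuing rise", X_rise), ("flat after 2000", X_flat)]:
    _, rss, _, _ = np.linalg.lstsq(X, anomaly, rcond=None)
    print(f"{name:16s} residual sum of squares: {rss[0]:.3f}")
# In simulations like this the two residual sums of squares come out very
# close: a noisy decade at the end of the series barely moves the fit.
```

That is the sense in which the same picture supports both stories.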

Let me tell you a story. Nearly 25 years ago, Gary King and I came up with an improved estimate of the incumbency advantage in U.S. congressional elections. Here’s the graph summarizing our results:

OK, so what was happening? Incumbency advantage was around 1 percentage point for the first half of the twentieth century, then it steadily rose, with no end in sight as of 1988. Or maybe it rose up until 1966 and then stayed steady after that. Or maybe it rose until 1984 and then flattened out. It’s the climate change story all over again, but this time with a lot less data and no physics to help us out.

Fortunately for our discussion, I returned to estimate the incumbency advantage a few years later, using a better statistical model and more data, going all the way to the year 2000. Here’s what Zaiying Huang and I found (remember, in any given year the estimates will be different, as we use a bigger model and more data):

Hey—it looks like the effect really did peak in the mid-80s, and indeed the declining trend appears to have continued. (I didn’t redo the full analysis but a quick regression gave an estimate of 6 percentage points in 2010.)

Incumbency advantage doesn’t have anything to do with climate change—but the example illustrates the general difficulty of inferring trends from data alone.

To get back to the climate series: the data shown above are consistent with a continuing rise or a flattening of the curve. At this point you have to go to the theory. I think this is how Steven Levitt and Stephen Dubner ended up with the following three statements:

1. “Over the past several years, the average global temperature during that time has in fact decreased.”

2. “Levitt does not believe there is a cooling trend.”

3. Future trends are “virtually assuring us of about 30 years of global cooling.”

These positions are difficult to reconcile as stated, but they can work if you interpret them more vaguely. The time trend is consistent with an increase, no trend, or even a future decrease. That’s why you need to bring in the albedo, if you will. Everybody knows this—scientists don’t study these climate time series in isolation—but then there can be a tendency to oversimplify as in Nordhaus’s discussion quoted above, which implies that the graph tells the story all by itself. The graph is consistent with the story, which counts for something.

P.S. If you’re interested in the incumbency advantage in itself and not merely as an example of a difficult time series, see here for further discussion.

23 thoughts on “Inference = data + model”

  1. Perhaps I’m misunderstanding something, but I don’t think the chart supports the Nordhaus claim. Nordhaus seems to be saying that the level of temperature is highly volatile over time, but the chart shows a seemingly volatile series of changes over time.

    This would be similar to saying that prices are volatile because returns are volatile. It seems to me that levels can be fairly stable even with volatile changes, if the magnitude of those changes is small relative to the level.

    My interpretation of that graph is not that temperature levels are unstable, but rather that they’ve been increasing steadily over the past several decades, and that the rate of change has leveled off since 2000 (although the level is clearly still increasing).

    But maybe I’m misunderstanding.

  2. Another problem here for the CO2-based global warming story is that the upward trend seems to start in about 1910, before CO2 emissions were enough to have a substantial effect. So if there’s a CO2-based trend, it’s overlaid with other variation. This just reinforces your point that the time series alone isn’t enough to tell you much.

    • Radford, there were not insignificant additions to the CO2 stock in the atmosphere even before 1900…that’s what the Industrial Revolution was all about, after all. Andrew Carnegie was making a lot of steel before 1910. There should also be a lag between the introduction of the gas and seeing it in the temperature record (temperature change being in effect an integral of excess heat retained over a period of time).

      Loren Cobb did a phenomenological study of this going back to 1850 a few years ago. He considered solar activity and fossil fuel use (lagged 25 years) in his model, which explained approximately 80% of the variance. Not a physical model, but useful nonetheless.

      http://tqe.quaker.org/2007/TQE158-EN-GlobalWarming.html

      • The article you link to makes no sense. It regresses temperature on (lagged) fossil fuel consumption, not cumulative CO2 emissions from both current and past consumption, which is what would be physically meaningful. All it shows is that if you have two quantities that both have generally upward trends, they will be correlated. On this basis, you could probably equally well predict global temperature from the log of the number of computers in the world, or the number of papers on Bayesian statistics published each year.
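        A quick sketch of that last point, using two purely synthetic upward-trending series that have nothing to do with one another; the shared trend alone produces the correlation:

        ```python
        import numpy as np

        rng = np.random.default_rng(1)
        t = np.arange(150)

        # Two unrelated series that both happen to trend upward; stand-ins
        # for, say, fossil fuel use and the number of computers in the world.
        series_a = 0.5 * t + rng.normal(0, 5, t.size)
        series_b = 3.0 * np.sqrt(t + 1) + rng.normal(0, 2, t.size)

        print("correlation:", np.corrcoef(series_a, series_b)[0, 1])
        # Typically above 0.9 with these settings, despite there being no
        # causal link at all: a shared trend is enough.
        ```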

    • Yes, there is forced and unforced variability in temperatures other than what is attributable to CO2. No, this is not a “problem for the CO2-based global warming story”, insofar as it’s possible to reproduce the early 20th century warming using climate models. (This isn’t causal proof of anything, but it also demonstrates that the 20th century temperature record doesn’t contradict greenhouse physics.)

    • According to the graph in this article, CO2 levels were pretty much constant from 1850 to 1950, and then began their recent rise. Note that this is an article at a site billing itself as “Getting skeptical about global warming skepticism”, so I take it to represent the “consensus” view. Of course, if there is a lag, the effect of increased CO2 shouldn’t be seen until even later than 1950.

      Now, it’s possible that if you account for other sources of variability, such as volcanic eruptions, you can see a nice connection between temperature and CO2. Or maybe not – I don’t think we’re going to resolve that question in blog comments. Note, however, that the nice matches between models and past temperatures are based on tuning the (possibly large, but largely unknown) effect of aerosols to make the fit better.

      • Tuning the industrial aerosol forcing, or lack thereof, does not explain the models’ ability to match the early 20th century warming (which had nothing to do with aerosol cooling). It does have to do with the natural forcings acting at the time.

        • I think you’re agreeing with me, and Andrew. The whole point (see the post, and my first comment) is that looking at the time series itself isn’t enough to tell you much. The first cut at interpreting it would be to look at the period from about 1950 on, where CO2 increased, and see if there is a consistent upward trend there, that is different from before. There isn’t. You need to do a lot more than just point to the time series to show anything. But as I said above, I don’t think the adequacy of the models attempting to do this is going to be resolved in blog comments.

        • I agree, the temperature time series alone can’t tell you anything about attribution. You also have to know something about the forcings (and about the transfer function relating the two, i.e. physics). My point was just that if you are looking at forcings, you can’t look at CO2 only and say whether or not there is a “problem with the CO2-based global warming story”. You have to at least look at all the relevant forcings.

  3. I like your characterization of inference as data + model. I have asserted, without much evidence, that as your data get worse (by which I mean contaminated with variation unrelated to the phenomenon being measured, or simply fewer data points), the choice of model becomes much more important. I see a gradient from data mining (lots and lots of data, little or no model structure) to theoretical process models fitted with Maximum Likelihood or Bayesian MCMC (able to get parameter estimates with little data; more data always better!).
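    A toy version of that gradient, using the textbook conjugate-normal setup (all numbers made up): with a handful of noisy observations the estimate is dominated by the prior, i.e. by the model; with plenty of data the sample mean takes over.

    ```python
    import numpy as np

    def posterior_mean(y, prior_mean, prior_sd, noise_sd):
        """Posterior mean of a normal mean: known noise sd, normal prior."""
        precision = 1 / prior_sd**2 + len(y) / noise_sd**2
        return (prior_mean / prior_sd**2 + y.sum() / noise_sd**2) / precision

    rng = np.random.default_rng(2)
    true_mean, noise_sd = 3.0, 10.0    # very noisy measurements
    prior_mean, prior_sd = 0.0, 1.0    # informative (and somewhat wrong) prior

    for n in (5, 50, 5000):
        y = rng.normal(true_mean, noise_sd, n)
        est = posterior_mean(y, prior_mean, prior_sd, noise_sd)
        print(f"n={n:4d}  posterior mean {est:5.2f}  sample mean {y.mean():5.2f}")
    # With n=5 the estimate sits near the prior (the model does the work);
    # with n=5000 it sits near the sample mean (the data do the work).
    ```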

  4. This might be a terrible question, but is there a book that essentially just goes through identifying-variation techniques for various research designs? It might be available, but I’m not sure. Mostly Harmless kind of does this, but I think we can do better.

  5. Or:

    Explanation = Plausible Story + Supporting Data

    The correlation between 24-hour cycles and the sun rising in the east is remarkably convincing in itself.

    But I’m only 100% certain that it will happen again tomorrow because I understand and believe the story about gravity, momentum, etc.

  6. While Nordhaus doesn’t have physics on his side, he does have efficient markets on his side in the stock market analogy. The point is that you can’t get an anticipated trend in a stock market because traders will have already priced all anticipated trends into the stock price. And the history of using technical analysis (i.e., pure reads of the charts to predict the future) is terrible, and there’s an enormous literature that demonstrates that it’s terrible, at virtually any time scale you like. Thus, in the stock market, the model alone implies an inference of no trend, and the data confirm it. (That said, I want to stress that I have oversimplified, before you accuse me of using utility theory or something.)

    Getting back to the physics, what Radford Neal said.

  7. Uh, I know you are trying to tell a story about “the difficulty of letting the data speak for itself”, but I have a hard time seeing how you haven’t embarrassed yourself with this particular graph, top of the page, and this particular claim.

    It is true that the graph is “consistent with a leveling-off since the year 2000”, but wouldn’t a statistician feel obliged to point out that:

    [1] the trend is increasing overall

    [2] the data is so noisy that inside this graph there are many sequential sub-spans of years, of length 12 and less, that have a downward trend, not even to speak of a “leveling-off”

    [3] thus the graph, by itself, even without a model behind it, does not let one say anything confidently about a sub-span of fewer than 10 or 15 years. Perhaps a sub-span of 20 years is long enough, but the argument that a valid inference could be performed on a sub-span of 20 years would have to be closely reasoned – at which point, arguably, some kind of model of smoothness and inertia and inflection would have to be introduced.

    Not a novel point: http://www.skepticalscience.com/going-down-the-up-escalator-part-1.html

    If tallying how many sequential spans of 12 years and less (and 13, 14, 15 too) have a downward trend is not letting the graph speak for itself, without the aid of an interjected model, then I must say I don’t know what “letting the graph speak for itself” means.
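    For what it’s worth, that tally is easy to run on a synthetic series (a steady upward trend of 0.007 per year plus yearly noise of sd 0.1; illustrative numbers, not the actual record):

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    years = np.arange(1900, 2012)
    # Steady upward trend (~0.7 C per century) plus yearly noise of sd 0.1.
    series = 0.007 * (years - 1900) + rng.normal(0, 0.1, years.size)

    def fitted_slope(y):
        return np.polyfit(np.arange(y.size), y, 1)[0]

    for span in (8, 10, 12, 15):
        downs = sum(fitted_slope(series[i:i + span]) < 0
                    for i in range(series.size - span + 1))
        print(f"{span:2d}-year sub-spans with a downward fitted trend: {downs}")
    # Plenty of short windows trend downward even though the underlying
    # trend is up the whole time, which is point [2] above.
    ```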

    I assert: you wanted to make a point about “the difficulty of letting the data speak for itself”, and you began with a lame claim that is floating in the contemporary conversation solely for reasons of sophistry and politics. Poor form.

    I don’t expect you to respond to the totality of this comment with a sympathetic reading – I expect some criticism of my use of quotation marks – and I excuse myself from re-responding to a partial defensive response.

    • Manuel:

      I wrote of the graph that “it is consistent with a continuing rise in temperature, and it’s also consistent with a leveling-off since the year 2000.” Note the first part of my quoted phrase.

  8. Discussion of the “last decade” is meaningless, given the noise level of yearly variation, ENSO gyrations, and human effects like massive Chinese industrialization and sulfate aerosol production. For statistical literacy, tamino (in real life a good time-series guy) runs a fine blog, Open Mind. Climate scientists have long said that one needs 20-30 years to get useful results. See the graph of 5-, 10-, 15-, and 30-year SLOPES. Given yearly noise 10X bigger than the slope, it is *necessary* that there be 5-10 year periods that seem flat, if you pick them carefully. If you compute them for every year, as that graph does, you can see the extent of the cherry-picking.
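    The 5/10/15/30-year slope picture is easy to reproduce on a synthetic series with a steady trend of 0.007 per year and yearly noise of sd 0.1, i.e. noise roughly 10X the slope (made-up numbers, not the real record):

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    years = np.arange(1900, 2012)
    series = 0.007 * (years - 1900) + rng.normal(0, 0.1, years.size)

    for window in (5, 10, 15, 30):
        slopes = [np.polyfit(np.arange(window), series[i:i + window], 1)[0]
                  for i in range(years.size - window + 1)]
        print(f"{window:2d}-year slopes: min {min(slopes):+.3f}, max {max(slopes):+.3f} per year")
    # Five- and ten-year slopes swing from clearly negative to far above the
    # true trend; thirty-year slopes cluster tightly around it. Carefully
    # chosen short windows will always "show" flat or cooling stretches.
    ```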

    A brief history of human impact on climate (History didn’t start in 1910.):
    1) For the last few million years, CO2 has been low enough for Milankovitch cycles to drive the asymmetric sawtooth temperature curve (fast up, slow down). Up is fast (~10K years, fast for geologists) due to ocean outgassing of CO2, CO2+WV amplification, and the fact that ice-albedo feedback works as NH snow/ice melts, with solar insolation at 60 deg N important. Down is slower because it takes longer for CO2 to be reabsorbed, and although ice-albedo feedback also works going down, ice buildup requires both lower temperatures and more snow, of which there tends to be less when it’s cold, since air is drier.

    2) In a normal interglacial, we would be in the long, slow descent into the depths of the next ice age, but we aren’t.

    3) [Ruddiman, for whom evidence has been piling up]: About 8,000 years ago, humans started altering climate by cutting down trees and making other land-use changes, and about 5,000 years ago, rice cultivation and animal husbandry got going. The results of both are clearly visible in the CO2 and CH4 records, which are simply unlike any of the past interglacials for which we have ice-core data.

    4) Since then, Milankovitch would predict slow declines in temperature and CO2, but human effects essentially nullified that, acting as a thermostat. Quite likely human plagues (and consequent reforestation) lowered CO2 after the Roman Empire. Later, population recovered, CO2 rose, and temperatures were somewhat warmer during the Medieval Warm Period, although not as warm as now.

    5) Then CO2 drops caused by other plagues created temporary dips, especially noticeable in regions near the snowline, where snow-albedo feedback works, e.g., Europe.

    6) From 1525 to 1600 AD, the sharpest drop of CO2 in 1000 years (or much more) was likely caused by a 50M-person die-off in the Americas and massive reforestation. Add in volcanoes and, later, the Maunder Minimum, and we got the Little Ice Age. Then the Industrial Revolution got going ~1800, CO2 has risen since, and so have temperatures, although with the usual noise. Also, solar insolation rose for a while, although not during the last 30 years or so.
    Global warming has “stopped” many times since.

    A good resource is Skeptical Science, which offers a numbered list of standard, long-refuted climate anti-science memes that are repeated endlessly. They are not worth debunking, just referencing.

    The following might be relevant:
    #4 It’s cooling.
    #35 It warmed before 1940 when CO2 was low
    #43 There’s no correlation between CO2 and temperature

    As Andrew noted, the stock market is a bad analogy, because physics matters. Physics includes conservation laws and quantum mechanics, both of which must be rejected if one wants to reject AGW. It is especially ironic to find people using computers and fiber-optic networks to reject the physics that lets them be built.

  9. Your election data (figure 3) looks basically like a step function. There was a sudden, large increase in the incumbency advantage in 1966. The trend was relatively flat before that and after that – there may have been some pattern, but it’s too small to show up clearly among the noise and is relatively small compared to the 1966 jump.

    The global temperature data would look roughly like that if you started in 1990. From 1990-1997 there was a lot of year-to-year noise but no clear trend, then there was a huge jump up in temperatures in 1998, and since then (or at least since 2001) any trend is small compared to the noise or the 1998 jump.

    But the global temperature data don’t start in 1990. There has been a clear upward trend since the early 1960s, which makes the “sudden jump in 1998 and then flattening off” story much more complex. You could say: there was a steady rise in temperature from the 1960s through the 1990s (with some year-to-year noise), then a sudden acceleration of that increase in the late 1990s, and then a flattening off. But that is an odd pattern for data to have (with low prior probability), and not one that we’d expect based on what we know about the causes of global temperatures. Or you could say: there has been an upwards step function since the 1960s, with 4 flat parts and 3 sudden jumps, and we are currently on the 4th flat step (1st step was roughly 1960s-70s, 2nd was 1980s, 3rd was 1990s, and 4th is 2000s). But that’s another complex pattern, and it also seems to suggest that temperatures will continue to rise (unless there is some reason to suspect that we’re at the top step). “Roughly linear increase with some noise” is a much simpler model of what happened to global temperatures from the 1960s to present (with much higher prior probability), it fits the data just as well, and it fits with what we know about the causes of global temperatures.

  10. There are two characteristics of trend in a varying time series: 1. the behavior of the peaks and 2. the behavior of the troughs. In the temperature series, the peaks over the last decade appear to be flat (hard to say from the graph whether 2000, 2006 or 2010 is the max), but the troughs are still rising sharply. 2011 is low, but higher than 2009, which is higher than 2002, etc.

    I would describe the last 10 years as “some leveling” but not as “leveled”.

    Of note, there appears to have been a similar period of “leveling” from the 1950s to the early 1970s, with flattening of peaks but rising troughs, followed by a sharp and clear rise in trend afterward until 2000.

  11. When you are interested in the long-term trends of a time series and the data have short-period cycles, it is meaningless to draw conclusions from short subsets of the data. Isn’t this the better message?

    In other words, shouldn’t your confidence in conclusions based on the 120 year upward trend be higher (when trying to predict outcomes 100 years ahead) than any conclusions based on the latest 10 year trend?
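    One way to put numbers on that: under the (generous) assumption of independent yearly noise, the standard error of a fitted linear trend shrinks rapidly with the length of the record. A minimal sketch, with illustrative values:

    ```python
    import numpy as np

    # Standard error of an OLS-fitted trend vs. record length, assuming
    # independent yearly noise with sd 0.1. (Real climate noise is
    # autocorrelated, which makes short windows even less informative.)
    noise_sd = 0.1
    for n_years in (10, 30, 120):
        x = np.arange(n_years)
        se_slope = noise_sd / np.sqrt(np.sum((x - x.mean()) ** 2))
        print(f"{n_years:3d} years: se of fitted slope ~ {se_slope:.4f} per year")
    # A 10-year window gives slope uncertainty comparable to a plausible
    # underlying trend (~0.007 per year); 120 years pins the slope down to a
    # few percent of that.
    ```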

