“…it has become standard in climate science that data in contradiction to alarmism is inevitably ‘corrected’ to bring it closer to alarming models. None of us would argue that this data is perfect, and the corrections are often plausible. What is implausible is that the ‘corrections’ should always bring the data closer to models.” – Richard Lindzen, MIT Professor of Meteorology
Back in 2002, researchers at NASA published a paper entitled “Evidence for large decadal variability in the tropical mean radiative energy budget” (Wielicki et al., Science, 295:841-844, 2002). The paper reported data from a satellite that measures solar radiation headed toward earth, along with the reflected and radiated energy headed away from earth, and thereby measures the difference between incident and outgoing energy. The data reported in the paper showed that outgoing energy climbed measurably in the late 1990s, contradicting the predictions of climate models that assume positive or near-zero “climate feedback.”
One of the people who wasn’t surprised by these results was Richard Lindzen, one of the best-credentialed of the anthropogenic global warming (AGW) skeptics. Lindzen has long believed that the climate is much less sensitive to greenhouse gases than most researchers assume. Lindzen and a colleague analyzed the data, in conjunction with satellite data showing that the cooling could not be attributed to decreased solar radiation, and published a paper (Chou and Lindzen, Comments on “Examination of the Decadal Tropical Mean ERBS Nonscanner Radiation Data for the Iris Hypothesis”, J. Climate, 18:2123-2127, 2005) demonstrating that the results imply a strong “negative feedback” in the climate. That is, the greenhouse effect from carbon dioxide — an effect no credible scientist denies, including Lindzen — is almost entirely counteracted by some unknown effect. As Lindzen said in a guest article in early 2009 on the AGW skeptic blog Watts Up With That?, “the results imply a strong negative feedback regardless of what one attributes this to.”
Around the time Chou and Lindzen were working on their paper demonstrating negative feedback, a NASA scientist named Josh Willis was analyzing data from a large array of autonomous underwater robots (called Argo) that measure ocean temperatures. In 2006, Willis reported that the oceans worldwide had cooled quite a bit from 2003 to 2005. AGW skeptics naturally claimed that this report, too, showed that AGW is overstated: if the oceans are cooling when they’re “supposed” to be warming, obviously there are large negative feedbacks that are not included in the models.
So, as of early 2006, both satellite data and ocean temperature data seemed to indicate that the oceans were cooling. But then, aha, the “corrections” began.
First, the authors of the Science paper about the satellite data published a new paper, “Reexamination of the observed decadal variability of the earth radiation budget using altitude-corrected ERBE/ERBS nonscanner WFOV data” (Wong et al., Journal of Climate 19:4028-4040, 2006). This paper corrected for a previously unrecognized (or perhaps just unaccounted-for) effect in the satellite data: the satellite moved about 20 km closer to earth during the 1990s. The main measurement instrument “sees” the entire earth plus a ring of space around it. As the satellite moved closer to the earth, it intercepted more of the earth’s radiation and saw less of the space around it, and thus recorded more outgoing radiation…not because radiation from the earth had increased, but because the satellite was intercepting more of it. After correcting for this effect, there was no increase in outgoing energy in the late 1990s. The apparent “negative feedback” was an artifact of bad data.
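The geometry behind this artifact is easy to sketch. For a wide-field-of-view sensor above an (idealized) uniformly emitting spherical earth, the flux it intercepts scales as (R / (R + h))², so lowering the orbit raises the reading even if the earth’s emission is unchanged. The altitudes below are illustrative round numbers, not the actual ERBS orbit history; only the ~20 km descent comes from the text.

```python
import math

R_EARTH = 6371.0  # mean Earth radius, km

def earth_flux_fraction(altitude_km):
    """Fraction of an emitting sphere's flux intercepted by a
    wide-field-of-view sensor at the given altitude. For a uniformly
    emitting sphere this scales as (R / (R + h))^2, the squared sine
    of the Earth-disk half-angle."""
    return (R_EARTH / (R_EARTH + altitude_km)) ** 2

# Illustrative altitudes: the satellite descended roughly 20 km;
# 600 km -> 580 km is an assumed starting point, not the real orbit.
before = earth_flux_fraction(600.0)
after = earth_flux_fraction(580.0)
print(f"apparent increase in outgoing flux: {100 * (after / before - 1):.2f}%")
```

Even this crude model gives an apparent increase of around half a percent, the kind of spurious trend that could masquerade as a real change in the earth’s energy budget.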
And as for the cooling of the oceans, well, there was a “correction” for that, too. It started with a close look at the ocean temperature data, which were not only hard to explain, but seemed to contradict other data sources. It’s true that the huge array of floating robots was intended to be the best single source of worldwide ocean temperature data, but it’s not the only source, and other sources didn’t see the cooling. As long as the satellite data suggested that cooling could have occurred, it was possible to believe, barely, that it had. But once the satellite data were corrected and showed that the earth was gaining, not losing, heat, the ocean temperature data looked increasingly wrong.

Eventually the original NASA investigator, Willis, had to agree that something was wrong: as data continued to pour in, month after month, some parts of the oceans, especially in the Atlantic, were cooling very quickly. So quickly, in fact, that it seemed physically impossible to account for the missing heat. By early 2007, Willis was convinced: his data were wrong, and the ocean cooling he had reported several years earlier may not have occurred at all. You can read the story on a NASA website; it’s pretty interesting. In short, although most of the thermometers on the 3000 undersea robots were reporting accurate data, a small number were reporting temperatures far too low…so low that even the limited number of such measurements was enough to substantially underestimate the average temperature. What’s more, some of the older measurements, from before the start of Argo, were found to be too high. When the older measurements were adjusted downward and the more recent measurements were adjusted upward, the result was an ocean warming trend that was consistent with the satellite measurements…which, remember, were themselves adjusted in a way that removed a cooling trend that had initially been reported.
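It’s worth seeing just how little bad data it takes to shift a network-wide average. The sketch below is purely illustrative: the fraction of bad floats, the size of their cold bias, and the “true” temperature are all assumptions, not numbers from the Argo analysis; only the 3000-float figure comes from the text.

```python
import random

random.seed(0)

# Illustrative numbers: 3000 floats (from the text), but the 2% bad
# fraction, the -5 C cold bias, and the 15 C "true" mean are assumed.
N_FLOATS = 3000
TRUE_TEMP = 15.0   # degrees C, assumed true mean
COLD_BIAS = -5.0   # degrees C, assumed error on the bad floats
N_BAD = 60         # 2% of the array, an assumption

# Good floats scatter tightly around the true value...
readings = [TRUE_TEMP + random.gauss(0, 0.5) for _ in range(N_FLOATS)]
# ...but a handful read far too cold.
for i in range(N_BAD):
    readings[i] += COLD_BIAS

mean = sum(readings) / len(readings)
print(f"true mean: {TRUE_TEMP:.2f} C, network mean: {mean:.2f} C")
```

With these assumptions, 2% of floats reading 5 °C cold drags the network average down by about 0.1 °C, which is large compared to the real year-to-year changes in ocean heat content that the array is trying to detect.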
Lindzen, the AGW skeptic, was apparently unaware of these corrections/adjustments when he wrote his article on Watts Up With That. In that article, he repeated the key result from his 2005 paper, saying “The earth’s climate (in contrast to the climate in current climate GCMs [General Circulation Models, i.e. computer models of the earth's climate]) is dominated by a strong net negative feedback.” But a month after making that post, Lindzen sent a letter to Watts Up With That, acknowledging the corrections to the energy data and agreeing that they would change his results. In that letter, he made the statement that leads off this blog entry, including this: “What is implausible is that the ‘corrections’ should always bring the data closer to models.” Lindzen doesn’t claim that any particular data adjustment or correction is incorrect, and in fact, he seems to agree that initial data are sometimes wrong and that corrections are therefore necessary. But he suggests that if those corrections are always in the same direction — always supporting “alarming” models — then something is fishy.
There are several reasons “corrections” to data can tend to make the data agree better with models, not just in climate science but in any field. Here are two of the most common:
1. When you see data that disagree with your model, you take a close look for ways in which the data could be wrong — especially in the direction that makes them fit poorly. If there are several adjustments that could or should be applied (like correcting or adjusting a measurement for instrument drift, pressure, frequency response, etc.) you might only think of, or only apply, the ones that act in a favorable direction. If you work this way, your adjustments will always lead to data that are better fit by your model.
2. When you see data that disagree with your model, you find ways to reject the data. “We had trouble with the instrument that day,” “Oh, I remember thinking at the time that that experimental sample looked funny,” “that patient shouldn’t really have been in the study anyway, they snuck through a loophole in the selection protocol.” If you discard poorly-fit data this way, but you don’t apply the same standards to the data that are fit well, then your adjustments will always lead to better agreement between data and model.
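Effect 2 is easy to demonstrate numerically. In this hypothetical sketch (the model prediction, the true data distribution, and the rejection threshold are all invented for illustration), simply discarding the points that disagree most with the model makes the surviving data look far more consistent with it than the full dataset really is.

```python
import random

random.seed(1)

# Hypothetical setup: the "model" predicts 10, but the data actually
# scatter around 12. We then reject any point more than 2 units from
# the prediction -- exactly the selective discarding of effect 2.
MODEL_PREDICTION = 10.0
data = [random.gauss(12.0, 2.0) for _ in range(10_000)]

kept = [x for x in data if abs(x - MODEL_PREDICTION) <= 2.0]

full_mean = sum(data) / len(data)
kept_mean = sum(kept) / len(kept)
print(f"full-data mean: {full_mean:.2f}, "
      f"after selective rejection: {kept_mean:.2f}")
```

The full data average near 12, but the “cleaned” data average much closer to the model’s prediction of 10, even though nothing about the model improved and nothing about the discarded points was actually wrong.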
In the real world, effects like these occur all the time. Item 2 is perhaps more widely recognized — accusations of “cherry-picking” the data are common in many areas of science — but item 1 occurs too. Presumably Lindzen’s comment about the implausibility of corrections always leading to better fit with “alarming” models indicates his conviction that either or both of the effects above are the explanation. Actually, by agreeing that adjustments are necessary but saying it’s implausible that the adjustments should always lead to better model fit, he’s implicitly plumping for reason 1.
But there is a third reason adjustments can systematically lead to data that are better fit by models:
3. The models are close to being correct. In this case, gross discrepancies between data and models will indicate problems with the data. Fixing those problems will lead to data that are in better agreement with the models.
For instance, I was once the teaching assistant for a physics lab class in which one of the experiments involved timing a small metal ball as it fell from different heights, and recording and plotting the results. Even with this simple exercise, several things could go wrong, including a stuck switch that the ball was supposed to trigger at the bottom, or a student mis-recording a time. None of the data from that lab could possibly have convinced me that there was a problem with using d = 1/2 * g * t^2 to calculate the distance for that experiment. (The balls were very dense, so air resistance was low.)
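The expected times from that formula are trivial to tabulate, which is exactly why anomalous data points stood out as instrument or recording errors rather than problems with the physics. The drop heights below are illustrative, not the actual lab setup.

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def fall_time(height_m):
    """Invert d = 1/2 * g * t^2 to get the fall time, in seconds,
    for a drop from the given height (air resistance neglected)."""
    return math.sqrt(2 * height_m / G)

# Illustrative drop heights, not the actual lab configuration.
for h in (0.5, 1.0, 1.5, 2.0):
    print(f"{h:.1f} m -> {fall_time(h):.3f} s")
```

A timer reading far from these values (say, a stuck switch reporting a full second for a half-meter drop) points at the apparatus, not at the model.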
Except in rare cases like that simple physics lab experiment, it’s a mistake to assume that when data and models disagree, it’s the data that are the problem. In fact, when the data are simple and the models are complicated, as is often the case, it’s almost always the models that are wrong. But when the data are complicated — by which I mean, there are many different effects that must be accounted for in order to interpret the raw data as a measurement of a parameter of interest — then it’s not necessarily a surprise to find problems with the data, and to find that when those problems are fixed the result is better agreement with a model.
It’s good to have a healthy suspicion of “adjustments” or “corrections” to data, especially if (a) those corrections are made only after disagreement with a model has been noticed, (b) corrections are made to completely different datasets (as with the energy balance satellite data and the ocean temperature data from the robots), and (c) the corrections change the data to fit the model rather than the other way around. Be suspicious. Give the data and the corrections extra scrutiny, absolutely. Be on the lookout for biases introduced by effects 1 and 2 above, because those really do occur.
But a healthy suspicion can be taken too far. Researchers can’t be expected to keep using data that are known to have systematic errors, so one has to allow corrections to be made. And if, in fact, there is a model that correctly captures the behavior being measured, those corrections are going to lead to better agreement with the model.
In the cases of the energy balance measurements and the ocean temperature measurements discussed above, the corrections are necessary. And if you make the corrections, you find that the oceans did not cool, and the energy balance of the earth did not shift in a way that implies strong “negative feedback.”
A few notes:
1. People who don’t think anthropogenic global warming is occurring or is of practical significance take umbrage at being called “deniers” — insulting, dismissive, yada yada — but apparently some or many of them are happy to use their own insulting or dismissive terms. Lindzen refers to estimates of moderate climate sensitivity as “alarmism.” I find this irritating.
2. For an example of the complexities of calibrating satellite data, here’s an interesting short write-up that I came across while preparing this post.