A whole fleet of gremlins: Looking more carefully at Richard Tol’s twice-corrected paper, “The Economic Effects of Climate Change”

We had a discussion the other day of a paper, “The Economic Effects of Climate Change,” by economist Richard Tol.

The paper came to my attention after I saw a notice from Adam Marcus that it was recently revised because of data errors. But after looking at the paper more carefully, I see a bunch of other problems that, to me, make the whole analysis close to useless as it stands.

I think this is worth discussing because the paper has been somewhat influential (so far cited 328 times, according to Google Scholar) and has even been cited in the popular press as evidence that “Climate change has done more good than harm so far and is likely to continue doing so for most of this century . . . There are many likely effects of climate change: positive and negative, economic and ecological, humanitarian and financial. And if you aggregate them all, the overall effect is positive today — and likely to stay positive until around 2080. That was the conclusion of Professor Richard Tol of Sussex University after he reviewed 14 different studies of the effects of future climate trends.” Once the data errors were corrected, the above quote turned out to be wrong: of the studies cited by Tol, all but one projected negative or essentially zero economic effects of climate change, and the only positive estimate came from an earlier paper by Tol himself. So there was clearly no consensus of a positive effect, although the science writer could be excused for thinking there was, based on the earlier published paper that had the errors.

Tol himself has written different things on climate change. In his 2009 paper he wrote of “considerable uncertainty about the economic impact of climate change . . . negative surprises are more likely than positive ones. . . . The policy implication is that reduction of greenhouse gas emissions should err on the ambitious side.” Then in 2014 he wrote that the revised estimate based on the new data “is relevant because the benefits of climate policy are correspondingly revised downwards.”

So these matters are not trivial, at least according to Tol and the above-linked journalist. Let’s try to track down what’s happening from a statistical perspective.

Problems with the data

Tol’s paper is a meta-analysis in which he combines several published projections of the economic effects of global warming, in order to produce some sort of consensus estimate. In the years after publication of the article, several different people pointed out data errors: mischaracterizations of the projections from several of the papers in the meta-analysis. In some of the estimates the signs were flipped, which looks like a typo, and in another case it appears that Tol had misread the paper. There were only 14 points in the original data analysis, so this is a disturbingly high error rate. Perhaps even more disturbingly, Tol’s correction notice was itself re-corrected after an error was noted in the correction:

[Screenshot of the re-corrected correction notice]

Tol attributed the errors to “gremlins,” but I’m guessing that they happened when he was typing data into a file. (They couldn’t simply be errors that were introduced in the journal’s editing or production process because some of them were entered into Tol’s graphs and analyses, not just his data table.) Bob Ward provides a convenient list of errors and data sources here.

And, after this, one more thing. Rachael Jonassen noticed “a subtle but meaningful difference between the labels of the x-axis in the original Figure 1 of Tol (2009) and the updated Figures 1 and 2”:

In the original Figure 1, the x-axis is labelled temperature change relative to ‘today.’ In the new Figures, the x-axis is labelled as temperature change relative to pre-industrial temperatures. The term ‘pre-industrial’ derives from climate scientists’ interest in anthropogenic influences on CO2 levels, and they usually assign a date of 1750 as the last time natural CO2 levels were observed. A temperature change of 0 relative to pre-industrial temperatures existed in 1750. On the newly labelled x-axis, we are ‘today’ at +0.8C.

This would seem to make a hash of everything, if even the uncorrected points are all at the wrong place on the x-axis. Jonassen continues:

It would be difficult to imagine economists interested in (or reporting) the effect of climate change on GDP in comparison to the GDP at pre-industrial times, temperatures, and CO2 levels. More likely the economic analyses were performed relative to the GDP of ‘today’ at the time of each publication (e.g. at +0.8C relative to pre-industrial levels in 2014) but calibrated with climate models that expressed results relative to 1750 levels (usually taken at 285ppm). . . .

Due to non-linearities and lags in the earth system, a +1C change relative to 1750 has a different impact than a +1C change relative to today’s temperature (already at +0.8C relative to 1750). Presumably, economists calibrate their analyses using available climate model results. . . .

Given the shift in labelling of the x-axes, with no adjustments in the position of the data points between the 2009 and current versions of the plots, it is not clear that the author considered the importance of these distinctions so caution in interpreting the relation of the underlying data to published plots is in order.

Here’s what she’s talking about. From the caption to Figure 1 of the 2009 paper:

[Screenshot of the caption to Figure 1 of the 2009 paper]

And from the corresponding figure in the update:

[Screenshot of the caption to the corresponding figure in the update]

This seems like a huge difference, changing the interpretation of everything, but it’s not discussed at all in the correction note.

One scary thing about a paper with so many errors, not publicly corrected until five years after publication, is that it makes us wonder how many errors are sitting in various other articles. As we discussed in the context of the notorious Reinhart and Rogoff paper, we find out about these errors in influential papers because others go to the trouble of checking—but even in that case, it was years before the errors came to light.

This sort of thing is one motivation for the movement toward more openness and transparency in scientific publication. There’s no reason to think that Tol, Reinhart, and Rogoff made these errors on purpose. They just weren’t careful, which is too bad, given that these were highly-publicized analyses that had important practical implications. At some point, I think an “I’m sorry” would be appropriate, if nothing else to acknowledge all the time people have wasted tracking all these problems down. But really I see it as a larger problem with the scientific communication system, the idea that once something is published in a journal, it is presumed to be true and it takes a lot of work to dislodge even gross errors.

Problems with the analysis

For convenience I’ll repeat some things that I wrote on the sister blog the other day. The short story is that the problems with Tol’s analysis go beyond a simple miscoding of some data points.

One problem, which Tol didn’t note, was the role of the flipped minus signs in interpreting the estimates that were not garbled. In particular, his estimate of a big positive impact at 1 degree is a clear outlier in his analysis. Did he look into that in the original paper? I took a look, and here’s what he wrote, back in 2009:

Given that the studies in Table 1 use different methods, it is striking that the estimates are in broad agreement on a number of points—indeed, the uncertainty analysis displayed in Figure 1 reveals that no estimate is an obvious outlier.

In this way, a misclassification of a couple of points can affect the interpretation of a third point.

Thus, there was possibly a cascading effect: Tol’s existing estimate of +2.3% made him receptive to the idea that other researchers could estimate large positive economic effects from global warming, and then once he made the mistake and flipped some signs from positive to negative, this made his own +2.3% estimate not stand out so much.

Tol also wrote:

The fitted line in Figure 1 suggests that the turning point in terms of economic benefits occurs at about 1.1 degrees Celsius warming (with a standard deviation of 0.7 degrees Celsius).

This turning point has disappeared in his new Figure 2, so, again, I do think the new analysis has changed his conclusions in a real way.

The other big problem is that when Tol wrote, “The assessment of the impacts of profound climate change has been revised: We are now less pessimistic than we used to be,” and “the benefits of climate policy are correspondingly revised downwards,” these claims are entirely based on (a) the feature of the quadratic that when it goes up and then down, it has to go down even faster, and (b) his extrapolation of his original model (with data points only going up to 3 degrees) to 5.5 degrees.

Tol writes that the fit of his quadratic model “is destroyed by the new observations: -11.2% for 3.2K and -4.6% for 5.4K. The former suggests a non-linearity that is much stronger than second degree; the latter suggests linearity.” (I assume the -11.2% he refers to is the -11.5% in his paper.) But in writing this, he shows an incredible faith in his model. And it’s a strange model: it’s not a model of the impact of warming, it’s a model of other people’s estimates of the impact of warming. To suggest that one paper’s estimate of -11.2 provides evidence of a strong nonlinearity . . . I don’t buy it.
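To see how much work the quadratic functional form is doing here, consider a small numerical sketch. The numbers below are made up for illustration, not the paper’s actual dataset: fit impact = b1*T + b2*T^2 (through the origin, mirroring the quadratic-in-temperature form) to a few points in the 1–3 degree range, then evaluate the same curve at 5.5 degrees.

```python
import numpy as np

# Hypothetical illustration (NOT Tol's actual data): a few impact
# estimates (% of GDP) at warming levels up to 3 degrees C.
temps = np.array([1.0, 2.5, 2.5, 3.0])
impacts = np.array([2.3, -0.4, -1.4, -2.5])

# Fit a quadratic through the origin: impact = b1*T + b2*T^2.
X = np.column_stack([temps, temps**2])
b1, b2 = np.linalg.lstsq(X, impacts, rcond=None)[0]

def fitted(T):
    return b1 * T + b2 * T**2

# Extrapolating the same curve far beyond the data range produces a
# loss much larger than anything in the fitted range -- the "goes
# down even faster" feature of the quadratic, not of the data.
print(round(fitted(3.0), 2))  # within the data range: roughly -2.9
print(round(fitted(5.5), 2))  # extrapolated: roughly -25
```

The point of the toy example is that the dramatic losses at 5.5 degrees come almost entirely from the choice of a quadratic, not from any observation near 5.5 degrees.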

Problems with the model

We had a good discussion of the Tol paper on the blog last week which motivated me to think more about all this. In particular, the problem is not so much with the quadratic functional form as with the conceptual model, the y_i = g(x_i|theta) + epsilon_i model that’s driving the whole thing.

The implied model of Tol’s meta-analysis is that the published studies represent the true curve plus independent random errors with mean 0. I think it would make more sense to consider the different published studies as each defining a curve, and then to go from there. In particular, I’m guessing that the +2.3 and the -11.5 we keep talking about are not evidence for strong nonmonotonicity in the curve but rather represent entirely different models of the underlying process.

In short, I don’t think the analysis can be fixed by just playing with the functional form; I think it needs to be re-thought. You just can’t treat these 14 or 21 data points as if they were independent observations of economic impact at different temperatures. The data being used by Tol come from some number of forecasting models, each of which implies its own curve.
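A toy sketch of this alternative framing (my own construction, with hypothetical numbers, not a proposal from either paper): treat each published study as implying its own damage curve rather than a single point on one shared curve. As a crude stand-in, give study i a one-parameter curve d_i(T) = theta_i * T^2 and back out theta_i from its single reported (temperature, impact) pair; the meta-analysis then pools curve parameters instead of points.

```python
import numpy as np

# Hypothetical (warming in C, impact in % of GDP) pairs, one per study.
studies = [
    (1.0, 2.3), (2.5, -0.4), (2.5, -1.4), (3.0, -2.5), (5.4, -4.6),
]

# Back out each study's curve parameter from d_i(T) = theta_i * T^2.
thetas = np.array([y / T**2 for T, y in studies])

# Pooling on curve parameters makes the disagreement visible: the
# +2.3 study implies a qualitatively different curve (theta > 0)
# from every other study (theta < 0).
print(np.round(thetas, 2))
print(int(sum(th > 0 for th in thetas)), "of", len(thetas),
      "studies imply benefits at all temperatures")
```

Under this framing the question becomes why one model’s curve points up while the others point down, rather than how to thread a single polynomial through a cloud of incompatible points.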

Reforming the process of scientific publication and review

The Journal of Economic Perspectives is well respected, and my impression is that Tol’s 2009 paper, even setting the errors aside, is far below the quality of the usual empirical papers that get published in top economics journals. I typically see econ papers as being pretty serious about model misspecification, but the model here just doesn’t make sense. And Tol’s remark that outliers provide evidence of nonlinearity (rather than, as we would usually think in such a situation, evidence that the outlying data points are different from the others in some important ways) indicates a lack of understanding of the relevant statistical issues. That’s not so terrible—Tol does not claim to be an econometrician—but, again, it makes me wonder how the paper got through the review process.

My guess (and it’s just a guess; others can feel free to correct me on this) is that the paper was accepted because it was on an important topic. The economic effects of global warming: you can’t get much more important than this. So perhaps the journal editors felt an obligation to publish a paper on this, even if it was weak. The down side is, once the paper was published, it became influential.

This is where a more open review process might come in handy. Ultimately I can’t get upset with the journal for publishing the paper: the editors are busy people, and at first glance the paper looks reasonable. It’s only on reflection that the problems become clear, and indeed the problems with the model become much clearer once the data have been corrected and augmented. But what if the referee and associate editors’ reports were public? Then we might see something like, “This paper is weak but we should publish it because the topic is important.” Or maybe not, I don’t know. But this sort of information could be useful to people. Again, we all make errors, so the point is to catch and fix them sooner, not to avoid making them entirely or to punish people who make mistakes.

Summary

The Tol paper had many data errors, some of which do not seem to be acknowledged even in the latest corrected version. The errors affect the paper’s substantive conclusions as well as the interpretation of the remaining data points. Beyond this, the regression model used by Tol does not make sense to me. I think a better approach would be to consider each of the forecasts of the effects of climate change to be a curve rather than a single point at a single hypothesized level of warming. I don’t object to Tol’s goal of performing a meta-analysis of published forecasts, but I think that to do it right, you have to get a bit more out of each forecast than a single number. This is not my area of expertise, though, and ultimately my points are statistical, not derived from climate science. The statistical model should be appropriate for the problem and data being studied, especially for a problem as important as this one.

219 thoughts on “A whole fleet of gremlins: Looking more carefully at Richard Tol’s twice-corrected paper, “The Economic Effects of Climate Change”

  1. You are being too kind. The paper was likely published because it is an important topic AND because it portrays economists’ favorite view that only economists really understand things and that the net effects of climate change could be positive. On the last point, of course, it is entirely possible that climate change has positive net effects. Too many studies begin with the view that because there is change caused by humans, the effects must be negative. However, and this is the important point I believe, economists are quite willing to publish based on the punchline regardless of the quality of the analysis. This is why replication is so pitiful in economics and making data publicly available given such low priority.

    I am making sweeping comments here, and there are certainly some economists that care about methodology and validity – but that is not the prevailing standard. Replication receives less attention in economics than in science or medicine. Even when errors are exposed, few careers have been damaged by them. Based on my 40 years of experience as an economist, I am past the point of believing that most errors are due to carelessness. I’ve grown skeptical and untrusting (not even a word, but it should be).

    • I’m baffled at the notion that global warming “might” be beneficial for our global economy.
      Holding that notion betrays a frightful lack of appreciation for the physical reality of what is going on out there…
      including amazing complacency regarding the vulnerabilities within our global communication and transportation systems.
      Or their dependence on relatively mild and predictable weather patterns.

      Global warming, or a climate changing to increasingly extreme weather events and disruptions of long established growing (and other) cycles has absolutely no upside to it!

      Disruption, destruction, turmoil and the end to all we have come to know and love, is what we are looking at!
      yet folks like Tol and the whole lot of ’em keep clucking on. Our stupendous foolishness is beyond comprehension.

      • “I’m baffled at the notion that global warming “might” be beneficial for our global economy.”

        You are? Look at population density maps of Russia and Canada:

        Population density of Russia

        Population density of Canada

        Look at how many people in the U.S. who retire to Florida or Arizona versus Alaska and North Dakota.

        “…including amazing complacency regarding the vulnerabilities within our global communication and transportation systems.”

        You think our global communication and transportation systems are somehow very vulnerable to a warmer world? How is that?

        • But don’t forget the concurrent exodus north into America, Europe, North Africa and Asia from increasingly uncomfortable southern climes.

  2. I do worry that there is some real conceptual misunderstanding about statistical modelling (here meta-analysis) that is actually really hard to get across to _excited_ (and especially defensive) researchers (even statisticians).

    A case in point was a statistician, almost as productive as you, who presented a meta-analysis of observational studies at the ASA a number of years ago with the thesis that the last three studies were unnecessary and should not have been carried out (i.e. early studies were adequate for drawing conclusions). To me they didn’t conceptually understand the challenge of meta-analysis in the presence of varying confounding, as the earlier studies were all case-control and the last three studies were cohorts and the confounding likely would be different. When I pointed this out to them they quickly grasped why the last three studies could be very helpful and agreed they likely were worthwhile doing. But it needed to be pointed out (and they did not get defensive).

    Here, the first sentence is helpful http://en.wikipedia.org/wiki/Meta-analysis (admittedly I was a coauthor of it)
    “meta-analysis refers to methods that focus on contrasting and combining results from different studies, in the hope of identifying patterns among study results, sources of disagreement among those results, or other interesting relationships that may come to light in the context of multiple studies”

    One would hope _contrasting, identifying patterns, sources of disagreement_ would suggest the need for critical examination of the included studies expecting some surprises/errors though maybe _combining_ does unfortunately suggest just sticking extracted (dependent even!) estimates in a regression model.

    The tendency, though, is for this to emerge: “thorough summary of several studies that have been done on the same topic, and provides the reader with extensive information on whether an effect exists and what size that effect has,” which is the last sentence of the paragraph on that wiki entry. Conveniently simple: does it or does it not exist, and exactly what size (linear or non-linear) is it?

    • That’s a really interesting observation, and I’ve never thought of it that way.

      One thing that’s frustrated me a lot is hearing meta-analyses quoted, where you suspect that a lot of the studies have silly methodologies, and a lot are sensible, both kinds of study are put into a big bucket and we find statistical significance in spades – QED. It’s uninteresting to see a meta-analysis that says that parapsychology works (they’re out there!). But a precise analysis of what it is that makes some parapsychology studies produce positive results would be very interesting.

      For the climate thing it would be very interesting to know what drives some models to produce outlier estimates while others cluster near the center.

      • Yes, an interesting review of the literature highlights the points of contention and explains why they exist. This seems to be all too rare. I have only ever looked closely at one climate science paper that had to do with a single site (Byrd Station) in Antarctica (because it was only about a single site it seemed like a good opportunity to investigate the quality of this research field). I was impressed that the raw data was made available, but upon inspection of the data it turned out that the warming measured was not conveyed sufficiently by the average since the majority of temperature increase was due to a change in the minimum month to month temperature. Also the entire “trend” appeared to be due to a change suddenly occurring in 1989.

        My own plot (each line is the data from a different month): http://s28.postimg.org/5j161ex65/byrd.png

        In the supplement they say:
        “The central processing unit of the AWS was (paradoxically) a newer version than the one subsequently used from 1989 onward.”

        This would seem to be a plausible explanation for their results.

        Media report:
        http://www.nytimes.com/2012/12/24/science/earth/west-antarctica-warming-faster-than-thought-study-finds.html?_r=0

        Reference:
        Central West Antarctica among the most rapidly warming regions on Earth. David H. Bromwich, Julien P. Nicolas, Andrew J. Monaghan, Matthew A. Lazzara, Linda M. Keller, George A.Weidner and Aaron B.Wilson. Nature Geoscience 6,139–145 (2013) doi:10.1038/ngeo1671

  3. Part of the problem is that most scientists are told to focus on the “big picture”, the “major contribution”, etc. at the expense of “mere details” of how the estimates were produced.

    Yet what all recent replications show is that details matter; that no matter how fancy their tools (Excel!?) scientists are still craftsmen; and that the error rate in the production of scientific knowledge is probably as high as, if not higher than, that in the production of motor cars ca. 1900.

    This is why those of us interested in research practice, or how scientists go about doing science, advocate for new ways of systematizing research to improve its reliability. IMHO replications ought to be an effort to infer how errors and omissions come about, diagnose research artifacts, and propose measures, including checklists, to prevent future artifacts and increase the reliability of the scientific process.

    Unfortunately the mainstream attitude towards replication remains focused on making substantive — as opposed to procedural — contributions. In fact many researchers argue that replications are completely uninteresting unless accompanied by a new discovery. I beg to differ. To me that is not replication but standard science (if you think about it all science is replication in this sense). As I see it replication ought to be a tool to improve scientific practice, not a method to advance the stock of scientific knowledge per se.

    I get the impression I am alone in this but what the hell.

  4. Are you going to let Richard Tol defend himself here?

    Or will you again block any posts that are sceptical of catastrophic climate change – like you did in your last recent post on global temperature anomalies?

    • Q:

      If you posted a comment that didn’t appear, then it was caught by the spam filter. We get thousands of spam comments and it’s impossible for me to go through the filter. If you ever post a comment that does not appear, please email me and I will check for it.

  5. A question: are the errors important or is the use of the paper important? If this paper weren’t being cited for political reasons, would it matter that it’s wrong? I ask this because scientifically what’s false eventually falls by the wayside. In this case, if we do nothing or not very much, we’ll have a nice natural experiment about the economic effects of climate change (or we won’t, for those who deny it’s happening). The truth would come out. So is the concern that this paper is wrong or that it is being used to advance an agenda? In that case, many papers are used to advance agendas and you or I would disagree with some of those agendas and favor others. Assuming we subject all our evidence to high tests – which is impossible because we manage to blind ourselves – does that actually matter for the actual agenda debate? In other words, assume this paper is retracted. I’d bet it gets cited anyway. I’d bet the retraction does nothing to sway anyone who pushes the agenda it supports.

    This is not to say I don’t appreciate – and enjoy – a good taking apart of material. The roots of that lie deep. My kids’ 2nd grade teacher used to collect all sorts of household items from parents and then have a big “take apart day” in which the kids would unscrew and pry apart each thing. They could see how each fit together, how this fastener is used here and that there. Same thing, same root, all cool.

  6. We appear to be having a new economic publication issue emerging. It looks a bit like the Reinhart and Rogoff story. Piketty, however, made his data immediately available with the publication of his book, Capital in the 21st Century.

    Financial Times Finds “Many” Errors in Piketty Analysis, Argues They Undermine His Thesis
    http://www.nakedcapitalism.com/2014/05/financial-times-finds-many-errors-piketty-analysis-argues-undermine-thesis.html

    • I’m only about 100 pages into Piketty’s book, but he must have made 50 comments in the text so far to the effect that historical income and wealth are very hard to measure and as a result we can only draw very rough conclusions from the data. I would be really surprised if a spreadsheet error actually called into question his points.

    • Paul Krugman, on his blog, points out some issues he has with the FT criticisms. My best guess is that this will not strongly parallel the Reinhart and Rogoff analysis (spreadsheet errors, some questionable assumptions) but there will be issues in both the analyses and some of the criticisms.

  7. Pingback: You shouldn’t use a spreadsheet for important work (I mean it)

  8. Andrew:
    Errors were made and have been corrected.

    As I’ve told you before, but you prefer to ignore, a corrigendum is a very restrictive format. The corrected data do not materially affect the results.

    The new data do affect the results, but in the long run rather than the short run. Again, I told you this. Not sure why you chose to ignore that information. I would not get too excited about the change in results, because the differences are largely discounted away.

    I’ve also told you that a new paper takes account of all the issues you raise, and more.

    You may consider that only linear curves can be fitted from a single observation (and the origin).

    I am an econometrician, by the way.

        • This still seems odd as you are fitting multiple models to twenty data points and the assumptions that feed into model selection matter hugely (because there is so little data). For example given that some of the data points are potential outliers my starting point would be a robust linear model. Fitting this sort of model with or without constraining the regression to pass through zero (in linear or quadratic form) gives a pattern where the costs appear with smaller temperature increases. I also struggle to see this as a meta-analysis in the statistical sense because there is no sense of the precisions of the original values.

          The conclusion “The impacts of climate change do not significantly deviate from zero until 2.5-3.5°C warming” seems problematic to me because you have 20 data points and hence very low power (even if one ignores that other seemingly equally valid models might have a different threshold).
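The robust-fit idea mentioned above can be sketched as follows (hypothetical numbers, not the paper’s dataset): a few rounds of iteratively reweighted least squares with a Huber-type weight function downweight outlying points in a quadratic-through-the-origin regression.

```python
import numpy as np

# Hypothetical (warming in C, impact in % of GDP) points; the +2.3
# value plays the role of the potential outlier.
T = np.array([1.0, 2.0, 2.5, 2.5, 3.0, 5.4])
y = np.array([2.3, -0.5, -0.4, -1.4, -2.5, -4.6])
X = np.column_stack([T, T**2])  # impact = b1*T + b2*T^2

def fit(weights):
    # Weighted least squares via the normal equations.
    W = np.diag(weights)
    return np.linalg.solve(X.T @ W @ X, X.T @ W @ y)

b = fit(np.ones(len(y)))  # start from ordinary least squares
k = 1.0                   # Huber threshold (in % of GDP)
for _ in range(20):       # iteratively reweighted least squares
    r = y - X @ b
    w = np.minimum(1.0, k / np.maximum(np.abs(r), 1e-12))
    b = fit(w)

# The weight on the +2.3 point ends up well below 1, so the robust
# fit is pulled much less toward early "benefits" than OLS is.
print(np.round(w, 2))
```

This is only a sketch of the idea; a real analysis would also have to confront the dependence among estimates and the absence of precisions noted above.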

        • Thanks for sending the link. I am not a climate scientist, so my question is probably naive. Are the 20 estimates based on independent data, or do they use some of the same data (and some different data) but different models to reach their estimates?

        • Dr. Tol,

          Some time ago I looked into Nordhaus’s DICE model and the evidence on which it’s based. Looking into one of its data sources, I found it had really dubious methodology, and if I remember correctly that paper with dubious methodology is also cited in your 2009 paper on the economic costs of climate change.

          Do you have any comments on this?
          https://gainsfromtrade.org/2014/06/30/two-thirds-of-nordhauss-1999-dice-agw-damages-are-based-on-unsubstantiated-guesswork/

          It seems like Nordhaus arbitrarily manipulated some of the numbers (not claiming with ill intention, of course).

        • The data in the reproduction code has 21 observations; the article has 20. The data in the reproduction code differs in 6 (other) places from the article. The rest of the reproduction code seems to be technically correct, but some of it seems rather meaningless to me, such as calculating the Gaussian likelihood on the non-parametric bootstrap.

    • Richard:

      Econometrician is just a label so if you want to call yourself one, fine.

      Also, yes, I know that you wrote that the corrected data do not materially affect the results, but as discussed in my post above I don’t agree with that statement of yours, as your paper includes several important statements that are affected by the corrections:

      1. In the 2009 paper, you wrote, “some estimates, by Hope (2006), Mendelsohn, Morrison, Schlesinger, and Andronov (2000), Mendelsohn, Schlesinger, and Williams (2000), and myself (Tol, 2002b), point to initial benefits of a modest increase in temperature, followed by losses as temperatures increase further.” After the correction, only one estimate (yours) points to initial benefits (unless you want to count the one study that projects tiny 0.0% and 0.1% effects). As discussed in the blog post above, this puts a lot more of the burden of this claim on your own study, indeed also interfering with the “if you aggregate them all, the overall effect is positive today” claim in the popular press. I of course don’t hold you responsible for what some reporter writes, but the point is that if the paper had not had the errors, it wouldn’t have supported that claim.

      2. In the 2009 paper, you wrote, “it is striking that the estimates are in broad agreement on a number of points—indeed, the uncertainty analysis displayed in Figure 1 reveals that no estimate is an obvious outlier.” Again, once the errors are fixed, your +2.3 number is indeed an obvious outlier. This does not mean that your +2.3 is wrong, but it stands out once the other points get fixed. Indeed, as noted in my post above, I wonder if your +2.3 is one reason you made the data errors in the first place: given the positive estimate you had obtained, the mistaken +2.5 and +0.9 numbers didn’t stand out so much.

      3. In the 2009 paper, you have a whole paragraph explaining how “the initial benefits [of small amounts of global warming] arise.” Again, without the errors, these initial benefits would be much more speculative. You no longer would have confidence bounds in which it was plausible that 2 degrees of global warming would increase economic impact by something like 4%.

      4. As Rachael Jonassen noticed, “In the original Figure 1, the x-axis is labelled temperature change relative to ‘today.’ In the new Figures, the x-axis is labelled as temperature change relative to pre-industrial temperatures. . . . On the newly labelled x-axis, we are ‘today’ at +0.8C.” A change of 0.8 degrees is a big shift that changes the interpretation of all the results; it also raises the possibility that there are further errors that have not yet been corrected.

      Finally, you say that you have a new paper that takes account of all the issues you raise, and more. But in 2009 I assume you would’ve said that your paper at that time took account of all the issues, no? Given the problems with the data and the model in the published papers, I assume you can see why I can’t immediately be reassured by the existence of another paper that purportedly fixes everything. I really think that, instead of trying to patch things, you should consider stepping back and revisiting your meta-analysis, taking account of the comments in the above post and also the comments here and in various other places. You’re working on important problems; maybe you should take a deep breath and think more seriously about what you’re doing.

      • 1. Indeed. Then again, the initial impacts are sunk and irrelevant for policy.

        You refer to Matt Ridley’s article in the Spectator. That draws on a different paper, independent of the discussion here. (In other news, don’t believe everything Bob Ward tells you.)

        2. Yes and no. 2.3% for 1K is an outlier if you insist on sticking to a quadratic curve, but the new data say you shouldn’t do that.

        3. No. These impacts are not speculative at all. There may be only one aggregate estimate, but there is plenty of evidence for CO2 fertilization, reduced heating costs, and reduced winter mortality.

        4. Speculation.

        5. You are late to the party. The issues you raise are not new, and have been addressed already. That said, there is plenty of room for a man of your skills to make a constructive contribution to the literature. You may want to have a look at the meta-analysis in Chapter 6 of IPCC WG3 AR5.

        • Richard:

          1. The news article says, “There are many likely effects of climate change: positive and negative, economic and ecological, humanitarian and financial. And if you aggregate them all, the overall effect is positive today — and likely to stay positive until around 2080. That was the conclusion of Professor Richard Tol of Sussex University after he reviewed 14 different studies of the effects of future climate trends. To be precise, Prof Tol calculated that climate change would be beneficial up to 2.2˚C of warming from 2009 (when he wrote his paper).” OK, maybe he was speaking of some other paper you wrote in 2009 that summarized 14 different studies . . . whatever. The point is, when you correct the data, the statement no longer holds. What you have is only one study—yours—with a forecast of seriously positive outcomes. That completely changes the implications, in particular the claim in the news article that “this is the current consensus. If you wish to accept the consensus on temperature models, then you should accept the consensus on economic benefit.”

          2. The 2.3 and the -11.5 are outliers because they stand far apart from all the other points. The 2.3 is the only positive forecast (except for the tiny 0.0 and 0.1 projections), and the -11.5 is much more negative than any of the others. This has nothing to do with quadratic. The point is that when you put back those minus signs and get rid of the +2.5 and +0.9, the +2.3 stands out.

          3. Based on your data, those initial benefits come from a single study—yours. Again, the implication of such a claim is much different when it comes from only your study, than when there were three different studies, only one of them yours, that projected positive effects.

          4. Huh? It’s not speculation. You changed the labels on your axis. I didn’t write the paper, but you must have meant something by “pre-industrial temperatures.” If it is indeed a shift of 0.8 degrees, that’s a lot and it changes the implication of every data point in your table. That’s a big deal.

          5. I make no claim that the issues I raise are new. I just think they’re important. And I continue to think that your remark that outliers provide evidence of nonlinearity (rather than, as we would usually think in such a situation, evidence that the outlying data points are different from the others in some important ways) indicates a lack of understanding of the relevant statistical issues. See, for example, Keith’s comments here.

        • Andrew, Richard,

          I think the appropriate questions to ask regarding Matt Ridley’s statement are these:

          1. “So, which paper did he rely on, then?”
          and
          2. “Considering the errors in this particular 2009 paper, have you checked the numbers in that other paper?”

          Question 1 is of relevance, since it would suggest Richard Tol wrote two papers that are now at odds with each other, whereas question 2 is of relevance because it may well explain why the two papers are at odds with each other…

        • 1. I agree that the wording could be clearer. The numerical result is from one paper, but ascribed to another.

          2. True, if your presumption is a linear model. The current data suggest linearity (with a few outliers), but the literature has assumed non-linearity (from 1982 onwards).

          3. I repeat, these benefits are supported by a wider literature.

          Besides, the estimate of 0.1% for 2.5K, so readily dismissed by you, is strong evidence for initial positive impacts. Would you fit a linear curve there, implying a 1% positive impact at 25K warming, or a quadratic curve with initial positives and negative impacts beyond, say, 3.0K? The latter strikes me as more plausible, and Mendelsohn has argued the same.
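
          The contrast being drawn here can be sketched numerically. This is only a toy illustration with assumed numbers (a lone estimate of +0.1% impact at 2.5K of warming, and an assumed sign change at 3.0K; neither curve is taken from the paper under discussion):

```python
# A toy sketch (assumed numbers, not Tol's): one estimate of +0.1% economic
# impact at 2.5K of warming, with every candidate curve forced through the
# origin. The 3.0K sign change is an assumed prior, not an estimate.

def linear(x, x0=2.5, y0=0.1):
    """Line through the origin and the lone observation."""
    return (y0 / x0) * x

def quadratic(x, x0=2.5, y0=0.1, zero_at=3.0):
    """Quadratic through the origin and the observation, with an assumed
    second zero (the sign change) at `zero_at` degrees."""
    b2 = y0 / (x0 * x0 - x0 * zero_at)  # solve the two constraints
    b1 = -b2 * zero_at
    return b1 * x + b2 * x * x

print(round(linear(25.0), 3))     # the linear reading: +1% at 25K
print(round(quadratic(1.5), 3))   # small initial benefit near 1.5K
print(round(quadratic(5.0), 3))   # negative impacts beyond 3K
```

          The point of the sketch is only that the single observation pins down the linear curve completely, while the quadratic also needs a prior assumption about where impacts turn negative.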

          4. Speculation referred to “further errors”.

          5. I am well aware of the statistical issues. Instead of bitching about someone else’s work, you may consider helping to solve this problem: you have 20 incommensurate estimates. How do you draw a conclusion about effect size? You are not allowed the ivory tower option of refusing to confront the problem until more evidence has been gathered.

        • Richard:

          1. I don’t care about the wording, the point is that the claim in the news article falls apart when the data errors are corrected. This is one reason I disagree with your statement that the corrected data do not materially affect the results.

          2. No, the outliers are relative to the other points. The statement that these points are outliers has nothing to do with linear model.

          3. The benefits may be supported by a wider literature but not by the data in your paper. Thus, again I disagree with your statement that the corrected data do not materially affect the results.

          4. Again, if you have indeed shifted your axis by 0.8 degrees, that’s a lot and it changes the implication of every data point in your table. At this point it’s just not clear what the numbers are intended to represent, and given this confusion, along with the multiple errors and multiple corrections so far, it’s hard to know what to make of this analysis.

          5. Please be polite. Discussing mistakes in influential, published papers on important topics is not “bitching”; it’s called criticism, and it’s an important part of science.

          And I’m sorry but I disagree with your statement that you are well aware of the statistical issues. I agree that you are aware that people are raising concerns about your statistical methods but, based on what you’ve written so far, you don’t seem to understand the problem with treating the numbers you are using as measurements of a single function with random errors. Your remark that outliers provide evidence of nonlinearity (rather than, as we would usually think in such a situation, evidence that the outlying data points are different from the others in some important ways) indicates a lack of understanding of the relevant statistical issues.

          And please don’t tell me what I am not allowed to do. I am a statistician and one part of the public service aspect of my job is that I give free advice to people like you, I try to help out on difficult problems that confuse people. In this particular case, I did give some advice, although perhaps it was buried in the long post. My advice was to consider each of the forecasts of the effects of climate to be a curve rather than a single point at a single hypothesized level of warming. Doing this would require work, of course—it involves getting inside each model. But, to the extent this project is worth doing, I think such work is necessary.
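
          A toy version of that advice, with invented stand-in curves (none of them from any actual climate-impact model): represent each study as a full impact curve g_i(x) over warming x, then compare and aggregate the curves pointwise, instead of reducing each study to a single (x_i, g_i) point.

```python
# Hypothetical stand-in curves for what "getting inside each model" might
# yield; the aggregation step is then a pointwise summary over a grid.
from statistics import median

curves = [
    lambda x: 0.3 * x - 0.12 * x ** 2,  # initial benefit, then losses
    lambda x: -0.5 * x,                 # losses throughout
    lambda x: -0.05 * x ** 2,           # accelerating losses
]

grid = [i / 10 for i in range(51)]  # warming from 0.0 to 5.0 degrees C
pointwise_median = [median(g(x) for g in curves) for x in grid]

# Median projected impact at 2 degrees of warming:
print(round(pointwise_median[grid.index(2.0)], 3))
```

          The work, of course, is in producing the curves, not in this last summary step.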

          There’s no shame in being confused—statistics is hard. But if your goal is to do science, you really have to move beyond this sort of defensiveness and reluctance to learn.

          Recall that as early as 28 May 2009 (see link here), Prof. Julie Nelson wrote a detailed note pointing out problems with your paper, in particular identifying one of the data points that you later had to correct. Your response at the time was simply defensive. You lost the opportunity to make the correction back then, and now here we are 5 years later: several other errors have been found in the paper, indicating real sloppiness in a paper that was published in such a prominent journal on such an important topic, and you’re getting further criticism from me and others. Instead of taking this criticism seriously, you continue to be defensive.

          I’m sure you can go the rest of your career in this manner, but please take a moment to reflect. You’re far from retirement. Do you really want to spend two more decades doing substandard work, just because you can? You have an impressive knack for working on important problems and getting things published. Lots of researchers go through their entire careers without these valuable attributes. It’s not too late to get a bit more serious with the work itself.

        • Andrew
          You’re a sound statistician. You got all the data. Get to work. Show me how it’s done. There are plenty of journals that would love to publish your solution to the problem.

          I’m making progress on a follow-up paper. I will add a linear model to the suite and name it Gelman.

          Julie Nelson made the point that the Stern Review is at odds with my representation of Hope (2006), Stern’s main source. That is not a criticism of my work. Rather, it is a criticism of Stern whose conclusions are indeed inconsistent with his sources (and his analysis, but that’s a different story).

        • Richard:

          No, I have never at any time suggested a linear model. For you to even suggest this implies a deep misunderstanding on your part of what I have written; it is a misunderstanding consistent with the model of a fixed curve with independent errors that is implicit in your meta-analysis.

          Again, my advice was to consider each of the forecasts of the effects of climate to be a curve rather than a single point at a single hypothesized level of warming. Doing this would require work, of course—it involves getting inside each model. But, to the extent this project is worth doing, I think such work is necessary. Getting inside the model requires subject-matter knowledge of the sort that you might have. It is not a matter of analyzing those 20 data points. When I talk about getting a bit more serious with the work itself, that’s what I’m talking about. Forecasting the effects of climate change is one of the things you do for a living; you can spend some time going beyond just pushing these 20 numbers around.

          Finally, Julie Nelson pointed out a particular problem in one of your data points. It turns out she was completely correct and you now report you had coded that point wrong, apparently switching a -2.5 to +2.5. Perhaps if you had taken her criticism seriously instead of playing defense, you might have caught the problem back in 2009 instead of having to run a correction in 2014. In addition, once you saw this problem, you could’ve checked all your numbers back then. I understand that it can be hard to admit you made a mistake, but this is getting ridiculous.

        • Andrew:
          No. Nelson pointed to an inconsistency between Hope and Stern. That inconsistency was known, and did not provide a reason to take another look at Hope.

          If I understand you correctly, you suggest I fit a non-linear curve through the origin and a single observation. That is exactly what I did in the Computational Economics paper. Of course, it does require a strong prior on the shape of the curve.

          If I further impose the condition that sign changes are Verboten, positive impact estimates are indeed outliers. Outlier by assumption, I must add.

          If I do not impose that condition, positive impact estimates are a bit weird, but not outliers.

          If I use a flexible functional form, positive impact estimates are bang in the middle.

          Do I really want to make an assumption that turns 10% of my observations into outliers?

        • Richard,

          I think what he means is that you look into the model behind a specific point estimate. Then you extend the model such that it covers the whole temperature range for which meaningful estimates can be made, so that you get several estimates for each and every model that has been used for the various point estimates cited.

          By the way, do you have any indication yet how SCC estimates change e.g. with FUND with whatever damage function you take?

        • Richard:

          – Nelson pointed out a serious mistake with your data. You brushed her off. You were wrong. There really was a serious mistake. It sat in your paper for 5 years. Now, even after the fact, you do not thank Nelson, you do not apologize to Nelson, instead you blame Stern for inconsistency which is pretty ridiculous given that you’re the one who made the mistake. Not cool, not cool at all.

          – You write, “If I understand you correctly, you suggest I fit a non-linear curve through the origin and a single observation.” No, you do not understand me correctly, just as you did not understand me earlier when you thought I wanted to fit a linear model. As I said before, my advice was to consider each of the forecasts of the effects of climate to be a curve rather than a single point at a single hypothesized level of warming. To do this requires going inside each forecast and understanding it well enough to produce a forecast curve g_i(x) rather than a single point (x_i, g_i). Of course this takes work but sometimes work is what we have to do to make progress.

          – Finally, let me repeat that the +2.3 and -11.5 are outliers in the sense of being much different from the other points in the dataset. They’re not outliers relative to a model, linear or otherwise; they’re outliers relative to the other data you have.

        • Andrew,

          It seems to me you are clutching at straws, nit-picking, and have nothing of significance to offer. Having read this, I am persuaded that Tol’s work is as good as the current data allow.

          I also find it hard to understand how anyone could believe that warming to date (since the Little Ice Age) hasn’t been good for humans and for most life. It’s very hard to believe that continued warming will suddenly change from being beneficial to a negative impact for life. I think you need to address that.

        • Peter Lang wrote: “It’s very hard to believe that continued warming will suddenly change from being beneficial to a negative impact for life.”

          Have you considered this?

          “If action is not taken to reduce greenhouse gas pollution, scientists predict that climate change may cause yields of corn, soybeans, and cotton– three of America’s biggest cash crops– to decrease by as much as 80% by 2100. A new study released by researchers at Columbia University and North Carolina State University in the online Proceedings of the National Academy of Sciences details these potential impacts.

          Dr. Michael Roberts, one of the lead authors of the study, said, “While crop yields depend on a variety of factors, extreme heat is the best predictor of yields.”

          Temperatures higher than 85 degrees Fahrenheit for corn, 86 degrees for soy, and 89 degrees for cotton cause damage to crops, reducing yields. By the end of this century, temperatures are predicted to rise by as much as 11 degrees Fahrenheit due to global warming, according to a recent US Global Change Research Program report.”

          Link = http://farmenergy.org/news/climate-change-may-reduce-corn-soy-cotton-yields-80-by-2100

          A few minutes of Googling will turn up plenty of other references.

        • Andrew, you said

          “There’s no shame in being confused—statistics is hard. But if your goal is to do science, you really have to move beyond this sort of defensiveness and reluctance to learn.”

          I’d be interested to know if you made similar comments to Michael Mann regarding the “hockey stick” affair? The reason I am asking is to try to get a feeling whether or not you are genuinely objective or if you have an agenda.

        • Chris G,

          Yes, of course. There are many studies. But arguably, they are cherry-picked, looking for negative impacts rather than positive impacts. I prefer the studies that attempt to consider the overall net benefit-cost of all impacts. For example, this working paper by Tol: “The economic impact of climate change in the 20th and 21st centuries” http://www.copenhagenconsensus.com/sites/default/files/climate_change.pdf

          And here is one showing that the cost of sea level rise to 2100 is negligible – i.e. about $200 billion for a 0.5 m rise and about $1 trillion for a 1 m rise. Compare that with projected global GDP of $30,000 trillion cumulative to 2100 ( all figures in 2011 US $): http://link.springer.com/article/10.1007%2Fs11027-010-9220-7
          This is significant because sea level rise is perhaps the most important of the justifications to scare the pants off the population.

        • Andrew
          Thanks for making explicit what you mean re model specification. I agree that it would be a good idea. Unfortunately, the available estimates are static ones. There is no curve. There is just a point. I apologize that the text did not make this sufficiently clear.

          Let’s agree to disagree on Nelson. I read it then as I read it now: She highlights an inconsistency between Hope and Stern.

          We also seem to disagree on outliers. I think that an outlier is an observation that is far from its expectation. That immediately implies that you have a specification in mind. The three observations that you deem outliers are inconsistent with some models but consistent with other models.

        • Richard:

          I would agree with “outliers are inconsistent with some models but consistent with other models”

          But only under the assumption that all data is being generated from the same probability generating model (and there is a lot of work yet to do to have any idea about that, or even whether it is sensible).

          So would John Nelder in his paper “There are no outliers in the stackloss data set”.

        • Richard:

          You write, “Let’s agree to disagree on Nelson. I read it then as I read it now: She highlights an inconsistency between Hope and Stern.”

          Your habit of brushing aside criticisms is unseemly. You’re the one who reported a -2.5 as a +2.5, she’s the one who directly pointed to the +2.5 number as “there must be a mistake”—and she was absolutely right! At the time, you apparently were so sure of yourself that you didn’t even think of checking the number in your paper. Then, years later, you still refuse to recognize you made a mistake by not checking at the time. This indicates:

          1. A stubborn refusal on your part to even consider you could have made an error;

          2. A lack of command of the literature on which you are considered an expert (at no point did you yourself notice something fishy about the +2.5);

          3. A continuing pattern to dismiss valid criticism, a pattern that is continuing today.

          Put these together and I guess there’s no surprise that your paper contained so many errors—indeed, an amazing number of errors for an empirical paper with only 14 data points.

          Perhaps a future correction notice will appear. I think you would describe this as speculation, but there seem to be additional errors that need to be corrected; indeed, each time you add a correction, more questions arise.

          For example, it still seems strange to me that the changed caption on your graph changes the meaning of all your data points. How could that really be? For example, your new graph (including your famous point at (+1, +2.3)) is labeled as being on the scale of “the increase in the annual global mean surface air temperature relative to preindustrial times.” But when I go to the source of that point (Tol, 2002), I find a discussion of “the impact climate change would have on the present situation . . . an increase in the global mean temperature of 1 C, a relatively modest climate change expected to occur in the first half of the next century,” which does not match your new caption. So it seems that your revised paper is not even consistent with the data from your own study.

        • Richard has truly jumped the shark here. Ridley’s wording is entirely clear and deliberate, and it is blindingly obvious that he is referring to the 2009 study that is the subject of our discussion here. (Yes, he also refers to a separate study further on in the article, but that hardly negates the first point.) If you really require undeniable proof then simply take a look at Ridley’s website, where he specifically links to Tol (JEP, 2009) when arguing that “climate change would be beneficial up to 2.2˚C of warming from 2009”!

          Two further points:

          1) Please, can Richard (or someone else in the know) put us out of our misery about the shift in x-axis? Can it really be that hard to say: “Sorry, that’s a typo, it should be from the present day.”, or “Sorry, the original study was incorrect”, etc?? (It seems almost redundant, but this issue takes on added significance when we remember how much emphasis Ridley placed on the fact that damages would only accrue relative to the present day.)

          2) To second Andrew’s general comment about politeness: Richard, this really would be so much easier if you simply toned down your bombast. It’s not enough that you dismiss climatologists as “clowns among statisticians”. Why be so incessantly defensive to the point where you are unable to recognise obvious shortcomings and mistakes in your own work? I would humbly submit that you consider your own advice: “[I]t is high time that climate researchers stand up against sloppy research, and against the apologists of sloppy research.”

        • “1) Please, can Richard (or someone else in the know) put us out of our misery about the shift in x-axis? Can it really be that hard to say: “Sorry, that’s a typo, it should be from the present day.”, or “Sorry, the original study was incorrect”, etc?? (It seems almost redundant, but this issue takes on added significance when we remember how much emphasis Ridley placed on the fact that damages would only accrue relative to the present day.)”

          Yes, Richard Tol should absolutely and unequivocally state whether the graph has an x-axis zero at pre-industrial temperatures or temperatures of “the present day.”

        • Very good example, thom, although you should have seen the previous drafts. Richard was less generous toward “swivel-eyed loons.”

          Here’s what I think is his latest draft:

          https://docs.google.com/file/d/0Bz17rNCpfuDNRllTUWlzb0ZJSm8/edit

          Richard’s link seems to have disappeared on his website.

          ***

          Andrew might be interested in Richard’s use of Kappa in a commentary to Cook & al. Richard’s latest version shows some elusiveness to that effect. The notion of “weak disagreement” may also deserve due diligence.

        • Richard stated… “Instead of bitching about someone else’s work, you may consider helping to solve this problem…”

          I find this to be a highly ironic comment coming from Tol, being that he has spent the better part of a year just “bitching” about Cook et al 2013.

          Gelman is here actually attempting to be constructive in how to achieve more accurate statistical results, whereas Tol has taken an explicitly expressed “destructive” approach to Cook13.

        • Tough criticism. Thanks for the link.

          (My all time favorite piece of tough criticism though was listening to Lucy Ziurys tear Dick Zare a new one for insinuating life on Mars on the basis of sloppily-analyzed mass spec data. She did it to him in real-time at a conference.)

        • Eli:

          Wow—that’s amazing. Nelson even caught that there was a problem with one of the data points and Tol just ignored it. That’s bad news. I’ll tell you this: when someone points out to me that my model produces an estimate that doesn’t make sense, I take that sort of criticism very seriously.

        • Continuing with the “bitching” about other people’s work:

          In response to Nelson’s specific comment:

          “Tol’s survey identifies 14 estimates of the global economic impact of climate change (Table 1 and Figure 1); five of these are by Nordhaus, presenting similar estimates from successive vintages of the same model. Two more are by Tol, and most of the others are by their colleagues and collaborators.”

          Richard replies:

          “Second, she argues that the meta-analysis of the social cost of carbon contains a disproportionate amount of my own work. This is true. For that reason, I hesitated before writing my 2005 paper in Energy Policy. However, there was a clear demand for a paper like that, and no one was interested in writing it. I did do sensitivity analyses excluding one of the three dominant authors (Hope, Nordhaus, Tol), which showed that there is no overdue influence by any author.”

          Even if Richard leaves his collaborators’ work in and only pulls out the data points from his own studies and those of Nordhaus and Hope, that drops 50% of the data that went into the meta-analysis. How is “no overdue influence” quantified here? In Richard’s place, I would give some measure of “no change”. I find it hard to believe that 50% of the data add no new information.

          Also, Richard mentions in his comments to Nelson that he excluded data that had a conflict of interest (what he calls partisan studies). But then he should exclude his own data too, because it looks like he’s decided what the conclusion is, and nothing is going to change his mind. [Or have the conclusions changed; I think that’s what the new paper says that Richard refers to but doesn’t link for us.]

          Isn’t there something like the Cochrane collaboration for climate science? You need a third party that just lets the data do the work.

        • One more detail:

          http://retractionwatch.com/2014/05/21/gremlins-caused-errors-in-climate-change-paper-showing-gains-from-global-warming/

          In the retraction watch post above there are two statements that do not match up:

          “The correction points out that the original paper concluded that “there were net benefits of climate change associated with warming below about 2°C”, but the updated analysis shows “impacts are always negative”.”

          But later on that page, Richard Tol is quoted as saying:
          “Although the numbers have changed, the conclusions have not. The difference between the new and old results is not statistically significant. There is no qualitative change either.”

          How can the updated analysis show that impacts are always negative, and the old and new analysis not change the conclusions?

          Also, how does one establish that there is no statistically significant difference between one analysis with some numbers, and another analysis with different numbers?

          More generally, the problem seems to be that scientists seem to hate to be wrong. It’s rare to find a scientist that will publish a result that goes against their pet theory.

          In harmless areas like linguistics this is a particularly strong tendency—once a linguist has taken a position, they will often defend it till the day they die, and often beyond (through their students). Luckily nobody is going to die if we get a theoretical position in linguistics dead wrong.

          But in the climate research world, one would expect much more detachment on part of the researcher from the position associated with their name, versus the facts. It’s amazing to me that one can’t overcome the strong desire to be right in such a critical research area. If I were Richard, I would have beaten everyone else to writing a refutation of my own previous work.

        • > Cochrane collaboration for climate science?

          Their meta-analysis methods are reasonable for well-done two group RCTs but _non-sense_ for most other things.

          For instance, it is not unusual for people to apply Cochrane collaboration methods to areas like observational studies, doing thoughtless things like taking the most adjusted estimate from each study as the least confounded (we know that’s sometimes wrong) but much worse, then taking a weighted average of these. But effects of interest adjusted differently are different, and so they are just combining apples and oranges. Getting something sensible is hard and requires more domain knowledge than even most experts in the domain usually have.
          This seems to be what has happened here. Look, we have 14 or so estimates of a _y_ and an _x_, so we can do regression modelling. But it is not sensible – as is – unless those _y_s are independent measures at (ideally set) values of _x_s, where an underlying single function can reasonably be taken to have generated those _y_s and _x_s (this being the most critical part).

          Likely an article rather than a blog post is required to make this clear (and apparently Greenland and I were not very effective in our book chapter referenced in the wiki, or likely no one reads it), but here is a quick stab.

          Assume, for convenience rather than because it is likely true, that all 14 studies’ data were generated by exactly the same function determined by an intercept, linear, and quadratic term. A meta-analysis makes sense here and is fully possible (the consistency of the intercept, linear, and quadratic terms can be assessed across studies), but to adequately carry it out, six pieces of information are required from each study: estimates and standard errors of the intercept, linear, and quadratic terms. (Actually better would be the 3-dimensional likelihood functions from each study, which can then be contrasted and, if and only if consistent, multiplied.) It just can’t be adequately done with just one number from each study.
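
          A bare-bones, term-by-term sketch of that pooling, with invented numbers for two hypothetical studies (this simplifies to per-term standard errors; the full covariances or likelihood functions mentioned above would be better still):

```python
# Each study reports an estimate and standard error for its intercept,
# linear, and quadratic coefficients; a fixed-effect meta-analysis combines
# each coefficient across studies by inverse-variance weighting.
# Consistency across studies should be checked before pooling.

def pool_term(estimates, std_errors):
    """Fixed-effect inverse-variance-weighted mean of one coefficient."""
    weights = [1.0 / se ** 2 for se in std_errors]
    pooled = sum(w * est for w, est in zip(weights, estimates)) / sum(weights)
    pooled_se = (1.0 / sum(weights)) ** 0.5
    return pooled, pooled_se

# Two hypothetical studies' quadratic coefficients (estimate, std. error):
quad_estimates = [-0.10, -0.15]
quad_ses = [0.02, 0.04]

pooled, se = pool_term(quad_estimates, quad_ses)
print(round(pooled, 3), round(se, 3))  # pooled value favors the more precise study
```

          The same pooling would be repeated for the intercept and linear terms; none of it is possible when each study contributes only a single point.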
          As an aside, Cochrane collaboration’s statistical methods group has banned me from their email group for sending overly provocative comments – defensiveness is indeed a serious problem in science.

        • “As an aside, Cochrane collaboration’s statistical methods group has banned me from their email group for sending overly provocative comments – defensiveness is indeed a serious problem in science.”

          Wow really? I find that quite unacceptable.

  9. Note that the Journal of Economic Perspectives is unique among economics journals: papers are usually solicited; they are selected and written with a broad audience in mind (policymakers and educated non-economists); it is highly-ranked and cited but does not really count as a publication for purposes of P&T (except as outreach), etc.

    • Along those lines, the answer to Andrew’s question “it makes me wonder how the paper got through the review process” is that the JEP is not peer reviewed. From the JEP’s submission guidelines (http://www.aeaweb.org/jep/submissions.php), the review process is:

      “Almost all JEP articles begin life as a two- or three-page proposal crafted by the authors….The proposal provides the editors and authors an opportunity to preview the substance and flow of the article. For proposals that appear promising, the editors provide feedback on the substance, focus, and style of the proposed article. After the editors and author(s) have reached agreement on the shape of the article (which may take one or more iterations), the author(s) are given several months to submit a completed first draft by an agreed date. This draft will receive detailed comments from the editors as well as a full set of suggested edits from JEP’s Managing Editor. Articles may undergo more than one round of comment and revision prior to publication…. The JEP is not primarily an outlet for original, frontier empirical contributions; that’s what refereed journals are for!”

      • Sol:

        Thanks for the background. The Reinhart and Rogoff paper was published in a non-refereed journal too, right?

        Just to be clear, my problem with Tol’s paper has nothing to do with whether it is original or frontier but rather that (a) it was sloppy, and (b) its model doesn’t make sense. But I can see how these problems would not get caught by a journal editor. An editor with no particular subject-matter expertise would not notice the sloppiness (those errors were caught by people who knew the details; see for example that post by Bob Ward linked to above), and an editor would not notice the problems with the model either, based on a quick read. Indeed I don’t think I would’ve thought so hard about what made the model not work, if I weren’t pushed to reflect upon it after thinking about the quadratic curve. (Rahul’s comment on our earlier post was helpful in making this point.)

        • Andrew:

          Yep, the Reinhart and Rogoff paper was first released as an NBER working paper and then published in “American Economic Review: Papers & Proceedings,” the conference proceedings for the American Economic Association’s largest annual meeting. Neither of these formats is peer reviewed, although many folks treat them as if they were.

          I agree with your points on what’s noticeable to an editor and with your criticisms of the meta-analysis. I was just clipping the quote for background, since the JEP clearly sets itself apart from “refereed journals.”

          Tol’s issues might have been caught by a referee, but I doubt that the Reinhart and Rogoff issue would have been caught since it required going into the data and journals that require data publication don’t usually ask for it until publication. Maybe one way to avoid these issues is to start requiring that replication code be submitted for review alongside the manuscript.

  10. The central “fact” of Lenore Weitzman’s The Divorce Revolution (1985) is still cited today (http://www.huffingtonpost.com/brendan-lyle/after-divorce-women-rebou_1_b_1970733.html). This central fact is:

    According to George Mason University Sociology and law professor Lenore Weitzman in her book “The Divorce Revolution,” a typical woman endures a 73 percent reduction in her standard of living after a divorce. Her typical ex-husband enjoys a 42 percent increased standard of living [from the above link, which is from 2012].

    The actual fact is:

    Richard Peterson of the Social Science Research Council published a study of Weitzman’s 73/42 statistic, which was arrived at using an “income/needs ratio.” After precisely recreating Weitzman’s study using the data sample and methods outlined in The Divorce Revolution, Peterson reported his findings: Weitzman’s figures were actually the result of a computer transcription error and dramatically overstated the case. After correcting her errors, Peterson arrived at a 27 percent decrease in standard of living for women and a 10 percent increase for men in the first year after divorce-figures more in line with other studies dealing with this topic.

    It took years for Peterson to get the data from Weitzman, but the correction made almost no difference: the original “facts” still stand.

  11. Pingback: The gremlins did it? Iffy statistics drive strong policy recommendations « Statistical Modeling, Causal Inference, and Social Science

      • I don’t know how much would be important. I am just wondering if people only study the economic effects of warming (implicitly giving zero prior probability to cooling).

        • Unless we suddenly get a series of massive volcanoes, sustained cooling is difficult to imagine.

          For a really quick graph, see NASA GISTEMP Land-Ocean.
          Stick that data in Excel, and simply compute the SLOPE to ending year, for 10, 15, 20, 25-year intervals. That converts hard-to-eyeball jiggly diagonals into rates of change, by interval, for each year, which makes it hard to cherry-pick specific dates.
          Regression slopes chart does this.
          It is possible to have a tiny 10-year cooling trend (barely) if you pick 2011 or 2012, but since 1980, 10-year trends average about 0.017C/yr, while jiggling between 0 and 0.03.

          There are no 15-, 20-, or 25-year cooling trends since the mid-1970s, and most of those slopes have been between 0.01 and 0.02C/yr.

          Even with all the oceanic jiggles, conservation of energy still works.
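          For anyone without Excel, the slope-to-ending-year exercise is a few lines of Python; the series below is simulated with roughly the trend mentioned above (it is not GISTEMP data):

```python
import numpy as np

def trailing_slopes(years, temps, window):
    """OLS slope (deg/yr) of the `window`-year span ending at each year."""
    slopes = {}
    for i in range(window - 1, len(years)):
        x = years[i - window + 1 : i + 1]
        y = temps[i - window + 1 : i + 1]
        slopes[years[i]] = np.polyfit(x, y, 1)[0]
    return slopes

# Simulated anomaly series: 0.017/yr trend plus weather-scale noise.
rng = np.random.default_rng(0)
years = np.arange(1975, 2015)
temps = 0.017 * (years - 1975) + rng.normal(0.0, 0.08, years.size)

s10 = trailing_slopes(years, temps, 10)   # jiggles around the trend
s25 = trailing_slopes(years, temps, 25)   # much steadier
```

          With the longer window the noise averages out, which is why 20- and 25-year slopes barely move while 10-year slopes can dip near zero.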

        • John and Chris,

          Your answers are tangential to my question, but it appears that, to the best of your knowledge, no one has ever attempted to model the economic effects of cooling. You both agree that if this is the case, it is justified because sustained cooling has such a low prior probability that it is not worth funding.

          Let me know if that is inaccurate.

        • > You both agree that if this is the case it is justified because sustained cooling has such low prior probability that it is not worth funding.

          I concur that the prior for sustained cooling is very, very low. I’ll add that my prior is physics-based.

        • “I concur that the prior for sustained cooling is very, very low. I’ll add that my prior is physics-based.”

          I’m thinking a good way to explore the behavior of these models (both climate and economic) would be to use them in ways unexpected by the modelers. If they are good models, they should still give plausible results. Maybe there is a string of volcanic eruptions, the sun starts dimming, or a Mr. Burns decides to partially block out the sun.

        • 1) I haven’t seen any cooling studies of late, but then I haven’t explicitly looked for them, and all of the studies I have seen quite naturally study the effects of warming. Maybe one can go back to “nuclear Winter/Fall” or to historical data from LIA …

          2) “tangential”: maybe, maybe not. From experience, there is a set of people who think or want to think that global cooling is likely, imminent, or generally to be taken seriously. Skeptical Science even has several such in its list:

          #4 It’s cooling.

          #9 We’re headed into an ice age: not any time soon.

          #34 Glaciers are growing

          3) Anyway, if someone wants to model the economic effects of cooling, they are welcome to spend the serious effort to do that, akin to redoing big chunks of IPCC WG II … but I wouldn’t expect any public funding for this.

        • 1. There are plenty of paleoclimate models which use GCMs to analyze periods of cooling. In such a case the models are used to evaluate assumptions about external forcing (solar, volcanic, etc) by comparing results against proxy data for temperature, precip, etc.

          2. IAMs assume temperature and forcing scenarios for predictions, so if you wanted to see what would happen to GDP using an IAM you would have to start with some sort of reasonable forcing scenario, and as others have pointed out there are no such things absent violent and long-lasting volcanic activity. Good luck with that.

        • Eli,

          “1. There are plenty of paleoclimate models which use GCMs to analyze periods of cooling. In such a case the models are used to evaluate assumptions about external forcing (solar, volcanic, etc) by comparing results against proxy data for temperature, precip, etc.

          2. IAMs assume temperature and forcing scenarios for predictions, so if you wanted to see what would happen to GDP using an IAM you would have to start with some sort of reasonable forcing scenario, and as others have pointed out there are no such things absent violent and long-lasting volcanic activity. Good luck with that.”

          Do you have a link to the code for one/some of these models?

        • Stick that data in Excel, and simply compute the SLOPE to ending year, for 10, 15, 20, 25-year intervals. That converts hard-to-eyeball jiggly diagonals into rates of change, by interval, for each year, which makes it hard to cherry-pick specific dates.

          This is a specific example of a Savitzky-Golay filter. There are probably filters better suited to the task (though a running regression is easy to explain).
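          SciPy happens to ship this filter, so the smoothing-plus-derivative idea is easy to try; a sketch on simulated data (the window length and polynomial order here are arbitrary choices, not recommendations):

```python
import numpy as np
from scipy.signal import savgol_filter

# Simulated temperature anomalies: linear trend plus noise.
rng = np.random.default_rng(1)
t = np.arange(1975, 2015)
temps = 0.017 * (t - 1975) + rng.normal(0.0, 0.08, t.size)

# Savitzky-Golay smoothing: local quadratic fit over a 15-point window.
smoothed = savgol_filter(temps, window_length=15, polyorder=2)

# The same filter can return the smoothed first derivative (deg/yr),
# i.e. the local rate of change, with delta = 1 year between samples.
rate = savgol_filter(temps, window_length=15, polyorder=2, deriv=1, delta=1.0)
```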

        • They’re both linear time-invariant filters, so the answer depends on the frequency response and how those relate to the power spectra of the components of the temperature time series. Some things that would be good are: no ripple in the passband; a stopband frequency response with a slope that decreases fast enough to counteract the effect of taking the derivative. (Taking a derivative can itself be viewed as a linear time-invariant filter — one that amplifies high-frequency noise).

        • Reading the Wikipedia article on Savitzky-Golay filters, they are in fact a weighted moving average, the weights being the “convolution coefficients” mentioned, which are chosen carefully so that the resulting weighted average equals the value of a low-degree polynomial least-squares fit to the adjacent points.

          This general technique, of using variously weighted moving averages, is well explained in a little book by Richard Hamming (Digital filters, published by Dover). He specifically talks about “smoothed derivatives” and analyzes them in terms of frequency response.

          Corey is right that you want enough smoothing that taking the derivative doesn’t undo the smoothing and amplify the noise. I think the basic point made above is probably correct though, there have not been long-duration cooling trends recently.

          My big question is: so what? We’re talking about a super-high-dimensional nonlinear dynamical system. It seems perfectly plausible to me, based on my knowledge of chaotic dynamical systems, that we can’t completely rule out some enormous snowfall that blankets the earth so heavily in snow that we get a large change in albedo and within a decade we’re in the middle of an ice age. I’m not saying this is LIKELY, just that the range of behavior that chaotic systems are capable of is much broader than the range anyone seems to be considering. Since we have ABSOLUTELY NO high-resolution data about how ice ages occur (i.e., month-by-month temperature, albedo, snowfall, etc. for the onset of the last several ice ages), I think we’re fooling ourselves with the small degree of uncertainty we use in things like IPCC.
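          On the filtering sub-thread: Hamming’s frequency-response view of smoothed derivatives can be checked numerically, since the Savitzky-Golay derivative is just an FIR filter whose coefficients SciPy exposes (window and order below are again arbitrary):

```python
import numpy as np
from scipy.signal import savgol_coeffs, freqz

# FIR coefficients of a 15-point, quadratic-fit, first-derivative filter.
b = savgol_coeffs(15, 2, deriv=1, delta=1.0)

# A derivative filter must annihilate constants, so its DC gain
# (the sum of the coefficients) is numerically zero.
dc_gain = b.sum()

# Full frequency response, for comparing against alternatives such as
# differencing a plain running mean; |H| shows how much high-frequency
# noise the smoothing suppresses before the derivative amplifies it.
w, h = freqz(b, worN=512)
gain = np.abs(h)
```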

        • Daniel, the possibility of a chaotic jump to ice age is basically irrelevant to the policy question — that question turns on whether the observed warming is anthropogenic (in which case we can try to do something about it) or not.

        • See that Steve Easterbrook TedX talk I mentioned before about models.
          In addition:
          1) Ice ages (i.e., ice sheets over much of Canada and N. Europe) aren’t caused by random chaotic snowfalls.

          2) As per this, 4My ago, there was too much CO2 in the air for ice ages. As weathering drew down CO2, Earth entered the zone where Milankovitch forcing and CO2/water vapor feedbacks could generate ice age/interglacial oscillations.

          3) Our interglacial is unlike any for the last 800ky; see Bill Ruddiman’s Earth Transformed (2013). A combination of natural (Milankovitch) trends and human changes has kept global temperature/CO2 in a very narrow band for 8,000 years, since agriculture started.

          4) We are departing that band of temperature/CO2 on the high end, for better or (likely) worse, depending on how far we go, essentially permanently on human time scales. See David Archer’s The Long Thaw.

          Certainly, it could get cooler, given yearly Pinatubos, or nuclear war, or a major asteroid collision, but a random snowfall that pushes us back into an ice age just does not work with the current levels of solar insolation, greenhouse gases, and ocean heat content.

          “Since we have ABSOLUTELY NO high resolution data about how ice ages occur (ie. month by month temperature, albedo, snowfall etc for the onset of the last several ice ages), I think we’re fooling ourselves with the small degree of uncertainty we use in things like IPCC.”

          That seems to be an argument from ignorance. Data at that resolution would have been convenient to have, but as it turns out it’s possible to get some answers with what’s available. Check Google Scholar for Ayako Abe-Ouchi’s recent work (with various co-authors). That said, solving the glaciation problem doesn’t help us much with setting error bars on things like climate model projections for 2100.

        • John,

          “2) As per this, 4My ago, there was too much CO2 in the air for ice ages. As weathering drew down CO2, Earth entered the zone where Milankovitch forcing and CO2/water vapor feedbacks could generate ice age/interglacial oscillations.
          3) Our interglacial is unlike any for the last 800ky, see Bill Ruddiman’s Earth Transformed (2013). A combination of natural (Milankovitch) trend and human changes has kept global temperature/CO2 in a very narrow band for 8,000 years, since agriculture started.
          4) We are departing that band of temperature/CO2 on the high end, for better or (likely) worse, depending on how far we go, essentially permanently on human time scales. See David Archer’s The Long Thaw.”

          Please be careful to distinguish between theory (however severely tested) and data. It is difficult to have a constructive discussion when people fail to do this, the conversation is quickly sidetracked.
          https://en.wikipedia.org/wiki/Reification_fallacy

        • Steve Bloom: when I said “ABSOLUTELY NO high resolution data” I really meant instrument data. I assume we do have some kind of reconstruction data, but reconstructions are much more model dependent and measurement error is harder to quantify. Still, based on further comments, it does look like rapid changes are certainly plausible. I’m putting this here just to clarify that I meant “instrument data” from thermometers and snowfall gauges and wind gauges etc.

        • > What are the advantages/disadvantages of this method vs a moving average?

          In this case both have the disadvantage of not incorporating any physics. Neither tells you squat about cause and effect or incorporates any physically-based drivers or constraints. That’s not to say that moving averages or Savitzky-Golay aren’t ever useful – they can be very useful – just be aware of the limits of their utility. When the conditions which gave rise to an inferred trend change then extrapolations based on the estimated trend are prone to significant errors.

        • John Massey said:

          “Unless we suddenly get a series of massive volcanoes, sustained cooling is difficult to imagine.”

          Why?

          The planet has been in a cooling trend for 50 My, for 10 My, for 1 My, for 8 ky.

          The planet warms and cools abruptly. It can change temperature in a decade by half the temp difference between current and glaciation.

          So, why don’t you believe the planet can cool abruptly?

          [1] Hansen and Sato, 2010, “Paleoclimate Implications for Human-Made Climate Change”, Figure 1: http://www.columbia.edu/~jeh1/mailings/2011/20110118_MilankovicPaper.pdf

          [2] Coxon, P. and McCarron, S.G. (2009) Cenozoic: Tertiary and Quaternary (until 11,700 years before 2000), Figure 15.21, p391: http://eprints.nuim.ie/1983/

          “Figure 15.21 The stable isotope record (∂18O) from the GRIP ice core (histogram) compared to the record of N.pachyderma a planktonic foraminiferan whose presence indicates cold sea temperatures) from ocean sediments (dotted line). High concentrations of IRD from the Troll 8903 core are marked with arrows. After Haflidason et al. (1995). The transition times for critical lengths of the core were calculated from the sediment accumulation rates by the authors and these gave the following results:
          Transition A: 9 years; Transition B: 25 years; and Transition C: 7 years. Such rapid transitions have been corroborated from the recent NGRIP ice core data.”

          [3] Wallace S. Broecker, 1995, ‘Chaotic Climate’, http://www.slc.ca.gov/division_pages/DEPM/Reports/BHP_Port/ERRATA_CSLC/Vol%20II/EDC%20Attachments%20Vol%20II-02.pdf

          [4] Jose A. Rial, et al, 2004, ’Nonlinearities, Feedbacks and Critical Thresholds within the Earth’s Climate System’, http://www.globalcarbonproject.org/global/pdf/pep/Rial2004.NonlinearitiesCC.pdf

        • So, why don’t you believe the planet can cool abruptly?

          You’re obviously an innumerate troll, but I’ll try to explain it gently. We have known radiative forcings and feedbacks. Energy is coming into and being trapped in the system at a known rate. There are known reservoirs (and sinks) of energy in the form of the poles, the ice sheets at the poles, the cold waters of the deep ocean, and the water and land itself. There are known channels by which energy can be added to and subtracted from those reservoirs. The atmosphere is the most variable and the smallest reservoir and just helps move the energy around.

          So absent a lot of small volcanoes or a super volcano or an asteroid impact, a big nuclear war, or some other massive perturbation of solar irradiance, we know where the energy is, and how quickly the reservoirs or sinks are filling or discharging. So yeah, when you start mucking around with the channels in and out of those reservoirs (sinks), then yeah, the planet can cool in a hurry. But it will only be temporary. So go ahead, make my decade. You’ve already wasted three of them; what more problems can a couple more cause? A lot. But it sure won’t be an ice age, and if you think it is, I can’t help you. I doubt anyone can help you anymore, that’s how out of touch with reality you are.

        • > So, why don’t you believe the planet can cool abruptly?

          Me personally? Because I accept the First and Second Laws of Thermodynamics.

          (Actually, I accept all three but it’s the first two which are germane to this discussion.)

        • Chris,

          At least some people claim that the models do not obey the second law. I don’t know. Perhaps you can comment:

          “Numerical models of the atmosphere should fulfill fundamental physical laws. The Second Law of thermodynamics is associated with positive local entropy production and dissipation of available energy. In order to guarantee this positivity in numerical simulations, subgrid-scale turbulent fluxes of heat, water vapor, and momentum are required to depend on numerically resolved gradients in a unique way. The task of parameterization remains to deliver phenomenological coefficients.

          Inspecting commonly used parameterizations for subgrid-fluxes, we find that some of them obey the Second Law of thermodynamics, and some do not. The conforming approaches are the Smagorinsky momentum diffusion, phase changes, and sedimentation fluxes for hydrometeors. Conventional turbulent heat flux parameterizations do not conform with the Second Law. A new water vapor flux formulation is derived from the requirement of locally positive entropy production. The conventional and the new water vapor fluxes are compared using high-resolution radiosonde data. Conventional water vapor fluxes are wrong by up to 10% and exhibit a negative bias.”

          How is local material entropy production represented in a numerical model? Almut Gassmann and Hans-Joachim Herzog. Quarterly Journal of the Royal Meteorological Society. DOI: 10.1002/qj.2404
          http://onlinelibrary.wiley.com/doi/10.1002/qj.2404/abstract

        • Oh I get it. You don’t believe that carbon dioxide is an infrared absorbing greenhouse gas, that carbon dioxide is the dominant radiative forcing, that the net heat gain of the planet is positive, and/or that some unknown forcing or feedback miraculously overwhelms the carbon dioxide forcing. Good luck with that, because that’s all you need to know, in addition to the extreme magnitude of the integrated energy being pumped into the system every second. At this level, the conservation of energy is all you need to know. It is not a pretty picture.

        • Thomas,
          “Oh I get it. You don’t believe…”

          If this is directed at me it is incorrect. I have only taken a cursory look at the climate science literature (find my post about Antarctica above), and found evidence for the same types of flaws I have observed in my own field (which I trust very little): overuse of averaging (little exploration of the data), weak investigation of possible methodological and instrumental sources of error, and pointless disproving of a nil null hypothesis.

          *There is only a single other point which I have been asked to not discuss further on this blog.

          “At this level, the conservation of energy is all you need to know.”
          This claim seems to be overstated. Can you expand on it?

        • I have only taken a cursory look at the climate science literature

          It shows. Maybe you missed the part where carbon dioxide is the dominant radiative forcing agent. Spencer Weart is your man. You’ll have to do your own research. It’s a waste of time even discussing this with you until you understand the fundamentals, which obviously you don’t.

        • Thomas,

          “It’s a waste of time even discussing this with you until you understand the fundamentals, which obviously you don’t.”

          Discussing what? Your original comment does not seem related to anything I have posted in this thread.

        • You are just blabbering on without understanding or knowing anything about the subject or the paleorecord, and making naive statements without context or justification. People here are very knowledgeable. You are challenging nothing and your questions represent mere trolling. You are just here to waste everyone’s time and make it look good with the occasional reference.

        • Thomas,

          I am somewhat surprised at your reaction to my posts. I wrote the long post below in the hope of changing your perception that my asking questions regarding climate issues is “trolling”. You earlier wrote: “Spencer Weart is your man,” which suggests you may be knowledgeable on this topic. Perhaps this will also induce you to offer informative responses. If you still consider it trolling after reading, I suggest refraining from troll-feeding in the future, since those discussions do not accomplish anything positive. It is at best just noise. If anything, doing so associates your views with irrationality in the eyes of anyone reading.

          I would guess that you have had disagreeable experiences during online discussions involving climate change in the past. I have a somewhat unusual “angle” that may not fit into whatever stereotype you have grouped me with. If that is the case, this would not be my first experience with this problem. It is unfortunate that this makes it difficult to discuss the topic (and other “controversial” topics) with people knowledgeable about the subject; as a result I (informally) try out different approaches to see which is most likely to elicit responses I find informative. Hence my surprise at your response.

          If you look closely at my posts they are all regarding information about specific evidence I have come across that does not appear consistent with the conception of climate science I gather from the other posters. I’d characterize this as the behavior of a skeptical student, which has seemed to be the most successful. So I am somewhat surprised at your reaction.

          It is usually best to avoid being overly aggressive, since this commonly results in responses like yours, which is unproductive for all involved. However, appearing argumentative can sometimes elicit useful information if someone knowledgeable happens to take it as a challenge. I suspect when this occurs people respond not to win an argument, but with the goal of attracting “lurkers” to the discussion (who are more likely to follow vigorous debate). This provides an opportunity to demonstrate how rational their behaviour is vs. that of those who disagree with them.

          On the other hand, with too friendly or unskeptical a tone, people are not interested in responding other than out of politeness. Those responses lack detail (e.g., pointing toward a textbook containing high-level summaries that do not address the specific issue at hand), and usually follow-up questions go ignored or are similarly uninformative.

          In response to your specific comments:
          >”You are just blabbering on without understanding or knowing anything about the subject or the paleorecord, and making naive statements without context or justification.”

          Please point out where I “blabbered on” or made naive statements, so I can avoid appearing this way in the future. I have not claimed to be knowledgeable about climate science or the paleorecord.

          >”You are challenging nothing and your questions represent mere trolling.”

          I never claimed to be challenging something. My questions are an effort to induce knowledgeable people to offer commentary informed by domain knowledge (which I lack here) and lead me to specific references they consider informative on the topics or evidence from that field that captured my interest. My interest is not necessarily in climate science per se, it is more so to learn how researchers in that field have dealt with data analysis, theory/modeling and interpretation in the hope I can apply some of what I learn to my own problems.

          For example, the Savitzky-Golay filter may be superior to the windowed average I am currently using to summarize some timeseries. This may also be of interest for technical analysis.

          From the responses here to my question about modelling the effects of sustained cooling, I gather that it apparently has not been done and is unlikely to be done in the near future. Because I am working on some models of my own, I may be able to learn from the way climate modelers solved the problems they faced. Figuring out what they are doing would not be a trivial task, but my motivation to try increases if my efforts could also contribute something currently missing (however justified its absence) to the field.

          My comment about the Antarctica paper had a few goals (which have gone unachieved). First, it was an example of the inability of the usual method of writing review articles and performing meta-analyses to deal with “the devil in the details”; perhaps there are some examples out there that should be emulated. Also, I thought perhaps someone might take the time to point out where my interpretation of that data could be flawed, thus justifying the original researchers’ lack of interest in instrumental error as an explanation for their results.

          I am also interested in what justifications could be put forward for the use of strawman NHST and the focus on averaging in that case, although I doubt anything on that front would impress me. Another aspect is that the paper was in a Nature journal, and these are commonly perceived as among the most “prestigious” journals. I cannot fathom the reasons for this from what is published there on topics I am familiar with, but perhaps this is a case of prestige by association with high-quality publications on other topics.

          My investigation convinced me that the quality of the research reports and peer review of Nature Geoscience are not necessarily superior, which is inconsistent with the high degree of confidence climate scientists appear to have in their conclusions. Admittedly, this is based on assessing a single paper which may not be representative.

        • I am somewhat surprised at your reaction to my posts

          As well you should be, but I do hope you are not further surprised when I decline to read any more of them, including and especially this one. Your questions are not knowledgeable, you appear unfamiliar with the basic fundamental science, you have not taken the time even to read Weart, and yet you insist there is something seriously wrong with the edifice of planetary science and geophysics. I wish you the best of luck! Unfortunately, I have no openings requiring your particular skills at this time, but I will keep any new results you derive on file. Thanks.

        • Thomas,

          “I do hope you are not further surprised when I decline to read any more of them”

          I’ve seen no indication you read any of them, but anyway you did inspire me to clarify some of my thoughts on this type of interaction on the off-chance you had.

          “you insist there is something seriously wrong with the edifice of planetary science and geophysics”

          Where?

        • Question wrote: “How is local material entropy production represented in a numerical model?”

          When I made my comment re the 2nd Law I was thinking of heat transfer not entropy. The 2nd Law places constraints on the direction of heat transfer. If Object A is warmer than Object B then the net heat flow is from A to B unless you do work on the system. Similarly, a medium at a uniform temperature isn’t going to spontaneously get warmer on one side and colder on the other. (A refrigerator doesn’t without a motor.) Taken in combination with the atmosphere we’ve got (and are likely to have in the future) the 1st and 2nd Laws tell me that it’s a stretch to believe – short of volcanic activity or some other phenomenon which has the effect of increasing Earth’s effective albedo – that there’s a plausible mechanism for abrupt cooling. Can one contrive abrupt cooling scenarios which don’t involve high altitude aerosols or big increases in surface albedo? Sure. Are those scenarios plausible? Not particularly.

          Re the RMS paper, it’s outside my area of expertise. That said, I can parse the Abstract. The authors write “The conforming approaches [with respect to the 2nd Law] are the Smagorinsky momentum diffusion, phase changes, and sedimentation fluxes for hydrometeors. Conventional turbulent heat flux parameterizations do not conform with the Second Law… Conventional water vapor fluxes are wrong by up to 10% and exhibit a negative bias.” So, not knowing a great deal about the fine points of the numerical models in question, my instinct would be to choose a conforming model, particularly since they suggest that the conforming models are more accurate in predicting water vapor fluxes. I don’t know how alarmed I should be by the non-conforming models. Does the 10% water vapor error propagate or is the error always within 10% over the timescales of interest? If the authors are finding that the nonconforming simulations create conditions where there’s net heat flow from a cold cell to a warm one and that the net transfer is greater than the work done on the system – effectively a Carnot efficiency >1 – then that would be noteworthy. The bottom line though is how well the models predict physical observables. Entropy isn’t a physical observable. It’s a reality check – you compute it from physical observables, you don’t measure it directly. Even if a model isn’t rigorously correct with respect to First Principles it can still produce useful results. You just need to understand the limits of interpretation of the results produced.

        • I don’t think the anonymous lady or gentleman understands that the previous dramatic downward excursions in temperature were the result of the rapid warming and melting of large mid-latitude temperate-zone continental ice sheets after a long period of intense glaciation, and the subsequent reorganization of ocean currents. Those ice sheets no longer exist, and what we have left now resides safely at the poles. If a dramatic reorganization of ocean currents were to occur for some reason, and consistently drive the cold deep waters of the ocean to the surface, we would indeed be in big trouble, hence the climate alarmism. But those temporary temperature excursions would still be subject to the overlying continuous warming trend, and would only hasten the inevitable overturning of the oceans into an Eocene-like climate, which is clearly where we are headed now.

        • Chris,

          Thanks for your interpretation of the claims in the abstract. On reading the paper I found too much jargon to interpret it without more effort than I am willing to expend. My understanding was that while some aspects of the models (I could not find where they referenced one explicitly) violate the second law, others do not, so it is not simply a matter of choosing which model to use. I may be wrong on that, and it also wasn’t obvious to me exactly which models are guilty of this.

          Thomas,

          “I don’t think the anonymous lady or gentleman understands that the previous dramatic downward excursions in temperature were the result of the rapid warming and melting of large mid-latitude, temperate-zone continental ice sheets after a long period of intense glaciation, and the subsequent reorganization of ocean currents…”

          What evidence would convince you that that theory is too flawed to be useful (i.e., falsify it)?

  12. A very small note wrt the wrong sign on the PAGES estimate. It is interesting that the table in Tol 2009 showed that the range of estimates for the various regions was between -0.5 and -11.4% GDP. How this results in an increase of +2.5 is left as an exercise for the reader, because Tol provides no hint of how it was done (even if you accept -2.5).

  13. > Tol’s paper is a meta-analysis in which he combines several published projections of the economic effects of global warming, in order to produce some sort of consensus estimate.

    I said this in the original discussion and will say it again: I don’t see a legitimate basis for taking the published projections seriously. They’re flights of fancy. Even if Tol’s analysis of the data he chose to work with had been rigorous I’d still give near-zero weight to his conclusions.

    • I’m with you. I’d be unsurprised if these numbers like 2.5% and -11% or whatever have “true” error bars of about 40% around them, at which point it’s obvious that we’re purely chasing noise.

      Is it so hard to believe that, say, 40 years from now we’ll be on average either twice as rich as we are now, or half as rich? Things that could seriously affect this include, for example:

      1) Invention of commercially viable fusion technology which makes electricity cost say $0.0001/kWh instead of currently around $0.10 / kWh, allowing for commercially viable climate control via enormous power plants whose whole purpose is to beam microwaves into space or waste heat into the atmosphere to control the weather and climate.

      2) Eruption of the Yellowstone super-volcano which produces a near extinction event and initiates a 10k year ice age

      3) Mode switching in the chaotic weather system, and the initiation of either a massive desertification or an ice-age, leading to 95% loss of crop growing land.

      I’m not saying any of these are highly likely, but they are mildly plausible, and their impact so large that averages are quite misleading.

      This is in fact my whole problem with “climate science”. We’ve fit models to the last hundred years of weather, and to several thousand years of noisy historical “reconstruction” data from tree rings and ice cores and coral reefs etc., but since we only have high-resolution data for 100 years or so, we have a highly biased view of how climate/weather should behave. Were there 100-year periods at any time in the last 10 million years in which massive and drastic changes occurred? I don’t think we have enough high-resolution info on ice ages to say how quickly they can occur or how quickly they can dissipate. But I’m not an expert on what data we have. I’d be happy to hear how people are reconstructing year-by-year climate at the onset and dissipation of the last 5 ice ages. Anyone?

      • http://www.dailymail.co.uk/sciencetech/article-1227990/Ice-Age-took-just-SIX-months-arrive–10-years.html

        This seems sensationalist but the point is that perhaps large changes over just a few months or years are perfectly plausible. It is certainly easy to mathematically create simple nonlinear dynamical systems which show this mode-switching type behavior.
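
        To make the mode-switching claim concrete, here is a minimal toy sketch (my own invented example, not a climate model): an overdamped particle in a double-well potential with noise. The deterministic dynamics have two stable states, and moderate noise makes the trajectory hop between them at irregular times, exactly the kind of sudden regime switching described above.

```python
import numpy as np

def double_well(n_steps=200_000, dt=0.01, sigma=0.55, seed=0):
    """Overdamped Langevin dynamics in the double-well potential
    V(x) = x^4/4 - x^2/2, i.e. dx = (x - x^3) dt + sigma dW.
    The deterministic system has two stable states at x = +1 and x = -1;
    noise makes the trajectory hop between them ("mode switching")."""
    rng = np.random.default_rng(seed)
    x = np.empty(n_steps)
    x[0] = 1.0  # start in the right-hand well
    for t in range(1, n_steps):
        drift = x[t-1] - x[t-1]**3
        x[t] = x[t-1] + drift*dt + sigma*np.sqrt(dt)*rng.standard_normal()
    return x

x = double_well()
# count transitions between the two regimes, ignoring the barrier region
regime = np.sign(x[np.abs(x) > 0.5])
switches = np.count_nonzero(np.diff(regime))
print(f"regime switches: {switches}")
```

        Nothing in the equation announces the switches in advance; they are emergent from the noise and the nonlinearity, which is the worry about fitting only to a stable stretch of data.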

        The fact that the IPCC doesn’t seem to include those scenarios in its 100-year discussion, and doesn’t mention that such changes might already be sunk, makes me highly skeptical of the whole field. (Caveat: I’ve only skimmed IPCC material, and not recently; perhaps there are whole chapters on this, I don’t know.)

        Climate models are fit to the weather, and we have detailed data only from a stable climate over the last 100 or 200 years. (We didn’t even INVENT the thermometer until maybe the 1600s; we didn’t know that energy = heat until Joule’s experiments in the 1840s or so.)

        Fitting climate models to detailed weather data that shows only relative stability MUST force us to reject parameter values that would have led to major mode-switching events over the last 100 years, unless we construct a likelihood function in which massive deviations from our stable climate are plausible. But if we do that – if we fit our climate models to the last 100 years but allow the model some reasonable likelihood of entering an ice age or massive desertification even though neither was actually seen in the last 100 years – I have to assume the range of predictions that would come out of the next 100 years of climate models would expand drastically, to the point of most likely being useless. (I’ve actually had this discussion with people in the “uncertainty quantification” field who work on climate models; most of them have basically agreed with me, but these were informal discussions at meetings that weren’t climate science meetings.)

        Does anyone know whether climate science explicitly addresses this issue, and who does, and how?

        • Careful with using the DM as your primary literature! /snark

          You might consider that the studied lake is located so as to receive maximum benefit from the AMOC. You might also consider that at the present time we don’t have anything like a gigantic glacial lake ready to burst. Even if the Greenland ice sheet collapses fast, it can be nothing like the essentially instantaneous Lake Agassiz outburst. But notice also that despite the large Younger Dryas cooling perturbation (not an “ice age” by any means, note), climate returned to par rather than slipping back into a real glaciation.

        • Yes, as per p.3 of Ruddiman, Kutzbach, Vavrus (2011), the YD dropped CH4 for a while, but CO2 barely changed. In some sense, those graphs are historical analogs of what one would hope are projected curves for the future. The orbital positions and CO2/CH4 values differ, but the overall patterns are mostly similar (up, then down), except ours (up, down, up). As usual, examination of differences sometimes generates insight. One indeed would like to see a set of damage curves, not points.

        • “the point is that perhaps large changes over just a few months or years are perfectly plausible.”

          Correct. See my comment above referencing several authoritative papers on “abrupt climate change”. Here is one example:

          Coxon and McCarron (2009), ‘Cenozoic: Tertiary and Quaternary
          (until 11,700 years before 2000)’

          http://eprints.nuim.ie/1983/1/McCarron.pdf

          Figure 15.21 The stable isotope record (δ18O) from the GRIP ice core (histogram) compared to the record of N. pachyderma (a planktonic foraminiferan whose presence indicates cold sea temperatures) from ocean sediments (dotted line). High concentrations of IRD from the Troll 8903 core are marked with arrows. After Haflidason et al. (1995). The transition times for critical lengths of the core were calculated from the sediment accumulation rates by the authors, and these gave the following results: Transition A: 9 years; Transition B: 25 years; and Transition C: 7 years. Such rapid transitions have been corroborated by the recent NGRIP ice core data.

          I interpret this and other figures as follows:

          1. Very rapid warmings occurred in the past before human GHG emissions; in fact, the climate as recorded in paleo data in Ireland, Greenland and Iceland, warmed from near glacial temperatures to near current temperatures in two events in 7 years and 9 years at 14,500 and 11,500 years ago respectively.

          2. Life thrived during the warming events (Life loved warming and warmer conditions).

          3. There is a periodicity of about 500 to 1000 years represented by minimums at about (eyeballed from the chart):

          years before present:
          16,000
          15,500
          14,500
          13,800
          13,000
          12,600
          11,600
          11,200
          11,000
          10,600
          10,200
          9,500
          9,200

      • See comment above, read Ruddiman or Archer.

        1) Are there any serious fusion researchers who forecast that?

        2) Possible.

        3) Desertification: some, ice-age, no.

        You keep emphasizing the chaotic nature of the system, but on a global scale, locally chaotic behavior does not banish conservation of energy and the greenhouse effect. It’s very hard to have huge, fast swings in CO2, especially downward (even the Younger Dryas state change didn’t budge it much), and it takes time for big changes in global Ocean Heat Content. Temperature swings on land in NY exceed those of the NH overall, and those exceed global swings.

        Let me offer an analogy:
        would people agree that medical researchers do not understand all the exact biochemical processes in which cigarette smoking seems to (but does not always) lead to disease?
        Would anyone then agree that until they do understand all that, and can predict which 12-year-olds will be affected and when, nothing useful is known and it is fine for 12-year-olds to smoke? (and perhaps good for the economy).

        • I think this guy (https://www.ted.com/talks/michel_laberge_how_synchronized_hammer_strikes_could_generate_nuclear_fusion) is serious: he has neutron production to verify that he gets some kind of fusion, he has graphs of fusion progress which suggest that a lot of progress has actually been made, and the Livermore National Ignition Facility people did get more energy out of one of their recent tests than the energy that was absorbed by the target. What would 50 more years of progress produce? I don’t know.

          3) Ice ages are still plausible to me. Obviously conservation of energy holds, but the earth’s albedo is potentially hugely variable. Is it so hard to believe a priori that the albedo could change from, say, 10% on land to 80% (massive snowfalls) and stay there for a decade, and that over large portions of the tropical ocean we could produce high-albedo cloud cover? I don’t see why not. That provides a lot of room for counteracting greenhouse trapping effects.

          Even if it’s a “little ice age” like we had 500 years ago, which was insignificant on the global geological timescale, it could be HUGELY economically important and it’s the economics that matter here.

        • OK, I read your comment above about snowfall/ice ages, but I’m not convinced. I don’t have a specific model in mind, really; I just know that when you fit to stable data you will exclude highly chaotic behavior that is not consistent with that stability. But there isn’t really a reason for me to believe a priori that it’s not consistent with future instability. Chaotic behavior is still perfectly compatible with conservation of energy, provided there is somewhere for that energy to go other than heating the atmosphere.

          And, as I said, even if we’re not talking about global 10k year duration geological ice ages, if you make the central US grain belt non-viable for agriculture, it has a HUGE effect on economics even if the effect is trivial for global climate.

          We’re talking about using ultimately things that look like weather models to predict the weather 50 or 100 years in the future and then predict the economic effects of that weather… the uncertainty involved is simply absurd to me.

          Put another way: what will be the effect of weather during the decade of the 2050s on the price of gold, silver, copper, steel, pork bellies, corn, ethanol, rice, soybeans, cotton, and sugar, and on the human population of the earth?

          There is no conservation of energy type law that can help you with those price predictions.

        • I like his abstract. Thanks for the link. I don’t have subscriber access and don’t have enough stake in this argument to pay to get a copy. From the abstract, it sounds like he basically agrees with me that even if we knew the climate, and we don’t, we still don’t know the economic outcomes, and we especially don’t know the effect of large deviations in weather, or even perhaps moderate deviations that might cause large deviations in economics.

        • Martin: thank you for the link. I read his article and agree essentially 100% with his thesis. I even agree with his analogy of evaluating climate change to evaluating thermonuclear war. The nontrivial plausibility of drastic changes means we need to evaluate those drastic changes within any framework. We need to determine what we can do both to climate **and to the economic sensitivity to climate** and we need to stop fooling ourselves into thinking we know a lot more about all of this than we really do.

          Ultimately, overconfidence bias contributed strongly to Fukushima, Chernobyl, and possibly the Bhopal disaster, and climate disasters are most likely strongly affected by overconfidence bias as well.

        • Daniel,

          I tend to agree with the general idea, and at the same time cannot bring myself to fully embrace it.

          First, anyone following the literature was well aware of the severe uncertainty we face when evaluating the cost of AGW. The Pindyck article does a nice job of summarizing the main points, but when I first read it I was somewhat surprised that it needed to be said at that point. It occurred to me that Pindyck himself had looked into the matter for the first time and was so surprised that he felt he had to write it down. Whatever the Tol disaster means, even he – with his reputation as a “skeptic” – emphasizes that damage estimates are really bad, and that we do not seem to be getting better.

          So, I agree that uncertainty about catastrophic events should be emphasized. I strongly disagree with your invoking of Chernobyl, Fukushima, and Bhopal: these are not instances representing a risk resembling global warming. We might dub them “catastrophic” – and certainly they were and are, on some level – but regarding global warming, we are talking about global risks up to a human extinction event. Nuclear or chemical plants are not systemic risks in this sense, and never were.

          Also, one should not forget that, at the end of the day, we need to introduce specific measures to tackle emissions, not talking points, however sensible they might be. I hinted at this in another comment: with the – very bad – information at hand, we have to find a carbon tax. ‘We have to act urgently’ just isn’t something one can do. This is presumably the reason that Pindyck, after his evaluation of the shortcomings of IAM modelling, concludes that we should go with IAM-based emission tax estimates anyway.

        • Martin: note I wasn’t comparing the extent of the damage at Fukushima etc. to the global warming issue; I was simply comparing the role of overconfidence. Fukushima was situated at its particular location because engineers were overconfident about how large tsunamis could be, and its design was such that loss of cooling power was a catastrophe, but they were overconfident about their ability to maintain that power. Similarly, Chernobyl’s design could burst into flames, but they were overconfident about their ability to prevent such things. Bhopal happened probably because the maintenance people were overconfident about safety systems that would prevent a leak…

          similarly I think the IPCC type projections are highly overconfident about the magnitude, speed, and direction of climate changes. Pindyck’s argument is that we’re falsely overconfident about damage projections.

          You state that we “need to introduce specific measures to tackle emissions” but I believe this is also overconfidence. Perhaps “catastrophic” levels of climate change are already built into existing emissions, and our very best policy option is “mitigate the economic effects that large changes will have” which might require yet MORE emissions.

          My point is: I honestly don’t think we know what the heck will happen or what the heck to do about it, and ESPECIALLY the what to do about it part.

        • Daniel,

          OK, but I am still not sure I agree. The consequence of +20°C warming (or of an equilibrium climate sensitivity of that magnitude) is not “mitigation against the effects” of such warming. We cannot mitigate such an effect (save in a world where Captain Picard is a real person); we and the world as we know it would cease to exist, no exaggeration here. But we can try to reduce the risk of this event occurring by scaling down (and phasing out) greenhouse gas emissions.

          Anyway, this is somewhat off-topic. Perhaps you could flesh out your ideas and write a blog post?

        • If climate reconstruction has any value at all, it most likely at least gives us a range within which we can expect future climate to be bounded. Hundreds of millions of years is long enough that we’ve been through quite a few different ranges of the parameters. If the graph that was linked here a few weeks ago:

          http://en.wikipedia.org/wiki/File:All_palaeotemps.png

          is at all reasonable, a 20°C warming seems out of the question, thankfully. You’re right, I should probably blog up some of it, but my expertise in the substance of climate change matters is far far far far less than my expertise in methods of specifying risk models and evaluating risk and its consequences. To do anything truly useful would require collaboration with a climate expert.

    • Chris G said:
      “> Tol’s paper is a meta-analysis in which he combines several published projections of the economic effects of global warming, in order to produce some sort of consensus estimate.

      I said this in the original discussion and will say it again: I don’t see a legitimate basis for taking the published projections seriously. They’re flights of fancy. Even if Tol’s analysis of the data he chose to work with had been rigorous I’d still give near-zero weight to his conclusions.”

      chapter 8 here seems relevant:
      http://omega.albany.edu:8008/JaynesBook.html
      I originally came across it on this page which contains many useful references:
      http://www.gwern.net/DNB%20FAQ#flaws-in-mainstream-science-and-psychology

      –begin long quote—
      “The classical example showing the error of this kind of reasoning is the fable about the height of the Emperor of China. Supposing that each person in China surely knows the height of the Emperor to an accuracy of at least ±1 meter, if there are N=1,000,000,000 inhabitants, then it seems that we could determine his height to an accuracy at least as good as

      (1 m)/√1,000,000,000 ≈ 0.003 cm (8-49)

      merely by asking each person’s opinion and averaging the results.

      The absurdity of the conclusion tells us rather forcefully that the √N rule is not always valid, even when the separate data values are causally independent; it requires them to be logically independent. In this case, we know that the vast majority of the inhabitants of China have never seen the Emperor; yet they have been discussing the Emperor among themselves and some kind of mental image of him has evolved as folklore. Then knowledge of the answer given by one does tell us something about the answer likely to be given by another, so they are not logically independent. Indeed, folklore has almost surely generated a systematic error, which survives the averaging; thus the above estimate would tell us something about the folklore, but almost nothing about the Emperor.

      We could put it roughly as follows:

      error in estimate = S ± R/√N (8-50)

      where S is the common systematic error in each datum, R is the RMS ‘random’ error in the individual data values. Uninformed opinions, even though they may agree well among themselves, are nearly worthless as evidence. Therefore sound scientific inference demands that, when this is a possibility, we use a form of probability theory (i.e. a probabilistic model) which is sophisticated enough to detect this situation and make allowances for it.

      As a start on this, equation (8-50) gives us a crude but useful rule of thumb; it shows that, unless we know that the systematic error is less than about 1/3 of the random error, we cannot be sure that the average of a million data values is any more accurate or reliable than the average of ten. As Henri Poincare put it: “The physicist is persuaded that one good measurement is worth many bad ones.” This has been well recognized by experimental physicists for generations; but warnings about it are conspicuously missing in the “soft” sciences whose practitioners are educated from those textbooks.”
      –end long quote—

      It makes sense that if systematic error is too large, random error is irrelevant, but I don’t fully follow. Does anyone know where Jaynes is getting the 1/3 from?

      • There are formatting errors in the above quote. I will try to correct them here if not check the sources:

        1/sqrt(1,000,000,000) m= 0.003 cm (8-49)

        error in estimate = S ± R/sqrt(N) (8-50)

      • > As Henri Poincare put it: “The physicist is persuaded that one good measurement is worth many bad ones.” This has been well recognized by experimental physicists for generations; but warnings about it are conspicuously missing in the “soft” sciences whose practitioners are educated from those textbooks.”

        +1 to Jaynes and Poincare.

      • “unless we know that the systematic error is less than about 1/3 of the random error, we cannot be sure that the average of a million data values is any more accurate or reliable than the average of ten.”

        I think he’s getting the 1/3 because sqrt(10) ~ 3

        If the systematic error is much smaller than 1/3 of the random error, then averaging more than 10 values can still decrease the error significantly; but if it’s about 1/3 of the random error, then averaging a huge number of observations will still give you errors about the size of the systematic error, which is to say about the size of the random error when we had 10 observations. In other words, we reach significantly diminishing returns to averaging at about 10 observations when S = R/3.
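
        The rule of thumb is easy to check by simulation. The sketch below uses invented numbers with S set to about R/3: each "opinion" shares a common systematic offset S plus individual noise with RMS R, and the RMS error of the average stops improving once it hits S, no matter how large N gets.

```python
import numpy as np

# Toy version of Jaynes's Emperor-of-China argument. Each opinion is
# truth + S (shared systematic error) + noise with RMS R. Averaging
# shrinks the R/sqrt(n) term, but S survives the averaging.
rng = np.random.default_rng(42)
truth, S, R = 1.80, 0.30, 1.0   # hypothetical numbers, S ~ R/3

def avg_error(n, trials=5000):
    """RMS error of the mean of n opinions (the mean of n iid N(0, R)
    noise terms is N(0, R/sqrt(n)), so we draw it directly)."""
    means = truth + S + rng.normal(0, R / np.sqrt(n), size=trials)
    return np.sqrt(np.mean((means - truth) ** 2))

for n in [10, 1000, 100_000]:
    print(n, round(avg_error(n), 3))   # error plateaus near S = 0.30
```

        With these numbers the error of the 10-opinion average is about sqrt(S² + R²/10) ≈ 0.44, and no amount of further averaging gets below S ≈ 0.30, which is the point of equation (8-50).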

  14. The zero point on the x-axis is a major question that remains up in the air. All the estimates of economic benefit or damage are relative to a reference point, at which the benefit or damage would tautologically be zero. So all the curves of economic damage should include the point y = 0, x = reference x. But what is that reference x value – is it pre-industrial, or current temperatures? And where is that point on Tol’s figure? It would seem to make a huge difference to likely functional fits, whether linear or some other form…
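
    One way to see how much the reference point matters is to fit the same scatter with and without the constraint that the curve pass through (reference x, 0). The points below are hypothetical (temperature, % GDP) pairs in the spirit of the figure, not the actual study estimates; dropping the intercept forces the fit through the chosen zero point, while a free intercept lets the zero point drift.

```python
import numpy as np

# Hypothetical (warming in degrees C, % GDP impact) points -- invented
# for illustration, NOT the actual estimates Tol compiled.
T = np.array([1.0, 2.5, 2.5, 3.0, 3.0, 4.0, 5.5])
I = np.array([0.3, -0.5, -1.4, -2.0, -1.5, -4.1, -6.1])

T_ref = 0.0  # reference warming at which damage is zero by definition

# Constrained fit: I = b1*(T - T_ref) + b2*(T - T_ref)^2 with no
# intercept, so the curve is forced through (T_ref, 0).
X = np.column_stack([(T - T_ref), (T - T_ref) ** 2])
b1, b2 = np.linalg.lstsq(X, I, rcond=None)[0]

# Unconstrained quadratic for comparison: the free intercept a0 moves
# the zero point away from T_ref.
a0, a1, a2 = np.polynomial.polynomial.polyfit(T, I, 2)

print(f"constrained:   I(T) = {b1:+.2f} T {b2:+.2f} T^2")
print(f"unconstrained: I(0) = {a0:+.2f}  (zero point no longer at T_ref)")
```

    With these made-up points the constrained fit gives small initial benefits then damages (b1 > 0, b2 < 0), and moving T_ref would change the fitted shape, which is exactly the sensitivity being complained about.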

  15. Corey: we’re out of reply depth above, so I’m replying here, if you see it.

    The policy question, put bluntly, is basically this: what is E(W(t, P_i(t))) under different policy scenarios P_i, for a plausible range of weather functions W and aggregation functions E?

    From a policy perspective, the existence of sudden jumps in climate is a reason to prefer policies that make us LESS SENSITIVE to such jumps. We’re really not interested in “what is the marginal economic effect of marginal anthropogenic warming on policy” if the warming/cooling, whatever it is, is not marginal. Put another way, (dE/dW)dW is irrelevant when dW (the change in weather) is not small, or when dE/dW is nearly infinite.

    A policy that tries to control climate change through changes in emissions has tradeoffs and costs, ones which have various effects on the well being of humans, some of which are pretty obviously bad (turn off all the coal plants in China today, how many people will die in the next year? How big of a civil war will break out in China?)

    On the other hand, there are policies which don’t try to control the climate, but DO try to control the economic sensitivity to climate. If E is a constant function, we don’t really need to care what the weather does… Most of the policy decisions designed to do this essentially involve burning a lot more fuels to create a lot more wealth…

    The existence of the possibility of large switches in climate rather than marginal changes suggests that reducing sensitivity is potentially much more important. If those large switches are not possible, then there is a different set of policy options that look attractive.

    • Also, the existence of a large amount of in my opinion neglected uncertainty in dW/dP (really a highly multidimensional gradient since P is a multidimensional thing) means that we have a very hard time deciding what P should be even if we have total totalitarian control over P, which is not even close to the case.

      So no, I don’t think uncertainty in weather outcome has negligible effect on policy, in fact I think it’s the entire reason why actual policy has largely gone towards “get rich quickly by burning fossil fuels” because that’s intuitively a lot more certain, has non-discounted (short term) benefits, and potentially mitigates a LOT of problems by making E a more stable function.

      • The implicit assumption behind my comment was that if an ice age descends on the world in the span of ten years, that would be not just an unmitigated disaster but an unmitigatable one. On reflection, that might have been a tad defeatist.

        Serendipitously, the U.S. Chamber of Commerce just released a report on the tradeoffs involved in policy that tries to control climate change through changes in emissions. Paul Krugman blogged about it today:

        …a plan to reduce GHG emissions 40 percent from their 2005 level, so it’s for real action… the Chamber is telling us that we can achieve major reductions in greenhouse gases at a cost of 0.2 percent of GDP. That’s cheap!

        True, the chamber also says that the regulations would cost 224,000 jobs in an average year. That’s bad economics… But even at face value that’s also a small number in a country with 140 million workers.

        So, I was ready to come down hard on the Chamber’s bad economics; but what they’ve actually just shown is that even when they’re paying for the study, the economics of climate protection look quite easy.

  16. One point I’m still confused about is this suggestion by Andrew:

    it would make more sense to consider the different published studies as each defining a curve, and then to go from there.

    How exactly? I mean, how do I know what sort of curve? Every point used by Tol is of the form (Temperature, Impact). How do I convert a (T, I) tuple into a whole curve? I doubt the original papers tell us this. Do they?

    Isn’t this wishful thinking hoping the original work gives us curves when perhaps all they give us are point estimates?

      • Daniel:

        Yes. The “data point” from each paper must come from some model (what we used to call a “computer model”) so I’m proposing that, for each paper, Tol take its model and get a whole curve. Then he can plot the 20 curves and see where they agree and disagree. This will take some work but, hey, it’s what Tol’s career is all about, so it makes sense for him to do the work. And even if it can only be done approximately, I think it would still be a huge step forward beyond what we have now.
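
        A minimal sketch of what this might look like, using three invented stand-in damage functions rather than the actual published models: evaluate each study's curve on a common temperature grid and look at where the curves agree and disagree, instead of interpolating through one point per study.

```python
import numpy as np

# Hypothetical damage curves standing in for per-study models; the
# functional forms and coefficients are made up for illustration.
def study_a(T):  # mild quadratic damages
    return -0.3 * T**2

def study_b(T):  # initial benefits, then damages
    return 0.8 * T - 0.4 * T**2

def study_c(T):  # near-linear damages
    return -1.1 * T

grid = np.linspace(0.0, 5.0, 51)  # common grid of warming, degrees C
curves = {name: f(grid) for name, f in
          [("A", study_a), ("B", study_b), ("C", study_c)]}

# Where do the "studies" agree? Look at the spread across curves at
# each temperature rather than at a single point per study.
stacked = np.vstack(list(curves.values()))
spread = stacked.max(axis=0) - stacked.min(axis=0)
print(f"max disagreement: {spread.max():.1f} % GDP "
      f"at T = {grid[spread.argmax()]:.1f} C")
```

        Plotting the 20 real curves this way would show directly where the models diverge, which is more informative than fitting one curve through 20 heterogeneous points.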

        • I thought these models integrated climate (C) and economic (E) variables, i.e., they were of a form like dC = f(C, E, noise), dE = g(C, E, noise) and so came up with predictions about the joint distribution of the time paths of (C, E). So, for example, they’d give you the expectation C* of C(t) and E* of E(t) at time t; in principle you could then ask about the expectation of E(t) conditional on C(t)=C** != C*, but maybe the models were not set out to give you that information easily (and maybe it does not even make sense to condition on, say, temperature, without looking at what that implies for the other variables/parameter estimates).

  17. To me this was the most damning part:

    And Tol’s remark that outliers provide evidence of nonlinearity

    If 19 of your data points fall, approximately, on a straight line, and one point falls far away, do you (a) believe you need a higher-order polynomial as a model, or (b) suspect that one point is simply wrong?

    I’ve never encountered a situation where I’d choose (a) over (b).

    • Rahul:

      Various people (not just Tol) have argued there is good substantive rationale for believing the curve is nonlinear, so I’m not bothered by nonlinearity. But like you I am bothered by the attitude that one can or should find a curve that fits all the points. Given that the different “points” correspond to completely different models of the world, I’m not surprised that some of them are in great disagreement with others.

      • Andrew:

        Sorry, I wasn’t clear. I’m not bothered by non-linearity either. To restate the analogy: if 19 points fall on a circle and one falls far away, it rather means that the point is wrong, not that the model is a circle with a spike-like appendage.
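
        The "point is wrong" diagnosis can be made mechanical. In the toy example below (simulated data, not Tol's), 19 points lie near a line and one is grossly off; refitting the line with each point left out in turn flags the outlier by its huge leave-one-out residual, with no need to invoke a higher-order model.

```python
import numpy as np

rng = np.random.default_rng(1)

# 19 points near a line plus one gross outlier, mimicking
# "19 estimates roughly agree, one is far off".
x = np.linspace(0, 10, 20)
y = 2.0 * x + 1.0 + rng.normal(0, 0.2, size=20)
y[7] += 15.0  # the outlier

def loo_residual(i):
    """Residual of point i under a line fit to the other 19 points."""
    mask = np.arange(20) != i
    slope, intercept = np.polyfit(x[mask], y[mask], 1)
    return y[i] - (slope * x[i] + intercept)

resid = np.array([loo_residual(i) for i in range(20)])
print("flagged point:", int(np.argmax(np.abs(resid))))  # → flagged point: 7
```

        A higher-order polynomial would instead bend the whole fitted curve to chase point 7, which is option (a), and the leave-one-out check shows why (b) is the more natural reading.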

        • Rahul:

          You need to keep in mind that these points come from different studies and these likely differ in quality and _information content_. That 20th point might come from the group with by far the most expertise and the most informative modelling.

          (Actually, in meta-analyses of RCTs, when studies were quality-appraised, most were low quality, and so one expected the best estimates to be in the minority. Unfortunately, some statisticians unthinkingly suggested robust methods, which would usually decrease the contributions from the best studies because they seemed like outliers, making the combined estimates worse.)

          Usual statistical practice (application of techniques) needs to be replaced with critical statistical thinking to deal with multiple sources of evidence or meta-analysis.

        • I’m guessing some of the estimates are based on the same data. For example, researchers X and Y might both use the same numbers on ice melt in Greenland to estimate change in temperature. Therefore it seems like the person combining results really needs to ‘get under the hood’ of the estimated values in order to combine them. If they are using some of the same data, it is not a meta-analysis of different datasets, but of functions applied to a big mixture of a few datasets.

        • Dan:

          Andrew’s comments at one point suggested they were a mixture of fewer than 14 datasets, and earlier comments [buried somewhere] stressed that they should not be treated as independent.

          I looked at the link to Tol’s most recent? analysis which summarises his use of _regular_ and smoothed bootstrapping of the estimates and I stopped reading…

        • @K? O’Rourke:

          Isn’t the fact that studies differ in quality and _information content_ a given for any sort of meta-analysis?

          Are you saying fitting arbitrary functions to collated data should, in general, be a no-no for *any* meta analysis?

        • Rahul:

          Yes, quality variation always needs to be carefully considered.
          (Some technical issues of that discussed here http://biostatistics.oxfordjournals.org/content/2/4/463.full.pdf )

          If data were actually generated in the studies, statistical consideration requires treating that data as coming from an unknown member of a class of probability-generating models, usually indexed by parameters. A primal or natural assumption (or worst case) is that all of the parameters differ in each study, so the studies are islands unto themselves and it would seem nothing additional can be learned about any study from the joint analysis of all. But from substantive background knowledge, and knowledge about how the studies were conducted, it might be a credible conjecture that one of the parameters is the same in all studies. Now you have to isolate the information about that one parameter, check its consistency, and combine the information about it if consistent, while properly allowing for how the other parameters differ. Parameter transformations can also create common parameters, for instance the sign transformation of treatment effects of varying positive magnitudes. And it could be that just the error measurement model is common, and you get the seeming magic of Stein estimation of the different things being measured.

          It can be fairly simple in certain settings, such as well-done and well-reported, uniformly high-quality two-group RCTs with binary outcomes (e.g., the Cochrane Collaboration’s inverse-variance-weighted methods), and folks often try to use those simple techniques everywhere.

          The studies Tol used _seem_ to have done much more than just generate and summarize data, and they used overlapping data, so it’s not going to be easy. I think I would start as Andrew suggests: try to get curves out of each study and compare and contrast them. The next step would be to try to get measures of uncertainty and quality for each curve.
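
          As a minimal sketch of what the simple end of this looks like (the study numbers below are hypothetical, chosen only for illustration): fixed-effect inverse-variance pooling weights each study’s estimate by the reciprocal of its squared standard error, so noisier studies count for less.

          ```python
          # Fixed-effect inverse-variance pooling: study i reports an estimate
          # y_i with standard error se_i and gets weight w_i = 1 / se_i**2,
          # so noisier studies count for less.

          def inverse_variance_pool(estimates, std_errors):
              """Return the pooled estimate and its standard error."""
              weights = [1.0 / se ** 2 for se in std_errors]
              total = sum(weights)
              pooled = sum(w * y for w, y in zip(weights, estimates)) / total
              pooled_se = (1.0 / total) ** 0.5
              return pooled, pooled_se

          # Hypothetical log-odds-ratio estimates from three two-group RCTs:
          pooled, se = inverse_variance_pool([-0.4, -0.1, -0.3], [0.2, 0.3, 0.25])
          ```

          Everything above is about when this weighting is even meaningful: it assumes the studies estimate a genuinely common parameter and that the reported standard errors can be trusted.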

        • Maybe, but in practice this sort of simple aggregation seems ubiquitous. Especially in fields I read.

          Say I want to know heat-transfer coefficients as a function of heat-exchanger geometry or flow rates: often people will publish an aggregate study collecting data points from previous studies and then fitting an empirical equation on top.

          Mostly it seems to work. OK, only as well as the quality of the original data allows, but that’s hardly something a meta-analyst can do anything about.
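
          As a minimal sketch of that aggregation style (all data points below are hypothetical, chosen only to mimic a Nusselt–Reynolds correlation): collate (Re, Nu) points from several studies and fit an empirical power law Nu = a * Re**b by least squares on the log-log scale.

          ```python
          import math

          # Fit an empirical power law y = a * x**b to pooled data points by
          # ordinary least squares on the log-log scale (a straight-line fit
          # of log y against log x).

          def fit_power_law(x, y):
              lx = [math.log(v) for v in x]
              ly = [math.log(v) for v in y]
              n = len(lx)
              mx, my = sum(lx) / n, sum(ly) / n
              b = (sum((u - mx) * (v - my) for u, v in zip(lx, ly))
                   / sum((u - mx) ** 2 for u in lx))
              a = math.exp(my - b * mx)
              return a, b

          # Hypothetical (Re, Nu) points "collated" from earlier studies,
          # generated exactly on Nu = 0.023 * Re**0.8 so the fit recovers it:
          re_vals = [1e4, 3e4, 5e4, 1e5, 2e5]
          nu_vals = [0.023 * r ** 0.8 for r in re_vals]
          a, b = fit_power_law(re_vals, nu_vals)
          ```

          The catch, per the critique above, is that the fit treats every collated point identically: differences in quality, overlap, and measurement error between the source studies never enter.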

          Perhaps what you mean by meta-analysis differs from what I described?

          Now I can see all those critiques but I hardly see a way to fix them practically. Unless you want to go and repeat all previous work.

          Isn’t your whole general complaint somewhat nihilistic, in that it precludes any simple way of “standing on the shoulders of giants”? (Maybe not giants, but still.)

        • Rahul:

          > somewhat nihilistic
          We need to distinguish between being aware of problems/challenges and becoming overwhelmed by them.

          The above considerations would identify when simple aggregation will be adequate, which fortunately is not that uncommon. Most of what is in the Cochrane Library (RCTs) would be, if there weren’t publication bias, held-back studies, and other deficiencies in what gets into the published papers.

          For instance, with Tamiflu, the Cochrane group that worked on a meta-analysis of it from just the published papers strongly suspected a problem and, apparently for the first time in history, somehow got access to all the data that the regulators had on Tamiflu. With this data, they decided to ignore all the published studies (give them all zero weight; Andrew suggested to me they should instead have put a negative weight on them).

          So, being somewhat nihilistic, I emailed an old friend in Oxford, Iain Chalmers (one of the founders of Cochrane), and asked him: given that it was found here that zero weight should be put on the published studies, shouldn’t Cochrane stop doing meta-analyses using only published data?

          His response, which I agree with, is that this is all we have now and we have to do the best we can with it. (But it was worth raising, as Stephen Senn and others are arguing for full access to regulator-submitted data in the future, and folks need to fully appreciate why that is so important.)

          Now, for the other methodological point from this. The Cochrane group used their standard approach of not considering non-randomized studies in determining treatment effectiveness. The problem here is that Tamiflu was bought mostly for the potential benefit of high-risk patients, while all the Tamiflu randomized studies were done exclusively in low-risk patients. There was a small effect, but likely not representative of the true effect in high-risk patients…

        • I’d love to see you or Andrew demonstrate this bit about getting a curve out of Tol’s studies. Maybe even just any one study.

          Because as described it just sounds so vague and abstract to me. Perhaps I am not getting what you guys are really suggesting that he do as a fix.

          In principle I agree with most of your critiques about meta analysis. But I fail to see how you’d practically apply it in this particular case. IOW, I can acknowledge the problems/challenges but I fail to see specifics of how you want to fix them in Tol’s meta analysis.

          Would you have been happier had Tol subjectively applied some sort of quality score to weight each data point based on study characteristics?

  18. Ultimately I can’t get upset with the journal for publishing the paper: the editors are busy people, and at first glance the paper looks reasonable.

    I always feel Andrew is too soft on editors and referees. Surely you wouldn’t condone crappy work by saying the authors are busy people? I think editors and referees need to be held accountable for such crap too.

    If only, in hindsight, we knew who the referees were, we could apportion some of the shame, and that might have a salutary effect. I don’t think this paper looks reasonable. More likely, the editors/referees did not do their due diligence. People ought to take their imprimatur a bit more seriously.

    • Rahul:

      You can make of this what you will, but if the paper had been sent to me as a referee back in 2009, I might well have recommended acceptance on the grounds that this is an important problem and the analysis seems like a reasonable summary of the data. The problem is that I am not an expert in this area and at first glance the paper appears to be a thoughtful treatment of a difficult literature. In retrospect, yes, the model is terrible and this should be clear even without the revelation that several data points and the entire scaling of the x-axis appear to have been mistaken—but I don’t think I would’ve seen the problems. Indeed, the lack of any outliers in the 14 data points makes everything seem much more sensible, and that’s another reason why I disagree with Tol’s assertion that the revealed errors do not change his conclusions much. In addition to the multiple errors casting doubt on Tol’s command of the data (someone who really understood the models wouldn’t confuse a +2.5% effect with a -2.5% effect, would he?), the original data just looked more coherent than what came out after the corrections.

      But, to get back to your comment: I don’t blame the editor because I could see myself having made the same decision had I been a referee. I’ve refereed a lot of papers like this—somewhat shallow analyses of important problems—and often I recommend publication on the grounds that it’s good to get this sort of material out there. And such a recommendation on a referee’s part is not necessarily so bad—as long as people can get out of the mindset of thinking that scientific papers represent true knowledge, just cos they’ve been published.

      I will say, though, that when people write shallow papers on topics I know something about, I can come down hard right away. Consider this paper, for example. I was polite, but I know enough about some of the examples to realize that their coding was really bad, and I would never have recommended it for publication anywhere.

      • Andrew:

        That puzzles me even more. How can it be both true that (a) “it’s good to get this sort of material out there” and (b) the problems “to me, make the whole analysis close to useless as it stands”?

        How does it do anyone any good to publish a crappy analysis of a topic, just because it is an important topic? It makes no sense to me to say “This paper is weak but we should publish it because the topic is important.”

        If you are OK with this, how can you fault the “Tabloid Journals” for being biased in favor of sensational results? The importance of the topic is as irrelevant a mitigating factor for a crappy analysis as sensationalism is.

        I think such recommendations on a referee’s part are absolutely bad. Also, if you are not an expert in the area you shouldn’t be reviewing the paper anyway, so that excuse is a bit moot.

        Again, the fact that “at first glance the paper appears to be a thoughtful treatment of a difficult literature” should also be totally irrelevant because we don’t really expect referees to accept papers on a mere first glance impression, do we?

        I stick to my opinion that you are being soft on referees and editors. I applaud your frankness in admitting you may conceivably have made the same mistakes as a referee. But, if so, we should introspect on why, and on how to fix the process, rather than give the referees a free pass just because you could have been in their place.

        • Rahul:

          I often get bad papers to review and I recommend rejection. If I were sent Tol’s paper to review now, I would recommend rejection, and I would similarly recommend rejection on many of these Psychological Science style papers. My point was that, had I seen Tol’s paper in 2009, I expect that I would not have thought about the problems, I likely would’ve thought the analysis was shallow but a reasonable summary of the data, and then I would’ve recommended acceptance. The reason I would now recommend that the paper be rejected is not because it is shallow but because, upon reflection, I do not think it is a reasonable summary of the data.

          In short, when I say, “it’s good to get this sort of material out there,” I’m not talking about papers that I think are “close to useless,” I’m thinking about papers that have flaws but could still be useful if considered as data summaries.

          But, yes, I agree with you that my attitude—to be softer on a paper if it makes an important or dramatic claim in some substantive area that I don’t know much about—contributes to the problem of tabloidness in journals. I hadn’t thought about this but you have a good point. I’ll have to think about this one.

    • The incentives are not there… as a researcher, can a good anonymous review help advance your career? Probably not. Additionally, one thing I learned as a beginning researcher: reviewers are also human beings. And, yes, they are really busy. Perhaps a post-publication review should be mandatory, with several experts in the field and outside the field.

      • If the incentives for being a good, diligent, careful reviewer aren’t there, maybe we should reflect on how to fix that, rather than pretend this is how things ought to be.

        Sure, reviewers are really busy, but so are researchers. When the shit hits the fan, why apportion all responsibility to one and let the other off sans censure?

        • You have a point there. One possible reason might be that reviewers are researchers, and they do not have an incentive to spend a large amount of time reviewing (and that is very unfortunate). But in the end they also suffer some consequences, I guess (no more articles sent to person X, since it could damage the credibility of our journal Y), though that is difficult since reviewing is supposed to be anonymous. I think the more general problem here (already mentioned) is that there is too much weight given to novelty and too little to refutation, debate, and reproducibility.

  19. Richard Tol says:

    “Second, the initial benefits of a modest increase in temperature are probably positive …”
    https://www.sussex.ac.uk/webteam/gateway/file.php?name=wps-64-2013.pdf&site=24

    I would expect that to be a fairly uncontroversial statement. There is a substantial body of evidence demonstrating that warming has been beneficial for human development, health, and well-being and, in fact, for most life. It strains credulity to believe that the trend since the Little Ice Age would reverse (change from positive to negative) just when we happen to be alive. It’s more likely that humans’ natural inclination to believe scaremongering stories causes us to more readily accept doomsday scenarios and reject more likely outcomes, such as that warming is net beneficial.

    It is clear that we need much more work on the ‘damage function’ – i.e. the impacts of global warming.

    • Could be. All we are saying is one isolated data point does not a net benefit scenario make. Tol’s model has exactly one point out of his 15 or so that has positive impact.

    • People may find the following relevant to the nature of discourse in this overall domain.

      I wonder if this Peter Lang (whom I thank for the comment here, as a reminder) might be the same as the one found on p. 337 of the PDF attached to this, i.e.:
      ‘Peter Lang #2.1.1
      August 11, 2013 at 3:57 pm · +67 -0 (This comment got 67 upvotes, 0 downvotes.)

      Graeme No.3,
      I’d go further than that. I want to know all the behind the scenes discussions, lobbying and manipulations that were involved in removing and trying to seriously damage the career of a scientist whose research is a potential threat to so many special interests.
      We need a mini ClimateGate to expose what was being done behind the scenes to remove him.’

      “him” was Murry Salby, who’d been presenting slide shows on topics (CO2, ice cores) outside his atmospheric-circulation expertise, mostly to general audiences who loved them, while experts panned the errors. This comment came *a month* after some real evidence appeared.

      This and this showed that Salby had been debarred by the NSF for financial deception, had had serious conflict-of-interest problems at the University of Colorado-Boulder, had misrepresented the outcomes of court cases he kept losing, had misused junior associates (including his poor grad student), and had used a credit card in a way that would be a firing offense at most companies…

      Then, a few weeks after this, Salby got a long-time associate to try to help by going out on a public limb… one that had been chainsawed a week or two earlier.

    • Several years ago I did a very small consulting job for a consortium of insurance companies. The job was related to insurance company payouts from severe weather, and to what extent (if any) a climate change signal could be seen in the data that the insurance companies were willing to make public to each other. As part of that project I ended up reading a few reports generated by the insurance companies themselves. One of them — memory tells me it was a statement by someone high up at Munich Re, but I might be remembering that wrong — had an outlook that I thought was interesting, and that is germane here. Paraphrasing very freely, they said, in essence: As individuals we might have our own opinions, but as a company we don’t care what the climate is, we just need to know what risks we face and how to assign values to them. In this regard, it is important to recognize that the existing human infrastructure worldwide is highly optimized to the climate we have experienced over the past 50 to 100 years. Dams, water pipelines, bridges, ports, grain silos, oceanfront properties, and most other human constructions were designed to handle variable weather but not variable climate. A change in the climate will be costly.

      Peter Lang argues that a warmer climate has generally been better for people…perhaps that’s true, I don’t know much about it. But I think it does not necessarily follow that that is still true, because human civilization has changed dramatically since the last major change in climate.

      Of course, it’s possible that a slightly warmer climate would be bad in the short term — the next few hundred years, say — but that at some point we adapt and it’ll be better in the future. Or maybe Peter is right and a slightly warmer climate will be better even in the short term, although it’s hard to reconcile that with the insurance company’s outlook (which seems reasonable to me). At any rate I don’t think it “strains credulity” to think that a warmer climate might have been beneficial before but that we don’t want to go any farther now. After all, we DO live in a special time: we live in a time when our industrial output is so high that we can change the climate. That has never been true before.

      • Phil @ May 29, 2014 at 6:19 pm

        I disagree with several of your points.

        >” In this regard, it is important to recognize that the existing human infrastructure worldwide is highly optimized to the climate we have experienced over the past 50 to 100 years. Dams, water pipelines, bridges, ports, grain silos, oceanfront properties, and most other human constructions were designed to handle variable weather but not variable climate. A change in the climate will be costly.”

        Your assertion that “a change in the climate will be costly” is your belief, but it is unsupported by evidence. Engineers build the infrastructure you mentioned. They build to meet the requirements. They take into account possible future scenarios for potential risks. They conduct sophisticated HAZOP studies http://www.planning.nsw.gov.au/plansforaction/pdf/hazards/haz_hipap8_rev2008.pdf and risk analyses. They weigh the additional cost of a more robust design against the expected benefit. For example, dams are designed for the probable maximum flood or the 1-in-10,000-year flood. Civil engineers and hydrologists have been leading the way in the analysis of the risk of extreme events for decades.

        They also make major mistakes when alarmist ideas get into their heads. A classic example is the Brisbane floods of two years ago. They were caused by engineers not releasing water to lower the water level in the Wivenhoe dam, despite warnings from the Bureau of Meteorology. Put very simply, the reason they didn’t lower the water levels was because the scare mongers – e.g. Climate Commissioner, Professor Tim Flannery – had convinced the bureaucrats that dams would never fill again because of climate change. That provides a cautionary example of the damage that unfounded scaremongering can do.

        >” Peter Lang argues that a warmer climate has generally been better for people … perhaps that’s true, I don’t know much about it. But I think it does not necessarily follow that that is still true, because human civilization has changed dramatically since the last major change in climate.”

        This is a classic example of humans’ natural tendency to believe in doomsday scenarios rather than doing objective and rational analysis.

        >” Of course, it’s possible that a slightly warmer climate would be bad in the short term — the next few hundred years, say — but that at some point we adapt and it’ll be better in the future.”

        We should deal with probabilities rather than remote possibilities. There is no persuasive evidence that warming is bad. There is substantial evidence that cooling is very bad for humanity and for life. Look at some of the links I’ve already posted in previous comments on this thread. There is also substantial evidence that warming has been substantially beneficial for the past 250 years. There is no persuasive reason to believe that trend will change. We know that life thrives when the planet is warmer than now and struggles when colder. The planet is normally much warmer than now. It is in an unusually cold period. There has been no ice at either pole for 75% of the time since multicellular life began. When the planet was warmer, life thrived – there was much more carbon tied up in the biosphere. That’s the big picture.

        >” Or maybe Peter is right and a slightly warmer climate will be better even in the short term, although it’s hard to reconcile that with the insurance company’s outlook (which seems reasonable to me). At any rate I don’t think it “strains credulity” to think that a warmer climate might have been beneficial before but that we don’t want to go any farther now.”

        Yes. There is a persuasive case that warmer will be better in the short term, and possibly in the long term too. It is also likely that the amount of warming will be less than the alarmists are projecting.

        Regarding the insurance companies, they analyse the risks. If people are going to be paranoid about warming, claims will be higher just because of the paranoia. They have to insure for that.

        A parallel is the very high cost of insuring nuclear power stations against the cost of accidents. The high cost of accidents is almost entirely due to an irrational fear of radiation and of everything to do with nuclear power. Nuclear power is the safest way to generate electricity, yet it carries exorbitant costs for accidents such as Fukushima, in which not a single person has died or is ever likely to die from radiation-caused illness. But thousands died because of the trauma of having their lives uprooted and fear of the future. On an objective basis, few people should have been evacuated.

        The climate fearmongers are doing enormous damage. They are single-issue people. They do not have a balanced perspective on all the risks, or on the consequences of wasting money on climate-mitigation policies that have virtually no probability of making any difference to the climate, let alone of avoiding climate-related damages.

        • > There is also substantial evidence that warming has been substantially beneficial for the past 250 years. There is no persuasive reason to believe that trend will change. We know that life thrives when the planet is warmer than now and struggles when colder.

          The whole crop failure thing I noted earlier, did that not register with you? Is there a temperature beyond which you think things might get worse instead of better? Another 10 deg C? Another 20 deg C? Another 100 deg C? Water boils at 100 C. Surely you must acknowledge some threshold for things starting to go south. What do you believe that threshold is?

          > It is also likely the amount of warming will be less than the alarmists are projecting.

          Where do you come up with that? Please, do tell. More specifically, what do you believe the probability distribution functions for average global surface temperature in 2050, 2100, and 2200 look like and what’s the basis of your belief?

        • Chris G.

          I’ve answered your previous comments above and provided many links. I suggest you start doing some research. I’m getting the impression you are trolling.

        • Peter,
          You believe that because dams and bridges and waterfronts were designed by engineers to meet certain standards that were relevant in the past, they will necessarily be adequate for future conditions even if those conditions are substantially different. That is not true.
          In my city many culverts were sized for a 1-in-100-year flood. This was already a problem within a few decades, because the addition of impermeable development (houses and roads) led to more runoff, so what had previously been a 1-in-100 year flood started happening more like 5 times per 100 years. Still, people here have been willing to tolerate flooding a few times per century rather than pay the very large costs of digging up all of the streets to replace the culverts…but the damage from flooding can be quite expensive and people in my city are very unhappy about it. So far this paragraph has nothing to do with climate change, I’m just pointing out that there can be large costs associated with changes in the frequency of weather events. The connection to climate change is that if we start having wetter storms we will either have to tolerate even more frequent severe floods or we will have to bite the bullet and spend hundreds of millions of dollars (in my city alone) to re-size the culverts.

          The State of California, where I live, is already in the situation that lower-than-average snowpack in the Sierra Nevada leads to a water shortage in the summer and fall (measured against historical usage). This is true even if rainfall in the Sierra is not diminished or increased, and even if the year is not a drought year by conventional measures. The current state population and water infrastructure require the snowpack of the Sierra to provide a certain amount of water storage, and if they don’t we have problems.

          I could go on.

          I think it’s possible that a small amount of warming could be good, but also possible that a small amount of warming could be bad. You are sure that the latter is not possible, but your argument seems to consist of calling people names if they disagree with you. Fear-mongers, scare-mongers, alarmists, etc., etc. Somehow I am unpersuaded.

        • Put very simply, the reason they didn’t lower the water levels was because the scare mongers – e.g. Climate Commissioner, Professor Tim Flannery – had convinced the bureaucrats that dams would never fill again because of climate change.

          Put very simply, that’s a tough claim to substantiate, especially since Wivenhoe was planned as a flood mitigation dam. The claim implies that flood mitigation was taken off the table or deemphasised due to Flannery’s influence, and we’d need to see some evidence of that.

          Speaking of evidence, Flannery’s famously abused quote was not about dams for city water supplies, and it was not even about northern Australia, and it was predicated to some extent on climate change being allowed to continue unabated. It was spoken in an interview for a radio program delivered to a country audience. Andrew Bolt distorted what Flannery said (interested readers can Google for an analysis of Bolt’s distortions – some subsequently repeated to Flannery’s face), and that distortion has become “conventional wisdom” in certain circles.

          Note the interviewer’s question to which Flannery replies (my emphasis):

          What will it mean for Australian farmers if the predictions of climate change are correct and little is done to stop it? What will that mean for a farmer?

          Call me deluded, but I don’t think there are large numbers of farmers in the areas of Brisbane that were flooded.

          Next, note Flannery’s full answer (which does contain the odd point that deserves critique, but given the context it does not argue what Peter claims):

          We’re already seeing the initial impacts and they include a decline in the winter rainfall zone across southern Australia, which is clearly an impact of climate change, but also a decrease in run-off. Although we’re getting say a 20 per cent decrease in rainfall in some areas of Australia, that’s translating to a 60 per cent decrease in the run-off into the dams and rivers. That’s because the soil is warmer because of global warming and the plants are under more stress and therefore using more moisture. So even the rain that falls isn’t actually going to fill our dams and our river systems, and that’s a real worry for the people in the bush. If that trend continues then I think we’re going to have serious problems, particularly for irrigation.

          So it’s a very long bow to draw from that to management policies for a dam feeding the city of Brisbane.

          Now it’s possible Flannery did convince Wivenhoe authorities to change their management practices to prioritise water storage over even the imminent threat of flooding, but I’ve never seen anyone show any evidence of that – especially since IIRC the period preceding the floods was characterised by unusually high inflow into the dams. If there is such evidence, no doubt Peter will post it for us to consider.

          One might also consider that the 2012 report found that the engineers had valid reason, based on the (“at times badly drafted” and “conflicting”) operating manual, to believe they were operating under strategy W2 when they were in fact operating under strategy W3. As the article points out:

          Strategy W2 is a transition strategy of releasing flood waters where the primary consideration is protecting urban areas from inundation.

          It’s a long bow to draw that the floods were caused by Flannery getting the dam operators to throw flood-mitigation duties aside, given that the strategy the engineers apparently should have been in, and that they tried to implement, was intended to mitigate flooding risks (and that they were responding to the largest-ever inflows recorded for the dam).

          I think many of the rest of Peter’s claims fare little better, and his preferred risk management strategies are distinctly suboptimal in the presence of plausible risks that have extremely severe impact even if the likelihood of occurrence is remote, but my time is up…

        • Forgot to add, the US Army Corps of Engineers report found with respect to the engineers:

          “Their decisions were prudent and showed considerable insight into the precision and accuracy of available hydrometeorological information,” the report said. Alternative operation could have been used, however. “Without the benefit of perfect foresight, there still would have been a risk that the outcome could have been worse as well as better.

          “It is unlikely that reasonable alternative operations would have made a significant difference in peak flows for an event of this magnitude,” the report said.

          At first blush that doesn’t sound like a ringing endorsement of the idea that choosing to store too much water because of climate change was a significant factor in the flooding, but one might need to read the whole report to be confident of that.

  20. > It strains credulity to believe that the trend since the Little Ice Age would reverse (change from positive to negative) just when we happen to be alive.

    Why does it strain credulity?

    • I consider graph 3 to be basically 3 samples from a prior distribution over “damage” functions. I can’t help but suspect that the full range of reasonable damage functions from a full blown prior is pretty large.

    • I’ve calibrated a few things in my day – radiometers, for example – and they… they have a pretty liberal definition of the term “calibrate”.

      You know though, I initially saw that DICE function and thought, “A 34 deg F temperature increase only drops GDP by a factor of two? Even the worst-case CO2 emissions scenario would only amount to half that. What’s there to worry about? Party on!” So points to Weitzman for keeping it real: “In support of this disastrous projection for 12°C of warming, Weitzman cites recent research showing that at that temperature, areas where half the world’s population now lives would experience conditions, at least once a year, that human physiology cannot tolerate – resulting in death from heat stroke within a few hours (Sherwood and Huber 2010).” Oops.

  21. US hearing on IPCC process

    Richard Tol

    The link to Tol’s testimony is [http://science.house.gov/sites/republicans.science.house.gov/files/documents/HHRG-113-SY-WState-RTol-20140529_0.pdf]. Key points:

    “Academics who research climate change out of curiosity but find less than alarming things are ignored, unless they rise to prominence in which case they are harassed and smeared.

    People volunteer to work for the IPCC because they worry about climate change.

    Governments nominate academics to the IPCC – but we should be clear that it is often the environment agencies that do the nominating.

    All this makes that the authors of the IPCC are selected on concern as well as competence.

    The IPCC should deploy the methods developed in business management and social psychology to guard against group think.

    Not all IPCC authors are equal. Some hold positions of power in key chapters, others subordinate positions in irrelevant chapters.

    The IPCC leadership has in the past been very adept at putting troublesome authors in positions where they cannot harm the cause. That practice must end. This is best done by making sure that the leaders of the IPCC – chairs, vice-chairs, heads of technical support units – are balanced and open-minded.

    A report that is rare should make a big splash – and an ambitious team wants to make a bigger splash than last time. It’s worse than we thought. We’re all gonna die an even more horrible death than we thought six years ago. Launching a big report in one go also means that IPCC authors will compete with one another on whose chapter foresees the most terrible things.

    In learned journals, the editor guarantees that every paper is reviewed by experts. IPCC editors do not approach referees. Rather, they hope that the right reviewers will show up. Large parts of the IPCC reports are, therefore, not reviewed at all, or not reviewed by field experts.

    We need an organization that is not beholden to any government or any party to anchor climate policy in reality as we understand it. A reformed IPCC can play that role.”

        • Why do you disagree with the characterization of shameful? Does it not agree with your personal beliefs, or do you have some valid criticisms?

          /snark

          The burden of proof is no longer where you seem to prefer it to be.

          Actually, re-reading the passage above, I’d go for paranoid rather than shameful.

    • The link to Botkin’s testimony is [http://science.house.gov/sites/republicans.science.house.gov/files/documents/HHRG-113-SY-WState-DBotkin-20140529.pdf]. Key points:

      I regret to say that I was left with the impression that the reports overestimate the danger from human-induced climate change and do not contribute to our ability to solve major environmental problems. I am afraid that an “agenda” permeates the reports, an implication that humans and our activity are necessarily bad and ought to be curtailed.

      My biggest concern about the reports is that they present a number of speculative, and sometimes incomplete, conclusions embedded in language that gives them more scientific heft than they deserve. The reports, in other words, are “scientific-sounding,” rather than clearly settled and based on indisputable facts. Established facts about the global environment exist less often in science than laymen usually think.

      THE REPORT GIVES THE IMPRESSION THAT LIVING THINGS ARE FRAGILE AND RIGID, unable to deal with change. The opposite is the case. Life is persistent, adaptable, adjustable.

      There is an overall assumption in the IPCC 2014 report and the Climate Change Assessment that all change is negative and undesirable; that it is ecologically and evolutionarily unnatural, bad for populations, species, ecosystems, for all life on planet Earth, including people. This is the opposite of the reality.

      The extreme overemphasis on human-induced global warming has taken our attention away from many environmental issues that used to be front and center but have been pretty much ignored in the 21st century.

      • “THE REPORT GIVES THE IMPRESSION THAT LIVING THINGS ARE FRAGILE AND RIGID, unable to deal with change. The opposite is the case. Life is persistent, adaptable, adjustable.”

        Those mass extinctions? Never happened.

        • Agreed. There is quite a big difference between life in general and a specific organism or species. I think that “life” and “living things” are referring to different entities here.

    • A number of insightful points, this being my favourite –

      “to guard against group think. These include a balanced composition of peer groups, changing the composition of groups, appointing devil’s advocates, and inviting outside challengers. This requires active support from the IPCC leadership. To the best of my knowledge, outside challengers are rare.”

      If IPCC leadership were to be replaced by Andrew or Richard Tol, I think I know which of the two has come closer to actually following that advice in this blog post.

    • From the JEP website:
      “The JEP is not primarily an outlet for original, frontier empirical contributions; that’s what refereed journals are for!” and “Findings that are not almost immediately self-evident in tabular or graphic form probably belong in a conventional refereed journal rather than in JEP.”

      http://www.aeaweb.org/jep/submissions.php

      • Looks as though JEP have it wrong, despite what they claim: Richard’s CV clearly lists JEP among his refereed publications.
        I wonder how many other journals have got it wrong?

  22. CO2 is a nutrient and so is NO3. If I add (too much) NO3 to my tomatoes, I get more vegetative growth, but not as many tomatoes. More nutrients and more plant growth do not always lead to more economic value. We can say, “Rain is grain!”, but rain a couple of weeks late can ruin the harvest and spoil the crop. Climate change makes weather less predictable. In the past, wheat-growing areas of Kansas got a period of rain in early August. Wheat farmers went to great effort to get the fields prepared and seeded so that rain would germinate the seed. Now, Kansas gets more rain in the summer, but it is less predictable. What is the value of rain that comes a couple of weeks early and turns the field into a sea of mud that cannot be planted until later? More rain on my apple trees leads to more “water shoots,” resulting in more labor cost and higher-priced apples.

    The bottom line of every economic analysis is the price of food. Tol does not understand enough agronomy to do a rational analysis of the effects of climate change on the price of food. Industrial economies must move goods on schedule. Sandy damaged much road infrastructure, impacting our status as an industrial economy. Civilizations have stable living conditions. Civilizations do not allow their housing to flood frequently.

  23. > The bottom line of every economic analysis is the price of food. Tol does not understand enough agronomy to do a rational analysis of the effects of climate change on the price of food.

    +1

  24. Pingback: Consensus Matters | Critical Angle

  25. Pingback: Richard Tol accidentally confirms the 97% global warming consensus | Gaia Gazette

  26. Pingback: Richard Tol's 97% Scientific Consensus Gremlins » Real Sceptic

  27. Pingback: A week of links | EVOLVING ECONOMICS

  28. I remember seeing Tol present this stuff at a conference many years ago. I told him that his work violated the basic assumptions of regression analysis. His response: It may be bad science but it is still policy relevant.

  29. Pingback: De Teldersstichting en het klimaat | Sargasso

  30. Pingback: De zomers worden langer en heter » Seen in Numbers

  31. Pingback: What does CNN have in common with Carmen Reinhart, Kenneth Rogoff, and Richard Tol: They all made foolish, embarrassing errors that would never have happened had they been using R Markdown « Statistical Modeling, Causal Inference, and Social Science

  32. Pingback: Same ol’ same ol’ | …and Then There's Physics

  33. Pingback: La banda del Pinocchio - Ocasapiens - Blog - Repubblica.it

  34. Pingback: Missing the Obvious | Izuru

  35. Pingback: Burying the Lede | Izuru

  36. Pingback: Altri giganti - Ocasapiens - Blog - Repubblica.it

  37. Pingback: Why the 97 per cent consensus on climate change still gets challenged | Critical Angle

  38. Pingback: Scientists Respond To Tol's Misrepresentation Of Their Consensus Research - Real Skeptic

  39. Pingback: What has happened down here is the winds have changed - Statistical Modeling, Causal Inference, and Social Science

  40. Pingback: WMO updates - Ocasapiens - Blog - Repubblica.it

  41. Pingback: "Statistical heartburn: An attempt to digest four pizza publications from the Cornell Food and Brand Lab" - Statistical Modeling, Causal Inference, and Social Science

  42. Pingback: The Linear Model for Richies | …and Then There's Physics

  43. Pingback: Polar Bears – a rebuttal | …and Then There's Physics

  44. Pingback: Scientists Respond To Tol's Misrepresentation Of Their Consensus Research - Real Skeptic

  45. It appears that Richard Tol is still publishing these data, only now fitting a piecewise linear function to the same data points.
    https://academic.oup.com/reep/article/12/1/4/4804315#110883819

    Also still looks like counting 0 as positive, “Moreover, the 11 estimates for warming of 2.5°C indicate that researchers disagree on the sign of the net impact: 3 estimates are positive and 8 are negative. Thus it is unclear whether climate change will lead to a net welfare gain or loss.”

  46. Pingback: Gremlin time: “distant future, faraway lands, and remote probabilities” « Statistical Modeling, Causal Inference, and Social Science

  47. Pingback: “Why We Sleep — a tale of institutional failure” « Statistical Modeling, Causal Inference, and Social Science
