“A Headline That Will Make Global-Warming Activists Apoplectic”

I saw this article in the newspaper today, “2017 Was One of the Hottest Years on Record. And That Was Without El Niño,” subtitled, “The world in 2017 saw some of the highest average surface temperatures ever recorded, surprising scientists who had expected sharper retreat from recent record years,” and accompanied by the above graph, and this reminded me of something.

A few years ago there was a cottage industry among some contrarian journalists, making use of the fact that 1998 was a particularly hot year (by the standards of its period) to cast doubt on the global warming trend. Ummmm, where did I see this? . . . Here, I found it! It was a post by Stephen Dubner on the Freakonomics blog, entitled, “A Headline That Will Make Global-Warming Activists Apoplectic,” and continuing:

The BBC is responsible. The article, by the climate correspondent Paul Hudson, is called “What Happened to Global Warming?” Highlights:

For the last 11 years we have not observed any increase in global temperatures. And our climate models did not forecast it, even though man-made carbon dioxide, the gas thought to be responsible for warming our planet, has continued to rise. So what on Earth is going on?

And:

According to research conducted by Professor Don Easterbrook from Western Washington University last November, the oceans and global temperatures are correlated. . . . Professor Easterbrook says: “The PDO cool mode has replaced the warm mode in the Pacific Ocean, virtually assuring us of about 30 years of global cooling.”

Let the shouting begin. Will Paul Hudson be drummed out of the circle of environmental journalists? Look what happened here, when Al Gore was challenged by a particularly feisty questioner at a conference of environmental journalists.

We have a chapter in SuperFreakonomics about global warming and it too will likely produce a lot of shouting, name-calling, and accusations ranging from idiocy to venality. It is curious that the global-warming arena is so rife with shrillness and ridicule. Where does this shrillness come from? . . .

No shrillness here. Professor Don Easterbrook from Western Washington University seems to have screwed up his calculations somewhere, but that happens. And Dubner did not make this claim himself; he merely featured a news article that featured this particular guy and treated him like an expert. Actually, Dubner and his co-author Levitt also wrote, “we believe that rising global temperatures are a man-made phenomenon and that global warming is an important issue to solve,” so I could never quite figure out why, in their blog, he was highlighting an obscure scientist who was claiming that we were virtually assured of 30 years of cooling.

Anyway, we all make mistakes; what’s important is to learn from them. I hope Dubner and his Freakonomics colleagues learn from this particular prediction that went awry. Remember, back in 2009 when Dubner was writing about “A Headline That Will Make Global-Warming Activists Apoplectic,” and Don Easterbrook was “virtually assuring us of about 30 years of global cooling,” the actual climate-science experts were telling us that things would be getting hotter. The experts were pointing out that oft-repeated claims such as “For the last 11 years we have not observed any increase in global temperatures . . .” were pivoting off the single data point of 1998, but Dubner and Levitt didn’t want to hear it. Fiddling while the planet burns, one might say.

It’s not that the experts are always right, but it can make sense to listen to their reasoning instead of going on about apoplectic activists, feisty questioners, and shrillness.

45 Comments

  1. Heshel says:

    “accompanied by the above graph, and this reminded me of something”

    Japan?

  2. Eli Rabett says:

    The interesting point about records in a time series is that the number of them should decrease to zero if there is no trend. That they have not is strong indication that there is a trend. Of course, there is always the salami slicing issue.

    http://rabett.blogspot.com/2018/07/on-records.html

    • Phil says:

      “The interesting point about records in a time series is that the number of them should decrease to zero if there is no trend.” I know what you’re saying but that’s not right in most time series. To consider a simple case, if my time series is y(t) ~ N(0, sd) then although it’s true that the time between records will increase, I always have an infinite number of new records ahead of me. If I happen to have drawn a number that is 3 sd away from zero, that’s OK, I will eventually draw one that is 3.5 sd away from zero, and so on.

      • anonnynim says:

        I read it as “asymptotically decrease to zero”.

          • Phil says:

            In the example I gave, and in many others, the number of future records is always infinite. It does not asymptotically decrease to zero or any other number.

            The expected number of records in a finite future interval does decrease though, as you say. So, as I said, I know what you’re saying.

            I know I may sound pedantic, and I apologize if that’s the case, but I think that although I and anonnynim know what you’re saying even though it wasn’t quite stated correctly, that will not be the case for all readers.

            • Not having actually done any math, my intuition says that the proper formulation of this is that the expected time between records increases asymptotically as time goes on.

              For something like a Brownian motion (which the climate certainly doesn’t look like), the time to get from your current value N to some new record value R in the future should go like abs(R-N)^2. And, let’s say we’re talking about record highs: each time we hit a new record, R increases monotonically, so for a fixed N, taken at a given point t, our expectation of this quantity increases monotonically through time (for example).

        • someone says:

          Eli, Phil, thanks both for this discussion. It’s not being pedantic & the correctly stated point is interesting.

        • Bryan says:

          The probability of a record should asymptotically decrease to zero, at least for a stationary process (which global temperature is not).
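Bryan’s point can be checked with a quick simulation: for an iid (trend-free) series, the probability of a new record at step n is exactly 1/n, which does go to zero, while the expected cumulative number of records grows like log(n) without bound, which reconciles the two readings above. A minimal sketch using synthetic Gaussian data (not climate data):

```python
import numpy as np

rng = np.random.default_rng(0)
n_series, n_steps = 20000, 200

# Draw iid N(0,1) series; a "record" at step t means y[t] exceeds all prior values.
y = rng.standard_normal((n_series, n_steps))
running_max = np.maximum.accumulate(y, axis=1)
is_record = np.zeros((n_series, n_steps), dtype=bool)
is_record[:, 0] = True                          # first value is trivially a record
is_record[:, 1:] = y[:, 1:] > running_max[:, :-1]

# P(record at step n) is 1/n for any exchangeable series...
p_record_at_10 = is_record[:, 9].mean()         # close to 1/10

# ...so the per-step probability tends to zero, yet the expected total number
# of records up to step n is the harmonic number H_n ~ log(n), which diverges.
expected_total = is_record.sum(axis=1).mean()   # close to log(200) + 0.577
```

So “the probability of a record goes to zero” and “there are always infinitely many records ahead” are both correct for a stationary series; persistent record-breaking in the observed temperature series is evidence against the no-trend model.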

    • Anoneuoid says:

      Every day, starting at sunrise, a new record in temperature for that day is set for the first 12 or so hours. A mayfly would spend half its life seeing the temperature only increase, and could extrapolate accordingly that the earth will end up like Venus. Then the second half of the day that a new ice age is imminent.

      I think most people would argue that there are centuries-long, millennia-long, and even longer cycles like the daily one, and we aren’t sure where we are in them, since the good records only date back a century at best. I’m not sure who you are referring to that cares about whether or not there is a trend. I’d expect there is always a trend, regardless of human impact.

      • The real question isn’t whether or not there’s a trend, the real question is: is the magnitude and direction of the trend heavily influenced by human activity? And would any reasonably achievable change in human activity actually have an effect on this trend that would be beneficial to humans?

        Going into these answers requires two components, one of which is almost always left out of the discussion. The first component, of course, is a physical model of how climate changes as a function of human emissions and other factors. The second is a utility function over various human outcomes as they pertain to climate changes; that is the most controversial component, and it is very non-uniform across countries, socioeconomic classes, and so forth. It is also, therefore, the one swept under the rug.

        • yyw says:

          I would argue that whether past increases were caused by human activities doesn’t matter. Assuming that we can trust the climate models to predict climates decades into the future conditioned on different human actions, what we need is a cost/benefit analysis of the actions we can take. The analysis will be extremely difficult and rough. Different nations and socioeconomic classes, as you said, will have different calculations. In addition, we are talking about projected benefits to future generations with costs borne by the current generation. It could be tempting to argue for leaving the problem to future generations, who presumably will have access to much more advanced technologies. Even if the current generation is willing to make sacrifices, the optimal resource allocation between limiting pollution and increasing productivity is not very clear. Limiting pollution could reduce the size of the climate change. Increasing productivity could enable humans (especially those in developing nations) to better withstand the effects of climate change.

          • > whether past increases were caused by human activities doesn’t matter

            Except insofar as it’s relevant to determining whether we have any influence at all, which is relevant to the cost/benefit analysis of current and future action. The past is basically the data we have to fit the Bayesian models we need to predict the future. If the Bayesian models said “humans didn’t do much of anything, it’s all naturally occurring,” then most likely that would continue to be true in the future, and we should look entirely to mitigating the effect of natural climate change on individuals etc. (i.e., invent new technologies, migrate, etc.). Whereas if the past suggests that humans had a big effect, we should then look to predict what the effect of different actions now will be in the future, and then *discuss how good each one of those effects is* (but, as we’ve both mentioned, “how good” is very loaded).

            Other than that, I agree with you entirely. The current debate goes something like this: “humans cause climate change, *so we have to do something about it*” vs. “humans don’t cause climate change, so *there’s nothing to do*.”

            The proper debate should be: “Doing x,y,z will cause a,b,c in the future, and based on that we should favor doing choose_one(x,y,z) because choose_one(a,b,c) is the best outcome”

            The assumption that “if we don’t do something to change, it will be disastrous in the future” is just that, *an assumption*, and one that is not based on particularly careful analysis of the consequences.

            • yyw says:

              The answer to whether humans caused climate warming in the past should be embedded in any climate prediction model good enough to base our (very expensive) decisions on. I just think that though unlikely, there is theoretically a chance that our past action did not have substantial effect but our future action (with the help of new technology) could.

              • Sure, that’s true. Even if we proved somehow that past CO2 didn’t do anything to global temperatures (seems very likely wrong), we should still consider the fact that global temperatures *are* rising, and we might want to design some technology to protect human societies against that eventuality: Efficient geothermal HVAC systems, solar and wind farms, new insulation materials, new storm-resistant construction techniques, new zoning and insurance schemes, incentives to move to regions that are less threatened, improved transport of goods from port areas to less threatened inland areas… whatever.

                The debate today is framed around “whether or not” humans cause GW, as if *proving that they do* means automatically *changing our emissions* and *proving that they don’t* means automatically *continue with whatever we’re doing*

                That framing is *very very wrong* and it’s *wrong on all sides*. That is a purely sound-bite political groupthink.

              • Curious says:

                I’m not sure I follow the logic here.

                If you believe humans do not contribute to climate change, why would you spend billions of dollars on solutions that minimize the output of human generated CO2? Why would you not simply focus on dealing with the effects of climate change such as rising ocean levels and changes in the types of crops that will thrive at different latitudes?

        • Anoneuoid says:

          The real question isn’t whether or not there’s a trend, the real question is: is the magnitude and direction of the trend heavily influenced by human activity? And would any reasonably achievable change in human activity actually have an effect on this trend that would be beneficial to humans?

          Yep. Or possibly, is there anything reasonable to be done that could make civilization/country more robust to certain aspects of climate change regardless of the cause?

          The debate today is framed around “whether or not” humans cause GW, as if *proving that they do* means automatically *changing our emissions* and *proving that they don’t* means automatically *continue with whatever we’re doing*

          That framing is *very very wrong* and it’s *wrong on all sides*. That is a purely sound-bite political groupthink.

          It’s worse than that. “Prove that humans do have an effect (or not)” has become “is there a trend (or not)?” Obviously there are many reasons for the existence of a trend that we aren’t going to be ruling out anytime soon…

          The state of the “discussion” is so detached from the actual questions of interest that I don’t understand who would care, except for political reasons, at this point. I think there’s a general principle at play here: politics brings down science; science does not uplift politics. It’s the same with religion.

      • Chris Wilson says:

        ummm, the ice core records give us good data back on the scale of 100,000 years. Also, I agree with Daniel: the crucial question is how human activity impacts the trends, and then a utility over a range of scenarios for human activity. However, I would add that we should probably impose some strong form of the Precautionary Principle; there are any number of positive feedbacks we could be kicking off that lead to some nasty doomsday-type scenarios. Low probability, sure, but human extinction arguably has negative infinite utility :)

        • I don’t think it’s possible to do a Bayesian analysis where an actual possible outcome (one with non-infinitesimal probability) results in -inf utility. We need the expectation integral to be a finite number in all cases.

          Unfortunately, in order to make decisions meaningful, I think we need to place a finite cost on extinction. A reasonable cost in dollars would be the total present price of all assets, plus all future income discounted exponentially, out to the extinction event.

          Just to ballpark that: we have around 7 billion people, and we might guess maybe $10M per person in future discounted income, so -7×10^16 dollars or so for total extinction today. That’s 70 thousand trillion, something like 1000 times global GDP, so it’s a big number, but not infinite.

          • Chris Wilson says:

            Indeed, you are correct. I think some number along those lines is good. My major point is that after integrating over U[x]*Pr[x]*dx the expected (dis)utility will be much larger than just propagating some kind of ‘most likely’ point estimate through the utility function…

              Yes, absolutely. The “most likely” event if we pump CO2 out at continued rates may well be something equivalent to, say, WWII, which was horrible but didn’t turn out disastrous for the entire human race. For example, we could consider a cost of maybe 100 million people affected to the tune of $100k/person, or 1×10^13 dollars, a factor of 7000 smaller than the extinction event. But if there’s a greater than 1/7000 chance of that extinction-level event, it will dominate over the most likely event in our calculation.

              The end result, I think we’re in agreement, is that whatever we’re doing right now regarding climate, we’re doing it wrong. Very wrong.
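The arithmetic in this exchange can be laid out explicitly. The numbers below are the thread’s illustrative ballparks (hypothetical guesses, not outputs of any climate-economics model):

```python
# Ballpark cost of extinction: population times discounted future income per person.
people = 7e9
income_per_person = 1e7               # $10M future discounted income (a guess)
extinction_cost = people * income_per_person    # about 7e16 dollars

# The "most likely" bad outcome: 100 million people affected at $100k each.
likely_cost = 1e8 * 1e5               # 1e13 dollars

# The extinction outcome is a factor of ~7000 larger, so any tail probability
# above 1/7000 makes it dominate the expected-cost calculation.
ratio = extinction_cost / likely_cost           # 7000
expected_tail = (1 / 7000) * extinction_cost    # equals likely_cost at break-even
```

This is the usual structure of expected-utility arguments with fat tails: a rare outcome with a cost thousands of times larger than the typical one dominates the expectation unless its probability is correspondingly tiny.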

        • Anoneuoid says:

          ummm, the ice core records give us good data back on the scale of 100,000 years.

          I don’t consider this “good data”. I wouldn’t even consider measuring CO2 at a couple (is it even up to a dozen?) sites around the world (with the “best” record from the slopes of the largest active volcano in the world) to be good data.

          Having read about the interpolations, etc required to get a self consistent temperature record I also really wouldn’t consider that “good data” even though there’s way more of it. That’s why I said the good records only date back a century at best. I meant, “even allowing for questionable data to count as good”.

          Ideally we would be recording the tropospheric vertical profile of temperature, CO2, humidity, etc at short regular intervals (eg a few minutes) in a regular grid across the surface of the earth. That would be good data, and it could be done if the resources are allocated to that.

          Just because that’s the best you can do with the resources available doesn’t make it good.

          • Eli Rabett says:

            Yours is the usual passive-aggressive argument from ignorance.

            You assume that the people who measure on Mauna Loa have no clue about the emissions from the volcano, are not constantly looking for evidence of contamination, and have not studied the issue to death.

            http://rabett.blogspot.com/2009/10/rabett-goes-romm-ian-plimer-has-written.html

            Also, FWIW: OCO-2 (NASA’s Orbiting Carbon Observatory-2, which measures CO2 from space).

            • Anoneuoid says:

              You assume that the people who measure on Mauna Loa have no clue about the emissions from the volcano, are not constantly looking for evidence of contamination and have not studied the issue to death

              Where do you see me appearing to assume this? I think what I am assuming is quite clear from the post. It’s that they have come up with an elaborate and byzantine system of adjustments (etc.) to the data that results in a seemingly self-consistent record of CO2, temperature, etc. levels.

              Did not click your link since I assume it is about the strawman you just created.

            • Anonymous says:

              Here is some of the stuff I have read that makes me think “they have come up with an elaborate and byzantine system of adjustments”:

              The methodology of ERSST.v4 reconstruction follows Smith et al. (1996) and Smith and Reynolds (2003). The SST measurements from in situ buoy and ship observations were used to reconstruct monthly 2° × 2° SSTA data in ERSST.v4 from 1875 to present. The reconstruction before 1875 was not accomplished due to sparseness of observations in the Pacific and Indian Oceans in ICOADS R2.5 and the inability to provide sufficient empirical orthogonal teleconnections (EOTs) for construction of a reliable “global” estimate. The SSTs from ships or buoys were accepted (rejected) under a QC criterion that observed SSTs differ from the first-guess SST from ERSST.v3b by less (more) than 4 times standard deviation (STD) of SST (Smith and Reynolds 2003).

              The ship and buoy SSTs that have passed QC were then converted into SSTAs by subtracting the SST climatology (1971–2000) at their in situ locations in monthly resolution. The ship SSTA was adjusted based on the NMAT comparators; buoy SSTA was adjusted by a mean difference of 0.12°C between ship and buoy observations (section 5). The ship and buoy SSTAs were merged and bin-averaged into monthly “superobservations” on a 2° × 2° grid. The number of superobservations was defined here as the count of 2° × 2° grid boxes with valid data. The averaging of ship and buoy SSTAs within each 2° × 2° grid box was based on their proportions to the total number of observations. The number of buoy observations was multiplied by a factor of 6.8, which was determined by the ratio of random error variances of ship and buoy observations (Reynolds and Smith 1994), suggesting that buoy observations exhibit much lower random variance than ship observations.

              The SSTAs of superobservations were further decomposed into low- and high-frequency components. The low-frequency component was constructed by applying a 26° × 26° spatial running mean using monthly superobservations where the sampling ratio is larger than 3% (five superobservations). An annual mean SSTA was then defined with a minimum requirement of two months of valid data. The annual mean SSTA fields were screened and the missing SSTAs were filled by searching the neighboring SSTAs within 10° in longitude, 6° in latitude, and 3-yr in time. The search areas were tested using ranges of 15°–20° in longitude, 5°–10° in latitude, and 2–5 yr. The final SSTAs did not make much of a difference since the search area is less than the scales of the low-frequency filter. Finally, the annually averaged SSTAs were filtered with a weak three-point binomial filter in longitudinal and latitudinal directions, and further filtered with a 15-yr median filter. These processes were designed to filter out high-frequency noise in time and small scale in space.

              The high-frequency component of SSTA, defined as the difference between the original and low-frequency SSTAs, was reconstructed by first applying a 3-month running filter that replaces missing data with an average of valid pre- and postcurrent month data. The filtered SSTAs were then fitted to the 130 leading EOTs (van den Dool et al. 2000; Smith et al. 2008), which are localized empirical orthogonal functions restricted in domain to a spatial scale of 5000 and 3000 km in longitude and latitude, respectively. The EOTs were…

              http://journals.ametsoc.org/doi/10.1175/JCLI-D-14-00006.1

              The curve fitting techniques used to smooth a CO2 measurement record, C_STA(t), where the subscript notation “STA” (for station) indicates that the expression is specific to any one of the sampling sites listed in Table 1, have been described by Thoning et al. [1989]. We briefly describe the techniques here because they are used extensively in both data extension methods. To approximate the long-term trend and average seasonal cycle at a sampling site, a function of the form:

              f_STA(t) = a_0 + a_1*t + a_2*t^2 + sum_{k=1..4}[b_(2k-1)*sin(2*pi*k*t) + b_(2k)*cos(2*pi*k*t)]

              is fitted to the measurements, where t is the time in years since January 1, 1979. To account for interannual variability in the seasonal cycle, the residuals, r_STA(t) = C_STA(t) - f_STA(t), are digitally filtered through a low-pass filter with a full width at half maximum (FWHM) of approximately 40 days. The smoothed residuals from the 40-day filter, {r_STA(t)}_40d, are then combined with f_STA(t) to produce what we call the smooth curve… etc.

              https://agupubs.onlinelibrary.wiley.com/doi/pdf/10.1029/95JD00859
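For concreteness, the fitting step described in that quoted passage (a quadratic trend plus four annual harmonics, per Thoning et al. 1989) can be sketched as an ordinary least-squares fit. The “CO2” series below is synthetic, made up purely to illustrate the functional form; none of the numbers come from real station data:

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(0, 20, 1 / 52)          # weekly samples, in years since some epoch

# Synthetic record: quadratic trend plus one annual cycle plus noise.
truth = 336.0 + 1.5 * t + 0.012 * t**2 + 3.0 * np.sin(2 * np.pi * t)
c = truth + rng.normal(0, 0.3, t.size)

# Design matrix for f(t) = a0 + a1*t + a2*t^2 + sum_k [b_(2k-1) sin + b_(2k) cos].
cols = [np.ones_like(t), t, t**2]
for k in range(1, 5):                 # four harmonics, as in the quoted function
    cols += [np.sin(2 * np.pi * k * t), np.cos(2 * np.pi * k * t)]
X = np.column_stack(cols)

coef, *_ = np.linalg.lstsq(X, c, rcond=None)
residuals = c - X @ coef              # the r(t) the paper then low-pass filters
a0, a1, a2 = coef[:3]                 # recovered trend coefficients
```

The paper’s next step, the 40-day FWHM low-pass filter on `residuals`, captures interannual variability in the seasonal cycle; the point here is just that the “smooth curve” is a plain parametric fit, not anything exotic.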

              • Anoneuoid says:

                The above was me…

                I would like to see the CO2 adjustment scheme used on the temperature data, and vice versa. Just how fine-tuned are these algorithms to the particular historical data they were devised for?

  3. Rossman says:

    There should be equal “expert” attention to both warming and cooling records.

    Analysis/reporting should be based on how unusual a “temperature event” is, not on whether it increases support for favored policies.
    Ordinary contemporary temperature events/trends that frequently happened in the long-term past, before the “global warming era,” should not be considered evidence for global warming theory.

    The largest global 2-year cooling event of the last century just occurred: from February 2016 to February 2018, global average temperatures dropped 0.56°C. 1982–84 was the next biggest 2-year drop, 0.47°C.
    (Data are from NASA GISTEMP, the standard source used in most journalistic reporting of global average temperatures.)
    Not a big deal scientifically, but notably quite ignored by the media.

    Cooling-event outliers get no media attention (and there have been more cooling months than warming months since AGW supposedly began). Biased reporting overwhelmingly implies that warming is much steadier than it is. Annual atmospheric CO2 levels have gone up pretty much linearly since 1960; if temperatures had done the same thing, the link to CO2 would be direct and obvious, but the science isn’t there to prove the link.
    The case for global warming does not rely primarily on actual observed warming, but rather upon arbitrary models with a poor record of even short-term prediction accuracy.

  4. Tom Passin says:

    Rossman : “The case for global warming does not rely primarily on actual observed warming… but rather upon arbitrary models with a poor record of even short term prediction accuracy.”

    Well, no, not even close. There have been over 100 years of development in understanding the physical processes, starting with Tyndall in 1859 and Arrhenius in 1896, and the actual observed warming measurements are very compelling; the graph presented by Andrew is one example. The case is not mainly based on climate models, which anyway are not “arbitrary models.”

    But feel free to publish a soundly-based model of your own that shows essentially different behavior…

  5. Terry says:

    The anti-warming crowd’s arguments about “no warming since 1998” were always flimsy. 1998 was clearly a local spike linked to El Niño, so it was to be expected that subsequent temperatures would be lower for a while. That didn’t mean there was no trend. It only meant that the trend had a smaller perturbation overlaid on it. You could see that by just looking at the graphs.

    For quite a while now, the debate has been about the magnitude of the trend.

  6. Roy says:

    Most of the calculations of the temperature trends unfortunately do not correct for known solar cycles, including the sunspot cycle and the “Suess wiggles” (and I am not making up the latter, and I am not a troll; google it; they show up very clearly in the core record). Taking these cycles into account does not remove the trend at all (as some of the people who do not think there is global warming would have you believe), but they do modify the trend. The “hiatus” starting in the late 1990s was roughly 11 years long, right in line with the solar cycles. Correcting for the cycles gives a cleaner trend that doesn’t have such debatable characteristics. I wouldn’t bet my life on it, but don’t be surprised if you start noticing sharply increased warming starting towards the end of 2011. This is partly due to the warming trend and partly due to the solar cycles. You heard it here first. Eleven years from now you can look me up in my old-age home, in my rocking chair with my shawl and cane, and tell me if I was prescient.

    PS – For what it is worth, we see the cycles in ocean temperatures and still see a significant warming trend: a common trend that can be shown to be essentially the same in the tropical record, the mid-latitude record, and in ice coverage in the Arctic.

  7. Terry says:

    Replicability (a theme central to this blog) is relevant here.

    The data for the chart at the top of this post is from NASA. That data incorporates many adjustments to the instrumental data, and there have been claims that those adjustments are important and that they tend to be in the direction of producing more warming.

    Climate science is a highly politicized area, and some climate scientists have not always behaved well.

    So, it sounds like the temperature record adjustments should be independently replicated to see if the new data holds up.

    • Jeff Walker says:

      I thought independent replication was the purpose of http://berkeleyearth.org

      This point always makes me question my own beliefs, because I think we should believe the experts; otherwise chaos would reign. But the quality of the science varies among fields, from demonstrably cargo-cult at one end to what seems to be pretty rigorous at the other, and there is no demarcation point for classifying a field as “pretty rigorous.” Of course, how one classifies different fields along this axis is heavily biased by that person’s priors. Bill Maher seems to be one person who applies a different set of reality checks to different fields (evolution vs. alt med). One value of critics and even cranks is that they do force a field to stop and create some rigorous checks.

      • Terry says:

        Thanks for the link. I wasn’t aware of this group.

        Looking at their website, they look like they might actually be independent-minded scientists, which would be a welcome addition to the climate science field.

        Richard Muller, one of the people involved, has this pithy video, which concisely explains why an independent check is necessary. https://www.youtube.com/watch?feature=player_embedded&v=8BQpciw8suk. Muller concisely explains the dishonesty of the “hockey stick” and says he doesn’t trust any scientist who was involved with it.

  8. Anoneuoid says:

    I think we should believe the experts, otherwise chaos would reign.

    https://en.wikipedia.org/wiki/Nullius_in_verba

    • Anoneuoid says:

      Interesting that the notion of “don’t be bound to the experts” is said by some to be a response to “cognitive chaos,” since the end result is everyone just listening to their own “experts”:

      Those who first used it, and many since, would have been aware of the source, and by knowing that the original was an assertion of independence, they would read the implications of the slogan. They would read it as: ‘not in bond to any master’ or ‘not bound to swear allegiance or subservience to any master’.
      […]
      Responding to a query about whether in England St George’s Day would not be more appropriate for such an important event, one of the Fellows is reported as saying: ‘I had rather have had it been on St. Thomas’s Day, for he would not beleeve till he had seen and putt his fingers into the holes; according to the motto, Nullius in Verba.’ There could scarcely be a more forceful illustration of the strength of the ‘test it for yourself’ attitude, which is the main and proper sentiment for which we can thank the Society’s founders.
      […]
      The effort to distance oneself from ‘mere opinions’ was particularly important in the 1660s, and when Sprat complained about abuses of persuasive language he had in mind not just the mediaeval scholastics and the alchemists, but also the religious enthusiasts and political pamphleteers of the preceding decades. In an age of great flux of opinion, the quest for an uncontroversial, non-interpretive, language was in part an appeal against incipient cognitive chaos.

      https://www.jstor.org/stable/4027580

    • AllanC says:

      This is a positively excellent motto for areas where one has expertise. It is probably a net negative in areas where one doesn’t have such expertise and/or doesn’t have the resources to acquire expertise within a suitable timeframe.

      You can get pretty far in evaluating the advice / talk of others in professions in which you are not an expert with rudimentary deductive skills and a dash of motivation. But at some point, you have to trust your neurosurgeon or your accountant or your lawyer or your engineer or your contractor or your banker or your [select professional here] when it gets to the nitty gritty of getting things done. Actually, you don’t have to. But there are so many things required to live one’s life that to be an expert in all of them in the timeline that life demands, is a darn difficult task.

      In general, my approach to this problem is pretty much to assume that everyone is generally wrong / operating with sub-optimal procedures. How much I care about correcting that is somewhat proportional to the trade off of energy it would take to really learn how wrong they are, the time required to learn how to correct it, and the payoff for doing so.

      Relevant here might be INFLUENCE: The Psychology of Persuasion by Cialdini (pages 5-9). Sometimes we just need to buy the expensive jewelry and believe it was worth it!

      • Anoneuoid says:

        You can get pretty far in evaluating the advice / talk of others in professions in which you are not an expert with rudimentary deductive skills and a dash of motivation. But at some point, you have to trust your neurosurgeon or your accountant or your lawyer or your engineer or your contractor or your banker or your [select professional here] when it gets to the nitty gritty of getting things done. Actually, you don’t have to. But there are so many things required to live one’s life that to be an expert in all of them in the timeline that life demands, is a darn difficult task.

        I think it's fine as long as you realize you're using a heuristic and that there is zero reason to get emotionally attached to it.

  9. Bruce E. Bernstein says:

    I saw Dubner and Levitt speak at Symphony Space on Manhattan's Upper West Side, right after the publication of SuperFreakonomics. It must have been the winter of 2009/2010.

    Behind a facade of lightheartedness, they are a pompous and smug pairing. They cast doubt on anthropogenic climate change. In particular, I remember Steven Levitt saying, posing as the voice of authority, that the globe WAS warming but that he wasn't convinced it was because of human activity. I thought to myself, "well, he can walk around the corner at the University of Chicago and talk to some climate scientists, who actually know the facts."

    It struck me as ironic because this was shortly after Levitt told an anecdote about how he, as a smart-alec high school student, had cast doubt on the HIV/AIDS epidemic in a valedictorian speech or something similar, and how he regretted it. Now he was making the same ill-informed smart-alec mistake — only now he was 40-something.

    Dubner to this day insists that global agreements on climate are worthless because of what he calls “the free rider problem.”

    Their central point in 2009 was that you can't INTERVENE in the economy, since that will only produce bad results, and that environmental problems tend to have technological solutions produced by the magic hand of the free market. In fact, this is their basic point on everything. They are market worshipers. Their main example regarding climate change was that at the turn of the 20th century the streets of NYC were piled high with horse manure, and it was regarded as an insoluble problem. But then no one predicted the automobile.

    Well, I'll leave it to you to figure out all the problems with that analogy. There are too many to enumerate.

    So their solution to climate change was various forms of fanciful geoengineering. It struck me as ironic, once again, that they were so concerned about the unintended side effects of things like renewables or energy conservation… but not too concerned with spewing a cloud of reflective particulates into the stratosphere. Nope, no unintended consequences there.

    Behind their practiced insouciance, they are very right wing people, quasi-libertarian.

    • Andrew says:

      Bruce:

      I agree that Levitt is politically conservative in some ways, but I also read that he said he thought Obama would be the greatest president in history. So I don't think it's accurate to describe Levitt as right-wing. Rather, I'd say he has a mix of political views that are not completely coherent.

      • Bruce Bernstein says:

        Hi Andrew, thank you for the response.

        Looking back at your blog post on Levitt (the one you linked to), it's obvious to me that you've read more of his work and thought more about what he stands for than I have. My experience is with "Dubner and Levitt": attending the presentation I mentioned (it really was a book tour event), listening to various podcasts, etc. And I would agree with you that both of them are not particularly coherent politically.

        And yet, I have always found them annoying. I don't have any beef with tackling what others consider sacrosanct, but they seem to pick their targets with the aim of taking on some mythical left-wing hegemony. I think particularly of their stances on climate change. They have backed off the worst of those, but they still can't be considered friends of the earth.

        And I'll stick to the point that they are "free market worshipers," which, at the end of the day, makes them conservative, at least with a small c.

        In your blog post, you compare yourself to Levitt as something of an iconoclast. I think that isn't fair — to you. I've heard you speak about the same number of times I have heard Levitt. You try to get your audience to think, not to go out of your way to show how UNTHINKING we are. Levitt and Dubner specialize in a certain brand of insouciant snark. It's the "I'm smarter than you are" style common among libertarians and the "libertarian-minded." I can usually smell it out very quickly.

    • static says:

      Right wing and libertarian are in no way the same thing. There’s more than one axis.

  10. static says:

    As you make clear, they disagreed with Easterbrook. The more salient point for them was drawing a parallel to the response to efforts to remove CO2 from the atmosphere, as opposed to simply limiting the addition of CO2 to the atmosphere.

    In fact, any time CO2 removal from air is proposed, there is a reflexive opposition to it.
