I came across a document [updated link here], “Applying structured analogies to the global warming alarm movement,” by Kesten Green and Scott Armstrong. The general approach is appealing to me, but the execution seemed disturbingly flawed.
Here’s how they introduce the project:
The structured analogies procedure we [Green and Armstrong] used for this study was as follows:
1. Identify possible analogies by searching the literature and by asking experts with different viewpoints to nominate analogies to the target situation: alarm over dangerous manmade global warming.
2. Screen the possible analogies to ensure they meet the stated criteria and that the outcomes are known.
3. Code the relevant characteristics of the analogous situations.
4. Forecast target situation outcomes by using a predetermined mechanical rule to select the outcomes of the analogies.
Here is how we posed the question to the experts:
The Intergovernmental Panel on Climate Change and other organizations and individuals have warned that unless manmade emissions of carbon dioxide are reduced substantially, temperatures will increase and people and the natural world will suffer serious harm. Some people believe it is already too late to avoid some of that harm.
Have there been other situations that involved widespread alarm over predictions of serious harm that could only be averted at considerable cost? We are particularly interested in alarms endorsed by experts and accepted as serious by relevant authorities.
We screened the proposed analogies to find those for which the outcomes were known and that met the criteria of similarity to the global warming alarm. Our criteria for similarity were that the situations must have involved alarms that were:
1. based on forecasts of material human catastrophe arising from effects of human activity on the physical environment,
2. endorsed by scientists, politicians, and media, and
3. accompanied by calls for strong action.
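Step 4 deserves a moment of unpacking. As a minimal sketch (in Python, and entirely my own construction, not Green and Armstrong's actual procedure), here is what a "predetermined mechanical rule" could look like, using the -1/0/+1 outcome scale that shows up in their codings below:

from collections import Counter

# Hypothetical outcome codings for the screened analogies, on a -1/0/+1 scale
# (-1 = the alarming forecasts proved wrong, 0 = no clear effect, +1 = proved right).
# The entries below are illustrative placeholders, not their actual codings.
analogy_outcomes = {
    "Electrical wiring and cancer, etc. (1979)": -1,
    "Radon in homes and lung cancer (1985)": -1,
    "Hypothetical analogy A": 0,
    "Hypothetical analogy B": -1,
}

# One possible mechanical rule: forecast the target situation's outcome
# as the most common outcome among the analogies.
forecast, n = Counter(analogy_outcomes.values()).most_common(1)[0]
print(f"Forecast for the target situation: {forecast} ({n} of {len(analogy_outcomes)} analogies)")

The appeal of a rule like this is that it is fixed before the analogies are coded, so the forecast can't be nudged after the fact.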
The procedure as described all looks good to me. But then I looked at their list of 26 analogies to the alarm over dangerous manmade global warming. There were two items on the list that I knew something about: “Electrical wiring and cancer, etc. (1979)” and “Radon in homes and lung cancer (1985).” My experience with the power lines example comes from having attended a conference on the topic in the late 1980s, having read some of the literature at the time, and then vaguely keeping up with developments since then. My experience with home radon comes from a large project that Phil Price and I did in the 1990s; it’s an example that we wrote up in various places, including my two textbooks.
To calibrate what Green and Armstrong were doing, I looked at what they said about these two cases.
To start with, it seemed iffy to me to consider these cases as comparable to the alarm over global warming, in that each of those examples (power lines and radon) is clearly limited in its scope. Power lines were at one point believed to raise the risk of childhood leukemia for a small subset of the population, and radon was (and still is, to my knowledge) believed to cause several thousand excess cancer deaths each year in the United States. Several thousand early cancer deaths ain’t nothing, but in both cases the risks are clearly bounded, as compared to alarms over global warming melting icecaps, flooding Bangladesh, destroying Miami, etc. So to me it trivializes the climate change risk (or, alternatively, the extent of climate change alarmism) to link it to bounded risks like power lines and radon.
On the other hand, I’ve long held that we can learn about small probabilities by extrapolating from precursor data, so, as long as the extrapolative nature of the argument is formally part of the model, maybe something useful can be learned here. I’m open to the idea that historical analysis of the perceptions of small, bounded risks could be relevant to our understanding of the perception of a potentially large, unbounded risk.
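As a toy version of that extrapolative argument (my own illustration, with made-up numbers), suppose we have counts of progressively more severe precursor events; we can fit a curve to the observed frequencies and extrapolate to severities that have never been observed:

import numpy as np

# Made-up counts of precursor events at increasing severity levels,
# all observed over the same fixed period.
severity = np.array([1, 2, 3, 4])
counts = np.array([1000, 90, 8, 1])

# Fit a log-linear model, log(count) = a + b * severity, and extrapolate
# to a severity level (6) that has never been observed.
b, a = np.polyfit(severity, np.log(counts), 1)
print(f"Extrapolated frequency at severity 6: {np.exp(a + b * 6):.4f} events per period")

Whether such an extrapolation is credible depends entirely on the fitted model being a formal part of the argument, which is the condition I’m insisting on above.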
So now on to the details. Here’s their coding system:
And here’s how they code the two risks that I know something about:
I dispute a lot of these codings. In particular, my impression is that the consensus is that electrical wiring has no effect on cancer. So that should be coded as a 0, not a -1. They also code that there was “substantial government intervention” in the area. I think that “substantial” would be overstating things here, but I guess the real point is that “substantial” needs to be defined more clearly. I’ll accept that, to the extent there was any intervention, it was retrospectively harmful in that it was combating a non-risk.
Now on to the radon example. They code the accuracy of forecasts as a -1. Huh? Are they claiming that home radon exposure actually saves lives? Sure, I know that some people make that claim, but my impression is that it’s a minority view and hardly a reasonable summary of expert consensus here. They next write that the proposed action involved “substantial government intervention.” I don’t think this is right at all. The EPA recommends that homeowners test their houses for radon and remediate if the measurement exceeds a specified level. We have argued that these recommendations are far from optimal, but I’d hardly call such recommendations “substantial government intervention.” The next claim is that these policies (recommending measurement and remediation) were “harmful.” I don’t see such clarity here. As we discussed in our research paper, it’s a tradeoff between dollars and lives. You can make the claim that the tradeoff was, in aggregate, too expensive and thus the policies were harmful, but this hardly seems clear to me. I’d guess that most observers would consider the policies to be effective in that they saved lives at a reasonable cost. (See Table 5 of our paper for estimates of cost per life saved under various decision rules and various assumptions about risk.)
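To make the dollars-and-lives tradeoff concrete, here is a stylized calculation for a threshold decision rule of this type. Every number here (the radon distribution, the remediation cost, the risk coefficient) is made up for illustration, and unlike our actual analysis it ignores measurement error and assumes remediation removes the exposure entirely; the real estimates are in Table 5 of our paper:

import numpy as np

rng = np.random.default_rng(0)
n_homes = 100_000
radon = rng.lognormal(mean=0.0, sigma=1.0, size=n_homes)  # hypothetical home radon levels (pCi/L)

action_level = 4.0            # remediate above this measurement (the EPA action level is 4 pCi/L)
cost_per_remediation = 2_000  # hypothetical dollars per home
risk_per_unit = 1e-5          # hypothetical deaths per home per unit of radon exposure

# The decision rule: remediate every home whose measurement exceeds the
# action level, optimistically counting the full exposure-related risk as removed.
remediate = radon > action_level
total_cost = int(remediate.sum()) * cost_per_remediation
lives_saved = (radon[remediate] * risk_per_unit).sum()

print(f"Homes remediated: {int(remediate.sum()):,}, total cost: ${total_cost:,.0f}")
print(f"Lives saved: {lives_saved:.2f}, dollars per life saved: ${total_cost / lives_saved:,.0f}")

Move the action level or the assumed risk around and the dollars-per-life figure swings widely, which is exactly why a flat coding of “harmful” seems too strong to me.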
P.S. The latest date on any of these documents is 31 Mar 2011, so perhaps the authors have given up on the project. Which seems too bad, but given what they had so far, I think that if they wanted to continue, they’d need to scrap it and start all over. I’ve had various research ideas that I ended up quitting on because they just didn’t work; this could be one of those for them.
The outdated nature of all this made me wonder if I should post on it at all. But I think the general issues are interesting enough that it was worth a look.