Confusion from illusory precision

When I posted this link to Dean Foster’s rants, some commenters pointed out this linked claim by famed statistician/provocateur Bjørn Lomborg:

If [writes Lomborg] you reduce your child’s intake of fruits and vegetables by just 0.03 grams a day (that’s the equivalent of half a grain of rice) when you opt for more expensive organic produce, the total risk of cancer goes up, not down. Omit buying just one apple every 20 years because you have gone organic, and your child is worse off.

Let’s unpack Lomborg’s claim. I don’t know anything about the science of pesticides and cancer, but can he really be so sure that the effects are so small as to be comparable to the health effects of eating “just one apple every 20 years”?

I can’t believe you could estimate effects to anything like that precision. I can’t believe anyone has such a precise estimate of the health effects of pesticides, and I also can’t believe anyone has such a precise estimate of the health effect of eating an apple. Put them together and we seem to be in a zero-divided-by-zero situation.

Maybe you have to write in this sort of hyper-overconfident way in order to get press? To me it seems a bit tacky.

P.S. In any case, I doubt Lomborg is entirely serious in his column; he also writes that cutting CO2 emissions would save “less than one-tenth of a polar bear” yearly, which again seems to imply an implausible (to me) precision. Again, not something I like to see from a statistician.

18 thoughts on “Confusion from illusory precision”

  1. “I can’t believe you could estimate effects to anything like that precision.”

    (tongue in cheek): yes you can, just use a huge sample size, e.g. case-control study of everyone in the U.S.

    (serious comments follow): Classical testing focuses on sampling error (of one sample) as the source of variability. Is there work on a sequence of possibly non-iid samples in which we use the repeatability of an outcome as evidence of an effect?

    I’m thinking of online advertising in particular. The response rate for each sample is tiny while the sample size is huge, e.g. 10 responses out of 1 million impressions. Classical tests would easily declare effects significant because of the huge sample size. Also, when the average rate is 10 responses, if the marketer does something to lift it to 15 responses, that would be a big deal. But then, how much confidence can we place in the precision based on one campaign? I’d be more confident if the incremental 5 responses were repeatable over multiple campaigns. But multiple campaigns are not iid samples. Is there any work that considers the repeatability of Type S effects over sequential samples?

    • Kaiser: If Andrew meant precision in the strict sense of not considering bias (excluding confounding, selection, and mis-measurement, as in idealised randomised experiments), then yes; but for the science of pesticides and cancer the bias is arguably never known to be small or of a given size.

      And if the bias is consistent across studies, that undermines the value of considering repeatability.

      I find it hard to believe the author was not just being speculative!

      • Perhaps, the above was too casual.

        You might want to look at (or skim)
        Greenland S, O’Rourke K. Meta-analysis. In: Rothman KJ, Greenland S, Lash T, eds. Modern Epidemiology, 3rd ed. Lippincott Williams & Wilkins; 2008.

    • That was certainly my impression — physiologically, it’s absurd to suggest that altering food intake by 30 mg per day has any significant effect. Poisons or vitamins yes, whole foodstuffs no.

    • It might not be an excuse, but it might be an explanation: it’s my impression that Danish polsci is much less empirically grounded than American polsci with much more emphasis on public administration and much less emphasis on methods. On the other hand, he’s published on simulation research in ASR. So I dunno.

      (Joke heard at a party at the Faculty of Humanities, University of Copenhagen: “Q: What’s a political scientist?” “A: A historian of ideas who has read neither Heidegger nor Wittgenstein.” No offense intended to anyone.)

  2. When it comes to cancer, medical science does sometimes deal with vanishingly small effects. In this case, his point is that the effect of pesticides on cancer is vanishingly small. Nearly all pesticides classified as “known” or “probable” human carcinogens have already been banned. A pesticide might be classified as a “known human carcinogen” if it results in a 10% increase in cancer incidence among professional pesticide applicators. If the exposure from eating treated fruit and vegetables is, say, 1000 times lower than that of an average pesticide applicator, pesticides increase the risk of cancer by 0.01%, the same order of magnitude as reducing fruit intake by 0.03 grams a day.

    In another example, the pesticide Captan (commonly used on apples) is considered a “probable human carcinogen” because studies show that 50% of rats fed 2000 mg/kg/day of Captan end up developing tumors. If we assume that humans respond to the chemical the same way rats do, and that the dose dependence is more or less linear, then for a 70 kg person the 50% risk corresponds to about 140 g/day, so you would need to eat about 3 g/day to stand a 1% risk of developing Captan-induced cancer.

    How much do you actually get from eating treated apples? An average unwashed, unpeeled apple contains 3 mg/kg of Captan. If you eat 1 kg/day of apples, you get 3 mg/day of Captan, which raises your (absolute) risk of cancer by about 0.001%.

    • The assumption that dose-dependence is linear is problematic. I took a risk perception class once and I seem to remember reading that certain dose-response curves (to endocrine disruptors, I think) weren’t even monotonic, never mind linear. If we took that seriously, we’d suddenly have to start spending a lot more money on product safety testing.

      The other thing I always wonder about (for safety testing, but also in drug trials) is interaction effects — I don’t see any reason to assume that our bodies’ reactions to inputs are additive… but of course testing all possible interactions is, again, impossibly expensive.
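    To make the back-of-envelope Captan arithmetic above concrete, here is a minimal sketch. It keeps the comment’s own assumptions (rat results transfer to humans, linear dose-response) and additionally assumes a 70 kg body weight, which is my addition, not the commenter’s:

    ```python
    # Illustrative only: replicates the comment's Captan arithmetic under strong
    # assumptions (rat-to-human transfer, linear dose-response, 70 kg adult).
    rat_dose = 2000.0                                  # mg/kg/day -> 50% tumor rate in rats
    body_weight = 70.0                                 # kg, assumed human body weight
    dose_50pct = rat_dose * body_weight / 1000.0       # g/day for a 50% risk (140 g/day)
    dose_1pct = dose_50pct / 50.0                      # linear scaling to a 1% risk (~2.8 g/day)

    residue = 3.0                                      # mg Captan per kg of unwashed apples
    intake_mg = residue * 1.0                          # mg/day from eating 1 kg of apples a day
    risk_pct = (intake_mg / 1000.0) / dose_1pct * 1.0  # absolute risk in percent (~0.001%)
    print(dose_1pct, risk_pct)
    ```

    The numbers come out to roughly 2.8 g/day and 0.001%, matching the figures quoted above.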

  3. The other thing that bugs me about Lomborg’s claim is this: since when do we have any really good data on food and cancer? Isn’t that one of those wishful-thinking branches of epidemiology where every association that is discovered fails to be confirmed five or ten years later? Or am I too cynical about this?

    • It is terribly difficult to get consistent associations between food and XXX out of epidemiological studies, for any XXX. Forget cancer: we don’t always get consistent results on food and obesity! Dietary questionnaires are unreliable, within-study variations in food intake are often too small, interactions between food and non-dietary habits (and also between different dietary habits) are mind-boggling, and confidence intervals are too wide.

      However, the correlation between cancer and consumption of fruit and vegetables is more consistent than most. Studies converge on a 10% to 20% risk reduction at most sites for a 100 g/day increase in intake.

        • Exactly. Studies usually try to control for alcohol and smoking, but I’m not sure how accurately that is done.

          Or maybe fruits and vegetables displace something else from the diet and that “something else” is bad for you. Red meat is one popular culprit. In principle, you could try to distinguish between eating more veggies and eating less red meat, but then the population you need to see a significant effect becomes truly huge.

          Here’s a relatively recent review with lots of numbers:

          http://69.164.208.4/files/Fruits%20and%20vegetables%20in%20cancer%20prevention.pdf

        • Do they control for exercise? In my anecdotal experience people who eat fruits and vegetables tend to be in general more conscientious about exercise and portion-sizes. Eating an apple when you could have had a burger instead needs high impulse-control and a good future-reward orientation. These are traits that will correlate well with other health-beneficial activities.
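    For what it’s worth, taking the 10–20% per 100 g/day figure quoted above at face value and (unrealistically) assuming the dose-response stays linear all the way down, Lomborg’s 0.03 g/day works out to a tiny relative risk change; the linear extrapolation here is purely illustrative:

    ```python
    # Illustrative only: linearly scale the quoted 10-20% relative risk
    # reduction per 100 g/day of fruit/vegetable intake down to 0.03 g/day.
    per_100g_low, per_100g_high = 0.10, 0.20   # relative risk reduction per 100 g/day
    delta_g = 0.03                             # Lomborg's change in daily intake, grams
    low = per_100g_low * delta_g / 100.0       # 3e-05, i.e. a 0.003% relative change
    high = per_100g_high * delta_g / 100.0     # 6e-05, i.e. a 0.006% relative change
    print(low, high)
    ```

    Effects that small are far below anything these epidemiological studies could resolve, which is one sense in which the precision of the original claim looks illusory.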

  4. He also very clearly implies that saving that fractional polar bear is the total benefit of the avoided climate change, which is just ridiculous. The bears have become a popular symbol, but few of those who think climate change should be a high priority would put that one species at the top of the reasons why.

  5. Dean Foster’s rants also contain some pretty uninformed silliness about evolution, e.g. saying that population genetics is the only “theory” in evolution. This would deeply annoy us phylogeneticists…
