How to think about the risks from low doses of radon

Nick Stockton, a reporter for Wired magazine, sent me some questions about radiation risk and radon, and Phil and I replied. I thought our responses might be of general interest so I’m posting them here.

First I wrote:

Low dose risk is inherently difficult to estimate using epidemiological studies. I’ve seen no evidence that risk is not linear at low dose, and there is evidence that areas with high radon levels have elevated levels of lung cancer. When it comes to resource allocation, we recommend that measurement and remediation be done in areas of high average radon levels but not at a national level; see here and here and, for a more technical treatment, here.

Regarding the question of “If the concerns about the linear no-threshold model for radiation risk are based on valid science, why don’t public health agencies like the EPA take them seriously?” I have no idea what goes on within the EPA, but when it comes to radon remediation, the effects of low dose exposure aren’t so relevant to the decision: if your radon level is low (as it is in most homes in the U.S.) you don’t need to do anything anyway; if your radon level is high, you’ll want to remediate; if you don’t know your radon level but it has a good chance of being high, you should get an accurate measurement and then make your decision.

For homes with high radon levels, radon is a “dangerous, proven harm,” and we recommend remediation. For homes with low radon levels, it might or might not be worth your money to remediate; that’s an individual decision based on your view of the risk and on whether you can spare the $2000 or whatever it costs to remediate.
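
To spell out that decision logic, here’s a minimal sketch in code (the 4 pCi/L action level that comes up later and the rough $2000 cost are the only numbers taken from this post; the “worth measuring” threshold is an arbitrary placeholder):

ACTION_LEVEL_PCI_L = 4.0   # EPA action level, discussed further below
REMEDIATION_COST = 2000    # ballpark cost mentioned above

def radon_decision(measured_pci_l=None, chance_of_being_high=0.0):
    """Suggest a next step, mirroring the reasoning above.
    The 0.1 'worth measuring' cutoff is an arbitrary placeholder."""
    if measured_pci_l is None:
        if chance_of_being_high > 0.1:
            return "get an accurate measurement, then decide"
        return "probably fine to do nothing"
    if measured_pci_l >= ACTION_LEVEL_PCI_L:
        return f"remediate (roughly ${REMEDIATION_COST})"
    return "remediation optional; depends on your risk tolerance and budget"

print(radon_decision(measured_pci_l=8.0))        # -> remediate
print(radon_decision(chance_of_being_high=0.5))  # -> measure first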

Then Phil followed up:

The idea of hormesis [the theory that low doses of radiation can be beneficial to your health] is not quackery. Nor is LNT [the linear no-threshold model of radiation risk].

I will elaborate.

The theory behind LNT isn’t just ‘we have to assume something’, nor ‘everything is linear to first order’. The idea is that twice as much radiation means twice as many cells with damaged DNA, and if each cell with damaged DNA has a certain chance of initiating a cancer, then ceteris paribus you have LNT. That’s not crazy.

The theory behind hormesis is that your body has mechanisms for dealing with cancerous cells, and that perhaps these mechanisms become more active or more effective when there is more damage. That’s not crazy either.

Perhaps exposure to a little bit of radiation isn’t bad for you at all. Perhaps it’s even good for you. Perhaps it’s just barely bad for you, but then when you’re exposed to more, you overwhelm the repair/rejection mechanisms and at some point just a little bit more adds a great deal of risk. This goes for smoking, too: maybe smoking 1/4 cigarette per day would be good for you. For radiation there are various physiological models and there are enough adjustable parameters to get just about any behavior out of the models I have seen.
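
To illustrate that last point, here’s a toy dose-response function (the functional form and all numbers are invented for illustration; this is not any of the physiological models I mentioned) that can produce linear, hormetic, or upward-curving behavior depending on how its parameters are set:

import numpy as np

def excess_risk(dose, linear=1.0, hormetic=0.0, quadratic=0.0):
    """Toy curve: a linear term, an optional low-dose 'repair' dip,
    and an optional high-dose quadratic term. Illustrative only."""
    return linear * dose - hormetic * dose * np.exp(-dose) + quadratic * dose**2

doses = np.linspace(0, 5, 11)
print(excess_risk(doses))                  # pure LNT-style line
print(excess_risk(doses, hormetic=1.5))    # dips below zero at low dose
print(excess_risk(doses, quadratic=0.3))   # curves upward at high dose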

Of course what is needed is actual data. Data can be in vitro or in vivo; population-wide or case-control; etc.

There’s fairly persuasive evidence that the dose-response relationship is significantly nonlinear at low doses for “low linear-energy-transfer radiation”, aka low-LET radiation, such as x-rays. I don’t know whether the EPA still uses an LNT model for low-LET radiation.

But for high-LET radiation, including the alpha radiation emitted by radon and most of its decay products of concern, I don’t know much about the dose-response relationship at low doses and I’m very skeptical of anyone who says they do know. There are some pretty basic reasons to expect low-LET and high-LET radiation to have very different effects. Perhaps I need to explain just a bit. For a given amount of energy that is deposited in tissue, low-LET radiation causes a small disruption to a lot of cells, whereas high-LET radiation delivers a huge wallop to relatively few cells.

An obvious thing to do is to look at people who have been exposed to high levels of radon and its decay products. As you probably know, it is really radon’s decay products that are dangerous, not radon itself. When we talk about radon risk, we really mean the risk from radon and its decay products.

At high concentrations, such as those found in uranium mines, it is very clear that radiation is dangerous, and that the more you are exposed to, the higher your risk of cancer. I don’t think anyone would argue against the assertion that an indoor radon concentration of, say, 20 pCi/L leads to a substantially increased risk of lung cancer. And there are houses with living area concentrations that high, although not many.

A complication is that the radon risk for smokers seems to be much higher than for non-smokers. That is, a smoker exposed to 20 pCi/L for ten hours per day for several years is at much higher risk than a non-smoker with the same level of exposure.
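
One common way to write that kind of interaction down is an excess-relative-risk model with a larger slope for smokers; the numbers below are placeholders I made up, not estimates from any published analysis:

def lung_cancer_relative_risk(radon_pci_l, smoker,
                              beta_nonsmoker=0.1, smoker_multiplier=5.0):
    """Toy model: relative risk = 1 + beta * concentration, with a larger
    beta for smokers. Both beta values are illustrative placeholders."""
    beta = beta_nonsmoker * (smoker_multiplier if smoker else 1.0)
    return 1.0 + beta * radon_pci_l

for level in (1, 4, 20):
    print(level,
          lung_cancer_relative_risk(level, smoker=False),
          lung_cancer_relative_risk(level, smoker=True))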

But what about 10, 4, 2, or 1 pCi/L? No one really knows.

One thing people have done (notably Bernard Cohen, who you’ve probably come across) is to look at the average lung cancer rate by county, as a function of the average indoor radon concentration by county. If you do that, you find that low-radon counties actually have higher lung cancer rates than high-radon counties. But: a disproportionate fraction of low-radon counties are in the South, and that’s also where smoking rates are highest. It’s hard to completely control for the effect of smoking in that kind of study, but you can do things like look within individual states or regions (for instance, look at the relationship between average county radon concentrations and average lung cancer rates in just the northeast) and you still find a slight effect of higher radon being associated with lower lung cancer rates. If taken at face value, this would suggest that a living-area concentration of 1 pCi/L or maybe even 2 pCi/L would be better than 0. But few counties have annual-average living-area radon concentrations over about 2 pCi/L, and of course any individual county has a range of radon levels. Plus people move around, both within and between counties, so you don’t know the lifetime exposure of anyone. Putting it all together, even if there aren’t important confounding variables or other issues, these studies would suggest a protective effect at low radon levels but they don’t tell you anything about the risk at 10 pCi/L or 4 pCi/L.
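
Here’s a quick simulation, with entirely made-up numbers, of how that kind of confounding can flip the sign of an ecological radon–lung-cancer association:

import numpy as np

rng = np.random.default_rng(0)
n_counties = 500

# Made-up county data: smoking rate is negatively tied to radon level,
# and smoking (not radon) drives most of the lung cancer rate.
radon = rng.gamma(shape=2.0, scale=0.7, size=n_counties)   # county mean, pCi/L
smoking = np.clip(0.30 - 0.05 * radon + rng.normal(0, 0.03, n_counties), 0.05, 0.6)
cancer_rate = 5 + 200 * smoking + 1.0 * radon + rng.normal(0, 3, n_counties)

# Ecological regression of cancer on radon alone: the slope comes out negative.
print(np.polyfit(radon, cancer_rate, 1)[0])

# Adjusting for smoking recovers a radon coefficient close to the true value of +1.
X = np.column_stack([np.ones(n_counties), radon, smoking])
print(np.linalg.lstsq(X, cancer_rate, rcond=None)[0][1])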

There’s another class of studies, case-control studies, in which people with lung cancer are compared statistically to those without. In this country the objectively best of these looked at women in Iowa. (You may have come across this work, led by Bill Field.) Iowa has a lot of farm women who don’t smoke and who have lived in just a few houses for their whole lives. Some of these women contracted lung cancer. The study made radon measurements in these houses, and in the houses of women of similar demographics who didn’t get lung cancer. It found increased risk at 4 pCi/L (even for nonsmokers, as I recall), and the results are certainly inconsistent with a protective effect at 4 pCi/L. As I recall — you should check — they also found a positive estimated risk at 2 pCi/L that is consistent with LNT but also statistically consistent with 0 effect.
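
The basic quantity a case-control study like that estimates is an odds ratio; with toy counts (not the Iowa numbers, which I don’t have in front of me) it looks like this:

# Invented 2x2 table at a 4 pCi/L cut point; NOT the Iowa study's data.
cases_exposed, cases_unexposed = 60, 140        # lung-cancer cases above / below 4 pCi/L
controls_exposed, controls_unexposed = 40, 160  # controls above / below 4 pCi/L

odds_ratio = (cases_exposed / cases_unexposed) / (controls_exposed / controls_unexposed)
print(odds_ratio)   # > 1 means elevated risk among the exposed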

So, putting it all together, what do we have? I, at least, am convinced that increased exposure leads to increased risk for concentrations above 4 pCi/L. There’s some shaky empirical evidence for a weak protective effect at 2 pCi/L compared to 0 pCi/L. In between it’s hard to say. All of the evidence below roughly 8 or 10 pCi/L is pretty shaky, due to the low expected risk, methodological problems with the studies, etc.

My informed belief is this: just as I wouldn’t suggest smoking a little bit of tobacco every day in the hope of a hormetic effect, I wouldn’t recommend a bit of exposure to high-LET radiation every day. It’s not that it couldn’t possibly be protective, but I wouldn’t bet on it. And I’m pretty sure the EPA’s recommended ‘action level’ of 4 pCi/L is indeed risky compared to lower concentrations, especially for smokers. As a nonsmoker I wouldn’t necessarily remediate if my home were at 4 pCi/L, but I would at least consider it.

For low-LET radiation, I think the scientific evidence weighs against LNT. If public health agencies don’t take LNT seriously for this type of radiation, it may be because they acknowledge this.

For high-LET radiation, such as alpha particles from radon decay products, there’s more a priori reason to believe LNT would be a good model, and less empirical evidence suggesting that it is a bad model. It might be hard for the agencies to explicitly disavow LNT in these circumstances. At the same time, there’s not compelling evidence in favor of LNT even for this type of radiation, and life is a lot simpler if you don’t take LNT ‘seriously’.

“Service” is one of my duties as a professor—the three parts of this job are teaching, research, and service—and, I guess, in general, those of us who’ve had the benefit of higher education have some sort of duty to share our knowledge when possible. So I have no problem answering reporters’ questions. But reporters have space constraints: you can send a reporter a long email or talk on the phone for an hour, and you’ll be lucky if one sentence of your hard-earned wisdom makes its way into the news article. So much effort all gone! It’s good to be able to post here and reach some more people.

39 thoughts on “How to think about the risks from low doses of radon”

  1. The whole approach of trying to tease out this or that individual effect in these cases* (ceteris paribus) is wrong imo.

    The machine learning people got it right. Just plug everything into some inscrutable model that takes into account all the correlations, then assess the predictive skill. Your brain simply is not made to understand the real web of correlations involved, at least not consciously. Or at least we haven’t figured out the correct language to communicate this type of info, if it can be communicated at all.

    *where there is clearly no dominating universal law

    • I’m not sure I agree with this. We make progress on understanding the world by understanding mechanisms. Yes, things like low dose responses are very hard to measure in humans. But we can run animal experiments. If this is a topic that matters, we should be able to fund high quality experiments, each one can provide us with some mechanistic understanding of the factors that go into risk. Machine learning may be a fine way to quickly produce predictions for fairly transient things (like, say, marketing) but it’s a poor way to understand how cancer is initiated by low dose radiation exposure, which *is* presumably a fairly universal stable thing… on the timescale of a human lifetime at least.

      • “If this is a topic that matters, we should be able to fund high quality experiments, each one can provide us with some mechanistic understanding of the factors that go into risk.”

        Well, if you want to turn it into a situation with a “dominating law” the basic science needs to be done first.

        The current mainstream idea is that each tissue consists of n_stem stem cells and n_func functional/terminal cells. For every n_funcPerStem functional cells that die due to mutation, metabolic overload, etc. (rate r_funcDeath), one stem cell division is needed to replace them. For example, one stem cell divides into two intermediate cells, which gives four functional cells.

        Now, each time any of tissue stem cells divides there is a probability of some cancer-contributing mutation occurring (p_mut). If n_mut mutations accumulate in a single cell then it will become a potential cancer cell. Some percentage (p_clear) of these cancer cell lineages will die off due to either malfunction or immune clearance without causing a problem.

        Step 1. To begin with, we want to know some estimate of:
        n_stem = number of stem cells in the tissue
        n_func = number of functional cells in the tissue
        n_funcPerStem = number of functional cells resulting from each stem cell division
        r_funcDeath = rate of functional cell turnover
        p_mut = probability of gaining a carcinogenic mutation during any given division
        n_mut = number of accumulated mutations required for a cell to turn tumorigenic
        p_clear = percent of potential cancer cells that get cleared before forming a detectable tumor

        Step 2. Figure out a way to translate the qualitative description to a quantitative model of the relationship between these parameters and compare to data. Age-specific incidence will be something like the geometric distribution, in which case the expected number of detectable tumors after d divisions is:


        E(n_tumor) = (1 - p_clear)*n_stem*(1 - (1 - p_mut)^d)^n_mut

        Where the number of divisions d accumulated by time t is something like

        d(t) = t * r_funcDeath*n_func/n_funcPerStem

        Step 3. Once we have some idea of that, we would want to get a dose-response effect of radon exposure on each parameter. Most likely r_funcDeath, p_mut, p_clear would all be affected.
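
        Here is a minimal sketch of Steps 1 and 2 in code, with order-of-magnitude placeholder values (none of these numbers are measured estimates, and I am reading d as divisions per stem-cell lineage, i.e. dividing the total division rate by n_stem):

        # Step 1: placeholder order-of-magnitude values, NOT measured estimates.
        n_stem        = 1e6    # stem cells in the tissue
        n_func        = 1e10   # functional cells in the tissue
        n_funcPerStem = 4      # functional cells produced per stem cell division
        r_funcDeath   = 0.01   # fraction of functional cells turning over per day
        p_mut         = 1e-7   # chance of a cancer-contributing mutation per division
        n_mut         = 4      # mutations needed for a tumorigenic cell
        p_clear       = 0.99   # fraction of potential cancer lineages cleared

        def expected_tumors(t_days, p_mut=p_mut):
            # Step 2: divisions accumulated per stem-cell lineage by time t,
            # then the expected number of detectable tumors.
            d = t_days * r_funcDeath * n_func / (n_funcPerStem * n_stem)
            return (1 - p_clear) * n_stem * (1 - (1 - p_mut) ** d) ** n_mut

        # Step 3, schematically: let radon exposure scale p_mut and compare.
        for years in (20, 40, 60, 80):
            t = years * 365
            print(years, expected_tumors(t), expected_tumors(t, p_mut=2 * p_mut))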

        If you try, I think you will find there is ZERO actual interest in doing what is necessary to accomplish the first step.

        • If you mean that people who today are biologists and could potentially get grants to do this kind of research are not interested in doing it, and the grant agencies wouldn’t fund it anyway… then you’re probably right. But in my opinion this doesn’t mean it’s not the right way to go about thinking of the problem and designing experiments to help measure the parameters in various in-vitro or in-situ situations…

          Nor do I think that the unwillingness to do the real work means we should just throw a crap load of data at a machine learning algorithm and accept whatever it tells us.

        • “If you mean that people who today are biologists and could potentially get grants to do this kind of research are not interested in doing it, and the grant agencies wouldn’t fund it anyway… then you’re probably right. But in my opinion this doesn’t mean it’s not the right way to go about thinking of the problem and designing experiments to help measure the parameters in various in-vitro or in-situ situations…”

          If it is feasible to develop some kind of rational quantitative model of the phenomenon this should definitely be done. However, in the absence of that, we are going to do something like a “statistical model” and it is much better to just do the ML thing and take into account all the correlations together.

          Also, of course you don’t just accept whatever it tells you. You assess predictive skill on new data, the same as any model gains support.

        • “If you try, I think you will find there is ZERO actual interest in doing what is necessary to accomplish the first step.”

          During grad school, I knew people who were doing exactly this for their thesis. In fact, one issue is that too many people do this by hand, so it’s my understanding that people are trying to build machine learning models to determine the number of cells of each type in a given photo. It’s also my understanding that this is a very hard ML problem, since there’s already lots of ambiguity in what a grad student decides to label “one cell”.

        • Sure, I’ve done similar machine vision projects just as part of getting stuff done quicker. A couple of theses/dissertations/side projects/etc. is not what is necessary to accomplish the first step. It needs to be the focus of the cancer research program for about a decade, not an afterthought where a few people are working on it incidentally.

          “It’s also my understanding that this is a very hard ML problem, since there’s already lots of ambiguity in what a grad student decides to label ‘one cell’.”

          We really only need order of magnitude values to start with, the important thing is that they are reliable.

      • I concur. The epiphany for me was when 20+ years ago I happened across Petr Skrabanek’s “The Emptiness of the Black Box”. It was in this quote: “The aim of science is to find universal laws governing the world around us and within us; it is about dismantling the ‘black box'”. It’s another reason why I was so pleased to hear Andrew say at his recent discussion at Rutgers that it’s time to start working mechanisms into models – one approach launched about the turn of the century and discussed in GD Smith and S Ebrahim’s “Epidemiology – is it time to call it a day?” (note for posterity – big fan of Smith’s work on Mendelian randomization, sad that he now thinks causation can be discovered by the “best of a bad lot” fallacy Inference to the Best Explanation and that causation is ultimately about “lovely stories”).

        Anyway, mechanistic work in animals and on cell lines is in the midst of its own crisis at the moment, and it doesn’t just stem from Ioannidis’ “Power failure” argument (not enough rats). Cloned knock-out mice living in different buildings develop different microbiomes and often respond differently to identical treatments. Elsewhere, because of failure to validate reagents and cell lines, researchers have been caught repeatedly publishing discoveries about the mechanisms driving e.g. metastatic breast cancer by studying (because they didn’t validate) male thyroid cancer lines. Immunohistochemical staining comes up a lot in what I do as it’s used to e.g. differentiate a primary lung cancer from a metastatic prostate cancer. Lots of commercial outfits make them now and pathology labs have budgets too. You get what you pay for. The discovery that the antigen in the PSA (prostate-specific antigen) is often contaminated or replaced entirely with non-PS antigens has further clouded the debate over screening. But at least these troubling discoveries indicate that science may actually be self-correcting.

        • Yes, animal models have problems, and so I think one of the things we often see is that biologists claim “animal model x shows y is true” when in fact, animal model x just provides data which we should use in an *overall assessment combining evidence* into whether y is true. This “we showed y” thing comes in many ways out of the widespread use of p-value based statistics by biologists who have very little idea of even the existence of any issue with that kind of thinking, and the funding / publication / incentives problem where you have to show “great new discoveries” on a regular basis to keep your money flowing etc.

        • In my experience with biologists, I really disagree that biologists don’t understand these issues. Everyone I worked with understood the limitations of mice models, but also recognized it was all they were permitted to work with at that stage of the process.

          In fact, even the Newsweek article from today, linked below, which has a title that can easily be over-interpreted (note that there’s not a single mention that they are talking about mice and not humans!), quotes the researchers as saying we need to be cautious about these results as they’ve only been shown in mice models and it might come to nothing:

          “But he warned that it’s too early to celebrate just yet. Mice are too different from humans for us to take these results as anything.”

          http://www.newsweek.com/alzheimers-disease-completely-reversed-removing-just-one-enzyme-new-study-807156

        • “very little idea of even the existence of any issue with that kind of thinking” was meant to be about the issues with p-values and the beloved t or chi-squared tests that are rampant in bio. I think biologists are much more savvy about the limitations of translation from animal models to humans.

          For the most part, it seems biologists take their p values as evidence they really have discovered something real about mice… and then lay all the uncertainty on the translation to humans… but the experiments often don’t even really provide evidence for what they think is going on in mice after analyzing the design and statistical analysis.

        • Hmmm…my experience was that they saw p-values as an annoying hoop they needed to jump through in order to be granted publication, which is a view I pretty much agree with when talking about the utility of p-values.

          On the other hand, this was just one bio group that had a previous statistician who would give semi-regular talks and I think shared my views on p-values (i.e., that their current use as a gatekeeper to publication is suboptimal), so it’s definitely possible I wasn’t working with a representative sample of biologists.

  2. EPA doesn’t take LNT seriously because (a) this U.S. Supreme Court decision: https://scholar.google.com/scholar_case?case=968580795714943926 ; and, (b) they understand the ****storm that would follow if they started outlawing e.g. coffee brewing and peanut butter.

    On the other hand, jurors take it very seriously (keeping me gainfully employed) and have in numerous cases awarded 8 and even 9 figure verdicts where the risk was only 1/100th to 1/1000th of what would trigger a response from the regulators. The typical juror has a worse case of risk aversion than any of Kahneman/Tversky’s students. The result is an ad hoc and unpredictable shadow regulatory system.

    Re: non-monotonicity, this is the battle du jour, with plaintiffs arguing that present day very low e.g. asbestos exposures are actually riskier than those 250x higher that OSHA enforced 40 years ago (and thus their “Lazy J” dose response model). The defense perspective is that the risk is obviously not linear as it invariably flattens at high levels – 90%+ of the workers exposed to the highest levels of crocidolite recorded in the workplace 50 years later still haven’t developed mesothelioma. Thus in a human population it does and doesn’t cause the disease (which I think poses causal inference problems for epi). At the low dose end, defendants are frustrated by lots of Chinese and Eastern European epi survey studies that find statistically significant relative risk increases at ever lower levels.

    The hormesis issue comes into play in the NORM litigation and nowadays even those experts prone to say almost anything for a fee will admit that the LNT hypothesis is premised on a primitive and long since debunked theory of cancer initiation. For more on hormesis see: https://www.sciencedirect.com/science/article/pii/S0013935117301664?via%3Dihub

    • The big issue with LNT is that linear is probably a very good intermediate asymptotic model for moderate exposures. Let’s talk about say dose D of carcinogen X. Let’s measure dose D as a fraction of some dose that is typical for workers in an industry where X is used, and measure risk as excess risk above and beyond that which “average” citizens experience if they adopt a “clean living” lifestyle (no smoking, no daily charred BBQ intake or whatever) and have zero exposure to X.

      So, for doses between say D=1 and D=10 if X is a carcinogen you probably do see monotonic, linear increases in risk with dose. At doses D=100 or D=1000 you see some major nonlinearity, but no-one gets those doses short of being involved in some kind of accident where X is spilled onto them or something…

      Now, at dose D=1/2 to D=1 you are probably seeing a decrease in excess risk. But, at D=0 to 1/2 the excess risk is totally drowned out by the inability to measure it due to small sample sizes and small absolute effect sizes. It could be relatively flat, it could be decreasing and then increasing, it could be basically anything, EXCEPT, it can’t be substantially larger in magnitude than for D=1

      So, for doses large enough to produce a measurable effect, linear works, for enormous doses the harm is very obvious and nonlinearity just doesn’t matter, and for doses substantially smaller than typical you have no real data. The LNT hypothesis is just simplicity and laziness, continue the linear trend down through zero because we know it has to go to zero by definition of “increased risk” at D=0. There is usually NO data based justification for it.

      The problem is that since most people get their dose down in the 0-1/2 range, the large count of people makes the decision making in this range important, even though the information to make the decision is nonexistent. If 1 billion people receive dose D=0.13 and this increases cancer risk over the next 10 years by 1/100k then we’re talking about 10^4 excess cancers. But if it decreases excess risk by 1/100k you’re talking about 10^4 fewer cancers… And if there are only 10k workers in the industry where X is used… they just don’t matter in the decision, because a billion people is much much larger than 10000

      It seems much better in general to admit to a second or 3rd order nonlinearity for the region 0-1/2 which you can’t measure. Because with LNT you are biasing your decision to assuming harm. You can’t have zero harm at D=0 and positive harm at D=1 with a linear model, without also having positive harm for the whole range between D=0 and D=1. Simply admitting you could have either harm or benefit for low doses opens a whole new dimension to your decision, and requires you to make that decision based on what information is available in all its forms. LNT just makes the decision for you based on nothing.
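
      To make that arithmetic concrete, here’s a toy comparison of LNT against one invented nonlinear curve that matches LNT at D=1 but dips slightly below zero under D≈0.4 (all numbers are either the illustrative ones from above or made up):

      # Excess 10-year risk per person, scaled so LNT gives the 1-per-100k
      # figure from above at D=0.13. Everything here is illustrative.
      RISK_AT_D1 = 1e-5 / 0.13

      def lnt(d):
          return RISK_AT_D1 * d

      def toy_nonlinear(d):
          # Same harm as LNT at D=1, but a shallow protective dip below D=0.4 or so.
          return RISK_AT_D1 * (1.6 * d**2 - 0.6 * d)

      public, workers = 1_000_000_000, 10_000
      d_public, d_workers = 0.13, 1.0

      for model in (lnt, toy_nonlinear):
          excess = public * model(d_public) + workers * model(d_workers)
          print(model.__name__, round(excess))   # ~ +10,000 vs ~ -3,900 excess cancers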

      • Put another way, LNT ignores uncertainty, and a flexible nonlinear model admits uncertainty. Ignoring uncertainty where the whole decision requires accounting for uncertainty is clearly the wrong thing to do, it’s like a delta-function prior on the shape of the curve, totally unjustified.

  3. I think there’s a sign error in here?

    If you do that, you find that low-radon counties actually have lower lung cancer rates than high-radon counties. But: a disproportionate fraction of low-radon counties are in the South, and that’s also where smoking rates are highest. It’s hard to completely control for the effect of smoking in that kind of study, but you can do things like look within individual states or regions (for instance, look at the relationship between average county radon concentrations and average lung cancer rates in just the northeast) and you still find a slight effect of higher radon being associated with lower lung cancer rates.

    • Whoops, you’re right, Dan! Should be ‘low-radon counties actually have higher lung cancer rates than high-radon counties!’ I hope you aren’t the only one to figure that out from context.

    • I’ve already typed up and then deleted two over-long replies that kept wandering off course but maybe this will hit the target. The corruption of the law that is asbestos litigation was what followed from letting everyone with a disease that had ever been related to asbestos by an epi study win even if, given their dose, the “RF” was orders of magnitude lower than that which Dr. Greenland thought somewhat too high. The courts will never let that happen again. Nowadays they try to roll new ones up into so-called multi-district litigations and the result, assuming you believe the epi on which the causal claims are founded, is that everybody whose experts survive Daubert get offered something. Plaintiff lawyers like it because their 45% fee doesn’t get cut to 25% as with class actions and defendants like them because they can better control transactional costs. The result is something Dr Greenland might find not too far from optimal.

      As for the “Let’s let the people who were going to get cancer but got it sooner because of their exposure to tetra-methyl-death sue even though they can’t establish counterfactual causation, because they lost years of life” argument; it (in the jurisdictions in which I practice) was throttled in the cradle for cases like that discussed (bone cancer in 45 yr old nuclear plant female waste worker). Outside of an MDL where marginal plaintiffs can be blended in and a groupwise expected loss can be estimated, the common law has no mechanism for compensating hypothetical fractional lives of fractional people. Now, plaintiffs’ counsel could argue “ok, my client does have the SNP that predisposes her to bone cancer but they get it at 55 on average and she got it at 45 so give me millions”. However, most jurors are genetic determinists and so she’d almost certainly get poured out on the proximate cause question; and if she got a verdict she’d better hope that 45 wasn’t anywhere near the lower bound of the confidence interval for mean age at onset of disease in nuclear plant workers or it would get tossed on appeal.

  4. “The idea of hormesis [the theory that low doses of radiation can be beneficial to your health] is not quackery.” Well, that depends on what you mean by hormesis. If you are restricting it to the context of radiation then maybe there might be a plausible mechanism. (But I doubt it.) However, if you are using it in the standard way to mean that very low doses of stimuli _generally_ have the opposite effect of higher doses then it resembles the claimed basis of homeopathy and it is, most definitely, quackery. Each case where a biphasic dose-effect relationship is found has a different biological mechanism, and in the case of most stimuli there is no biphasic relationship.

    “This goes for smoking, too: maybe smoking 1/4 cigarette per day would be good for you.” Maybe, but probably not. That speculation shows the danger inherent in accepting the idea of hormesis as a general principle. It is a word that can be applied to the specific examples where data suggest opposite effects at the lower and upper end of dose-response relationships. It is not a general principle.

    • The last I looked into homeopathy I learned they had a peculiar method of “dilution” (shake up the solution then either skim the top or dump the contents and skim the sides of the container) and it didn’t seem like anyone was carefully verifying the active ingredient was actually missing from the final solution. I wouldn’t rule out that some method of achieving low, yet non-zero concentrations of some substances has been discovered (but interpreted very badly) there.

    • I haven’t heard hormesis used the way you say it’s ‘standard’ to use it. Plenty of things are good for you in small doses but bad for you in large doses, but that is certainly not a general rule and anyone who says it is is wrong. Here is a relevant paper.

      Other than that, I think we agree on everything.

      Homeopathy is ridiculous but I don’t think it has much to do with hormesis.

      • I do think we need to worry that the real uncertainties will be neglected (the linked paper _seems_ overly certain) and the generalization will become non-specific and inappropriate – if hormesis is possible then it’s always likely (unless ruled out) – hence (some) homeopathy (involving substances known to be harmful at high doses) will seem more sensible than it should.

        • “if hormesis is possible then it’s always likely”, I’m not sure what that means. Maybe you mean that’s the way some people think? Could be.

          Iron and Vitamin A are toxic in high doses but beneficial or necessary in low doses. Ditto for Vitamin B6. There are plenty of other examples.

        • > toxic in high doses but beneficial or necessary in low doses. Ditto for Vitamin B6. There are plenty of other examples.
          Water being the most obvious ;-)
          (Unfortunately a few deaths every year by drinking too much water).

          And thanks for this quote: “Until the […] uncertainties on low-dose response are resolved, … a strictly linear dose response should not be expected in all circumstances.”

  5. There appears to be some confusion about the mechanism by which ionising radiation is carcinogenic.
    Ionising radiation damages DNA in a random and dose-dependent fashion (this is simple physics).
    There is a mechanism to repair DNA damage. It is always active and does not need to be stimulated by low-level ionising radiation or any other carcinogen. This is because occasional mistakes in the transcription of DNA and RNA occur during normal cell activity and the repair mechanism must be ready to detect and repair these. Hence an hormesis effect of ionising radiation is a priori unlikely.
    Not all DNA damage is repaired however, because the repair mechanism both fails to detect some of the damage and is sometimes incapable of repairing the damage it does detect. So we are all full of damaged DNA, which accumulates with age, even in the absence of any external carcinogenic influence.
    Damage to DNA may: a) have no effect, b) kill the cell, c) cause the cell to function abnormally. For DNA damage to cause cancer it must be of the third type and not the second, and it must be localised to particular parts of the genome. Depending on the type of cell involved, between 3 and 7 very specific parts of the DNA must be damaged to lead to cancer. Hence DNA damage is common but rarely leads to cancer.
    From the foregoing it is clear that cancer occurs due to a combination of exposure to DNA damaging processes (such as ionising radiation) and bad luck (ignoring the genetic component due to the inheritance of damaged DNA). It is also clear that the linear-no-threshold model is the most likely model and this is the position of most radiation safety experts.
    The toxic mechanisms of vitamin A and iron are completely different to that of ionising radiation.

    For further interesting information search for “Knudson’s two hit hypothesis”. Alfred G Knudson, who died in 2016 at the age of 93, used a very simple and clever form of statistical inference to explain the occurrence of retinoblastoma.
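
    As a toy numerical illustration of the multi-hit point (invented numbers, not a model of any real tissue): if each of the required hits occurs at a background rate plus a small dose-proportional increment, the excess incidence is very nearly linear in dose as long as the dose contribution stays small relative to the background, which is one way to see why a multi-hit mechanism is compatible with near-linearity at low doses.

    import numpy as np

    def incidence(dose, k=4, background=1e-2, per_dose=1e-4):
        # Each of the k required hits has probability (background + per_dose*dose)
        # over a lifetime; incidence is the product. Toy numbers only.
        return (background + per_dose * dose) ** k

    doses = np.array([0.0, 0.5, 1.0, 2.0, 4.0])
    excess = incidence(doses) - incidence(0.0)
    print(excess / excess[2])   # roughly [0, 0.5, 1, 2, 4]: nearly proportional to dose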

    • The vitamin A and iron examples were responses to specific comments about ‘hormesis’ in general, not ‘radiation hormesis.’

      You say “There is a mechanism to repair DNA damage. It is always active and does not need to be stimulated by low-level ionising radiation or any other carcinogen.” I think there is broad agreement about this. But as far as I know there is no conclusive evidence that the performance of that mechanism is independent of the amount of DNA damage. People who believe in radiation hormesis think that low levels of damage stimulate that mechanism, and not only that, they do so with the net effect of decreasing the cancer risk.

      As I said in my response to the reporter, I don’t believe anyone who says they know what the dose-response relationship is at low doses. Hmm, Wikipedia has a page on radiation hormesis that includes this statement from UNSCEAR: “Until the […] uncertainties on low-dose response are resolved, the Committee believes that an increase in the risk of tumour induction proportionate to the radiation dose is consistent with developing knowledge and that it remains, accordingly, the most scientifically defensible approximation of low-dose response. However, a strictly linear dose response should not be expected in all circumstances.”

      Of course, they don’t say which way the nonlinearity could go!

    • The stochastic model of cancer, whether one-hit or of the Texas Two-Step variety, is, I think, on its last leg(s). Model instead a biofilm-like process in which cancer breaks the shackles by which evolution bound together the legion from which we’ve each become One; allowing thereby the rogues to exploit their suddenly discovered niche; and then you’ll be on to something.

  6. “process in which cancer breaks the shackles by which evolution bound together the legion from which we’ve each become One”

    Uh — this sounds kinda mystical to me; at the very least, a rather flowery sounding metaphor.

  7. If you could quantify the size of the interaction with smoking (i.e. marginal change in cancer risk at a given radiation level per additional daily cigarette), would that be a radon-nicotine derivative?
