Forward causal reasoning statements are about estimation; reverse causal questions are about model checking and hypothesis generation

Consider two broad classes of inferential questions:

1. Forward causal inference. What might happen if we do X? What are the effects of smoking on health, the effects of schooling on knowledge, the effects of campaigns on election outcomes, and so forth?

2. Reverse causal inference. What causes Y? Why do more attractive people earn more money? Why do many poor people vote for Republicans and rich people vote for Democrats? Why did the economy collapse?

When statisticians and econometricians write about causal inference, they focus on forward causal questions. Rubin always told us: Never ask Why? Only ask What if? And, from the econ perspective, causation is typically framed in terms of manipulations: if x had changed by 1, how much would y be expected to change, holding all else constant?

But reverse causal questions are important too. They’re a natural way to think (consider the importance of the word “Why”) and are arguably more important than forward questions. In many ways, it is the reverse causal questions that lead to the experiments and observational studies that we use to answer the forward questions.

My question here is: How can we incorporate reverse causal questions into a statistical framework that is centered around forward causal inference? (Even methods such as path analysis or structural modeling, which some feel can be used to determine the direction of causality from data, are still ultimately answering forward causal questions of the sort, What happens to y when we change x?)

My resolution is as follows: Forward causal inference is about estimation; reverse causal inference is about model checking and hypothesis generation.

I’ll illustrate with an example and then introduce some (simple) notation. The forward question is, What is the effect of $ on elections? This is a very general question and, to be answered, needs to be made more precise, for example as follows: Supposing a challenger in a given congressional election race is given an anonymous $10 donation, how much will this change his or her expected vote share? It’s not so easy to get an accurate answer to this question, but the causal quantity is clearly defined.

Now a reverse question: Why do incumbents running for reelection to Congress get so much more funding than challengers? Many possible answers have been suggested, including the idea that people like to support a winner, that incumbents have higher name recognition, that certain people give money in exchange for political favors, and that incumbents are generally of higher political “quality” than challengers and get more support of all types. Various studies could be performed to evaluate these different hypotheses, all of which could be true to different extents (and in some interacting ways).

Now the notation. I believe that forward causal inferences can be handled in a potential-outcome or graphical-modeling framework involving a treatment variable T, an outcome y, and pre-treatment variables, x, so that the causal effect is defined (in the simple case of binary treatment) as y(T=1,x) – y(T=0,x). The actual estimation will likely involve some modeling (for example, some curve of the effect of money on votes that is linear at the low end, so that a $20 contribution has twice the expected effect of a $10 contribution), but there is little problem in defining the treatment effect. In complex settings it might be useful to employ graphical models; it is not my purpose to discuss these here.
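To make this concrete, here is a minimal simulation sketch in R (all numbers are invented for illustration, not estimates from any real election data):

    # Simulated potential outcomes for the money-and-votes example.
    set.seed(1)
    n  <- 1000
    x  <- rnorm(n)                    # pre-treatment variable (e.g., district lean)
    y0 <- 50 + 2*x + rnorm(n, 0, 5)   # potential vote share without the donation
    y1 <- y0 + 0.1                    # potential vote share with the $10 donation
    T  <- rbinom(n, 1, 0.5)           # a hypothetical randomized "treatment"
    y  <- ifelse(T == 1, y1, y0)      # we observe only one potential outcome per race
    # Under randomization, a difference in means estimates y(T=1,x) - y(T=0,x):
    mean(y[T == 1]) - mean(y[T == 0])

The true effect here is 0.1 percentage points; with outcome noise of this size the estimate will bounce around that value, which is one way to see how a forward causal quantity can be clearly defined yet hard to estimate accurately.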

Reverse causal inference is different; it involves asking questions and searching for new variables that might not yet even be in our model. I would like to frame reverse causal questions as model checking. It goes like this: what we see is some pattern in the world that needs an explanation. What does it mean to “need an explanation”? It means that the existing explanations—the existing model of the phenomenon—do not do the job. This model might be implicit. For example, if we ask, Why do incumbents get more contributions than challengers?, we’re comparing to an implicit model that all candidates get the same. If we gather some numbers on dollar funding, compare incumbents to challengers, and find the difference is large and statistically significant, we’re comparing to the implicit model that there is variation but it is not related to incumbency status. If we get some measure of “candidate quality” (for example, previous elective office and other qualifications) and still see a large and statistically significant difference between the funds given to incumbents and challengers, then it seems we need more explanation. And so forth. Just as I view graphical exploratory data analysis as a form of checking models (which may be implicit), I similarly hold that reverse causal questions arise in response to anomalies—aspects of our data that are not readily explained—and that the search for causal explanations is, in statistical terms, an attempt to build a new model that has the ability to reproduce the patterns we see in the world.
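In code, the second of those implicit-model checks might look like this minimal sketch (with simulated funding numbers standing in for real data):

    # Check the implicit model "funding varies, but not with incumbency status."
    set.seed(2)
    funds     <- c(rlnorm(200, 13, 1), rlnorm(200, 12, 1))  # incumbents, challengers
    incumbent <- rep(c(1, 0), each = 200)
    obs_gap   <- mean(funds[incumbent == 1]) - mean(funds[incumbent == 0])
    # Replicate the gap under the implicit model by shuffling incumbency labels:
    rep_gap <- replicate(1000, {
      perm <- sample(incumbent)
      mean(funds[perm == 1]) - mean(funds[perm == 0])
    })
    mean(abs(rep_gap) >= abs(obs_gap))  # near zero: an anomaly needing explanation

A tiny value says the observed gap is not reproducible under the implicit model, which is exactly the signal that a “why” question is worth asking.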

What does this mean for statistical practice?

A key theme in this discussion is the distinction between causal statements and causal questions. When Rubin dismissed reverse causal reasoning as “cocktail party chatter,” I think it was because you can’t clearly formulate a reverse causal statement. That is, a reverse causal question does not in general have a well-defined answer, even in a setting where all possible data are made available. But I think Rubin made a mistake in his dismissal. The key is that reverse questions are valuable in that they focus on an anomaly—an aspect of the data unlikely to be reproducible by the current (possibly implicit) model—and point toward possible directions of model improvement.

It has been (correctly) said that one of the main virtues of forward causal thinking is that it motivates us to be explicit about interventions and outcomes. Similarly, one of the main virtues of reverse causal thinking is that it motivates us to be explicit about our model. If we ask: Why do ethnic minorities (compared to whites) in NYC have a higher rate of rodents in their apartments?, we have an implicit model that the infestation rates should be the same, after controlling for x1, x2, x3, etc.
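Made explicit, that implicit model is just a regression; here is a sketch with simulated data (in the real problem the controls x1, x2, x3 would be things like building age and neighborhood):

    # Does the ethnicity coefficient survive the controls? Simulated illustration.
    set.seed(3)
    n <- 2000
    minority <- rbinom(n, 1, 0.4)
    x1 <- rnorm(n); x2 <- rnorm(n); x3 <- rnorm(n)   # stand-ins for the controls
    p  <- plogis(-2 + 1.0*minority + 0.5*x1 + 0.5*x2)
    rodents <- rbinom(n, 1, p)                       # simulated infestation outcomes
    fit <- glm(rodents ~ minority + x1 + x2 + x3, family = binomial)
    coef(summary(fit))["minority", ]  # still large after the controls? then the
                                      # equal-rates model fails, and the "why"
                                      # points to variables not yet in the model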

Another theme is the separation of the three steps of data analysis: (1) model construction, (2) inference, and (3) model checking. Inference is the glamour boy, but you can’t get far without a model (for those model-haters in the audience, you can replace the word “model” with the phrase “choice of what information to use in your analysis, and choice of the form in which you will use that information”), and models are much more effective if you allow yourself to check them and make improvements in response to problems. Rubin dismissed reverse causal reasoning because it can’t be fit into the “inference” step; others have struggled, with little success (in my opinion), to construct direct answers to reverse causal questions. It all becomes cleaner when we allow statistical methods to fit into the model-construction and model-checking phases of data analysis. This is similar to how we folded EDA into Bayes by framing graphics as an informal way of model checking.

By formalizing reverse causal reasoning within the process of data analysis, I hope to take a step toward connecting our statistical reasoning to the ways that we naturally think and talk about causality. As LBJ might say: Better to have reverse causal inference inside the statistical tent pissing out than outside pissing in.

P.S. Based on the comments, I think many people missed my central point! Let me say it again:

I think reverse causal questions are important. But I don’t think there are reverse causal answers. That is, I think it’s helpful to ask, Why are incumbents better-funded than challengers? But I don’t think there’s any useful answer to that question. The question reveals a gap between reality and our (implicit) models, but I think the answer to the question must come in the form of a forward causal statement.

It was probably a mistake for me to use the term “reverse causal inference.” In future, I’ll stick with the phrase, “reverse causal question.” And I’ve changed the title of the post accordingly.

46 thoughts on “Forward causal reasoning statements are about estimation; reverse causal questions are about model checking and hypothesis generation”

  1. Doesn’t reverse causal reasoning require some (perhaps more) creativity to think up and to contrast potential causes of effects than forward causal reasoning does to think up and to contrast possible effects of such causes? For medical examples, think causes of cholera and lung cancer.

  2. This crystallized a lot of thoughts for me, thanks.

    What’s the idea behind the visual out of curiosity? Is it an artistic example of how we simplify complicated social landscapes?

  3. I see reverse causality as belonging to the domain of research programs, and forward causality to the domain of individual studies.

    Good research arbitrages information across domains. It’s the Bayesian thing to do.

    I am interested in ways to posit a “why” model and test it, not piecemeal (one cause at a time), and not with multivariate regression (all causes at once), but something in between. Feeling our way around an implicit “why” mechanism, perturbing it, measuring feedback, and using that to update the implicit model, a kind of neural net or some such.

  4. Rubin’s “What We Cannot Do Inference On We Must Pass Over in Silence?”

    (With apologies to Wittgenstein and Rubin)

  5. Fascinating questions.

    1. If you’re putting together an investment model, you take old data and fit to it and then wave your hands around, go “presto” and say it projects. Huge issues. The one that always interests me is the extent to which the data are highly abstracted, meaning to be clear – since I don’t know if I speak your language at all – they represent a number of processes in a time, in a culture, etc. Even comparing the value of money over time is fraught with difficulty.

    2. The issue sometimes seems to me the reverse of physics computational concerns. If we worry about infinities when we shrink physical distance, when we construct inferential models or causal models we increase distances without any way of summarizing the effects of this additional volume with all its potential interactions. I see great difficulty defining the contextual reach or meaning … when the extent of context becomes more unknowable and when we lack a method or fudge factor that imposes a limit (beyond the notion of “controlling for” as though that is objectively true).

  6. It seems to me (or perhaps: the way I’ve been trained to think about this is) that reverse causal thinking is how you identify puzzles that need further explanation. You identify a puzzle and then proceed to develop an explanation for it. Once you’ve developed your explanation, then you plug it into your forward causal model to find out if your explanation has any effect on Y.

  7. I think there’s a missing connection. To take your example of the effect of a $10 donation to a particular candidate. That’s hard to estimate for two reasons. First, it’s presumably a small portion of a large campaign budget, but we get around that by thinking of differentials, limits, and derivatives. Second and much more importantly, I think, it misses the circular causality. If I give the candidate $10 anonymously, that increases her visibility ever so slightly, which may lead to others knowing of her and contributing their $10.

    At some point, it becomes more meaningful to speak of the structure that caused the effect, not of my $10. For a dumb example, if my $10 contribution came at a point where the candidate could buy more air time, that might get a few seconds of publicity directly, plus whatever word-of-mouth effect it might have to generate more contributions. If it came at a point where the candidate couldn’t buy resources because all the air time was purchased, then my $10 has a different effect. That’s handled easily in a state variable context without resorting to two models analyzed sequentially. State variable and feedback control theory enable one to do the math. That, incidentally, is why I’m anxiously awaiting Stan getting the ability to handle a model consisting of sets of simultaneous ODEs, for many real-world feedback systems are strongly nonlinear, and many of the analytical techniques rely on linearity.

    Without considering feedback effects, I think one is confined to two-step approaches as suggested here or to asking about “the effect of the $10 contribution, all else being equal,” but all else is rarely equal.

    Even having studied feedback theory and used it in practice, I think I really grokked it for the first time when trying to fix a recalcitrant feedback loop in an electronic circuit using a now-old HP3582 analyzer. That showed me that the behavior of that feedback system (and presumably all such systems) was due to the structure, not to the individual components in the loop.

  8. Interesting. A few questions come to mind for me.

    1. Aren’t both basically the same process? In forward causation, we want to see the effect of X on Y. In reverse causation, we want to see what affects Y, and we posit (perhaps via graphical EDA) that X explains it in addition to our current model. Aren’t we just back to forward causation again?

    2. How do we guard against overfitting?

    3. Aren’t most things explained by an infinite number of things? I can give about 100 different causes of lung cancer that all probably have some effect (smoking, genetics, exposure to radiation, etc.). At what point do we say we have explained/done reverse causation correctly?

    • Patrick:

      1. Forward causation describes the process; reverse causal questions are a way to think about the process.

      2. I don’t know. I agree that overfitting is a concern.

      3. Yes, definitely. That’s why I say that reverse causal questions are good questions, but I agree with Rubin that there are generally no reverse causal answers.

      • 1. Just as one can do two-stage regressions instead of a multilevel model (though the latter is easier and more complete), one can iterate between forward and backward models, and I think that’s often done in econometrics. Why not use a state variable model instead? Staying in the R context, look at deSolve and FME (I admit I’ve only read some of the documentation; they are on my list to try). Look at the last half century of the system dynamics (SD) literature, too; John Sterman’s /Business Dynamics/ is the current canonical text. Now you have one model in one formalism that avoids the iteration.

        2. In a state variable model, I don’t know if that’s an issue. See #3.

        3. Fit a feedback model, starting simply and adding complexity only as useful. Often you’ll discover that only a small number of feedback paths explain most of the behavior you see in the data. A classic SD meme contrasts a “laundry list approach” (“What are all the possible causes of this effect?”) with a systems approach (“What feedback path do I need to add to the model to explain most of the remaining divergence between the model and the real world?”). If adding (or subtracting) one loop, for example, takes you from simulated data that matches the real world (sort of posterior predictive checking) to simulated data that does not, you may have a good case for having found the cause.
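        To make the loop-adding idea concrete, here is a minimal deSolve sketch (the structure and parameter values are invented for illustration, not taken from Sterman): one negative feedback loop with a reporting delay, which is enough to produce oscillation.

          library(deSolve)
          # Managers steer spending toward a target, but react to a lagged report.
          budget_model <- function(t, state, parms) {
            with(as.list(c(state, parms)), {
              d_spend  <- adjust * (target - report)  # react to reported spending
              d_report <- (spend - report) / delay    # reports lag actual spending
              list(c(d_spend, d_report))
            })
          }
          out <- ode(y = c(spend = 120, report = 100), times = seq(0, 48, 0.5),
                     func = budget_model,
                     parms = c(target = 100, adjust = 0.5, delay = 3))
          plot(out)  # with the delay loop, spending oscillates around the target;
                     # shrink delay toward zero and the oscillation disappears

        Comparing runs with and without the loop against the observed pattern is the model-versus-real-world divergence check described above.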

        • 2/3 – feedback helps a lot, but overfitting can still happen outside of a dynamical process where the structure is fully known or fixed by design a priori. For example, you may be looking at a fat-tailed random process, which spends proportionally more time near the mean: http://vudlab.com/fat-tails.html

        • For a simple example, consider the real-life problem described in http://onlinelibrary.wiley.com/doi/10.1002/npr.4040180306/abstract. The site’s product development manager saw periodic bouts of overspending by perhaps 20%, followed by periodic bouts of similar underspending. He could have blamed that on poor management skills in his management team, but he chose to blame it on the nature of the product development process: when one builds a prototype run of a new product, there are large associated expenses, and those probably show up as such quasi-periodic bouts. He highlighted it to his team for months, and nothing changed, seemingly confirming it as a fact of life.

          When I noticed it, I thought “oscillation” and thought of the negative feedback systems that can create such oscillations. I created a simple model of managers’ spending thought processes and accounting’s reconciliation and reporting process, and, with a bit of calibration, that model replicated what I saw as the essential features of the organization. No, it wasn’t able to replicate the time series in all its twists and turns, but it replicated the frequency and rough amplitude of the oscillation.

          Then I changed the information given to managers in the simulation, and the problem vanished. When we made the same change in the organization, the problem was attenuated by better than 95%: bouts of 20% over- and under-spending were now reduced to under 1%. Moreover, when we got repeated instructions to change our budgeted spending mid-year, the new system was able to respond with the same accuracy and with a response time measured, as I recall, well within a month — perhaps within a couple of weeks (it’s been quite a few years). My intuitive sense, based on past experience with other budget cut requests, was that the old way of doing things was neither so accurate nor so fast in responding to changed spending goals.

          So the “cause” of the overspending wasn’t carelessness by individual managers, the nature of production prototype cycles, or price or lead time changes by vendors, although they may all have happened. The change in the information we supplied to managers was sufficient to explain the changed behavior in the model, and our test in the real world showed it was sufficient as an explanation of the cause in the real world.

          To me, that’s (at least one form of) causal analysis. In this case, the model was likely much simpler and thus cheaper than required to reproduce the entire time series, but it did reproduce the essential features of the problem and the solution.

  9. Andrew:
    The age-old problem surrounding what you call reverse causal inference—a species of an inference to an explanation, the method of “hypothesis”, or what Peirce called abduction—is that it is easy to arrive at any number of conflicting ways to account swimmingly for, or explain, known data. Philosophers of science have tried, unsuccessfully in my judgment, to arrive at various additional criteria in order to warrant a “best” or at least preferred explanation. I would argue that explanations are warranted only to the extent that the ways they can be flawed have been reliably ruled out. So one has to appraise a method’s capacity for doing so. One may, and generally would, limit the “level” of the query so that one isn’t searching for the complete or ‘ultimate’ explanation, but rather answers to specific why or how questions that may be probed severely (given claims and theories already severely passed).

    Many people see C.S. Peirce as championing the method of explanation as a mode of ampliative (inductive) reasoning, but this is incorrect. Although it is essential for exploration and theory development, he clearly and sagaciously distinguished it from warranted ampliative explanation. Always way ahead of his time, he noted that the latter, but not the former, required randomization (or the like) and what he called predesignation (or avoidance of ad hocness, various data-dependent selections)—or else the ability to show that the precautions these provide had in effect been provided by other means.

    My view (of ampliative inference), like Peirce’s, is I think a third variety (to the two you list): explanation and understanding (rather than prediction), yes, but warranted by stringent testing. So it too is inferential.

    What I don’t know is how your model checkers avoid these classic problems of explanation, whether they be doing it inside or outside the tent. (dashing this off quickly….)

    • Mayo:

      I guess I didn’t make myself clear enough. I think reverse causal questions are important. I don’t think there are reverse causal answers. That is, I think it’s helpful to ask, Why are incumbents better-funded than challengers? But I don’t think there’s any useful answer to that question. The question reveals a gap between reality and our (implicit) models, but I think the answer to the question must come in the form of a forward causal statement.

    • Mayo:

      I commented here http://statmodeling.stat.columbia.edu/2013/05/17/where-do-theories-come-from/#comment-148075 yesterday thinking I was here. I don’t think Andrew is taking explanation as a mode of induction (should be) but simply taking it as an abduction (might be). It would also fall under Peirce’s advice to become (truly) confused to make a creative move forward (get a less wrong model).

      I do recall Peirce breaking induction (a third) into a first, second, and third, with the second (a must be) being what you are referring to (randomization provided the must be, or deductive, component). It is a preferable option if available, but he referred to other options…

  10. Dear Andrew,
    Thanks for calling my attention to this interesting discussion on forward versus backward causation. I have read your entry and most of the comments, and I think your idea of formulating the problem of “explanation” in terms of “model checking” is definitely the way to go.

    I have two words of caution, though. First, I do not think it is possible to “search for new variables that might not yet even be in our model” unless we understand how to answer “Why” in cases where the explanatory variables ARE already in the model. For instance: What is the probability that Joe, who took a drug and died, would be alive had he not taken the drug? The variables “drug/drug′” and “dead/alive” are already in our model, and measurements are available on both, experimental as well as non-experimental. Can we answer the question “Can the drug explain Joe’s death?”

    Second, we should be prepared for the possibility that your ideas about model checking will require a new language, and would not reach their potential if forced to be formulated or implemented in the language of “statistical reasoning” and “data analysis,” as you hope. Here is why I raise the language issue. You noted that “forward causal inferences can be handled in a potential-outcome or graphical-modeling framework.” These frameworks took decades to evolve and gain acceptance. If Pearson, Yule, and Fisher had tried to handle forward causal inference in the standard language of statistics (e.g., probability theory, without counterfactuals), they would not have gotten very far (they tried). They would not even be able to say things like “treatment does not change gender” (try it). Now, we know that hypothetical questions such as “Joe would have been alive” cannot be handled within the standard potential-outcome framework (that is probably why Rubin dismissed the “Why” question). They require mixing counterfactuals from different worlds, which one can do in Structural Equation Models, which have their own mathematics. Are you prepared to accept a whole new language, new mathematical notation, new mathematical axioms, new multiplication tables, etc.?

    I will finish with a reference to what we know today about questions such as “Can the drug explain Joe’s death?” It is on pages 296–304 of my book Causality (sorry for sounding self-serving; it’s one of the painful burdens of being an author).

    One might argue that Joe is just a toy example, and we aspire to deal with “real” problems such as “Why did the economy collapse?” I do not think we can make even one inch of progress on the economy if we do not know how to handle Joe.

    Judea

    • Judea:

      I think “Why did the economy collapse?” is a fine question. I don’t think its purpose is to have a direct answer. Rather, the question, “Why did the economy collapse?” refers to an implicit model in which the economy is not likely to have crashed. The question points to an anomaly that needs to be explained.

      I’ll illustrate the point in a way I think you’ll like by considering the very simple question, “Why did the ball fall back to Earth?” Asking this question implies an implicit model in which the ball stays in the air. There are many possible ways this question could be answered, including:
      – Gravity
      – The ball is heavy enough that it does not float
      – The atmosphere is too thin to hold up the ball
      – The earth is heavy enough to have a strong gravitational field
      – The string holding the ball was cut
      – Nobody swooped in to catch the ball as it was falling
      etc etc etc.
      I don’t think of these as competing “answers” in a statistical sense. Rather, I see the question about the ball as valuable in motivating us to supply a model for the situation in which, under the model, the ball is likely to fall to Earth.

      Regarding your question, “Can the drug explain Joe’s death?,” I agree with you that this motivates the forward causal question, “What would have happened (or, statistically, what is our uncertainty about what would have happened) had Joe not taken the drug?” Depending on the context, the statement “Joe not taking the drug” might need to be more clearly defined, but that’s no problem; precise definition is required in all areas of science.

      Finally, I have no problem with new notation etc. I myself have introduced new notation for model checking (in my 1996 paper with Meng and Stern, expanding the notation (y,theta) to the notation (y,y.rep,theta)). The idea in my post above is intended to be a starting point, a conceptual foundation that could well be expressed more formally.
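      As a bare-bones illustration of that notation (with invented data, and a plug-in fit in place of a full posterior, for brevity):

        # The (y, y.rep, theta) idea: replicate data under the fitted model and
        # compare. The model here is deliberately too simple for the data.
        set.seed(4)
        y     <- rpois(100, 5)^2                     # "world" with extra spread
        theta <- mean(y)                             # fit a simple Poisson model
        y.rep <- replicate(1000, rpois(100, theta))  # replications under the model
        T.obs <- var(y)
        T.rep <- apply(y.rep, 2, var)
        mean(T.rep >= T.obs)  # an extreme p-value flags an anomaly: a "why"
                              # question the current model cannot answer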

      • Abstraction is key to science. In science we are often not interested in specific whys but in general whys.

        In the case of the ball science would say something like: Given gravity a ball falls towards the earth unless there is an opposite force of at least equal magnitude acting on it.

        • The engineer’s model would be a ‘free-body diagram’ with the ball as body and gravity, buoyancy, other objects etc. as forces acting on the ball. A very general model, abstracting what is necessary to know to answer “the ball fell because the vector sum of the forces acting on it pointed downward” (had a z-component <0). That is, with such a model there aren't 'competing answers'; we know how the effects of the 'competitors' combine…

        • But statisticians (as well as engineers and physicists outside of introductory textbooks) have to cope with uncertainty (or lack of information, if you prefer). I think that the point of the example was that if we are only told that the ball fell back to earth, then there are (infinitely) many free-body diagrams consistent with that fact. Asking why the ball fell back to earth is the first step toward considering which of those free-body diagrams are consistent with any other information you might have. Depending on the situation, you might have a lot of additional information (e.g., maybe you saw how the ball got into the air and watched it fall) or you might have very little (e.g., maybe you turned around just in time to see the ball hitting the ground). Either way, there will be more than one free-body diagram consistent with what you saw.

        • There’s a difference that took me a while to fully appreciate between “modeling” in engineering and “modeling” of the sort that most people on this blog deal with. And that is whether the models need to deal with well-posed problems (engineering), somewhat ill-posed problems (the sciences), or really really ill-posed problems (epidemiology / nutrition / social sciences / economics / psychology). Sometimes it can be helpful to think of this as a continuum – a purely deterministic physical model is just a graphical model with no random variation on each edge (or delta functions as probability distributions).

          This is why, in the short term, engineers will always be far more productive than scientists. If your problem is well-posed and you’re competent, you will produce correct answers. If your problem is ill-posed, you could be producing perfectly reasonable answers and not get a single thing right in your entire career.

          The analogy here is that in the latter category, your problem looks more like this – “A ball moves left. You don’t know where all the planets are or how many there are, but you have some information on where some of them are. Which planet is exerting the most force on the ball? What planet is the ball on now? Is the ball even on a planet?”

      • Joe died suddenly, a healthy young man, 18 years old. It was an anomaly, demanding explanation. “Did he eat anything unusual?” asks Sherlock Holmes. “Did he exercise excessively? Did he complain about anything unusual?”

        Holmes is following Andrew’s scheme of filling a model of Joe with new factors that, based on Holmes’s generic knowledge of death cases, have the capacity to “explain” Joe’s death.

        “He took this new drug,” says Joe’s widow. “He complained about our new boss,” said Joe’s office mate. Holmes must decide whether to pursue the drug hypothesis or the boss hypothesis or seek another potential explanation. He inspects the label on the drug container and says: Wait a minute, I believe a statistical study was conducted lately about this drug.

        The story ends here, because the intent is not to resolve the mystery of Joe’s death but to convey the idea that at each step along the grand scheme of “model checking” or “model restructuring” with new candidate explanations, we must be able to evaluate each candidate according to some principle. Evaluating the degree to which a given candidate explanation can account for a given anomaly is an indispensable component of this grand strategy.

        Do we know how to define the relation CAUSED(X, Y) = event X caused event Y, given that X and Y both did in fact occur? Do we have a definition for Prob(CAUSED(X, Y))? Do we know when it is identifiable from experimental or observational studies? After all, Joe was not an ordinary fellow; he belonged to an idiosyncratic subpopulation of people who willfully took the drug and who also died.

        Can we skip this formal exercise and go straight to speculations about why the economy collapsed? What then would be the scientific role of statistical thinking over that of my broker or my economics professor? Their capacity to generate creative explanations far exceeds anything my scientist colleagues can generate.

        I will end here with a link to the tools we have today for estimating Prob(CAUSED(X, Y)):
        http://bayes.cs.ucla.edu/BOOK-09/causality2-p296-304.pdf
        Much more needs to be done; I don’t see how it can be skipped. And, btw, this is not a “forward causal” question but a WHY question.
        Judea
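        For readers following the link: the central quantity in those pages is what Pearl calls the probability of necessity, PN. As a compressed reference (a paraphrase of the book’s notation, so treat the details as approximate):

        \[
        \mathrm{PN} \;=\; P\bigl(Y_{x'} = y' \,\big|\, X = x,\; Y = y\bigr)
        \]

        that is, the probability that Joe (who took the drug, X = x, and died, Y = y) would have been alive (y′) had he not taken the drug (x′). If I recall the identification result correctly, under exogeneity and monotonicity this reduces to the excess risk ratio:

        \[
        \mathrm{PN} \;=\; \frac{P(y \mid x) - P(y \mid x')}{P(y \mid x)}
        \]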

        • Judea:

          Regarding your second-to-last paragraph: Indeed, brokers and economists and political scientists can and do ask good causal questions! I think statisticians can contribute in the usual way, by helping to design surveys, experiments, and observational studies, by helping in data gathering and statistical analysis. I don’t know anything about toxicology but I’ve still been able to make contributions in that field through my work in statistical modeling, inference, and computation.

        • Andrew:
          I think there are reverse causal answers, and depending on the particular problem they can be useful. Judea’s example of Joe’s death is a very good illustration of this.

        • Andrew,
          I am not sure whether you are endorsing or disagreeing with my suggestion that “evaluating the degree to which a given candidate explanation can account for a given anomaly is an indispensable component in this grand strategy [of model checking].”
          Judea

        • Judea:

          Causal inference is a huge topic which we’ve discussed elsewhere on the blog. In this post I want to make a particular point about the way in which we can view reverse causal inference as a form of model checking. I think this view should be valuable to people, whether or not they are fans of your approach to causal inference. So I’d prefer to keep the topics separate here.

        • Andrew,
          Forget about “causal inference as a huge topic.” I am talking only about your point of viewing reverse causal inference as a form of model checking. Within this particular view, which I fully endorse, I noted that there is a basic scientific hurdle that must be overcome regardless of what approach one takes to causal inference.

          I don’t understand why you have to resort to expressions such as “fans of your approach to causal inference” when I am speaking about a scientific hurdle and a technically defined subtask that is needed for the success of your view on reverse causation.

          If you think that the subtask I defined is not as central as I described, tell us how it can be circumvented or postponed. But please do not make it sound like I am trying to peddle “my approach”; I am talking about a universal task: evaluate the degree to which a given candidate explanation can account for a given anomaly.

          Can you separate this subtask from the overall task of “model checking”?
          Judea

        • Judea:

          I’m not saying you’re trying to peddle anything. I think you are raising real, and difficult, issues, and I just want to postpone those issues to a later time. I’m thinking division-of-labor here: I see my job right now as to clearly present this idea of reverse causal questioning as a form of model checking, and then others can follow this up with specific quantitative techniques such as you are suggesting.

        • Andrew,
          Thanks for clarifying the division of labor. We differ only on whether the tasks are separable or not; I think they are not. The subtask I have been talking about may reveal (and I am fairly sure it will) that the type of models one needs to invoke for doing “model checking” is entirely different from what we think it is, and we would not find out about it until we try the subtask first. But this is a risk one can afford to take.
          Judea

  12. Good to see this very important distinction being aired and discussed. Some disciplines have a history of applied, rule-of-thumb thinking about “causes of effects” problems (epidemiology, for instance, where Bradford Hill is the classic), and other disciplines, demography for example, are starting to think about it much more seriously.

  14. Is there a clearly agreed definition of what it means to “cause”?

    It seems to me that in any reasonably complex and interesting real-world case, the whole concept of causality is pretty shaky. But I’m sure that philosophers (and probably statisticians) have thought long and hard over it. I’d appreciate pointers to their conclusions, if any (though I expect that if you lay all the world’s philosophers end to end, they would reach…no firm conclusion).

    • James:

      Forward causation is pretty clearly defined in terms of potential outcomes or graphs. The idea is that the effect of intervention (or “treatment”) T on outcome y, given pre-treatment variables x, is y(T=1,x) – y(T=0,x), for the simple case of a binary treatment. As long as you can define T, x, and y clearly, this is a clean definition. This is sometimes called the Neyman-Rubin definition of causality (it has other names and various convergent histories) and is discussed in various places, for example in chapters 9 and 10 of my book with Jennifer. In the forward-causation approach, you don’t ask, “Does T cause y?” Instead you ask, “What is the effect of T on y (and how does it vary as a function of x)?”

      • Thanks, but this is a rather more limited concept than I meant, and often see applied. For example, the recent Sugihara paper (you blogged it) discusses causality in the context of an autonomous system with no external “treatment”, and this sort of idea is common in my field (climate science). The authors there even said they were using a new definition of causality that extended Granger causality. It’s easy enough to see how a clearly-defined treatment effect can be measured within the context of a model, and there’s an (often implicit) assumption that this is similar to the real world, but there is no treatment that has the effect of changing physical laws (or the passage of history), so it’s not always clear to me what such an investigation actually demonstrates.

        • There could be causes without manipulation, although things can get harder if you try to express this in terms of counterfactuals.

  15. Andrew:
    I thought the two statements “T causes y” and “T has an effect on y” mean the same thing and can be used for both forward and reverse causal inference.
