What are best practices for observational studies?

Mark Samuel Tuttle writes:

Just returned from the annual meeting of the American Medical Informatics Association (AMIA); in attendance were many from Columbia.

One subtext of conversations I had with the powers that be in the field is the LACK of Best Practices for Observational Studies. They all agree that, however difficult they are, Observational Studies are the future of healthcare research.

I passed along your blog item, “Thinking more seriously about the design of exploratory studies: A manifesto,” to the new chair of NCVHS (the National Committee on Vital and Health Statistics).

I replied: Just to clarify: the observational/experimental divide is orthogonal to exploratory/confirmatory. There is a literature on the design of observational studies (see the work of Paul Rosenbaum), but it has a confirmatory focus. That’s no knock on Rosenbaum: almost all the literature on statistical design, including my own papers and book chapters on the topic, comes at it from a confirmatory perspective.

Tuttle responded:

At a deeper level I understand all this – the math, at least, and the distinctions to be made – but I failed to acquire the language with which to describe it well to others, or with which to communicate with those for whom these are “religious” distinctions.

This is (yet) another challenge of inter-disciplinary work.

On a related note, many (healthcare) clinical trials – “experiments” in your lingo – never finish, mostly because of failures of accrual – they can’t get enough patients.

(Difficulty in accruing patients is not just about the ethical dilemma – denying some patients something that might be better; it’s also a predictor that the study may be irrelevant – because real patients are just more complicated, with, for example, co-morbidities that disqualify them from the trial.)

This is yet another reason many are embarrassed by the whole thing – the failure of experiments, lack of reproducibility, etc. Still, those extolling “observational” studies don’t always stand up on their hind legs when they should.

For more on observational/experimental, see this paper from several years ago, which begins:

As a statistician, I was trained to think of randomized experimentation as representing the gold standard of knowledge in the social sciences, and, despite having seen occasional arguments to the contrary, I still hold that view, expressed pithily by Box, Hunter, and Hunter (1978) that “To find out what happens when you change something, it is necessary to change it.”

At the same time, in my capacity as a social scientist, I’ve published many applied research papers, almost none of which have used experimental data.

In the present article, I’ll address the following questions:

1. Why do I agree with the consensus characterization of randomized experimentation as a gold standard?

2. Given point 1 above, why does almost all my research use observational data?

6 thoughts on “What are best practices for observational studies?”

  1. There’s a greater push in criminology to evaluate interventions with experimental designs, but the biggest problem is feasibility. A lot of the time it is very hard to get a police department to agree to randomize patrol, or to get judges to randomize probationers to some new treatment. Where I come from, most of the causal inference work I do is after the intervention has already occurred.

    • G:

      I used to run into that a lot when I worked with surgical groups – “patients and other surgeons are unlikely to agree to randomization.”

      But with time I realized that, for most of the interventions under consideration, there was often some group somewhere in the world that was able to do a randomized study.

      Opportunities to randomize any part of the research (e.g., randomly ordering the assays or even the data entry) should not be overlooked, if they are worth it. The gains are real, and I think they are only rarely not worth it (as when reality changes faster than you can learn about it).

      But usually randomization opportunities are available for only a small part of what needs to be learned, and so observational studies and observational analysis methods will need to be leaned on heavily.

      Additionally, as Andy nicely argues in the linked paper, observational analysis methods often should be used in the analysis of randomized studies themselves (a rough sketch of both points follows this comment).
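
      Here is a minimal sketch of both points in Python, using simulated data; the variable names, the numbers, and the simple regression adjustment are illustrative assumptions, not anything from the post or the linked paper. The idea: randomize what you can (the treatment assignment and the assay run order), then analyze the randomized comparison with the kind of covariate adjustment more often associated with observational analyses.

      import numpy as np

      rng = np.random.default_rng(2023)
      n = 200

      # Randomize what can be randomized: treatment assignment and assay run order.
      treatment = rng.permutation(np.repeat([0, 1], n // 2))
      run_order = rng.permutation(n)  # e.g., the order in which samples are assayed or entered

      # Simulated pre-treatment covariate and outcome (purely illustrative).
      age = rng.normal(60, 10, n)
      outcome = 1.5 * treatment + 0.05 * age + rng.normal(0, 1, n)

      # Unadjusted randomized-experiment estimate: difference in means.
      unadjusted = outcome[treatment == 1].mean() - outcome[treatment == 0].mean()

      # "Observational-style" analysis of the same randomized data:
      # least-squares adjustment for the pre-treatment covariate.
      X = np.column_stack([np.ones(n), treatment, age - age.mean()])
      coef, *_ = np.linalg.lstsq(X, outcome, rcond=None)
      adjusted = coef[1]

      print(f"unadjusted: {unadjusted:.2f}, covariate-adjusted: {adjusted:.2f}")

      With randomization in place both estimates are reasonable; the covariate adjustment mainly buys precision, which is one way of reading the argument for bringing observational-style analysis to randomized data.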

  2. Epidemiologists are supposed to stick to a guideline called STROBE (Strengthening the Reporting of Observational Studies in Epidemiology) when reporting observational studies that aim to establish a CAUSAL association (1). If you want to PREDICT prognosis (i.e., build prediction models), you are supposed to stick to the TRIPOD (Transparent Reporting of a multivariable prediction model for Individual Prognosis Or Diagnosis) guidelines (2).
    1) http://journals.plos.org/plosmedicine/article?id=10.1371/journal.pmed.0040297
    2) https://www.ncbi.nlm.nih.gov/pubmed/25560714
