What is the role of qualitative methods in addressing issues of replicability, reproducibility, and rigor?

Kara Weisman writes:

I’m a PhD student in psychology, and I attended your talk at the Stanford Graduate School of Business earlier this year. I’m writing to ask you about something I remember you discussing at that talk: The possible role of qualitative methods in addressing issues of replicability, reproducibility, and rigor.

In particular, I think you spoke about how many (all?) of us “quantitative” researchers in fact draw on more qualitative observations in piloting studies, forming hypotheses, etc., without labeling it as such. You asked us to consider how many people we really think we need to talk to in order to have a sense of a phenomenon (on the idea that it’s probably more like 3 or 4 rather than 300 or 400). I think part of the point here was that our statistical models don’t reflect how we actually think about the knowledge we gain from conducting experiments. (Forgive me if I’m mangling your ideas!)

Anyway, this got me thinking about the possible role of qualitative research and “mixed methods” in pushing psychology and other fields toward more replicable, reproducible, and rigorous research – the basic idea being that qualitative methods, though less “objective” than quantitative approaches, might have complementary properties that would be useful in forming priors/hypotheses, providing a check on what kinds of results seem reasonable, and guiding the interpretation and generalization of results.

I responded:

Yes, I think these ideas are important, though I have not thought too systematically about them. This paper is somewhat relevant. You could also look at this talk, where I discuss how we could possibly make progress in a world with weak theory and incremental improvements.

I agree with you 100% that qualitative methods are important. One way to think about this is that we learn through measurement, and qualitative research is used to decide what to measure and how to measure it. The other way to think about this is . . . where do treatments come from? Researchers just think them up, right? It presumably would be better to do this thinking-them-up more systematically, no? That’s qualitative research.

Weisman replied:

Qualitative research could guide what we measure (and how), as well as what we manipulate (and how). I think it could also guide what “priors” we bring to our analyses (formally or informally) and therefore what we make of our results. (And I’d argue that in some sense it already does – it’s just that most of us use informal intuitions from piloting/observation rather than systematic qualitative approaches, and then we don’t report these observations anywhere.)

9 Comments

  1. Bill Harris says:

    Two suggestions. First, you might find Bob Dick’s “Rigour without numbers: the potential of dialectical processes as qualitative research tools” of interest; see http://www.aral.com.au/publ/ .

    Second, you might be interested in the field of action research. There’s a good, free course that’s just getting ready to start. It used to be offered as the first graduate course in AR at Southern Cross University, as I recall. See http://www.aral.com.au/areol/areolind.html, and contact Bob (email link on that page) if you’d like to find out more.

  2. El Gordo says:

    There is a good discussion of this in a recent book, “Multi-Method Social Science: Combining Qualitative and Quantitative Tools” (Cambridge)

    https://www.cambridge.org/core/books/multimethod-social-science/286C2742878FBCC6225E2F10D6095A0C

  3. Peter Dorman says:

    I’ve been thinking about this too in the context of economics, where one of the greatest sins (not always but often) is the refusal to take processes seriously and test only for outcomes. Some processes, of course, have distinct outcome footprints, and a purely quantitative design can look for them. But others don’t! Especially when we’re interested in questions of perception, motivation, hermeneutics, etc., there is a lot to be gained from talking with people in non-questionnaire mode. And I also agree that the issue of measurement is enormous: often there is no way to know how well an observable proxy captures the unobservable thing you’re interested in except by either talking with the principals or engaging in close observation of individual cases/episodes.

    (In work I did on the productivity vs. remuneration of child labor, I benefited enormously from simply watching children at work first-hand in various informal settings; this suggested proxies for their productivity, as well as a rough sense of how reliable those proxies are.)

    One qualitative technique I have second-hand experience with (from a coauthor) is structure-laying. It was developed in Germany, and I’ve never seen a good English-language account. It’s a method that allows subjects to physically express the structure of their perceived causal world, one that gets around potential gaps in intersubjectivity and related experimenter bias. It’s labor intensive (you have to interview subjects twice and do some prep in between) but powerful.

  4. Seamus Power says:

    Hi there,

    My colleagues and I recently published a model outlining the complementarity between qualitative & quantitative methods to develop psychological research in Perspectives on Psychological Science. It is titled “The SAGE model of Social Psychological Research.”

    The abstract reads: “We propose a SAGE model for social psychological research. Encapsulated in our acronym is a proposal to have a synthetic approach to social psychological research, in which qualitative methods are augmentative to quantitative ones, qualitative methods can be generative of new experimental hypotheses, and qualitative methods can capture experiences that evade experimental reductionism. We remind social psychological researchers that psychology was founded in multiple methods of investigation at multiple levels of analysis. We discuss historical examples and our own research as contemporary examples of how a SAGE model can operate in part or as an integrated whole. The implications of our model are discussed.”

    It can be accessed, open access, here: http://journals.sagepub.com/doi/full/10.1177/1745691617734863

    Cheers,
    Seamus.

  5. Ali Tasso says:

    This is a conversation that qualitative researchers are having, FYI: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5491836/

  6. Hi,

    Saul Albert and I have just written a paper for Collabra (in press) that addresses how qualitative methods can improve replicability, rigor, and general validity in psychological interaction research. A preprint is here: https://osf.io/rz9xa

    Cheers,

    JP

  7. Keith O'Rourke says:

    I came into statistics from a more qualitative background, and I remember being dismayed at how often people dove into expensive, time-consuming, hard-data-driven research rather than starting with potentially very helpful qualitative approaches like focus groups.

    C.S. Peirce described this mistake as trying to do induction with inadequate or incomplete abduction. In modern Bayesian terms, it is like trying to do an analysis without any prior information – stop, you are not ready.

    Now, I once gave a talk (1997) on the various roles of qualitative versus quantitative approaches as part of a merger of primarily qualitative and quantitative departments (arguing mainly for abduction and interpretation/contextualization as the roles of qualitative work), and I was insulted and dismissed (mainly by the qualitative faculty) in a way I had never experienced before or since. Afterwards, an old qualitative colleague told me the same department had done the same to her.

    • Your experience, Keith, speaks to my consistent refrain that only a small percentage of researchers are better thinkers. There are way too many researchers in these fields producing research of variable quality, as some of you have been pointing out here.

      What exactly did their dismissal entail? I am quite interested in the sociology of expertise.
