Social research is not the same as health research: Macartan Humphreys gives new guidelines for ethics in social science research

In reaction to the recent controversy about a research project that interfered with an election in Montana, political scientist Macartan Humphreys shares some excellent ideas on how to think about ethics in social science research:

Social science researchers rely on principles developed by health researchers that do not always do the work asked of them . . . because of differences in the nature of what is studied, and the nature of relations between researcher and subject, the standards developed by health researchers do not always seem well suited for social scientists.

How is social science different? Macartan explains:

Unlike many health scientists, social scientists are commonly working on problems in which:

1. Those most likely to be harmed by an intervention are not the subjects (for example, when researchers are interested in the behavior of bureaucrats whose decisions affect citizens, or in the behavior of pivotal voters, which in turn can affect the outcome of elections).

2. Researchers are interested in the behavior of institutions or groups, whether governmental, private sector or nongovernmental, and do not require information about individuals (for example, if you want to figure out if a government licensing agency processes applications faster from high-caste applicants than from low-caste applicants).

3. Subjects are not potential beneficiaries of the research and may even oppose it (for example, for studies of interventions seeking to reduce corruption in which the corrupt bureaucrats are the subjects).

4. Consent processes can compromise the research (for example, for studies that seek to measure gender- or race-based discrimination by landlords or employers).

5. There is disagreement over whether the outcomes are valuable (compare finding a cure for a disease to finding out that patronage politics is an effective electoral strategy).

These five features can sometimes make the standard procedures used by Institutional Review Boards for approving social science research irrelevant or unworkable.

Macartan next explains why the standard principles of ethics for medical research do not apply:

The first two differences mean that formal reviews, as currently set up, can ignore the full range of benefits and harms of research, or may not cover the research at all. Formal reviews focus on human subjects: living individuals about whom investigators obtain data through intervention or interaction, or obtain identifiable private information.

The third and fourth, which again focus on subjects rather than broader populations, can quickly put the principles of justice and respect for persons — two of the core principles elaborated in the Belmont Report (upon which standard review processes are based) — at odds with research that otherwise may seem justifiable on other grounds.

The fifth difference can make the third Belmont principle, beneficence, unworkable, at least in the absence of some formula for comparing the benefits to some against the costs for others.

What to do? As Macartan puts it:

I don’t mean what is in some sense the objectively right or wrong way to behave, but simply what behavior meets the standards that we would like the public to expect of us as researchers, and that we, as researchers, would like to be able to expect of each other. These expectations should reflect a combination of what behavior we feel morally comfortable with and expectations that will make it possible to do our work well. This is the idea of professional ethics.

Then he lays out some principles:

If we take it as given that research should not break the law and that conflicts of interest should be appropriately addressed, I think it is useful to think through the following four questions in this order:

Question 1 [Agency]: Is the researcher responsible for interventions that manipulate people?

Question 2 [Consent]: Is it done without their consent?

Question 3 [No Harm]: Is it likely to bring harm to some people?

Question 4 [Net Benefits]: Do possible harms outweigh the benefits?

Ideally you would want to answer no to all four of these; if you answer yes to all of these, you’re in trouble.
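To make the sequencing concrete, here is a minimal sketch of the checklist as I read it. To be clear, this is my own illustration, not anything from Macartan’s document; the function name, variable names, and return messages are all invented:

def ethics_checklist(manipulates_people: bool,
                     without_consent: bool,
                     likely_to_harm: bool,
                     harms_outweigh_benefits: bool) -> str:
    """Walk the four questions in order; an early 'no' means the
    later, harder questions carry less of the justificatory load."""
    if not manipulates_people:        # Q1 (Agency): someone else owns the intervention
        return "easier case: the researcher is not responsible for the manipulation"
    if not without_consent:           # Q2 (Consent): affected parties agreed
        return "easier case: consent shares responsibility with those affected"
    if not likely_to_harm:            # Q3 (No Harm): no one is likely to be hurt
        return "perhaps not much at stake, even without consent"
    if not harms_outweigh_benefits:   # Q4 (Net Benefits): benefits exceed harms
        return "hard case: everything rests on a net-benefits argument"
    return "four yeses: you're in trouble"

The ordering matters: a no on agency or consent, when you can get one, spares you from ever having to make the net-benefits argument, which, as we will see, is the hardest one to defend.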

And now the tough calls:

The tricky cases are when you can answer yes to some but no to others. What then? What combinations of answers to these questions can be used to justify a research design?

Let’s take the questions in order.

Agency. In classic health and agricultural experiments, the researcher sets up and implements the intervention, often as a trial. But in many social science field experiments (or randomized interventions, or policy experiments), the intervention is initiated and implemented by someone else, such as a government, an NGO or a political party.

In these cases, researchers might advise on ways that an intervention is implemented to maximize learning, but the responsibility for the intervention is borne by some other actor. In some cases, these partnerships are essential — for example, if an intervention can be implemented only by a government or a large organization. Working with partners can have the advantage of increasing the realism and relevance of the intervention, ensuring that you are not just studying yourself (though, conversely, it also can limit the types of questions that can be answered).

But even if they are not critical for implementation, partnerships can simplify the ethics. The decision to implement is taken not by the researcher but by an actor better equipped to assess risks and to respond to adverse outcomes. Real risks may be reduced, but so might risks to the profession arising from public fears that researchers are using people as guinea pigs. In these cases, while there is learning, there are no guinea pigs, since the research follows the intervention rather than the other way round.

Obviously, the partnership approach doesn’t help much if the partner organization itself acts unethically, or if the partnership is just a front for a researcher who swaps back and forth between wearing the hats of practitioner and researcher. Moreover, partnering can raise new ethical issues if, for example, by doing so, researchers lend legitimacy to organizations or governments that are themselves involved in corrupt or abusive practices. The general takeaway is that if, through partnerships, you can answer no to Question 1, then things can get a lot easier. The specific question that needs clarification is when it is appropriate to partner with a given third party.

Consent. The emphasis on consent comes from a concern with respect for persons. The goal is to minimize interference with the autonomy of others. But consent, like partnerships, also provides a way to share responsibility with others affected by the research.

If everyone consents then it is hard to mount an ethical challenge. Moreover, consent provides information regarding harm from the subject’s perspective. Conversely, when deception is used, this may damage relations of trust between a research community and the public, which can both weaken the quality of the research and make it harder for others to do good research in the future. Gains from consent can be compromised, however, if consent is less than fully informed or is coerced or is extracted from vulnerable populations or if deception is used.

The tricky issue is that in some social science research it is ambiguous whose consent is needed. If an intervention seeks to reduce violence against women, consent may be sought from the women but not from the (directly affected) aggressors. If consent is obtained from voters to receive truthful information about corruption by political candidates, is the consent of those (indirectly affected) political candidates also required before this information is handed out? Should they have a veto?

Again, the main point here is that you want a no to Question 2 whenever possible. The more specific question that needs clarification, though, is when, if ever, it is appropriate to bypass consent procedures for some of the people affected by social science research.

No harm and net benefits. If you have already run into problems with the agency and consent principles, then everything comes down to the no harm and net benefits principles. But these are hard ones.

Unlike much health research, politics is about winners and losers, and so a lot of political interventions create losers, whether they are subjects or third parties. For instance, an intervention that succeeds in increasing the participation of the poor in politics may weaken the position of the rich. If the intervention does no harm to anyone, then perhaps there is not much at stake — even if it is done without consent.

Of course, weak interventions are less likely to do harm, but the point of the interventions is to try to observe substantively significant effects (see here also). So often there is at least some probability that a strong political intervention will do harm to someone somewhere. The issue here is whether you are willing to do it for research reasons alone.

Perhaps you might, if you can answer Question 4 and satisfy the net benefits principle. But for social and political applications you might have a very hard time answering this question, a harder time justifying any answer you give, and a harder time still explaining on what grounds you could even begin to construct an answer (see, for example, the huge body of work on this following Arrow’s contributions). Figuring out principles for determining net benefits in the face of value disagreements will be a hard challenge for the discipline.
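To get a feel for why this is so hard, write down the most naive net-benefits formula imaginable (the notation is mine, not Macartan’s):

W = \sum_i w_i (b_i - c_i)

where b_i and c_i are the benefits and costs of the intervention to person i, and w_i is the weight given to that person’s interests. Every ingredient is contested: the benefits and costs of political outcomes sit on no common scale, and the choice of the weights w_i is exactly where the value disagreement of the fifth difference enters. Arrow’s impossibility theorem says, roughly, that no rule for aggregating individuals’ ordinal preferences can satisfy even a short list of apparently mild conditions, so there is no neutral aggregation procedure to fall back on.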

Macartan summarizes:

Taken together, these four principles are more nuanced than simply telling researchers to “leave no trace” on political outcomes or to ensure only that they break no laws. Experiments must leave traces. The whole point of field experiments is to change the world, even if it is often in small ways.

And here’s his advice for researchers:

Work through partnerships when you can. If you cannot and if instead you implement an intervention that can do harm to someone, then get consent from affected parties — not just human subjects — instead of trying to make an argument about net benefits. If getting consent compromises the research, then change topic.

The way forward

As Macartan writes:

At the moment, those expectations are not clear. In health research the expectations were formed through a joint deliberation with health practitioners and representatives of the general public. Social scientists have had no such process.

I think Macartan’s document is excellent, and I hope and expect it will serve as a starting point for a generally accepted set of guidelines for social research.

My only criticism of what Macartan wrote is that he did not frame it broadly enough. He presents it as a discussion of the ethics of “field experiments” but I think it applies to social science research more generally.

12 thoughts on “Social research is not the same as health research: Macartan Humphreys gives new guidelines for ethics in social science research”

  1. Just a comment on consent and consent procedures. One of the issues in social science research is getting proof of consent, as in a signed statement by a person participating in a study (e.g., an interviewee). For those of us who work on (and sometimes in) authoritarian countries, and who sometimes engage in research that is quite politically sensitive, the traditional proof of consent, based on a medical model, is absurd, for it potentially puts an interviewee at risk from the authoritarian state! The last thing I want to be carrying around is a signed statement that implicates an interviewee in any way — even if the only “data” is that the person talked to me. This is a case where the injunction to “do no harm” means that you should of course ask for informed consent orally, but have no written/printed *record* of consent. Such a practice protects the interviewee from the authoritarian state or its agents, like the police.

    • Shawn:

      Another way to think of this is to imagine the study being done the other way: for example, imagine a study conducted in the U.S., sponsored by the Chinese government, with the purpose of encouraging Americans to engage in some sort of political behavior, perhaps involving the government of a small suburb or a school board or a condo organization. I’d be a little spooked to think that agents of a foreign government are messing around with my local school board. Maybe I shouldn’t be spooked, but I would be. So I do think there’s a problem with the model in which political scientists are manipulating people, especially those in other countries. This is not to say that such manipulations should not be done, just that there are issues.

    • There should always be procedures in place for waiving signed consent when the risks associated with signing are greater than the risks associated with the study, or when the signature itself increases the risks of the study. This isn’t about changing outcomes; this is about, for example, interviewing women who support equality in countries where that is opposed by the government. Or homosexuals in Uganda. Or some other group for whom the dangers of being exposed when your bags are searched are much greater than the dangers that would exist if you did the same interview without a signature. You still need to get consent (and document that you followed consent procedures), but the signature is unnecessary in that case.

      • In the federal regulations there is a section that allows for the waiver of documentation of informed consent. The circumstances you describe are precisely the reasons for that section. Many investigators do not know about this provision or do not avail themselves of it.

  2. In the context of the Montana study, the key issue wasn’t that it affected outcomes, created some losers, or failed a cost-benefit calculation.

    The issue most people would object to is the misrepresentation, i.e., making the communication sound more official or impartial than it really was.

    To that extent I think Humphreys over-complicates matters. I don’t think that the community objects to social science research creating losers or helping the poor at the expense of the rich etc. The political ideologies of many social science professors are clear enough and not really a cause for uproar.

    The shit hits the fan when there is misrepresentation. That’s what ought to be most scrupulously avoided. I think. It wasn’t even about consent. It was about lying.

    • Although Humphreys’ comments were, it appears, triggered by the Montana study controversy, the points raised are quite general. IRBs evaluate human-subjects research proposals by applying regulations that were designed for biomedical research. As such, they can easily miss the real issues raised by social science research. There is little doubt in my mind that the Montana study would pass muster with almost any IRB because, while there are rules against deception in getting consent (though even these can be waived in some circumstances), the kind of deception involved in the Montana study does not violate any existing regulations, as far as I can see. The health care research regulations focus on problems that often do not exist in social science research and are simultaneously oblivious to other problems that can be raised by social science research.

      I think Humphreys has done an outstanding job of delineating the principal differences between social science research and health care research. Separate, new regulations are needed to govern the former, along with criteria for deciding under which category a proposal should be reviewed when it falls into a grey area. (Health services research is often as much like social science research as like biomedical research.)

        • I have no inside information on what they sent to the IRB(s) involved. But ordinarily, the IRB will insist on seeing _everything_ that study participants will see. And if, later on, you want to make _any_ changes, even just correct a spelling error, you have to resubmit it and get approval to make the change. IRBs are typically very strict about this sort of thing. It would astonish me if the IRBs involved were not shown this.

          Based on my IRB experience, however, it would be completely unsurprising if they looked askance at it, said “tsk tsk,” and ultimately concluded “but it doesn’t violate any of the regulations, so we’re going to approve it.”

          As with so many things today, the problem isn’t that people violate the rules; the problem is what the rules allow.

        • When researchers from multiple universities are involved, does only one IRB review the study, or do all of them? Apparently in the Montana study’s case the Dartmouth IRB did approve the study & the Stanford IRB was never asked to review it.

          If a study violates federal laws, forget IRB codes of ethics; surely a good IRB will say no to the study? In the Montana scandal, actual laws are alleged to have been violated & obviously, let’s wait for a court ruling, but if something is not just unethical but patently illegal, wouldn’t an IRB be expected to intervene & protect the university’s interests?

          For internal rules, perhaps an IRB can grant a waiver. But potential violations of federal or state laws would necessitate mandatory compliance. Unless the legal position here is that no laws were actually violated & that, if this went to court, the researchers would win?

  3. “If an intervention seeks to reduce violence against women, consent may be sought from the women but not from the (directly affected) aggressors.”

    I’ve tried to find words. I mean, it could have been worse; he could have used rape. Oh, wait.

    What kind of intervention is this where aggressors are directly impacted but are not participants in the study? Would he raise the same concern about an intervention to reduce armed robberies not getting consent from local robbers? Oh wait, we couldn’t use that example, because it wouldn’t be an example of direct impact, not to mention it wouldn’t be … cute? funny? sexual? What’s he going for here? Thank goodness corrupt officials are only indirectly impacted by the other hypothetical study he mentions.

    I’ve reviewed domestic violence treatment studies. Any good IRB would spend lots of time talking about indirect impacts on the offender and whether the study includes adequate safety planning for participants, their children, and program staff, I can assure you of that.

    There is so much to say about the topic, but I’ll at least agree with Andrew that the issue goes way beyond field experiments.
