Giving feedback indirectly by invoking a hypothetical reviewer

Ethan Bolker points us to this discussion on “How can I avoid being ‘the negative one’ when giving feedback on statistics?”, which begins:

Results get sent around a group of biological collaborators for feedback. Comments come back from the senior members of the group about the implications of the results, possible extensions, etc. I look at the results and I tend not to be as good at the “big picture” stuff (I’m a relatively junior member of the team), but I’m reasonably good with statistics (and that’s my main role), so I look at the details.

Sometimes I think to myself “I don’t think those conclusions are remotely justified by the data”. How can I give honest feedback in a way that doesn’t come across as overly negative? I can suggest alternative, more valid approaches, but if those give the answer “it’s just noise”, I’m pouring cold water over the whole thing.

There are lots of thoughtful responses at the link. I agree with Bolker that the question and comments are interesting.

What’s my advice in such situations? If you’re involved in a project where you think something’s being done wrong, but you don’t have the authority to just say it should be done differently, I often recommend bringing in a hypothetical third party and saying something like this: “I think I see what you’re doing here, but what if a reviewer said XYZ? How would you respond to that?”

5 thoughts on “Giving feedback indirectly by invoking a hypothetical reviewer”

  1. I imagine that if the senior members of this group of collaborators endorse the current purportedly questionable paper, then they’ve probably published similarly questionable papers in the past. I would be hesitant in this situation to raise criticisms, even in the way you (Andrew) suggest, because I wouldn’t want to have to get into the wider issues discussed on this blog with a senior collaborator who isn’t familiar with them. There’s definitely some cowardice involved in my view here, but it’s difficult (and potentially very career-damaging) to argue for ‘good science’ vs. ‘publishable science’ as a junior member of a research team. After all, in many places PPNAS is known as prestigious with no sarcasm involved…

    • > but it’s difficult (and potentially very career-damaging) to argue for ‘good science’ vs. ‘publishable science’ as a junior member of a research team.

      That is sad, very sad.

      One possibility is to encourage or arrange for them to present to a group where someone will directly raise the issue (perhaps informed by you).

      Another is to seek the opinion of a senior member of your group as to what to do, and hope they volunteer to raise it or encourage you to. Now, the response I got to that once was “don’t find any errors, the research funds have been spent and we won’t be able to fix it anyway” (they soon afterwards became one of the most senior research managers in Canada). But I was not on their bad side – until I found an important error and brought it to their attention. Then I definitely was.

    • I’m someone who has been a junior member of a certain lab just like this. However, I’m pig-headed enough that I remained vocal anyway. I found that I couldn’t effectively prevent flawed papers from being sent out for review, and I couldn’t get people to wholesale abandon methods that were silly but had led to lauded publications in tabloid journals in the past. What I could regularly do successfully though was a kind of incrementalism. “Why don’t I run some regressions where I add priors, and we can include them as a follow-up analysis in the results.” Or, “I like it, but I wonder if it’s worth including this batch of p=.04999 results in the manuscript. They’re not very strong and we have enough other findings that are more exciting to focus on.” That sort of thing worked.

  2. I do the hypothetical reviewer thing and, at least from a social perspective, it seems to work as well as anything else. It’s easy partly because I have a reviewer voice running in my head anyway. On the other hand, senior scientists have a good sense of what can pass review, so… YMMV.

    I’d also say it is important to provide a positive path forward. What realistic experiment could be done to nail the conclusion? It’s a normal thing in a research group to point out an issue and then to help brainstorm a way around it. It would be a different conversation if this guy had been trying the normal routes of critique and was always shot down, but from the question it seems like he hasn’t even really tried giving his feedback yet.

    It would be useful, if possible, to do an analysis that highlights the weaknesses in the data. Something easy to understand like shuffling the data or simulating the (likely implied) model. If the role is “stats guy”, asking to play with the raw data should seem like a normal thing. Maybe better, sit with someone and work through the data together—pair data analysis can be fun (like pair programming).
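
    To make the “shuffle the data” idea concrete, here is a minimal sketch of a permutation check, assuming a simple two-group comparison; the data are simulated and the names are hypothetical, so treat it as an illustration rather than a recipe for any particular study:

    ```python
    # Minimal sketch of the "shuffle the data" idea: a permutation test for a
    # two-group comparison. Data are simulated; names are hypothetical.
    import numpy as np

    rng = np.random.default_rng(0)

    def permutation_test(y_a, y_b, n_shuffles=10_000):
        """Shuffle group labels to see how often a difference this large arises by chance."""
        observed = y_a.mean() - y_b.mean()
        pooled = np.concatenate([y_a, y_b])
        n_a = len(y_a)
        count = 0
        for _ in range(n_shuffles):
            rng.shuffle(pooled)
            diff = pooled[:n_a].mean() - pooled[n_a:].mean()
            if abs(diff) >= abs(observed):
                count += 1
        return observed, count / n_shuffles

    # Noise-only example: the "effect" is indistinguishable from shuffled labels.
    y_a = rng.normal(size=20)
    y_b = rng.normal(size=20)
    obs, p = permutation_test(y_a, y_b)
    print(f"observed difference = {obs:.3f}, permutation p-value = {p:.3f}")
    ```

    Showing collaborators that the observed difference sits comfortably inside the distribution of shuffled differences is often an easier sell than arguing abstractly that the result might be noise.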

