Mike Spagat points to this interview, which, he writes, covers themes that are discussed on the blog such as wrong ideas that don’t die, peer review and the statistics of conflict deaths.
I agree. It’s good stuff. Here are some of the things that Spagat says (he’s being interviewed by Joel Wing):
In fact, the standard excess-deaths concept leads to an interesting conundrum when combined with a striking fact exposed in the next-to-latest Human Security Report; in most countries child mortality rates decline during armed conflict (chapter 6). So if you believe the usual excess-death causality story then you’re forced to conclude that many conflicts actually save the lives of many children. Of course, the idea of wars saving lives is pretty hard to swallow. A much more sensible understanding is that there are a variety of factors that determine child deaths and that in many cases the factors that save the lives of children are stronger than the negative effects that conflict has on child mortality. . . .
We say that if the war is causing non-violent death rates to increase then you would expect non-violent deaths to increase more in the violent parts of Iraq than they do in the non-violent parts of Iraq. To the contrary, we find this just isn’t so. At least in our preliminary analysis, there seems to be very little correlation between violence levels and changes in non-violent death rates. This should make us wonder whether there is any reality behind the excess deaths claims that have been based on this Iraq survey. In fact, we should question the conventional excess-deaths idea in general.
Then information on some particular surveys, lots of details that are worth reading, fascinating stuff.
Then some general points that arose because some of the stuff being criticized appeared in high-profile scientific journals. Here’s Spagat again:
First of all, saying that something has to be right or is probably right because it has been peer reviewed is quite a weak defense. Peer review is a good thing, and it is a strength of scientific journals that there is that level of scrutiny, but if you look at the list of scientific claims that have turned out to be wrong and that have been published in peer reviewed journals . . . well . . . the list just goes on and on and on. Publishing in a peer reviewed journal is no guarantee that something is right. Some of the people who do the referee reports are more conscientious than others. In almost no cases does refereeing ever include an element of replication. Often referees don’t even know enough about literature cited to judge whether claims about the current state of knowledge are accurate or otherwise. Mostly people just assume what they’re being told by the authors of the paper is correct and valid. Peer review is better than no peer review, but it hardly guarantees that something is going to be correct. . . .
Journal peer review is just the beginning of a long peer review process. Thinking that journal peer review is the end of this process is a serious misunderstanding. Peer review is an ongoing thing. It is not something that ends with publication. Everything in science is potentially up for grabs, and people are always free to question. Anyone might come up with valid criticisms.
If you look at Burnham et al. there have been a number of peer reviewed articles that have critiqued it, and said it is wrong. So if you think peer review has to always be correct then you’re immediately in a logical conundrum because you’ve got peer reviewed articles saying opposite things. What do you do now?
I’m happy to give people credit for doing difficult research in war zones. And I’m happy to admire the courage of people who do dangerous field work. But doing courageous field work doesn’t make your findings correct and we shouldn’t accept false claims just because someone had the guts to go out in the field and gather data. Science is a ruthless process. We have to seek the truth. Courage is not an adequate rebuttal to being wrong.
P.S. I was going through my old emails from several years ago and saw this exchange:
Someone asked me: Have you followed the debate on the Iraq death estimates obtained through survey methods? Reference: “Mortality after the 2003 invasion of Iraq: a cross-sectional cluster sample survey,” by Gilbert Burnham, Riyadh Lafta, Shannon Doocy, and Les Roberts, The Lancet, Oct 13, 2006.
I replied: The study looked reasonable to me. And I pointed to this blog post from 2006, where I wrote some pretty general comments about cluster sampling. There was lively discussion in the comments section (at the time, these Iraq surveys were politically loaded, with people on the left grabbing on to evidence suggesting bad things were happening over there, and people on the right looking to discredit such claims), in particular involving the reluctance of the researchers to describe exactly what they were doing. I wrote:
Burnham et al. provide lots of detail on the first stage of the sampling (the choice of provinces) but much less detail later on. For example, they should be able to compute the probability of selection of each household (based on the selection of province, administrative unit, street, and household). Then they can see how these probabilities vary and adjust if necessary.
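The adjustment described above can be sketched in a few lines of code. This is only a hypothetical illustration of the general idea (multiply the per-stage probabilities, then weight by the inverse of the result); the household labels and stage probabilities below are made up and have nothing to do with the actual Burnham et al. design:

```python
# Sketch: multi-stage selection probabilities and inverse-probability weights
# for a cluster sample. All numbers here are hypothetical illustrations.

def selection_prob(stage_probs):
    """Overall inclusion probability = product of the per-stage probabilities."""
    p = 1.0
    for q in stage_probs:
        p *= q
    return p

# Hypothetical households: probability of selecting the province,
# administrative unit, street, and household, in that order.
households = {
    "A": [0.5, 0.2, 0.1, 0.05],
    "B": [0.5, 0.1, 0.1, 0.05],
    "C": [0.25, 0.2, 0.2, 0.05],
}

probs = {h: selection_prob(ps) for h, ps in households.items()}

# If the probabilities vary across households, weight each response by the
# inverse of its selection probability (Horvitz-Thompson-style weighting).
weights = {h: 1.0 / p for h, p in probs.items()}
```

Here household B is half as likely to be selected as A or C, so it gets twice the weight; if all the probabilities came out equal, the weights would be constant and no adjustment would be needed, which is why it is worth computing them in the first place.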
Unfortunately, a lack of detail on exact procedures is a common problem in research reports in general: it’s surprisingly difficult for people to simply describe exactly what they did. (I’m always telling this to students when they write up their own research: Just say exactly what you did, and you’ll be mostly there.) This is a little bit frustrating but unfortunately is not unique to this study.
Unfortunately, I’d still have to go with this general position: it’s common to not share data or methods (indeed, as anyone knows who’s ever tried to write a report on anything, it can be surprisingly effortful to write up exactly what you did), so that alone is not evidence of a serious flaw in the research. However, given that serious flaws have been demonstrated in other ways (as discussed by Spagat), it becomes more relevant that the researchers can’t tell us what they did. At some point it’s up to them to defend their numbers. As Spagat says, it’s not enough just to point to publication in a top journal of various summary statistics.