Pablo Verde sends in this letter that he and Daniel Curcio just published in the Journal of Antimicrobial Chemotherapy, criticizing a meta-analysis whose boundary estimate of the between-study variance, Verde said, gave nonsense results. Here’s Curcio and Verde’s key paragraph:
The authors [of the study they are criticizing] performed a test of heterogeneity between studies. Given that the test result was not significant at 5%, they decided to pool all the RRs by using a fixed-effect meta-analysis model. Unfortunately, this is a common practice in meta-analysis, which usually leads to very misleading results. First of all, the pooled RR as well as its standard error are sensitive to the estimation of the between-studies standard deviation (SD). SD is difficult to estimate with a small number of studies. On the other hand, it is very well known that the significance test of heterogeneity lacks statistical power to detect values of SD greater than zero. In addition, the statistically non-significant results of this test cannot be interpreted as evidence of the homogeneity of the results among all RCTs included.
How can you generally avoid boundary estimates of multilevel variance parameters? Using our cute little trick, implemented in blmer/bglmer in the blme package in R.
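To make this concrete, here’s a minimal sketch of the idea: blmer() is a drop-in replacement for lme4’s lmer(), and its default weakly informative prior on the covariance keeps the group-level SD estimate away from exactly zero. The data below are made up purely for illustration; only the blmer() call and the blme package are from the post.

```r
library(blme)  # provides blmer() and bglmer(), drop-in replacements
               # for lmer() and glmer() from lme4

# Hypothetical grouped data with little between-group variation --
# the situation where plain lmer() often returns a boundary estimate
# (group-level SD estimated as exactly 0)
set.seed(1)
dat <- data.frame(g = rep(letters[1:8], each = 5),
                  y = rnorm(40))

# blmer()'s default prior on the covariance pulls the variance
# estimate off the zero boundary
fit <- blmer(y ~ 1 + (1 | g), data = dat)
VarCorr(fit)  # group-level SD should come out strictly positive
```

For a non-Gaussian outcome (say, binary events per study), the analogous call is bglmer() with a family argument, just as with glmer().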