## Understanding how estimates change when you move to a multilevel model

Ramu Sudhagoni writes:

I am working on combining three longitudinal studies using a Bayesian hierarchical technique. In each study, I have at least 70 subjects followed up at 5 different visit months. My model consists of 10 different covariates, including longitudinal and cross-sectional effects. Mixed models were fit to the three studies individually using a Bayesian approach, and I noticed that a few covariates were significant. When I combined the studies using a three-level hierarchical approach, all the covariates became non-significant at the population level, and large estimates were found for the variance parameters at the population level. I am struggling to understand why I am getting large variances at the population level and wider credible intervals. I assumed non-informative normal priors for all my cross-sectional and longitudinal effects, and non-informative inverse-gamma priors for the variance parameters. I followed the approach explained by Inoue et al. (Combining Longitudinal Studies of PSA, Biostatistics, 2004, 483-500).

I don’t know, but I’d recommend graphing your data and fitted model so you can try to understand where the estimates are coming from.

Also, get rid of those inverse-gamma priors, which aren’t noninformative at all! (See my 2006 paper.)
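To see why an Inverse-Gamma(ε, ε) prior is not really noninformative, here is a minimal sketch (my own illustration, not from the original post) that evaluates the density it implies for the standard deviation σ at a few small values of ε, alongside the half-Cauchy prior recommended in the 2006 paper:

```python
import numpy as np
from scipy import stats

# "Non-informative" Inverse-Gamma(eps, eps) priors on a variance sigma^2
# behave very differently near sigma = 0 depending on eps, so inferences
# for a small variance component can be sensitive to the choice of eps.
sigma = np.array([0.01, 0.1, 1.0])

def sd_density_from_invgamma(sigma, eps):
    """Implied density on sigma when sigma^2 ~ Inverse-Gamma(eps, eps)."""
    return stats.invgamma.pdf(sigma**2, a=eps, scale=eps) * 2 * sigma

for eps in (1.0, 0.1, 0.001):
    print(f"eps = {eps}:", sd_density_from_invgamma(sigma, eps))

# A half-Cauchy prior on sigma (Gelman 2006) is flat near zero instead:
print("half-Cauchy:", stats.halfcauchy.pdf(sigma, scale=1.0))
```

The Inverse-Gamma prior puts essentially no mass near σ = 0, so when a group-level standard deviation is actually small, the prior can dominate the posterior; the half-Cauchy density is finite and flat near zero and avoids this.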

1. C Ryan King says:

Could you have an identifiability problem with one of the variances? Are the data non-Gaussian? If so, is there a marginal/conditional transformation happening when you add a level to the model?
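The identifiability worry is easy to illustrate: with only three studies contributing to the top level, the between-study standard deviation τ is barely constrained by the data. The sketch below (hypothetical numbers, a simple normal meta-analysis rather than the poster's full model) computes the marginal likelihood for τ on a grid, with the overall mean integrated out under a flat prior:

```python
import numpy as np

# Hypothetical study-level effect estimates and their standard errors;
# three studies, as in the post.
y = np.array([0.2, -0.1, 0.4])
s = np.array([0.15, 0.15, 0.15])

def log_marginal(tau, y, s):
    """log p(y | tau) with the overall mean mu integrated out (flat prior)."""
    v = tau**2 + s**2          # marginal variance of each estimate
    w = 1.0 / v
    mu_hat = np.sum(w * y) / np.sum(w)
    return (-0.5 * np.sum(np.log(v))
            - 0.5 * np.log(np.sum(w))
            - 0.5 * np.sum(w * (y - mu_hat) ** 2))

taus = np.linspace(0.001, 2.0, 400)
ll = np.array([log_marginal(t, y, s) for t in taus])
ll -= ll.max()

# The relative likelihood stays above 0.1 over a wide span of tau values,
# i.e. the three studies barely constrain the between-study sd.
supported = taus[np.exp(ll) > 0.1]
print(supported.min(), supported.max())
```

With a nearly flat likelihood, the posterior for τ is driven largely by the prior, which is consistent with the large population-level variance estimates and wide intervals described in the post.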

2. anon says:

If you are ambitious you could try to follow the example on page 395 of Gelman and Hill (i.e., plot the priors and posteriors).
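A minimal sketch of that prior-vs-posterior check, using a conjugate normal model with known mean so the posterior is available in closed form (the data here are simulated, not the poster's):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
y = rng.normal(0.0, 1.5, size=50)   # simulated data, true sd = 1.5

# A weak Inverse-Gamma prior on the variance (eps chosen for illustration);
# with known mean 0, the posterior is also Inverse-Gamma.
a0, b0 = 0.01, 0.01
a_n = a0 + len(y) / 2
b_n = b0 + np.sum(y**2) / 2

prior_var = stats.invgamma.rvs(a0, scale=b0, size=5000, random_state=rng)
post_var = stats.invgamma.rvs(a_n, scale=b_n, size=5000, random_state=rng)

# Overlaying histograms of prior_var and post_var (as in Gelman and Hill)
# shows how much the data move the variance away from its prior; if the
# two sets of draws look alike, the data carry little information.
print("prior median:", np.median(prior_var))
print("posterior median:", np.median(post_var))
```

In a hierarchical model the same comparison can be done with MCMC draws: sample once from the prior, once from the posterior, and plot each variance parameter's two distributions side by side.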

3. Corey says:

Could this be related to the phenomenon of increased population variation as a result of adding an individual-level covariate (as discussed in Gelman and Price (1998) and Gelman and Hill, p. 480)? In that case an increase in the apparent residual variance was caused by a correlation between the predictors at the lower level and the errors at the upper level of a two-level hierarchy. The instance described in the post is different in that a level was added on top rather than a covariate at the bottom, but perhaps something similar is going on.

4. [...] guess I need to go figure out why Gelman says that inverse-gamma priors for variance parameters are informative and therefore suggests (I’m assuming based on my previous approach) half-t priors. I guess [...]