Derek Sonderegger writes:

I have just finished my Ph.D. in statistics and am currently working in applied statistics (plant ecology) using Bayesian statistics. As the statistician in the group I only ever get the ‘hard analysis’ problems that don’t readily fit into standard models. As I delve into the computational aspects of Bayesian analysis, I find myself increasingly frustrated with the current set of tools. I was delighted to see JAGS 2.0 just came out and spent yesterday happily playing with it.

My question is, where do you see the short-term future of Bayesian computing going and what can we do to steer it in a particular direction?

In your book with Dr Hill, you mention that you expect BUGS (or its successor) to become increasingly sophisticated and, for example, re-parameterizations that increase convergence rates would be handled automatically. Just as R has been successful because users can extend it, I think progress here also will be made by input from ‘people with an itch to scratch.’ After the 50th time I’ve written (and made silly mistakes writing):

    mu.alpha ~ dnorm(0, .0001)
    for(i in 1:n){
      alpha.raw[i] ~ dnorm(mu.alpha, tau.alpha)
      alpha[i] <- alpha.raw[i] - mean(alpha.raw[])
    }

I would love to write something that hides that from me. Here is my hope/expectation: there should be a greater decoupling of the BUGS interface, which builds a graph structure, from the back-end engine that takes a graph and runs the MCMC using whatever samplers it deems appropriate. By separating the two steps, people can modify the input to make it easier to build a specific graph without worrying about the MCMC engine. Re-parameterization problems lie firmly in this sphere. People doing research into different samplers can worry only about a particular graph structure, not how the structure was created. This would make it easier for *both* types of developers to debug and test their code, and easier to add new functionality.
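The centering step in that snippet can be checked in a few lines of Python (a toy demonstration, not BUGS itself; the hyperparameter values are made up): draw the raw group effects, subtract their mean, and confirm the centered effects sum to zero.

```python
import random

random.seed(1)

n = 8
mu_alpha, sigma_alpha = 2.0, 1.5  # hypothetical hyperparameters

# Draw the "raw" group effects, as in alpha.raw[i] ~ dnorm(mu.alpha, tau.alpha)
alpha_raw = [random.gauss(mu_alpha, sigma_alpha) for _ in range(n)]

# Center them, as in alpha[i] <- alpha.raw[i] - mean(alpha.raw[])
m = sum(alpha_raw) / n
alpha = [a - m for a in alpha_raw]

# The centered effects now sum to (numerically) zero by construction
residual = abs(sum(alpha))
```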

My answer: I agree with you and I do think that future versions of Bugs will be more modular. As it is, relatively simple hierarchical regression models can take up several pages of Bugs code. The resulting models are likely to have errors and will typically run slowly. I discussed some of these issues in my recent article in Statistics in Medicine.

I am not a statistician, but I completely share his sentiment. Current tools for harder Bayesian statistics problems are unsatisfactory, especially for someone without an advanced background in statistics.

I am involved in a similar effort in a different language. PyMC is a fairly well-developed Python package that does exactly what he describes: you build a model by linking objects in a way that can be graphed, and then choose a sampler to draw from the posterior.
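The separation being described can be sketched in plain Python (this is a toy illustration of the idea, not PyMC's actual API): the front end builds a graph of stochastic nodes, and a generic sampler sees only the joint log-density that the graph defines.

```python
import math
import random

class Stochastic:
    """A node in the model graph: a name, a log-density, and parent links.

    The parent links record the graph structure; this toy sampler does not
    use them, but a smarter back end could."""
    def __init__(self, name, logp, parents=()):
        self.name, self.logp, self.parents = name, logp, parents

def norm_logp(x, m, s):
    # Normal log-density up to an additive constant
    return -0.5 * ((x - m) / s) ** 2 - math.log(s)

# --- step 1: build the graph (front end) ---
# Toy model: mu ~ N(0, 10^2); one observation y = 1.2 with y ~ N(mu, 1)
mu = Stochastic("mu", lambda v: norm_logp(v["mu"], 0.0, 10.0))
y = Stochastic("y", lambda v: norm_logp(1.2, v["mu"], 1.0), parents=(mu,))
graph = [mu, y]

def joint_logp(values):
    return sum(node.logp(values) for node in graph)

# --- step 2: hand the graph's log-density to a sampler (back end) ---
def metropolis(logp, init, n_iter=5000, step=1.0, seed=0):
    rng = random.Random(seed)
    x, trace = dict(init), []
    for _ in range(n_iter):
        prop = {"mu": x["mu"] + rng.gauss(0, step)}
        if math.log(rng.random()) < logp(prop) - logp(x):
            x = prop
        trace.append(x["mu"])
    return trace

trace = metropolis(joint_logp, {"mu": 0.0})
```

Because the sampler only ever calls `logp`, it could be swapped for any other MCMC scheme without touching the model-building step.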

Now for my own crass plug: I am currently writing a branch of PyMC that makes gradient information accessible to the sampler, so you can write Langevin samplers. The first version of this feature, and of samplers that use the gradient information, is approaching release. In the future you should be able to build samplers for PyMC that use both first- and second-derivative information (so you can build stochastic Newton MCMC algorithms).
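The idea of using gradients can be sketched for a one-dimensional target (a standard normal here). This is a generic Metropolis-adjusted Langevin step in pure Python, not PyMC's actual implementation:

```python
import math
import random

def mala(logp, grad, x0=0.0, eps=0.5, n_iter=20000, seed=42):
    """Metropolis-adjusted Langevin: drift each proposal along the gradient."""
    rng = random.Random(seed)
    x, trace = x0, []
    for _ in range(n_iter):
        # Langevin proposal: gradient drift plus Gaussian noise
        mean_fwd = x + 0.5 * eps**2 * grad(x)
        prop = mean_fwd + eps * rng.gauss(0, 1)
        mean_rev = prop + 0.5 * eps**2 * grad(prop)
        # Metropolis correction for the asymmetric proposal densities
        log_q_fwd = -((prop - mean_fwd) ** 2) / (2 * eps**2)
        log_q_rev = -((x - mean_rev) ** 2) / (2 * eps**2)
        if math.log(rng.random()) < logp(prop) - logp(x) + log_q_rev - log_q_fwd:
            x = prop
        trace.append(x)
    return trace

# Standard-normal target: logp(x) = -x^2/2, gradient -x
trace = mala(lambda x: -0.5 * x * x, lambda x: -x)
```

The gradient drift pushes proposals toward regions of high density, which is what makes these samplers more efficient than a blind random walk on hard posteriors.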

Jsalvatier: perhaps like you, I suspect most of the challenges involve how to more effectively draw from the posterior.

As for those without an advanced background in statistics – that was the serious part of the motivation for my recent zombie post http://www.stat.columbia.edu/~cook/movabletype/ar…

For simple examples that are computationally feasible, write out a direct sampling method to display the Bayesian calculations. The more people who understand what the _real_ challenges are, the better (for most of us).

Here, one need not draw from the posterior but can sample from the prior and average the likelihoods (the data model) to get an interval for a parameter of interest. I doubt this scheme will scale up to problems with many (non-independent) nuisance parameters, but it will likely be conceptually easier for those without an advanced background in statistics.
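One concrete reading of this scheme, with made-up data (7 successes in 10 trials, a Uniform(0,1) prior on the success probability theta): sample theta from the prior, weight each draw by its likelihood, and read off a weighted mean and interval.

```python
import random

random.seed(3)

# Made-up data: 7 successes in 10 trials
successes, trials = 7, 10

def likelihood(theta):
    # Binomial likelihood up to a constant
    return theta**successes * (1 - theta) ** (trials - successes)

# Sample from the Uniform(0,1) prior, weight each draw by its likelihood
draws = [random.random() for _ in range(200_000)]
weights = [likelihood(t) for t in draws]
total = sum(weights)

# Likelihood-weighted (posterior) mean of theta
post_mean = sum(t * w for t, w in zip(draws, weights)) / total

# A central 95% interval from the weighted draws
cum, lo, hi = 0.0, None, None
for t, w in sorted(zip(draws, weights)):
    cum += w
    if lo is None and cum >= 0.025 * total:
        lo = t
    if hi is None and cum >= 0.975 * total:
        hi = t
```

No Markov chain is involved: every draw is independent, which is what makes the calculation easy to explain, and also what limits it once there are many dependent nuisance parameters.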

But separating model specification from the implementation of the desired calculations will likely make things better for almost all of us, regardless of the MC or MCMC schemes we have to somehow grasp, or accept as harmless black boxes.

K!

What think ye about HBC?

http://www.cs.utah.edu/~hal/HBC/

Again, I am someone who has no background in statistics. This may explain some of the odd feel of BUGS. GLIM, macros, and other common statistical ideas were unknown concepts (to me) when I first started thinking about the BUGS language. The language was meant to describe general graphical models in a regular way, not to be particularly compact for simple models.

The BUGS language is declarative; the BUGS software has to extract meaning from it. For example, the software does not know that a particular model is a random-effects model. What it does know, in this case, is that the nodes in the graphical model have various topological depths. This type of information should be usable in guiding the choice of sampling algorithms.
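The topological depth mentioned here can be computed directly from the parent links of the graph. A minimal sketch, using a hypothetical random-effects graph in the spirit of the earlier BUGS snippet:

```python
from functools import lru_cache

# Parent links of a small hierarchical model (hypothetical):
# hyperparameters -> group effects -> observations
parents = {
    "mu.alpha": [],
    "tau.alpha": [],
    "alpha[1]": ["mu.alpha", "tau.alpha"],
    "alpha[2]": ["mu.alpha", "tau.alpha"],
    "y[1]": ["alpha[1]"],
    "y[2]": ["alpha[2]"],
}

@lru_cache(maxsize=None)
def depth(node):
    """Topological depth: founder nodes are 0, children one deeper than parents."""
    ps = parents[node]
    return 0 if not ps else 1 + max(depth(p) for p in ps)

depths = {n: depth(n) for n in parents}
```

A back end could use exactly this kind of summary: nodes at depth 0 are hyperparameters, the middle layer holds the group effects, and the deepest nodes are the data, without the software ever knowing the model is a "random-effects model".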

The BUGS software has always been highly modular. The OpenBUGS software takes this to extremes: the source code of each module is available online. Any user of the OpenBUGS software can write new sampling algorithms and "connect" them to the BUGS engine. A harder problem than actually implementing an MCMC sampler is deciding when it should be used (and when it is valid). Its use can be forced for a particular model that the user has knowledge of, but in general this is a hard problem. For block-updating algorithms, which nodes to put in the block can be a difficult and delicate question.

The graphical model that OpenBUGS constructs can now be stored in a file, so this file is a potential interface for anyone who wants to develop their own sampling technology. It is also a way of spreading different chains of an MCMC run over a cluster of computers.

But this is all work, and the BUGS team is at most a few part-time workers, so things will not happen quickly. Over the next 20 years???

Regards

Andrew

Jsalvatier: I wasn't familiar with PyMC, will have to look into it.

T: We've been working with Hal Daume to adapt HBC to the sorts of models that we fit. We'll have to see if the payoff is worth the programming effort.

Andrew: As you say, the general problem of solving a graphical model is more difficult than the problem of implementing a sampler given direction from the user. What I'd like is for Bugs/Jags/Hbc etc to be open enough that the user (for example, me) can get inside and direct the sampling, a bit. That said, the complete generality of Bugs is just wonderful for a large class of fairly complicated models on small datasets. I've successfully used Bugs in my own research several times.

I believe PyMC adopted some syntax from your paper on Fully Bayesian Computing. PyMC is also nice in that it is heavily vectorized (i.e. a model may look like Y = X + m * B ** 2 where X and B are multidimensional vectors of random variables), so the models you write generally do not involve loops and are more concise than in other modeling languages.