Somebody asks:

I’m reading your paper on path sampling. It essentially solves the problem of computing the ratio \int q0(\omega) d\omega / \int q1(\omega) d\omega; i.e., the arguments of q0() and q1() are the same. But this assumption does not always hold in Bayesian model selection using Bayes factors. In general (for the Bayes factor), we have the following problem, where t1 and t2 may have no relation at all:

\int f1(y|t1)p1(t1) d t1 / \int f2(y|t2)p2(t2) d t2

As an example, suppose we want to compare two sets of normally distributed data with known variance, asking whether they have the same mean (H0) or not necessarily the same mean (H1). Then the integration variable should be mu under H0 (the common mean of both sets of samples) and (mu1, mu2) under H1 (the separate means of each set of samples).

One straightforward way to address my problem is to perform path sampling for the numerator and the denominator separately, as both are integrals. Each integral can be rewritten as a ratio of integrals of the following form, where the parameter \theta is the same in the numerator and the denominator. So the problem is solved. Is that the case?

\int f(y|\theta) p(\theta) d\theta = \frac{\int f(y|\theta) p(\theta) d\theta}{\int p(\theta) d\theta}
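A quick numerical sanity check (my own toy illustration, not part of the original question): when the prior p(\theta) is proper, \int p(\theta) d\theta = 1, so the ratio form above is just the marginal likelihood itself. In a conjugate normal example the marginal has a closed form, so quadrature can confirm this.

```python
import numpy as np

# Toy conjugate-normal illustration (my own, not from the question):
# one observation y ~ N(theta, sigma^2), proper prior theta ~ N(0, tau^2).
# Because the prior integrates to 1, the ratio form equals the marginal
# likelihood, which here has the closed form N(y; 0, sigma^2 + tau^2).
def npdf(x, mu, sd):
    return np.exp(-((x - mu) ** 2) / (2 * sd**2)) / (sd * np.sqrt(2 * np.pi))

y, sigma, tau = 1.3, 1.0, 2.0
theta = np.linspace(-20.0, 20.0, 20001)                  # quadrature grid
integrand = npdf(y, theta, sigma) * npdf(theta, 0.0, tau)

# Trapezoid rule for \int f(y|theta) p(theta) d theta
m_quad = np.sum((integrand[1:] + integrand[:-1]) / 2 * np.diff(theta))
m_exact = npdf(y, 0.0, np.sqrt(sigma**2 + tau**2))

print(m_quad, m_exact)
```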

My reply:

In your example with the normal distribution, I would prefer to estimate the parameter using an informative prior distribution, rather than trying to distinguish between theta=0 at one extreme and a noninformative prior distribution at the other. If you really want to compute these marginal probabilities, the only general way I know is to compute them separately, for example using path sampling to compute each one relative to some tractable reference distribution of the same dimension.
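To make the path-sampling recipe concrete, here is a toy sketch (my own construction, with made-up densities q0 and q1, both functions of the same argument x): estimate log(Z1/Z0) for two unnormalized Gaussians by averaging the derivative of the log density along a geometric path from q0 to q1, then integrating over the path parameter.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two unnormalized densities with the SAME argument x (toy example):
#   q0(x) = exp(-x^2/2)  ->  Z0 = sqrt(2*pi)
#   q1(x) = exp(-x^2/8)  ->  Z1 = sqrt(8*pi),  so log(Z1/Z0) = log 2.
# Geometric path: q_t(x) = q0(x)^(1-t) * q1(x)^t, which stays Gaussian
# with precision prec(t) = (1-t)*1 + t*(1/4).
def prec(t):
    return (1 - t) * 1.0 + t * 0.25

# Path-sampling identity: log(Z1/Z0) = int_0^1 E_{p_t}[ d/dt log q_t(x) ] dt,
# and here d/dt log q_t(x) = log q1(x) - log q0(x) = (3/8) * x^2.
ts = np.linspace(0.0, 1.0, 51)
means = []
for t in ts:
    x = rng.normal(0.0, 1.0 / np.sqrt(prec(t)), size=20000)  # exact draws from p_t
    means.append(np.mean(0.375 * x**2))

# Trapezoid rule over the path parameter t
means = np.array(means)
log_ratio = np.sum((means[1:] + means[:-1]) / 2 * np.diff(ts))

print(log_ratio, np.log(2))   # estimate vs. truth
```

In a real problem you would of course need MCMC draws from each intermediate p_t rather than exact Gaussian samples; the identity and the quadrature over t are the same.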

Reversible jump MCMC should also work here. My memory is a bit hazy, but I believe you can use the proportion of time the RJMCMC chain spends in a given model to estimate the ratio of marginals.
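Here is a minimal reversible-jump sketch for a toy version of the question's example (my own construction, not a general-purpose implementation): H0 fixes theta = 0, H1 gives theta a N(0, tau^2) prior, and the data are n normal observations with known sigma. With equal prior model probabilities, the ratio of time spent in H0 versus H1 estimates the Bayes factor BF01; since both marginals are analytic here, the answer can be checked.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy setup: y_1..y_n ~ N(theta, sigma^2) with sigma known.
# H0: theta = 0.   H1: theta ~ N(0, tau^2).   Equal prior model probabilities.
n, sigma, tau, ybar = 10, 1.0, 1.0, 0.3        # ybar is the sufficient statistic

b = n * ybar / sigma**2                        # conditional-posterior quantities under H1
A = n / sigma**2 + 1 / tau**2                  # theta | y ~ N(b/A, 1/A)

def loglik_ratio(theta):
    """log f(y|theta) - log f(y|0); depends on y only through ybar."""
    return (n * ybar * theta - n * theta**2 / 2) / sigma**2

k, theta = 0, 0.0                              # start in H0
count0 = 0
iters = 200000
for _ in range(iters):
    if k == 1:                                 # within-model Gibbs update of theta
        theta = rng.normal(b / A, 1 / np.sqrt(A))
    if k == 0:
        # birth move: propose theta' from its prior; the prior density cancels
        # the proposal density, so accept w.p. min(1, f(y|theta')/f(y|0))
        prop = rng.normal(0.0, tau)
        if np.log(rng.random()) < loglik_ratio(prop):
            k, theta = 1, prop
    else:
        # death move: drop theta; accept w.p. min(1, f(y|0)/f(y|theta))
        if np.log(rng.random()) < -loglik_ratio(theta):
            k, theta = 0, 0.0
    count0 += (k == 0)

bf01_rj = count0 / (iters - count0)            # time in H0 / time in H1

# Analytic check: BF01 = sqrt(tau^2 * A) * exp(-b^2 / (2A))
bf01_exact = np.sqrt(tau**2 * A) * np.exp(-b**2 / (2 * A))
print(bf01_rj, bf01_exact)
```

The proposal here is deliberately simple (draw theta from its prior when jumping into H1), which works because the models are nested; for less favorable geometries the between-model proposal needs more care.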