Johannes Castner writes:
Suppose there are k scientists, each with her own model (a Bayesian network) over the same m random variables. Because the space of Bayesian networks over these m variables, with the square root of the Jensen-Shannon divergence as a distance metric, is closed and bounded, there exists a unique Bayes net, a mixture of the k models' joint distributions, that is at equal distance from each of the k models; it may be called a "consensus graph." This consensus graph is itself a Bayes net, and so can be updated with evidence. The first question is: under what conditions, given a new piece of evidence, is the updated consensus graph exactly the same graph as the consensus graph of the k updated Bayes nets? In other words, if we build a synthetic model from the k models and then update this synthetic model, under what conditions is that the same as if we had first updated all k models and then built the synthesis? The second question is: if the two are not the same, which of them is better, and under what conditions, from the perspective of collective learning?
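For the first question, a small numeric experiment suggests the two orders of operations generally disagree: conditioning a mixture on evidence reweights the components in proportion to the probability each one assigns to that evidence, whereas updating the components first and then mixing with the original weights does not. The sketch below is my own toy illustration, not anything from the question itself: it uses an equal-weight mixture of two raw joint probability tables (ignoring graph structure entirely) over binary variables X and Y, and conditions on the evidence X = 1 both ways.

```python
import numpy as np

# Two toy joint distributions P(X, Y) over binary X, Y,
# flattened in the state order (0,0), (0,1), (1,0), (1,1).
p1 = np.array([0.5, 0.2, 0.2, 0.1])      # model 1: P1(X=1) = 0.3
p2 = np.array([0.25, 0.25, 0.25, 0.25])  # model 2: uniform, P2(X=1) = 0.5

def condition_on_x1(p):
    """Condition a flattened joint on the evidence X = 1 (indices 2 and 3)."""
    post = np.zeros_like(p)
    post[2:] = p[2:]
    return post / post.sum()

# Route A: build the equal-weight "consensus" mixture first, then update it.
route_a = condition_on_x1(0.5 * (p1 + p2))

# Route B: update each model first, then mix the posteriors equally.
route_b = 0.5 * (condition_on_x1(p1) + condition_on_x1(p2))

print(route_a)  # roughly [0, 0, 0.5625, 0.4375]
print(route_b)  # roughly [0, 0, 0.5833, 0.4167]
```

Route A is Bayesian model averaging: after conditioning, component i effectively carries weight proportional to w_i * P_i(evidence). So, at the level of distributions (setting aside graph structure), a sufficient condition for the two routes to coincide is that every model assigns the same probability to the evidence; here P1(X=1) = 0.3 differs from P2(X=1) = 0.5, and the posteriors differ.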
Does anyone have any thoughts on this? It all seems related to various topics of interest to me (see, for example, this presentation from 2003), but I don't really know anything about the specific question he is asking.