Forest Gregg writes:
I want to incorporate a prior belief into the estimation of a logistic regression classifier of points distributed in a 2d space. My prior belief is a funny kind of prior, though: it’s a belief about where the decision boundary between classes should fall. Over the 2d space, I lay a grid, and I believe that a decision boundary separating any two classes should fall along one of the grid lines with some probability, and should fall anywhere other than a gridline with a much lower probability.
For the two class case, and a logistic regression model parameterized by W and data X, my prior could perhaps be expressed
Pr(W) = (normalizing constant)/exp(d), where d = f(grid, W, X) is constructed so that d is large when logistic(W^T X) = 0.5 and X is ‘far’ from grid lines. Have you ever seen a model like this, or do you have any notions about a good avenue to pursue?
My real data consist of geocoded Craigslist postings labeled with the neighborhood claimed by the poster, and I have a strong belief that the decision boundaries separating neighborhoods (as classes) should fall along streets, railroad embankments, parks, and the river.
My reply: This reminds me of some models in spatial statistics such as conditional autoregressions, where you can specify a reasonable prior distribution over the space of possible parameter values, but the prior doesn’t have any simple normalized form. Essentially, you’re setting up a prior by penalizing certain configurations. If you denote a configuration by theta and set up a penalty function g(theta), then your prior is p(theta) proportional to exp(-beta*g(theta)), where beta is a hyperparameter (to which you can assign a hyperprior) controlling how strongly the penalty is applied. This is all fine, but computation can be difficult.
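To make the penalty idea concrete, here is one minimal sketch in Python of a MAP estimate under such a prior. Everything specific here is an assumption for illustration: the toy data, the choice of gridlines at integer coordinates, the particular penalty g(W) (a weighted average distance to the nearest gridline, weighting points by how close they sit to the decision boundary), and the fixed beta. The normalizing constant of the prior is never needed for MAP estimation, which sidesteps the computational issue for this simple case.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Toy 2-d data with binary labels (hypothetical, for illustration only).
X = rng.normal(size=(200, 2))
true_w = np.array([1.0, -1.0, 0.2])  # [w1, w2, intercept]
p = 1.0 / (1.0 + np.exp(-(X @ true_w[:2] + true_w[2])))
y = rng.binomial(1, p)

GRID = 1.0  # assumed gridline spacing: lines at integer multiples of GRID

def grid_penalty(w, X):
    """g(W): weighted mean distance to the nearest gridline, where points
    near the decision boundary (logistic(W.x) close to 0.5, i.e. the
    linear predictor close to 0) get the most weight."""
    logits = X @ w[:2] + w[2]
    weights = np.exp(-logits**2)  # ~1 on the boundary, ~0 far away
    # Distance from each point to its nearest horizontal/vertical gridline.
    frac = np.abs(X / GRID - np.round(X / GRID))
    dist = frac.min(axis=1) * GRID
    return np.sum(weights * dist) / np.sum(weights)

def neg_log_posterior(w, X, y, beta=5.0):
    logits = X @ w[:2] + w[2]
    # Numerically stable Bernoulli log-likelihood for logistic regression.
    loglik = np.sum(y * logits - np.logaddexp(0.0, logits))
    # log prior = -beta * g(W), up to the (irrelevant) normalizing constant.
    return -(loglik - beta * grid_penalty(w, X))

res = minimize(neg_log_posterior, x0=np.zeros(3), args=(X, y))
w_map = res.x
```

For a fully Bayesian treatment one would instead sample from p(W | X, y) with, say, a Metropolis sampler, which likewise only needs the unnormalized prior; the hard part the reply alludes to arises when beta itself must be estimated, since the prior's normalizing constant then depends on beta.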