```
parameters {
  real u;
  real n;
}

transformed parameters {
  real par;
  par = u + n;
}
```

…

```
u ~ uniform(a, b);
n ~ normal(0, scale);
```

So the density at a given point x is

integral(pn(s, x, scale) * 1/(b-a), s, a, b)

where pn(s, x, scale) is the normal density centered at x with the given scale, evaluated at the point s.

That is probably how you get your Φ-based representation.

I think the useful fundamental way to think about this class of priors is in terms of the convolution, since you can construct any kind of “smoothed Foo” for all kinds of reasonable base distributions “Foo” and the smoothing kernel can be normal, or something less smooth, or have finite tails, or whatever you want.
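As a numerical sanity check of that convolution (a Python sketch using only the standard library; the helper names `density_numeric` and `density_closed` are mine, not from the thread), the brute-force integral over the uniform base agrees with the closed-form Φ difference:

```python
import math

def phi(t):
    """Standard normal pdf."""
    return math.exp(-0.5 * t * t) / math.sqrt(2 * math.pi)

def Phi(t):
    """Standard normal cdf."""
    return 0.5 * (1 + math.erf(t / math.sqrt(2)))

def density_numeric(x, a, b, scale, steps=100_000):
    """Convolution density at x: integral over s in [a, b] of
    N(x; s, scale) * 1/(b-a), computed by the midpoint rule."""
    h = (b - a) / steps
    total = 0.0
    for i in range(steps):
        s = a + (i + 0.5) * h
        total += phi((x - s) / scale) / scale
    return total * h / (b - a)

def density_closed(x, a, b, scale):
    """Same density in closed form:
    [Phi((x-a)/scale) - Phi((x-b)/scale)] / (b - a)."""
    return (Phi((x - a) / scale) - Phi((x - b) / scale)) / (b - a)

a, b, scale = -20.0, 20.0, 10.0
for x in (-30.0, 0.0, 15.0, 45.0):
    assert abs(density_numeric(x, a, b, scale) - density_closed(x, a, b, scale)) < 1e-8
```

With a = -20, b = 20, scale = 10 the closed form reduces (up to the 1/(b-a) constant) to the Φ expression quoted below.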

prior density for par ∝ Φ[(par + 20)/10] – Φ[(par – 20)/10]

```
n ~ normal(0, 10);
par = u + n;
```

This is

p(par) ∝ Φ[(par + 20)/10] – Φ[(par – 20)/10]

in which Φ[⋅] is the standard normal CDF.

Shira and Mariel follow up:

We share your notion of conservative. How should this be balanced with prior information we have from a previous (very related) study? Do we center at 0 or the previous study’s estimate? Somewhere in-between? If centered at a non-zero value, then a strong prior is no longer your version of conservative, correct?

My response:

In this context, yes, I feel that centering the prior at a positive value rather than 0 would not be conservative. But I suppose it depends on the context.


the way i like to think about umbrella ‘shrinkage’ is: shrinkage to some value c.

* if c is 0, then you get lasso type of shrinkage

* if c is basically the mle, then you have ‘no shrinkage’

with bayes + multilevel models, you have the flexibility to structure c.

in turn, what is ‘conservative’ or ‘liberal’ will better match your context
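To make the target c concrete (a hypothetical sketch, not from the thread): in a conjugate normal model the posterior mean is a precision-weighted average of the MLE and the prior center c, so c = 0 shrinks toward zero while c near the MLE gives essentially no shrinkage.

```python
def posterior_mean(ybar, n, sigma, c, tau):
    """Normal likelihood (known sd sigma) with prior N(c, tau):
    the posterior mean is a precision-weighted average of the
    sample mean ybar (the MLE) and the prior center c."""
    prec_data = n / sigma**2
    prec_prior = 1 / tau**2
    return (prec_data * ybar + prec_prior * c) / (prec_data + prec_prior)

ybar, n, sigma = 5.0, 10, 2.0

# strong prior centered at c = 0: the estimate shrinks toward 0
assert posterior_mean(ybar, n, sigma, c=0.0, tau=0.5) < ybar

# prior centered at the MLE: essentially no shrinkage
assert abs(posterior_mean(ybar, n, sigma, c=ybar, tau=0.5) - ybar) < 1e-12

# weak prior: the estimate stays close to the MLE regardless of c
assert abs(posterior_mean(ybar, n, sigma, c=0.0, tau=100.0) - ybar) < 0.01
```

(The thread's "lasso type" shrinkage uses a double-exponential prior rather than this normal one; the role of c as the shrinkage target is the same.)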

```
u ~ uniform(a, b);
n ~ normal(0, scale);
par = u + n;
```

and use par as my parameter of interest. The point is to generate a broad plateau of reasonable values between a and b while still giving support to the whole real line outside that range, with a nice parabolic tail (on the log scale), as in a normal distribution.

So, for example, if you expect something near zero, and not much more than about 20 in absolute value, you could do

```
u ~ uniform(-20, 20);
n ~ normal(0, 10);
par = u + n;
```

and now the main prior probability mass of par is between -20 and 20, with a normal tail on either side extending out another few multiples of 10; about 95% of the mass falls within roughly -30 to 30, and over 99% within -40 to 40.

You could get something similar from normal(0,30) or the like, but the soft uniform gives you a little stronger tail and a little more uniform plateau in the high probability region.
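Those coverage numbers are easy to sanity-check by Monte Carlo (a Python sketch, not from the thread):

```python
import random

random.seed(1)

# soft uniform: u ~ uniform(-20, 20), n ~ normal(0, 10), par = u + n
draws = [random.uniform(-20, 20) + random.gauss(0, 10) for _ in range(200_000)]

within_20 = sum(abs(d) <= 20 for d in draws) / len(draws)
within_30 = sum(abs(d) <= 30 for d in draws) / len(draws)
within_40 = sum(abs(d) <= 40 for d in draws) / len(draws)

# roughly 80% of the mass sits on or near the [-20, 20] plateau,
# about 95% within +/-30, and over 99% within +/-40
print(within_20, within_30, within_40)
```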
