Non-Bayesian analysis of Bayesian agents?

Econometrician and statistician Dale Poirier writes:

24 years ago (1988, Journal of Economic Perspectives) I [Poirier] noted cognitive dissonance among some economists who treat the agents in their theoretical framework as Bayesians, but then analyze the data (even in the same paper!) as frequentists. Recently, I have found similar cases in cognitive science. I suspect other disciplines exhibit such behavior. Do you know of any examples in political science?

My reply:

I don’t know of any such examples in political science. Game theoretic models are popular in poli sci, but I haven’t seen much in the way of models of Bayesian decision making.

Here are two references (not in political science) that might be helpful.

1. I have argued that the utility model (popular in economics and political science as a way of providing “microfoundations” for analyses of aggregate behavior) is really a bit of folk psychology that should not be taken seriously. To me, it is silly that many economists and political scientists give this model such prominence. Utility theory can be a helpful normative model in many situations, but I don’t think it should be anything close to foundational.

2. Are you familiar with the work of Josh Tenenbaum? He is a cognitive scientist at MIT who has been working on Bayesian models for human reasoning and also Bayesian methods for fitting such models given data from psychological experiments. See here and here.

Getting back to Poirier’s original point, it could make complete sense to me to use non-Bayesian inference to learn about Bayesian agents, as long as you believe that (a) people’s behavior can be reasonably approximated by Bayesian decision rules, and (b) from a normative standpoint, non-Bayesian inference is to be preferred. It seems that many economists believe both (a) and (b), so I don’t necessarily see any cognitive dissonance in using non-Bayesian statistical inference while modeling behavior as Bayesian.

The funny thing is, I believe neither (a) nor (b), so my preference would be to use Bayesian inference for non-Bayesian models of behavior.

12 thoughts on “Non-Bayesian analysis of Bayesian agents?”

  1. I think the answer below from the late Clive Granger to the question raised by Peter Phillips “… do you see some advantages in the Bayesian paradigms … ?” is relevant to this post:

    “… I am not a Bayesian because I lack self-confidence […]”

    “… a good Bayesian, that is, a Bayesian who picks a prior that has some value to it, is better than a non-Bayesian. And a bad Bayesian who has a prior that is wrong is worse than a non-Bayesian, and I have seen examples of both. What I do not know is how do I know which is which before we evaluate the outcome.”

    Econometric Theory, 13, 1997, pp. 253–303. http://korora.econ.yale.edu/phillips/pubs/art/i006.pdf

  2. I think this is perfectly valid. Analysts may disagree about prior information. And it is sometimes (always?) unclear whether the analysis would differ if different priors were used. Whether the individuals being analyzed are themselves Bayesian is a completely different matter, requiring a logically distinct assessment of whether their behavior can be approximated by such an assumption.

    On another note: I am increasingly annoyed by your unprofessional and silly attack on economists that you keep repeating. Maybe your points (1) and (2) above are, despite their juxtaposition, not meant to be related. But the reference to Tenenbaum and links to his work on how people think about the relationship between different species is a great example of why your argument fails. Tenenbaum is asking a question much, much different from the questions economists ask. He is interested in the actual process of cognitive reasoning. To me it is not even clear what the ultimate goal of his analysis is (i.e., in Popperian terms, what “novel predictions” his theory has). But economists ask much less narrow questions. They want to know how an oil embargo affects individuals’ decisions about where to live, how an increase in concentration in an industry raises the price of a product and how consumers respond, how the fundamentals of the economy influence fertility decisions, etc. To answer these questions we cannot run experiments in a lab, and we cannot start from a (maybe) more accurate but (certainly) more complex model of cognitive reasoning. To the extent that a more accurate approximation of consumer behavior exists that produces a model that is not substantially more complex, we would welcome it. And this is the role that behavioral economics has filled for decades.
    Maybe I can interpret your criticism as an analogy: economists are more like engineers than scientists. (This is where you’ll have to excuse my ignorance a little bit…) Engineers who must make predictions about how much weight a bridge can hold or where a projectile will land or whatever don’t need to model the humidity, temperature, and wind speed (at every possible point on the bridge) and the exact distribution of traffic on the bridge (over time and space), or the weight, density, and shape of an object and the air pressure (and the distribution of air pressure) and wind speed (at every point in space), in order to make predictions that are accurate enough for us to rely on daily. They make simplifying assumptions. Is it “folk physics” to assume that the space between cars on a bridge is always the same, or that the weight distribution of a projectile is uniform?

    At any rate, if you keep bringing this up all the time, isn’t it your responsibility to defend it a little better?

    • Ben:

      I don’t see what’s unprofessional about stating what I believe. Regarding your examples of the oil embargo etc., I completely agree with you that models are useful; it would be tough to answer such questions from data alone. The “folk physics” comes along when people seem to think that there is such a thing as a utility function for money, or that such a function could be a good model for the well-known general preference for $29 over a 50-50 chance of $20 [typo fixed] or $40 (a numerical sketch of this point appears after the exchange below). You can follow my links for much discussion of these points.

      • Is there really a well-known preference for $29 over a 50-50 chance of $30 vs. $40? I mean, the 50-50 chance comes out a minimum of a dollar ahead every time! Perhaps you misquoted the numbers? Or do people really have that bad a documented dislike for gambles that they usually prefer to pay at least $1 and on average $5 to avoid having to even think about them?
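A minimal numerical sketch of the expected-utility arithmetic in the exchange above, assuming (purely for illustration) an agent who maximizes expected log utility of total wealth; the wealth levels are made up and nothing here is claimed in the original thread:

```python
import math

def certainty_equivalent_log(wealth, outcomes, probs):
    """Certainty equivalent of a gamble for an expected-log-utility agent
    with baseline wealth `wealth` (an illustrative assumption)."""
    eu = sum(p * math.log(wealth + x) for p, x in zip(probs, outcomes))
    return math.exp(eu) - wealth  # invert u(w) = log(w)

# The 50-50 gamble paying $20 or $40 (expected value $30)
for w in (100, 1_000, 10_000):
    ce = certainty_equivalent_log(w, outcomes=(20, 40), probs=(0.5, 0.5))
    print(f"baseline wealth ${w}: certainty equivalent of the gamble is about ${ce:.2f}")
```

For any moderate baseline wealth the certainty equivalent stays close to the $30 expected value (within about 40 cents even at a baseline wealth of $100), so a strict preference for a sure $29 is hard to rationalize with any plausible curvature of a utility-of-money function; that is the kind of point the “folk physics” remark is getting at.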

  3. As one who has been both guilty and critical of this behaviour, I thought I might offer something of an apology that doesn’t require acknowledging a collective ‘cognitive dissonance’ among Bayesian behaviour scientists. In my opinion, there are two reasons this happens. The first is that the Bayesian cognitive scientist is often placed in the position of comparing a Bayesian model of a particular behaviour to a non-Bayesian one. This makes things like Bayesian model comparison somewhat difficult. Indeed, many authors don’t even bother with a model comparison at all. Instead they simply wave their hands while stating that a non-Bayesian model can’t do qualitative behaviour X very well (X = track uncertainty, for example), while both my Bayesian model and people can. Quantitative model comparison is then left to the reader. Regardless, once you’re at the point where it’s not clear how to do proper model comparison, all that is left is showing that a Bayesian model can replicate the behaviour, and standard frequentist statistical techniques are easily rationalized.

    The second reason is that stubborn Reviewer #2. The above issue can be circumvented by taking the approach we originally took here (http://www.nature.com/neuro/journal/v14/n6/full/nn.2814.html), where we reformulated the non-Bayesian models as impoverished Bayesian models, i.e., Bayesian models that make incorrect assumptions regarding the structure of the task or how certain parameters are tied, etc. This allows for proper model comparison and predictive checks and such. Of course, when you take this approach, Reviewer 2 invariably says that it is unclear how to interpret Bayes factors and 95% confidence bounds on predicted behavioural statistics. He then insists on ‘an objective goodness-of-fit test’ like mean squared error on ROC fits or standard error on the AOC. You can argue (perhaps more effectively than we did) that MSE makes no sense in this domain and so on, but at the end of the day you report the required statistics and console yourself with the knowledge that at least you integrated out the uncertainty in your parameter estimates when generating your predictions for each subject. It’s a very frustrating situation.
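For what it’s worth, here is a toy sketch (my own construction, not the models from the linked paper) of the kind of Bayes-factor comparison described above: a “full” model with a free parameter and a prior versus an “impoverished” model that pins that parameter at a fixed, possibly wrong value. The data are made up.

```python
import numpy as np
from scipy.special import betaln

# Hypothetical binary responses from one subject (1 = "correct"); purely illustrative
responses = np.array([1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 1])
k, n = int(responses.sum()), responses.size

# "Full" model: success probability p unknown, Beta(a, b) prior; p is integrated out,
# giving the log marginal likelihood in closed form.
a, b = 1.0, 1.0
log_m_full = betaln(a + k, b + n - k) - betaln(a, b)

# "Impoverished" model: the same likelihood, but with p pinned at 0.5
log_m_impoverished = n * np.log(0.5)

log_bayes_factor = log_m_full - log_m_impoverished
print(f"log Bayes factor (full vs. impoverished): {log_bayes_factor:.2f}")
```

Casting the alternative as a constrained special case of the Bayesian model puts both marginal likelihoods on the same footing, which is what makes Bayes factors and posterior predictive checks straightforward to compute.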

  4. Kahneman and Tversky and others, in well-known early work on whether people reasoned “rationally,” tended to assume rational reasoning (involving probabilities) would be Bayesian, and then they’d interpret the results using frequentist significance tests. However, to be fair, they generally set things up to be ordinary probability problems, or tried to. Yet there was a lot of confusion between “probability” and “likelihood” because of equivocations in English (e.g., a conjunction may have smaller probability than its conjuncts, but higher explanatory power).

    Anyway, what is meant by “use Bayesian inference for non-Bayesian models of behavior”? Given your disinclination to view Bayesian inference as the usual probabilistic updating, the question is whether your preference to “use Bayesian inference” here—on the metalevel, as it were—reverts to Bayesian updating. Or is it also a kind of Bayesian falsification (or however you wish to describe it)?

    • Thanks for the reference; I appreciate this sort of input and feedback.

      The concepts of utility, and more specifically of declining marginal utility, are useful and I use them all the time. I have no problem with someone doing psychology research and checking various intuitive principles such as that of declining marginal utility. That’s great. I just don’t think utility theory is fundamental or foundational in any sense. It’s an old, simple model that is often useful, prescriptively and sometimes even descriptively. It’s not the foundation of social reality or of decision making.

      • I recommend reading _Fortune’s Formula: The Untold Story of the Scientific Betting System That Beat the Casinos and Wall Street_ by William Poundstone (2005). It’s the very entertaining story of the Kelly criterion, which derives from logarithmic utility. Those who use the Kelly criterion achieve long-term success, while those who don’t blow things up financially.
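The Kelly criterion mentioned in the comment above falls directly out of maximizing expected log wealth per bet; here is a minimal sketch with made-up win probability and odds:

```python
import numpy as np

def expected_log_growth(f, p, b):
    """Expected log growth per bet when staking a fraction f of wealth on a bet
    won with probability p at net odds b (win b*f, lose f)."""
    return p * np.log(1.0 + b * f) + (1.0 - p) * np.log(1.0 - f)

p, b = 0.55, 1.0  # illustrative: 55% win probability at even-money odds
fractions = np.linspace(0.0, 0.99, 1000)
f_numeric = fractions[np.argmax(expected_log_growth(fractions, p, b))]
f_kelly = p - (1.0 - p) / b  # closed-form Kelly fraction

print(f"numerical argmax: {f_numeric:.3f}   Kelly formula: {f_kelly:.3f}")
```

In this example the growth-maximizing stake is about 10% of wealth; betting persistently above that lowers the long-run growth rate, and beyond roughly twice the Kelly fraction the expected growth turns negative, which is the sort of blow-up the comment alludes to.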

  5. Pingback: Over 200 Years Of Experience « daniel gillis
