Bayesian Truth Serum

What’s the deal with Drazen Prelec’s “Bayesian Truth Serum”? There’s something about this catchy name that makes me suspicious, but he and his colleagues claim to have results:

The Bayesian ‘truth serum’ (BTS) is a scoring method that provides truthtelling incentives for respondents answering multiple-choice questions about intrinsically private matters (opinions, tastes, past behavior). The method requires respondents to supply not only personal answers but also percentage estimates of how other respondents will answer that same question. The scoring formula then assigns high scores to answers whose actual frequency is greater than their predicted frequency. It is a theorem that truthful answers maximize expected score, under fairly general conditions. We describe four experimental tests of the truthtelling property of BTS. In surveys with varied content (personality, humor, purchase intent) we assess whether some identifiable category of respondents would have attained a higher score had they engaged in a systematic deception strategy, i.e., given a non-truthful answer according to some algorithm. Specifically, we test whether respondents would have achieved a higher score had they replaced their actual answer with the answer that they believe will prove the most popular (or least popular); whether they would have done better by misreporting their demographic characteristics (gender); and whether they would have done better by simulating the answers of some other person “that they know well.” We find that all types of deception are associated with substantial losses in score, for the majority of respondents. Hence, BTS can function as a truth-inducing scoring key even in settings where only the respondent knows the actual truth.

I like the idea of trying to harness the “wisdom of crowds” in an anticipatory way. I really should look into this and try to understand exactly what’s going on.
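If I’m reading it right (the scoring rule is spelled out in Prelec’s 2004 Science paper), each respondent gets an “information score” that is positive when their own answer turns out to be more common than the crowd’s geometric-mean prediction of it, plus a prediction score that penalizes forecasts that miss the empirical answer distribution. Here is a minimal numpy sketch of that reading; the function name, the toy data, and the weight alpha = 1 are my own choices, not anything from the paper:

```python
import numpy as np

def bts_scores(answers, predictions, alpha=1.0, eps=1e-9):
    """Sketch of a BTS-style score: information score + alpha * prediction score.

    answers     : (n,) chosen option index per respondent, in {0, ..., m-1}
    predictions : (n, m) each respondent's predicted answer frequencies (rows sum to 1)
    """
    answers = np.asarray(answers)
    predictions = np.clip(np.asarray(predictions, dtype=float), eps, 1.0)
    n, m = predictions.shape

    # Empirical frequency of each option.
    x_bar = np.clip(np.bincount(answers, minlength=m) / n, eps, 1.0)

    # Log of the geometric mean of everyone's predicted frequencies.
    log_y_bar = np.log(predictions).mean(axis=0)

    # Information score: rewards an answer that is "surprisingly common",
    # i.e. its actual frequency beats the geometric-mean prediction.
    info = np.log(x_bar)[answers] - log_y_bar[answers]

    # Prediction score: a KL-divergence-style penalty for predictions that
    # miss the empirical distribution (zero only for a perfect forecast).
    pred = (x_bar * (np.log(predictions) - np.log(x_bar))).sum(axis=1)

    return info + alpha * pred

# Toy usage with made-up data: 3 respondents, 2 options.
print(bts_scores([0, 0, 1], [[0.5, 0.5], [0.4, 0.6], [0.6, 0.4]]))
```

The theorem the abstract refers to is, roughly, that with a large enough pool of respondents, answering and predicting honestly maximizes your expected score.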

The Information Pump

Prelec certainly has a knack for naming things!

4 thoughts on “Bayesian Truth Serum”

  1. BTS claims that it can determine if an answer to a question was truthful or not. To do this, it computes how different the answer is from the predicted answer. The predicted answer can be either the population mean or perhaps a covariate-informed prediction.

    If there is a big difference, BTS claims that the answer was false.

    So, to cheat with BTS, you should just plug in the population mean. And if you're unusual, you will be considered to be a liar.

    While the statistics/mathematics behind this is extremely simple, it's nice that they study this problem – it has many practical applications.

  2. to poster above:
    i don't think you really read or thought about this paper or the news snippet about it. first, your summary of the bts is wrong. this makes it somewhat funny when you say that the math behind the bts is "extremely simple."

    anyway, what i wanted to say is this: you can't "cheat" by plugging in the population mean. to get the most points for your answer you need to have some metaknowledge that allows you to 1. predict what most people are going to say (perhaps because of some misconception they have) and 2. predict what other people who share your metaknowledge are going to say. Then, for your own answer, you go with the latter folks. you do NOT get points for saying the most predicted answer. you get points for saying the "most under-predicted" answer. (think about it before assuming there's a typo; see the small numeric example after the comments)

    your comment illustrates a potential problem with the bts. if you explain it to people before asking the questions, their misunderstandings of the scoring system will lead them to incorrectly believe they've found a way to cheat. (fortunately, your specific misconception wouldn't create any problems if people are accurate at predicting the most common answer). but depending on the misunderstanding, and if enough people share it, it seems possible that it could create problems for the procedure (e.g., if the people who don't know much — i.e., the majority — somehow congregated on an answer that neither they nor the smart guys predicted people would choose). to me, that problem seems a little far-fetched, though. the limitations should be identified by prelec and other people willing to do some math or conduct some studies on it…

    i'm sorry if this comes off rude, but i just find it interesting that you so completely misunderstood the bts and then went along your way thinking you easily identified its limitations

  3. Hi,

    I'm into statistical signal processing stuff (PhD grad). The method looks interesting; I'm trying to understand the Bayesian aspect of it. I mean, how does Prelec categorize this as Bayesian? Is it because he wants the population to be countably infinite (large but finite for practical applications)?

    There's a similar paper named “A truth serum for non-Bayesian …”
    http://faculty.fuqua.duke.edu/seminarscalendar/pr
    Here the method is slightly different, but it follows the same line of thinking. However, the population size doesn't have to be infinite (I think; I only skimmed through it).

    Any clarifications?
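To make the “most under-predicted answer” point in comment 2 concrete, here is a toy calculation with numbers I made up. Under a scoring rule like the sketch above, the popular answer only earns a positive information score if it is also surprisingly popular, i.e. if its actual frequency beats the crowd's geometric-mean prediction of it:

```python
import numpy as np

# Made-up numbers: 70% of respondents actually choose A, but the geometric-mean
# prediction for A is only 50%; B is the mirror image (over-predicted).
x_bar = np.array([0.7, 0.3])   # actual frequencies of options A and B
y_bar = np.array([0.5, 0.5])   # geometric mean of everyone's predictions

# Information score for endorsing each option: log(actual / predicted).
print(np.log(x_bar / y_bar))   # A: about +0.34, B: about -0.51
```

So simply parroting the answer you expect to be most popular (the strategy in comment 1) gains nothing unless that answer is also under-predicted by the crowd.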
