More thoughts on self-experimentation

Susan writes:

I’ve started reading the piece you sent me on Seth. Very interesting stuff. I generally tend to think that one can get useful evidence from a wide variety of sources — as long as one keeps in mind the nature of the limitations (and every data source has some kind of limitation!). Even anecdotes can generate important hypotheses. (Piaget’s observations of his own babies are great examples of real insights obtained from close attention paid to a small number of children over time. Not that I agree with everything he says.) I understand the concerns about single-subject, non-blind, and/or uncontrolled studies, and wouldn’t want to initiate a large-scale intervention on the basis of these data. But from the little bit I’ve read so far, it does sound like Seth’s method might elicit really useful demonstrations, as well as generating hypotheses that are testable with more standard methods. But I also think it matters what type of evidence one is talking about — e.g., one can fairly directly assess one’s own mood or weight or sleep patterns, but one cannot introspect about speed of processing or effects of one’s childhood on present behavior, or other such things.

My thoughts: that’s an interesting distinction between aspects of oneself that can be measured directly and those that are more difficult to measure.

I remember that Dave Krantz once told me that many of the best ideas in the psychology of decision making had come from researchers’ introspection. That sounds plausible to me. Certainly, speculative axioms such as “minimax risk” and similar ideas discussed in the Luce and Raiffa book always seemed to me to be justified by introspection or by demonstrations of the Socratic-dialogue type (such as in Section 5 of this paper, where we demonstrate why you can’t use a curved utility function to explain so-called “risk averse” attitudes).

One of the discussants of Seth’s paper in Behavioral and Brain Sciences compared introspection to self-experimentation. Just as self-experimentation is a cheaper, more flexible, but limited version of controlled experiments on others, introspection is a cheaper etc. version of self-experimentation.

Back to Susan’s comments: she appears to agree with Seth that it’s not a good idea to jump from the self-experiments to the big study. So there should be some intermediate stage . . . pilot-testing with volunteers? How much of this needs to be done before he’s ready for the big study? More generally, this seems to be an important experimental design question not addressed by the usual statistical theory of design of experiments.

3 Comments

  1. Bob O'H says:

    Wouldn't the intermediate stage be equivalent to a phase I clinical trial? There must be some design-of-experiments theory on this, but as I'm not a medical statistician, I can't point you towards the key references.

    David Spiegelhalter's book "Bayesian Approaches to Clinical Trials and Health-Care Evaluation" might help. Amazon are suggesting that you could buy it with another book, called "Bayesian Data Analysis"…

    Bob

  2. deb says:

    Section 5 is very interesting. It is related to Berkeley economist Matthew Rabin’s controversial paper in Econometrica (2000) and an obscure paper he cites by Bengt Hansson (1988) in Gardenfors and Sahlin’s “Decision, Probability and Utility.”

    Hansson contrasts two types of invariance:

    1. A person is always indifferent between a dollar for sure and a 50% chance of three dollars (otherwise nothing): U(x+1)=.5[U(x)+U(x+3)], for all x. This person has the utility function U(x)=c-k^x, where k=[sqrt(5)-1]/2. This is a negative exponential function.

    2. A person is always indifferent between the status quo and a 50% chance of three times the status quo (otherwise nothing): U(x)=.5U(3x)+.5U(0), for all x. Taking U(0)=0, this person has the utility function U(x)=x^k where k=log2/log3. This is a power function.

    You describe a new type:

    3. A person is always indifferent between x for sure and a [55% chance of x+$10, 45% chance of x-$10]: U(x)=.55U(x+10)+.45U(x-10), for all x.

    This is also a negative exponential. You have replicated the Rabin (2000) and Rabin & Thaler (2001) result that this function implies an absurd level of risk aversion.

    There are two observations worth making.

    A. The power utility function is much better descriptively and normatively than the negative exponential utility function.

    B. Within the EU framework, reasonable small-scale risk aversion can imply an absurd level of large-scale risk aversion. It is very strange and counterintuitive that the invariance you describe leads to such a ridiculous prediction (indifferent between $40 and 50% chance of $1 billion) when one assumes risk aversion is solely due to the concavity of the utility function. To me, this is a STRONG argument against the normative status of EU. Sensible small-scale risk aversion + EU = BIZARRE large-scale risk aversion.
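    A quick numerical sketch in Python (using the functional forms and constants from the three invariance types above; the tolerance values and test points are arbitrary choices) confirms that each utility function satisfies its indifference condition, and then computes the large-stakes certainty equivalent implied by type 3:

    ```python
    import math

    # Type 1: U(x+1) = .5[U(x) + U(x+3)]  =>  U(x) = c - k^x with k = [sqrt(5)-1]/2
    k1 = (math.sqrt(5) - 1) / 2
    U1 = lambda x: 1 - k1 ** x  # take c = 1
    for x in (0, 5, 20):
        assert abs(U1(x + 1) - 0.5 * (U1(x) + U1(x + 3))) < 1e-12

    # Type 2: U(x) = .5 U(3x), with U(0) = 0  =>  U(x) = x^k with k = log2/log3
    k2 = math.log(2) / math.log(3)
    U2 = lambda x: x ** k2
    for x in (1, 10, 100):
        assert abs(U2(x) - 0.5 * U2(3 * x)) < 1e-9

    # Type 3: U(x) = .55 U(x+10) + .45 U(x-10).  Substituting U(x) = c - k^x
    # gives .55 k^10 + .45 k^-10 = 1, a quadratic in k^10 whose nontrivial
    # root is k^10 = 9/11.
    k3 = (9 / 11) ** 0.1
    U3 = lambda x: 1 - k3 ** x
    for x in (0, 50, 200):
        assert abs(U3(x) - (0.55 * U3(x + 10) + 0.45 * U3(x - 10))) < 1e-10

    # Large-stakes implication: certainty equivalent of a 50% chance of $1 billion.
    # U(ce) = .5 U(1e9) + .5 U(0); since k3 < 1, U(1e9) is 1 to machine precision
    # and U(0) = 0, so k3^ce = .5.
    ce = math.log(0.5) / math.log(k3)
    print(round(ce, 1))  # a few tens of dollars
    ```

    The last line makes observation B concrete: under the type-3 invariance, the negative-exponential utility function values a 50% chance of $1 billion at only a few tens of dollars, in the same ballpark as the $40 figure quoted above (the exact number depends on how the small gamble is specified).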

    http://debfrisch.com/archives/000121.html

    http://debfrisch.com/archives/000057.html

    http://debfrisch.com/archives/000083.html

    http://debfrisch.com/archives/000130.html

  3. Andrew says:

    Deb,

    Well, since my paper came out in 1998 (it was based on a demo I did in 1993 or so in my decision analysis class) and Matthew's was in 2000, I think it would be fairer to say that he replicated my results! I like the demo because it emphasizes to students the absurdity of using a nonlinear utility function to explain uncertainty aversion. (I avoid the term "risk aversion" because it has too many meanings–sometimes it simply means aversion to risk, other times aversion to uncertainty, other times it refers to a concave utility function.)

    I'm aware of Matthew's paper–when I noticed it a few years ago in one of those compilation books, it made me happy to see my little argument written more formally. And I was happy to see the economists recognizing that nonlinear utility functions don't explain loss aversion. I sent him and Thaler copies of my paper on teaching demos for decision analysis but didn't hear back from them.

    Regarding your criticism of EU, I see what you're saying. If someone is indifferent between x and a [55% chance of x+$10, 45% chance of x-$10], I don't think this fact can be used to learn anything useful about the utility function. The decision problem is just too polluted by the psychological phenomenon of uncertainty aversion. I just don't think that EU is a useful way of describing people's actions in these situations. Although it is a helpful benchmark (so that, for example, one can show that someone is being inconsistent with EU).

    But in the framework of institutional decision analysis, I think that expected monetary value is indeed normative for small dollar amounts.

    P.S. Matthew Rabin and I know each other from high school. Actually, we were in the same economics class. We called him Yitzhak back then.