Transformative experiences: a discussion with L. A. Paul and Paul Bloom

A couple of years ago we discussed philosopher L. A. Paul’s argument that the decision to have a child cannot be made rationally. Paul has since published her ideas as a book, “Transformative Experience,” which she recently discussed online with psychology researcher Paul Bloom. (I’ll refer to the two people involved as L.A. and Bloom, to avoid the ambiguous identifier “Paul.”)

The discussion between L.A. and Bloom was interesting but there were a few places where I disagreed, so I thought I’d share that with you here, along with their reactions to my disagreements, my reactions to their reactions, etc.

1. Bloom wrote:

How can we rationally decide whether or not to undergo certain irreversible changes, such as having a child, or taking LSD, or becoming religious?

My reaction: Is taking LSD “irreversible”? Or, should I say, is it more irreversible than any other decision in life? (Seeing a movie is irreversible too, in that, once you’ve seen it, you can’t un-see it).

Bloom:

Sure, in a strict metaphysical sense every experience is irreversible. You can’t step into the same river twice, etc., and once you’ve seen a bad Adam Sandler movie, you can’t un-see it, even if you have consciously forgotten the experience.

But some experiences are irreversible in a non-trivial sense. Having a baby, being raped, perhaps taking LSD change you _profoundly_, and you can’t ever go back to the person you were before these experiences. I think it was clear from the context that this was the sense I meant.

2. As I wrote earlier, I don’t think it’s quite correct to say that having a child is something we decide. I think it’s more accurate to say that we decide whether to try to have a child, because many people who want children can’t have them, and many other people who don’t want children end up with them anyway. So you can’t quite “choose to have a child.” In statistics jargon, this is all an intent-to-treat analysis. I know I raised that point earlier but I think it’s important enough to bring it up again.
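To make the intent-to-treat framing concrete, here is a toy simulation (all the probabilities are invented for illustration, not estimates of anything): the available “decision” is whether to try, and trying only shifts the probability of the outcome.

```python
import random

random.seed(0)

# Hypothetical numbers, purely for illustration of the ITT point.
N = 10_000
P_CHILD_IF_TRY = 0.85      # some who try cannot have children
P_CHILD_IF_NOT_TRY = 0.05  # some who don't try end up with children anyway

# The actual decision available to each person is whether to *try*.
tries = [random.random() < 0.5 for _ in range(N)]
# The outcome is noisy, whichever way the decision goes.
has_child = [random.random() < (P_CHILD_IF_TRY if t else P_CHILD_IF_NOT_TRY)
             for t in tries]

n_try = sum(tries)
rate_try = sum(c for t, c in zip(tries, has_child) if t) / n_try
rate_no = sum(c for t, c in zip(tries, has_child) if not t) / (N - n_try)
print(f"P(child | tried to)     ≈ {rate_try:.2f}")
print(f"P(child | did not try)  ≈ {rate_no:.2f}")
```

Deciding to try moves the outcome probability a lot, but neither group gets a guarantee, which is exactly why “choose to have a child” overstates the control we have.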

L.A.:

I agree that this is an important issue. When I frame this for philosophers I have tended to set this issue aside because I see it as a complication. Do you see a philosophical issue in there that I need to look at more closely? I can see connections to the fundamental identification problem and to inferences we might want to make based on counterfactual scenarios. I’m tempted to say this makes the ordinary problem of weighing evidence even worse in the child case, because while “assignment” to child/no-child conditions is noisy, few people trying to make this decision assess it with an ITT framework in mind. (This is one reason why infertility is often more of a terrible shock to people who really want kids than it perhaps rationally ought to be, for instance.)

Me: Regarding the “aim to have a child” issue, I think it’s important because it illustrates how we tend to jump the gun in our decision-making processes. If we systematically make mistakes in our decisions, perhaps one reason is that we do not always think through the implications. Thus someone might decide to have a child, even though, strictly speaking, this is not a decision that we can make (unless, I suppose, we’re referring to someone who is deciding whether to sign on the dotted line for an adoption or, to take it in the other direction, someone who is sitting in an abortion clinic, deciding whether or not to go through with it).

For another example which might make the point clearer, it’s my impression that people sometimes (often) make the decision to fall in love, or to find a girlfriend, or whatever. As you might put it, such a decision is questionable because the post-“in love” or the post-“girlfriend” state is unimaginable given the existing state. To this, I’d add that it’s not clear that this state will be achieved at all.

This is perhaps related to my earlier point that parenthood is often not as unknowable as you seem to claim. If someone already has 1 or 2 kids, is it really so unknowable what it will be like to have the second or third child? And many childless people have had experience with younger siblings, nieces and nephews, etc. As I said to Paul in my earlier email: Sure, you could rule all these cases out and say that you’re focusing on people with no extensive babysitting experiences contemplating their first child in a situation in which the chance of having the baby is very high—but then you’re clearly focusing on a subset of experiences. And I think your philosophical argument loses some of its implied universality once the restrictions start to kick in.

L.A.:

I should also say, in case this was not clear, that I don’t think all decisions are transformative. In particular, with having children, I agree that things get less clear when we consider second and third and… etc. children. I focus on the first child because that case makes the point I want to make. Some people think the second child is also transformative—there are new things that are unknowable. For me the jury is still out on this question. However, I don’t think that having nieces or babysitting, etc., will make the experience of having a child non-transformative, because the transformative part is the forming of the parent-child attachment relation. This carries a distinctive kind of preference change with it, and partly depends on the properties of the particular child that is produced.

One thing I disagree with you on, however, is the idea that the decision to have a child is normally a joint decision. Sometimes it really is a joint decision. But from a woman’s point of view, often the decision (at least the decision about what she wants, as opposed to, say, the decision to *physically try to have a child*) is negotiated individually, at least at first. She decides she wants/doesn’t want kids, as a personal choice, often even before she has a suitable partner. The rhetoric around abortion and around work/life balance issues in the popular press reflects this. (On a related note, it’s funny, because sometimes I’ll have an older male philosopher tell me that his decision to have a child was just a decision to do what his wife wanted. But of course then he’s just giving up his autonomy with regard to that decision.)

Regarding the more general point about decisions made by more than one person: thanks for raising it—there are interesting issues here that I haven’t explored. Perhaps to the extent that you allow any decision to be less individual and more joint, you give up your autonomy to some degree. I’m thinking of a decision as less and less individual the larger the group is that makes the decision, until finally any single individual perspective plays almost no defining role at all. Maybe the limit case with joint decision-making, from the individual point of view, is (again) giving up on introspection and letting someone else (e.g., the experts) make the decision for you. This is interesting and deserves further attention.

3. I wrote: Some of your discussion seems a bit too individualistic. L.A. writes, “when I consider the major, irreversible, long-term and life-changing decision to have a baby . . . I’m the one who will be spending the next 18 years raising my child.” But it takes 2 people to have a child. And it typically (but not always) takes 2 or more people to raise the child. So I think it might be misleading for you to frame this as a personal, individual decision. This seems symptomatic of the hyper-individualism that is central to so much social science discourse nowadays.

One might say my concern about individualism is irrelevant, that you could simply replace “I” and “my” with “we” and “our” in your paragraph, and the reasoning would still go through. But I think not. If you have a truly individual decision, it can be based on any grounds. But a group decision, even with only two people in the group, typically needs some justification. We call this “institutional decision analysis” when we discuss this in BDA.

And, of course, introspection has a completely different meaning when two people are involved in the decision, as some means of communication is necessary to share the introspections.

L.A.:

My reason for framing this artificially is that I am drawing out a point about a tension within one’s first-personal perspective. It’s also the case—and this is part of the context for my broader discussion in the book—that the hyper-individualism you describe isn’t just part of social science discourse but also of modern (and perhaps especially American) society. Having a child really is framed as a very personal decision that people, especially women, must face and decide with respect to their personal happiness and careers and so on. In other settings, of course, it’s not seen as that sort of strictly personal decision at all, and for much of history it wasn’t even a decision but more like part of the natural condition of being a woman. So again, drawing it out in the way that I do is partly trying to show the strange things that happen when we treat aspects of life in this individual way, and this is especially clear to me as a woman who is committed to rational decision-making yet experiences the social pressure in a hyper-individualized way.

Bloom:

I certainly agree that a decision of two people brings in special considerations.

But I think that even in the prototypical case of a couple deciding to have (or not have) a baby, the sort of individual process that Laurie is interested in takes place. Typically, each person will have already asked “Do I want a baby?” and come to the conversation with an opinion—and if the opinions are strongly felt, and identical, the decision process is over. (It gets more interesting if people are undecided or in conflict.)

Also, putting aside coercion, every joint decision is also an individual decision. If my wife and I decide to have (or to try to have) a baby, presumably this means that, individually, my wife and I have each decided to have a baby. Still, agreeing with your point here, the individual calculation gets more complicated, because now I’m not just asking about my future happiness but also about my wife’s, and vice-versa.

4. L.A. wrote:

So we agree here: When we make big decisions as well as small, the rational thing to do, if we have good enough data, is to dispense with introspection and make the choice based on the science.

My reaction: I don’t know about this. What if “the science” tells us that the rational thing to do is to use introspection? It’s possible, right? The point is that there is individual variation. So, to speak most generally, the appropriate decision depends on some mix, or partial pooling, of introspection and societal experience (“science”). Or, to put it another way, the role of “science” is to figure out how to calibrate the data from our introspection.

Bloom:

Agreed, so long as we’re open to the fact that science might tell us that the value of introspection is zero . . . or worse than zero, in that it can be systematically wrong.

Me: Yes, excellent point! Sometimes the optimal weight is negative.
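The calibration point can be sketched numerically. Here is a toy example (all numbers invented): if introspective forecasts are systematically *anti*-correlated with outcomes, the best calibration weight on introspection comes out negative, just as Bloom suggests it might.

```python
import random

random.seed(1)

# Synthetic illustration, not real data: "truth" is how things actually
# turn out; "introspection" is a forecast that is systematically wrong
# (negatively related to the truth, plus noise).
N = 1000
truth = [random.gauss(0, 1) for _ in range(N)]
introspection = [-0.5 * t + random.gauss(0, 1) for t in truth]

# One-variable least squares: outcome ≈ w * introspection.
num = sum(x * y for x, y in zip(introspection, truth))
den = sum(x * x for x in introspection)
w = num / den
print(f"optimal weight on introspection: {w:.2f}")  # negative in this setup
```

In this setup the fitted weight is around -0.4: the rational move is not to ignore introspection but to bet against it, which is the “worse than zero” case.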

And we’ll give the final word to L.A., who writes:

A way to see my view about the need to attend to the possibility of transformation is as a question for psychologists and other scientists: how should I calibrate the data, given my uncertainty about who I’ll be after the transformative experience? (This relates to some of the subtle issues and problems surrounding informed consent in contexts of medical decision-making, in particular, the situation of the patient who is supposed to understand such a calibration.)

Comparing data and introspection gives rise to different types of decisions, some of which are easy and some of which are hard:

Easy Decision: I can’t introspect but I have good external testimony or data (e.g. trying a new food that my friends all recommend). To decide rationally, use the testimony or data.

Hard Decision: I can introspect but there’s good external testimony or data that contradicts my introspective assessment. To decide rationally, use the testimony or data.

But there’s a complication with Hard Decision: when Hard Decision involves deciding to undergo an experience that entails a transformation of my epistemic and personal perspective. Take Laurie@t1 to be the person I am before I make the hard decision, and Laurie@t2 to be the person I am after I make the hard decision.

These are cases where, when I introspect, as Laurie@t1, I think I’ll be unhappy at t2 if I decide to A. But the data tells me that Laurie@t2 will be happy if she chooses A. The trouble is, I change from Laurie@t1 to Laurie@t2 if I perform A. And I can’t introspect to see if Laurie@t1 is, in the relevant sense, the same person as Laurie@t2 (or if she is the kind of person I want to be at t2). So when I was debating with Paul, I wanted to emphasize that I think it’s less helpful to focus just on whether it’s rational to rely on introspection, because the crux of the *philosophical* problem arises from the possibility of transformation mixed with the unreliability of introspection. [Tomer Ullman should get credit for suggesting that framing things this way would help psychologists see the point I’m making.]

In order to highlight this point, which I don’t think the data normally adjudicates, I closed my discussion with Paul Bloom with “This is why we need to pay special attention to transformative experiences. When you face a transformative choice, even if psychology can tell you what to choose, you still face an existential problem: Will you really be happier after the transformative change—or will you just be a different person?” Paul thinks that research should be used to calibrate introspection. I agree. But the issue here is philosophical as opposed to practical. What I want to know is the answer to the philosophical problem of how we are supposed to interpret the data, given the threat of transformative change.

10 thoughts on “Transformative experiences: a discussion with L. A. Paul and Paul Bloom”

  1. This is interesting. I was working on a similar problem in the early 90s, which I labeled “rational alienation”: how a state of alienation could be compatible with an overarching model of rational choice. In particular, I was interested in what I called, at the time, the Faust Problem, in which one has made an exchange, but the terms of the exchange alter who you are, so that you no longer perceive or value it ex post as you did ex ante.

    In the L.A. context, this poses a somewhat different question, which involves unpacking “transformative”. I would ask specifically whether it is likely (with all the probability aspects thrown in) that having one’s first child changes how one evaluates having children in a manner that is inaccessible to someone in the ex ante state. I know this was true for me! My parental self would not be able to explain to my pre-parental self how I experienced parenthood. In any case, such conversations aren’t possible.

    My approach at the time, and I think it would still be my approach today, was not to look for a solution in the form of an ideal decision process; I doubt that it exists. I was more interested in thinking about the institutions and social processes that structure such decision-making, and how they might be tweaked to minimize choices the ex post self comes to deeply regret, which is how I viewed the Faust Problem.

    Incidentally, my starting point was the effect that the content of work activity has on the perceptive and valuative characteristics of the worker.

  2. Deciding whether to leave one’s spouse seems like a good example of a treatment choice (assuming a motivation!). If you stay, you become the kind of person who stays, if you leave, you become the kind of person who has left. You decide whom to be, more than what to do.

  3. I don’t think it makes sense to say a decision is rational but rather to ask whether it can be well-modeled as a process of rational decision making. So it sounds like this debate is really about how well certain decisions can be modeled as rational.

    Models of rational decision making can be very complex. Virtually all decisions are made under severe uncertainty. Virtually all decisions are made under time and resource constraints. Often, the decisions can be modeled as rational if one assigns appropriate costs to the amount of time spent reasoning, introspecting, gathering data, etc. Often, one needs to model the agent as having a risk-averse (or risk-seeking) utility function. Some of the “evidence” that one can gather about transformative decisions is to assess one’s own utility for the different outcomes, which requires imagination and introspection.
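    A minimal sketch of the commenter’s point about risk-averse utility functions (my own toy example, with invented numbers, not the commenter’s model): under a concave utility such as log utility, a safe option can rationally beat a risky option with higher expected payoff.

```python
import math

def log_utility(wealth):
    # A concave utility function encodes risk aversion:
    # gains matter less as wealth grows.
    return math.log(wealth)

def expected_utility(outcomes, probs):
    return sum(p * log_utility(o) for o, p in zip(outcomes, probs))

safe = expected_utility([100], [1.0])          # guaranteed 100
risky = expected_utility([50, 160], [0.5, 0.5])  # expected wealth 105

print(f"safe:  {safe:.3f}")
print(f"risky: {risky:.3f}")
# The risky option has higher expected *wealth* (105 > 100), yet the
# safe option has higher expected *utility*: risk aversion at work.
```

    Whether a given real-world choice “can be modeled as rational” then partly reduces to which such machinery (utility curvature, deliberation costs, latent variables) one is willing to include in the model.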

    I don’t see how this argument can really be resolved. People arguing against a rational account can restrict the scope and complexity of what counts as a rational account. People arguing in favor can expand the scope and complexity.

    Can we ask which account is more predictive? Does L. A. Paul propose an alternative, non-rational, way of predicting human decision making behavior that can be evaluated objectively? A rational account will encounter some difficulties, because many of the factors are internal to the decision maker and hence must be latent variables in our model that will need to be marginalized away.

    Aside: In AI systems, we typically model the result of a decision as a goal that has been adopted. A decision is modeled as a mental action. So one can adopt the goal of falling in love or of having a child. This does not mean that one necessarily has a well-defined predicate for evaluating whether the goal has been attained (e.g., consider the goal of “getting into Heaven”). It only requires that you can evaluate whether possible actions in the world are likely to advance progress toward the goal.

  4. You pay your money and take your chances. (If Yogi Berra didn’t say this, he probably thought it was too obvious to say.)

    PS

    Can anyone give a definition of “transformative”? (No fair giving a definition that uses the word “transform” — that’s essentially circular.)

  5. Thanks for the interesting comments, everyone.

    Martha, an epistemically transformative experience is an experience that teaches you something you could not have learned without having that kind of experience, for example, seeing color for the first time or tasting a durian. The experience teaches you what that kind of experience is like, giving you new abilities to imagine, recognize, and cognitively model possible states involving those experiences and their effects. A personally transformative experience changes you in some deep and personally fundamental way, for example, by changing your core personal preferences or by changing the way you understand your desires, defining intrinsic properties, or perspective. Transformative experiences are experiences that are at once epistemically and personally transformative.

  6. Taking introspection as just a form of inquiry, and scientific inquiry as just an attempt at inquiry that gets less wrong, or that bends over backwards to get less wrong, one cannot start inquiry from anywhere other than where one finds oneself: Laurie@t1.

    But scientific inquiry should recognize its own fallibility (that’s part of getting less wrong), and allowing for things one does not yet know can’t amount to anything more than recognizing that fallibility. So judging introspection in terms of what Laurie@t2 knows seems inappropriate.

    It’s like criticizing the analysis of study 1 because of what arose in study 2 but not in study 1.
