“Rational” != “selfish”

John Quiggin sent me this article of his from 1987 that made the same argument as my paper with Edlin and Kaplan on why and how it’s rational to vote. In his article, Quiggin wrote:

There is strong evidence that voting behaviour is both ends-directed and rational. That is, electors choose to vote because of the effects their vote will have, and do not vote if these effects are insufficient to outweigh the costs of voting. However, as Downs’ paradox shows, rationality and egoism together imply non-voting. The evidence suggests that egoism is the postulate which must be abandoned. . . . voters’ interest in political information increases with the importance of political choices. Once again, this is consistent with rationality but not with egoism.

Our article had more math and more focus on U.S. politics but the basic point is the same.
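
To make this concrete, here is a rough back-of-the-envelope version of the calculus. All of the specific numbers (the chance of casting a decisive vote, the benefit sizes, the cost of voting) are illustrative assumptions of mine, not figures from either paper:

```python
# Illustrative numbers only -- not taken from Quiggin (1987) or from
# Edlin, Gelman, and Kaplan.
p_decisive = 1e-7        # assumed chance that your vote decides the election
benefit_self = 1_000     # assumed personal (dollar) benefit if your side wins
cost_of_voting = 10      # assumed cost of voting, in dollars

# Egoist calculus (Downs' paradox): the expected gain is negligible,
# so a rational egoist stays home.
ev_egoist = p_decisive * benefit_self - cost_of_voting
print(f"egoist expected value:  ${ev_egoist:,.2f}")    # roughly -$10

# Drop egoism: add the benefit to everyone else, discounted by a weight alpha.
N = 300_000_000              # rough U.S. population
alpha = 1 / 1_000            # assumed weight on each other person's benefit
benefit_per_person = 1_000   # assumed per-person benefit if your side wins

ev_social = (p_decisive * (benefit_self + alpha * N * benefit_per_person)
             - cost_of_voting)
print(f"social expected value:  ${ev_social:,.2f}")    # roughly +$20
```

Because the chance of being decisive shrinks roughly like 1/N while the aggregate benefit to others grows like N, the social term does not wash out even when alpha is small; that is the basic point.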

Also let me use this as yet another excuse to plug a wonderful article, The Norm of Self-Interest, by psychologist Dale Miller, in which he argues the following:

A norm exists in Western cultures that specifies self-interest both is and ought to be a powerful determinant of behavior. This norm influences people’s actions and opinions as well as the accounts they give for their actions and opinions. In particular, it leads people to act and speak as though they care more about their material self-interest than they do.

6 thoughts on ““Rational” != “selfish””

  1. Self-interest does not have to be narrowly selfish. "Self-interest" can include acting in the best interests of one's family and community. I believe there is a discussion of this idea in Michael Novak's book The Spirit of Democratic Capitalism.

  2. "The evidence suggests that egoism is the postulate which must be abandoned"

    So descriptively, in this situation, people don't act according to egoism.

    However, this may still not be the normatively right thing to do. For example, most of the arguments for rational voting seem only to apply to people who would choose A over B.

    A: Gain $1 for everyone in the US.
    B: Gain $10M for myself.

    If you would choose A over B, then it might be rational to vote. If you would choose B over A (as I suspect almost everyone in the world would), then it is irrational to vote. In other words, it is inconsistent to care about others enough to vote but not to care enough to choose A over B. Does this make sense? Where have I gone wrong?

  3. Noname,

    We have a parameter in our model that expresses how much you discount others' gains compared to yours. For example, alpha might be 1/1000, in which case you could prefer B, but social concerns would still rationally determine your vote.

  4. So if someone has a low enough alpha, then it would be irrational for them to vote.

    To assess your alpha, you give the X for which you would be indifferent between C and D:

    C: Gain $1 for everyone in the US (300M people).
    D: Gain $X for myself.

    Your alpha is then equal to X/300M, right? I suspect most people would be below, say, 1/10,000 (i.e. X below $30K). What do you think?

  5. I see your point. I think there are a lot of pieces to the puzzle, and probably one of them is that people have difficulty with very small probabilities. Also, I don't think people's perception of others' utilities is additive: a national good worth $1 billion seems better than giving $3 to every American. A lot depends on the context, especially since some benefits (being at peace, for example) are difficult to monetize. But, yes, that's the basic idea. (A rough numerical sketch follows below.)
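
P.S. To follow up on the alpha discussion above, here is a small sketch of the calibration Noname proposes and what it implies. Again, the decisive-vote probability and cost figures are illustrative assumptions, not numbers from the paper:

```python
# Calibrating alpha from the indifference point in comment 4: you name the X
# at which you are indifferent between "$1 for each of 300M people" and
# "$X for yourself", so alpha * 300_000_000 * 1 = X, i.e. alpha = X / 300M.
N = 300_000_000

def alpha_from_indifference(x_dollars: float, n: int = N) -> float:
    """Implied weight on each other person's dollar, given indifference at $X."""
    return x_dollars / n

alpha = alpha_from_indifference(30_000)    # X = $30K  ->  alpha = 1/10,000
print(alpha)                               # 0.0001

# With that alpha, the social calculus justifies voting only if the (assumed)
# per-person stakes B satisfy  p * alpha * N * B > cost of voting.
p_decisive = 1e-7        # assumed chance of casting the decisive vote
cost_of_voting = 10      # assumed cost of voting, in dollars
threshold_B = cost_of_voting / (p_decisive * alpha * N)
print(f"per-person stakes must exceed about ${threshold_B:,.0f}")   # ~ $3,333
```

So even at alpha = 1/10,000, voting can come out ahead if you believe the per-person stakes of the election are on the order of a few thousand dollars, which is where the difficulty of monetizing national goods, mentioned in the last comment, comes in.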
