Using “How many X’s do you know” questions to get better survey estimates

After I spoke at Princeton on our studies of social polarization, John Londregan had a suggestion for using such questions to get more precise survey estimates. His idea was, instead of asking people, “Who do you support for President?” (for example), you would ask, “How many of your close friends support Bush?” and “How many of your close friends support Kerry?” You could then average these to get a measure of total support.

The short story is that such a measure could increase bias but decrease variance. Asking about friends introduces measurement error (we don’t always know what our friends really think), and there’s also the problem that “friends” are not a random sample of the population (at the very least, we learn more about the more popular people, on average). On the other hand, asking the question this way increases the effective sample size, which could be useful for small-area estimation. For example, in a national poll, you could try to get breakdowns by state and even congressional district.

It might be worth doing a study, asking questions in different ways and seeing what is gained and lost by asking about friends/acquaintances/whatever.
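For concreteness, here is one way the responses could be pooled. This is a minimal sketch with made-up numbers; the equal-weighting correction at the end is just one rough way to counter the overrepresentation of popular people, not part of Londregan’s suggestion.

```python
import numpy as np

# Hypothetical responses: each row is one respondent's report of
# (close friends supporting Bush, close friends supporting Kerry).
# The counts are made up for illustration.
reports = np.array([
    [3, 1],
    [0, 4],
    [2, 2],
    [5, 0],
    [1, 3],
])

bush = reports[:, 0]
network_size = reports.sum(axis=1)

# Naive pooled estimate: total reported Bush supporters over all
# reported friends. Respondents with big networks dominate.
naive = bush.sum() / network_size.sum()

# Rough correction: average each respondent's local proportion,
# which downweights large networks (respondents reporting zero
# friends would have to be dropped or handled separately).
equal_weight = (bush / network_size).mean()

print(f"naive pooled estimate: {naive:.3f}")
print(f"equal-weight estimate: {equal_weight:.3f}")
```

Pooling raw counts lets well-connected respondents dominate; averaging per-respondent proportions undoes that, at the price of more noise from people with small networks.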

8 thoughts on “Using “How many X’s do you know” questions to get better survey estimates”

  1. Interesting approach, but I think the idea that you're increasing the effective sample size is entirely illusory. You're getting a different sort of information from each individual, but it's still really only one observation.

  2. Brendan,

    The point is that you're learning about more people (although with less control over whom you're learning about, and with possible measurement error). It can be thought of as a cluster sample, so, yes, the number of clusters is unchanged. But if you're learning more from each cluster, your standard error for inferences about the population can decrease. (A small simulation illustrating this appears after the thread.)

  3. Andrew, I'm deeply sceptical of proxy information. I think there would be substantial and systematic measurement error. That said, the measure could still be interesting.

    (OT: Try \documentclass[handout]{beamer})

  4. But isn't it also about removing bias caused by socially desirable responding? It could be that in some areas, people won't express support for one candidate because that's seen as a bad thing.

    A similar approach has been suggested for measuring racism, which asks not whether you hold racist attitudes but whether your neighbors do. Here's the ref and abstract:

    Saucier, D. A. (2002). Self-reports of racist attitudes for oneself and for others. Psychologica Belgica, 42(1-2), 99-105.

    Abstract:
    Individuals are often motivated to avoid appearing prejudiced. In this study, it was hypothesized that participants would indicate that other people would be more likely than themselves to agree with racist arguments. Participants read a series of positive and negative arguments about African Americans and rated the extent to which they agreed with the arguments and how convincing they found the arguments to be. Participants also rated how much the "average person" would agree with and be convinced by the arguments. The hypotheses were supported. Participants overwhelmingly reported that, compared to themselves, the "average person" would agree more with and be convinced more by the racist arguments. These results suggest that individuals may justify their own prejudice by believing that other people are more prejudiced, allowing the individuals to maintain nonprejudiced self-concepts despite their own racist attitudes.

  5. Something like this has been tried for political opinion polls. A couple of Germans suggested asking people what they thought the result of the election would be instead of asking them who they would vote for. They reasoned that people might lie about whom they support (cf. the poor opinion poll results in the UK in 1992), but would not lie about the expected result. The researchers reckoned that there would be a bias in favour of the party the respondent actually supported. A test was carried out and reported to be successful (surprise, surprise). I have a reference for this if anyone's interested, but it's at the office!

  6. Jeremy,

    Thanks for the reference. I'll have to ask my Belgian psychologist colleagues what they think of it…

    Antony,

    No, you're talking about something different. I don't want to do a poll to predict who will win an election. I want to learn about public opinion. For example, suppose that 60% of the people in a certain city support the death penalty. That's what I want to know. I don't want to know that 90% of the people know that a majority of the people support it, or whatever.

  7. Andrew,

    I don't see the difference. In your death penalty example, the question would be "In your opinion, what percentage of the population supports the death penalty?" instead of "Do you support the death penalty?"
    In Germany the question in political opinion polls is not "Which party will you vote for in the next election?" but the so-called Sonntagsfrage (Sunday question): "If the election took place next Sunday, which party would you vote for?" It is intended to capture the current support for the parties, i.e., current public opinion.

  8. Antony,

    They ask the Sonntagsfrage in U.S. polls. Actually, our 1993 paper included a mini-study that compared the two question wordings and found little difference (except in the overall nonresponse rate).

    But my point was about something different. My idea (or, should I say, Londregan's idea) is to ask people about their own social networks, and from there (possibly with appropriate weighting) aggregate this local information to learn about the population.

    To ask each person about their estimate of the population would short-circuit the process. The idea of including the social-network questions is to gather more information from individuals' personal experiences.
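
To make the bias-variance tradeoff concrete (and the cluster-sample point from comment 2 above), here is a small simulation sketch. Every number in it, from the homophily rates to the 10% misreporting rate to the five friends per respondent, is an assumption chosen for illustration, not an estimate from any study.

```python
import numpy as np

rng = np.random.default_rng(0)

TRUE_P = 0.60   # true population support
N = 500         # respondents per simulated poll
K = 5           # close friends each respondent reports on
ERROR = 0.10    # chance a friend's view is misreported
SIMS = 2000     # number of simulated polls per design

def direct_poll():
    # The usual design: one yes/no answer per respondent.
    return rng.binomial(1, TRUE_P, N).mean()

def network_poll():
    # Each respondent reports on K close friends. Friends share the
    # respondent's milieu (homophily), so reports within a respondent
    # are correlated -- a cluster sample -- and the pool of friends is
    # not representative of the population.
    own = rng.binomial(1, TRUE_P, N)
    friend_p = np.where(own == 1, 0.85, 0.40)  # made-up homophily rates
    friends = rng.binomial(1, friend_p[:, None], size=(N, K))
    # Flip each friend's reported view with probability ERROR.
    flips = rng.binomial(1, ERROR, size=(N, K))
    reported = np.where(flips == 1, 1 - friends, friends)
    return reported.mean()

direct = np.array([direct_poll() for _ in range(SIMS)])
network = np.array([network_poll() for _ in range(SIMS)])

print(f"direct:  mean={direct.mean():.3f}  sd={direct.std():.4f}")
print(f"network: mean={network.mean():.3f}  sd={network.std():.4f}")
```

In runs with these numbers, the network estimate comes out biased (around 0.64 rather than 0.60) but with roughly half the standard error of the direct question: each respondent contributes more than one effective observation, though fewer than five, since friends' reports within a respondent are correlated.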
