Gay persuasion update

Hey, did you hear about that study last year, where some researchers claimed to find that a 20-minute doorstep conversation with skeptical voters could change views on same-sex marriage? It was published in the tabloids and featured on This American Life? And it turned out it was all a fraud, that one of the authors of the paper made up the data and the other author had to retract it from the journal? Remember that?

OK, good. Well, here’s some news. David Broockman and Joshua Kalla, two of the people who figured out about that earlier study being faked, then up and ran their own study, and look what they found:

A single approximately 10-minute conversation encouraging actively taking the perspective of others can markedly reduce prejudice for at least 3 months. We illustrate this potential with a door-to-door canvassing intervention in South Florida targeting antitransgender prejudice. . . . 56 canvassers went door to door encouraging active perspective-taking with 501 voters at voters’ doorsteps. A randomized trial found that these conversations substantially reduced transphobia. . . . These effects persisted for 3 months, and both transgender and nontransgender canvassers were effective. The intervention also increased support for a nondiscrimination law, even after exposing voters to counterarguments.

[Screenshot: results figure from Broockman and Kalla’s paper]

Wow. As I wrote at the sister blog, this has to cause us to rethink the idea that persuasion is so difficult—at least for fluid issues such as gay rights where public attitudes are changing fast.

Betsy Levy Paluck wrote an accompanying article summarizing the research on persuasion:

What do social scientists know about reducing prejudice in the world? In short, very little. Of the hundreds of studies on prejudice reduction conducted in recent decades, only ~11% test the causal effect of interventions conducted in the real world. Far fewer address prejudice among adults or measure the long-term effects of those interventions.

Also, I’d expect persuasion to be less effective in the real world than in Broockman and Kalla’s experiment, because in a live political setting you’ll see efforts at persuasion from both sides.

Finally, let me remind you that when the suggestion came up to replicate the LaCour and Green study, I was dismissive:

Ulp. There are lots and lots of studies people are interested in doing, and I’m sure this activist group in Los Angeles has a long to-do list. Do you really think they should spend their precious time, money, and human resources to study an idea that is contradicted by an entire 900-paper literature and whose only claim to plausibility was a made-up experiment?? . . . You gotta be kidding.

I was dismissive, and I was wrong. I’m glad nobody was listening to me on this one!

70 thoughts on “Gay persuasion update”

  1. The main “result” from LaCour and Green seemed to be that it was a gay canvasser delivering the script that had an effect; with a straight canvasser the reported attitudes at +3 months were the same as baseline. Eyeballing the graph, it seems like the change in attitudes was the same whether or not the script was delivered by a trans or non-trans person (means within 1 standard error), meaning it was the script that mattered rather than the person delivering it. That’s an interesting result in its own right.

    • Snarky but not snarky question: would this have been an “interesting result” (the fact that the person delivering the message doesn’t matter) if we hadn’t been “primed” by a fraud to think that the messenger mattered?

      I just think it is funny how LaCour managed to shape the conversation and people’s priors even though he, as a matter of plain fact, contributed no empirical evidence. I mean, before LaCour, who had strong priors that the messenger should matter when the message itself did not, even just in this context? Sure, I could imagine some people thinking “oh, I’ve never met a gender-non-conforming person face-to-face before, and that young person at my door seemed so nice, maybe I was wrong.” But I can also imagine other people hating the canvasser and doubling down (“how dare that person knock on my door!”). If I had to guess at the distribution of those two types among people with strong pre-established prejudices, I’m not sure I’d have thought that a non-traditional-gender-conforming stranger showing up at their door would change their mind for the better.

      But somehow this idea is now out there in the world that the canvasser could matter even when the message doesn’t*, just because some jerk decided it was a good sell to the tabloids. A bit weird, right?

      *And of course, as Andrew points out, a general pre-condition of this fraud working was that it had plausibility – I really do think that being exposed to people of different races, classes, cultures and sexualities can affect how we judge people in a way that makes us more tolerant, but I think that is a long personal and cultural process, not a 10 minute thing. I mean, imagine sending African-American people door-to-door in the South in 1954 and telling them about the value of racial diversity in hiring. Do you think that would’ve worked out well?

      • Yes, I think it would have been an interesting result regardless, because, like you said, we can think of reasons it could go either way. And it gets to the point of whether people listen more to the message or the messenger. That is a very old question and seems to depend on context.

  2. Well, let’s wait for someone to replicate Broockman and Kalla’s findings. You may yet have been right to be dismissive, Andrew.

    We put the fake data behind us (hopefully!) but replicability is a different ball game.

  3. I haven’t read the paper(s) so I may be off-base, but I’m curious about why everyone seems to accept this as evidence that the actual attitudes changed. To me, it seems much more likely that the underlying mechanism is a shift in the type of answer that people perceive to be socially desirable.

    • I looked at the graph, and the only thing I really saw was an upward trend with time. Not really any clear difference between “treatment” and “control” groups. Could well be what @sckeptical says and have nothing to do with the “treatment” vs. “control”.

  4. As I wrote in 2015 about the fraudulent research:

    And yet the most interesting point about this ignominious affair is that even if the paper had been utterly legitimate, it still wouldn’t have been “science” in the sense that most people understand the word: as a search for relatively permanent truths. Instead, it would have just been marketing research.

    And that illustrates a long-term trend. In our Age of Gladwell, leftist social scientists are increasingly giving up on looking for truths about human beings, which could get them in trouble if they found them, and reconfiguring themselves as handmaidens of the marketing industry. …

    The scandal has led to many thumbsucker articles about the replication crisis in science and other weighty topics. But almost all of them are missing the point that even if this analysis had been honest, it still wouldn’t have been Science-with-a-capital-S as most people think of the word. Rather, it would have been lowly marketing research. This was never claimed to be a study of whether or not gay marriage was a good idea. Instead, it just purported to be research into how best to spin gay marriage to voters.

    And that’s emblematic of a trend in which the social sciences, having repeatedly failed to demonstrate the truth of the political dogmas espoused by most leftist social scientists, are slowly repositioning themselves as an arm of the marketing industry. …

    Due to this endless history of empirical failures, leftist social scientists have pretty much given up using the tools of their trade to come up with evidence in support of Social Justice Warrior shibboleths. That’s almost inevitable: real science is replicable and thus has to be about enduring truths. But the anti-science conventional wisdom demonizes actual knowledge as “stereotypes.”

    Hence, social scientists have been increasingly focused not on truth-finding, but on how better to manipulate the masses.

    http://takimag.com/article/ten_thousand_haven_monahans_steve_sailer/print#ixzz45BbRy8Df

    • Steve:

      I don’t see political ideology having anything to do with it. Most social scientists are indeed on the left side of the political spectrum, but it’s my impression that the research paradigms of rightist social scientists are similar.

      • Let’s not get too hung-up on the well-worn left vs. right dichotomy; instead I’d like to direct attention to the more subtle science vs. marketing research distinction.

        The new, supposedly non-fraudulent study about how best to propagandize for transgender politics isn’t “science” in the sense of revealing anything relatively long-term about transgender individuals.* Instead, it’s marketing research, just like marketing researchers study how Coke’s latest marketing campaign will work versus Pepsi, or how Hillary could win votes away from Bernie.

        I actually knew two scientists who did study the transgendered and discovered something new and interesting about them, but they were demonized for their discovery:

        http://www.nytimes.com/2007/08/21/health/psychology/21gender.html

        It’s much safer these days for social scientists to stick to marketing research rather than to try to learn potentially uncomfortable truths about humanity.

        • Well you can’t blame others getting stuck in the dichotomy when you chose to start your argument invoking the “leftist social scientist” basis.

        • Steve:

          I don’t think of this new survey as being about transgender politics. I see it more as a test of political persuasion; it’s just applied in a particular issue domain—transgender politics—where attitudes are changing fast and thus particularly susceptible to change. I agree with you that it’s hard to imagine this being an eternal effect—it seems very much bound to the here and now. On the other hand, politics is a collection of here-and-nows so this study does seem interesting to me.

          Just to take a completely different example: Much has been written about how Stalin maneuvered his way to success. This is all about politics (not electoral politics, more like office politics with some violence thrown in). Details about Stalin are very much tied to the specifics of his environment, but it’s still an important topic in political science.

          To the extent that we care about political persuasion, studies like that of Broockman and Kalla are interesting and they tell us about politics. Just as, I suppose, a study of Coke marketing tells us something about economics.

        • I think Steve has a point, and I think Andrew is actually arguing directly against Steve’s point in his post. This was my huge takeaway from the OP: “this [study] has to cause us to rethink the idea that persuasion is so difficult.”

          The more dogmatic versions of higher education criticism insist that there is a neat dichotomy in which the left produces pure pomo bullshit artistry and the right produces already established truths handed down from some combination of classical liberalism and Judeo-Christianity. Both of those poles are in the sociology of knowledge absurd.

          Leftists who study social movements (a field that could benefit from more charitable studies of Evangelicals and Arab theocracies!) aren’t after pomo marketing hucksterism any more than marketing researchers themselves. There are durable, more or less objective and context-general patterns in the way persuasion works.

          And that’s as important to leftists who want to convince people gays are great, as it is to Allan Bloom et al., who want to convince everyone that there are easily interpretable and objective moral truths in nature.

  5. Thank you for the nice write-up, Andy.

    I totally understand your original skepticism and I shared it too, until I saw the data!

    I think what also persuaded me was spending a lot of time getting to know the Leadership LAB at the LA LGBT Center, watching them in action, and canvassing with them.

    They don’t just send people out to tell personal stories at the door and hope voters feel bad, or just recite talking points and walk away, as is typical. The first paper sort of gave one that sense, particularly because the “results” sort of made it seem like it was driven by gay canvassers personalizing themselves — that made it seem all quite simple and straightforward.

    But the reality on the ground of what this intervention looks like is quite different. They have been working every month since 2009 to figure out how to talk to people in a way that gets them to open up and express less prejudice towards LGBT people, trying lots of different approaches with trial and error. Ultimately they settled on something that looks a lot like cognitive behavioral therapy in a certain way, in that it involves asking the voters to tell stories from their own lives and ruminate on them aloud in response to active listening and good questions.

    Canvassers are trained on it for ~2 hours before going out, and it is a skill they get better at. What they do definitely isn’t easy or simple in a “this one weird trick!” way. Consistent with that, in the data we find experienced canvassers who have canvassed with them for a while are ~2.5x more effective (noisy estimate, but our best guess) than first-timers. In that sense, it is probably rather like therapy — it is hard to cure alcoholism, and first-time therapists probably aren’t great at it, but it is a skill they can hone with the right theoretical approach and get relatively good at. I have no doubt most conventional canvassing would have much lower effects (although we’re working to examine that view in follow-up studies and better isolate the mechanism).

    They have some video on an example here: https://www.youtube.com/watch?v=2663J2d3VY4. (Although every conversation is a bit different because voters share different stories. I think they are going to post more video soon.)

    One last thing I’d add regarding external validity to a competitive environment: I agree, although check out Figure 2. That was our attempt at trying to simulate a campaign environment, although a weak one. (We showed people opposition ads and saw if the canvassing treatment effect still survived, which it appeared to.) Another clear next step is trying this in a more active campaign environment for sure.

    • You should study whether this LGBT door to door salesmanship is as effective as, say, the salesmanship in selling time shares, Amway, or Herbalife. There’s likely much that political marketers could learn about the arts of persuasion from multi-level marketers.

    • David:

      Yes, as I wrote in my post on the sister blog, there was a shift in focus from LaCour and Green to Broockman and Kalla. It is my impression that LaCour and Green presented their result as a big surprise, as a big step beyond what was expected from the political psychology literature, and they seemed to be attributing much of their (claimed) success to particulars of their intervention, especially the idea that the canvasser was describing his or her own personal experience. In contrast, you don’t seem to be saying that their intervention has any special sauce; rather, it’s just a high-quality focused persuasion effort.

  6. As somebody who spent 18 years in the marketing research profession, I see it as a reasonably honorable way to make a moderately well-paid living. But marketing research projects like this should not get the prestige of Science with a capital S.

    In contrast, there are extremely interesting scientific questions about transgenderism that almost no social scientists today are brave enough to touch, having watched the SPLC persecute scientists a decade ago for their work.

    For example, when I was getting my MBA at UCLA in 1981, a teammate of mine in Marketing Strategy class was notorious for being the most arrogant and insensitive man in the entire B School. I tried to put up with him because he was so incredibly intelligent in a logical sense and I found interesting his overwhelming obsession with space exploration, but he was widely hated by most of his fellow students for being such a huge [male anatomy part].

    A couple of years ago, I learned that he is now considered to be the highest paid “female” CEO in America.

    It’s a little hard to make this one case fit with the conventional wisdom that he must have always felt like a girl on the inside and that it was society’s persecution of his true nature that made him act so extremely stereotypically male.

    Perhaps … but probably not. I spent several dozen hours talking to him in 1981 and he showed zero feminine traits.

    In fact, when you start to think about all the celebrity m-to-f transsexuals — the brothers who directed the Matrix, the libertarian economist McCloskey who played football for Harvard, etc etc — you start to see a pattern: that for the highest profile m to f celebrity trans people, the conventional wisdom is backward. These are extreme male brain individuals. We’ve been lied to about them.

    But if you are a scientist you really, really don’t want to have these kinds of super-high-IQ angry people angry at you for telling the truth about their lies, as that NYT article I linked to above demonstrates.

    In summary: trying to understand the enduring patterns of types among the transgendered is science, while trying to understand how to more effectively bully the people who answer your doorbell is marketing research.

    • Steve:

      Studying transgenderism is interesting psychology, I agree. Studying persuasion is interesting political science. I see no reason why Broockman and Kalla’s successful experiment on persuasion should get in the way of other people studying transgenderism. You can call Broockman and Kalla’s study “marketing research”; that’s fine, a research study can be relevant to marketing and also to political science.

      Also, I didn’t see any evidence that the study in question involved “bullying.” I think that “persuasion” is a more accurate term.

      • My point is that the trend, as exemplified by the enormous coverage over the years on the front page of the New York Times of these door to door political marketing studies, is away from social scientists doing scientific research into enduring truths about humanity and toward social scientists doing marketing research for which clients will pay good money and they can’t get themselves in trouble for finding out something politically unpopular.

        For example, Dr. Broockman got top center front page on NYTimes.com today for his marketing research study about persuading voters about the politics of transgenderism.

        But say he had instead looked into the bizarre pattern that’s been staring everybody in the face recently: Why are so many of these high profile middle-aged M to F transsexuals — e.g., Jenner, McCloskey, Morris, Col. Pritzker, Rothblatt, the Wachowskis, etc. etc. — either science fiction fans or rightists or arrogant extreme male brains (or all three)? And how can the obvious realities of their pungent personalities be reconciled with the conventional wisdom that they are really little girls on the inside who have always been bullied by society?

        Would such a scientific study get as much admiring coverage as these marketing research studies into how to manipulate voters into believing the conventional wisdom have received?

        I doubt it: We can see what happened to real scientific inquiry into transgenderism a decade ago, back when the Trans Lobby was hardly as powerful as it is today:

        http://www.nytimes.com/2007/08/21/health/psychology/21gender.html

        This isn’t to say that Dr. Broockman shouldn’t do his marketing research studies. I did lots of marketing research studies in my day.

        Simply, we should be aware that the trend among people who call themselves social scientists is away from what people think of as Science with a capital S and toward marketing research. The latter has always paid better; and now genuine scientific research into human realities is increasingly risky to one’s career. So why not just stick to the safety of marketing research? It’s a reasonable, prudent choice for reasonable, prudent individuals.

        The question, however, is: what are the long term effects on society?

      • Andrew:

        What are your thoughts on the measurement noise in this study?

        I know in the past you’ve been critical of studies due to their poor signal to noise ratio. To the point where even attempting a replication was futile.

        Doesn’t this study fit that critique?

        • Rahul:

          I haven’t looked into it in detail but this study had a pretty large N and if the effect is truly large and consistent, then it can be detected amidst the noise.
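Andrew’s point about N and signal-to-noise can be made concrete with a back-of-envelope power calculation. This is only a sketch under assumptions of my own: roughly 500 voters split evenly between treatment and control, a simple two-sample comparison of means, and no adjustment for the study’s actual design or clustering.

```python
# Back-of-envelope minimum detectable effect (MDE), in standard-deviation
# units, for a two-sample comparison of means. Sample sizes below are
# illustrative assumptions, not the study's exact design.
from statistics import NormalDist

def min_detectable_effect(n_treat, n_control, alpha=0.05, power=0.80):
    """Smallest true effect (in SD units) detectable with the given power."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)   # critical value, two-sided test
    z_power = z.inv_cdf(power)           # quantile for desired power
    se = (1 / n_treat + 1 / n_control) ** 0.5  # SE of difference, unit variance
    return (z_alpha + z_power) * se

# ~500 doors split evenly between conditions (an assumption)
print(round(min_detectable_effect(250, 250), 2))  # ~0.25 SD
```

On these assumptions, only effects of roughly a quarter of a standard deviation or larger are reliably detectable, so the “if the effect is truly large” caveat is doing real work.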

  7. I think the Stalin analogy isn’t good. Stalin was important. So people study him. If someone analyzed the strategy of how the first gay President got elected that would be similar to your example.

    The problem with the Broockman kind of work is that the cohort they study is not unique. Hence the results derive relevance only if they can be generalized. And I think what we are all saying is that we are skeptical that they can be generalized.

    Stalin doesn’t need external validity. He is worth study in itself.

    • Rahul:

      Stalin was important in his own right, but also the more general phenomenon of infighting and maneuvering is important for politics.

      Transgender rights isn’t so important (at least, not for most of us), but it is an example of a more general phenomenon of new political issues where public opinion is fast moving. Still not as important as Stalin, perhaps, but it’s a legitimate topic of political science.

  8. I’m not sure I understand the turnaround here. If this idea was really “contradicted by an entire 900-paper literature”, why would a single positive finding change your mind about anything? I don’t know this field at all, so maybe I’m misunderstanding your original statement about the state of the literature, but I would think if you had 900+ papers saying “no”, and one saying “yes”, you would dismiss the “yes” as noise.

      • We have a prima facie reason for believing that “persuasion” (i.e., marketing & sales) is possible because hundreds of billions of dollars are spent annually on marketers and salesmen (and on marketing researchers to measure them).

        Now it could be that it’s all a hoax, but even in that case the marketers and salesmen are successfully persuading businesses to give them hundreds of billions.

        I first pointed out in May 2013 that the New York Times appeared to be marketing transgenderism as the next big thing after gay marriage:

        http://isteve.blogspot.com/2013/05/post-gay-marriage-cont.html

        This has proven an extraordinarily successful marketing campaign, a real display of how much power the media has to get people to take credulously something that would have struck them as comic without all the salesmanship.

        • Steve:

          I’m not saying you’re wrong—indeed, given that this new experiment worked, that’s evidence that short-term persuasion really can work—but you can also look at this from the perspective of political science. Especially in recent years with extreme polarization, it’s been hard for political groups to persuade people, and there’s been a lot of discussion of the superior effectiveness of mobilization as compared to persuasion.

          Now that this new experiment has shown positive effects, it’s reasonable to say what we’ve been saying in the above thread, that on certain new issues such as transgenderism it’s possible for people to be persuaded. But this was a surprise (even if Glass, Green, etc., exaggerated the level of surprise with the 900 studies etc).

        • Shouldn’t the answer be specific? You can convince some people on some things.

          Just because we convinced one cohort to change their thinking on transgender issues doesn’t imply anything beyond just that.

          Was it really anyone’s position that no one can be convinced to change their opinion on anything at all?

          Sounds like one big collection of strawmen.

        • That persuasion is hard, even in the consumer packaged goods business, actually was the general finding of the revolutionary BehaviorScan test marketing system that I worked on in 1982-85: we could do real world randomized tests of television ads in homes and track shopping at all the grocery stores in town for a year or more. This was marketing research carried out with the highest standards of the scientific method.

          Initially, this expensive test marketing system was extremely popular with consumer packaged goods brand managers, who all seemed to believe that they should have their TV ad budgets doubled. So we were paid to do dozens of tests where we split the sample size of 3000 households into a test group and a control group that were identical over the previous year in purchasing of the brand and category. Then the test group saw twice as many ads for the brand, while the control group saw the usual number of ads for the brand, with public service announcements in place of the extra exposures.

          We persuaded our panel members to show their Shoppers Hotline cards when checking out at any supermarket in town (e.g., Pittsfield, MA).

          And … most of the time, doubling advertising of well-known brands did no good whatsoever. In fact there was a mini-recession in CPG advertising in 1986 as the Behaviorscan-induced disillusionment spread in the industry.

          I was involved in a couple of meta-analyses of our tests and one finding we came up with was that more advertising tends to work when you have some real news to tell the customer. For example, higher ad spending was successful for a new version of Crest toothpaste that included a breakthrough new chemical that had been endorsed by the American Dental Association. That was objectively important news and the more you told viewers about it, the more they were likely to be persuaded by it and remember it the next time they were in the toothpaste aisle.

          But doubling the ad budget for same old same old ads seldom did any good.

        • Steve,

          No one here agrees with your anti-intellectual views. People are just politely tolerating your inane screeds. Please be quiet and let people discuss stats.

  9. Andrew:

    > I’m glad nobody was listening to me on this one!
    Really? You tell someone not to go gambling, they do, they win, and you say you’re glad they didn’t listen to you.

    Even if this replicates and turns out to be of wide importance, that does not mean it was a good (economic) decision to do the study.
    (Paluck and others may have known better but that should be the evaluation of it being good _economics_).

        • Yep.

          “Consequently, to discover is simply to expedite an event that would occur sooner or later, if we had not troubled ourselves to make the discovery. Consequently, the art of discovery is purely a question of economics. The economics of research is, so far as logic is concerned, the leading doctrine with reference to the art of discovery. Consequently, the conduct of abduction, which is chiefly a question of heuretic and is the first question of heuretic, is to be governed by economical considerations.” – Guess Who (copied from wiki)

  10. Count me skeptical on this account as well. Why assume that people do not read into the project and adjust their answers accordingly?

    Here’s my challenge for Broockman and Kalla and what might convince me that there really is something going on here: run the same experiment with your control group and *two* treatment groups, one geared toward increasing transgender tolerance and one geared toward decreasing it (or increasing their placement on the “transgender intolerance scale”). If the effects are equal and opposite, then I’ll buy it, but for now, I believe this research is pretty easily explained by people responding to what is perceived to be socially desirable/acceptable.

    • BC:

      As I wrote in my post above regarding “pushing at an open door,” I think the treatment would have a much larger effect in one direction than the other, as people are already moving in one direction.

      • Right, so the question for me is whether this is really persuading them to take a particular position or whether it is just priming them to respond in the way the door is opening, which they readily know from the broader social context in which they live. Three months out, they still know how they are “supposed” to answer.

    • The problem in social science is that no one really cares whether there’s a real effect. Broockman made it to the NYT. Whether the effect replicates or not, the NYT won’t care.

      The positive effect on Broockman’s career is fait accompli and will last irrespective of whether he discovered a real effect or not. The incentives are all wrong.

  11. This is from the study: “we recruited individuals who came to their doors in either condition (n = 501) to complete follow-up online surveys via email presented as a continuation of the baseline survey. These followup surveys began 3 days (n = 429), 3 weeks (n = 399), 6 weeks (n = 401), and 3 months (n = 385) after the intervention.”

    Who the heck participates in a survey 3 months after the treatment? I question whether this is a representative sample of people. And so I question the validity of the results.
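The attrition worry can be quantified directly from the wave sizes quoted above. A minimal sketch; the per-wave counts come from the quoted passage, while the simple completion-rate reading of them is my own:

```python
# Share of the baseline doorstep sample (n = 501) completing each
# follow-up survey wave, using the counts quoted from the paper.
baseline = 501
waves = {"3 days": 429, "3 weeks": 399, "6 weeks": 401, "3 months": 385}

retention = {wave: round(n / baseline, 2) for wave, n in waves.items()}
print(retention)  # 3-month retention comes out to about 0.77
```

So roughly 77% of the baseline sample was still responding at 3 months. That rate by itself isn’t fatal; the validity question is whether dropouts differ systematically from those who stayed, which raw retention rates cannot settle.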

  12. Transgender canvassers DID have larger effects; look at Figure 1 in the Broockman paper. The difference is not statistically significant, but the difference is apparent. They are misrepresenting the findings to contradict LaCour.

        • Anon:

          No, he told a story, and some aspects of his story were more plausible than others. LaCour’s story involved some things that canvassers were already doing—he didn’t have to make up that part, he just had to make up his data.

          I have no problem with people making model-based predictions. If what LaCour wanted to do was anticipate the effects of some treatments, he could’ve written a paper describing a hypothetical experiment and making some hypotheses—predictions—about what he’d see. Then if someone later did such a study and it conformed with his predictions, he’d get some deserved credit. I suppose you could interpret his fake-data paper as a version of that, but it seems like a stretch to me.

          I feel the same way about Marc Hauser and those monkey videos he didn’t want to share. Nobody was forcing Hauser to run experiments. If all he wanted to do was work on theory and make predictions, he could’ve done so. He was the one who insisted on running experiments, and then he couldn’t handle the fact that the data didn’t conform to what he wanted.

          LaCour, of course, did one step less than Hauser because he didn’t even gather the data.

        • One interesting angle is that, historically, door to door salesmanship like this has had a sleazy reputation. There used to be all sorts of comedy movies about conmen ringing doorbells and all the ploys they used on people.

          But in these recent cases, I see a strong desire on the part of the media to believe that nothing sleazy is going on, just the spreading of Light and Truth.

        • Door to door salesmen used to peddle goods. Now they peddle ideas & opinions.

          Do you think that has changed perceptions of door-to-door salesmen? That is, do we think of the peddlers of goods as sleazier than the peddlers of intangibles?

        • The NYT has been treating these studies of door-to-door salesmanship as inevitably doing Good Works because they peddle a socially approved view, kind of like people used to assume that door-to-door Bible salesmen had to be using ethical techniques. Here, for example, is Flannery O’Connor’s famous 1955 short story “Good Country People,” about a one-legged female atheist with a Ph.D. in philosophy whose mother persuades her to go on a date with a traveling Bible salesman:

          http://engl273-3-stair.wikispaces.umb.edu/file/view/Good+Country+People.pdf

          My suspicion would be, however, that door to door sales techniques that are effective tend to push the ethical envelope.

        • Steve,

          While I disagree that there is anything negative about this effort in the least, I disagree even more with your Bible-salesman analogy. It is simply a false analogy to compare persuasion about ideas with persuasion to purchase a good or service. Though there may be similar techniques at play, there would be inevitable differences.

          A more apt comparison would be Mormons on mission or Jehovah’s Witnesses going door to door in an effort to persuade someone to change their ideas about God and Jesus Christ. Or a politician going door to door to persuade someone of their political ideology and, in turn, win their vote for the office they seek.
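The statistical point in comment 12 (whether transgender and nontransgender canvassers differed in effectiveness) turns on the standard error of a difference, not on eyeballing two estimates. Here is a minimal sketch of that comparison, with all numbers hypothetical rather than taken from the paper:

```python
import math

def diff_of_effects(est1, se1, est2, se2):
    """Summarize the difference between two independent effect estimates.

    The standard error of the difference is sqrt(se1^2 + se2^2), so an
    "apparent" gap between two estimates can easily be well within noise.
    """
    diff = est1 - est2
    se_diff = math.sqrt(se1 ** 2 + se2 ** 2)
    return diff, se_diff, diff / se_diff

# Hypothetical numbers for illustration only (not from the paper):
# group 1 effect 0.30 (se 0.10), group 2 effect 0.15 (se 0.10).
diff, se_diff, z = diff_of_effects(0.30, 0.10, 0.15, 0.10)
# The gap may look meaningful next to the effects themselves, yet the
# z-score of the difference is close to 1, far from conventional
# significance thresholds.
```

This is the familiar point that the difference between “significant” and “not significant” is not itself statistically significant; the appropriate check is a direct test of the difference.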

  13. This is obviously anecdotal, but if you talk to people about how they came to their opinions on things, or more importantly how they came to change their opinions, particularly on difficult things, they will often mention a single moment or encounter where that change happened or started to happen. You mentioned this in your talk on Penumbras at NYR yesterday, where someone’s son comes out as gay and the whole topic realigns for him; it doesn’t happen that minute, it takes some thought and reflection. Now, admittedly a child is totally different from a stranger you encounter (that’s why the penumbra matters), but still it can be the same thing. Knowing one person who had an abortion can do it (and that’s why I think the “within the last 5 years” part of your data is problematic, i.e. it could have been a high school classmate and now you are 60). Also, you don’t actually know what happens between the two measures. Did they talk to anyone about the topic? Did having talked about it make them more attuned to noticing other references? I can think of a lot of ways this could work. It’s not the road to Damascus exactly.

    • Elin:

      I agree there are a lot of complexities here and it would be good to study the qualitative aspects of persuasion. A key part of our study was to just get an overall measure of penumbra size and shape, and it does seem that the individual part of our measurement had some problems.

  14. The LaCour study got so much attention even before the fraud was revealed not only because the good guys won but because the idea was plausible. If people who think in terms of negative caricatures talk with an actual person who is a member of the despised category, their attitudes will soften. Is there really a 900-page literature that says this idea is wrong? My hunch at the time was that if the study had actually been done, it would have found positive results.

    Is it merely market research? Why is the label important? I’m not sure what the difference between good market research and science is except in the immediacy of its application. Anyway, yesterday’s science becomes today’s application. Nor is that application inevitably liberal. You could probably use the same persuasion techniques to get liberals to rethink their position on guns. Most gun owners are responsible, upstanding citizens who use their guns in responsible and safe ways. They are not the paranoid, homicidal, nutjob, fascist caricatures that liberals may imagine. Real conversations with real gun owners might get anti-gun liberals to be less absolutist in their views.

    • The label matters because the point is to find enduring truths. If the next study cannot replicate this and we make the excuse “Ah, but their canvassers weren’t good enough,” we’ve hardly made any progress in understanding.

      That’s the difference between market research and science. One sets out specifically to solve the immediate problem, e.g. swinging the voters in Florida on transgender issues; external validity is irrelevant.

      Science, on the other hand, hopefully, discovers a pattern that’s generalizable.

      • Rahul:

        There are journals of marketing research (which I’ve published in). They like their research to be reproducible too! I don’t think the distinction between marketing research and scientific research is so clear.

        • But marketing researchers don’t like their work to be _too_ reproducible, because if you learn enduring truths, it hurts your marketing research business.

          This actually happened to the once enormously successful BehaviorScan test-market system in the second half of the 1980s. When it debuted in 1980, BehaviorScan enjoyed tremendous demand from the top consumer packaged-goods marketers because many brand managers hoped they finally had a scientifically impeccable way to convince their bosses to let them greatly increase their TV ad budgets.

          BehaviorScan conducted a huge number of tests of doubling ad deliveries up through the mid-1980s. The quality of the research was excellent. But the results kept coming up the same: unless you had something new and important to tell customers, increasing advertising for one year had very little measurable effect upon sales.

          In the second half of the 1980s, clients cut way back on BehaviorScan testing because this lesson had started to sink in.

          I ran one test for a famous brand in which they had three cells instead of two: a control group that saw the national level of advertising, a test group that saw twice as many commercials of Mr. A not being able to stop squeezing the B, and another test group that saw half as many of these commercials, which had been running with the same character for two decades. There was no difference in sales.

          I suggested to my famous client that they should test all their brands to see if they could cut their huge advertising budget: run BehaviorScan tests for two years and if there is no drop off in sales, then go national, while continuing to run the BehaviorScan test. If there started to be a drop off in the test market, with its two year lead on the rest of the country, then advertising could be boosted nationally before much harm was done. But the client said that X&Y brand managers would never spend money to see if they should have their advertising budgets cut. You don’t rise up the corporate hierarchy at X&Y by getting your advertising budget cut. (That would be like an ambitious officer at the Pentagon proving that America shouldn’t spend a trillion dollars on the F-35 — not gonna happen.)

        • You’ve identified the problem: Mistaking correlation for causation. The solution is not to dismiss marketing research, the solution is to do it better such that one is able to make robust causal claims.

      • An experiment in social science does not discover a pattern. It tests an idea in one particular setting. The pattern comes from the accumulation of similar experiments in different settings. I don’t see how desiring a specific outcome makes the experiment any less scientific. Is there a difference between, “I want to find out if having students think of The Ten Commandments before they take a test makes them less likely to cheat,” and “I really, really want to reduce cheating so I’m going to try this Ten Commandments thing and see if it works”? By your definition (“solve an immediate problem”), the second would be mere market research.

        • From the title, “How do you reduce prejudice toward transgender people? This new study explains,” it sounds more like they are selling it as a general pattern.

        • Rahul:

          I really didn’t like that title, which was not the original title I gave for my post at the Monkey Cage. It just seemed like too much trouble to change it. Next time I’ll be more careful and make sure that my titles don’t get changed without me having a chance to check.

    • Jay:

      Are anti-gun liberals “absolutist”? I’d argue that “absolutist” is, paradoxically, a relative term. To a pro-gun person, it’s absolutist to require registration of bullets or whatever; to an anti-gun person, it’s absolutist to allow just about anyone to buy a gun. It’s not that I’m saying that everyone’s an absolutist—on any issue, there will be people with moderate positions—it’s just that I think that by using a term like “absolutist,” you’re assuming a lot.

  15. One point I’m unclear on is: If you surveyed the same person multiple times over three months (without intervention) what’s the natural variation in their responses?

    i.e. How noisy are these measurements? Are respondents consistent about attitudes? Are attitudes consistently measured by the questionnaires?
