Cherry-picking during pumpkin-picking season? (the effects of the Jan. 6th hearings)

This post is by Jeff Lax.

Is something off in the recent Talking Points Memo by Mindy Finn/Citizen Data, on the effects of the Jan. 6th hearings?  It’s about the change in people’s attitudes in a pair of surveys before and during the hearings. It is framed around the basic health of our democracy–what could be more serious? I was going to use it to respond to a student question… but then I looked closer.

https://talkingpointsmemo.com/cafe/the-jan-6-committee-is-having-a-measurable-impact-on-voter-attitudes

They basically claim only good news for democracy and with great confidence: “While there have been conflicting reports on the impact of the Jan. 6 hearings, our polling has been more conclusive. Since the hearings began, more Americans have come to view Jan. 6 as a violent attempt to overthrow the government and more Americans now see the committee’s findings as legitimate. As we look ahead to the midterms and on to 2024, I believe the committee’s communications offer a playbook to replicate on the journey to protect our democracy and thwart those who threaten it.”  They also claim that, after the Jan. 6 Committee’s hearings, to nearly quote, Americans are more likely to hold Trump accountable, that those supporting him will face headwinds, and that efforts to rebuild democracy are paying off. 

My concerns are NOT about tricky stats or surveying issues. Rather, I think they misread their own results and neglect to note contrary evidence from their own numbers. Why is the piece doing so much cherry-picking in reporting results? Why is there so much cheerleading given the number of findings that are bad or mixed? Why isn’t it at least clear on which numbers are being compared? One worries the findings are just noise and the reporting too selective to be considered properly objective.

Start with the claims that comparisons between April and July show good news: “Skeptical Americans,” Finn writes, “including those who initially believed the 2020 election was tainted through widespread voter fraud, might be changing their minds” and “the share who believed Joe Biden won the 2020 election increased by 5%” and “Nearly twice the number of Americans who view the 2020 election as ‘stolen’ and Jan. 6th as peaceful now view the events as a violent attempt to overthrow the government.” You can get to some of the underlying numbers through a link in the original post: https://citizendata.docsend.com/view/umfmknfvebgf5uru

Their analysis of the “Did Biden win” question seems right in isolation. Comparing July to April, more people said Biden won, fewer had doubts about this, and fewer said he did not win. But the results from that question are undercut by the numbers in the main question on Jan. 6th.

This is obscured by the non-standard (or at least overly complicated) set of options respondents are given in the key question. To answer it, you have to implicitly make three decisions with two options each, picking one of a permitted six combinations for labeling Jan. 6th: (1) “Violent attempt to overthrow the government in response to a legitimate election,” (2) “Peaceful protest against a stolen election,” (3) “Violent protest against a legitimate election,” (4) “Violent attempt to overthrow the government in response to a stolen election,” (5) “Violent protest against a stolen election,” or (6) “Peaceful protest against a legitimate election.”  (For better or worse, they don’t allow all eight possibilities, so you can’t say “peaceful attempt to overthrow the government in response to a legitimate election” or the same with “… against a stolen election.”)

So if you think Jan. 6th was “violent,” you narrow it down to options 1, 3, 4, or 5. But then do you think it was an “attempt to overthrow” (1 or 4)? Or a “protest” (3 or 5)? Then you still have to choose whether the election was “legitimate” (1 or 3) or “stolen” (4 or 5). If instead you start with “peaceful,” you then have to contend with two options, “stolen” (2) or “legitimate” (6), scattered among the six options in total. Or you can start with “legitimate election” (1, 3, or 6) or “stolen” (2, 4, or 5), and then still have to choose among “violent attempt to overthrow,” “violent protest,” and “peaceful protest.” So messy. So hard on the survey respondent. Too hard? Perhaps even too hard to get the write-up right?
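One way to see the structure of the question is to enumerate the three binary choices directly. A quick sketch (the category wordings are from the survey; the enumeration itself is just illustrative):

```python
from itertools import product

# The three implicit binary choices behind the answer options
violence = ["violent", "peaceful"]
framing = ["attempt to overthrow the government", "protest"]
election = ["legitimate", "stolen"]

all_combos = list(product(violence, framing, election))  # 2 x 2 x 2 = 8

# The survey drops the two "peaceful attempt to overthrow" combinations,
# leaving the six options offered to respondents
offered = [
    (v, f, e) for v, f, e in all_combos
    if not (v == "peaceful" and f == "attempt to overthrow the government")
]
print(len(all_combos), len(offered))  # 8 6
```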

Here is what the results look like, tabulated from the “behind-the-scenes” graph they link to (with less information in the article itself):

category (#)                                                                          April   July   Change
Violent attempt to overthrow the government in response to a legitimate election (1)   34.1    33.9   -0.2
Peaceful protest against a stolen election (2)                                         16.1    14.6   -1.5
Violent protest against a legitimate election (3)                                      12.4    13.7   +1.3
Violent attempt to overthrow the government in response to a stolen election (4)        6.5    11.0   +4.5
Violent protest against a stolen election (5)                                           9.6     9.6    0.0
Peaceful protest against a legitimate election (6)                                      3.7     6.1   +2.4

Do we see only good news? If you think (1) (‘violent-overthrow-legitimate’) is correct, then it would be good news if that percentage went up. It didn’t (-0.2% from April to July). True, fewer (-1.5%) falsely call it (2) ‘peaceful-protest-stolen’ (that’s good), but more (+2.4%) now call it merely (6) ‘peaceful-protest-legitimate’ (that’s bad, or mixed). Those who said (3) ‘violent-protest-legitimate’ went up (+1.3%, that’s bad, or mixed), there was no change at all in (5) ‘violent-protest-stolen’ (0.0%), and the largest increase was in (4) ‘violent-overthrow-stolen’ (+4.5%, mixed at best: ‘violent attempt to overthrow’ is accurate, ‘stolen’ is not).

Where did the main claim come from, that nearly twice the number who said ‘peaceful-stolen’ in April say ‘violent-overthrow-legitimate’ in July? Maybe the write-up was meant to rephrase this text in the linked report: “At the conclusion of the hearings in July, the percentage of Americans who viewed January 6th as a “violent attempt to overthrow the government in response to a stolen election” nearly doubled from 6.5% to 11%; most of this increase came from those who had previously indicated they were uncertain about the events of January 6th.” That’s category 4’s pair of numbers. But that’s not the same as the main text’s “Nearly twice the number of Americans who view the 2020 election as ‘stolen’ and Jan. 6 as peaceful now view the events as a violent attempt to overthrow the government.”  (Also, any cross-tabs showing where “most of this increase came from” are not shown.)  

I suppose they could be comparing category 1 in July at 33.9% (violent-overthrow-legitimate) to category 2 in April at 16.1% (peaceful-protest-stolen) but the former is MORE than twice the latter. And I don’t think that comparison is meaningful really.  Heck, one could also say more than twice the number who said that January 6 was ‘violent-overthrow-stolen’ (6.5%, category 4, April) later say it was a ‘peaceful protest against a stolen election’ (14.6%, category 2, July). How dramatically bad does that sound! Moreover, one could have said using only April to April comparisons that over twice as many said good category 1 as bad category 2.  Again, I’m not sure which results are being discussed in the main finding or why it makes sense to arbitrarily compare categories and time periods with so many comparisons that could be done.
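For what it’s worth, the candidate “twice”/“nearly doubled” ratios can all be checked from the table above:

```python
# Ratios behind the possible "twice"/"nearly doubled" readings, using the
# percentages from the table above
print(round(33.9 / 16.1, 2))  # category 1 in July vs. category 2 in April: 2.11 (more than twice)
print(round(14.6 / 6.5, 2))   # category 2 in July vs. category 4 in April: 2.25 (more than twice)
print(round(11.0 / 6.5, 2))   # category 4, July vs. April: 1.69 ("nearly doubled")
print(round(34.1 / 16.1, 2))  # April-only: category 1 vs. category 2: 2.12 (over twice)
```

None of these matches “nearly twice” except the July-vs-April growth in category 4 itself, which supports reading the main-text sentence as a garbled rephrasing of the linked report.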

What can we safely say?  Those who admit Jan. 6th was “violent” did go from 62.6% to 68.2% (summing categories 1, 3, 4, and 5; an increase of 5.6%). So that’s good news. Yet a quarter of that overall increase is the increase in people who said it was only a ‘violent protest’ of a ‘legitimate election’ (there was no change in those who said ‘violent protest of stolen election’). Minimizing it as ‘protest’ (compared to ‘attempt to overthrow’) doesn’t seem so great. And there was an increase of 0.9% in those who said it was a ‘peaceful protest’ of any type (that’s bad). The percent who thought the election ‘legitimate’ did increase by 3.5% (that’s good), but those who said ‘stolen’ went up nearly as much, by 3% (that’s bad).  Movement seems to be coming from “don’t know” (down 6.5% from April to July) as much as from converting those with false views.
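These aggregate numbers are easy to verify from the six-category table above. A quick sketch (the “don’t know” residual assumes the six options plus “don’t know” exhaust the responses):

```python
# Aggregate checks on the six-category table above (percent choosing each option)
april = {1: 34.1, 2: 16.1, 3: 12.4, 4: 6.5, 5: 9.6, 6: 3.7}
july  = {1: 33.9, 2: 14.6, 3: 13.7, 4: 11.0, 5: 9.6, 6: 6.1}

def share(d, cats):
    """Total percent across a set of categories, rounded to one decimal."""
    return round(sum(d[c] for c in cats), 1)

violent = [1, 3, 4, 5]      # any "violent" option
peaceful = [2, 6]           # any "peaceful" option
legitimate = [1, 3, 6]      # election called "legitimate"
stolen = [2, 4, 5]          # election called "stolen"

print(share(april, violent), share(july, violent))        # 62.6 68.2 (+5.6)
print(share(april, peaceful), share(july, peaceful))      # 19.8 20.7 (+0.9)
print(share(april, legitimate), share(july, legitimate))  # 50.2 53.7 (+3.5)
print(share(april, stolen), share(july, stolen))          # 32.2 35.2 (+3.0)

# Residual "don't know" share in each wave
print(round(100 - sum(april.values()), 1),
      round(100 - sum(july.values()), 1))                 # 17.6 11.1 (down 6.5)
```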

Turning to the question on the legitimacy of the Jan. 6th committee, more say it is ‘legitimate’ and its recommendations should be ‘seriously considered’ (good!), but more also say it is ‘not legitimate’ and should be ‘ignored’ (bad!). And more say it is legitimate but its recommendations should be ‘not seriously considered.’ That’s… bad? Good? No idea. And how is ‘not seriously considered’ different from ‘ignored’? I give up.

 

Darn that Lindsey Graham! (or, “Mr. P Predicts the Kagan vote”)

On the basis of two papers and because it is completely obvious, we (meaning me, Justin, and John) predict that Elena Kagan will get confirmed to be an Associate Justice of the Supreme Court. But we also want to see how close we can come to predicting the votes for and against.

We actually have two sets of predictions, both using the MRP technique discussed previously on this blog. The first is based on our recent paper in the Journal of Politics showing that support for the nominee in a senator’s home state plays a striking role in whether she or he votes to confirm the nominee. The second is based on a new working paper extending “basic” MRP to show that senators respond far more to their co-partisans than the median voter in their home states. Usually, our vote “predictions” do not differ much, but there is a group of senators who are predicted to vote yes for Kagan with a probability around 50% and the two sets of predictions thus differ for Kagan more than usual.

The other key factors that enter into the models (which build on the work of Cameron, Segal, Songer, Epstein, Lindstadt, Segal, and Westerland) are senator and nominee ideology, party and partisan control, presidential approval, nominee quality, and nomination timing.

The bottom line? The older model predicts nine Republican defections (votes for Kagan) but the newer model breaking down opinion by party predicts only five. Ten Republicans straddle or push against the 50% mark for point predictions.

Median state-level support for Kagan is approximately the same as it was for Alito, and about nine points higher than it was for Sotomayor. Median state-level support for Kagan among Republicans is about 12 points higher than it was for Sotomayor. On the other hand, Obama’s approval is definitely lower. So far, we have only one national poll to work with (which we thank ABC for), but we will update our data and “predictions” later when other poll data become available. We do not yet have Jeff Segal’s official scores for quality and ideology, so we are currently fudging these a bit (using the same scores as for Sotomayor).

First, here is the distribution of opinion across states by party groups (using an extension to the MRP technique to generate not only opinion by state using national polls, but opinion by party by state):


kagan_opinion.png
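For readers unfamiliar with the mechanics, the poststratification step of MRP works roughly like this: cell-level predicted support (from the multilevel model) is averaged using population cell counts as weights, either over a whole state or within a party group in that state. A toy sketch with entirely hypothetical numbers, not our actual model or data:

```python
# Toy poststratification sketch: (state, party, predicted_support, population_count),
# all numbers hypothetical
cells = [
    ("SC", "Republican", 0.30, 900_000),
    ("SC", "Democrat",   0.75, 600_000),
    ("SC", "Other",      0.50, 500_000),
]

def poststratify(cells, keep=lambda c: True):
    """Population-weighted average of predicted support over selected cells."""
    sel = [c for c in cells if keep(c)]
    total = sum(n for *_, n in sel)
    return sum(p * n for _, _, p, n in sel) / total

state_support = poststratify(cells, keep=lambda c: c[0] == "SC")
gop_support = poststratify(cells, keep=lambda c: c[1] == "Republican")
print(round(state_support, 3), round(gop_support, 3))  # 0.485 0.3
```

The contrast between `state_support` and `gop_support` is the point of the extension: a senator responding to co-partisans can face a very different number than one responding to the state median voter.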

Next, here are the predicted probabilities of a positive confirmation vote for Republican senators (Democrats are all predicted to vote yes):


kagan_gop_preds.png

If only Lindsey Graham would do the right thing and vote no…

Future Trends for Same-Sex Marriage Support?

How will support for same-sex marriage change over time? One way to speculate is to break down current support across age groups, and that’s what Justin and I have done, building off of our forthcoming paper.

We plot explicit support for allowing same-sex marriage broken down by state and by age. Seven states cross the 50% mark overall as of our current estimates, but the generation gap is huge. If policy were set by state-by-state majorities of those 65 or older, no state would allow same-sex marriage. If policy were set by those under 30, only 12 states would not allow same-sex marriage.

marriagebyage.png

Political Neuroscience

A piece by Brandon Keim in Wired points out some issues in the fMRI brain-politics study on reactions to presidential candidates discussed in a recent NYT op-ed. For example,

Let’s look closer, though, at the response to Edwards. When looking at still pictures of him, “subjects who had rated him low on the thermometer scale showed activity in the insula, an area associated with disgust and other negative feelings.” How many people started out with a low regard for Edwards? We aren’t told. Maybe it was everybody, in which case the findings might conceivably be extrapolated to the swing voter population of the United States. But maybe it was just five or ten voters, of whom one or two had such strong feelings of disgust that it skewed the average. What about the photographs? Was he sweating and caught in flashbulb glare that would make anyone’s picture look disgusting? How did the disgust felt towards Edwards compare to that felt towards other candidates? How well do scientists understand the insula’s role in disgust — better, I hope, than they understand the Romney-activated amygdala, which is indeed associated with anxiety, but also with reward and general feelings of arousal?

(And don’t forget “Baby-faced politicians lose” on this blog.)