My interview on EconTalk, and some other podcasts and videos

[cat picture]

Russ Roberts recently interviewed me for his EconTalk podcast. We talked about social science and the garden of forking paths. Roberts was also going to talk with me about Case and Deaton, but we ran out of time.

Whenever I announce a talk, people ask in the comments whether it will be streamed or recorded. Most of my talks are neither, but some are, and sometimes I'm interviewed on podcasts. I can't vouch for the podcasts because I hate the sound of my own voice, but here are some things you can check out:

Podcast with Chauncey DeVega on the 2016 Election.

Podcast with Barry Lam, who interviewed several people, including me, on the replication crisis in psychology.

Mutual interview with Christian Hennig on our new paper, Beyond subjective and objective in statistics.

Podcast with Julia Galef on why Americans vote the way they do.

My talk at New York R conference 2016 on the political impact of social penumbras. Also starts off with some discussion of Bayes, church, the folk theorem, and some Stan.

My talk at New York R conference 2015, But When You Call Me Bayesian, I Know I’m Not the Only One.

My talk in 2016 at a Harvard conference on big data.

My talk at Bath on crimes against data.

My talk at Oxford on teaching quantitative methods to social science students.

Bloggingheads with Eliezer Yudkowsky on probability and statistics.

Bloggingheads with Will Wilkinson on Red State, Blue State, Rich State, Poor State.

You can find more on Google. The talks come out ok on video but you don’t get a sense of the audience participation. In real life people are actually laughing at the jokes.

19 Comments

  1. Rahul says:

    >>> because I hate the sound of my own voice<<<

    Whew! I'm glad I'm not the only one. :)

  2. ? says:

    Andrew, I get why you went on EconTalk, but you really just fed into Russ Roberts’ awful empirical nihilism. Issues with statistical significance are obviously important but you were talking to a guy who basically believes you can’t learn *anything* from data, and just giving him and his listeners more reasons to ignore the real world. And naturally he tried to get you, as a statistician, to join him in rejecting minimum wage studies he doesn’t like, and to confirm that statistics is just a tool to push ideology.

    • Ben Prytherch says:

      I’ve been listening to EconTalk for a long time and I disagree with this characterization of Roberts’ beliefs. He does push his skepticism further than most, but he also welcomes challenges from his guests, and encourages them to argue against his view of things. Recent examples are his podcasts with James Heckman (http://www.econtalk.org/archives/2016/01/james_heckman_o.html) and Noah Smith (http://www.econtalk.org/archives/2015/12/noah_smith_on_w.html) – Smith really took him to task and got him to concede that there are plenty of cases in which economists changed their minds in light of convincing data. There are a bunch more like these.

      I also think his skepticism, while sometimes over the top, is rooted in serious concerns. Economic data are as messy as they come, and it’s smart to be skeptical of causal statements based on the results of regression models for a world where everything is correlated with everything and the “data generating mechanisms” are massively complex.

      • ? says:

        Being “skeptical” is different from saying “There is likely to be no way of knowing which view is correct with anything close to reliability or certainty.” If you truly believed that, then you should have no interest in this blog–if you don’t think data can resolve theoretical questions, then why do you need statistics? Who cares about the garden of forking paths if empirics are useless?

        Calling him merely “skeptical” is a far-too-charitable reading of his extremely backwards views. The most charitable view would be that he’s playing devil’s advocate in an interview setting, but given that he publishes the same opinions outside of his podcasts, I think it’s safe to say he really believes what he’s saying.

        • Ben Prytherch says:

          Statements like “there is likely to be no way of knowing which view is correct with anything close to reliability or certainty” are a criticism of the methods and data being currently used. He isn’t saying this about physics. I’ve never heard him say anything like “data can’t resolve theoretical questions” in general; he says these things in the context of debates in which competing sides are sparring against each other using regression models.

    • Anthony St. John says:

      This is a gross mischaracterization of Roberts’ beliefs. I think his concern is that no one is convinced by data anymore. Roberts seems more aware of his biases than many other economists, and is willing to engage respectfully with those who disagree with him — without name calling.

      The guest list on EconTalk has included some of the most interesting authors, economists and public intellectuals out there. The only way the audience could fail to be exposed to the real world is if they listened to the podcast with their fingers in their ears. Perhaps that explains your remarks.

      • ? says:

        I agree the podcast often has interesting people on it, saying lots of interesting things. I think Russ himself is the worst part of it. You can see this in Andrew’s interview, and especially the one with Josh Angrist.

        His concern certainly isn’t that no one is convinced by data. His recent article (and everything he says about the topic) seems to suggest that he thinks that economists are *overly* swayed by empirical results on the minimum wage.

        • Rahul says:

          I love the EconTalk podcasts & I think Russ does a good job as an interviewer.

          That said, I cannot deny that at times he drops the role of a neutral, unbiased interviewer and instead searches the interviewee’s answers for bits that reinforce his own preconceived (libertarian) beliefs.

          I can see why that part may be annoying, but I just ignore it in the interest of enjoying the interview, which is usually great in itself.

  3. Shravan says:

    WOW! Did Brian Nosek just say on Hi-Phi Nation that he expected that 5% of the very different studies in the Replicability Project would be false positives? “95% of the studies should replicate successfully”, said the moderator, explaining how the p-value works. She also said that if the p-value is less than 0.01, then the result should replicate 99% of the time!!!!

    This was on Hackademics II. The bizarre statements begin at 8 minutes or so.

    • Keith O'Rourke says:

      Yup they did :-(
      (Not sure how the podcast was put together or vetted and Nosek did not actually say exactly why 5% was expected)

      • Shravan says:

        Yeah, you’re right. I think the moderator might have been recording her talk alone, after having collected the interviews. I thought podcasts were kind of live interviews, but this one didn’t have that feel. She should have had her text about p-values vetted by her interviewees.

        I would not even agree with Nosek’s statement, though, that he expected only 5% of experiments in the Replicability Project to be false positives. One cannot make such a statement based on a p-value (the topic of that part of the podcast segment), which is a one-time outcome, a random variable, from a single experiment. Maybe he’s thinking of the alpha level (the Type I error rate). The two cannot be treated as the same thing.

        • Keith O'Rourke says:

          > The two cannot be treated as the same thing.
          One would hope that Nosek knows that – but he might not.

          He might have been referring to his past naive expectation and the expectation of the community that only 5% would be false.

          I am not sure I would agree to be recorded for that podcast if this cut and paste editing is actually being done!

          • Shravan says:

            It’s astonishing how poor the understanding of the moderator is, and how the people doing the podcast didn’t think to ask one of their interviewees (Andrew Gelman) if they had got it right. He could have set them straight.

            The p-value is a concept that almost nobody will ever understand, but almost everyone will continue to use anyway.

            I just reviewed a paper in which the authors keep saying they got a result that was “reliable” because p < 0.05. What does “reliable” mean? This must be one of psychology’s contributions to statistics, because I have never seen that word in a math stats textbook.
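The p-value confusion discussed in this thread can be checked with a quick simulation. This is an illustrative sketch only: the design and effect size below are made up, not taken from the Replicability Project. It shows that the chance an exact replication reaches p < 0.05 equals the study's power, which can sit far below 95% even when every original result was "significant":

```python
import numpy as np
from scipy import stats

# Hypothetical low-powered design: 20 subjects per group,
# true effect of 0.4 standard deviations.
rng = np.random.default_rng(1)
n, effect, n_sims = 20, 0.4, 5000

def p_value():
    """Run one two-sample experiment and return the t-test p-value."""
    a = rng.normal(0.0, 1.0, n)
    b = rng.normal(effect, 1.0, n)
    return stats.ttest_ind(a, b).pvalue

pvals = np.array([p_value() for _ in range(n_sims)])

# Fraction of experiments reaching p < 0.05: this estimates power.
# An exact replication is just another independent draw from the same
# design, so this is also the replication rate -- regardless of whether
# an original study happened to land on p = 0.049 or p = 0.001.
power = np.mean(pvals < 0.05)
print(f"replication rate for exact replications: {power:.2f}")
```

Under these made-up numbers the rate comes out in the low twenties of a percent, nowhere near the "95%" or "99%" figures quoted above: replication probability is governed by power, not by 1 minus the p-value or 1 minus alpha.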
