I hate that “Iron Law” thing

Dahyeon Jeong wrote:

While I was reading your post today, “Some people are so easy to contact and some people aren’t,” I came across your older posts, including “Edlin’s rule for routinely scaling down published estimates.”

In this post you write:

Also, yeah, that Iron Law thing sounds horribly misleading. I’d not heard that particular term before, but I was aware of the misconception. I’ll wait on posting more about this now, as a colleague and I are already in the middle of writing a paper on the topic.

I was especially curious about this, so I searched your blog and CV, but I didn’t find a relevant follow-up post or article on this topic. If there’s indeed no post on this, I would really look forward to reading it at some point in the future.

Jeong’s email was in 2016, and my quote above is from 2014. In the meantime, Eric Loken and I finally wrote that paper: it came out early this year. Here’s our article, and here and here are some relevant blog posts.

So we do make progress. Slowly.

5 Comments

  1. Sean S says:

    Iron Law – he who has the Iron makes the rules

  2. Thanks for following up! I’ve read your Science article and I hope it is read widely by economists (and other professions). Here is the reference to the blog post “Edlin’s rule for routinely scaling down published estimates.” http://andrewgelman.com/2014/02/24/edlins-rule-routinely-scaling-published-estimates/

  3. Thanatos Savehn says:

    Another reason why this is the first blog I hit when I wake up in the morning (right after the news, just in case a killer asteroid is on the way and I can stay home and go back to sleep) – that “I never thought about that before” moment. So a question. I’m standing there telling lawyers about all this wacky statistics stuff and get to the part about false positives, false negatives and decision-making in a widget factory and conclude: “Ta da! And that’s how they do quality control but science ain’t quality control because …” when someone asks “Ok, I get your long run widget factory ship/don’t ship decision rule but doesn’t your claim that widgets can never be perfect also apply to the widget-O-meter you used to measure your sample of widgets?” “Uh, yes” I reply in an attempt to dodge the unforeseen question, “and that’s why they calibrate the widget-O-meter every so often!” “No doubt” comes the riposte “but the widget-O-meter-calibrator is no more perfect than the widget-O-meter that measures the widget; and how, pray tell, did quality control work at the widget-O-meter-calibrator factory; or did you calibrate it by zeroing it on one of those admittedly imperfect widgets? And how do you account for such error in a widget factory?” Thus my question: how do you model/cope with measurement error, especially when it’s measurement-error turtles all the way down (or tautological turtles, as in the latter case).

    • Sean S says:

      Dealing with measurement uncertainty is a lot easier in the factory than it is in some of these social science “experiments.”

      Some ways to deal with measurement error: 1) use a continuous measure rather than pass/fail so sample sizes are manageable; 2) measure twice, with multiple gauges or operators; 3) back off your spec limits by 2 measurement-error sigmas.

      Of course this assumes that you’ve dug into the sources of measurement error and are unable to further reduce it without spending big bucks.
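The guard-banding and repeat-measurement ideas in the comment above can be sketched in a few lines. This is a minimal illustration, not anyone's production QC code: the gauge sigma, spec limits, and function names are all made up for the example. It tightens each spec limit by 2 measurement-error sigmas (so a part that barely passes the test is very likely truly in spec) and averages repeated gauge readings, whose error sigma shrinks like 1/sqrt(n).

```python
import random
import statistics

random.seed(42)

# Hypothetical numbers for illustration only.
SIGMA_MEAS = 0.05          # std dev of one widget-O-meter reading
LOWER, UPPER = 9.0, 11.0   # true spec limits, in whatever units

# 3) Guard-banding: pull each spec limit in by 2 measurement-error
#    sigmas, so gauge noise rarely lets a bad widget slip through.
GUARD = 2 * SIGMA_MEAS
TIGHT_LOWER, TIGHT_UPPER = LOWER + GUARD, UPPER - GUARD

def gauge(true_value, n_readings=1):
    """2) Average n noisy readings; error sigma shrinks by sqrt(n)."""
    readings = [random.gauss(true_value, SIGMA_MEAS)
                for _ in range(n_readings)]
    return statistics.mean(readings)

def accept(true_value, n_readings=1):
    """Ship/don't-ship decision against the tightened limits."""
    measured = gauge(true_value, n_readings)
    return TIGHT_LOWER <= measured <= TIGHT_UPPER
```

Note the trade-off: guard-banding scraps some truly good widgets near the limits in exchange for fewer bad ones shipped, which is why step 3 in the comment only makes sense after you've already driven the gauge error down as far as is economical.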
