
BREAKING . . . Princeton decides to un-hire Kim Jong-Un for tenure-track assistant professorship in aeronautical engineering


Full story here.

Here’s the official quote:

As you’ve correctly noted, at this time the individual is not a Princeton University employee. We will review all available information and determine next steps.

And here’s what Kim has to say:

I’m gathering evidence and relevant information so I can provide a single comprehensive response. I will do so at my earliest opportunity.

“In my previous post on the topic, I expressed surprise at the published claim but no skepticism”


Don’t believe everything you read in the tabloids, that’s for sure.

P.S. I googled to see what else was up with this story and found this article, which reported that someone claimed that Don Green’s retraction (see above link for details) was the first for political science.

I guess it depends on how you define “retraction” and how you define “political science.” Cos a couple of years ago I published this:

In the paper, “Should the Democrats move to the left on economic policy?” AOAS 2 (2), 536-549 (2008), by Andrew Gelman and Cexun Jeffrey Cai, because of a data coding error on one of the variables, all our analysis of social issues is incorrect. Thus, arguably, all of Section 3 is wrong until proven otherwise. We thank Yang Yang Hu for discovering this error and demonstrating its importance.

Officially this is a correction not a retraction. And, although it’s entirely a political science paper, it was not published in a political science journal. So maybe it doesn’t count. I’d guess there are others, though. I don’t think Aristotle ever retracted his claim that slavery is cool, but give him time, the guy has a lot on his plate.

Objects of the class “Foghorn Leghorn”

Reprinting a classic from 2010:

[Image: Foghorn Leghorn]

The other day I saw some kids trying to tell knock-knock jokes. The only one they really knew was the one that goes: Knock knock. Who’s there? Banana. Banana who? Knock knock. Who’s there? Banana. Banana who? Knock knock. Who’s there? Orange. Orange who? Orange you glad I didn’t say banana?

Now that’s a fine knock-knock joke, among the best of its kind, but what interests me here is that it’s clearly not a basic k-k; rather, it’s an inspired parody of the form. For this to be the most famous knock-knock joke—in some circles, the only knock-knock joke—seems somehow wrong to me. It would be as if everybody were familiar with Duchamp’s Mona-Lisa-with-a-moustache while never having heard of Leonardo’s original.

Here’s another example: Spinal Tap, which lots of people have heard of without being familiar with the hair-metal acts that inspired it.

The poems in Alice’s Adventures in Wonderland and Through the Looking Glass are far, far more famous now than the objects of their parody.

I call this the Foghorn Leghorn category, after the Warner Brothers cartoon rooster (“I say, son . . . that’s a joke, son”) who apparently was based on a famous radio character named Senator Claghorn. Claghorn has long been forgotten, but, thanks to reruns, we all know about that silly rooster.

And I think “Back in the USSR” is much better known than the original “Back in the USA.”

Here’s my definition: a parody that is more famous than the original.

Some previous cultural concepts

Objects of the class “Whoopi Goldberg”

Objects of the class “Weekend at Bernie’s”

P.S. Commenter Jhe has a theory:

I’m not entirely surprised that often the parody is better known than its object. The parody illuminates some aspect of culture which did not necessarily stand out until the parody came along. The parody takes the class of objects being parodied and makes them obvious and memorable.

Bayesian inference: The advantages and the risks

This came up in an email exchange regarding a plan to come up with and evaluate Bayesian prediction algorithms for a medical application:

I would not refer to the existing prediction algorithm as frequentist. Frequentist refers to the evaluation of statistical procedures but it doesn’t really say where the estimate or prediction comes from. Rather, I’d say that the Bayesian prediction approach succeeds by adding model structure and prior information.

The advantages of Bayesian inference include:
1. Including good information should improve prediction,
2. Including structure can allow the method to incorporate more data (for example, hierarchical modeling allows partial pooling so that external data can be included in a model even if these external data share only some characteristics with the current data being modeled); see the sketch at the end of this post.

The risks of Bayesian inference include:
3. If the prior information is wrong, it can send inferences in the wrong direction.
4. Bayes inference combines different sources of information; thus it is no longer an encapsulation of a particular dataset (which is sometimes desired, for reasons that go beyond immediate predictive accuracy and instead touch on issues of statistical communication).

OK, that’s all background. The point is that we can compare Bayesian inference with existing methods. The point is not that the philosophies of inference are different—it’s not Bayes vs frequentist, despite what you sometimes hear. Rather, the issue is that we’re adding structure and prior information and partial pooling, and we have every reason to think this will improve predictive performance, but we want to check.

To evaluate, I think we can pretty much do what you say: use ROC as a basic summary, and do graphical exploration, cross-validation (and related methods such as WAIC), and external validation.
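P.S. To make the partial-pooling point in item 2 concrete, here’s a minimal sketch of a hierarchical model in Stan. The binomial setting and all the variable names are hypothetical, chosen just to show the structure, not the actual medical application:

data {
  int<lower=1> J;              // number of groups (e.g., sites)
  array[J] int<lower=0> n;     // observations per group
  array[J] int<lower=0> y;     // events per group
}
parameters {
  real mu;                     // population mean, log-odds scale
  real<lower=0> tau;           // between-group scale
  vector[J] theta;             // group-level log-odds
}
model {
  mu ~ normal(0, 2);           // weak prior information
  tau ~ normal(0, 1);          // half-normal via the lower bound
  theta ~ normal(mu, tau);     // partial pooling across groups
  y ~ binomial_logit(n, theta);
}

Groups with little data get pulled toward the population mean mu, while groups with lots of data mostly stand on their own. That’s the sense in which the hierarchical structure lets external data help without being fully pooled in.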

New Alan Turing preprint on Arxiv!


Dan Kahan writes:

I know you are on 30-day delay, but since the blog version of you will be talking about Bayesian inference in a couple of hours, you might like to look at a paper by Turing, who is on a 70-yr delay thanks to the British declassification system, and who addresses the utility of using likelihood ratios for helping to form a practical measure of evidentiary weight (“bans” & “decibans”) that can guide cryptographers (who presumably will develop a sense of professional judgment calibrated to the same).

Actually it’s more like a 60-day delay, but whatever.

The Turing article is called “The Applications of Probability to Cryptography,” it was written during the Second World War, and it’s awesome.

Here’s an excerpt:

The evidence concerning the possibility of an event occurring usually divides into a part about which statistics are available, or some mathematical method can be applied, and a less definite part about which one can only use one’s judgement. Suppose for example that a new kind of traffic has turned up and that only three messages are available. Each message has the letter V in the 17th place and G in the 18th place. We want to know the probability that it is a general rule that we should find V and G in these places. We first have to decide how probable it is that a cipher would have such a rule, and as regards this one can probably only guess, and my guess would be about 1/5,000,000. This judgement is not entirely a guess; some rather insecure mathematical reasoning has gone into it, something like this:-

The chance of there being a rule that two consecutive letters somewhere after the 10th should have certain fixed values seems to be about 1/500 (this is a complete guess). The chance of the letters being the 17th and 18th is about 1/15 (another guess, but not quite as much in the air). The probability of the letters being V and G is 1/676 (hardly a guess at all, but expressing a judgement that there is no special virtue in the bigramme VG). Hence the chance is 1/(500 × 15 × 676) or about 1/5,000,000. This is however all so vague, that it is more usual to make the judgement “1/5,000,000” without explanation.

The question as to what is the chance of having a rule of this kind might of course be resolved by statistics of some kind, but there is no point in having this very accurate, and of course the experience of the cryptographer itself forms a kind of statistics.

The remainder of the problem is then solved quite mathematically. . . .

He’s so goddamn reasonable. He’s everything I aspire to.

Reasonableness is, I believe, an underrated trait in research. By “reasonable,” I don’t mean a supine acceptance of the status quo, but rather a sense of the connections of the world, a sort of generalized numeracy, an openness and honesty about one’s sources of information. “This judgement is not entirely a guess; some rather insecure mathematical reasoning has gone into it”—exactly!

Damn this guy is good. I’m glad to see he’s finally posting his stuff on Arxiv.
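P.S. For readers wondering about the “bans” and “decibans” in Kahan’s note: Turing scored evidence by the logarithm of the likelihood ratio, with one ban being a factor of 10 in the odds and a deciban a tenth of that. In symbols, and checking the arithmetic of the excerpt above:

\[
w(E) = 10 \log_{10} \frac{P(E \mid H)}{P(E \mid \bar H)} \ \text{decibans},
\qquad
\frac{1}{500} \cdot \frac{1}{15} \cdot \frac{1}{676} = \frac{1}{5{,}070{,}000} \approx \frac{1}{5{,}000{,}000}.
\]

Weights of evidence from independent clues then combine by simple addition of decibans, which is what made the scheme practical for cryptographers working by hand.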

Bob Carpenter’s favorite books on GUI design and programming

Bob writes:

I would highly recommend two books that changed the way I thought about GUI design (though I’ve read a lot of them):

* Jeff Johnson. GUI Bloopers.

I read the first edition in book form and the second in draft form (the editor contacted me based on my enthusiastic Amazon feedback, which was mighty surprising). I also like

* Steve Krug. Don’t Make Me Think.

I think I read the first edition and it’s now up to the 3rd. And also this one about general design:

* Robin Williams. The Non-Designer’s Design Book. (not web specific, but great advice in general on layout)

It’s also many editions past where I first read it. I’m also a huge fan of

* Hunt and Thomas. The Pragmatic Programmer.

for general programming and development advice. We’ve implemented most of the recommended practices in Stan’s workflow.

On deck this week

Mon: Bob Carpenter’s favorite books on GUI design and programming

Tues: Bayesian inference: The advantages and the risks

Wed: Objects of the class “Foghorn Leghorn”

Thurs: “Physical Models of Living Systems”

Fri: Creativity is the ability to see relationships where none exist

Sat: Kaiser’s beef

Sun: Chess + statistics + plagiarism, again!

“Do we have any recommendations for priors for student_t’s degrees of freedom parameter?”

In response to the above question, Aki writes:

I recommend as an easy default option
real<lower=1> nu;
nu ~ gamma(2, 0.1);

This was proposed and analysed by Juárez and Steel (2010) (Model-based clustering of non-Gaussian panel data based on skew-t distributions. Journal of Business & Economic Statistics 28, 52–66). Juárez and Steel compare this to the Jeffreys prior and report that the difference is small. Simpson et al. (2014) (arXiv:1403.4630) propose a theoretically well justified “penalised complexity (PC) prior,” which they show to have good behavior for the degrees of freedom, too. The PC prior might be the best choice, but it requires numerical computation of the prior (which could be computed on a grid and interpolated, etc.). It would be feasible to implement it in Stan, but it would require some work. Unfortunately no one has compared the PC prior and this gamma prior directly, but based on discussion with Daniel Simpson, although the PC prior would be better, this gamma(2,0.1) prior is not a bad choice. Thus, I would use it until someone implements the PC prior for the degrees of freedom of the Student’s t in Stan.
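For concreteness, here’s a minimal Stan program with this prior in place. The data block is a hypothetical sketch (plain i.i.d. measurements), not from any particular application:

data {
  int<lower=1> N;
  vector[N] y;
}
parameters {
  real mu;                 // location
  real<lower=0> sigma;     // scale
  real<lower=1> nu;        // degrees of freedom
}
model {
  nu ~ gamma(2, 0.1);      // Aki’s default, from Juárez and Steel (2010)
  y ~ student_t(nu, mu, sigma);
}

With no explicit priors on mu and sigma, they get flat priors (improper for mu, flat over the positive reals for sigma), which may or may not be what you want in a real problem.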

Are you ready to go fishing in the data lake?

While Andrew is trying to get someone to make a t-shirt design “Gone fishing”, someone else thinks fishing is one of the “big data trends in 2015”. This advertisement by some company keeps re-appearing in my twitter feed.

[Image: “Fishing in the data lake” advertisement]

Apology to George A. Romero

This came in the email one day last year:

Good Afternoon Mr. Gelman,

I am reaching out to you on behalf of Pearson Education who would like to license an excerpt of text from How Many Zombies Do You Know? for the following, upcoming textbook program:

Title: Writing Today
Author: Richard Johnson-Sheehan and Charles Paine
Edition: 3
Anticipated Pub Date: 01/2015

For this text, beginning with “The zombie menace has so far,” (page 101) and ending with “Journal of the American Statistical Association,” (409-423), Pearson would like to request US & Canada distribution, English language, a 150,000 print run, and a 7 year term in all print and non-print media versions, including ancillaries, derivatives and versions whole or in part.

The requested material is approximately 550 words and was originally published March 31, 2010 on Scienceblogs.com

If you could please look over the attached license request letter and return it to us, it would be much appreciated. If you need to draw up an invoice, please include all granted rights within the body of your invoice (the above, underlined portion). . . .

I decided to charge them $150 (I had no idea, I just made that number up) and I sent along the following message:

Also, at the bottom of page 2, they have a typo in my name (so please cross that out and replace with my actual last name!) and also please cross out “Author: George A. Romano”. Finally, please cross out the link (http://scienceblogs.com/appliedstatistics/2010/07/01/how-many-zombies-do-you-know-u/) and replace by: http://arxiv.org/pdf/1003.6087.pdf

I got the $150 and they told me they’d send me a copy of the book. And last month it came in the mail. So cool! I’ve always fancied myself a writer so I loved the idea of having an entry in a college writing textbook. (Yeah, yeah, I know some people say that college is a place where kids learn how to write badly. Whatever.)

I quickly performed what Yair calls a “Washington read” and found my article. It’s right there on page 266, one of the readings in the Analytical Reports chapter. B-b-b-ut . . . they altered my deathless prose!

– They removed the article’s abstract. That’s fine, the abstract wasn’t so funny.

– My name in the author list pointed to the following hilarious footnote which they removed: “Department of Statistics, Columbia University, New York. Please do not tell my employer that I spent any time doing this.”

– George A. Romero’s name in the author list pointed to the following essential footnote which they removed: “Not really.”

– They changed “et al.” to “et. al.” That’s just embarrassing for them. Making a mistake is one thing, but changing something correct into a mistake, that’s just sad. It reminds me of when one of my coauthors noticed the word “comprises” in the paper I’d written and scratched it out and replaced it with “is comprised of.” Ugh.

– They removed Section 4 of the paper, which read:

Technical note

We originally wrote this article in Word, but then we converted it to Latex to make it look more like science.

Ouch. That hurts.

But the biggest problem was, by keeping Romero’s name on the article and removing the disclaimer, they made it look like Romero actually was involved in this silly endeavor. Indeed, in their intro they refer to “the authors,” and later they refer to “Gelman and Romero’s article.” That’s better than “Gleman and Romano,” but, still, it doesn’t seem right to assign any of the blame for this to Romero. I’d have no problem sharing the credit but I have no idea how he’d feel about it.

At least they kept in the ZDate joke.

P.S. Overall I’m happy to see my article in this textbook. But it’s funny to see where it got messed up.