
Stan Meetup Talk in Ann Arbor this Wednesday (5 Aug 2015)

I (Bob) will be presenting an overview of (R)Stan at the Ann Arbor R User Group meetup this Wednesday night (5 August 2015) at 7 PM.

To see the abstract and register to attend:

RStan: Statistical Modeling Made Easy with Bob Carpenter

Wednesday, Aug 5, 2015, 7:00 PM

Barracuda Networks
317 Maynard St Ann Arbor, MI

46 useRs Attending

This August we’ll be joined by special guest Dr. Bob Carpenter to learn about Stan (http://mc-stan.org/), his open-source probabilistic programming language, and how to use it with R. RStan: Statistical Modeling Made Easy ————————————– Bob Carpenter, Columbia Uni., Dept. of Statistics. I’ll introduce Stan, a new language f…

Check out this Meetup →

Neat—Wordpress includes a preview of the link.

We’ll see how many people show up expecting to see Andrew despite my saying it’s me and the talk saying it’s me.

The plagiarist next door strikes back: Different standards of plagiarism in different communities

[Image: Xerox 914 photocopier]

Commenters on this blog sometimes tell me not to waste so much time talking about plagiarism. And in the grand scheme of things, what could be more trivial than plagiarism in an obscure German book of chess anecdotes? Yet this is what I have come to talk with you about today.

As usual, I will make the claim that this discussion has more general relevance, that a careful exploration of this particularly trivial topic can give us larger insights into statistics and human understanding.

Earlier this year I learned that a fellow student from my graduate program, Christian Hesse, who for many years has been a math professor in Germany, ripped off material for a chess book that he wrote. In the comments to my post on the story, Christian wrote, “The author falsely accuses me of copying material for my chess book.” And by “the author,” Christian is talking about me. But, from the linked material from Edward Winter, it seems pretty clear that Chrissy did copy material, introducing errors in the process.

As I wrote in response to Chrissy’s comment, if you’re gonna copy material, you should give the source. Otherwise you’re misleading your readers and allowing yourself to propagate misinformation. And why do that?

But that’s all background. Today I want to focus on a particular aspect of this dispute, which is Chrissy’s implicit argument that this sort of copying is standard in the world of chess, and that Edward Winter and others accuse of plagiarism “anybody who later writes about the same chess games and matches, chess positions, studies, events.” I think what Chrissy is saying is that it’s commonplace to do what he did, which is to copy material from an old chess magazine or book, not attribute the source, and not check it for accuracy. And, indeed, you can go far in the world of chess journalism by copying, much more shamelessly than Christian Hesse has ever done.

So here’s the question: If everybody does it, and Christian’s book has been well received (he quotes a glowing newspaper review), then should we care?

I don’t mean, “Should we care?” as in “Is it important?” In that case, no, we shouldn’t care, any more than we should care whether Tom Brady had his footballs deflated or whether Pete Rose gambled on baseball or whether Lance Armstrong ever uttered a true sentence in his life. This is not an Ed Wegman situation in which professional misconduct was used in the service of potentially consequential political activities, or even a Mark Hauser story in which professional misconduct was used to waste a lot of people’s time and money. It’s just chess. It’s just sports.

No, when I say, “Should we care?”, I mean “Does this matter in the context of the chess book?” In the same way as we could care about Pete Rose because we are baseball fans and don’t want to see the sport become as scripted as the NBA, for example.

And this gets back to the way in which we (that is, Thomas Basbøll, me, and anyone else out there who happens to agree with us) like to frame plagiarism in particular, and scholarly misconduct more generally: It’s not about the wrongdoing, it’s about the corruption of the communication channel.

OK, so that wasn’t so pithy; maybe one of you can punch this up enough that it can make it into the lexicon?

Ok, to continue: If the problem with Chrissy’s copying-without-attribution is that he’s cheating, one could well respond that, no, he’s not cheating: the value-added in his chess book does not come from the games and stories themselves but in how he arranges them. If the problem is that he’s breaking the rules, one could well respond that, no, in the chess world, “the rules” allow this sort of thing. If the problem is that he’s stealing, one could respond that chess games are free for all to share, as are stories, and even any directly copied material might be in the public domain by now anyway.

T. S. Eliot wrote, “Immature poets imitate; mature poets steal.” Similar quotes were then attributed to Igor Stravinsky and Pablo Picasso, two other great artists from the modernist period.

Also, Martin Luther King, Jr., plagiarized—have I mentioned that recently??

So, sure, steal and steal and steal away. It’s a waste land out there.

No, the problem with copying without attribution is that, if anyone’s going to care about these stories, they can learn a lot more by knowing where the stories come from. As Basbøll and I wrote, it’s a statistical crime. Which is one reason it makes me sad to see a statistician doing it.

P.S. Chrissy also notes that the world chess champion and his father are dear friends of his. I think it’s safe to say that Chrissy is a much better chess player than I am. Much much much better! If we played a game where I got an hour and he got 2 minutes, I’m almost certain he’d destroy me. Also, let me be clear that I am not claiming that his book has no value. Not at all! It could well contain some plagiarized material and some good material. Even some of the plagiarized material could be good. Actually, it should be good, otherwise why copy it?

On deck this week

Mon: The plagiarist next door strikes back: Different standards of plagiarism in different communities

Tues: Pro Publica’s new Surgeon Scorecards

Wed: How Hamiltonian Monte Carlo works

Thurs: When does Bayes do the job?

Fri: Here’s a theoretical research project for you

Sat: Classifying causes of death using “verbal autopsies”

Sun: All hail Lord Spiegelhalter!

Spam!

The following bit of irrelevance appeared on the stan-users mailing list:

On Jun 11, 2015, at 11:29 AM, Joanna Caldwell wrote:

Webinar: Tips & Tricks to Improve Your Logistic Regression
. . .
Registration Link: . . .

Abstract: Logistic regression is a commonly used tool to analyze binary classification problems. However, logistic regression still faces the limitations of detecting nonlinearities and interactions in data. In this webinar, you will learn more advanced and intuitive machine learning techniques that improve on standard logistic regression in accuracy and other aspects. As an APPLIED example, we will demonstrate using a banking dataset where we will predict future financial stress of a loan applicant in order to determine whether they should be granted a loan. Although the focus is related to finance and loans, the concepts are relevant for anyone who actively uses logistic regression and wishes to improve accuracy and predictor understanding. . . .

I wrote onto the list:

Now we’re getting spam on the Stan list?? This is really weird! Unless they’re actually using Stan, but I doubt it. “More advanced and intuitive techniques,” indeed!

And Bob replied:

The spam came from an account with their receptionist’s name on it. We have been getting spam all along, which is why we have a registration step. Daniel banned this user, but dedicated spammers are hard to keep out, especially if they go the manual labor route.

Dedicated spammers, huh? I guess it’s a sort of backhanded compliment to Stan, that we’re considered important enough for some sleazoids to sic their secretary on us. Not enough phone calls to answer today, so spend the afternoon posting unwanted ads on listservs. Maybe they’re using some sort of advanced Bayesian filter to decide whose time to waste here.

Statistical/methodological prep for a career in neuroscience research?

Shea Levy writes:

I’m currently a software developer, but I’m trying to transition to the neuroscience research world. Do you have any general advice or recommended resources to prepare me to perform sound and useful experimental design and analyses? I have a very basic stats background from undergrad plus eclectic bits and pieces I’ve picked up since, and have a fairly strong mathematical background.

I’m not sure! I think my book with Jennifer is a good place to start on statistical analysis, and for design I recommend the classic book by Box, Hunter, and Hunter.

I also recommend this paper by Ron Kohavi, Alex Deng, Brian Frasca, Roger Longbotham, Toby Walker, and Ya Xu, a group of software engineers (I presume) at Microsoft. These authors don’t seem so connected to the statistics literature—in a follow-up paper they rediscover a whole bunch of already well-known stuff and present it as new—but this particular article is crisp and applied, and I like it.

Maybe readers have other suggestions?

If you leave your datasets sitting out on the counter, they get moldy

I received the following in the email:

I had a look at the dataset on speed dating you put online, and I found some big inconsistencies. Since a lot of people are using it, I hope this can help to fix them (or hopefully I made a mistake in interpreting the dataset).

Here are the problems I found.

1. Field dec is not consistent at all (boolean for a big chunk of the dataset, in the range 1-10 later). Should this be the field of the decision, and dec_o be the decision of the partner? Should dec and match be the same thing? I tried to use match instead of dec but then I get the following problem.

2. I tried to see if matches are consistent (if my partner decided yes it should mean that in his record I see a match): if I look at the record with iid x and pid y, dec_o=1 should mean that in the record with iid y and pid x I should see a match (in match or dec). This is not in general true. So dec_o is not consistent with the matches.

3. Same thing for like and attr_o (or attr and attr_o)

I sent this to Ray Fisman, the source of the data, who replied:

Saurabh Bhargava used the underlying files and has posted data in a replication file for a study in the Review of Economics and Statistics.

I’m glad somebody put those data in the freezer.
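
For what it’s worth, the consistency checks my correspondent describes are easy enough to script. Here’s a rough sketch in R, assuming the columns (iid, pid, dec, dec_o, match) mean what the email says they mean; I haven’t run it on the actual file, so treat it as illustrative only.

dating <- read.csv("speed_dating.csv")   # whatever the file is actually called

# Pair each record with its partner's record (iid and pid swapped).
partner <- dating[, c("iid", "pid", "match", "dec")]
names(partner) <- c("pid", "iid", "match_partner", "dec_partner")
both <- merge(dating, partner, by = c("iid", "pid"))

# Check 2 from the email: if dec_o = 1 in my record, my partner's own
# record should show the corresponding decision/match.
with(both, table(dec_o, dec_partner, useNA = "ifany"))

# Check 1: dec should be binary if it's really the yes/no decision.
table(dating$dec, useNA = "ifany")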

This sentence by Thomas Mallon would make Barry N. Malzberg spin in his grave, except that he’s still alive so it would just make him spin in his retirement

Don’t get me wrong, I think Thomas Mallon is great. But what was he thinking when he wrote this:

[Screenshot of the passage from Mallon’s New Yorker piece]

I know the New Yorker doesn’t do fact-checking anymore, but still.

The funny thing is, Malzberg has similarities with Mailer both in style and subject matter.

I’m guessing that in his statement Mallon is trying to make a point about the insularity of the literary establishment, but in some way I think that with his narrow focus he’s making his point all too well, at his own expense.

“Women Respond to Nobel Laureate’s ‘Trouble With Girls'”

Someone pointed me to this amusing/horrifying story of a clueless oldster.

Some people are horrified by what the old guy said; other people are horrified by how he was treated. He was clueless in his views about women in science, or he was cluelessly naive about gotcha journalism. I haven’t been following the details, so I’ll express no judgment one way or another.

My comment on the episode is that I find the whole “nobel laureate” thing a bit tacky in general. People get the prize and they get attention for all sorts of stupid things, people do all sorts of things in order to try to get it, and, beyond all that, research shows that not getting the Nobel Prize reduces your expected lifespan by two years. Bad news all around.

P.S. Regarding this case in particular, Basbøll points to this long post from Louise Mensch. Again, I don’t want to get involved in the details here, but I am again reminded how much I prefer blogs to twitter. On the positive side, I prefer a blog exchange to a twitter debate. And, on the negative side, I’d rather see a blogwar than a twitter mob.

P.P.S. Someone in comments objects to my term “clueless oldster,” so let me clarify that it’s not intended to be a particularly negative phrase. I would not be surprised if someone were to call me a “clueless oldster” myself; that’s just what happens.

What’s the stupidest thing the NYC Department of Education and Columbia University Teachers College did in the past decade?

Ummm, how bout this:

The principal of a popular elementary school in Harlem acknowledged that she forged answers on students’ state English exams in April because the students had not finished the tests . . . As a result of the cheating, the city invalidated several dozen English test results for the school’s third grade.

The school is a new public school—it opened in 2011—that is run jointly by the New York City Department of Education and Columbia University Teachers College.

So far, it just seems like an unfortunate error. According to the news article, “Nancy Streim, associate vice president for school and community partnerships at Teachers College, said Ms. Worrell-Breeden had created a ‘culture of academic excellence’” at the previous school where she was principal. Maybe Worrell-Breeden just cared too much and was under too much pressure to succeed; she cracked and helped the students cheat.

But then I kept reading:

In 2009 and 2010, while Ms. Worrell-Breeden was at P.S. 18, she was the subject of two investigations by the special commissioner of investigation. The first found that she had participated in exercise classes while she was collecting what is known as “per session” pay, or overtime, to supervise an after-school program. The inquiry also found that she had failed to offer the overtime opportunity to others in the school, as required, before claiming it for herself.

The second investigation found that she had inappropriately requested and obtained notarized statements from two employees at the school in which she asked them to lie and say that she had offered them the overtime opportunity.

After those findings, we learn, “She moved to P.S. 30, another school in the Bronx, where she was principal briefly before being chosen by Teachers College to run its new school.”

So, let’s get this straight: She was found to be a liar, a cheat, and a thief, and then, with that all known, she was hired to two jobs as school principal??

The news article quotes Nancy Streim of Teachers College as saying, “We felt that on balance, her recommendations were so glowing from everyone we talked to in the D.O.E. that it was something that we just were able to live with.”

On balance, huh? Whatever else you can say about Worrell-Breeden, she seems to have had the talent of conning powerful people. Or maybe just one or two powerful people in the Department of Education who had the power to get her these jobs.

This is really bad. Is it so hard to find a school principal that you have no choice but to hire someone who lies, cheats, and steals?

It just seems weird to me. I accept that all of us have character flaws, but this is ridiculous. Principal is a supervisory position. What kind of toxic environment will you have in a school where the principal is in the habit of forging documents and instructing employees to lie? How could this possibly be considered a good idea?

Here’s the blurb on the relevant Teachers College official:

Nancy Streim joined Teachers College in August 2007 in the newly created position of Associate Vice President for School and Community Partnership. . . . Dr. Streim comes to Teachers College after nineteen years at the University of Pennsylvania’s Graduate School of Education where she most recently served as Associate Dean for Educational Practice. . . . She recently completed a year long project for the Bill and Melinda Gates Foundation in which she documented principles underlying successful university-assisted public schools across the U.S. She has served as principal investigator for five major grant-funded projects that address the teaching and learning of math and science in elementary and middle grades.

It’s not clear to me whether Streim actually thought Worrell-Breeden was the best person for the job. Reading between the lines, maybe what happened is that Worrell-Breeden was plugged into the power structure at the Department of Education and someone at the D.O.E. lined up the job for her.

In a talk I found online, Streim says something about “patient negotiations” with school officials. Maybe a few years ago someone in power told her: Yes, we’ll give you a community school to run, but you have to take Worrell-Breeden as principal. I don’t know, but it’s possible.

I guess I’d prefer to think that Teachers College made a dirty but necessary deal. That’s more palatable to me than the idea that the people at the Department of Education and Teachers College thought it was a good idea to hire a liar/cheat/thief as a school principal.

Or maybe I’m missing the point? Perhaps integrity is not so important. The world is full of people with integrity but no competence, and we wouldn’t want that either.

“We can keep debating this after 11 years, but I’m sure we all have much more pressing things to do (grants? papers? family time? attacking 11-year-old papers by former classmates? guitar practice?)”

Someone pointed me to this discussion by Lior Pachter of a controversial claim in biology.

The statistics

The statistical content has to do with a biology paper by M. Kellis, B. W. Birren, and E.S. Lander from 2004 that contains the following passage:

Strikingly, 95% of cases of accelerated evolution involve only one member of a gene pair, providing strong support for a specific model of evolution, and allowing us to distinguish ancestral and derived functions.

Here’s where the 95% came from. In Pachter’s words:

The authors identified 457 duplicated gene pairs that arose by whole genome duplication (for a total of 914 genes) in yeast. Of the 457 pairs 76 showed accelerated (protein) evolution in S. cerevisiae. The term “accelerated” was defined to relate to amino acid substitution rates in S. cerevisiae, which were required to be 50% faster than those in another yeast species, K. waltii. Of the 76 genes, only four pairs were accelerated in both paralogs. Therefore 72 gene pairs showed acceleration in only one paralog (72/76 = 95%).

In his post on the topic, Pachter asks for a p-value for this 72/76 result which the authors of the paper in question had called “surprising.”

My first thought on the matter was that no p-value is needed because 72 out of 76 is such an extreme proportion. I guess I’d been implicitly comparing to a null hypothesis of 50%. Or, to put it another way, if you have 76 pairs containing 80 accelerated genes (I think I did this right and that I’m not butchering the technical jargon: I got 80 by taking 72 pairs with only one accelerated paralog plus 4 pairs with two accelerated paralogs each), it would be extremely extremely unlikely to see only four pairs with acceleration in both.

But, then, as I read on, I realized this isn’t an appropriate comparison. Indeed, the clue is above, where Pachter notes that there were 457 pairs in total, thus in a null model you’re working with a probability of 80/(2*457) = 0.087, and when the probability is 0.087, it’s not so unlikely that you’d only see 4 pairs out of 457 with two accelerated paralogs. (Just to get the order of magnitude, 0.087^2 = 0.0077, and 0.0077*457 = 3.5, so 4 pairs is pretty much what you’d expect.)
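
Here’s that back-of-the-envelope calculation in R, just to keep the arithmetic in one place. The binomial tail check at the end is my own addition, not something from Pachter’s post:

# Numbers from Pachter's description of Kellis et al.
n_pairs <- 457                     # duplicated gene pairs
n_accel_genes <- 72 + 2 * 4        # 80 accelerated genes in total
p_accel <- n_accel_genes / (2 * n_pairs)   # about 0.087 per gene

# Under a null model in which acceleration hits genes independently,
# the expected number of pairs with both members accelerated:
n_pairs * p_accel^2                # about 3.5, close to the 4 pairs observed

# A rough binomial tail check (my addition): seeing 4 or more such pairs
# is not at all surprising under this null.
pbinom(3, size = n_pairs, prob = p_accel^2, lower.tail = FALSE)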

So it sounds like Kellis et al. got excited by this 72 out of 76 number, without being clear on the denominator. I don’t know enough about biology to comment on the implications of this calculation on the larger questions being asked.

Pachter frames his criticisms around p-values, a perspective I find a bit irrelevant, but I agree with his larger point that, where possible, probability models should be stated explicitly.

The link between the scientific theory and statistical theory is often a weak point in quantitative research. In this case, the science has something to do with genes and evolution, and the statistical model was what allowed Kellis et al. to consider 72 out of 76 to be “striking” and “surprising.” It is all too common for a researcher to reject a null hypothesis that is not clearly formed, in order to then make a positive claim of support for some preferred theory. But a lot of steps are missing in such an argument.

The culture

The cultural issue is summarized in this comment by Michael Eisen:

The more this conversation goes on the more it disturbs me [Eisen]. Lior raised an important point regarding the analyses contained in an influential paper from the early days of genome sequencing. A detailed, thorough and occasionally amusing discussion ensued, the long and the short of which to any intelligent reader should be that a major conclusion of the paper under discussion was simply wrong. This is, of course, how science should proceed (even if it rarely does). People make mistakes, others point them out, we all learn something in the process, and science advances.

However, I find the responses from Manolis and Eric to be entirely lacking. Instead of really engaging with the comments people have made, they have been almost entirely defensive. Why not just say “Hey look, we were wrong. In dealing with this complicated and new dataset we did an analysis that, while perhaps technically excusable under some kind of ‘model comparison defense’ was, in hindsight, wrong and led us to make and highlight a point that subsequent data and insights have shown to be wrong. We should have known better at the time, but we’ve learned from our mistake and will do better in the future. Thanks for helping us to be better scientists.”

Sadly, what we’ve gotten instead is a series of defenses of an analysis that Manolis and Eric – who is no fool – surely know by this point was simply wrong.

In an update, Pachter amplifies upon this point:

One of the comments made in response to my post that I’d like to respond to first was by an author of KBL [Kellis, Birren, and Lander; in this case the comment was made by Kellis] who dismissed the entire premise of my challenge, writing “We can keep debating this after 11 years, but I’m sure we all have much more pressing things to do (grants? papers? family time? attacking 11-year-old papers by former classmates? guitar practice?)”

This comment exemplifies the proclivity of some authors to view publication as the encasement of work in a casket, buried deeply so as to never be opened again lest the skeletons inside it escape. But is it really beneficial to science that much of the published literature has become, as Ferguson and Heene noted, a vast graveyard of undead theories?

Indeed. One of the things I’ve been fighting against recently (for example, in my article, It’s too hard to publish criticisms and obtain data for replication, or in this discussion of some controversial comments about replication coming from a cancer biologist) is the idea that, once something is published, it should be taken as truth. This attitude, of raising a high bar to post-publication criticism, is sometimes framed in terms of fairness. But, as I like to say, what’s so special about publication in a journal? Should there be a high barrier to criticisms of claims made in Arxiv preprints? What about scrawled, unpublished lab notes??? Publication can be a good way of spreading the word about a new claim or finding, but I don’t don’t don’t don’t don’t like the norm in which something that is published should not be criticized.

To put it another way: Yes, ha ha ha, let’s spend our time on guitar practice rather than exhuming 11-year-old published articles. Fine—I’ll accept that, as long as you also accept that we should not be citing 11-year-old articles.

If a paper is worth citing, it’s worth criticizing its flaws. Conversely, if you don’t think the flaws in your 11-year-old article are worth careful examination, maybe there could be some way you could withdraw your paper from the published journal? Not a “retraction,” exactly, maybe just an Expression of Irrelevance? A statement by the authors that the paper in question is no longer worth examining as it does not relate to any current research concerns, nor are its claims of historical interest. Something like that. Keep the paper in the public record but make it clear that the authors no longer stand behind its claims.

P.S. Elsewhere Pachter characterizes a different work of Kellis as “dishonest and fraudulent.” Strong words, considering Kellis is a tenured professor at MIT who has received many awards. As an outsider to all this, I’m wondering: Is it possible that Kellis is dishonest, fraudulent, and also a top researcher? Kinda like how Linda is a bank teller who is also a feminist? Maybe Kellis is an excellent experimentalist but with an unfortunate habit of making overly broad claims from his data? Maybe someone can help me out on this.

Ripped from the pages of a George Pelecanos novel


Did anyone else notice that this DC multiple-murder case seems just like a Pelecanos story?

Check out the latest headline, “D.C. Mansion Murder Suspect Is Innocent Because He Hates Pizza, Lawyer Says”:

Robin Flicker, a lawyer who has represented suspect Wint in the past but has not been officially hired as his defense attorney, says police are zeroing in on Wint because his DNA was found on pizza at the crime scene. The only problem, Flicker said, is that Wint doesn’t like pizza.

“He doesn’t eat pizza,” Flicker told ABC News. “If he were hungry, he wouldn’t order pizza.”

When I saw the DC setting, the local businessman, the manhunt, and the horror/comic story of a pizza-ordering killer, I thought about Pelecanos immediately. And then I noticed that the victim’s family was Greek. Can’t get more Pelecanos than that.

I googled *pizza murder dc pelecanos* but I didn’t see any hits at all. I can’t figure that one out: surely someone would interview him for his thoughts on this one?

On deck this week

Mon: Ripped from the pages of a George Pelecanos novel

Tues: “We can keep debating this after 11 years, but I’m sure we all have much more pressing things to do (grants? papers? family time? attacking 11-year-old papers by former classmates? guitar practice?)”

Wed: What do I say when I don’t have much to say?

Thurs: “Women Respond to Nobel Laureate’s ‘Trouble With Girls’”

Fri: This sentence by Thomas Mallon would make Barry N. Malzberg spin in his grave, except that he’s still alive so it would just make him spin in his retirement

Sat: If you leave your datasets sitting out on the counter, they get moldy

Sun: Spam!

The 3 Stages of Busy

Last week I ran into a younger colleague who said he had a conference deadline that week and could we get together next week, maybe? So I contacted him on the weekend and asked if he was free. He responded:

This week quickly got booked after last week’s NIPS deadline.

So we’re meeting in another week. That’s busy for you: after one week off the grid, he had a week’s worth of pent-up meetings! I thought I was busy, but it’s nothing like that.

And this made me formulate my idea of the 3 Stages of Busy. It goes like this:

Stage 1 (early career): Not busy, at least not with external commitments. You can do what you want.

Stage 2 (mid career; my friend described above): Busy, overwhelmed with obligations.

Stage 3 (late career; me): So busy that it’s pointless to schedule anything, so you can do what you want (including writing blogs two months in advance!).

Ira Glass asks. We answer.


The celebrated radio quiz show star says:

There’s this study done by the Pew Research Center and Smithsonian Magazine . . . they called up one thousand and one Americans. I do not understand why it is a thousand and one rather than just a thousand. Maybe a thousand and one just seemed sexier or something. . . .

I think I know the answer to this one! The survey may well have aimed for 1000 people, but you can’t know ahead of time exactly how many people will respond. They call people, leave messages, call back, call back again, etc. The exact number of people who end up in the survey is a random variable.
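
If you want to see this in action, here’s a toy simulation in R (the call volume and response rate are made up): dial a fixed number of people, let each respond with some probability, and the realized sample size lands near the target but rarely exactly on it.

set.seed(123)
n_dialed <- 1430        # made-up number of people contacted
p_respond <- 0.70       # made-up response rate, aiming for about 1000 completes

completed <- rbinom(10000, size = n_dialed, prob = p_respond)
summary(completed)            # hovers around 1000, give or take
mean(completed == 1000)       # hitting exactly 1000 is rare
mean(completed == 1001)       # and 1001 is just as likely as 1000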

45 years ago in the sister blog

[Photo: Susan and Andy]

More gremlins: “Instead, he simply pretended the other two estimates did not exist. That is inexcusable.”

[Photos: 1977 AMC Gremlin X, Hershey 2012]

Brandon Shollenberger writes:

I’ve spent some time examining the work done by Richard Tol which was used in the latest IPCC report.  I was troubled enough by his work I even submitted a formal complaint with the IPCC nearly two months ago (I’ve not heard back from them thus far).  It expressed some of the same concerns you expressed in a post last year.

The reason I wanted to contact you is I recently realized most people looking at Tol’s work are unaware of a rather important point.  I wrote a post to explain it which I’d invite you to read, but I’ll give a quick summary to possibly save you some time.

As you know, Richard Tol claimed moderate global warming will be beneficial based upon a data set he created.  However, errors in his data set (some of which are still uncorrected) call his results into question.  Primarily, once several errors are corrected, it turns out the only result which shows any non-trivial benefit from global warming is Tol’s own 2002 paper.

That is obviously troubling, but there is a point which makes this even worse.  As it happens, Tol’s 2002 paper did not include just one result.  It actually included three different results.  A table for it shows those results are +2.3%, +0.2% and -2.7%.

The 2002 paper does nothing to suggest any one of those results is the “right” one, nor does any of Tol’s later work.  That means Tol used the +2.3% value from his 2002 paper while ignoring the +0.2% and -2.7% values, without any stated explanation.

It might be true the +2.3% value is the “best” estimate from the 2002 paper, but even if so, one needs to provide an explanation as to why it should be favored over the other two estimates.  Tol didn’t do so.  Instead, he simply pretended the other two estimates did not exist.  That is inexcusable.

I’m not sure how interested you are in Tol’s work, but I thought you might be interested to know things are even worse than you thought.

This is horrible and also kind of hilarious. We start with a published paper by Tol claiming strong evidence for a benefit from moderate global warming. Then it turns out he had some data errors; fixing the errors led to a weakening of his conclusions. Then more errors came out, and it turned out that there was only one point in his entire dataset supporting his claims—and that point came from his own previously published study. And then . . . even that one point isn’t representative of that paper.

You pull and pull on the thread, and the entire garment falls apart. There’s nothing left.

At no point did Tol apologize or thank the people who pointed out his errors; instead he lashed out, over and over again. Irresponsible indeed.

Stan 2.7 (CRAN, variational inference, and much much more)


Stan 2.7 is now available for all interfaces. As usual, everything you need can be found starting from the Stan home page:

Highlights

  • RStan is on CRAN!(1)
  • Variational Inference in CmdStan!!(2)
  • Two new Stan developers!!! 
  • A whole new logo!!!! 
  • Math library with autodiff now available in its own repo!!!!! 

(1) Just doing install.packages("rstan") isn’t going to work because of dependencies; please go to the RStan getting started page for instructions on how to install from CRAN. It’s much faster than building from source and you no longer need a machine with a lot of RAM to install.

(2) Coming soon to an interface near you.
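
Speaking of point (1), the CRAN route ends up looking roughly like this in R. This is a sketch, not the official instructions, so do follow the getting-started page (you still need a working C++ toolchain):

# Rough sketch only -- the RStan getting-started page has the real instructions.
# install.packages("rstan", dependencies = TRUE)

library(rstan)

# A tiny test model: estimate the mean and sd of some fake normal data.
model_code <- "
data {
  int<lower=0> N;
  vector[N] y;
}
parameters {
  real mu;
  real<lower=0> sigma;
}
model {
  y ~ normal(mu, sigma);
}
"

y <- rnorm(50, mean = 3, sd = 2)
fit <- stan(model_code = model_code,
            data = list(N = length(y), y = y),
            chains = 4, iter = 1000)
print(fit)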

Full Release Notes

v2.7.0 (9 July 2015)
======================================================================

New Team Members
--------------------------------------------------
* Alp Kucukelbir, who brings you variational inference
* Robert L. Grant, who brings you the StataStan interface

Major New Feature
--------------------------------------------------
* Black-box variational inference, mean field and full
  rank (#1505)

New Features
--------------------------------------------------
* Line numbers reported for runtime errors (#1195)
* Wiener first passage time density (#765) (thanks to
  Michael Schvartsman)
* Partial initialization (#1069)
* NegBinomial2 RNG (#1471) and PoissonLog RNG (#1458) and extended
  range for Dirichlet RNG (#1474) and fixed Poisson RNG for older
  Mac compilers (#1472)
* Error messages now use operator notation (#1401)
* More specific error messages for illegal assignments (#1100)
* More specific error messages for illegal sampling statement 
  signatures (#1425)
* Extended range on ibeta derivatives with wide impact on CDFs (#1426)
* Display initialization error messages (#1403)
* Works with Intel compilers and GCC 4.4 (#1506, #1514, #1519)

Bug Fixes
--------------------------------------------------
* Allow functions ending in _lp to call functions ending in _lp (#1500)
* Update warnings to catch uses of illegal sampling functions like
  CDFs and updated declared signatures (#1152)
* Disallow constraints on local variables (#1295)
* Allow min() and max() in variable declaration bounds and remove
  unnecessary use of math.h and top-level :: namespace (#1436)
* Updated exponential lower bound check (#1179)
* Extended sum to work with zero size arrays (#1443)
* Positive definiteness checks fixed (were > 1e-8, now > 0) (#1386)

Code Reorganization and Back End Upgrades
--------------------------------------------------
* New static constants (#469, #765)
* Added major/minor/patch versions as properties (#1383)
* Pulled all math-like functionality into stan::math namespace
* Pulled the Stan Math Library out into its own repository (#1520)
* Included in Stan C++ repository as submodule
* Removed final usage of std::cout and std::cerr (#699) and
  updated tests for null streams (#1239)
* Removed over 1000 CppLint warnings
* Remove model write CSV methods (#445)
* Reduced generality of operators in fvar (#1198)
* Removed folder-level includes due to order issues (part of Math
  reorg) and include math.hpp include (#1438)
* Updated to Boost 1.58 (#1457)
* Travis continuous integration for Linux (#607)
* Add grad() method to math::var for autodiff to encapsulate math::vari
* Added finite diff functionals for testing (#1271)
* More configurable distribution unit tests (#1268)
* Clean up directory-level includes (#1511)
* Removed all lint from new math lib and add cpplint to build lib
  (#1412)
* Split out derivative functionals (#1389)


Manual and Documentation
--------------------------------------------------
* New Logo in Manual; remove old logos (#1023)
* Corrected all known bug reports and typos; details in 
  issues #1420, #1508, #1496
* Thanks to Sunil Nandihalli, Andy Choi, Sebastian Weber,
  Heraa Hu, @jonathan-g (GitHub handle), M. B. Joseph, Damjan
  Vukcevic, @tosh1ki (GitHub handle), Juan S. Casallas
* Fix some parsing issues for index (#1498)
* Added chapter on variational inference
* Added seemingly unrelated regressions and multivariate probit
  examples
* Discussion from Ben Goodrich about reject() and sampling
* Start to reorganize code with fast examples first, then
  explanations
* Added CONTRIBUTING.md file (#1408)

BREAKING . . . Kit Harrington’s height


Rasmus “ticket to” Bååth writes:

I heeded your call to construct a Stan model of the height of Kit “Snow” Harrington. The response on Gawker has been poor, unfortunately, but here it is, anyway.

Yeah, I think the people at Gawker have bigger things to worry about this week. . . .

Here’s Rasmus’s inference for Kit’s height:

[Screenshot: Rasmus’s posterior distribution for Kit Harrington’s height]

And here’s his summary:

From this analysis it is unclear how tall Kit is; there is much uncertainty in the posterior distribution, but according to the analysis (which might be quite off) there’s a 50% probability he’s between 171 and 175 cm tall. It is stated in the article that he is NOT 5’8” (173 cm), but according to this analysis it’s not an unreasonable height, as the mean of the posterior is 173 cm.

His Stan model is at the link. (I tried to copy it here but there was some html crap.)
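
Since I couldn’t paste Rasmus’s model, here’s the general flavor of this sort of analysis: a sketch of my own with made-up “reported height” data, not his actual code. The idea is to treat each published claim about Kit’s height as a noisy measurement of his true height.

library(rstan)

reported_cm <- c(173, 170, 175, 178)   # hypothetical reported heights, in cm

model_code <- "
data {
  int<lower=1> N;
  vector[N] reported;
}
parameters {
  real height;             // true height, in cm
  real<lower=0> sigma;     // how far off a typical report is
}
model {
  height ~ normal(175, 10);        // weak prior around an average-ish height
  sigma ~ cauchy(0, 5);            // half-Cauchy, given the lower bound
  reported ~ normal(height, sigma);
}
"

fit <- stan(model_code = model_code,
            data = list(N = length(reported_cm), reported = reported_cm))
print(fit, pars = c("height", "sigma"))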

A bad definition of statistical significance from the U.S. Department of Health and Human Services, Effective Health Care Program

As D.M.C. would say, bad meaning bad not bad meaning good.

Deborah Mayo points to this terrible, terrible definition of statistical significance from the Agency for Healthcare Research and Quality:

Statistical Significance

Definition: A mathematical technique to measure whether the results of a study are likely to be true. Statistical significance is calculated as the probability that an effect observed in a research study is occurring because of chance. Statistical significance is usually expressed as a P-value. The smaller the P-value, the less likely it is that the results are due to chance (and more likely that the results are true). Researchers generally believe the results are probably true if the statistical significance is a P-value less than 0.05 (p<.05).

Example: For example, results from a research study indicated that people who had dementia with agitation had a slightly lower rate of blood pressure problems when they took Drug A compared to when they took Drug B. In the study analysis, these results were not considered to be statistically significant because p=0.2. The probability that the results were due to chance was high enough to conclude that the two drugs probably did not differ in causing blood pressure problems.

The definition is wrong, as is the example. I mean, really wrong. So wrong that it’s perversely impressive how many errors they managed to pack into two brief paragraphs:

1. I don’t even know what it means to say “whether the results of a study are likely to be true.” The results are the results, right? You could try to give them some slack and assume they meant, “whether the results of a study represent a true pattern in the general population” or something like that—but, even so, it’s not clear what is meant by “true.”

2. Even if you could some how get some definition of “likely to be true,” that is not what statistical significance is about. It’s just not.

3. “Statistical significance is calculated as the probability that an effect observed in a research study is occurring because of chance.” Ummm, this is close, if you replace “an effect” with “a difference at least as large as what was observed” and if you append “conditional on there being a zero underlying effect.” Of course in real life there are very few zero underlying effects (I hope the Agency for Healthcare Research and Quality mostly studies treatments with positive effects!), hence the irrelevance of statistical significance to relevant questions in this field.

4. “The smaller the P-value, the less likely it is that the results are due to chance (and more likely that the results are true).” No no no no no. As has been often said, the p-value is a measure of sample size. And, even conditional on sample size, and conditional on measurement error and variation between people, the probability that the results are true (whatever exactly that means) depends strongly on what is being studied, what Tversky and Kahneman called the base rate.

5. As Mayo points out, it’s sloppy to use “likely” to talk about probability.

6. “Researchers generally believe the results are probably true if the statistical significance is a P-value less than 0.05 (p<.05)." Ummmm, yes, I guess that's correct. Lots of ignorant researchers believe this. I suppose that, without this belief, Psychological Science would have difficulty filling its pages, and Science, Nature, and PPNAS would have no social science papers to publish and they'd have to go back to their traditional plan of publishing papers in the biological and physical sciences. 7. "The probability that the results were due to chance was high enough to conclude that the two drugs probably did not differ in causing blood pressure problems." Hahahahahaha. Funny. What's really amusing is that they hyperlink "probability" so we can learn more technical stuff from them. OK, I'll bite, I'll follow the link:

Probability

Definition: The likelihood (or chance) that an event will occur. In a clinical research study, it is the number of times a condition or event occurs in a study group divided by the number of people being studied.

Example: For example, a group of adult men who had chest pain when they walked had diagnostic tests to find the cause of the pain. Eighty-five percent were found to have a type of heart disease known as coronary artery disease. The probability of coronary artery disease in men who have chest pain with walking is 85 percent.

Fuuuuuuuuuuuuuuuck. No no no no no. First, of course “likelihood” has a technical use which is not the same as what they say. Second, “the number of times a condition or event occurs in a study group divided by the number of people being studied” is a frequency, not a probability.

It’s refreshing to see these sorts of errors out in the open, though. If someone writing a tutorial makes these huge, huge errors, you can see how everyday researchers make these mistakes too.

For example:

A pair of researchers find that, for a certain group of women they are studying, three times as many are wearing red or pink shirts during days 6-14 of their monthly cycle (which the researchers, in their youthful ignorance, were led to believe were the most fertile days of the month). Therefore, the probability (see above definition) of wearing red or pink is three times as high during these days. And the result is statistically significant (see above definition), so the results are probably true. That pretty much covers it.

All snark aside, I’d never really had a sense of the reasoning by which people get to these sorts of ridiculous claims based on such shaky data. But now I see it. It’s the two steps: (a) the observed frequency is the probability, (b) if p less than .05 then the result is probably real. Plus, the intellectual incentive of having your pet theory confirmed, and the professional incentive of getting published in the tabloids. But underlying all this are the wrong definitions of “probability” and “statistical significance.”
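
To see how step (a) goes wrong, here’s a quick simulation in R with made-up numbers: even if the true rate of wearing red or pink is identical in the two groups, small samples will hand you a “three times as many” result a respectable fraction of the time.

set.seed(2015)
n_per_group <- 25     # made-up group sizes, roughly the scale of such studies
p_true <- 0.10        # same true rate in both groups

sims <- replicate(10000, {
  fertile <- rbinom(1, n_per_group, p_true)
  other   <- rbinom(1, n_per_group, p_true)
  fertile >= 3 * other & fertile > 0    # "three times as many" (or more)
})
mean(sims)   # happens in a healthy fraction of simulated studies,
             # even though the true ratio is exactly 1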

Who wrote these definitions in this U.S. government document, I wonder? I went all over the webpage and couldn’t find any list of authors. This relates to a recurring point made by Basbøll and myself: it’s hard to know what to do with a piece of writing if you don’t know where it came from. Basbøll and I wrote about this in the context of plagiarism (a statistical analogy would be the statement that it can be hard to effectively use a statistical method if the person who wrote it up doesn’t understand it himself), but really the point is more general. If this article on statistical significance had an author of record, we could examine the author’s qualifications, possibly contact him or her, see other things written by the same author, etc. Without this, we’re stuck.

Wikipedia articles typically don’t have named authors, but the authors do have online handles and they thus take responsibility for their words. Also Wikipedia requires sources. There are no sources given for these two paragraphs on statistical significance which are so full of errors.

What, then?

The question then arises: how should statistical significance be defined in one paragraph for the layperson? I think the solution is, if you’re not gonna be rigorous, don’t fake it.

Here’s my try.

Statistical Significance

Definition: A mathematical technique to measure the strength of evidence from a single study. Statistical significance is conventionally declared when the p-value is less than 0.05. The p-value is the probability of seeing a result as strong as observed or greater, under the null hypothesis (which is commonly the hypothesis that there is no effect). Thus, the smaller the p-value, the less consistent are the data with the null hypothesis under this measure.

I think that’s better than their definition. Of course, I’m an experienced author of statistics textbooks so I should be able to correctly and concisely define p-values and statistical significance. But . . . the government could’ve asked me to do this for them! I’d’ve done it. It only took me 10 minutes! Would I write the whole glossary for them? Maybe not. But at least they’d have a correct definition of statistical significance.

I guess they can go back now and change it.
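
To make the p-value part of that definition concrete, here’s a toy version of the glossary’s drug example in R. The counts are invented (the glossary reports only p = 0.2); they are chosen so the comparison comes out non-significant:

# Invented data: blood-pressure problems on Drug A vs. Drug B.
events <- c(drug_a = 12, drug_b = 18)
n      <- c(drug_a = 100, drug_b = 100)

test <- prop.test(events, n)
test$p.value
# This p-value is the probability of seeing a difference in rates at least
# this big if the two drugs truly had the same rate. It is not the
# probability that the results are "true" or "due to chance."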

Just to be clear, I’m not trying to slag on whoever prepared this document. I’m sure they did the best they could, they just didn’t know any better. It would be as if someone asked me to write a glossary about medicine. The flaw is in whoever commissioned the glossary, to not run it by some expert to check. Or maybe they could’ve just omitted the glossary entirely, as these topics are covered in standard textbooks.

[Screenshot of the Effective Health Care Program logo]

P.S. And whassup with that ugly, ugly logo? It’s the U.S. government. We’re the greatest country on earth. Sure, our health-care system is famously crappy, but can’t we come up with a better logo than this? Christ.

P.P.S. Following Paul Alper’s suggestion, I made my definition more general by removing the phrase, “that the true underlying effect is zero.”

P.P.P.S. The bigger picture, though, is that I don’t think people should be making decisions based on statistical significance in any case. In my ideal world, we’d be defining statistical significance just as a legacy project, so that students can understand outdated reports that might be of historical interest. If you’re gonna define statistical significance, you should do it right, but really I think all this stuff is generally misguided.

Don’t put your whiteboard behind your projection screen

Daniel, Andrew, and I are on our second day of teaching, and like many places, Memorial Sloan-Kettering has all their classrooms set up with a whiteboard placed directly behind a projection screen. This gives us a sliver of space to write on without pulling the screen up and down.

If you have any say in setting up your seminar rooms, don’t put your board behind your screen, please — I almost always want to use them both at the same time.

I also just got back from a DARPA workshop at the Embassy Suites in Portland, and there the problem was a podium in between two tiny screens, neither of which was easily visible from the back of the big ballroom. Nobody knows where to point when there are two screens. One big screen is way better.

At my summer school course in Sydney earlier this year, they had a neat setup where there were two screens, but one could be used with an overhead projection of a small desktop, so I could just write on paper and send it up to the second screen. And the screens were big enough that all 200+ students could see both. Yet another great feature of Australia.