
Born-open data

[Image: hatching]

Jeff Rouder writes:

Although many researchers agree that scientific data should be open to scrutiny to ferret out poor analyses and outright fraud, most raw data sets are not available on demand. There are many reasons researchers do not open their data, and one is technical. It is often time consuming to prepare and archive data. In response, my [Rouder’s] lab has automated the process such that our data are archived the night they are created without any human approval or action. All data are versioned, logged, time stamped, and uploaded, including aborted runs and data from pilot subjects. The archive is GitHub (github.com), the world’s largest collection of open-source materials. Data archived in this manner are called born open.
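Rouder’s paper describes his lab’s actual pipeline; as a rough sketch of the idea (not his code), the nightly archiving step can be as simple as a scheduled script that commits and pushes whatever new files appeared that day. Here’s one way it might look in R, assuming the data directory is already a git clone (the directory name and schedule below are made up):

    # Rough sketch, not Rouder's actual scripts: stage the day's new data files,
    # commit them with a time stamp, and push to a GitHub repository.
    archive_data <- function(repo_dir = "~/lab-data") {       # hypothetical directory
      old_wd <- setwd(repo_dir)
      on.exit(setwd(old_wd))
      system("git add -A")                                     # stage new and changed files
      msg <- paste("Automated nightly archive:", Sys.time())   # time-stamped commit message
      system(paste0("git commit -m '", msg, "'"))
      system("git push origin master")                         # upload to GitHub
    }
    archive_data()
    # run automatically each night, e.g. from cron:  0 2 * * * Rscript archive.R

Once something like this is scheduled, no human action or approval is needed, which is the whole point.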

Rouder continues:

Psychological science is beset by a methodological crisis in which many researchers believe there are widespread and systemic problems in the way researchers produce, evaluate, and report knowledge. . . . This methodological crisis has spurred many proposals for improvement including an increased consideration of replicability (Nosek, Spies, & Motyl, 2012), a focus on the philosophy and statistics underlying inference (Cumming, 2014; Morey, Romeijn, & Rouder, 2013), and an emphasis on what is now termed open science, which can be summarized as the practice of making research as transparent as possible.

And here’s the crux:

Open data, unfortunately, seems to be paradox of sorts. On one hand, many researchers I encounter are committed to the concept of open data. Most of us believe that one of the defining features of science is that all aspects of the research endeavor should be open to peer scrutiny. We live this sentiment almost daily in the context of peer review where our scholarship and the logic of our arguments is under intense scrutiny.

On the other hand, surprisingly, very few of our collective data are open!

Say it again, brother:

Consider all the data that is behind the corpus of research articles in psychology. Now consider the percentage that is available to you right now on demand. It is negligible. This is the open-data paradox—a pervasive intellectual commitment to open data with almost no follow-through whatsoever.

What about current practice?

Many of my colleagues practice what I [Rouder] call data-on-request. They claim that if you drop them a line, they will gladly send you their data. Data-on-request should not be confused with open data, which is the availability of data without any request whatsoever. Many of these same colleagues may argue that data-on-request is sufficient, but they are demonstrably wrong.

No kidding.

Here’s one of my experiences with data-on-request:

Last year, around the time that Eric Loken and I were wrapping up our garden-of-forking-paths paper, I was contacted by Jessica Tracy, one of the authors of that ovulating-women-wear-red study, which was one of several examples discussed in our article. Tracy wanted to let us know about some more research she and her collaborator, Alec Beall, had been doing, and she also wanted us to tell her where our paper would be published so that she and Beall would have a chance to contact the editors of our article before publication. I posted Tracy and Beall’s comments, along with my responses, on this blog. But I did not see the necessity for them to be involved in the editorial process of our article (nor, for that matter, did I see such a role for Daryl Bem or any of the authors of the other work discussed therein). In the context of our back-and-forth, I asked Tracy if she could send us the raw data from her experiments. Or, better still, if she could just post her data on the web for all to see. She replied that, since we would not give her the prepublication information on our article, she would not share her data.

I guess the Solomon-like compromise would’ve been to saw the dataset in half.

Just to clarify: Tracy and Beall are free to do whatever they want. I know of no legal obligation for them to share their data with people who disagree with them regarding the claim that women in certain days of their monthly cycle are three times more likely to wear red or pink shirts. I’m not accusing them of scientific misconduct in not sharing their data. Maybe it was too much trouble for them to put their data online, maybe it is their judgment that science will proceed better without their data being available for all to see. Whatever. It’s their call.

I’m just agreeing with Rouder that data-on-request is not the same as open data. Not even close.

Stan workshops at UCLA (6/23) and UCI (6/24)

While Bob travels to Boston-ish, I’ll be giving two Stan workshops in Southern California. I’m excited to be back on the west coast for a few days — I grew up not too far away. Both workshops are open, but space is limited. Follow the links for registration.

The workshops will cover similar topics. I’m going to focus more on Stan usage and less on MCMC. If you’re attending, please install RStan 2.6.0 before the workshop.
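If you want to check your setup ahead of time, something like this should do it (a minimal sketch, assuming rstan is available from your configured package repository and you have a working C++ toolchain; the RStan getting-started instructions have the platform-specific details):

    # Minimal pre-workshop check:
    install.packages("rstan", dependencies = TRUE)   # needs a C++ compiler
    library(rstan)
    packageVersion("rstan")                          # should report 2.6.0 or later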

 

P.S. Congrats, Dub Nation.

The David Brooks files: How many uncorrected mistakes does it take to be discredited?

[Photo: Nickelback in Brisbane, November 2012, Here and Now Tour]

OK, why am I writing this? We all know that New York Times columnist David Brooks deals in false statistics, he’s willing and able to get factual matters wrong, he doesn’t even fact-check his own reporting, his response when people point out his mistakes is irritation rather than thanks, he won’t run a correction even if the entire basis for one of his columns is destroyed, and he thinks technical knowledge is like the recipes in a cookbook and can be learned by rote. A friend of facts, he’s not.

But we know all that. So I was not surprised when Adam Sales pointed me to this recent article by David Zweig, “The facts vs. David Brooks: Startling inaccuracies raise questions about his latest book.”

Unlike Zweig (or his headline writer), I was hardly startled that Brooks had inaccuracies. Accuracy ain’t Brooks’s game.

And Jonathan Falk pointed me to this review by Mark Liberman of many instances where Brooks got things wrong.

Amazingly enough, the errors pointed out by Zweig and Liberman don’t even overlap with the errors that I’d noticed in some Brooks columns—the anti-Semitic education statistics and his completely wrong guess about the social backgrounds of rich people.

Anyway, this is all known, and my first response was a flippant, Yeah, no kidding, David Brooks is like Gregg Easterbrook without the talent.

Just to be clear: this is not meant as a backhand slam on Easterbrook, a columnist who, like Brooks, loves to quote statistics but can get them horribly wrong. Easterbrook is a good writer, a fun football columnist, and sparkles with ideas. He really does have talent.

So here’s my question

Anyway, to continue, here’s my question: How is it that Brooks, who has such a reputation for screwing things up, continues to occupy his high post in journalism? Where did he get his Isiah Thomas-like ability to keep bouncing back from adversity, his Ray Keene-like ability to violate the norms of journalistic ethics?

And it’s not just the New York Times. Here, for example, is a puff piece that appeared on NPR a couple months ago. The reporter didn’t get around to asking, Hey, David Brooks, what about those fake statistics you published??

What will it take for Brooks’s external reputation to catch up to his internal reputation? Lots of things have come out over the years and it hasn’t happened yet. But this new story that came in, maybe it will make a difference. Straw that broke the camel’s back and all that.

For example, that NPR story quoted Brooks quoting a statistic that, according to Zweig’s thorough investigation, got “nearly every detail” wrong. NPR reporters don’t like to be patsies, right? Publishing fake numbers in the NYT is one thing—heck, Brooks has columns to fill every week, he can’t be picky and choosy about his material. But promulgating this in other news outlets, that could annoy people.

And, once Brooks loses the constituency of his fellow journalists, what does he have left?

At that point, he’s Dennis Miller without the jokes.

Michael LaCour in 20 years


In case you were wondering what “Bruno” Lacour will be doing a couple decades from now . . .

James Delaney pointed me to this CNN news article, “Connecticut’s strict gun law linked to large homicide drop” by Carina Storrs:

The rate of gun-related murders fell sharply in the 10 years after Connecticut implemented a law requiring people buying firearms to have a license, according to a study. . . . To assess the effect of this law, researchers identified states that had levels of gun-related homicide similar to Connecticut before 1995. These include Rhode Island, New Hampshire and Maryland. When the researchers compared these states to Connecticut between 1995 and 2005, they found the level of gun-related homicide in Connecticut dropped below that of comparable states.

Based on the rates in these comparable states, the researchers estimated Connecticut would have had 740 gun murders if the law had not been enacted. Instead, the state had 444, representing a 40% decrease.
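(Quick arithmetic check on that number: the 40% is just the shortfall relative to the estimated counterfactual.)

    # 740 estimated gun murders without the law vs. 444 observed:
    (740 - 444) / 740   # = 0.4, i.e. a 40% decrease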

Wow—40%, that’s a lot! And, indeed, Storrs has a quote on it:

“I did expect a reduction [but] 40% is probably a little higher than I would have guessed,” said Daniel Webster, director of the Johns Hopkins Center for Gun Policy and Research who led the study, which was published Friday in the American Journal of Public Health.

A legal expert named Daniel Webster, huh? And I guess it’s good they have someone named Storrs writing articles about Connecticut.

Anyway, that’s a funny quote from the leader of the study! Perhaps the reporter should push a bit, maybe ask something like: Do you really believe the effect is 40%?? Or do you think that 40% is an overestimate coming from the statistical significance filter and the garden of forking paths?
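Just to illustrate what I mean by the filter, here’s a quick simulation with made-up numbers (a true 10% drop estimated with a standard error of 15 percentage points; these are not the numbers from the Connecticut study):

    # Made-up numbers, for illustration only: conditioning on statistical
    # significance exaggerates a noisily estimated effect.
    set.seed(123)
    true_effect <- -10                      # true change, in percent
    se <- 15                                # standard error of the estimate
    est <- rnorm(1e5, true_effect, se)      # simulated noisy estimates
    is_signif <- abs(est / se) > 1.96       # which estimates reach p < .05
    mean(est[is_signif & est < 0])          # about -37: filtered estimates look much bigger than -10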

OK, this is all important stuff. But it’s not the subject of today’s post.

Here’s the deal. Storrs continues her article:

Ten states have laws similar to Connecticut’s, including background check requirements. It is hard to know what effect permit-to-purchase laws have without looking in these other states, said John R Lott Jr., president of the Crime Prevention Research Center, a gun rights advocate and columnist for Fox News. “If 10 states passed a law, eight could increase and two could fall, and how do I know that it was because of the gun law?” he said.

Wha??? John Lott? CNN can’t find any real expert to interview? Why not just follow up with a quote from Mary Rosh, endorsing Lott as “the best professor I ever had”???

For those of you who don’t remember, John Lott shares with Michael LaCour the distinction of having announced, with great publicity, controversial data from a survey that he said he conducted but for which he could supply no evidence of existence. Damn! I hate when that happens. As I wrote last month, Lott represents a possible role model for LaCour in that he seems to continue to be employed in some capacity doing research of an advocacy nature. And, like LaCour, Lott never admitted to fabrication nor did he apologize. (I guess that last part makes sense: if there’s nothing to admit, there’s nothing to apologize for.)

OK, just on the statistics for a moment, Lott’s argument is terrible. First, “Ten states have laws similar to Connecticut’s” is not so relevant, given that the causal identification comes from the change in the law, not the existence of the law. Indeed, Storrs gets a good quote dismissing Lott’s argument:

Although Webster said he would like to study the effect of gun laws in other states, that research is not practical. Most states passed meaningful gun laws, such as laws requiring background checks, long ago, “frankly before I was born,” and it would be hard to know how those laws were enforced back then, and how society responded to them, he explained. In addition, information from death certificates was less readily available from the Centers for Disease Control and Prevention before 1980, he said.

Second, Lott says, “If 10 states passed a law, eight could increase and two could fall.” But that’s just ridiculous. Why suppose that introducing this law, which the data indicated was associated with a drop in homicides, would lead to an increase in 8 states out of 10?

It’s not that the Webster et al. claims are airtight. I’ve already expressed my concern that the estimated effect is too high, and Storrs alludes to evidence from other states that sends mixed messages. And Delaney has a point when he writes, “One concern about the construction of the synthetic control is Connecticut’s proximity to and interconnections with NYC, which experienced a dramatic decrease in overall homicides from 1177 in 1995 to 539 in 2005 (according to Wikipedia). Whereas, from what I can tell, homicide totals, while decreasing across the nation during this period, happened to be closer to constant in New Hampshire, Rhode Island, and Maryland.”

But Lott’s criticisms are uninspiring. Let’s hope that Bruno Lacour can do better in his future career as an advocate and pundit, and let’s hope that news outlets can do better when looking for a quote. I heard John Yoo is available. . . .

How tall is Kit Harington? Stan wants to know.

We interrupt our regularly scheduled programming for a special announcement.


Madeleine Davies writes: “Here are some photos of Kit Harington. Do you know how tall he is?”

I’m reminded, of course, of our discussion of the height of professional tall person Jon Lee Anderson:

[Photo: Cata with Jon Lee Anderson]

Full Bayes, please. I can’t promise publication on Gawker, but I’ll do my best.

Because there is no observable certainty other than the existence of thought

Someone who is teaching a college philosophy class writes:

We discussed Descartes’ Meditations on First Philosophy last week — specifically, concerning the existence of God — and I had students write down their best proof for God’s existence in one minute, independent of their beliefs. Attached is a particularly funny response:

[Image: the student’s handwritten response]

Another good one was the blank sheet of paper that a student handed in…

On deck this week

Mon: Because there is no observable certainty other than the existence of thought

Tues: Michael LaCour in 20 years

Wed: Born-open data

Thurs: You can crush us, you can bruise us, yes, even shoot us, but oh—not a pie chart!

Fri: In which a complete stranger offers me a bet

Sat: Statistics Be

Sun: “When more data steer us wrong: replications with the wrong dependent measure perpetuate erroneous conclusions”

Saturday’s entry is my favorite this week.

Wikipedia is the best

[Image: Boo-Boo Bear]

“It is not readily apparent whether Boo-Boo is a juvenile bear with a precocious intellect or simply an adult bear who is short of stature.”

The language of insignificance

Jonathan Falk points me to an amusing post by Matthew Hankins giving synonyms for “not statistically significant.” Hankins writes:

The following list is culled from peer-reviewed journal articles in which (a) the authors set themselves the threshold of 0.05 for significance, (b) failed to achieve that threshold value for p and (c) described it in such a way as to make it seem more interesting.

And here are some examples:

slightly significant (p=0.09)
sufficiently close to significance (p=0.07)
trending towards significance (p>0.15)
trending towards significant (p=0.099)
vaguely significant (p>0.2)
verged on being significant (p=0.11)
verging on significance (p=0.056)
weakly statistically significant (p=0.0557)
well-nigh significant (p=0.11)

Lots more at the link.

This is great, but I do disagree with one thing in the post, which is where Hankins writes: “if you do [play the significance testing game], the rules are simple: the result is either significant or it isn’t.”

I don’t like this; I think the idea that it’s a “game” with wins and losses is a big part of the problem! More on this point in our “power = .06” post.
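For anyone who hasn’t read that post: the point is that when the true effect is small relative to the noise, the power of a .05-level test can be tiny. A hypothetical illustration (the 0.3 is made up, chosen only to show how small power can get):

    # Power of a two-sided .05-level test when the true effect is only
    # 0.3 standard errors (hypothetical number):
    pnorm(0.3 - 1.96) + pnorm(-0.3 - 1.96)   # about 0.06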

JuliaCon 2015 (24–27 June, Boston-ish)

JuliaCon is coming to Cambridge, MA, the geek capital of the East Coast, on 24–27 June. Here’s the conference site with the program.

I (Bob) will be giving a 10-minute “lightning talk” on Stan.jl, the Julia interface to Stan (built by Rob J. Goedman; I’m just pinch-hitting because Rob couldn’t make it).

The uptake of Julia has been nothing short of spectacular. I’m really looking forward to learning more about it.

Trivia tidbit: Julia and Stan go way back; they were both developed under the same U.S. Department of Energy grant for high-performance computing (DE-SC0002099).