Whassup, Pace investigators? You’re still hiding your data. C’mon dudes, loosen up. We’re getting chronic fatigue waiting for you already!

[cat picture]

James Coyne writes:

For those of you who have not heard of the struggle for release of the data from the publicly funded PACE trial of adaptive pacing therapy, cognitive behaviour therapy, graded exercise therapy, and specialist medical care for chronic fatigue syndrome, you can access my [Coyne’s] initial call for release of the portion of the data from the trial published in PLOS One.

Here is support from Stats News for release of the PACE trial data. And my half-year update on my request.

Despite the investigators’ having promised the data would be available as a condition of publishing in PLOS One, 18 months after making my request, I still have not received the data and the PACE investigators are continuing to falsely advertise to readers of their PLOS One article that they have complied with the data-sharing policy.

Here’s what the Pace team wrote:

We support the use of PACE trial data for additional, ethically approved research, with justified scientific objectives, and a pre-specified analysis plan. We prefer to collaborate directly with other researchers. On occasion, we may provide data without direct collaboration, if mutually agreed. . . .

Applicants should state the purpose of their request, their objectives; qualifications and suitability to do the study, data required, precise analytic plans, and plans for outputs. See published protocol for details of data collected.

Dayum. Them’s pretty strong conditions, considering that, as Coyne points out, the original Pace trial didn’t follow a pre-specified analysis plan itself.

Also:

Data will be provided with personal identifiers removed. Applicants must agree not to use the data to identify individual patients, unless this is a pre-specified purpose for record linkage.

Individual researchers who will see the data must sign an agreement to protect the confidentiality of the data and keep it secure.

This seems fair enough, but I don’t see that it has anything to do with pre-analysis plans, qualifications and suitability, or direct collaboration. The issue is that there are questions with the published analyses (see, for example, here), and to resolve these questions it would help to work with the original data.

Putting roadblocks in the way of data sharing, that seems a bit vexatious to me.

P.S. There’s nothing more fun than a story with good guys and bad guys, and it’s easy enough to put the Pace struggle into that framework. But, having talked with people on both sides, I feel like lots of people are trying their best here, and that some of these problems are caused by the outmoded statistical attitude that the role of a study is to prove that a treatment “works” or “does not work.” Variation is all. I don’t know how much can be learned by reanalysis of these particular data, but it does seem like it would be a good idea for the data to be shared as broadly as possible.

Comments

  1. What kind of data sharing policy is that? The Freedom of Information Act is liberal compared to that. Basically, they have a veto they can exercise on completely subjective criteria. What are “adequate qualifications”?

    At the very least they should make the criteria objective and verifiable.

    • Thanks for the link. Two quotes (toward the end) plus a little commentary for people who need either a teaser to read the article or a TLDR summary:

      Quote: “Like Anaya and Brown, van der Zee says he’s not driven by any animus toward Wansink; none of them had heard of him before they started dissecting his research. Instead he’s motivated by frustration with a scientific establishment that too frequently rewards dubious work, that seems to prefer flashiness over rigor. “It’s a disgrace for the field that we are three nobodies and we are the ones that have to discover this when it’s been out there in the published research for years,” he says.”

      Comment: I am grateful for Anaya, Brown, and van der Zee. Before they came along I had a few experiences writing to people and politely trying to point out problematical aspects of their work. Usually I got no response, or occasionally something like, “Thank you for taking an interest in my work, but I don’t feel the points you are making are really a problem.”

      Quote (very end of article): “Wansink still hopes to prove that his research lives up to that high standard. But he’s well aware that academic credibility is a fragile commodity and, in his less-optimistic moments, worries that the damage to his reputation has already been done. “I still think that most of our stuff is really, really rigorous,” Wansink says. “What I’m upset with is that it may not be seen as such anymore. That’s my disappointment: That all my amazing work, now people will say, ‘Yeah, but I wonder.’””

      Comment: I certainly hope that now people will say, “but I wonder.” That’s what needs to be said more often in response to published papers. I also hope that people won’t be so quick to call their work “rigorous” or “amazing” when it’s not really rigorous.

        • Carol:

          Yup, people have been slamming Wansink’s work for years but Wansink’s just ignored the criticism, a strategy that worked brilliantly until just a couple months ago. As is often the case in such situations, I’m not clear where’s the line between incompetence and unethical behavior.

        • Interesting link, thanks! In it the following is stated:

          “Yet the authors are real people, with a record of publishing in this field. Would they perpetrate such an ingenious hoax under their own names? Are they, perhaps, doing a sociological study to determine how the media and the public respond to such outrageous nonsense presented in the trappings of a scientific study? A test of science literacy or human gullibility, perhaps? ”

          The more I hear about Wansink, the more it resembles the Stapel case to me. Both of them looking and acting more like actors than scientists, both producing research with media appeal, etc.

          The only two differences are that Stapel faked his data, and the same cannot be said of Wansink (yet?). Also, with Stapel (and Bem) I did wonder whether it was all a sociological study; with Wansink, not so much.

        • I just noticed that it was shortly after Simenek’s post that I wrote a bunch of posts trying a polite approach to critiquing some papers on “stereotype threat” (also sending a courtesy “heads-up” email to the lead author of each paper I critiqued). I never got any response from any of the authors.

          (In case anyone’s interested: The posts start at http://www.ma.utexas.edu/blogs/mks/2014/06/22/beyond-the-buzz-on-replications-part-i-overview-of-additional-issues-choice-of-measure-the-game-of-telephone-and-twwadi/.)

    • Jordan:

      Yes, I was interviewed for that article. It doesn’t seem like any of my material survived the editing process (my favorite bit was when I said that in my opinion Wansink has no obligation to share any of his data, and in turn I have no obligation to believe a single word he writes), but I guess the general points got made.

      • Yes, as I’ve said before, getting the data isn’t that big of a deal, we already know what horrors await. I do find it interesting however that his lab consistently publishes in BMC, which has had an open data policy since 2011, and yet none of his BMC papers seem to include links to the data sets.

        The thing I found most frustrating was his excuses for why the data couldn’t be shared. If you don’t want to share the data because you’re embarrassed by its quality, fine, just say that. Don’t come up with some BS excuse that we could identify who the diners were because the data set includes heights and weights. Does he think we’re stupid? At least come up with some smart lies, shit.

        You know, after we posted our preprint I wanted my involvement in this scandal to be over, but I’ve found the responses by Wansink, the journals, and Cornell to be absolutely pathetic. Cornell’s research integrity office and IRB won’t even respond to our emails. As a result, I’m all in boys. I’m trying to get this story to more news outlets and will continue to blog about it, and follow up with journals to make sure they are aware of the issues. And best of all, when I speak at schools/conferences I won’t pull any punches.

        • Jordan:

          This is not to excuse anything that Wansink is doing . . . I think he’s in over his head and has been hoping for years that all this fuss would go away. And, the amazing thing is, such a strategy can work! It worked for Ed Wegman (search this blog for that story), and it’s certainly worked for lots of researchers who spend their careers spewing out the results of noise mining. It didn’t work for Michael LaCour, but, as I discussed at the time, he might well have been able to weather the storm had he not faked his data but had just done the usual p-hacking. Marc Hauser got pushed out, but it took Harvard a lot of effort to do so, and even now he seems to have the support of some of his former colleagues in the psychology department. Last I checked, John Bargh still has a job, as does Roy Baumeister. If Amy Cuddy had already been tenured, she’d probably still be at Harvard Business School. Satoshi Kanazawa continues to show up in the headlines. Susan Fiske never even apologized for publishing those papers on himmicanes etc.

          All these people are like the neighborhood butcher who will sell you spoiled meat and then act all innocent when you come back later to complain. Once the sale is completed, or the paper is published, it’s history. That’s the attitude. And, amazingly, it seems to work most of the time.

        • “Once the sale is completed, or the paper is published, it’s history. That’s the attitude. And, amazingly, it seems to work most of the time.”

          Yes, this strategy works for a lot of people. Some people call it “Put the past behind you,” others, “Live in the present,” and others just call it “denial.” But it does work for a lot of people.

        • What may be even more relevant is what immediately follows the quote you gave:

          “The social-priming debate will rumble on, he [Dijksterhuis] says, because “there is an ideology out there that doesn’t want to believe that our behaviour can be cued by the environment”.

          Others remain concerned. Kahneman wrote in the e-mail debate on 4 February that this “refusal to engage in a legitimate scientific conversation … invites the interpretation that the believers are afraid of the outcome”.”

        • Here is what can happen when “historical experiments” are rigorously examined by scientists.

          An example of critically examining papers concerning Dijksterhuis’ “Unconscious Thought Theory (UTT)”:

          https://osf.io/preprints/psyarxiv/j944a/

          “To be frank, we expected our analysis would enable us to write a paper about theoretical and conceptual issues surrounding UTT. Instead, we encountered a massive failure of expert peer-review to reject manuscripts of a quality unacceptable as a product of science.”

        • Anon:

          I’d quote from that article but it very clearly says:

          EVALUATE BEFORE YOU REPLICATE. MANUSCRIPT SUBMITTED FOR PUBLICATION - PLEASE DO NOT QUOTE

          Usually I don’t like it when people tell me what not to do, but they did say please, so I’ll respect their wishes.

        • “there is an ideology out there that doesn’t want to believe that our behaviour can be cued by the environment”

          And conversely, there is an ideology out there that does want us to believe that our behaviour can be cued by the environment. The ideology has the corollary that governments should be consulting well-remunerated experts for policy advice on the right cues — ‘nudges’, as it were — that will achieve the desired behaviour.

        • Martha (Smith): Dijksterhuis says “there is an ideology out there that doesn’t want to believe that our behaviour can be cued by the environment”. And he is certainly contributing to the evidence that environment affects behavior – if not through cues, at least through the strong incentives that Dutch universities provide for getting published in approved journals.

        • I received an interesting email from your friend Paul Alper.

          Apparently he is opposed to my use of the word “shit” in my previous comment. Damn, I just said “shit” again. Fuck, I just said “damn”. Cock sucking cunt, I just said “fuck”.

          He also told me to think carefully before I press submit. Oops.

        • Jordan:

          The world is full of people whom I respect and whose opinions I value but whom I’ve never met.

          Regarding the use of street language online: it all depends on context. I’m mostly monolingual but I’ve gained a lot of insights from my efforts to learn other languages. When I try to speak other languages, I’m aware of the importance of context and I learn that words that are synonyms in some settings don’t act the same in others. I don’t think it’s necessary to agree with all of Paul’s opinions to recognize that certain language when placed in the context of scholarly discussion can be an unhelpful distraction for some readers, even if in conversation it would not seem out of place at all.

        • Jordan: I’m curious to know what good you think is accomplished by language like this. I recently pointed out via private e-mail that your understanding of one of Wansink’s analyses was not correct; I was polite and professional over the course of several e-mail exchanges, and eventually you agreed with me. Would you have been so receptive if I had said that you were “stupid” or a “liar” or called you a “shit”?

          Whatever his other failings, one has to give Wansink credit for his grace under fire; he is behaving more appropriately than his critics.

        • Because of the incomplete methods in Wansink’s papers, it is sometimes impossible to determine with 100% certainty what is wrong, and what is simply some unusual statistics.

          I thank you for your contributions in sorting through this giant mess, and you should feel free to use whatever language you see fit.

        • Jordan Anaya (March 20, 2017 at 4:31 PM): In this particular case, it was a lack of statistical expertise on your part.

        • Jordan Anaya: There is more than one statistical mistake in the analyses by you and your co-authors. I raised one with Nick Brown (about chi-square vs. McNemar’s test) which he inserted into his blog (I think). The rest of them I haven’t tackled.

          In any case, my original comment was not about your analyses, or your statistical expertise or lack of it; it was about your language. What good is accomplished by language like this? I’ve done a lot of statistical consulting over the years. I know that a gentle approach gets resistance. A harsh approach? No hope.

        • Carol:

          You write, “Whatever his other failings, one has to give Wansink credit for his grace under fire; he is behaving more appropriately than his critics.”

          I disagree. Yes, Wansink is using polite language but his substance is terrible. He’s repeatedly been trying to deflect real criticism with irrelevant responses. Whether Wansink is the most clueless person to have ever entered a psychology lab, or a cunning manipulator, or something in between, I can’t say his behavior has been “appropriate” to the many, many problems that have been found in his work.

        • Jordan Anaya: I am not defending Wansink. I think this is, at best, very sloppy research. But it is also the case that critics are not necessarily right, and also make mistakes. My position is to keep an open mind until one has the data. I’ve dealt with a huge number of data sets over the years (generally from social psychology and medicine and related fields) and I have often been astonished at what some of them contain.

        • Andrew: Perhaps. Time will tell. Will Wansink in fact correct the four pizza gate papers that he said he would correct? Will he in fact release the datasets that he said he would release? To me this looks more hopeful than some of the reactions from other people whose work has been criticized; virtually all defend it no matter what. I am willing to give him the benefit of the doubt for the time being at least.

          And I still see no reason to call him “stupid,” “liar,” “a shit,” and “a cock sucking cunt.” I’d prefer to keep the focus on the science or the statistics or the methods or whatever.

        • Carol, not sure why you can’t follow the Twitter link, but here’s the content of the tweet:

          “Amazing survey discovery: No matter how many people you mail, with any $ reward, with or without Canadians, you’ll always get 770 responses.”

          With then a marked up set of screenshots from Wansink pdfs where time after time he always got exactly 770 people to respond to his survey requests… which obviously sounds dodgy, like either the data is completely fabricated, or they always dropped some of the responses to get the same count, or whatever.

          My guess is that people like Jordan who respond with anger and strong language do so because they see how they are personally harmed by having spent time to “get into” a field that they find after the fact is horribly corrupted and reeks of dishonest behavior being rewarded with large monetary returns.
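
          For readers wondering just how “dodgy” the identical counts are: here is a toy calculation of my own (the mailing sizes and response rates below are invented, not taken from Wansink’s papers). Even in the most favorable case, where each survey is expected to yield exactly 770 responses, repeatedly landing on exactly 770 is very unlikely under ordinary binomial sampling.

```python
from math import lgamma, log, exp

def binom_logpmf(k, n, p):
    """Log-probability of exactly k responses from n mailings with response rate p."""
    return (lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)
            + k * log(p) + (n - k) * log(1 - p))

# Hypothetical mailings, each tuned so the *expected* number of responses
# is exactly 770 -- the scenario most favorable to coincidence.
surveys = [(2000, 0.385), (1540, 0.50), (1100, 0.70)]  # (number mailed, response rate)

total_logp = 0.0
for n, p in surveys:
    logp = binom_logpmf(770, n, p)
    print(f"mailed {n}, response rate {p}: P(exactly 770 responses) = {exp(logp):.4f}")
    total_logp += logp

print(f"P(all three surveys land on exactly 770) = {exp(total_logp):.2e}")
```

          Each individual probability comes out around 2%, and the chance of all three matching is on the order of one in a hundred thousand; with more surveys the coincidence only gets less plausible.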

        • Basically it is absolutely correct to be outraged by this kind of thing, and strong language is an effective way of signaling extreme outrage. Not something I personally choose to do, but I don’t think it’s inappropriate to be outraged here. Wansink doesn’t deserve any kind of kid gloves, his responses have been to let it all roll off like water off a duck’s back… so maybe the solution is to start pouring acid.

        • Daniel Lakeland: Thank you very much for taking the time to send me the twitter information. Sounds bad. Really, really bad.

          As far as outrage goes, I did not follow the Wansink situation at all for a few weeks, so I may not be up-to-date on the situation. Or it may just be a difference in personal style. Or a difference in age or experience; perhaps I no longer get as hot under the collar as I did in my youth, simply because I have seen so much bad and/or dishonest research over the years.

        • Daniel and Carol: Yes, this 770 thing looks really bad. Before we blog about this we are doing our absolute best to try and understand what might be happening. I bought Wansink’s books and am reading them. We are going through his hundreds of papers. We are going through his CV to check timelines, we are leaving no stone unturned.

          So far what I am seeing is not looking good; I don’t know when we will release our report of what we find.

        • Jordan:

          If only Wansink were, y’know, an actual scientist. Then you could just call him up and ask what happened. Once we have to abandon the assumption that the people involved are honest and have good intentions, everything gets so much more difficult.

        • Andrew: Reading the books is more interesting than you might think. It’s just one entire case study of HARKing. Regardless of the result of each experiment he spins the results to match his narrative.

          Sometimes large containers cause people to eat more, sometimes multiple small containers cause people to eat more.

          Did you know that shelled peanuts (vs unshelled) has no effect on number eaten for average people? But guess what? It does for obese people! Just as you would expect…

          The scariest thing is I think at least some of his work has been “replicated”. I would actually view this as a result of priming the researchers. Researchers know what result they are trying to replicate, so when they don’t replicate it they don’t report it and assume they did something wrong. And when they do replicate it they announce it. Selective reporting of replications, scary stuff. This field is screwed.
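
          A tiny simulation makes the selective-reporting worry concrete. This is my own illustration with made-up numbers (a true effect of zero and a 5% false-positive rate), not data from any actual replication effort: if only the attempts that happen to reach significance get announced, the visible record looks like consistent replication no matter what.

```python
import random

# Toy simulation of selectively reported replications (illustrative only).
# Assume the true effect is zero, so each attempt "succeeds" (p < 0.05)
# about 5% of the time by chance. Failed attempts stay in the file drawer.
random.seed(0)

n_attempts = 200
false_positive_rate = 0.05

announced = sum(random.random() < false_positive_rate for _ in range(n_attempts))

print(f"replication attempts: {n_attempts}")
print(f"announced (significant) replications: {announced}")
print("apparent replication rate among announced results: 100%")
```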

        • Andrew: “Once we have to abandon the assumption that the people involved are honest and have good intentions, everything gets so much more difficult.”

          The trouble with this is that “what is honesty” and “what are good intentions” are, regrettably, often subjective judgments, and (also regrettably) we may call someone “not honest” when their idea of honesty does not coincide with our own; similarly for good intentions.

        • Don’t forget Barbara Fredrickson. Neither her involvement in the Positivity Ratio fabrication nor the genes-and-positivity junk science seems to have threatened her paramount position in any way. The cliche about “Too big to fail” does seem to apply to PIs.

          The Thorstenson-Pazda-Elliot claims about sadness affecting colour perception, that was retracted, but Andrew Elliot’s lab has been churning out equally rubbish claims about “Red (or yellow) affects attractiveness (or health, or something else)” for a decade or so without anyone caring.

        • It is not true that no one cares about these claims.

          My apologies. And it may be that someone in Elliot’s lab read your concerns about subject numbers and power, for their latest studies on the sexiness of apoplexy in men and women —
          Facial Redness Increases Men’s Perceived Healthiness and Attractiveness
          Women’s Facial Redness Increases Their Perceived Attractiveness: Mediation Through Perceived Healthiness
          — they ran the power calculation for a value of Cohen’s d, and recruited enough subjects through SurveyMonkey.

          It is interesting that they teamed up with Dave Perrett, who was last author on both studies. For Perrett has earlier shown that it’s facial yellowness that increases attractiveness.
          http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0032988
          It is almost as if the crucial variable is saturation, but we may never know.

        • Smut:

          Yup. And, just to complete the circle, the notorious paper on wearing red and the menstrual cycle—the paper that in itself represented so many of the problems of 2010-2015-era Psychological Science—cited 3 papers from that Elliot group. Regular readers of this blog may remember that the authors of the ovulation-and-wearing-red paper refused even to concede that days 6-14 are not the dates of peak fertility (thus putting themselves in disagreement with the U.S. Health and Human Services) and also that they performed their own replication of their study, which failed, but which they then, Bargh-like, interpreted as a success by introducing a new interaction never mentioned in their earlier paper.

        • Regular readers of this blog may remember that the authors of the ovulation-and-wearing-red paper refused even to concede that days 6-14 are not the dates of peak fertility (thus putting themselves in disagreement with the U.S. Health and Human Services)

          I remember that.
          You: Days 6-14 of the cycle are not peak fertility / ovulation, but are the period reported as peak desire — Perrett confused the two, and you are citing Perrett’s mistake.

          Tracy & Beall: But look at all these other Evo-Psych papers co-authored by Perrett, repeating the same mistake! See, the literature is on our side!

          and also that they performed their own replication of their study, which failed, but which they then, Bargh-like, interpreted as a success by introducing a new interaction never mentioned in their earlier paper.

          Weather, wasn’t it? The whole effect could only be reproduced in the replication study when they remembered to record meteorology and bring those variations into their analysis, due to a weather interaction that didn’t affect the first study, only the second. But Tracy and Beall still think you are a big old meanie when you accuse them of using a forking-paths selection of variables to shelter the hypothesis from falsification.

          Tracy and Beall don’t just “cite 3 papers from that Elliot group”, they’re part of Elliot’s group.

        • Smut:

          Yeah, the whole thing looks a lot like power politics: Elliot, Perrett, etc. represent a powerful and cohesive group within psychology, so they get published in the official journal of APS, their acolyte gets a coveted featured speaking spot at the APS conference, etc.

          It’s Game of Thrones out there, and, as far as the Lannister crowd are concerned, Uri Simonsohn is the High Sparrow.

        • Smut Clyde: It’s not true that no one cares about these claims. But it is terribly difficult to get critiques of poor quality work published. I have a statistical commentary on Barbara Fredrickson’s PNAS “well-being and gene expression” article now in press. It has taken me almost three years to get it accepted somewhere.

        • Smut Clyde: Thank you! It will appear in the new online journal COLLABRA:PSYCHOLOGY, and soon. I received the typeset proofs today to review.

  2. The good/evil dichotomy might be in question if it weren’t for that ‘qualifications and suitability’ qualification.
    If they believe they can withhold data from people they think are unqualified, then they should be swiftly exited from professional science, even possibly medicine.

  3. Any good reason why the original article is still published in PLOS One? The data-sharing guidance should be grounds enough for the publication to be removed or retracted from PLOS One, according to its data sharing policy.

  4. On the comment “some of these problems are caused by the outmoded statistical attitude that the role of a study is to prove that a treatment “works” or “does not work.” ”

    As I understand it, the PACE study was intended to help the NHS decide whether Doctors should treat Chronic Fatigue Syndrome in a particular way. The NHS, amongst its other attributes, is a large bureaucratic organisation, which makes extensive use of checklists, targets, incentives, and other structures which work best with advice of the form “Patients diagnosed with X should be treated with Y”. I have (once) received a strong impression that NHS staff deciding whether to treat me were working doggedly through a checklist despite having some doubts as to whether it was really applicable in my case.

    Under these circumstances, the role for nuance and subtlety is limited. Organisationally, at least, the most attractive answers will be either “Patients with Chronic Fatigue Syndrome should be encouraged to exercise in all circumstances” or “Patients with Chronic Fatigue Syndrome should be given help and advice so they can make the best use of their current capacity without overstraining themselves.”

    It seems to me that this is the sort of situation which lends itself to something like the dreaded NHST, albeit with some allowance for prior information (which might be difficult to find consensus on) and for different costs of type I and type II errors. How would a more modern approach differ from this?

    • Ag:

      At the very least, a modern approach would apply a cost-benefit analysis (where cost includes human cost, not just financial cost). For any given patient there is a range of possible outcomes with different predictive probabilities. Based on a literature review of studies such as Pace (and appropriately correcting for biases such as the statistical significance filter), one can estimate the effects of a treatment such as graded exercise therapy on the population. There’s a whole decision tree here but it can be simplified into an estimate of what would happen under different proposed National Health Service recommendation policies. But nowhere in this procedure does it make sense to consider the treatment as generally “working” or “not working,” nor is there any place for a decision being made based on a p-value or significance test.
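
      As a rough illustration of the kind of policy comparison described above, here is a minimal simulation sketch. Every number in it (the effect-size distribution, uptake rates, costs, and the value placed on an improvement) is invented for illustration; a real analysis would estimate these from the literature with bias corrections and would include human as well as financial costs.

```python
import random

# Minimal sketch: compare NHS-style recommendation policies by expected net
# benefit, propagating uncertainty about the treatment effect, instead of
# asking whether the treatment "works". All numbers are hypothetical.
random.seed(1)

def draw_effect():
    """One draw of the average improvement (arbitrary quality-of-life units)
    from an assumed, bias-corrected uncertainty distribution."""
    return random.gauss(mu=0.05, sigma=0.10)

policies = {
    "recommend therapy for all patients":    {"uptake": 0.9, "cost_per_patient": 800},
    "offer therapy only on patient request": {"uptake": 0.3, "cost_per_patient": 800},
    "do not recommend therapy":              {"uptake": 0.0, "cost_per_patient": 0},
}

value_per_unit_improvement = 20000  # hypothetical value of one quality-of-life unit
n_sims = 10_000

for name, pol in policies.items():
    total = 0.0
    for _ in range(n_sims):
        effect = draw_effect()
        benefit = pol["uptake"] * effect * value_per_unit_improvement
        cost = pol["uptake"] * pol["cost_per_patient"]
        total += benefit - cost
    print(f"{name}: expected net benefit per patient = {total / n_sims:,.0f}")
```

      The point of the sketch is that the output is a comparison of expected consequences under each policy, with the uncertainty in the effect carried through, rather than a yes/no verdict from a significance test.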

  5. ‘Applicants should state the purpose of their request, their objectives; qualifications and suitability to do the study, data required, precise analytic plans, and plans for outputs. See published protocol for details of data collected.’

    Sounds innocuous to me. In my experience, these multi-centre studies take great care to allocate papers among contributing researchers, avoid duplicate publication etc. So there’s likely the same process internally, determining who gets to analyse what. An added benefit is that potentially identifying information* isn’t available to everyone within the ‘team’ of a multi-centre study, but only people who have at least written a statistical analysis plan.
    Another concern is outsiders producing analyses that make no substantive sense; the team would want to catch that early instead of having to correct a tabloid headline later.

    Sure, it’s not particularly open or welcoming, but I’d want to see a submission and denial letter (for which there is an independent appeal process apparently) before crying foul.

    * With patient data everyone is super paranoid about that: age, height and weight, occupational status, marriage status and income just might might might be enough to identify patients at smaller sites. With PACE, therapists are another concern: they need to be specialized in the therapies under review and it might be possible to identify them based on that.
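
    To make the footnote’s worry concrete, here is a toy check of the sort a data manager might run (the records and field names below are made up): count how many participants share each combination of quasi-identifiers; any combination held by a single person at a small site is potentially re-identifying even after names are removed.

```python
from collections import Counter

# Toy illustration of re-identification risk from quasi-identifiers.
# The records below are invented, not real trial data.
records = [
    {"site": "A", "age": 34, "sex": "F", "height_cm": 162, "weight_kg": 55},
    {"site": "A", "age": 34, "sex": "F", "height_cm": 162, "weight_kg": 55},
    {"site": "B", "age": 51, "sex": "M", "height_cm": 180, "weight_kg": 92},
    {"site": "B", "age": 29, "sex": "F", "height_cm": 170, "weight_kg": 61},
]

quasi_identifiers = ("site", "age", "sex", "height_cm", "weight_kg")
counts = Counter(tuple(r[k] for k in quasi_identifiers) for r in records)

unique = [combo for combo, n in counts.items() if n == 1]
print(f"{len(unique)} of {len(records)} records have a unique combination of")
print(f"{quasi_identifiers} and could be re-identifiable at a small site.")
```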

    • Markus:

      As I wrote in my post, this seems fair enough, but I don’t see that it has anything to do with pre-analysis plans, qualifications and suitability, or direct collaboration. If they have security concerns, that’s one thing. But their restrictions don’t seem to be based just on security.

    • This identifiability issue seems blown out of proportion. It sounds like an excuse to hide data for nefarious/greedy reasons. I’m not clear on the history of these concerns; did they originate in pharmaceutical trials?

      Personally, I value privacy, but most people do not seem to care about it at all and would surely participate with minimal protections (i.e., don’t put my name in the paper/db, and a form saying “I won’t try to identify the subjects” before anyone gets access).

  6. ” I don’t see that it has anything to do with pre-analysis plans, qualifications and suitability, or direct collaboration.”
    From my experience, that is the same process that happens internally. Reasons for that procedure I’m aware of:
    – Avoid duplication in work/publications.
    – Share fruits of research work fairly among contributors.
    – Avoid association with crappy analyses (both statistical and theoretical)

    I’m arguing this is a reasonable policy, even without the privacy concerns. I’m not defending PACE’s current unwillingness to share data. They’re wrong not to. I’m saying the data sharing policy (DSP) you linked to is a fairly standard, unobjectionable document that doesn’t argue your case either way. Coyne’s frustration is understandable, but his criticism of the DSP is apparently not based on the actual process but on an uncharitable reading. And I disagree with him when he deems it unreasonable to provide analytic plans because his analyses would be exploratory. If he did write up some hypotheses and ways to check them, plus robustness checks, and the PACE people complained unreasonably, he would have a case. But preregistration does keep people honest, and I’m fine with researchers wanting their critics to put some skin into the game as well.
    Plus, one can’t weigh security against the scientific benefits of disclosure without knowing exactly what data are requested. (And, arguably, a description of the potential scientific benefits.)

    • Markus:

      1. “Avoid association with crappy analyses . . .”: too late for that one!

      2. I see nothing wrong with Coyne or anyone else providing a data analysis plan, but I don’t see why it should be required.

      3. I don’t see why it should be necessary for people to “put some skin in the game” in order to get data access.

      4. Again, I understand there can be security concerns; I just don’t see the relevance of these to points 1, 2, 3 above.

    • Markus

      >– Avoid duplication in work/publications.

      This is central planning. It might work for the original research group but not for the community of scientists and interested citizens (who financed the study through taxes). Imagine if we had the same policies for every public data set out there, like the census. Science would stop in its tracks.

      >– Share fruits of reasearch work fairly among contributors.

      I find that publicly financed data should not be under “copyright”. At least such copyright should only last a few years.

      >– Avoid association with crappy analyses (both statistical and theoretical)

      A crappy analysis includes one not willing to share the data. Indeed, the original work may be crappy, and that is why people want to re-analyze the data. But now the original authors are both defendants and prosecutors. Also see central planning above. Just let the market of ideas take care of itself.

    • The process that happens internally to a study is COMPLETELY irrelevant.

      Protecting identities of individuals is one thing but it should really be protected by law with nothing more required than an acknowledgement of the identity protection requirements.

      The only excuse for withholding a data set from someone who signs a paper acknowledging a statutory identity protection requirement should be that the requester was under the legal age to make a binding signature. Other than that, if you are a homeless person who reads at a 5th grade level and never completed high school, you should still get the damn data, and promptly.

      It’s balderdash to argue that only special guild members should be allowed access, which is what this policy says in essence.

    • Isn’t the whole point of sharing the data to explicitly do the opposite of “Avoid duplication in work/publications”? It is a good thing when multiple groups analyze the data, especially if they are antagonistic.
