
Unethical behavior vs. being a bad guy

[cat picture]

I happened to come across this article and it reminded me of the general point that it’s possible to behave unethically without being a “bad guy.”

The story in question involves some scientists who did some experiments about thirty years ago on the biological effects of low-frequency magnetic fields. They published their results in a series of papers which I read when I was a student, and I found some places where I thought their analysis could be improved.

The topic seemed somewhat important—at the time, there was concern about cancer risks from exposure to power lines and other sources of low-frequency magnetic fields—so I sent a letter to the authors of the paper, pointing out two ways I thought their analysis could be improved, and requesting their raw data. I followed up the letter with a phone call.

Just for some context:

1. At no time did I think, or do I think, that they were doing anything unethical in their data collection or analysis. I just thought that they weren’t making full use of the data they had. Their unethical behavior, as I see it, came at the next stage, when they refused to share their data.

2. Those were simpler times. I assumed by default that published work was high quality, so when I saw what seemed like a flaw in the analysis, I wasn’t so sure—I was very open to the possibility that I’d missed something myself—and I didn’t see the problems in that paper as symptomatic of any larger issues.

3. I was not trying to “gotcha” these researchers. I thought they too would be interested in getting more information out of their data.

To continue with the story: When I called on the phone, the lead researcher on the project said he didn’t want to share the data: they were in lab notebooks and it would be effort to copy these, and his statistician had assured him that the analysis was just fine as is.

I think this was unethical behavior, given that: (a) at the time, this work was considered to have policy implications; (b) there was no good reason for the researcher to think that his statistician had particular expertise in this sort of analysis; (c) I’d offered some specific ways in which the data analysis could be improved so there was a justification for my request; (d) the work had been done at the Environmental Protection Agency, which is part of the U.S. government; (e) the dataset was pretty small so how hard could it be to photocopy some pages of lab notebooks and drop them in the mail; and, finally (f) the work was published in a scientific journal that was part of the public record.

A couple of decades later, I wrote about the incident, and the biologist and the statistician responded with defenses of their actions. I felt at the time of the original event, after reading their letters, and still today, that these guys were trying to do their best, that they were acting according to what they perceived to be their professional standards, and that they were not trying to impede the progress of science and public health.

To put it another way, I did not, and do not, think of them as “bad guys.” Not that this is so important—there’s no reason why these two scientists should particularly care about my opinion of them, nor am I any kind of moral arbiter here. I’m just sharing my perspective to make the more general point that it is possible to behave unethically without being a bad person.

I do think the lack of data sharing was unethical—not as unethical as fabricating data (LaCour), hiding data (Hauser), or brushing aside a barrage of legitimate criticism from multiple sources (Cuddy), or lots of other examples we’ve discussed over the years on this blog—but I do feel it is a real ethical lapse, for reasons (a)-(f) given above. But I don’t think of this as the product of “bad guys.”

My point is that it’s possible to go about your professional career, doing what you think is right, but still making some bad decisions: actions which were not just mistaken in retrospect, but which can be seen as ethical violations on some scale.

One way to view this is that everyone involved in research—including those of us who see ourselves as good guys—should be aware that we can make unethical decisions at work. “Unethical” labels the action, not the person, and ethics is a product of the situation as well as of the people involved.


  1. Eric Rasmusen says:

    “Every way of a man is right in his own eyes, but the LORD weighs the heart.” (Proverbs 21:2)

    “It’s not who you are underneath, it’s what you do that defines you.” (Batman Begins)

  2. zbicyclist says:

    Until the journals require posting the data, it won’t happen.

    • Dzhaughn says:

      We are not slaves to the journals. Hiring committees, tenure committees, and grant-making agencies could refuse to weight work with unpublished data (say, from this point forward) if they chose to.

      • Dale Lehman says:

        I agree with this completely. While having journals change their policies would help a great deal, we do some of this to ourselves. Far too little discussion has taken place on why this is the case (I have some ideas, but they are not well thought out). Unless hiring committees stop admiring “published” work without asking the right questions, it is hard to imagine things changing. Yet we all serve on hiring committees and apparently mostly follow the same formula. Why is that?

  3. Tom says:

    I’ve been told that cops make the same distinction. There are citizens and a**holes. Citizens make honest mistakes, and when you dig into their lives, those mistakes show up as isolated incidents. The more you dig into the others, though, the more you dig up.

  4. Sharing raw research data is mandatory for any researcher affiliated with any of the 14 Dutch research universities. The same is true for any researcher affiliated with any of the Dutch research institutes which endorse the VSNU ‘The Netherlands Code of Conduct for Academic Practice’.

    It is stated on page 8 of this CoC:
    “Principle 3 Verifiability. Presented information is verifiable. Whenever research results are published, it is made clear (..) how they can be verified. (..). 3.3. Raw research data are stored for at least ten years. These data are made available to other academic practitioners upon request, unless legal provisions dictate otherwise.”

    See my posting, dated July 5, 2015 at 11:12 am, for the consequences when Dutch researchers refuse to give other researchers access to the raw research data of a publication.

  5. Jonathan says:

    Such a Catholic post. Mistakes are typically venial. A pattern of venial sins doesn’t generally convert them into a mortal sin, but it shows a disposition and thus would extend your time in purgatory, etc. The frauds commit mortal sins.

    I have an issue with the cat at top: they can’t be bad because they don’t have human morality. They can be annoying, like me, but they’re entirely incapable of emotions like remorse and what is in law the “mens rea” of intent.

    • Curious says:

      Ahhh… yes, the fine grained and precisely measured metrics of disposition for sin. Psychometrics shall forever be the better for it.

    • Eric Rasmusen says:

      Great point, Jonathan! And, of course, the Catholics are wrong. God demands perfection. Protestant doctrine is that no sin is just venial, and no one is a sinless saint, so there are not really Good People and Bad People, but Jesus’s dying on the cross covers the sins of those He chooses to forgive.

      Just because it’s very tempting to hide your data and a lot of people do it in various fields doesn’t mean it’s OK, even if it’s not as bad as making the data up.

      • Andrew says:


        My knowledge of Christianity is very shallow, but your comment reminds me of something I’ve said to students sometimes, that goes as follows: We are all sinners, and the only way to be saved is to accept that. In this case, “sinning” refers to statistical mistakes. But the point is that pride does not serve us well. We are born with a lack of statistical understanding, and we make mistakes, and we will make mistakes again. Being saved does not come from perfection nor does it lead to perfection; it comes with continuing struggle and enlightenment.

        Anyway, I don’t remember all the details. It was all based on a sermon I heard on the radio one Sunday morning riding a taxi to the airport. While listening I suddenly realized how relevant this evangelical message was to statistical learning and practice.

        • Keith O'Rourke says:


          I would make it more general: “We are born with a lack of understanding of how to inquire about the reality beyond our direct access, and we make mistakes, and we will make mistakes again. Being saved does not come from perfection nor does it lead to perfection; it comes with continuing struggle and enlightenment of how to better inquire, driven by an insatiable love to get at the reality beyond us.”

  6. Will says:

    Regarding (d), there is now a policy for agencies that fund more than $100 million in research to make data and publications more available. (Assuming these policies aren’t rescinded under the new administration.) The best, current source that I am aware of is

    Andrew’s arguments in the linked post reflect a lot of the logic in the policy.

    Until the plans are finalized, a FOIA request should work to get this kind of data (although, as a government researcher, I would argue that Andrew is correct and that the data should have been turned over on request).

  7. Chris J says:

    Ethics is not the same as professionalism. For purposes of clarity and precision, professional quality should be defined in detail. Other professions avoid conflating ethics, work of professional quality, and professional qualifications. For example, from the American Academy of Actuaries website:
    “The U.S. actuarial profession has established three kinds of standards.
    1. The Code of Professional Conduct. The Code of Professional Conduct identifies actuaries’ ethical responsibilities to the public, to clients and employers, and to the actuarial profession.
    2. Actuarial Standards of Practice. The ASOPs provide a framework for performing professional assignments and offer guidance on recommended practices, documentation, and disclosure.
    3. Qualification Standards. The Qualification Standards specify requirements for basic education, experience, and continuing education that must be met by actuaries issuing statements of actuarial opinion (SAOs).
    The three sets of standards are interrelated, as is reflected in the Code of Professional Conduct. The Code’s purpose, stated in its introduction, is “to require actuaries to adhere to the high standards of conduct, practice, and qualifications of the actuarial profession, thereby supporting the actuarial profession in fulfilling its responsibility to the public.” The Code specifically requires actuaries to comply with applicable ASOPs and Qualification Standards.”
    The above system provides necessary structure and better addresses the dichotomy of “unethical behavior vs. being a bad guy,” as well as the question of which researchers on a project are “qualified guys,” which is another occasional topic on this blog.
    It boils down to 3 things, not 1:
    Be ethical.
    Be professional.
    Be qualified.

    • Curious says:

      I wonder if qualification is predictive of anything besides being allowed to do the things one who is qualified is allowed to do. Credentialing often seems to be little more than a revenue stream and a barrier to entry for those lacking the funds.

    • Dzhaughn says:

      +1. I see the researchers in Andrew’s story as definitely unhelpful, but not unethical. I hope, a decade hence, it will be unethical, as they will have committed to share data.

    • I personally think this kind of stuff is a crock that is primarily self-serving. There are similar things for example with professional engineers. But then, in the last two weeks the primary and secondary spillways at the Oroville dam failed at something like 5 to 10% of supposedly rated capacity. Everyone points fingers, eventually it will be deemed that appropriate standards of practice were met by the original engineers. The people who received warnings about these spillways a decade ago will be exonerated of any wrongdoing or nonprofessionalism, or some scapegoat will be found, usually someone already retired or about to retire. etc.

      I suspect the main purpose of professional standards is to have something to point to so you can say “look, I did what I was supposed to” when crap hits the fan and things fall apart. It also deflects accusations of unethical behavior by narrowly defining “ethics,” confusing matters so that two people can be talking about different things and still use the term ethics (i.e., “ethics” as constructed by someone who actually thinks about the general philosophy of ethical behavior vs. “ethics” as constructed by some document written by professionals in some field).

      I’m not saying there’s no place for any of this, I just believe that the world better resembles game theory than some Platonic ideal of the good and the true.

      • Also note that my personal guess as to what happened is that in the 1960s some engineer was told to design this thing assuming XYZ, and they either were too ignorant to realize that some standardized formula was not going to cut it, or they realized that it couldn’t be done in the way they were asked to do it and put up a fuss, at which point they were removed from the project and a different engineer was found who was sufficiently ignorant to do the work.

        You don’t build an emergency spillway that looks like this if your design capacity is a 16-foot-high torrent of water flowing over the top of it down onto all that soil and broken rock below (and yes, that is supposedly the design capacity!). This thing even has a jog in the bottom that is supposed to drain the water south towards one end of the weir. Water flowed off that splash pad onto the soil and ate away at the base of the weir at less than 1.5 ft of depth over the weir.

        Is it “unethical” to design such a thing? My feeling is yes, even in 1960 with slide rules, someone should have said “this doesn’t seem right” and failing to do that would be unethical because it amounts to claiming more expertise than you really have. But my guess is you’ll find somewhere where you can weasel out of that by pointing at some kind of document.

        • Martha (Smith) says:

          +1 to first paragraph.

          But it gets iffy after that — e.g., “even in 1960 with slide rules, someone should have said ‘this doesn’t seem right’ and failing to do that would be unethical because it amounts to claiming more expertise than you really have.”

          My thinking:

          1. Someone couldn’t say “this doesn’t seem right” unless “this” indeed does not seem right to them.

          2. Failing to see that something doesn’t seem right isn’t the same as claiming more expertise than one really has.

          I have a similar concern with Andrew’s point “(b) there was no good reason for the researcher to think that his statistician had particular expertise in this sort of analysis”. But was there any good reason to believe that the statistician didn’t have particular expertise in this sort of analysis?

          (But possibly I don’t understand the point(s) that Andrew and Daniel are trying to make here — so possibly a clarification from them might help?)

          • Andrew says:


            Regarding my point (b): I did give some specific reasons how I thought the analysis could be improved. My reasoning was clear, and it was the statistician’s job to either evaluate my suggestions, or say he was not competent to evaluate them, or to say he was too busy to take a look.

          • So I should clarify some background here. I have a PhD in Civil Engineering, and so I know something about the background that CEs should have. I don’t do hydraulic design, and I’m not a licensed Engineer.

            Here is the picture of the setting:

            At the top of the flume that is dramatically spilling water, you see a concrete structure that’s a gateway for the main spillway. Directly to the left of that concrete gateway is a big concrete wall that is a weir allowing uncontrolled overflow in an emergency. This wall sits there by gravity, with no anchors whatsoever. Directly below it we see a hillside with dirt, trees, and whatnot. Now ask yourself: could you imagine a 16-foot-high stream of water spilling over that long concrete weir and gushing down this hillside without taking the hillside with it? Reportedly, those 16 feet are in fact the design capacity.

            Consider that 16 feet high is twice the height of the ceiling in a typical room. Consider that the flow we’re talking about is probably 6 to 10 times as much as is pouring out of the flume in the picture (volumetric rate). Consider that CEs certainly could calculate the velocity that such a flow would pour over the weir, it’s a well known equation and has been forever. Consider that the scaling laws for erosive power are easy to derive using simple dimensional analysis and the power scales super-linearly with velocity.

            Consider that this guy Ron Stork, who is not an engineer, was able to look at this and say “gee, I don’t think so” and, critically, *he was absolutely right*, while the engineers patted him on the head and said the equivalent of “no worries… let us big boys handle these equations.”

            It’s critical that engineers have some kind of “gut feeling” or intuition for these things because it helps them detect when they’ve made mistakes. You should know if you’re claiming that you could pour water down a hillside at something like 100 times the erosive power that it could actually handle.

            A person who designs weirs for a living should be able to look at a hillside full of dirt and broken rock and say, “You know what? I’m not going to spill a 16-foot-thick torrent of water at multiple meters per second down this stuff here.” And if they had any question on the matter, they should have had a fire truck roll out to the site and spray the hillside with a fire hose. And if they couldn’t do that, they should have at least used a garden hose. We’re talking about endangering hundreds of thousands of people and destroying a dam that supplies about 1/3 of the total water supply to CA. You can’t be that far off and call it an “honest mistake.”

            So, if you had the competence required to be an undergraduate student in CE, it shouldn’t have seemed right. And even if you can claim something about “in 1960 standard practice was all wrong… and so it would have seemed fine” in 2007 or so when they received a warning about all this at the licensing renewal for the dam, someone should have looked at the requirements and said “you know what, this report was absolutely right, we can’t spill all that water down this hillside.”

            So, ethically speaking, I feel like no one should practice CE if they couldn’t look at this situation and get a report in 2007 from an environmental group and say “gee, we need to do a serious analysis here” and the same goes for someone designing this in 1961 or whatever. This was done all wrong, and someone should have known it the way that a bicyclist should know not to get on a bike with no brakes and start pedaling down a hill.

            • The broad-crested weir calculation is well known and has been forever; I’m sure it’s in every hydraulics textbook.


              Volumetric flow rate per unit length of weir:
              q = H V = C H^(3/2)

              so V ~ H^(1/2).

              Kinetic energy per unit mass = V^2/2; the mass delivered per unit time is proportional to V, so the power delivery rate is proportional to V * V^2/2 ~ H^(3/2).

              This thing eroded significantly at 1.5 ft over the weir but was designed for 16 ft, so they expected it to handle roughly (16/1.5)^(3/2) ≈ 35 times the erosive power.

              How much erosive power would this rock withstand? What about the shear forces (another, similar calculation)? Hard to calculate, but easy to do experiments on: just use a fire hose and see. If anyone had gone out there with a fire hose, calculated the required velocity, and washed that rock away, you’d have decided: no, this isn’t going to work.

              All the professional and ethical apparatus didn’t prevent them from ignoring all of this.
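              In case anyone wants to check that arithmetic, here’s a quick sketch in Python. The heads are the numbers quoted above; the discharge coefficient C cancels out of the ratio, so its value doesn’t matter.

```python
# Rough scaling check for the broad-crested weir argument above.
# Per unit length of weir: q = C * H**1.5, so velocity V ~ H**0.5,
# and the erosive power delivered per unit area ~ V**3 ~ H**1.5.
# The discharge coefficient C cancels out of the ratio below.

H_design = 16.0   # ft, the reported design head over the emergency weir
H_erosion = 1.5   # ft, the head at which significant erosion occurred

power_ratio = (H_design / H_erosion) ** 1.5
print(f"erosive power at design head vs. observed: {power_ratio:.0f}x")
# prints: erosive power at design head vs. observed: 35x
```

              In other words, the design head implies roughly 35 times the erosive power of the flow that already tore up the hillside.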

            • Martha (Smith) says:

              OK — the additional information substantiates your claim.

            • Martha and Chris and others,

              My feeling on the ethical and professional responsibilities documents is that they are a good reminder, like it’s a good reminder to have your doctor tell you not to smoke and to exercise regularly every time you visit. But that doesn’t mean that all we need is a doctor to tell us once a year and we’ll all quit smoking or start exercising. So, they’re nice to have, it’s great when an engineer looks at a problem and thinks “I bet this will work fine” and then looks at the poster on their wall that says something about professional ethics, gives a big sigh, and pulls out the handbook and does the calculation…

              But if you want to actually prevent catastrophes like Fukushima, or what almost happened at the Oroville dam this week, you really need independent third party audits. And, it would help if there were prizes offered for proof of bugs in designs, kind of like bug bounties in software. You want someone to have good motivation to dig into the designs and really try hard to find dangerous situations. Which is why I said “I just believe that the world better resembles game theory than some Platonic ideal of the good and the true”

              Critically, the role of auditor needs to command a lot of respect from the engineers who do design. They should be thinking “Please, please, please find any bugs that might be in my calculations,” and the auditors themselves should do this full time, with their livelihood depending on it; it shouldn’t be something you do once in a blue moon where your real motivation is to promote your day job on the other side of the audit desk.

              I personally think we should dispense with all but the very basics of licensing in engineering (i.e., maybe have a requirement for a particular core set of classes from an accredited university and one 4-hour test, or 3 four-hour tests for someone who doesn’t have the classes). As it is, lots of people who have worked in and around engineering for years and years are kept out of designing railings for 3-foot-high back-yard retaining walls because they haven’t been through 4 years of college education. That’s really just cushy competition-reduction for the licensed engineers. Rely more on audits and less on jumping through hoops; the hoops didn’t prevent Fukushima or Oroville or the 1972 Teton Dam failure.


              • Rahul says:

                Licensed Engineers / PE’s are a big scam. From my experience they are no better than non-PE’s if you adjust for a similar university / IQ cohort.

                The big reason behind the PE exam to me seems to be to control the supply of engineers and thus prop up wages.

              • Keith O'Rourke says:

                I certainly agree with the need for audits but it does seem very hard to get across to some.

                One of my earliest experiences of an audit of my work (when I was 15 or 16) was by the Fuller Brush sales manager, on one of my sales calls (they knew the house I was calling on and hid in the living room). I was neither overly surprised nor upset — how else were they to know that I had the sales pitch right?

                Even when the audits are going to be internal.

                When I was working with one of the Cochrane Collaboration’s senior statisticians, they were discussing the possibility of hiring a full-time statistician to work centrally with the Collaboration. Now, I had just reviewed about 40 randomly selected Cochrane meta-analyses and found what seemed like serious statistical errors in about 50% of them. So I suggested that as a primary role for the new hire (which would include contacting the authors to help sort these out). They thought it was a terrible idea, stating that no one would want such a boring job. My guess is they wanted the new person to help them advance their methodological work ;-)

              • Rahul: yep, I agree that the propping up wages part is a big component. In CE typically what you have is a bunch of people who may or may not have PEs doing the actual work, and then one guy with a stamp who is supposed to be responsible for all the work… the fact that he or she has to stamp it with their name is supposed to mean that they take things seriously and don’t let shoddy work get through. But in fact, it’s the other guys who do the work, and economic pressures can easily overcome things. “If you won’t stamp this, we’ll fire you and get someone else who will do the work” kind of stuff. Since the vast majority of everyday stuff is just that… everyday, there’s little risk in stamping things you don’t understand if you have some trust that the people who work with you know what they’re doing, after a while it’s just a rubber stamping process with an occasional “hey, explain this to me, I’m not sure this makes sense”

                The part where this really *hurts* is when the psychological puffery of having a stamp and being a big shot makes you not take a “Ron Stork,” concerned citizen and ecology crusader, seriously. And that’s the kind of stuff that usually leads to actual disaster. At Fukushima they seem to have just ignored the possibility of a tsunami the size of what they saw, even though some people did point out geological evidence that such tsunamis had occurred in the past.

                Overconfidence is a big big problem in engineering, and my guess is the whole certification and stamping procedure actually increases this problem, not decreases it.

              • The meltdown in the mortgage market is a similar kind of engineering failure, one that actuarial or certified-financial-analyst certifications and professional standards and so forth certainly didn’t prevent. In some sense, they may even have contributed to it, again because of overconfidence.

              • Rahul says:


                I’m not a PE, and I was once told that if I designed something, they had a PE to sign off on it for regulatory reasons. I asked why not have the PE himself design it, got an uncomfortable laugh, and was told that they’d never build anything that he had designed. True story.

              • Rahul: that does not even BEGIN to surprise me, and it confirms my basic feeling about professional licensing and so forth after similar experiences.

                I’ve even worked with PEs, structural engineers, whom I actually WOULD trust to design things, but what they knew was how to perform particular algorithms by hand. Once someone with a PE came to me because his client was asking him to calculate the stiffness of an assembly consisting of one smaller pipe inside another pipe as if it were the stiffness of a single pipe with a thicker wall. Well, he knew that didn’t sound right, but he couldn’t explain why. I had to write up an email talking about how, without shear transfer between the pipes, one pipe would slide inside the other during bending, so the forces required to bend the assembly would be closer to just the forces calculated for the stiffer of the two pipes; the inner pipe would contribute a force that was more or less negligible for practical purposes.

                This was a *good* and trustworthy structural engineer, but he was a craftsman more than a scientist. He knew what to do, not what the science was behind it, at least in this case.
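                To make the pipe example concrete, here’s a tiny Python sketch. All the dimensions are hypothetical, made up purely for illustration; the point is only the relative sizes. The second moment of area of an annulus in bending is I = pi/64 * (D_outer^4 - D_inner^4).

```python
from math import pi

def annulus_I(d_outer, d_inner):
    """Second moment of area (bending) of a circular annulus, units^4."""
    return pi / 64.0 * (d_outer**4 - d_inner**4)

# Hypothetical dimensions in inches, for illustration only.
I_outer = annulus_I(10.0, 9.0)  # outer pipe: 10" OD, 0.5" wall
I_inner = annulus_I(8.0, 7.0)   # loose inner pipe: 8" OD, 0.5" wall

# Upper bound with no shear transfer: even if both pipes were forced
# to the same curvature, their stiffnesses would merely add.
I_both = I_outer + I_inner

# The client's model: one pipe from 7" ID to 10" OD, which counts the
# 0.5" radial air gap between the pipes as solid material.
I_thick_wall = annulus_I(10.0, 7.0)

print(round(I_outer), round(I_both), round(I_thick_wall))
# prints: 169 252 373
```

                With these made-up numbers the “one thick-walled pipe” model gives I ≈ 373 in^4, versus at most ≈ 252 in^4 if both pipes somehow bent together, and closer to ≈ 169 in^4 if the loose inner pipe barely engages, which was roughly the situation I described above.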

                The point is, licensure and ethics and self-policing don’t have the right game-theoretic properties. They don’t produce the results that are their intended goal.

    • shravan says:

      In certain disciplines, like medicine, you cannot practice without a qualification. Not so in statistics, and I think it will stay that way. I would replace qualifications with sufficient knowledge. But to the main point: I think that we should all start releasing data with the paper. Why is this proving so hard to do? I just put up my data and code on my home page. I know a few others who do this, but almost nobody does it. My experience has been that linguists are more willing to release data; a few psychologists have been totally willing to release data, but the majority either refused or just didn’t reply. From this I infer that psychologists may be more insecure about their data, or maybe they are afraid of being shown to be wrong. I always thought linguists were the worst when it came to sticking to your theory no matter what, but maybe that is because I have no formal training in psych.

  8. Here’s an interesting way of considering the ethical. In Fear and Trembling, Kierkegaard writes that the ethical demands disclosure, whereas the aesthetic rewards hiddenness. The ethical is universal, by definition—and thus it favors bringing things into the open. I am taking this out of context—and there are exceptions and qualifications—but the basic principle makes sense.

    Given this framework, it is ethical to publish one’s data, because this brings the research out in the open and allows others to view and scrutinize it in full.

    But one can be a decent person (not a “bad guy”) and refuse to release data, perhaps on aesthetic grounds. (“It’s been peer-reviewed and published, and we see no reason to mess things up.”)

    How do people weigh the ethical against the aesthetic? What might encourage them to publish their data more often?

    Perhaps what’s needed is greater recognition that (a) flaws, once uncovered and analyzed, can lead to greater understanding–so the more we know about them, the better; and (b) questions have a longer shelf life than answers, and should be honored accordingly. (The latter is not always true, of course, but it needs more recognition.)

  9. Eric Rasmusen says:

    I’d like to suggest something I haven’t seen before: make authors give the data to the referees.

    I’m reviewing a paper right now where I’m somewhat dubious about the ability of the authors to do good statistical work, and I also wonder how reliable their data is, though I don’t suspect them of any dishonesty. If I had the data, I’d look at some histograms and correlations and I’d be able to make a much more reliable report.

  10. Chris J says:

    Daniel and others,
    Some of my key points in the above comment are being missed. I detect great resistance to establishing standards which somehow coexists with tremendous interest in – “why can’t everybody just do what is best?”. Standards can be tailored – not too tight and not too loose. The comparison of statisticians to actuaries is much closer than any other profession, with similar training and tools and many similar projects. The primary difference is that the actuaries generally work directly for the business and in certain industries and not so much in academia. Actuaries do not just work in insurance.
    If responsible statisticians can agree that data should always be shared, then that would be codified in a professional standard. If certain exceptions are considered reasonable and the researcher will not share the data, an explanation for that posture would be need to be included with the original paper. The beauty of the standards is that the researcher is not forced into a box; he or she just needs to explain why they did not follow a certain standard. The result of having standards is a general agreement on responsible professional behavior that avoids the common post-publication misunderstandings and defensiveness.
    There is a better chance of getting together with other statisticians and reaching broad agreement as a matter of principle than there is when one researcher is reviewing the results of another researcher. Obviously some people are more defensive than others.
    Professional standards could also address the requirements to be considered a qualified peer reviewer, which addresses the concerns in comments above regarding audits as well as the current function of retractions. Too many retractions suggest the system is failing upstream.

    • The problem with this way of thinking, as I see it, is that it produces yet another lever in the game-theory version of the world, one that would allow special interests to game the system.

      The extreme version of this story would be you start out with a statement about how the default is to publish all data… and then over time you start accumulating exceptions, and after a while they vote to make non-publication the default with a few exceptions where publication is required, and in the meantime they vote to apply sanctions and remove the licenses or certifications from statisticians who impugn other statisticians, no one is allowed to review articles without a guild-stamp, and you get a law passed that says you can't call yourself a 'statistician' in the state of California unless you've got certain degrees from certain universities, and you have taken certain tests… and pretty soon anyone who takes an average has to pay royalties to the royal society of statistical consultants…

      I mean, it sounds outrageous, but it's basically the way that all kinds of guilds worked back in the day: all those bakers' guilds and stonemasons' guilds and whatnot. The big problem isn't that a good standard is bad, it's that all the knobs and buttons and levers are an enormous "attack surface" for hackers.

        • Another way to put it is that it replaces thought and logic with "argument from authority": "I was just following standards" or "I don't have to listen to you because you don't have certification X" and so forth.

        • shravan says:

          Following up on Daniel’s general point about qualification, note that there are plenty of statisticians with PhDs who peddle serious misinterpretations of basic statistical concepts. Just being qualified did not get them anywhere close to the basic facts.

          • Chris J says:

            The Ethical Guidelines for Statistical Practice already separate the three concepts of "an obligation to work in a professional, competent, and ethical manner". Those guidelines include the responsibility to share data under "Responsibilities to other Statisticians or Statistics Practitioners".
            But the responsibility to share the data would also rightly fall under "Responsibilities to Science/Public/Funder/Client", except that category addresses the need to guard data confidentiality, not dissemination. So there is rightly a tension between data sharing and data confidentiality which may need to be addressed in greater detail, given this is currently such a big societal issue and concern.

            Data sharing is arguably a responsibility to science, not just peers.


            • Yes, I think the argument "the public funded it, the public deserves the raw data in a very easily accessible form" trumps almost everything almost everywhere. In fact, I'd argue that grants should be structured in such a way that there's an initial payout, and then a requirement to archive the data in a usable form before the final payout.

              The only real issue is with releasing confidential data on individuals (such as individually identifiable health-care or income data etc).

              Of course, if you get private funding, and want to publish a paper on private data, this doesn’t apply, but I gotta guess that’s a really small percentage of the overall publications.

            • Martha (Smith) says:

              I don’t see how this addresses Shravan’s point.

              • Chris J says:

                You’re right. It doesn’t. I felt my comment needed to be down thread to fit in with the flow of the discussion, but is directed more to Shravan’s earlier two comments about sharing the data. Andrew’s post distinguishes between unethical behavior and an unethical person. But if we can define professional behavior first, as completely as possible, then “ethics” becomes more the intuitive concept we are used to – acting in good faith in all matters. For example, checking work carefully to avoid mistakes is definitely “professional”. Making occasional mistakes is not unprofessional or unethical. It happens. Making many mistakes is unprofessional. Doing nothing about it to correct your ways as the mistakes pile up is unethical.

                Is sharing the data an issue of “professionalism”? I would argue – yes, because the best way to act in good faith as a scientist is to explore and seek valid results, which is enhanced if others have the ability to review your results at the highest level of detail possible. Others may disagree, or maybe disagreed in the past.

              • Martha (Smith) says:

                Thanks for the clarification and elaboration. I’d add: Professional is a subset of ethical, but ethical includes more than professional. And perhaps being ethical requires occasional review of what is considered professional, and adjusting (adding, deleting, or modifying) what is considered professional in the light of new developments.

        • Keith O'Rourke says:

          An example of we don’t have to listen you was the Cochrane Collaboration’s Statistical Methods Group banning me from their email list for making overly provocative comments.

          I think it's just a general process of group thinking leading to tribalizing, cliquing, and excluding those who do not think like us.

          Societies with legal status and standards have extra teeth when they are in this stage of groupthink.
