NSF Cultivating Cultures for Ethical STEM

The National Science Foundation is funding this program:

NSF Cultivating Cultures for Ethical STEM (CCE STEM [science, technology, engineering, and mathematics])

Funding: The maximum amount for 5-year awards is $600,000 (including indirect costs) and the maximum amount for 3-year awards is $400,000 (including indirect costs). The average award is $275,000.

Deadline: Internal Notice of Intent due 02/07/18
Final Proposal due 04/17/18

Limit: One application per institution

Summary: Cultivating Cultures for Ethical STEM (CCE STEM) funds research projects that identify (1) factors that are effective in the formation of ethical STEM researchers and (2) approaches to developing those factors in all the fields of science and engineering that NSF supports. CCE STEM solicits proposals for research that explores what constitutes responsible conduct of research (RCR), and which cultural and institutional contexts promote ethical STEM research and practice and why.

Successful proposals typically have a comparative dimension, either between or within institutional settings that differ along these or other factors, and they specify plans for developing interventions that promote the effectiveness of identified factors.

CCE STEM research projects will use basic research to produce knowledge about what constitutes or promotes responsible or irresponsible conduct of research, and how to best instill students with this knowledge. In some cases, projects will include the development of interventions to ensure responsible research conduct.

I don’t know what to think about this. On one hand, I think ethics in science is important. On the other hand, it’s not clear to me how to do $275,000 worth of research on this project. On the other hand—hmmm, I guess I should say, back on the first hand—I guess it should be possible to do some useful qualitative research. After all, I think a lot about ethics and I write a bit about the topic, but I haven’t really studied it systematically and I don’t really know how to. So it makes sense for someone to figure this out.

There’s also this:

What practices contribute to the establishment and maintenance of ethical cultures and how can these practices be transferred, extended to, and integrated into other research and learning settings?

I’m thinking maybe the Food and Brand Lab at Cornell University could apply for some of this funding. At this point, they must know a lot about what practices contribute to the establishment and maintenance of ethical cultures and how these practices can be transferred, extended to, and integrated into other research and learning settings. You could say they’re the true experts in the field, especially since that Evilicious guy has left academia.

Or maybe an ethics supergroup could be convened. (Here’s a related list, of which my favorite is the Traveling Wilburys. Sorry, Dan!)

In all seriousness, I really don’t know how to think about this sort of thing. I hope NSF gets some interesting proposals.

19 thoughts on “NSF Cultivating Cultures for Ethical STEM”

  1. This program has been around a few years now, and it replaced a previous program on research ethics education. You can browse the list of awards here: https://nsf.gov/awardsearch/advancedSearchResult?PIId=&PIFirstName=&PILastName=&PIOrganization=&PIState=&PIZip=&PICountry=&ProgOrganization=&ProgEleCode=019Y&BooleanElement=All&ProgRefCode=&BooleanRef=All&Program=&ProgOfficer=&Keyword=&AwardNumberOperator=&AwardAmount=&AwardInstrument=&ActiveAwards=true&ExpiredAwards=true&OriginalAwardDateOperator=&StartDateOperator=&ExpDateOperator=

    Several of the awards are for workshops. Most of the larger awards are for multi-year, multi-investigator research projects. If the budgets are like most of the budgets I saw when I worked at NSF, most of the money will go to pay for researcher time. In terms of researcher hours per “unit of knowledge,” qualitative research can easily be more expensive than quantitative research. There’s still no better way to analyze interview transcripts than having humans read them four or five times.

  2. A while back Andrew had a post saying:

    For a paradigmatic scientist, truth and tolerance and open inquiry are liberal virtues; while for a paradigmatic soldier, duty and honor and patriotism are conservative virtues.

    http://statmodeling.stat.columbia.edu/2018/01/04/politically-extreme-yet-vital-nation/

    One thing that struck me about this is that many of the problems in research seem to be primarily due to incompetence (usually, in turn, assumed to be due to poor training) rather than any ethical failing (e.g., fraud). Given the well-known liberal bias of the current research community, I wonder if more concern about honor and duty is warranted?

    If you want to call yourself a scientist, you shouldn’t be able to get away with being ignorant about basic things seen in almost every paper in your field, like “what is a p-value.” It isn’t really unethical for you to be ignorant, but it is dishonorable.

    It is your duty as someone who claims to be a member of the scientific community to have some competence in whatever techniques you are using. The critiques and correct explanations I am talking about have been around for multiple generations at this point, all in the public literature, so it isn’t that hard to find the info.

    So, are honor and duty a component of ethics? It seems like something different to me that may have been overlooked.

    • Anoeuoid:

      This could be. But it’s also my impression that many of the problems in science arise from an excess of personal loyalty, when scientists support mistaken ideas as a way of sticking up for their friends. They’re showing duty to the platoon, as it were, rather than to science as a whole. To put it another way, knowledge of p-values is not a necessary requirement for being a good scientist. But you do need someone on your team who understands p-values, if they come up in your project. And that person should demonstrate loyalty to the larger cause, not to the platoon. To put it yet another way, the soldier is, or can be, fighting against another group, so loyalty to one’s own group is crucial. In science, there’s no other side, so a different, more abstract sense of loyalty is needed.

      • First, I do agree that loyalty to the team is a thing, although not considered to be of utmost importance in research. I also agree that this is inappropriate when there is no other side.

        However, I disagree with your other point. The PIs on these teams, the people reviewing the papers, the grad students writing them, etc., are reading and interpreting p-values all the time. They don’t have someone hovering over their shoulder to remind them of things like “a p-value isn’t the probability the null hypothesis is true,” so I don’t see any way around the need to be competent in that area.
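        A minimal simulation of the misreading mentioned above (my illustration, not the commenter’s): when the null hypothesis is true, p-values are uniformly distributed, so a p-value is not the probability that the null is true — here the null is true in every single simulated experiment, yet about 5% of p-values still come out “significant.”

        ```python
        import math
        import random

        random.seed(1)

        def p_value_two_sided_z(xs):
            """Two-sided z-test p-value for H0: mean = 0, known sd = 1."""
            z = sum(xs) / math.sqrt(len(xs))
            # Normal CDF via the error function: Phi(x) = 0.5 * (1 + erf(x / sqrt(2)))
            return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

        # Simulate many experiments where the null is TRUE (the mean really is 0).
        pvals = [p_value_two_sided_z([random.gauss(0, 1) for _ in range(20)])
                 for _ in range(2000)]

        # Under a true null, p-values are uniform on [0, 1]: roughly 5% fall
        # below 0.05, even though the null holds in every simulation.
        frac_below_05 = sum(p < 0.05 for p in pvals) / len(pvals)
        print(frac_below_05)
        ```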

    • Anoeuoid:

      If one uses Peirce’s concept of ethics – “how one should deliberately act to best obtain what you value” then someone who values science who does not act to acquire necessary understanding to advance science is being unethical.

      But that requires initial and continuous recognition of the deficit – the incompetence – which I believe supports your argument of poor training rather than any ethical failing – but they are highly conflated.

      For instance, in Andrew’s scenario where one acquires the necessary understanding by engaging someone on one’s team, not recognizing that person’s deficits would support staying loyal to them. So if one has reason to suspect such a deficit, that is clearly unethical; if one has little to no reason to suspect it, it is not.

      Additionally, engaging the right someone on your team when you have little to no grasp of what they really need to provide can be tricky http://statmodeling.stat.columbia.edu/2018/01/23/better-enable-others-avoid-misled-trying-learn-observations-promise-not-transparent-open-sincere-honest/

    • Before jumping to honor and duty, it’s worth acknowledging the collective action problem most fields face. I work in a field that relies heavily on p-values, using methods that we all know are wanting. If someone submitted a paper to my journal using the most up-to-date hierarchical/Bayesian approaches that avoided p-values, the way Andrew and many readers of this blog would like to see, I would be delighted. But I would also have a hard time finding a reviewer who is qualified in both the topic area and the statistical methods. If I solved that problem (maybe one topic expert and one statistician), I suspect most readers would find it hard to interpret the unfamiliar approach. Authors would have to do a lot of hand-holding, and be able to make a strong case that the method’s improvement in inference is worth the challenges it presents to readers.

      It takes time for fields to improve their standards. At some point change might be slow enough to be dishonorable. But most fields seem to get enough stuff right to make honorable progress, even as they fall short on some practices that seem obviously sub-optimal to every individual, but that no individual has the power to change without more buy-in from the community.

      Btw, change is even slower when there isn’t a clear alternative to the sub-optimal. I always enjoy the 50-comment threads where all of you experts disagree heatedly on exactly how p-values and confidence intervals should be interpreted and taught. But the lack of consensus makes change in my field even harder.

      • If I solved that problem (maybe one topic expert and one statistician), I suspect most readers would find it hard to interpret the unfamiliar approach. Authors would have to do a lot of hand-holding, and be able to make a strong case that the method’s improvement in inference is worth the challenges it presents to readers.

        The current situation is much worse though. They already are finding it hard to interpret the approach, only they don’t realize it! They actually require more hand-holding to use NHST* due to the need for overcoming pre-existing misunderstandings. Also, once they do understand it, they won’t want to use it anymore.

        But most fields seem to get enough stuff right to make honorable progress

        The insidious thing about all this is that the flaw is in the method of assessing progress itself. To actually check progress, you need to measure the predictive skill of the models the field produces on new data. The NHST-heavy fields pretty much never do this, in which case we have no idea whether there has been progress (as opposed to Lakatos’ “intellectual pollution”).
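        A toy sketch of what “measure the predictive skill on new data” could mean in practice (my construction, not the commenter’s): fit a model on one sample, then score its predictions on a freshly drawn sample against a trivial baseline, rather than only reporting an in-sample p-value.

        ```python
        import random

        random.seed(2)

        def simulate(n):
            """Toy data: y depends weakly on x, plus unit-variance noise."""
            xs = [random.uniform(0, 1) for _ in range(n)]
            ys = [0.3 * x + random.gauss(0, 1) for x in xs]
            return xs, ys

        def fit_ols(xs, ys):
            """Least-squares intercept and slope for y = a + b*x."""
            n = len(xs)
            mx, my = sum(xs) / n, sum(ys) / n
            sxx = sum((x - mx) ** 2 for x in xs)
            sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            b = sxy / sxx
            return my - b * mx, b

        def mse(xs, ys, a, b):
            """Mean squared prediction error of y = a + b*x."""
            return sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys)) / len(xs)

        # Fit on one sample, then check predictive skill on NEW data,
        # compared with the trivial "predict the training mean" baseline.
        x_tr, y_tr = simulate(200)
        x_new, y_new = simulate(200)
        a, b = fit_ols(x_tr, y_tr)
        mean_tr = sum(y_tr) / len(y_tr)

        m_model = mse(x_new, y_new, a, b)      # model's out-of-sample error
        m_base = mse(x_new, y_new, mean_tr, 0) # baseline's out-of-sample error
        print(m_model, m_base)
        ```

        The point of the comparison is that an in-sample p-value can look fine while the model has no out-of-sample predictive advantage over the baseline at all.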

        I always enjoy the 50-comment threads where all of you experts disagree heatedly on exactly how p-values and confidence intervals should be interpreted and taught.

        So teach about the controversy or just don’t use them… The worst possible thing is to make something up that leads to the maximum # of “important” papers published and teach/do that instead.

        *NHST meaning to test a default nil null hypothesis

      • The lack of consensus on the meaning or usefulness of p-values should be an accelerator for change away from p-values, not something that slows progress.

        Would you wait around while 30 different experts in automotive braking systems debated the merits of a particular known-to-be-broken antilock brake system, or would you dump your car and get one that had flawless brakes?

        • > lack of consensus on the meaning or usefulness of p values should be an accelerator for change away from P values
          Yes, but the lack of consensus on alternatives decelerates it.

      • > lack of consensus makes change in my field even harder.
        An agnotologist’s dream – they don’t need to convince researchers that the consensus of statisticians was faulty or arrived at for ulterior reasons, but simply point out that no consensus exists.

        (If interested see http://statmodeling.stat.columbia.edu/2018/01/23/better-enable-others-avoid-misled-trying-learn-observations-promise-not-transparent-open-sincere-honest/ )

  3. I’m thinking maybe the Food and Brand Lab at Cornell University could apply for some of this funding. At this point, they must know a lot about what practices contribute to the establishment and maintenance of ethical cultures and how these practices can be transferred, extended to, and integrated into other research and learning settings. You could say they’re the true experts in the field

    I often have trouble parsing your sarcasm from your sincerity. I can imagine someone (but maybe not you) making this argument sincerely, like those who hire hackers as ‘white hats’ to find weaknesses in their systems. Would NSF program officers be sympathetic to that argument? What about those in the “other research and learning settings” to which practices would be transferred?

    I’m also not sure how it would depend on the details of their history. Is the white hat argument more or less persuasive for some members of your ethics supergroup than others? How about co-authors who got caught up with them?

  4. I don’t think NSF considers nutrition one of its funding areas ;-).

    As someone mentioned, that program has been around for a while. A lot of the awards in it are for the design and evaluation of ethics education for engineers, etc.
