Research benefits of feminism

Unlike that famous bank teller, I’m not “active in the feminist movement,” but I’ve always considered myself a feminist, ever since I heard the term (I don’t know when that was, maybe when I was 10 or so?). It’s no big deal; it probably just comes from having 2 big sisters and growing up during the 1970s.

And most of the time this attitude is pretty much irrelevant to my professional life. It comes up every now and then when interpreting research claims (see here, for example) in which the male perspective is taken as the baseline. And when I teach I try to avoid overuse of stereotypically male-interest topics such as sports.

And my feminism has made me somewhat immune to simplistic gender-essentialist ideas such as those expressed in various papers that make use of schoolyard evolutionary biology [see definition below], which we’ve discussed over the years on this blog.

But it doesn’t affect my approach to partial pooling in hierarchical models, or my approach to inference from non-random samples, or the ways in which I monitor convergence for Hamiltonian Monte Carlo, or my models for voting, etc etc etc. Most of my research, even in political science, is basically “orthogonal” to feminism. Even studies that could have some sort of feminist interpretation—for example, my analysis with Yair of differences in attitudes toward abortion, or our estimate of geographic variation in the gender gap—don’t have any feminist content at all, at least not that I notice.

Recently, though, I had a research project where a feminist perspective made (a bit of) a difference. It was from my paper with Christian Hennig on going beyond objectivity and subjectivity in statistical thinking.

It came up near the beginning of the paper. We start off by discussing the usual dichotomy in statistics between objective and subjective approaches:

Statistical discourse on objectivity and subjectivity is at an impasse. Ideally these concepts would be part of a consideration of the role of different sorts of information and assumptions in statistical analysis, but instead they often seem to be used in restrictive and misleading ways.

One problem is that the terms “objective” and “subjective” are loaded with so many associations and are often used in a mixed descriptive/normative way. Scientists whose methods are branded as subjective have the awkward choice of either saying, No, we are really objective, or else embracing the subjective label and turning it into a principle. From the other direction, scientists who use methods labeled as objective often seem so intent on eliminating subjectivity from their analyses that they end up censoring themselves. This happens, for example, when researchers rely on p-values but refuse to recognize that their analyses are contingent on data (as discussed by Simmons, Nelson, and Simonsohn, 2011, and Gelman and Loken, 2014). More generally, misguided concerns about subjectivity can lead researchers to avoid incorporating relevant and available information into their analyses.

And then we say this:

A perhaps helpful analogy is to gender roles in social interactions. To get respect, women often need to choose between claiming stereotypically-male behaviors or affirming, or “taking back,” feminine roles. At the same time, men can find it difficult to step outside the restrictions implied by traditional masculinity. Rather than point and label, it can be better in such situations to identify the positive aspects of each sex role and then go from there. Similarly, good science contains both subjective and objective elements, and we think it would be best to understand how these perspectives can complement each other.

I suspect that, to many readers, that paragraph won’t fit in at all. But to me it makes a lot of sense. Conventional labels, whether of objectivity and subjectivity, or of masculine and feminine, can be a trap. The labels are not empty; they reflect real differences (being a feminist is all about understanding, not denying, the real differences that exist on average between the sexes—along with recognizing that averages are just that, and don’t represent all cases), but people can also get stuck in these boxes, or get stuck trying to rearrange these boxes. So, to me, a feminist attitude gave me a useful perspective on how to think about the important topic of objectivity and subjectivity in science and statistics. (And it’s a topic with real applications; see for example this paper which discusses how we use model checking to incorporate both subjective and objective elements into a Bayesian analysis in toxicology.)

Just to be clear: I’m not claiming that feminism is purely a good thing for a researcher, or even that it’s purely good for my research. There may well be important work that I’m missing, or misunderstanding, because of my political biases. I think everyone must have such blind spots, but that doesn’t excuse me from the blind spots that I have.

At some level, in this post I’m making the unremarkable point that each of us has a political perspective which informs our research in positive and negative ways. The reason that this particular example of the feminist statistician is interesting is that it’s my impression that feminism, like religion, is generally viewed as an anti-scientific stance. I think some of this attitude comes from feminists themselves, some of whom are skeptical of science in that it is a generally male-dominated institution that is in part used to continue male dominance of society; and it also comes from people such as Larry Summers, who might say that reality has an anti-feminist bias.

Feminism, like religion, can be competitive with science or it can be collaborative. See, for example, the blog of Echidne for a collaborative approach. To the extent that feminism represents a set of tenets that are opposed to reality, it could get in the way of scientific thinking, in the same way that religion would get in the way of scientific thinking if, for example, you tried to apply faith-healing principles to medical research. If you’re serious about science, though, I think of feminism (or, I imagine, Christianity, for example) as a framework rather than a theory—that is, as a way of interpreting the world, not as a set of positive statements. This is in the same way that I earlier wrote that racism is a framework, not a theory. Not all frameworks are equal; my point here is just that, if we’re used to thinking of feminism, or religion, as anti-scientific, it can be useful to consider ways in which these perspectives can help one’s scientific work.

P.S. It would also be fair to say that I talk the talk but don’t walk the walk: a glance at my list of published papers or the stan-dev list reveals that most of my collaborators are male. I don’t know what to say about this—it could be interpreted as evidence that I’m not a real feminist because I’m not committed enough to equality between the sexes in my own professional life, or as evidence of the emptiness of feminism: like a Christian Scientist who talks tough but then goes to the doctor when he gets sick, I’m a feminist who, when given the choice of how to spend my hard-earned research dollars, generally hires men. I don’t think I’m under any obligation to explain myself at all on this one, but to the extent I do, I guess I’d say that there are more men than women working in computational statistics right now, that I hire the people who seem best for the job, and these people often happen to be male—a set of observations, or opinions, that can be interpreted in any number of ways.

P.P.S. As promised, here’s my definition of “schoolyard evolutionary biology”: It’s the idea that, because of evolution, all people are equivalent to all other people, except that all boys are different from all girls. It’s the attitude I remember from the grade school playground, in which any attribute of a person, whether it be how you walked or how you laughed or even how you held your arms when you were asked to look at your fingernails (really), was gender-typed. It’s gender and race essentialism. And combining it with what Kahneman and Tversky called “the law of small numbers” (the attitude that any underlying pattern should reproduce in any small sample) has led to endless chasing of noise in data analyses. In short, if you believe this sort of essentialism, you can find it just about anywhere you look.
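
To see concretely how the law of small numbers leads to noise-chasing, here’s a toy simulation (in Python; the numbers are made up for illustration and don’t come from any real study): draw lots of tiny samples with no true difference between the groups, and watch how often a “notable” gap shows up anyway.

```python
# Toy simulation of the "law of small numbers": many small studies,
# no true difference between the groups, yet big apparent gaps are common.
import numpy as np

rng = np.random.default_rng(0)
n_studies, n_per_group = 1000, 10

# No true difference: both groups are drawn from the same distribution.
boys = rng.normal(0, 1, size=(n_studies, n_per_group))
girls = rng.normal(0, 1, size=(n_studies, n_per_group))

gaps = boys.mean(axis=1) - girls.mean(axis=1)

# Fraction of tiny studies showing a gap of at least half a standard deviation:
print(f"{np.mean(np.abs(gaps) > 0.5):.0%} of studies show |gap| > 0.5 sd")
# With 10 per group, roughly a quarter of the studies do, by chance alone.
```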

P.P.P.S. And, just to clarify further, of course there are lots of systematic differences between boys and girls, and between men and women, that are not directly sex-linked. To be a feminist is not to deny these differences; rather, placing these differences within a larger context is part of what feminism is about.

On deck this week

Mon: Research benefits of feminism

Tues: Using statistics to make the world a better place?

Wed: Trajectories of Achievement Within Race/Ethnicity: “Catching Up” in Achievement Across Time

Thurs: Common sense and statistics

Fri: I’m sure that my anti-Polya attitude is completely unfair

Sat: The anti-Woodstein

Sun: Sometimes you’re so subtle they don’t get the joke

It’s Too Hard to Publish Criticisms and Obtain Data for Replication

Peter Swan writes:

The problem you allude to in the above reference and in your other papers on ethics is a broad and serious one. I and my students have attempted to replicate a number of top articles in the major finance journals. Either they cannot be replicated due to missing data, or what might appear to be relatively minor improvements in methodology may remove or sometimes reverse the findings. Almost invariably, the journal is reluctant to publish a comment. Due to the introduction of a new journal, Critical Finance Review, by Ivo Welch, http://cfr.ivo-welch.info/, that insists on the provision of data/code and encourages the original authors to further comment, this poor outlook is improving in the finance discipline.

See for example: Gavin S. Smith and Peter L. Swan, Do concentrated institutional investors really reduce executive compensation whilst raising incentives?, CFR 3-1, 49-83.

and the response:

Jay C. Hartzell and Laura T. Starks, Institutional Investors and Executive Compensation Redux: A Comment on “Do Concentrated Institutional Investors Really Reduce Executive Compensation Whilst Raising Incentives”, CFR 3-1, 85-97.

The model of criticism and rebuttal is fine, but it’s disturbing that the people criticized never seem to back down and say they were wrong. I don’t think people should always admit they’re wrong, because sometimes they’re not. But everybody makes mistakes, while the rate of admission of mistakes seems suspiciously low!

Sokal: “science is not merely a bag of clever tricks . . . Rather, the natural sciences are nothing more or less than one particular application — albeit an unusually successful one — of a more general rationalist worldview”

Alan Sokal writes:

We know perfectly well that our politicians (or at least some of them) lie to us; we take it for granted; we are inured to it. And that may be precisely the problem. Perhaps we have become so inured to political lies — so hard-headedly cynical — that we have lost our ability to become appropriately outraged. We have lost our ability to call a spade a spade, a lie a lie, a fraud a fraud. Instead we call it “spin”.

We have now travelled a long way from “science,” understood narrowly as physics, chemistry, biology and the like. But the whole point is that any such narrow definition of science is misguided. We live in a single real world; the administrative divisions used for convenience in our universities do not in fact correspond to any natural philosophical boundaries. It makes no sense to use one set of standards of evidence in physics, chemistry and biology, and then suddenly relax your standards when it comes to medicine, religion or politics. Lest this sound to you like a scientist’s imperialism, I want to stress that it is exactly the contrary. . . .

The bottom line is that science is not merely a bag of clever tricks that turn out to be useful in investigating some arcane questions about the inanimate and biological worlds. Rather, the natural sciences are nothing more or less than one particular application — albeit an unusually successful one — of a more general rationalist worldview, centered on the modest insistence that empirical claims must be substantiated by empirical evidence. [emphasis added]

Well put.

Sokal continues:

Conversely, the philosophical lessons learned from four centuries of work in the natural sciences can be of real value — if properly understood — in other domains of human life. Of course, I am not suggesting that historians or policy-makers should use exactly the same methods as physicists — that would be absurd. But neither do biologists use precisely the same methods as physicists; nor, for that matter, do biochemists use the same methods as ecologists, or solid-state physicists as elementary-particle physicists. The detailed methods of inquiry must of course be adapted to the subject matter at hand. What remains unchanged in all areas of life, however, is the underlying philosophy: namely, to constrain our theories as strongly as possible by empirical evidence, and to modify or reject those theories that fail to conform to the evidence. That is what I mean by the scientific worldview.

And then he discusses criticism:

The affirmative side of science, consisting of its well-verified claims about the physical and biological world, may be what first springs to mind when people think about “science”; but it is the critical and skeptical side of science that is the most profound, and the most intellectually subversive. The scientific worldview inevitably comes into conflict with all non-scientific modes of thought that make purportedly factual claims about the world.

He might also discuss certain pseudo-scientific modes of thought: methods that follow the outward forms of science but lack the element of criticism. I’m thinking in particular of what we’ve been calling “Psychological Science”-style work, in which a researcher manages to find a statistically significant p-value and uses this to make an affirmative claim about the world. This is not so much a “non-scientific mode of thought” as a scientific mode of thought that doesn’t work.

The Use of Sampling Weights in Bayesian Hierarchical Models for Small Area Estimation

All this discussion of plagiarism is leaving a bad taste in my mouth (or, I guess I should say, a bad feeling in my fingers, given that I’m expressing all this on the keyboard) so I wanted to close off the workweek with something more interesting.

I happened to come across the above-titled paper by Cici Chen, Thomas Lumley, and Jon Wakefield. I haven’t had a chance to read it in detail but these people know what they’re doing and so it seems like it could be worth a look.

And here’s some related work:

- On the applied side, this paper with Yair in the American Journal of Political Science from 2013, on deep interactions with MRP. In particular, take a look at the section on Accounting for Survey Weights on p. 765. I wonder how this relates to the Chen, Lumley, and Wakefield approach.

- From the more theoretical direction, this paper with Yajuan and Natesh, to appear in Bayesian Analysis, on Bayesian nonparametric weighted sampling inference.

I think we, as a field, are getting closer on this problem but we’re still not quite there.

Defense by escalation

Basbøll has another post regarding some copying-without-attribution by the somewhat-famous academic entertainer Slavoj Zizek. In his post, Basbøll links to theologian and professor Adam Kotsko (cool: who knew there were still theologians out and about in academia?) who defends Zizek, in part on the grounds that Zizek’s critics were being too harsh. Kotsko writes of “another set of trumped-up complaints about [Zizek’s] supposed ‘self-plagiarism.’ Apparently he needs to write things fresh every single time he publishes, or else he’s doing something akin to the most serious ethical violation in academia.”

Now, my goal here is not to pick a fight with Kotsko, someone whom I’ve only heard of through Basbøll’s blog. But I do want to disagree with that above-quoted statement, because I see it as symptomatic of a more general problem in how people sometimes respond to criticism.

Here’s what I wrote on Basbøll’s blog:

I followed the link, and Kotsko characterizes plagiarism as “the most serious ethical violation in academia.”

I disagree. I think that making shit up or falsifying data is a more serious ethical violation. The two violations can go together: for example, Karl Weick, by plagiarizing the Alps story, was then free to make shit up, which he couldn’t have done so easily had he cited his source.

Beyond this, Kotsko seems to me to be doing something that I find very annoying: when someone defends himself, or a friend, from some criticism by first exaggerating the criticism (and perhaps characterizing it as an “accusation”) and then denying the larger claim.

Kotsko did this by taking concerns about Zizek’s misleading lack of attribution of quotes, and interpreting this as the position, “Apparently he needs to write things fresh every single time he publishes, or else he’s doing something akin to the most serious ethical violation in academia.” Nobody’s saying this (or, at least, you’re not saying this!) but now Kotsko can argue against it. (Remember, with plagiarism, it’s not about the copying, it’s about the attribution.)

I felt a similar frustration when, after Eric Loken and I raised methodological problems with that fecundity-and-clothing-color study, the authors of that study (Alec Beall and Jessica Tracy) responded that we “imply that [they] likely analyzed [their] results in all kinds of different ways before selecting the one analysis that confirmed [their] hypothesis.” They then defend themselves against this claim, or implication, that we never made.

Beall and Tracy’s response was more understandable to me than Kotsko’s—after all, Eric and I were criticizing their research and saying (correctly, I believe) that their experiments are dead on arrival, essentially too noisy for them to ever learn anything interesting about the research questions they’re studying, so that’s bad news even though we were not accusing them of ethical violations. In contrast, Kotsko is a third party so it seems particularly ridiculous to see him first exaggerating the criticisms of Zizek, and then shooting down the exaggeration.

But in any case, perhaps it would be useful to give a name to this sort of behavior (or maybe it already has a name)?

P.S. Before we slam these postmodernists too much, let me remind you of this excellent quote from Frederic Jameson. He speaks truth.

Message to Booleans: It’s an additive world, we just live in it

Boolean models (“it’s either A or (B and C)”) seem to be the natural way that we think, but additive models (“10 points if you have A, 3 points if you have B, 2 points if you have C”) seem to describe reality better—at least, the aspects of reality that I study in my research.

Additive models come naturally to political scientists and economists, including myself. We think of your political attitudes, for example, as a sum of various influences (as for example in this paper with Yair). Similarly for economists’ models of decisions in terms of latent continuous variables. But my impression is that “civilians” think in a much more Boolean way, with different factors being switches that flip you to one state or another.
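
Here’s a minimal sketch of the contrast (the factors and weights are invented, just to make the two styles concrete):

```python
# A Boolean model treats factors as switches; an additive model treats them
# as a sum of influences. Factors a, b, c and all weights are invented.

def boolean_model(a: bool, b: bool, c: bool) -> bool:
    # "It's either A, or (B and C)": a switch pattern flips the outcome.
    return a or (b and c)

def additive_model(a: bool, b: bool, c: bool) -> int:
    # "10 points if you have A, 3 points if B, 2 points if C."
    return 10 * a + 3 * b + 2 * c

# The Boolean model jumps between two states; the additive model moves by degrees.
print(boolean_model(False, True, False))   # False: B alone does nothing
print(additive_model(False, True, False))  # 3: B alone is a partial push
```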

And, when it comes to statistics, applied people often think Booleanly or lexicographically (“Use rule A, with rule B as a tiebreaker”) and, I think, make mistakes as a result. For example, consider the attitude that seems to be prevalent in econometrics, that you want to use an unbiased estimate and then reduce variance only as a secondary concern. As we’ve discussed elsewhere in this space, such an attitude is incoherent because in practice the only way to get an unbiased estimate is to pool data and thus assume the effect of interest does not vary. Also recall the foolish survey researchers who don’t want to let go of the fiction that they are doing theoretically-justified inference using the principles of probability sampling.
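
And here’s a toy simulation of the bias-variance point (again in Python, with invented numbers and an ad hoc shrinkage factor; a real hierarchical model would estimate the amount of shrinkage from the data): per-group sample means are unbiased but noisy, and shrinking them toward the grand mean, though biased for each group, gives lower total error when effects vary.

```python
# Toy bias-variance illustration: unbiased per-group estimates vs. (biased)
# shrunken estimates. All numbers here are invented for illustration.
import numpy as np

rng = np.random.default_rng(1)
n_groups, n_per_group = 50, 5
true_effects = rng.normal(0, 0.5, n_groups)            # effects vary by group
data = true_effects[:, None] + rng.normal(0, 1, (n_groups, n_per_group))

unbiased = data.mean(axis=1)                           # unbiased but noisy
grand_mean = unbiased.mean()

# Shrink each group's estimate halfway toward the grand mean (ad hoc factor).
shrunken = grand_mean + 0.5 * (unbiased - grand_mean)

mse = lambda est: np.mean((est - true_effects) ** 2)
print(f"MSE, unbiased estimates: {mse(unbiased):.3f}")  # about 0.20
print(f"MSE, shrunken estimates: {mse(shrunken):.3f}")  # typically smaller
```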

We live in an additive world that our minds try to model Booleanly. Sort of like how Mandelbrot pointed out that mountains and trees are fractals, but we like to think of them as triangles, circles, and sticks (as exemplified so clearly in children’s drawings).

Hey, I just wrote my April Fool’s post!

(scheduled to appear in a few months, of course).

I think you’ll like it. Or hate it. Depending on who you are.

Wegman Frey Hauser Weick Fischer Dr. Anil Potti Stapel comes clean

Thomas Leeper points me to Diederik Stapel’s memoir, “Faking Science: A True Story of Academic Fraud,” translated by Nick Brown and available online for free download.

I’d like to see a preregistered replication on this one

Under the heading, “Results too good to be true,” Lee Sechrest points me to this discussion by “Neuroskeptic” of a discussion by psychology researcher Greg Francis of a published (and publicized) claim by biologists Brian Dias and Kerry Ressler that “Parental olfactory experience [in mice] influences behavior and neural structure in subsequent generations.” That’s a pretty big and surprising claim, and Dias and Ressler support it with some data: p=0.043, p=0.003, p=0.020, p=0.005, etc.

Francis’s key ground for suspicion is that Dias and Ressler in their paper present 10 successful (statistically significant) results in a row, and, given the effect sizes they estimated, it would be unlikely to see such an unbroken string of successes.
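
The arithmetic behind that suspicion is simple. Here’s a sketch (the power values below are placeholders I made up; they are not Francis’s actual estimates): multiply the estimated power of each test to get the chance of an unbroken string of successes.

```python
# Sketch of the "excess success" calculation: even if every experiment had
# decent power, a run of 10 significant results out of 10 is improbable.
# These power values are made-up placeholders, not Francis's estimates.
import numpy as np

powers = np.array([0.8, 0.7, 0.6, 0.8, 0.5, 0.7, 0.6, 0.8, 0.7, 0.6])

# Probability that all 10 independent tests come out statistically significant:
p_all = np.prod(powers)
print(f"P(10 of 10 significant) = {p_all:.3f}")  # about 0.019
```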

Dias and Ressler replied that they did actually report negative results:

While we wish that all our behavioral, neuroanatomical, and epigenetic data were successful and statistically significant, one only need look at the Supporting Information in the article to see that data generated for all four figures in the Supporting Information did not yield significant results. We do not believe that nonsignificant data support our theoretical claims as suggested.

Francis followed up:

The non-significant effects reported by Dias & Ressler were not characterised by them as being “unsuccessful” but were either integrated into their theoretical ideas or were deemed irrelevant (some were controls that helped them make other arguments). Of course scientists have to change theories to match data, but if the data are noisy then this practice means the theory chases noise (and the findings show excess success relative to the theory).

I would also like to say that it’s probably not a good idea for Dias and Ressler to wish that all their data are “successful and statistically significant.” With small samples and small effects, this just isn’t gonna happen—indeed, it shouldn’t happen. Variation implies that not every small experiment will be statistically significant (or even in the desired direction), and I think it’s a mistake to define “success” in this way.

Do a large preregistered replication

In any case, the solution here seems pretty clear to me. Do a large preregistered replication. This is obvious but it’s not clear that it’s really being done. For example, in a news article from 2013, Virginia Hughes describes the research in question as “tantalizing,” writes that “other researchers seem convinced . . . neuroscientists, too, are enthusiastic about what these results might mean for understanding the brain,” and talks about further research (“A good next step in resolving these pesky mechanistic questions would be to use chromatography to see whether odorant molecules like acetophenone actually get into the animals’ bloodstream . . . First, though, Dias and Ressler are working on another behavioral experiment. . . . Scientists, I have to assume, will be furiously working on what that something is for many decades to come . . .”), but I see no mention of any plan for a preregistered replication.

I’d like to see a clean, pure, large, preregistered replication such as Nosek, Spies, and Motyl did in their “50 shades of gray” paper. I recognize that this costs time, effort, and money. Still, replication in a biological study of mice seems so much easier than replication in political science or economics, and it would resolve a lot of statistical issues.