Bias against women in academia

I’m not the best one to write about this: to the extent that there’s bias in favor of men, I’ve been a beneficiary. Also I’m not familiar with the research on the topic. I know there are some statistical difficulties in setting up these causal questions, comparable to the difficulties arising in using “hedonic regression” to estimate the so-called risk premium or value of a life. (See this post from 2004 for my discussion of these challenges, and what I called the “inevitable inconsistency” of these sorts of estimates.)

All challenges aside, there are disparities, for whatever reasons, between men and women in the workforce, and it’s a topic worth studying, especially given how the roles of men and women have changed in recent decades. I’m sure that bias against women comes in many ways, not just salary. It is possible for two people to have the same salary but to have unequal working conditions.

I have no reason to think that academia is worse than other sectors when it comes to how women are treated. But academia has an (imperfect) tradition of openness, so it makes sense to study questions like inequality within academia, where the topic may be easier to examine.

Tian Zheng pointed me to this page by Virginia Valian, author of the 1988 book, “Why So Slow? The Advancement of Women,” about women in certain professional careers. The page has transcripts from some interviews from 2006, so perhaps some readers can point us to research in this area since then? Also there’s the related topic of bias against ethnic minorities.

136 Comments

  1. Derek B says:

    I’m not so sure it’s even clear that there is a strong bias against women anymore, at least in the hiring process. There’s at least one very recent study indicating that, if anything, the bias has shifted against men: http://www.pnas.org/content/112/17/5360.abstract

    Of course there are far more issues than this, but I think academia has been rather progressive here, consistent with its increasingly strong leftward leanings.

    • Clare says:

      Having looked pretty carefully at this study, I would really hesitate to conclude it provides evidence that there is no longer a bias against women. The premise seems to be that, because women are more likely to be ranked first for an assistant professorship than equally qualified men, discrimination must no longer exist. This ignores the fact that these “equally qualified” women are much less likely to exist in the first place, due to systematic discrimination in the years leading up to applying for assistant professor jobs! Female graduate students are asked to take on administrative duties and babysit faculty’s children. In their research, they often suffer negative consequences for (and are reluctant to partake in) behaviors that male graduate students are rewarded for, e.g. pushing for first authorship when it is not offered to them, asking pointed questions in seminars, or honestly answering that they don’t know when a tough question is asked as they present their work. On top of all that, female graduate students get to experience male graduate students explaining machine learning to them from day one of their PhD program, because those men subconsciously assume the female students probably don’t understand what it is, and they get to watch male graduate students be praised constantly for their intelligence when they did just as well or worse in the same classes. If they don’t decide to leave academia after graduating, they are often less able to pursue postdocs that would leave them well positioned for tenure-track jobs, because social norms make it less acceptable for women to ask their partners to uproot their lives on a regular basis for their career than it is for men to do the same.

      The study also ignores the challenges faced by women once they are invited out for an interview. They are without a doubt held to different standards. And even once they get a job, women find themselves struggling to participate in the boys-club culture that could otherwise help jump start their careers.

      I could go on, but I’ll stop myself because I’m sure your comment is well intentioned and I understand why people take this paper at face value. That being said, it is saddening to hear that people are willing to take this study and dismiss the voices of women experiencing discrimination in academia – I hope they will reconsider.

      • Rahul says:

        Your comment about “babysitting faculty’s children” reminded me of when I was a young grad student and we had one critical piece of equipment that kept breaking down.

        My adviser almost always telephoned me to ask me to come in at night, over weekends, etc. to fix it. And he was pretty frank in stating that, since the others had kids or were female, he thought it not right to call them in the middle of the night.

        I didn’t really resent the others for it, but just saying that if you start classifying job-assignment in a fine-grained way (like Clare was doing) females did end up getting a lot of perks too, whether they wanted them or not.

  2. Jessica H. says:

    As a female professor working in a male-dominated field, I can assure you that not a week goes by where I don’t observe some form of bias against women. Sometimes it affects me, sometimes my students, colleagues, etc. But it’s consistent and inescapable. As a result I am never surprised when I hear that female students are less likely to enter male-dominated fields like CS, and then drop out of them at alarming rates. Media attention to certain high-level decisions, such as hiring and promotion, may have resulted in more awareness of possible biases when these types of discussions occur (at least in my department). But there are countless more subtle ways that bias pervades academic work. Just a few: women’s contributions at meetings being overlooked or chalked up to a male who reiterated them (believe me, it happens all the time), known gender bias in course evaluations that are nonetheless used to evaluate faculty teaching, women being overlooked for their role in research contributions that also involved men (or where men did work later on, see e.g. the Bonferroni correction: https://en.wikipedia.org/wiki/Olive_Jean_Dunn), and so on. I haven’t checked them all to see if they seem valid, but someone at least made a start at collecting the recent studies in one place:
    http://blogs.lse.ac.uk/impactofsocialsciences/2016/03/08/gender-bias-in-academe-an-annotated-bibliography/
    Anyway, to think that this is a problem that somehow wouldn’t affect academia is a bit ludicrous to me.

    • stuff says:

      “Women’s contributions at meetings being overlooked or chalked up to a male who reiterated them (believe me, it happens all the time)” <- This happens too often to count

    • Keith O'Rourke says:

      My prior (based somewhat on experience) would be that it’s much worse in academia and much harder to deal with, as the players have large latitude regarding whom they work with, whom they pay attention to, and especially how they value the work of others.

      One I became aware of early in my career: there were three male research fellows and one female, and the male faculty/director would regularly play squash with just the males (and this is where the collaborations were often worked out). Here I could see both sides; many males are reluctant to play with females when physical contact, even if just accidental, might result (perhaps for good reasons http://www.cbc.ca/news/politics/trudeau-conservative-whip-1.3588407 ). Males and females often spar at my kickboxing club, but with some exceptions it is still expected that the males be _gentlemen_ in this (how does one punch a female sparring partner in a gentlemanly manner?).

      Historically there are likely many examples, e.g. Neyman-Scott and Fieller-Creasy. About 10 years ago, in a review of a paper on ratios, I noticed Creasy being unjustly dismissed, and with the help of the editor (a female) had that redressed.

      Now a meta-analysis of gender bias in academia?
      That would be a challenge!

    • Rahul says:

      This bit is intriguing: “known gender bias in course evaluations that are nonetheless used to evaluate faculty teaching”

      Are you saying we should *not* pay heed to course evaluations?

      • Ben Bolker says:

        Guessing that Keith is just pointing out that course evaluations are hugely problematic, for that and many other reasons: they’re age- as well as gender-biased, they measure popularity more than teaching effectiveness, they’re filled out by a biased subsample of the class, they’re gameable … In the current setup we rarely have many alternatives, but any sensible academic institution should at least be combining them with other assessments such as peer evaluation when judging faculty teaching …

        https://chroniclevitae.com/news/1011-student-evaluations-feared-loathed-and-not-going-anywhere

        http://blogs.berkeley.edu/2013/10/21/what-exactly-do-student-evaluations-measure/

        • Z says:

          You gave Keith credit for Jessica’s comment!

        • Rahul says:

          @Ben

          Absolutely, course evaluations have problems. And it would be foolish for any institution to use *only* course evaluations to judge faculty. But I think that’s a straw man: I don’t think any department relies on them alone.

          If I’m reading Jessica’s comment correctly, she was objecting to *any* use of course evals, even as one component of faculty evaluation.

          Sure course evals are flawed. But so are peer evaluations. Just ask Andrew about his peer evaluation by UC-Berkeley faculty.

        • Keith O'Rourke says:

          No Ben, that was just your perception that it was me, and perceptions can change and be manipulated, which might be key to how to study this question.

          Background reading Causal Effects of Perceived Immutable Characteristics http://www.mitpressjournals.org/doi/abs/10.1162/REST_a_00110#.Vz3W6Pkwhpg

          Excerpt: “[many] contend that it is inappropriate to conceptualize a person’s actual race, sex, or national origin as a treatment in an observational study … other scholars have explored the idea that perceptions of immutable characteristics, not the “actual” traits (to the extent that the latter are well defined), are what matter and that perceptions are manipulable.”

      • Anon says:

        I suppose gender-bias-corrected course evaluations should be used, but that requires a good (causal) model for the bias, and you still run into the difficulty that individuals will be assessed, in part, in terms of averages of _other_ people who share some of their characteristics.
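As a minimal sketch of what such a correction could look like (all data simulated here, and the single-offset model deliberately naive), one could estimate a group-level offset by regression and subtract it. The sketch also makes the difficulty above concrete: every individual receives the same adjustment, an average computed over other people who share the characteristic.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data (entirely hypothetical): 200 instructors, with a
# known -0.3 bias applied to women's scores on a 5-point scale.
n = 200
female = rng.integers(0, 2, size=n)          # 1 = female instructor
true_quality = rng.normal(4.0, 0.4, size=n)  # unobserved "true" teaching quality
scores = true_quality - 0.3 * female + rng.normal(0, 0.2, size=n)

# Estimate the group-level bias with a simple linear model:
#   score = mu + beta * female + noise
X = np.column_stack([np.ones(n), female])
(mu, beta), *_ = np.linalg.lstsq(X, scores, rcond=None)

# "Corrected" score: subtract the estimated group effect.
adjusted = scores - beta * female

print(round(beta, 2))  # should recover roughly -0.3
```

The catch is visible in the last step: every woman gets the same average correction, regardless of whether the bias applied to her individually.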

        • Rahul says:

          Exactly. And do we even have such a good model?

          In fact, how strong is the data-based evidence that there’s a significant gender bias in course evaluations? What’s the previous work on this? I’m not even sure how we would measure this sort of bias.

          • Tian Zheng says:

            This is precisely why I thought Andrew may want to take a look at the research in this important area that can use better causal inference.

          • Felix Thoemmes says:

            Hello Rahul,
            Here is a neat interactive graph that shows gender differences in teaching evaluations, across different disciplines, based on 14 million reviews.

            http://benschmidt.org/profGender/

            Try search terms “bossy” or “intelligent”.

          • Rahul, on your reading of my comment on teaching evaluations: I have been considering the (radical?) stance that we should not use course evaluations at all until we have a causal model (though, as Anon suggests, even then it is complicated). Currently in my department we discuss the biases that are suggested to affect women and minorities in teaching evaluations before we evaluate faculty, but with no model it is impossible to know how much of a margin of error we should attribute to the evaluation scores of women or minorities. I also tend to think that even when people are consciously aware of biases, we are pretty bad at treating a mean or median score as an interval (e.g., when comparing a female and a male faculty member with teaching scores of 4.1 and 4.4 out of 5, concluding that these might indicate equivalent teaching effectiveness given gender biases seems far less likely than concluding that the male faculty member is at least slightly better). But my own skepticism of student teaching evaluations is also related to their inconsistent relationship to other factors, namely how they don’t always correlate (or sometimes negatively correlate) with learning outcomes. There are at least a few papers that attempt to measure the biases in student evaluations (though I haven’t had time to read them yet):

            http://www.tandfonline.com/doi/abs/10.1080/00220671.1989.10885887
            http://sciedu.ca/journal/index.php/ijhe/article/viewFile/6418/4025
            http://link.springer.com/article/10.1007/s11162-011-9229-0#/page-1
            http://journals.cambridge.org/action/displayAbstract?fromPage=online&aid=8782935&fileId=S1049096500043407
            https://www.scienceopen.com/document?id=25ff22be-8a1b-4c97-9d88-084c8d98187a
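A quick back-of-the-envelope check of the interval point: taking the 4.1 vs. 4.4 comparison, and assuming (invented numbers) about 16 responses per instructor with a per-response standard deviation of roughly 0.8 on the 5-point scale, the gap is about one standard error of the difference.

```python
import math

# Hypothetical numbers: two instructors with mean scores 4.1 and 4.4
# out of 5, each based on about 16 responses with a per-response
# standard deviation of roughly 0.8.
n = 16
sd = 0.8
se_diff = math.sqrt(sd**2 / n + sd**2 / n)  # SE of the difference in means

gap = 4.4 - 4.1
print(round(se_diff, 2), round(gap / se_diff, 2))  # 0.28 1.06
```

So at these sample sizes a 0.3 gap is about one standard error: not the sort of difference that justifies concluding one teacher is better, with or without a bias adjustment.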

            • Oh and not to mention that most of these numbers are based on very small sample sizes anyway. We might as well be openly advocating belief in the law of small numbers if we are going to keep using them.

            • Rahul says:

              But there are biases in other things too. If we totally discard student evals, the question is, what *do* we use?

              Peer evaluations can be biased too. So can hiring committees. Do we have causal models for those biases so that we can compensate?

              I’m sure Andrew’s evaluation by his Berkeley peers was negatively correlated with whatever metric they were (or should have been) trying to maximize in a hiring decision.

              It would have been very easy to discard one metric (“student evals”) as crappy had we had clearly superior alternatives. Do we?

              In fact, in something like a hiring decision or a teacher evaluation is there even a gold standard for the ultimate objective that we can compare different appraisal strategies against?

              • At least with peer evaluations we have the benefit of being evaluated by someone who has themselves taught, and who understands the goals of teaching (that students learn things, but also goals at the departmental level). For instance, in my department many undergrads have no interest in theory and just want to learn how to code with as many different toolkits as possible. Should I redesign my course around teaching tools so that my course evals go up? My peer evaluators, on the other hand, have a better understanding of what my job is when I teach. Maybe even better would be an independent pedagogy unit in the university that evaluates teachers. I seem to remember something like this at the University of Michigan, where a person from the center for teaching and learning would come evaluate each student instructor and then talk to them about ways to improve.

            • Andrew says:

              We now have an interesting sub-thread on student evaluations of teaching. Let me just say that nowadays I ignore these evaluations—I don’t really learn anything from them beyond the feedback I get in the class itself—but in the past they’ve been very helpful to me, both as a student (yes, I personally have felt there was a positive correlation between average teaching evaluation and what I got out of the class) and as an instructor (I used to get really really low evaluations and that did motivate me to do better). So, yes I can see the problem with evaluating faculty based on teaching evaluations, but I have found these evaluations to provide valuable information. I’m not really sure what to conclude from all this.

              • Rahul says:

                Ironically, we are very willing to use survey data for so many of our theories and measurements in the social sciences; it’s a bit incongruous that we are skeptical of survey-revealed preferences only when they concern us and our teaching abilities.

              • mpledger says:

                @Rahul
                As statisticians we are always evaluating the quality of the estimates we get from whatever type of study we are doing, to see if they make sense — e.g. are there sampling and non-sampling issues that we are aware of that could lower our trust in the estimates we derive.

                That we look on the output of these kinds of surveys and see that the estimates are likely to be poor because of small sample sizes and other biases is exactly the kind of opinion that ought to be expected of us.

              • Elin says:

                I was on a college committee for 2 years that tried to revise the teaching evaluations, and it was the most frustrating experience of my 19 years at my institution. I just kept asking “What are we trying to measure?” and no one could really answer. In reality, Rate My Professor has pretty much correctly identified the right 2 questions for its summed rating (basically, clarity/quality of teaching and caring/what interactions are like) and one extra question of interest to students (easiness), and there, just as on our forms, the qualitative comments are helpful for contextualizing and also for highlighting some specific really terrible or really great thing.

              • Elin:

                The inability to answer the question “what are we trying to measure?” is pretty telling. Evaluations are not really primarily about improving teaching.

                My experience was that my teaching evaluations as a TA interacted strongly with the style of the professor for the course. For softball professors, my concept-oriented, learning-focused approach put students off. For hardball professors, the students came and wanted more.

                A tiny minority of students would actually come to office hours. Those who did usually left saying things like “thanks, that was so helpful.” But if I had 5 or 6 students who came to office hours at all, I’d get evaluations from 15 students saying that I was terribly unhelpful in office hours. Of those 15 students, not one had ever come to office hours even once! It probably reflected the fact that I was in my mid 30’s and they were 19 (i.e. they were intimidated by the age difference) more than anything else.

                A year or two later, the students who did actually come to the TA sessions or office hours would be really happy they had learned the concepts and would tell me so; they’d send me requests on LinkedIn, a few wanted to friend me on Facebook, etc. So my experience seems to fit what the above-linked article about teaching evaluations says (that they are anti-correlated with learning and later performance, except among the high-performing students).

                With all this basically well understood, why do we keep doing it? I think there are basically two issues, 1) inertia, and 2) no-one at decision making levels in universities *really* cares about the teaching (exaggeration), especially in departments like biology, chemistry, physics, mechanical engineering, civil engineering, electrical engineering etc where the main reason to have professors at all is to suck in the grant overhead on million dollar grants. An active civil engineering department could have $5-10M a year in grant funds coming in directly to the department with maybe $1M/yr coming in from undergrad tuition. Even one professor losing one grant due to spending extra time on teaching would be seen as a big financial loss. Also, there’s the overseas grad-students funded by foreign governments to think about.

                Teaching is all about lip service under those conditions.

                In departments that are primarily teaching oriented, where the number of students is higher and the grants are lower, especially in private schools where the tuition is high, the financial incentives are to softball the students through and keep them happy and cushy. The growth of the student loan industry and concomitant increases in tuition are well documented to have resulted in better dorm conditions, great athletics centers, improvement in meals and the existence of new lounges, Amazon delivery centers, wifi everywhere (like even in the middle of grassy quads), free premium cable TV… Under those conditions, keeping the students “happy” is valued highly. Student evals do actually evaluate that happiness factor. Perhaps we’re getting what really matters to the decision makers.

                Why did they not make an episode of The Wire about academia? I mean really.

              • Rahul says:

                @mpledger

                About small sample sizes: Doesn’t a typical Prof. in a Dept. like Physics or Chemistry teach approx. 100-200 students every semester?

                Even aggregating over just the pre-tenure period that ought to be a sample of 1000+ students. Perhaps even 2000+.

                Is that really a small sample? What’s the average sample size in a Soc. Sci. study?

              • Rahul – on small sample sizes:
                I’ve been an assistant professor for two years, and all of my course evals have consisted of fewer than 20 responses. In this past year, I taught undergrad and grad data visualization courses that are capped at about 40 people. Course evaluations are done online at Univ. of Washington and sent out around finals week, and not surprisingly response rates are pathetic (this last year I had eval sample sizes of 16 and 8, after sending about 3 reminder emails to students to do the evals). There are definitely some large lecture classes in departments like mine (I’m in an information school, but computer science is similar), but it is not that unusual for pre-tenure faculty to teach only 1 or 2 large courses (e.g., 100 people) before tenure, if at all, and then you add in the terrible response rates and you may still end up with pretty small sample sizes.

              • Rahul says:

                @Jessica

                Interesting. People criticize other studies for using small samples out of a potentially large population (for reasons of cost, resources, laziness whatever), but this seems unique in that the *population* itself is tiny?!

            • Christian Hennig says:

              The major mistake people make about teaching evaluations is treating the results as statistical estimates of some kind of “true” teaching quality. They are not. They are a mode of communication that has its clear problems but also some benefits.

              I have learned at least a bit from some of my evaluations; sometimes a number of students find an important issue with my teaching that I either wasn’t aware of or wouldn’t have thought important, and somehow they wouldn’t communicate this clearly to me on other occasions. However, it’s always my own decision what I take from them and what I leave. They shouldn’t be taken too seriously, and the results shouldn’t be used in “official” ways. But they should still be there, that’s what I think.

              • Rahul says:

                Isn’t that a general critique applying to most any surveys?

              • Christian Hennig says:

                Not sure. I didn’t think about surveys in general when writing this.

              • Rahul says:

                @Christian

                That’s exactly my point: We take survey results for granted and use them in other settings indiscriminately. A huge chunk of Soc. Sci. research is grounded on analyzing survey results.

                We single out teaching evaluations an exception & hate them vehemently just because we don’t like what they are saying about us.

              • Christian Hennig says:

                It’s still a different issue. There’s no sampling at all in teaching evaluations; it’s not about “estimating population quantities from samples,” whereas many other surveys are. The issue is not statistical sampling, variation, or bias; the issue is very different, namely how what the students say in the surveys relates to what one would generally call “quality of teaching.” (This could be controversial for other surveys as well, of course.)

                “We single out teaching evaluations an exception & hate them vehemently just because we don’t like what they are saying about us.”
                Yeah, I had this thought as well a few times. I’ve seen a number of critical remarks in such surveys that the lecturers wouldn’t accept, but secretly I’ve often suspected that they were justified (as were some critical remarks made about me).

              • Elin says:

                @Christian I think that is right, these are not measurement procedures we would use to do statistical estimates and not anything like we would do for a research survey. I mean, to start with, if we were doing that we would take a carefully created sample and then try to understand non response. Also we would have some idea of what they are trying to measure, then go through a serious question design process including validity testing.

                I too have gotten very helpful feedback from students about specific aspects of my courses. I think eliciting that kind of feedback is a good goal and it would be good to design instruments to do that. For example this week I asked my students to write letters to students next semester telling them about the class and how to succeed in it. I definitely got really interesting information about what they found hard.

                As information for other students about whether they should take the class/prof: honestly, Rate My Professor is not bad, but it’s not timely and has a very small number of posters. I can say from my experience hiring adjuncts that it is a pretty accurate predictor of problems.

                Information for your promotion and tenure file is really somewhat different from the other two. Not completely different (e.g., if you get legitimate feedback on problems and then the same exact comments year after year, or if students say that you don’t reply to email and don’t explain expectations clearly, those are both important to know), but there it should be complex and contextualized with your course materials and the grades you give: if you give all As, yeah, you are going to get high ratings and everyone knows that.

              • Rahul says:

                @Elin

                So with all the in-house expertise available, why can’t departments, at least the social science ones which churn out survey designs by the dozen every semester, design proper surveys to elicit what you would consider “good quality” information about their own teaching?

                Why this dichotomy of research survey = good-quality data vs. teaching evaluation = bad-quality data?

              • Curious says:

                I find the comments from Christian, Rahul, and Elin interesting. They raise a few questions in my mind:

                1. Why would we not want to understand the true population (i.e., class) distribution of responses rather than the biased set generated by a review of responses unadjusted for response biases?

                2. Why would anyone take an evaluative process such as this seriously, when the process clearly seems to be saying, “We don’t care what the actual distribution of responses is at some comparative level”?

                3. Why would anyone want evaluations that have not been properly adjusted for biases to be included in any decisions that might affect their career?

              • Martha (Smith) says:

                Christian and Elin make some good points. The discussion brings to mind advice I once heard from the folks who run the course-instructor surveys at my university:

                Before reading all the results, sort the surveys according to the overall rating the students give. Then read the other responses group-by-group. This gives you a better sense of what comments come from what groups of students. It can help you use the feedback more constructively.
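That sorting advice is mechanical enough to script; a sketch with invented survey records (rating plus free-text comment):

```python
from collections import defaultdict

# Hypothetical survey records: (overall rating, free-text comment).
surveys = [
    (5, "Clear lectures, great examples"),
    (2, "Too fast, hard exams"),
    (5, "Office hours were really helpful"),
    (3, "Fine, but grading felt strict"),
    (2, "Did not explain the homework"),
]

# Group the comments by overall rating, then read them group by group,
# so you can see which comments come from which groups of students.
by_rating = defaultdict(list)
for rating, comment in surveys:
    by_rating[rating].append(comment)

for rating in sorted(by_rating, reverse=True):
    print(rating, by_rating[rating])
```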

      • Greg Francis says:

        One of my favorite articles on this topic is Neath (1996). How to improve your teaching evaluations without improving your teaching.
        http://www.amsciepub.com/doi/abs/10.2466/pr0.1996.78.3c.1363

        “Tip 1: Be Male”

      • Yes, we should ignore course evaluations. Have you ever read course evaluations? They’re useless. There is even research on how course evaluations are ANTI-CORRELATED with success of the students in future courses. Teachers who force people to learn the material get shitty evals.

        From this summary article which links to the paper in question as well: http://www.npr.org/sections/ed/2014/09/26/345515451/student-course-evaluations-get-an-f

        “The paper compared the student evaluations of a particular professor to another measure of teacher quality: how those students performed in a subsequent course. In other words, if I have Dr. Muccio in Microeconomics I, what’s my grade next year in Macroeconomics II?

        Here’s what he found. The better the professors were, as measured by their students’ grades in later classes, the lower their ratings from students.”

        I’m not saying we shouldn’t evaluate teachers, but we shouldn’t do it based on voluntary student responses to a shitty questionnaire.

        • Rahul says:

          Well, if every yardstick is a shitty yardstick, what yardstick do we use?

          Apparently, if we ask students, the evaluations are anti-female biased. Meanwhile faculty hiring-committees are also biased against women.

          Perhaps we can discard all evaluations by male students and constitute all-female hiring-committees? Would that help any?

          • I think it’s perfectly possible to create decent teaching effectiveness evaluations, some of the suggestions by the author of that linked study make good sense. Looking at graduation GPA in-major and regressing it against information about which teachers that person had and which courses they took would be a start. Post-graduation followups, say 5 years out on a random subset of grads would help. Evaluation of anonymized course materials by professors at other universities would help.
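A sketch of the regression idea on simulated data (instructors, effects, and abilities all invented here): regress each student's in-major GPA on indicators for which instructor they had, so that instructors whose students later do better get larger estimated effects.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: 500 students, 10 instructors; each student's
# final in-major GPA depends on ability plus the effect of the
# instructor they happened to have for the intro course.
n_students, n_instructors = 500, 10
instructor = rng.integers(0, n_instructors, size=n_students)
instructor_effect = rng.normal(0, 0.15, size=n_instructors)
ability = rng.normal(3.0, 0.4, size=n_students)
gpa = ability + instructor_effect[instructor]

# Dummy-code instructors and regress GPA on them (no intercept, so
# each coefficient is that instructor's estimated mean outcome).
X = np.eye(n_instructors)[instructor]
coef, *_ = np.linalg.lstsq(X, gpa, rcond=None)

# Rank instructors by the "subsequent performance" yardstick,
# instead of end-of-term popularity.
ranking = np.argsort(-coef)
```

Of course, with real data one would have to worry about selection (who takes whose course) and add controls; this only illustrates the yardstick.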

            Gender bias in teaching evaluations is undoubtedly relevant, but teaching evals are hugely problematic even without the gender issues.

            • Martha (Smith) says:

              Rice University has a teaching award based on the vote of “alumni who graduated with four-year undergraduate degrees two years and five years previously.” (http://students.rice.edu/students/Teaching.asp) Perhaps only a small wealthy school like Rice could do this, but the idea seems like a good one. My own experience is that several times I have had a student tell me sometime after they took a course from me that they didn’t appreciate what I did at the time (e.g., taking off points for not explaining their reasoning), but they sure appreciated it later, when taking follow-up courses or on the job.

              • Keith O'Rourke says:

                Agree (that could get at something more important).

                Not sure what the end-of-term perspective of students, perhaps very anxious before the final exam, reflects.

          • Also, you have to understand that in part, the course eval system is serving its purpose well, it’s just that its purpose is to be a tool to wield power. Kind of like how one reason we have such baroque tax laws is that the government finds it convenient to ensure that everyone can be made into a criminal if needed (that’s how we finally nabbed Al Capone right?).

            So, if you see teaching evals as all about finding out who teaches well, then yeah, they’re shitty. But if you see them as extra leverage to force out people the department doesn’t like, to be used only when necessary, then they’re perfectly fine. If the department likes a given professor, it can easily ignore bad evals. But if it doesn’t, bad evals become one possible poker chip.

          • stuff says:

            That would not help, because women are biased against women as well.

            • Martha (Smith) says:

              A woman mathematician friend once remarked that she found that the good women students in her classes gave her high ratings, but the poor women students gave her poor ratings.

              • Rahul says:

                Presumably, the good male students in class also gave her good ratings?

              • Martha (Smith) says:

                Rahul:
                I don’t really know for sure, but I got the impression that the male students didn’t show a noticeable pattern.

              • Andrew says:

                Martha:

                Interesting. When I get my teaching evaluations, I am not told which student gave which ratings, so there’d be no way I could know if the male or female students were giving me different ratings.

              • Andrew: some places give the written comments just as xeroxes/scans of the forms; others might actually type them up. If you’ve been grading homework, you can get a good sense from the handwriting of things like whether the student is male or female, and for some students with distinctive handwriting you can guess pretty well which student it is.

                This contest for computerized classification of handwriting was won with a log-loss of .46 or so, which if I understand correctly means about 65% correct, and that’s with computer-vision issues in the pipeline.

                https://www.kaggle.com/c/icdar2013-gender-prediction-from-handwriting/leaderboard

                I also think this distinctiveness is more pronounced at the high-school to college undergrad level. Highly rounded letters with little circles for the dots of the i and so forth are pretty distinctive features of young female handwriting.

                p(female | rounded letters and circle/heart dots) is very large even if p(rounded letters and circle/heart dots | female) isn’t.

              • Martha (Smith) says:

                Andrew:
                I don’t really know how she could tell, but Daniel’s explanation had occurred to me also — I recall that (especially in small classes), I often recognized individual students’ handwriting; this would have been especially the case in the proof courses, which my friend presumably taught a lot of (as did I).

                (And then there was the student who always used purple ink, even on the course evaluations.)

              • Rahul says:

                @Daniel

                Just trying to interpret the “65% correct” metric: I’d be 50% correct with just random guessing, right? Assuming a 50-50 MF student ratio?
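                Rahul’s question has a rough answer under a simplifying assumption: if the classifier assigned a constant probability p to the true class on every example, its average log-loss would be -ln(p), so a log-loss of .46 corresponds to an implied per-example accuracy of exp(-0.46), closer to 63% than 65%; and yes, uninformed guessing on two balanced classes gives 50%, at a log-loss of ln 2 ≈ 0.69. A back-of-the-envelope sketch:

                ```python
                import math

                def implied_accuracy(log_loss):
                    """Implied per-example accuracy p = exp(-log_loss), under the
                    simplifying assumption of a constant-confidence classifier."""
                    return math.exp(-log_loss)

                print(round(implied_accuracy(0.46), 3))         # ~0.63, vs. 0.5 for a coin flip
                print(round(implied_accuracy(math.log(2)), 3))  # 0.5: uninformed guessing, 2 classes
                ```

                Real classifiers don’t have constant confidence, so this is only an approximate reading of a leaderboard score, not an exact accuracy figure.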

    • P says:

      women being overlooked for their role in research contributions that also involved men (or where men did related work later on; see e.g. the Bonferroni correction: https://en.wikipedia.org/wiki/Olive_Jean_Dunn)

      There’s nothing gender-specific about scientific pioneers being overlooked in favor of senior researchers or latecomers who are better at self-promotion. It’s so common that there’s a name for it: Stigler’s law of eponymy. And, surely, the vast majority of the victims of Stigler’s law are men (including Robert Merton, who seems to have been the first to formulate the law Stephen Stigler humorously named after himself). For every Rosalind Franklin, there are a thousand men who may not have got proper acknowledgement for their work. How many people talking about Franklin know who Raymond Gosling was, for example?

      I haven’t checked them all to see if they seem valid, but someone at least made a start at collecting the recent studies in one place:
      http://blogs.lse.ac.uk/impactofsocialsciences/2016/03/08/gender-bias-in-academe-an-annotated-bibliography/

      That’s a truly terrible list. It cherry-picks studies, no matter how awful, to promote the viewpoint that it’s terrible to be a woman in science. It carefully omits all studies finding the opposite, no matter how well conducted and convincing they are. For example, it cites studies finding discrimination against women in hiring and promotions, but disregards the studies that found the opposite (i.e., bias against men). As another example, it cites a study claiming that “women are underrepresented in fields whose practitioners believe that raw, innate talent is the main requirement for success because women are stereotyped as not possessing that talent.” Not cited is the reanalysis which concluded that “female representation among Ph.D. recipients is associated with the field’s mathematical content and that faculty beliefs about innate ability were irrelevant.”

      That recommended reading list is a good example of the phenomenon Alice Eagly recently discussed in this paper: “[A]dvocates sometimes misunderstand or even ignore scientific research in pursuit of their goals, especially when research pertains to controversial questions of social inequality.”

      The best, most balanced article on the causes of women’s underrepresentation in science is this one. It concludes that “although in the past, gender discrimination was an important cause of women’s underrepresentation in scientific academic careers, this claim has continued to be invoked after it has ceased being a valid cause of women’s underrepresentation in math-intensive fields. Consequently, current barriers to women’s full participation in mathematically intensive academic science fields are rooted in pre-college factors and the subsequent likelihood of majoring in these fields, and future research should focus on these barriers rather than misdirecting attention toward historical barriers that no longer account for women’s underrepresentation in academic science.”

      • Tian Zheng says:

        I loved the last quote when I first saw it.

        Also this one: http://www.sciencemag.org/careers/2016/04/complex-role-gender-faculty-hiring

        “Women in computer science Ph.D. programs operate in cultures that often fail to be inclusive, so when they get to the faculty hiring process, they have already been disadvantaged in their training. Using “unbiased” measures like productivity and prestige can make it look like the decisions are gender-blind, but gender has in fact played a role all through the training process and is therefore already baked in. For example, the authors found that for the assistant professors who started their positions after 2002, women were less productive than men. “The origin of this productivity gap seems unlikely to be related to inherent differences in talent or effort,” the authors write, “and may instead be related to differential access to resources and mentoring, greater rates of hostile work environments or sexual harassment, differences in self-perceptions, or other gender-correlated factors.”

    • Lauren says:

      It is good to have a collection of such studies, especially an apparently up-to-date one like that. I note that it does not contain the recent PNAS paper “National hiring experiments reveal 2:1 faculty preference for women on STEM tenure track” posted by Derek B above. Given how prominent an outlet that study appeared in, I wouldn’t think it’s just by chance that the study doesn’t appear on that page; it seems unlikely they just missed it. I wonder, therefore, whether the collection might have a tendency to exclude research that suggests bias against male academics. This would be concerning if that site is to be used as an authoritative and unbiased source on this important topic.

      • The reasonable thing to do would be to email the people who created this list and point them to this thread so they can add any studies they missed. Assuming I can find their contact info, I’ll do that. If they can’t/won’t edit the original list, well, there’s an opportunity for someone to create a resource that people are clearly interested in.

      • Martha (Smith) says:

        Has anyone read any of these papers carefully enough to be able to comment on whether or not they are high quality, or if they use some of the questionable practices that have so often been critiqued on this blog?

  3. Eric R says:

    This was in the NY Times a few months ago: a write-up of a study done by a Harvard Ph.D. student on gender bias in economics.

    http://www.nytimes.com/2016/01/10/upshot/when-teamwork-doesnt-work-for-women.html

  4. stuff says:

    This website lists a number of recent studies at the bottom of the page (some of the material is in Dutch, but the paper titles are in English): http://www.athenasangels.nl/athenas-wisdom.

    I’ll translate what it says below. I have no idea about the quality and content of the papers, though.

    ####
    Creativity is associated with maleness:
    Proudfoot, D., Kay, A. C., & Koval, C. Z. (2015). A gender bias in the attribution of creativity: Archival and experimental evidence for the perceived association between masculinity and creative thinking. Psychological Science.

    Differences in starting subsidies:
    Sege, R., Nykiel-Bub, L., & Selk, S. (2015). Sex differences in institutional support for junior biomedical researchers. JAMA, 314(11), 1175-1177.

    Veni grants, the leaking pipeline:
    Van der Lee, R. & Ellemers, N. (2015). Gender contributes to personal research funding success in the Netherlands. PNAS, 112(40), 12349-12353.

    resistance against research findings on gender bias:
    Moss-Racusin, C. A., Molenda, A. K., & Cramer, C. R. (2015). Can evidence impact attitudes? Public reactions to evidence of gender bias in STEM fields. Psychology of Women Quarterly, 39, 194-209.

    Handley, I. A., Brown, E. R., Moss-Racusin, C. A., & Smith, J. L. (2015). Quality of evidence revealing gender bias in science is in the eye of the beholder. PNAS.

    Men and women have roughly the same ambitions and capacities:
    Hyde, J. S. (2014). Gender similarities and differences. Annual Review of Psychology, 65, 373-398.

    The same CVs are judged differently for men and women:
    Moss-Racusin, C. A., Dovidio, J. F., Brescoll, V. L., Graham, M. J., & Handelsman, J. (2012). Science faculty’s subtle gender biases favour male students. PNAS, 109, 16474-16479.

    Women are seen as less talented:
    Leslie, S.-J., Cimpian, A., Meyer, M., & Freeland, E. (2015). Expectations of brilliance underlie gender distributions across academic disciplines. Science, 347, 262-265.

    People associate women with family, men with career:
    Greenwald, A. G., & Banaji, M. R. (1995). Implicit social cognition: attitudes, self-esteem, and stereotypes. Psychological Review, 102, 4-27.

    Stereotypical expectations may inhibit the performance of women:
    Shapiro, J. R. & Williams, A. M. (2012). The role of stereotype threats in undermining girls’ and women’s performance and interest in STEM fields. Sex Roles, 66, 175-183.

    Women who leave science do not do so out of ‘free choice’:
    Stephens, N.M., & Levine, C.S. (2011). Opting out or denying discrimination? How the framework of free choice in American society influences perceptions of gender inequality. Psychological Science, 22, 1231-1236.

    Gender bias in language usage makes selection of women more difficult:
    Gaucher, D., Friesen, J., & Kay, A. C. (2011). Evidence that gendered wording in job advertisements exists and sustains gender inequality. Journal of Personality and Social Psychology, 101, 109-128.

    Schmader, T., Whitehead, J., & Wysocki, V. H. (2007). A linguistic comparison of letters of recommendation for male and female chemistry and biochemistry job applicants, Sex Roles, 57, 509-514.

    Rubini, M. & Menegatti, M. (2014). Hindering women’s careers in academia: Gender linguistic bias in personnel selection. Journal of Language and Social Psychology, 33, 632-650.

    Stout, J. G., & Dasgupta, N. (2011). When he doesn’t mean you: Gender-exclusive language as ostracism. Personality and Social Psychology Bulletin, 36, 757-769.

    ####

    There is also a collection of (funny but not funny) anecdotes on the same website which are interesting to read, although these are mostly in Dutch: http://www.athenasangels.nl/angel-alerts.

  5. numeric says:

    But academia has an (imperfect) tradition of openness…

    One wonders how you can get that out with a straight face. Nearly every decision made in academia relating to research, hiring, and admission is the result of decisions made in secret by individuals who are, at best, tangentially accountable. Research is evaluated by anonymous peer review; hiring and tenure decisions are made by committees/votes of the faculty that are secret (and sometimes anonymous); and admission is similarly handled. In fields without a strong outside industrial counterpart, a self-perpetuating collegium establishes itself and fends off all attempts to modernize/reform the field.

    • Rahul says:

      It’d be an interesting exercise to choose a small sub-discipline & collate all funding proposals for the last decade & evaluate the degree of overlap between the names of grantees & reviewers. Basically quid pro quo all the way.
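      Rahul’s proposed exercise boils down to a set intersection over rosters. A minimal sketch with invented names (with real data, disambiguating author names would be the hard part):

      ```python
      # Illustrative only: overlap between grantee and reviewer rosters
      # for a sub-discipline over a decade.  All names are made up.
      grantees  = {"A. Smith", "B. Jones", "C. Lee", "D. Patel"}
      reviewers = {"B. Jones", "C. Lee", "E. Novak"}

      overlap = grantees & reviewers                      # people on both lists
      jaccard = len(overlap) / len(grantees | reviewers)  # overlap as a fraction
      print(sorted(overlap), round(jaccard, 2))
      ```

      One would also want a baseline: some overlap is expected in a small field simply because the qualified reviewers and the strongest applicants are the same people.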

    • Andrew says:

      Numeric:

      Yeah, good point. I had the impression that academia was more open in these things than most other white-collar work environments—that it is in part because of academia’s openness that we hear about its problems. For example, we hear a lot about plagiarism in academia and of course I’ve experienced it myself—but in business or government if the boss takes credit for what an employee has done, it’s not even worth mentioning, it’s so standard. Similarly I’d guess that bias against women in academic workplaces is something we hear more about because academics can be more free to speak out, compared to employees in other sectors. But I don’t have any real evidence to support this impression of mine.

      • Tian Zheng says:

        Our performance evaluation and promotion scheme is by no means more open than in other work environments.

      • numeric says:

        I can only speak to my experience in government/industry, but for “technical” people (those with Ph.D.s doing quantitative-type work), I’ve found an openness that is missing in the non-scientific academic fields (and political science, psychology, and economics are not scientific). Once a decision is made, of course, there isn’t the endless debate that would occur in an academic department. But there are real problems to which the best solution needs to be applied, and that concentrates the mind wonderfully and doesn’t allow the type of specious obstructionism that you’ve documented so ably in this blog. I will also say that scientific departments are like a breath of fresh air compared to the non-scientific ones for intellectual thought.

        As far as credit goes, everyone knows that the analytics team did the work, not the boss above them. But isn’t it that way in everything? Eisenhower was in charge of SHAEF, but he didn’t plan the invasion; Bedell Smith did (and even he had a huge staff). But Eisenhower was responsible for the whole plan, and his head was on the chopping block.

  6. Kit says:

    I’m not sure how fast local news like this spreads, so perhaps this post is actually a response to the story I’m bringing up, but if not, it is very timely. The dept. of mathematics and statistics at the University of Melbourne has just advertised three positions which are open to female applicants only. As someone who graduated from that department under the supervision of a (sadly now retired) trailblazing female mathematician, I am proud of this decision, but of course it is mostly a symbolic statement, and change is going to have to continue throughout society before we can claim anything like an end to gender inequality.

    • Jonathan (another one) says:

      Without taking a position on this particular decision one way or another, do you really not see *any* path to an “end to gender inequality” that doesn’t have conscious counter-inequality as an essential ingredient?

      • Jonathan (another one) says:

        … by which I mean explicit “reverse” inequality

        • Tian Zheng says:

          From Chris Rock’s Oscar jokes: “If you want black nominees every year, you need to have black categories. That’s what you need. You already do it with men and women. Think about it: There’s no real reason for there to be a man and a woman category in acting. There’s no reason! It’s not track and field. You don’t have to separate them. Robert De Niro never said, ‘I should slow this acting down so Meryl Streep can catch up.’ No. Not at all. If you want black people every year at the Oscars, just have black categories.”

          Not entirely relevant but interesting to think about it for a minute.

        • Kit says:

          Probably, but it would be much slower.

    • Rahul says:

      Sounds like a particularly crappy decision to me.

      Should this ad hoc decision be replicated by other departments introducing reservations? For which underrepresented groups?

      What’s the larger framework to judge this by? How does this scale?

      • Z says:

        It probably doesn’t scale, but I bet it’s a good money-ball type decision for that institution at this time to draw undervalued talent.

      • Kit says:

        Of course it’s an ‘ad hoc’ decision; there is no central body for hiring policy in university departments. That’s no criticism at all. And absolutely, other underrepresented groups should be subject to similar policies. As I said in my first comment, regardless of how ‘fair’ or otherwise this is, it ultimately furthers human knowledge more than the status quo. I struggle to see any argument that this is not desirable which doesn’t also imply we should completely remove the institution of academia.

  7. Jim says:

    If only somebody (e.g., Katy Milkman) could run a field experiment testing whether professors treat students differently by gender. That would sure help us understand biases that exist in academia. Of course, this researcher would probably get chewed out by bloggers.

    • Andrew says:

      Jim:

      If only somebody could do such a study while compensating the participants for their time and effort.

    • James Savage says:

      Why do we need to run an experiment? If it’s gender effects we’re interested in, we get exogeneity of treatment from gender assignment alone. Perhaps I’m misunderstanding what it is you want to measure.

      • Z says:

        People who don’t accept unadjusted disparities as evidence of bias in this context could say:
        1) Gender might not be exogenous after selection on becoming a professor.
        2) Even if we have shown that gender is the cause of the rating disparities, that doesn’t mean that bias (or at least the bias of their students) is the mechanism. It’s possible that upstream bias somehow makes women worse teachers or, infinitely less plausibly, that they’re somehow genetically predisposed to be worse teachers.
        I don’t agree with (1) or (2), but they’re the types of objections that can be addressed through an experiment where students in online courses are randomly assigned to either be told that their teacher is a man or a woman. I’m sure the disparities in ratings would persist.

        • James Savage says:

          I thought the question was about feedback from professors to students?

          • Z says:

            Oh, you’re right, that’s what Jim who started this thread mentioned. Elsewhere in the comments everyone was talking about feedback from students about professors so I missed that. I think my points 1 and 2 still hold though, replacing ‘professor’ and ‘teacher’ with ‘college student’.

      • Anon says:

        It depends on how you define ‘gender effects’.

        In many cases (for instance, course evaluations), policy-relevant gender effects are not exogenous due to various selection hurdles individuals have to go through to become teachers in the first place.

        So, you could have a DAG (:)) that looks like this:

        Gender -> Characteristics of those who end up at university -> Course evaluations
        Gender -> Course evaluations (this one captures how students have biased perceptions)

        You’d want to control for teacher characteristics to measure gender bias in course evaluations. You wouldn’t want to control for that if you examine more general gender effects.

        Only the most general gender effect is always exogenous.
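        Anon’s two-path point can be made concrete with a toy simulation (all effect sizes invented, plain least squares, no claim about real magnitudes): regressing evaluations on gender alone recovers the total effect of both paths, while also controlling for the teacher characteristic isolates the direct (student-bias) path.

        ```python
        import numpy as np

        # Toy version of the DAG: gender -> characteristics ("skill") -> evals,
        # plus a direct gender -> evals path (student bias).  Numbers are made up.
        rng = np.random.default_rng(1)
        n = 10_000
        female = rng.integers(0, 2, n).astype(float)
        skill = 0.3 * female + rng.normal(0, 1, n)                # selection path, +0.3
        evals = 0.5 * skill - 0.2 * female + rng.normal(0, 1, n)  # direct bias, -0.2

        # Unadjusted: total effect (0.5 * 0.3 - 0.2 = -0.05).
        b_total, *_ = np.linalg.lstsq(
            np.column_stack([np.ones(n), female]), evals, rcond=None)
        # Adjusted for skill: direct (bias) path only, about -0.2.
        b_direct, *_ = np.linalg.lstsq(
            np.column_stack([np.ones(n), female, skill]), evals, rcond=None)
        print(round(b_total[1], 2), round(b_direct[1], 2))
        ```

        This is exactly why the choice of what to control for depends on which gender effect you are after: the adjusted coefficient answers the “biased perceptions” question, the unadjusted one the more general question.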

    • If only someone who wanted to do this kind of study would talk to someone like Andrew about how to actually design such a study and analyze it using a good statistical model and real measurement uncertainty etc.

  8. Z says:

    “I have no reason to think that academia is worse than other sectors when it comes to how women are treated.”

    I think a factor specific to academia is a notion of good work stemming from innate ‘genius’, which many people view (subconsciously) as a male trait. Add egos to the mix and a woman doing superior work becomes a threat to men’s ideas of their own innate genius.

    I don’t think men are threatened by women’s success in the same way in the corporate world where ladder climbing isn’t thought to be so strongly tied to innate worth.

    Wishy-washy, I know, just an impression I have.

  9. Sarah Cowan says:

    Hi Andrew,

    I have not read the comments because there are a lot of them and, well, I’m trying to make it as a woman in academia. But the smartest people I know working — or having worked — on this are Paula England (http://sociology.as.nyu.edu/object/paulaengland.html) and Shelley Correll (http://gender.stanford.edu/people/shelley-j-correll). Neither focuses on academia. I recall seeing a study on co-authorship and gender, but I cannot give you any more hints than that. I’m thrilled that you are willing to take up the topic. Godspeed! And can you please do it quickly? Because my tenure clock is ticking.

  10. Doug Davidson says:

    Also concerning: “I have no reason to think that academia is worse than other sectors when it comes to how women are treated.”

    Does it take longer to finish education or training requirements in academics compared to other sectors? I’m certainly no expert on the topic, but I have heard the term “rush-hour of life” used to refer to the period of life when job stress and pressure to make decisions about starting a family are highest. If this time period is later for academics than some other sectors, then it seems to me there could be a difference.

    I don’t know much about this area of work, so I would be interested if this idea is still being debated.

  11. David says:

    As an avid reader of this blog with limited actual statistical background, I figured this highly controversial topic, where most opinions are simply that, would be a good opportunity to make my first post. Like Andrew, I’m no expert on the subject. My perspective is that of someone in the biological sciences, where, in times of generally increased competitiveness and scarcity of funding, there are more and more “affirmative action”-style opportunities for women. I have yet to come across a request for applications that is aimed exclusively at men, but that’s just an aside.

    I believe that whenever bias against women in Academia is being discussed, these discussions themselves are subject to bias.

    For one, those who argue against the existence of a bias are mostly male, while arguments for its existence come mostly from women. So at the very least there is a gender bias in the discussion itself. Why that is may be obvious or not…

    Then there is probably confirmation bias. Since we know there was a bias against women in academia in the past, we tend to look for it and see it lurking around every corner:

    On a larger scale, that translates into cherry-picking and interpreting data in ways that suit our theory and make it newsworthy. For example, looking at the data on the NIH-funded workforce by gender (NIH Data Book: https://report.nih.gov/nihdatabook/index.aspx) reveals that women are awarded less than half the number of grants that men are. Enough to get out the pitchforks? Maybe, but when you look at the data on application numbers and success rates, it becomes clear that women and men have close to equal chances of getting funded. Granted, the fact that there are far more applications from men than from women does raise the question of gender bias influencing the application process, and it’s hard to tell how the data were analyzed exactly without disappearing down the rabbit hole.
    Overall, this leads the public to a certain degree of change blindness when it comes to recent developments in bias against women in academia.

    On a smaller scale, confirmation bias leads to anecdotal evidence, as reported by Jessica H. and backed up by stuff, such as contributions by women being attributed to male reiterators, being interpreted as bias against women. I think many people in academia, myself included, can recall numerous occurrences of this highly frustrating event. However, based on my experience I would like to propose that it is not bound to sex or gender, but is rather a combination of the reiteration effect on the part of the information recipient and cryptomnesia (best case) or plagiarism (worst case) on the part of the reiterator, which occurs frequently in the context of scientific meetings.
    Cryptomnesia is not a new concept and has been linked to high cognitive load, possibly making its occurrence in this specific setting more likely.
    Very little conclusive science has been done on the reiteration effect that would demonstrate its influence on the perceived importance and validity of communicated information, and what little there is has shown the effect to be weak (http://library.mpib-berlin.mpg.de/ft/rh/RH_Reiteration_1997.pdf), but I believe it is, at least in part, what applies to this situation.

    Again, this is just a theory, which probably just scratches the surface, but I think its essence touches on the concepts of awareness of multiple perspectives and context-dependence as outlined here: http://www.stat.columbia.edu/~gelman/research/unpublished/objectivity10.pdf

  12. Moreno Klaus says:

    A few comments:

    From my own anecdotal evidence, there are definitely some gender-related problems in academia. I think in fields with few women, it’s simply inherent to being a minority, similar to racial prejudice, etc. But there are also problems in fields where women are more present, like medicine for instance.

    Also, for some reason, women seem to have a lot more problems (or complain a lot more?) in general at work, even when the supervisors are other women, so I don’t think this is men’s fault only.

    This tension between workload and child rearing is still a big problem, especially for women who are at the beginning of their careers. Hiring committees know this, of course, and some bias may arise from it. But this is the case in other jobs too. I think we need better legislation obliging both men and women to take a certain number of weeks off regardless, but this is not specific to academia.

    I have heard professors say things to female Ph.D. students that literally made my jaw drop and that would NEVER be said to a male Ph.D. student. What is specific to academia is that professors have a lot of power over Ph.D. students, so it may become quite difficult to overcome abuse. This, I think, is a very important point specific to academia.

    Most of these problems will be solved once it becomes more common for women to rise to power (or not?). Of course, don’t expect this to happen in your math department anytime soon, so it will take generations…

    • Tian Zheng says:

      Very good points made. No one is saying this is “men’s fault”, which should be made clear.

      However, I do think men are in a better position to advocate for this cause. There are more men in administrative roles and holding senior positions. In some fields, they are simply the majority. Equality can never be achieved without the proactive involvement of the majority group.

      • Moreno Klaus says:

        This is difficult, for the simple reason that most men never have to face these problems. I am not sure you can understand the challenges that women face without having even a small taste of them. I was also in the “this does not happen here” crowd, but I have heard of a couple of things that were a little bit outrageous. As at every office, I imagine.

    • Martha (Smith) says:

      Moreno,

      Several good points, but I’d like to comment on a likely connection between two of them that you mentioned but didn’t connect: “But there are also problems in fields where women are more present like medicine for instance,” and “it will take generations”

      The phenomenon of women being more present in medicine is one that has developed within my lifetime: When I was a child, a woman physician was very rare. When I was in college, a number of my fellow women students wanted to become physicians — but there was often well-intended opposition from parents. For example, the parent might insist that the student obtain teacher certification, because that was an insurance policy that the daughter would be able to support herself if need be, whereas there was doubt that the daughter could pass all the hurdles to becoming a doctor. But squeezing teacher certification requirements and pre-med requirements into a degree plan (without an extra year, which typically would be unsupported by parents) was nigh on to impossible. And those women who did manage to do a pre-med program faced pretty serious skepticism from the med school interviews. As a result, women physicians in my age bracket (pre-boomer) were also fairly rare (although maybe not quite as rare as in the preceding age cohorts). So the taking generations phenomenon in medicine is still in progress.

  13. Martha (Smith) says:

    Very sorry to hear that this still happens; I naively assumed that it had ceased or at least lessened. Possibly there is some effect of age of the woman — not to say that treating younger women this way is acceptable, but if age is a factor, at least it offers hope to younger women that it won’t continue all their lives. Or maybe I just stopped noticing it as I got older? Don’t really know.

  14. Robyn says:

    Although not academia, here is a recent survey of some of the top women professionals in Silicon Valley and their experiences. All 200+ survey participants had at least 10 years of experience.

    http://www.elephantinthevalley.com/

  15. clerio says:

    What about simply looking at all-female or predominantly female companies and colleges? Shouldn’t women want to work for those? What is their performance rating? Attracting tons of undervalued talent, no men to hold them back. They must be “yuge.”

    • Curious says:

      1. There is no reason to expect that large numbers of women should ‘want to work for those’ institutions relative to other institutions, as these motivations are so multifaceted that a simple explanation such as the one you are putting forth would not be a reasonable assumption. The framing of the questions themselves suggests implicit sexism.

      2. The appropriate expectation is not that women teachers would have higher ratings, but that ratings would more accurately reflect actual teaching performance and that they would not be downwardly biased based only on the fact that the teacher is a woman.

      3. The appropriate questions would be — (a) Do teacher ratings for women teachers at all-women colleges correlate with actual performance as compared to men teachers? (b) Compared to coeducational colleges? (c) Are rating thresholds similar for women teachers as for men teachers at all-women colleges? (d) Again compared to coeducational colleges?

      Even if all of the results to a study such as this were suggestive of no bias in ratings, it would not provide an answer to the existence of other types of biases as articulated by a number of commenters above.

      That said,

      • clerio says:

        Given the magnitude of the disparity, if discrimination explains most, or even a lot, of it, then that discrimination must be enormous in its magnitude and scale.
        Let’s assume that that enormous discrimination does exist. Historically, when ethnic groups like Italians, Jews, or Irish were facing discrimination of that magnitude and scale, they found it useful to form all-Italian, all-Jewish or all-Irish companies and succeed on their own. Sometimes to avoid discrimination they would even start colleges (like Gallatin and “local merchants” started NYU). Their success provided compelling evidence that what causes hiring disparity is actually discrimination, not, e.g., differences in the type of cognitive abilities (or, in the case of women, risk-aversion).

        • Andrew says:

          Clerio:

          I can’t imagine anyone seriously attributing women’s underrepresentation in academia to risk aversion! Academia is pretty notorious as the home of people who want to avoid risk, who are happy with a tenured job.

          • clerio says:

            I think it’s possible that, even within a group with a fixed average level of risk-aversion, women can be more risk-averse than men.
            Choosing family (the less risky option) over a job (the more risky option) can be interpreted as revealed risk-aversion. Choosing less competitive (and therefore less well funded) fields can create additional disincentives to stay in the workforce.

            https://www.chicagobooth.edu/capideas/fall02/genderandcompetition.html

            • Andrew says:

              Clerio:

              I’m not saying you’re wrong—I have no idea—but I don’t really understand what you’re saying, for two reasons:

              1. Why do you say that family is a less risky option compared to job? This does not seem to fit into the way that risk-aversion is discussed in economics. I just don’t see this at all.

              2. It is my impression that the better-funded fields are less competitive, and the less-well-funded fields are more competitive, which is the opposite of what you’re saying. For example, try getting a job with a degree in statistics compared to, say, history.

              • clerio says:

                1) I was thinking of “family” as a kind of a “guaranteed employment”, so the “less risky” option. Sort of like the “government job” described here:
                Public sector employees: Risk averse and altruistic?
                http://www.sciencedirect.com/science/article/pii/S016726811200131X

                I apologize if I am wrong. Is statistics as a field both less competitive and better funded than history?

              • Andrew says:

                Clerio:

                I guess to have this conversation one would need a clear definition of “risk aversion.” Family is not so guaranteed: people get divorced! And I also associate the public sector with risk aversion, but I see academia as part of the public or quasi-public sector. But I also see what you’re saying about the tournament-like aspect of tenured faculty positions having a risky feel to it.

              • clerio says:

                I can see how different kinds of competition can be associated with both less and more money. You can have a BS field which is overflowing with low-quality candidates and is (deservedly) under-funded, and that shortage of money induces competition. Since the field is BS, there are no real standards of “superiority”; competition is bitter, but mostly “social”: you get ahead by gossip-peddling, forming political alliances and back-stabbing. On the other hand, a (deservedly) better-funded field, with clearer criteria of success, will also have a lot of competition — but this time it will be about merit and relevant abilities. Obviously real departments are somewhere in between the two extremes.

              • Elin says:

                “Family” is “guaranteed employment”? Maybe if you consider the extended family leave support in Sweden or something as a kind of guaranteed income. In the US there is no support for parents. Are you assuming that all moms are married and that all families can get by on one income? It’s really not the case any more since real income has declined so much.

          • Rahul says:

            Andrew:

            Not so clear. Many very smart engineering PhDs take up relatively uninteresting (to them) corporate positions to avoid the relative insecurity of tenure-track and the stress of publish-or-perish & getting funding.

            It’s easier for a mediocre but hard-working person to do OK in corporate jobs. The aggregation masks individual under-performance to a large extent.

            • Gaythia says:

              Hopefully, no women are currently experiencing what I experienced in my very first freshman geology class, from the professor: “Hey you, girl in the first row, what are you doing here, this is the geology for majors class!” I eventually switched to chemistry.

              Sadly, sexual harassment is still a very serious issue, and one for which women are just now finding support if they choose to go public, http://www.dailycal.org/2016/04/11/campus-graduate-students-file-state-complaint-campus-assistant-professor-uc-regents/, and http://www.sciencemag.org/news/2016/01/caltech-suspends-professor-harassment-0.

              In my grad school experiences such problems were sort of finessed, not dealt with directly. The men were the ones in positions of power and influence, the women more likely to quit.

              Seemingly minor things can matter greatly. This Chemistry study in 2008 has to do with how recommendation letters are phrased: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2572075/. “Recommenders used significantly more standout adjectives to describe male candidates as compared to female candidates,” even though “objective criteria showed no gender differences in qualifications.” Commenter “Stuff” gave a similar reference regarding gender bias and wording above. The academic employment situation in Chemistry is really, really tight. Undoubtedly hiring committees have to sort through lots of resumes and make quick decisions.

              This is a more general but largely anecdotal UK based study: http://www.biochemistry.org/Portals/0/SciencePolicy/Docs/Chemistry%20Report%20For%20Web.pdf.

              There are, of course, real biological differences. As noted in comments above, because the structure of things was determined by the fact that men were there first, childbearing often becomes a career barrier. Unlike Clerio above, I can’t imagine that anyone would pick childbirth as a less risky or even less stressful alternative to a career in academia, although one can see that “barefoot and pregnant” is sort of a default for women with few options. In academia there are set timelines for PhDs, post docs and such which hit key biological-clock years for women who also desire to have children.

              Of course, more flexibility in academia and other employment venues would undoubtedly be beneficial for lots of people in lots of different ways.

              According to this American Chemical Society survey, 6% of female members were employed part time as opposed to 3% of males, and 2.4% of females were employed as post docs as opposed to 1.8% of males. http://www.acs.org/content/acs/en/careers/salaries/surveys.html. Chemists in general are not doing wonderfully; this report says 11% accepted a position in 2015 with a lower salary than the one they had before. But this is a totally biased sample space. ACS membership costs hundreds of dollars depending on which journals one subscribes to. I’ve been in and out of the organization through the years. I’m more likely to join up if I need the journals but don’t have access to a library, or if I am about to change jobs.

              My own career was substantially impacted when I was laid off, while 6 months pregnant, by a major corporation. That same year, 2 of the (male) senior administrators of my industrial facility suffered heart attacks. Their abrupt departure was treated as one of those things that just happens when you employ gung-ho, type A achievers. Pregnancy did not seem to me to turn out to be another “just happens” thing that the company coped with. Maybe these managers’ abrupt absences didn’t create difficult-to-fill gaps. My attempts at advance planning may have backfired into convincing upper management that a pregnant chemist, and a planned leave of absence, in a one-of-a-kind laboratory spot, was going to be a real pain. Maybe it really was just random. I was told that if lawyers got involved, mine could be outnumbered at a rate of 60 to 1. At any rate we reached a mutually agreed-upon settlement involving changes in the layoff date to achieve insurance coverage and a substantial severance package.

              Being one of a kind is stressful. I once resigned from an equal employment committee on the basis that I was just too busy living this situation to want to spend even more of my time in lengthy discussions on the topic in company meetings.

              A lot of employment depends on networks of contacts. The relative paucity of women in those positions probably makes it harder for younger women to form those contacts and move forward. Or to realize that opportunities exist. Knowing women who’ve had negative experiences also could be discouraging. Maybe that is a sort of backhanded way of supporting Clerio’s position.

              • Rahul says:

                The network argument is interesting: Is it analogously harder for, say, a Lebanese or Korean chemist to move forward because of their limited network of contacts? I suppose it is. But does this mean we should be doing something to increase the number of Korean chemists to fix this inequity? Probably not.

                My point is, we can find a lot of diverse explanations that make it difficult for each one of us to move forward with our own unique challenges. But can we level the playing field with interventions everywhere?

              • Gaythia Weis says:

                Rahul: Transparency, flexibility in employment time-history patterns, the ability of those without powerful positions to redress grievances effectively, and other such actions can definitely benefit everyone involved.

                On the other hand, we can’t pretend that evenhandedness from some new starting point now redresses difficulties in getting to that position. There is an interesting article on my first place of employment as a chemist here, http://www.tri-cityherald.com/news/local/hanford/article62991407.html, which demonstrates the difficulties faced by African Americans employed at Hanford during WWII, which worsened thereafter. This had residual effects on workers at the time I was there, even though it is a Federal facility in Washington State, a state generally considered to be more liberal than most. Overcoming this exclusion is not a “unique challenge” in the sense of being directed at specific individuals or being surmountable by specific individuals.

                My point on networks isn’t that every group needs to create its own lines of insiders, but rather that the lack of people like oneself makes work difficult. More attention needs to be paid to how long-existing (often “old boy”) networks exclude people not like those in positions of power and authority. And again, such openness and attention to inclusion can work to the benefit of all.

              • Martha (Smith) says:

                Gaythia’s comment, “My point on networks isn’t that every group needs to create its own lines of insiders, but rather that the lack of people like oneself makes work difficult. More attention needs to be paid to how long-existing (often “old boy”) networks exclude people not like those in positions of power and authority. And again, such openness and attention to inclusion can work to the benefit of all,” seems to me to get at the crux of the matter.

                One term that has often been used to talk about these (usually unintentional but usually thoughtless) mechanisms of exclusion is “chilly climate”. Gaythia’s experience in freshman geology is one example. One I experienced, when I was the only woman on a faculty committee, was finding that when the committee chair closed his office door after everyone arrived for the meeting, I was facing a large poster of a not-entirely-clothed woman. It is extremely difficult to sit through a meeting under such circumstances, and also to work with that colleague later.

              • Rahul says:

                Transparency, grievance redresses, support-systems etc. are all great. We ought to do all those.

                But it’s wrong to jump to extreme, knee-jerk solutions like “reservations for women,” as some even in this comment thread seem to support.

              • Martha (Smith) says:

                @ Rahul:

                I wouldn’t support “reservations for women” (or for any other group) as a general policy, but I can see how in some circumstances it might be an appropriate thing to try. For example, if 60% of a department’s undergraduate majors are women, 40% of its graduate students are women, but only 5% of its tenure track faculty are women (and the profession has an overall percentage of 25% tenure track faculty being women), and there is a past history of “chilly climate” or even overt discrimination against or harassment of women in the department, then reserving three positions for women as part of an effort to improve past poor practices might be warranted (or even part of a legal settlement).

                Your comparison with ethnic background groups misses the point that women are about 50% of the ambient population. However, in situations where the institution served predominantly an ethnic minority group, similar arguments might apply. For example, some state universities in or near U.S. urban areas serve predominantly African American students; some universities serve predominantly Hispanic students. It is possible that some university in some country serves predominantly Lebanese, or Korean, or some other ethnic group.

  16. VM says:

    At the risk of being called a denier, let me just say that I don’t think it is the case that there is a bias against women in all of academia. It really varies based on field, and when there is bias present, it is usually not something academia-specific. While you may find fewer women in an engineering building on campus, you will find men equally underrepresented in a social psychology or clinical psychology building. In fact, this was such a huge problem for me in grad school that a few of my papers have 90% female participants because my recruitment posters were placed around psychology buildings. They got published only because they were high-expense experiments and yielded interesting patterns of results, but I wouldn’t know if at least some of the effects were driven by gender unless somebody comes up with enough cash and resources in the future to replicate it (but who would care to, given that we had the first shot at it and got it published in a fairly high profile venue?).

    There are other statistics available (don’t have the time right now to dig them up) that suggest that women outnumber men when it comes to attending college. At least in my small field of study, I see female postdocs transition into very good tenure-track positions just as often as male postdocs do. In fact, the ones that recently caught my attention have all been women. That doesn’t mean my experience equals what is true in general, but I suspect there is little/no bias against women in many fields, especially ones that have been dominated by women over recent years. Certainly, things can improve in terms of encouraging women to take up engineering/math, etc.

    There is a wage gap issue that does need to be addressed. However, again, this problem is likely to be much more subdued in academia as compared to industry because many academic salaries are publicly available. At least until the postdoc level, salaries are uniform. Differences can arise during the negotiation process when transitioning to a professor job. Women may tend to be more cautious while negotiating because of biases they may have experienced previously (not just within academia, but in general). So again, that may not be something specific to academia.

    Just to be clear, I’m not suggesting that there is no bias in academia or that there is nothing that needs improvement. That is almost never the case — things can always be better. What I am saying, though, is that bias against women may not be as severe in academia specifically (relative to other domains). If women do experience bias in an academic environment, it is because of more general factors (which need to be addressed) — e.g., in the US, the lack of paid family leave and the general expectation that parents get back to work full time after handing over their baby to strangers within 12 weeks of birth may place a greater burden on women than men in general. These are major issues for sure, but again, not specific to academia.

  17. Gaythia says:

    I thought this might deserve a place in the file of odd statistics, if there is one: http://www.motherjones.com/politics/2016/05/even-female-supreme-court-justices-get-interrupted-lot-men.

    If you analyze this as is done here, it appears that Elena Kagan and Sonia Sotomayor get cut off more than the men. Sotomayor was interrupted 57 times during arguments, while Kagan got cut off 50 times. The next most-interrupted person on the list was Justice Stephen Breyer, who got interrupted 36 times. However, if you averaged across the men and compared that to the women, the men’s average might come out lower, in part because Justice Thomas almost never speaks and thus can’t be interrupted.

    The relevance to this post is to emphasize all of the small ways in which women professionals might fall behind men.
