Mertz’s reply to Unz’s response to Mertz’s comments on Unz’s article

Here.

And here’s the story so far:

Ron Unz posted a long article on college admissions of Asians and Jews with some numbers and comparisons that made their way into some blogs (including here) and also a David Brooks NYT column which was read by many people, including Janet Mertz, who’d done previous research on ethnic composition of high-end math students.

Mertz contacted me (she’d earlier tried Brooks and others but received no helpful reply), and I posted her findings along with those of another correspondent. Unz then replied, motivating Mertz to write a seven-page document expanding on her earlier emails. Unz responded to that, characterizing Mertz as maybe “emotional” but not actually disputing any of her figures. Unz did, however, make the unconvincing (to me) implication that his original numbers were basically OK even in light of Mertz’s corrections. So Mertz responded once more. (There’s also a side discussion about women’s representation in mathematics, an interesting topic but one I’m ignoring here as not being relevant to the main point of discussion.)

Mertz’s latest seems reasonable to me. I particularly like this bit, which I think has more general application:

Unz considered “five minutes of cursory surname analysis” a sufficient basis on which to claim an important unexpected discovery, i.e., a rapid collapse in Jewish very high-end achievement in the 21st century. Most unexpected discoveries are found not to be true when additional analyses are performed to test their validity.

Exactly!

If I had put together a number based on a cursory five-minute analysis, and if that number had appeared in the Times, and then someone went to the trouble of correcting me, I’d be on the phone with the newspaper right away asking them to issue a correction. I might disagree on the interpretation of the numbers, but I’d feel bad about putting a mistake into wide circulation, and I’d want to do whatever was necessary to correct it.

Again, to issue this correction would not necessarily require Unz to back off from all his larger conclusions; he’d just have to modify his claims in light of the data, which is a good idea in any case but especially true when confronted with much higher quality data than what you started with.

P.S. As before, the question might reasonably arise: why do I continue to post on this topic? For an answer to this question, I refer you to the last part of this earlier post, the part entitled, “A couple more things (for now).”

43 Comments

  1. nb says:

    I just wanted to comment that I’ve been checking more of Unz’s numbers, and yet more are incorrect. For example, he reported here that 3% of the 2010-2012 Intel Science Talent Search winners are Jewish. There are 30 winners, so that means Unz identified only 1 Jew. As with the recent IMO names, Unz did not apply his own stated methodology for identifying Jews (counting Germanic and eastern European names as Jewish): there are 2 Intel STS winners with names classified as Jewish or ethnic German on ancestry.com, 2 more with names classified as possibly Jewish on ancestry.com, and 1 with a Russian name. I was able to definitively identify at least 3 of the latter as Jewish. Two live in areas of Long Island with high Jewish populations; one is described on the STS website as having a mother with one of Unz’s Weyl distinctive Jewish surnames, and another spoke about her Intel research at a synagogue (which I was able to find from a cursory Google search). I was also able to find a Bar Mitzvah announcement for the brother of another Intel STS winner. This took about 10 minutes of googling, so it looks like Unz spent 5 minutes on the Intel STS names too, once again grossly underestimating the % of Jewish academic high achievers.

    It’s going to take me more than 10 minutes of googling to check Unz’s numbers for the 120 STS Finalists from 2010-12, but since the winners are a subset of the finalists, it’s clear that Unz’s figure of 7% Jewish is an underestimate. Also, Unz claimed that the 2010-12 STS finalists are 29% non-Jewish white, 64% Asian, and 7% Jewish (which adds up to 100%). On the STS website, there are pictures of each finalist, and at least 3 of the 2010-12 finalists are non-Asian people of color, so once again, Unz is ignoring the existence of black and Hispanic academic high achievers.

  2. Rahul says:

    Prof. Mertz, you write:

    And, yes, Summers was wrong in his 2005 talk regarding the primary reasons for the extreme scarcity of women among the tenured research faculty in top-ranked U.S. mathematics departments. However, that is another, unrelated topic better left for a different post.

    I would be eager to read that post. If not a full post, a comment perhaps?

    • RJB says:

      I was struck by this statement as well. Summers said

      There are three broad hypotheses about the sources of the very substantial disparities that this conference’s papers document and have been documented before with respect to the presence of women in high-end scientific professions. One is what I would call the-I’ll explain each of these in a few moments and comment on how important I think they are-the first is what I call the high-powered job hypothesis. The second is what I would call different availability of aptitude at the high end, and the third is what I would call different socialization and patterns of discrimination in a search. And in my own view, their importance probably ranks in exactly the order that I just described.


      Is the order wrong, is a more important one missing, and/or is one of these of no consequence whatsoever? Inquiring minds want to know!

      • That Summers made a distinction between the first and third implies that he thinks the first is not the result of socialization/structure. Therefore, it arguably should be classed with his second hypothesis as “essentialist”. (That is, not only does he think there’s an essential difference in aptitude, he thinks that there’s an essential difference in interest.) Alternatively, he is perhaps just agnostic on this point and for that reason wants to distinguish it from the other two factors which he is more certain can be characterized as either social or inherent.

        Meanwhile, Mertz showed that there is huge variation in the gender ratios of math high-achievement across different societies, which implies that non-essentialist factors weigh more heavily than essentialist factors do.

        I can’t speak for Mertz, but I’d say that her work implies that the order should be (#1/#3, #2), where #1 and #3 are grouped together, with #2 allowed as a possibility but not confirmed.

        • Rahul says:

          I think you are drawing conclusions stronger than warranted. So far as I can see, Mertz’s work shows that it cannot be #2 acting alone.

          But Mertz shows nothing about the relative importance of #1, #2, and #3. There might be evidence, but not that I can see in Mertz’s work.

          I might be mistaken, but then Mertz’s paper ought to say something about the quantitative weights of essentialist and non-essentialist factors. I don’t think it does.

      • Janet Mertz says:

        Larry Summers stuck his foot in his mouth when he stated, “their importance probably ranks in exactly the order that I just described.” In other words, he was implying that gender differences in “intrinsic aptitude” in mathematics at the very high end (which he expounded upon in greater detail later in his talk) were a more important factor than gender discrimination and differential socialization. He was speaking, in his roles as President of Harvard University and a top-ranked economist, as the guest lunch-time speaker at a National Bureau of Economic Research-sponsored conference on women in STEM fields and what might be done to increase their participation. Throughout the day, people were presenting standard conference talks with powerpoint slides full of hard data related to this issue. Summers, instead, talked off the top of his head, airing his personal biases in the absence of data or knowledge of the field. By hypothesizing that the major factors might be innate biological differences between the sexes, he was suggesting that little can be done to increase participation of women in STEM fields. In hindsight, he should not have been a speaker at this conference.

        Most of the women speakers at this conference held high-powered jobs such as Professor in a STEM field at MIT or Professor of Electrical Engineering and Dean of a top-ranked engineering school. They had clearly managed to overcome the problems Summers was citing (differential socialization, discrimination in a job search, etc.) to reach their present jobs. Many of them felt, through both personal experience and numerous research studies in the field, that the number 1 problem BY FAR nowadays was gender discrimination ON the job, be it conscious or unconscious, a factor Summers was largely ignoring and downplaying in importance in his talk. In addition to occasional major acts of gender discrimination, most women working in STEM fields face on a regular basis numerous minor slights that gradually accumulate to lead to different career trajectories or, even, lead them to drop out in favor of fields where the work environment is more female friendly. These unconscious slights can be items such as not being invited to join the guys for lunch, a game of golf, or a stop at the bar after work, where experiments are designed and collaborations are formed. For anyone interested in this topic, I highly recommend the book “Why So Slow?” by Virginia Valian. One might also read about the topic of “implicit bias”; there is a large ongoing study of various types of gender and racial implicit biases (see http://www.biasproject.org/), and you can examine your own implicit bias levels by taking online exams at https://implicit.harvard.edu/implicit/demo/ There is lots of scientific literature showing that women as well as men discriminate against women who work in STEM fields. The theory is that this happens because such women are violating “gender schema,” i.e., the social norms we all learn to expect, which are unconsciously ingrained in us by our culture.

        For my response to Summers’ greater male variance hypothesis, I refer folks to my 2012 Notices of the AMS article (www.ams.org/notices/201201/rtx120100010p.pdf). In the US, the ratio of male variance to female variance in math performance tends to be ~1.08 on multiple exams of middle and high school students. Even if one needs to be 5 or more standard deviations above the mean (i.e., the 1 in 2 million level) to become a tenured research math professor at Harvard, one would expect ~27% of their professors to be female if this were the only reason for their scarcity. Harvard hired their very first one in 2006! Furthermore, in some countries such as the Czech Republic, the distributions of boys’ and girls’ math scores are essentially identical, centered around a peak similar to the world benchmark mean. Doesn’t that suggest that greater male variance among whites, when observed, is at least partially due to culture, not solely innate differences between the sexes?
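
        For readers who want to check the ~27% figure, here is a quick sketch of the tail arithmetic (a reconstruction, not code from the Notices article): assume equal means, normalize the female SD to 1 so the male SD is √1.08, and compare the two normal tail probabilities beyond 5 SDs.

```python
# Sketch of the tail calculation behind the ~27% figure (a reconstruction,
# not code from the Notices article). Assumes equal means, a 50/50
# population, and a male/female variance ratio of 1.08.
import math

def normal_sf(x, sigma=1.0):
    """P(X > x) for a Normal(0, sigma^2) variable, via the complementary error function."""
    return 0.5 * math.erfc(x / (sigma * math.sqrt(2.0)))

variance_ratio = 1.08                    # male variance / female variance
sigma_male = math.sqrt(variance_ratio)   # female SD normalized to 1
cutoff = 5.0                             # "5 or more SDs above the mean"

female_tail = normal_sf(cutoff, 1.0)
male_tail = normal_sf(cutoff, sigma_male)
frac_female = female_tail / (female_tail + male_tail)  # ~27-28%

# Overall rarity of the cutoff, averaged over both sexes: roughly 1 in 2 million.
rarity = 1.0 / (0.5 * (female_tail + male_tail))

print(f"fraction female above cutoff: {frac_female:.3f}")
print(f"overall rarity: 1 in {rarity:,.0f}")
```

        Under these assumptions the calculation gives roughly 27-28% female above the cutoff, consistent with the figure quoted above; the point is that a 1.08 variance ratio alone is nowhere near enough to explain near-zero representation.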

        Lastly, I can’t take responsibility for the titles or content of news reports related to my publications in this field. Reporters are out to sell newspapers and have limited space for their stories and very limited time in which to research and write their stories. Their stories frequently contain distortions and errors.

        • Rahul says:

          “Doesn’t that suggest that greater male variance among whites, when observed, is at least partially due to culture, not solely innate differences between the sexes?”

          Of course it does! Where was Summers saying otherwise? Wasn’t that his #3 and possibly #1 too?!

          The real question is whether gender differences in “intrinsic aptitude” are entirely absent or not? And assuming such differences do exist, what fraction of the current gender disparity is explained by intrinsic aptitude versus social factors?

          Does your work answer this? Quantitatively, do we have a guess as to how important nature is versus nurture?

          • “The real question is whether gender differences in “intrinsic aptitude” are entirely absent or not?”

            No, that’s not the “real question”. It’s apparently important to you, and it’s important to many on the other side of the argument who want to deny it entirely, but it’s independent of either Summers’s or Mertz’s arguments.

            As Mertz wrote, Summers asserted that inherent differences are more influential than everything else. This is what Mertz contested. And the subtext of Mertz’s comment is that given all the structural/cultural/socialization barriers to women in these fields, which everyone including you and Summers agree exist, and given all the attestations by women of the intensity of these barriers in these fields, then defending the assertion that the primary factor in gender disparity in achievement is inherent is … suspicious and risible.

            Furthermore, your insistence on converting Mertz’s argument and the argument here into a denial that there are inherent difference at all is also … suspicious.

            My alma mater was St. John’s College, the “Great Books” school. When I was there twenty years ago, my entering class was lily-white, with almost no minority students at all. Of the 125 or so in the class, there were a few Hispanics and exactly one black person. SJC’s admission criteria are very idiosyncratic and subjective; the school doesn’t weight standardized test scores particularly heavily, and it practices no form of affirmative action. I prefer those policies for the school, but I feel strongly that minorities are badly under-represented. I argued then, and continue to argue, that SJC can stay true to its values while increasing minority representation by making an affirmative effort to aggressively recruit minority students. Here’s the point of my story: a friend of mine from college, who self-identified as “progressive,” disagreed with my argument because, as he put it, “those people don’t want to go to St. John’s College.” I was flabbergasted (actually, I became very angry).

            I don’t really understand what was going on in his head, but I do know that his defensiveness (he was defensive) and his argument and the context all quite perfectly exemplify the instinctive protection of privilege. Basically, the argument is that if the school is all white people, then of course it’s because “those people” (non-whites) wouldn’t want to attend anyway. And, you know, there’s no doubt some truth to that assertion (because of the “dead white guy” nature of the curriculum). But our culture has a long history of discriminating against racial minorities, particularly with regard to elite, private colleges, and even at the far more integrated ones there’s a socialization factor whereby talented minority students are far less likely to even consider such schools (numerous studies show this). In that context, it’s absurd to point to some sort of inherent difference whereby the status quo is “natural” and “justified” and to dismiss these social factors as being less important. That this is the first and favored explanation for these things says more about those who favor it than it does about reality.

        • Rahul says:

          I’ll add two more points:

          (1) Besides aptitude, isn’t interest a separate dimension? E.g., I might have the aptitude to be a piano tuner but no interest in it. Do studies measure that dimension? Could “interest” be innate?

          (2) How valid is it to extrapolate from your (admittedly large) general-population sample to the super-high-achiever cohort that’s likely to become, say, math professors at Harvard? Wouldn’t a TIMSS or PISA sample contain too few such high-end candidates to make much of a difference to your measured variance? Won’t your noise (of mediocrity) swamp out most potential signal of genius (irrespective of sex)?

          Isn’t that a bit like rating German-vs-American high-end racing cars by surveying performance at thousands of retail-car-dealership lots?

          • Jim says:

            (2) How valid is it to extrapolate from your (admittedly large) general-population sample to the super-high-achiever cohort that’s likely to become, say, math professors at Harvard? Wouldn’t a TIMSS or PISA sample contain too few such high-end candidates to make much of a difference to your measured variance? Won’t your noise (of mediocrity) swamp out most potential signal of genius (irrespective of sex)?

            Yeah, I think this is a problem. This study showed that the gender gap “widens dramatically at percentiles above those that can be examined using standard data sources”. Among their findings were the following:

            [T]he AMC curve turns sharply downward at the percentiles above those that the SAT can measure. The male-female ratio reaches 6.2 to 1 for students in the 99th percentile of the AMC population (1,213 students with scores of 114 or higher) and 12 to 1 in the 99.9th AMC percentile (116 students with scores of 135 or higher). The top 36 scorers were all male.

            They argue that this cannot be due to fewer extremely high-ability girls than boys taking the AMC:

            At higher score ranges, however, it becomes increasingly implausible that gender related selection into taking the AMC could account for much of the effect: why would girls capable of scoring 130 or higher be only one-quarter as likely as boys who would do this well to take the test? Indeed, the knowledge and problem-solving skills needed to get a 130 are sufficiently high so that we feel that almost all students (male or female) who have acquired such skills are probably taking the AMC 12, making gender-related selection nonexistent (for measurements of what the gender composition of high scorers would be under universal administration).

            And if someone thinks that high scores on aptitude tests aren’t predictive of STEM careers, this study found that among young adolescents in the top 1% of quantitative reasoning ability, those in the top quartile were 18 times more likely to later get a STEM doctorate and almost eight times more likely to get tenure in a STEM field than those in the bottom quartile of that one percent.

            • Janet Mertz says:

              Jim cites an Ellison study that states, “[T]he AMC curve turns sharply downward at the percentiles above those that the SAT can measure. The male-female ratio reaches 6.2 to 1 for students in the 99th percentile of the AMC population (1,213 students with scores of 114 or higher) and 12 to 1 in the 99.9th AMC percentile (116 students with scores of 135 or higher). The top 36 scorers were all male.”

              This graph was presented in a very strange way that highly accentuated the perfect-score (150) data point, which just happened to have 0 girls out of 36 students on this particular test, and that included these same 36 boys cumulatively in each of the other nearby data points. On this exam, 8% of the 116 students who scored at the 99.9th AMC percentile among test takers (i.e., with scores of 135 or higher) were female; 14% were female at the 99th AMC percentile. It is only the top 36 US scorers who all happened to be male ON THIS PARTICULAR AMC12. There have been females in other years and other countries who have achieved a perfect 150 on the AMC12. For example, in the 2009 AMC12A data, 20% (1 out of 5) of the students achieving 150 were female. If one were to plot these 2009 data as was done in Figure 3, the right extreme end of the curve would rise up to 20% instead of crashing down to zero, leaving naïve readers with a very different impression due to a single perfect-scoring female. Furthermore, most of these perfect-scoring students are East Asians, e.g., Koreans and Taiwanese, not US students. The number of US students scoring in the 140s is so tiny that the statistical fluctuation in percent female becomes large at the extreme right end of the graph, as can be seen in Figure 4. Including these 36 males in every point on the graph, rather than calculating % female for each AMC12 score, made the drop-off appear much greater than it actually was.

              “They argue that this cannot be due to fewer extremely high-ability girls than boys taking the AMC”

              Nonsense. “Extreme high ability” isn’t sufficient. One also has to acquire the knowledge to excel on this exam. ALL math exams test knowledge, not just innate ability. Where does this knowledge come from to excel on the AMC12, AIME, USAMO, and IMO? It’s not taught in standard neighborhood public school math classes, especially in the US. Most of the kids who score above 130 on this exam have learned to excel on it via participation in extracurricular activities that teach these skills, e.g., summer math camps, online classes offered through the Art of Problem Solving. I’ve analyzed the gender and ethnicity of the students who attended one such camp: 5-10% of the kids were female and ~ 70% were Asian-Americans and Asians. Thus, it is not at all surprising that most of the highest scorers are male and a majority are ethnic Asian. Why aren’t more white boys attending these camps? It is likely for the same reason most US girls wouldn’t be caught dead in one of them, i.e., they would be socially ostracized by their peers for spending their free time that way. US culture accepts a few boys as being “math nerds” in middle and high school, especially if they are ethnic Asian; what happens socially to girls who are math nerds?

              My point is that we can’t separate nature from nurture to know how much of the scarcity of female (or, even, white males for that matter) among the kids who excel at the highest level in these difficult exams is due to a variety of sociocultural factors versus innate ability at math. While US culture believes the top math kids are naturally gifted in math, many Asian cultures believe the top math kids got there in large part through effort and hard work. That is likely part of the reason many Asian kids attend math camps while white kids are much more likely to attend sports camps, instead.

              • Rahul says:

                US culture accepts a few boys as being “math nerds” in middle and high school, especially if they are ethnic Asian; what happens socially to girls who are math nerds?

                What do you think should be done about this?

              • Jim says:

                There have been females in other years and other countries who have achieved a perfect 150 on the AMC12. For example, the 2009 AMC12A data had 20% females (1 out of 5 students) achieving 150.

                Sure, but is the percentage of female perfect scorers closer to 0 than to 20 in a typical year? I’d guess the former. More importantly, if female representation in the 99th or 99.9th percentile of the AMC12 population is typically similar to the numbers reported by Ellison and Swanson, it suggests that your variance ratio method severely overestimates the number of females with exceptional math skills. You estimated that even among those with math skills five SDs above the mean, 27 percent are women. However, in the 99th AMC12 percentile, which is certainly not five SDs above the mean, only 14 percent are female, and the percentage gets progressively lower in higher percentiles. If we regard these high AMC scorers as prime candidates for math-heavy academic careers, female underrepresentation in such jobs is unsurprising.

                The actual distributions of high-level mathematical problem solving skills are heavily skewed by gender. Factoring in motivational variables (which are probably rather heritable, too), there’s not much left to be explained by things like discrimination by university professors.

              • nb says:

                Jim stated, “there’s not much left to be explained by things like discrimination by university professors.”

                There’s quite a bit left. How do you explain that from 1977 to 1990, the East German IMO team had 5 girls (at least one of whom was a gold medalist), while the West German IMO team had none? This is a significant disparity between two nations whose populations were of the same ethnicity, and it suggests the significance of sociocultural factors in the absence of girls on the West German IMO team; yet you seem quick to attribute the results on the AMC to mostly heritable factors.

                You’re looking at the results of an exam administered in the US. The US IMO team did not have a female member until 1998; i.e., for over two decades there were zero girls on the US IMO team, even though the USSR had had female gold medalists at the IMO in 1962, 1976, 1985, etc. The UK first had a female IMO team member in 1983, 15 years before the US. The US did not have a female gold medalist until 2004, while the UK had had 2 female gold medalists at least 10 years prior. These results show there is a lot to be explained by sociocultural factors at the very highest level of math problem-solving performance in the world.

                See Table 5 here:
                http://www.ams.org/notices/200810/fea-gallian.pdf
                and Table 3 here:
                http://www.pnas.org/content/106/22/8801.full.pdf

              • Jim says:

                That does not contradict what I said. Regardless of country, IMO participants and winners are overwhelmingly male, so the existing cultural variation can be only a minor cause of gender differences.

              • Jim says:

                Just to be clear, I am not making any strong claims about the extent that gender differences such as those we’ve been discussing are heritable. We don’t know enough about this.

                Another thing is that the existence of large gender differences in some behavior does not necessarily mean that there are discriminatory mechanisms that disproportionately affect one gender. Sometimes equal opportunity may serve to increase gender differences by freeing people to pursue things that best match their interests and abilities. For example, Scandinavian countries are some of the most egalitarian in the world, but their labor markets are highly segregated by gender, more so than in, say, many Southern European countries.

              • nb says:

                Jim, you are once again quick to attribute disparities in female and male math performance at the highest levels to primarily non-sociocultural factors. Noting that IMO participants are overwhelmingly male, you argue that cultural variation is a minor cause of gender differences. I think the various data points I mentioned above indicate that cultural variation is a significant cause of gender differences. Girls represented over 7% of East German IMO participants from 1977 to 1990, while there were no girls on the West German IMO team. I guess to you this is a minor disparity.

                The point is that females are affected by sociocultural factors in every culture, so it’s virtually impossible to do a controlled study that establishes how significant sociocultural factors are vs innate ability. However, it must be noted that the 7% representation of girls on the East German IMO team does not represent the optimal raw performance of girls, stripped of the impact of sociocultural factors; it merely indicates that the significant disparity between 0% and 7% is primarily due to sociocultural factors. In fact, girls represented 20% of USSR/Russian IMO participants from 1988 to 1997.

            • Janet Mertz says:

              Jim,

              Just like the quantitative SAT, the AMC12 exam also measures speed with accuracy, just with harder problems, some of which require knowledge of mathematics not taught in US high schools. Most of the kids with perfect and near-perfect scores on the AMC12 are East Asians being educated in East Asia, not US students. Even my son, a 2x IMO gold medalist and 4x Putnam Fellow who achieved a perfect USAMO score one year, never managed a perfect score on the AMC12 because he always made at least 1 stupid error while racing to complete the 25-problem exam in 75 minutes.

              There is a phenomenon called “stereotype threat” that leads girls (and African-Americans) to tend to underperform on these types of speed tests because of increased stress and, thus, more difficulty focusing, simply due to knowing that girls are assumed to perform worse on these tests. One of the best studies of this phenomenon involved Asian-American girls: when given background questions before the exam reminding them of their gender, they did significantly worse than when given questions reminding them of their ethnicity, instead. For this reason, the AMC stopped asking students to indicate their gender until after they had completed the exam. Unfortunately, many students then failed to provide their gender, making the gender-related data from these exams much less dependable since 2010.

              I have gone back to the raw data from the 2006-2008 AMC12s, where gender was asked up front before taking the exam, i.e., stereotype threat might be affecting girls’ scores, but essentially all students reported their gender. Strangely, not only did Ellison & Swanson use the data from the year with the lowest % girls among the very top scorers, but their reported data seem to be off somewhat. For 2007, I find 0 girls out of 24 (not 36) students scoring a perfect 150; 2 girls out of 71 students (3%) scoring 144 (1 missed problem) or above; 9 girls out of 132 students (6.4%) scoring 138 (2 missed problems) or above; and 28 girls out of 271 students (9.4%) scoring 132 (3 missed problems) or above.

              The same data for 2006 were: 0% girls out of 17 students with 150; 5% girls out of 41 students with 144 or above; 8.3% girls out of 108 students with 138 or above; and 16% girls out of 306 students scoring 132 or above.

              For 2008, the data yield: 5% girls out of 22 students with 150; 4% girls out of 51 students with 144 or above; 12% girls out of 83 students with 138 or above; and 11% girls out of 199 students with 132 or above.
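
              The distortion from plotting cumulative counts rather than per-score bands can be seen directly in the 2007 numbers above. A short script (my own illustration, using only the counts reported in this comment) converts the cumulative tallies into per-band percentages:

```python
# Cumulative (girls, total) counts at each 2007 AMC12 score cutoff,
# as reported above: 150 exactly; 144 or above; 138 or above; 132 or above.
cumulative = {150: (0, 24), 144: (2, 71), 138: (9, 132), 132: (28, 271)}

pct_cumulative = {}  # % girls at or above each cutoff (how the figure was drawn)
pct_band = {}        # % girls within each score band alone
prev_g = prev_t = 0
for cutoff in sorted(cumulative, reverse=True):
    girls, total = cumulative[cutoff]
    pct_cumulative[cutoff] = 100.0 * girls / total
    # Subtract the higher bands to get the counts for this band only.
    pct_band[cutoff] = 100.0 * (girls - prev_g) / (total - prev_t)
    prev_g, prev_t = girls, total
    print(f"{cutoff}: cumulative {pct_cumulative[cutoff]:.1f}%, "
          f"band alone {pct_band[cutoff]:.1f}%")
```

              Because the 24 all-male perfect scorers are folded into every cumulative point, the cumulative percentages sit well below the per-band ones at every cutoff, which is exactly the drop-off-exaggerating effect of including the top scorers in each nearby data point.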

              Folks can interpret these data in various ways. As I have already stated, these exams test math knowledge as well as ability; if fewer girls than boys bother to acquire the advanced knowledge due to either interest or sociocultural reasons, fewer girls than boys will excel at the highest levels in it. One can’t use ANY math exam to purely determine innate ability because all of them require acquisition of knowledge as well as ability.

              I’m not going to take the time to respond to Camilla Benbow’s article. She is the one who continues to talk about the 13:1 ratio of boys to girls in her talent search data from the 1970s even though the ratio has been 2-3:1 since the early 1990s. There are lots of problems with much of her work and most folks in the field disagree with her.

              • nb says:

                Here’s a critique of Camilla Benbow’s work:
                http://www.awm-math.org/benbow_petition/background.html

              • Jim says:

                There is a phenomenon called “stereotype threat” that leads to girls (and African-Americans) tending to underperform on these types of speed tests because of increased stress and, thus, more difficulty focusing simply due to knowing that girls are assumed to perform worse on these tests.

                The stereotype threat can often but not always be demonstrated in lab experiments, but I’m skeptical that it could have any meaningful influence on performance on high-stakes tests. See this paper for a discussion of common misunderstandings of the effect. This recent study found no evidence for stereotype threat effects on girls’ math performance.

                The AMC12 data you cite indicate that Ellison & Swanson’s analysis is accurate. Boys with high-level math skills vastly outnumber girls with similar skills. This is true internationally, as your own IMO research shows. The relative contributions of nature and nurture to this state of affairs unfortunately cannot be investigated using your methods.

                There are lots of problems with much of [Benbow's] work and most folks in the field disagree with her.

                Well, the same can be said of your work. For example, why do you use the variance ratio method when you know full well that it produces highly misleading results?

                I cited Benbow’s article to prove that cognitive ability, as measured by standardized tests, is a good predictor of success in STEM fields. I did not cite it to make any point about gender differences as such. Moreover, I do not know of any significant criticisms of the very important SMPY results of Benbow et al., so I call BS on your claims about her.

                I don’t doubt that the gender gap in the SAT has narrowed due to girls taking more math classes, but I’ve long wondered if all the narrowing is due to this. The SAT has been tweaked many times. For example, items that show large group differences, including gender differences, have been eliminated.

              • nb says:

                The stereotype threat can often but not always be demonstrated in lab experiments, but I’m skeptical that it could have any meaningful influence on performance on high-stakes tests. See this paper for a discussion of common misunderstandings of the effect. This recent study found no evidence for stereotype threat effects on girls’ math performance.

                The first study is about how popular media misrepresents the results of stereotype threat studies. This is not surprising, as scientific reporting in the lay press is often poor. It is also not relevant to the present discussion. The second paper comes to a more measured conclusion than you: “we feel that more nuanced research needs to be done to truly understand whether stereotype threat impacts girls’ mathematics performance.”

                Well, the same can be said of your work.
                I posted a critique of Camilla Benbow’s work. Have any experts posted critiques of Prof. Mertz’s work?

              • nb says:

                oops, my last comment was worded poorly. I did not mean to suggest that I am an expert who posted a critique of Prof. Benbow’s work – I posted a link to an expert’s critique.

              • Jim says:

                Every prominent researcher will have their critics, but what I reacted to was Mertz’s sweeping claim that there “are lots of problems with much of [Benbow's] work and most folks in the field disagree with her”. This is a calumny against a prolific and highly cited researcher, not a scholarly criticism. Benbow and her colleagues’ research on the causes of high-level achievement is top-notch.

          • Jim says:

            Another problem in predicting right-tail distributions from variances in the TIMSS or PISA is that low-ability individuals tend to be disproportionately male, and if such people are excluded from the test it will reduce male variance more.

            • Janet Mertz says:

              Jim,

              I agree that standardized tests are not a great method to use for looking at the extreme right tail of a distribution since they do a lousy job of distinguishing the 99.99%ile kids from the 99%ile ones. That is why my 2008 article looked at data from the IMO, instead. However, one can’t measure variance from very high-end exams that are only taken by a tiny percent of a population, a self-selected one at that. Thus, one has no choice but to measure variances using standardized exams such as the TIMSS, PISA, and SAT taken by most students or a random sampling of students.

              In the Czech Republic data shown in my 2012 article, we looked at the entire distribution of TIMSS scores obtained by their 8th grade boys and girls; we did not simply calculate a variance ratio. The two distributions look essentially coincident. In countries such as Bahrain, the variance ratio was ~1.5; however, that was due to there being lots of boys scoring at the very low end (e.g., 2 standard deviations below the mean!), not because there were more of them at the high end of the distribution. Possibly, these low-scoring boys were not dumb but, instead, attending religious schools that taught little mathematics.

              There are lots of different sociocultural reasons the measured variance ratios in math performance range all the way from 0.9 to 1.5 among countries, with most countries having fairly stable measured variance ratios in math performance from year to year as well as between the TIMSS and PISA. For example, the variance ratio in the US is consistently ~ 1.08, a greater male variance way too small to account for the scarcity of women tenured faculty in top-ranked US math departments.
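              As a rough illustration (assuming normal score distributions with equal means for both sexes, which is a simplification), one can compute how many males per female a given variance ratio would predict beyond a high cutoff:

```python
from math import erfc, sqrt

def upper_tail(z):
    # P(Z > z) for a standard normal, via the complementary error function
    return 0.5 * erfc(z / sqrt(2))

# Sketch under simplifying assumptions: equal means, female SD = 1,
# male SD = sqrt(variance ratio). The cutoff is in female-SD units.
def males_per_female(variance_ratio, cutoff):
    male_sd = sqrt(variance_ratio)
    return upper_tail(cutoff / male_sd) / upper_tail(cutoff)

for vr in (1.08, 1.5):
    for cutoff in (3, 4):
        print(f"variance ratio {vr}, cutoff {cutoff} SD: "
              f"{males_per_female(vr, cutoff):.1f} males per female")
```

              Under these assumptions, a variance ratio of 1.08 predicts only about 1.4 males per female at 3 SD and about 1.9 at 4 SD, far short of the lopsided ratios observed among tenured faculty in top-ranked math departments; a ratio of 1.5 predicts roughly 5 and 17, respectively.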

              Jim also says, “Benbow and her colleagues’ research on the causes of high-level achievement is top-notch.”
              Benbow has centered her entire career around a single, long-term study, the Study of Mathematically Precocious Youth (SMPY). Her primary research publications are various analyses of data from this one ongoing study. Her secondary publications are largely review articles in which she proclaims her interpretation of her data over and over again in numerous venues. Ceci & Williams largely agree with her and her colleagues’ interpretation of these data and of their own data. Most of the other folks I know in this field, including many female research mathematicians, strongly disagree with their interpretation of their data.

              The primary scientific problem with the Benbow, Ceci et al. data is “self-selection bias”. In other words, for the SMPY data, only children with the interest and opportunity to accelerate 3 or more years in mathematics by age 13 years who also knew about this talent search were picked up in this study as mathematically gifted. This is a tiny, non-random percentage of the kids with the innate ability to achieve 700 or more on the quantitative section of the SAT by age 13 years. Never picked up are very bright, socioeconomically underprivileged kids who were denied the opportunity to accelerate in math, most under-represented minorities and girls who would have been socially ostracized if they had accelerated in math, and, even, most white boys who didn’t want to be viewed by their peers as math nerds.

              In addition to this HUGE self-selection bias, the SMPY also suffers from using the SAT rather than some harder math test such as the AMC12 or AIME. The quantitative section of the SAT is a fairly trivial exam for bright kids. All of Benbow’s work is based upon the assumption that the SAT can be used to distinguish the 99%ile kid from the 99.99%ile one if given prior to age 13 years. This assumption is false. I’d bet that many 99%ile kids with the motivation and opportunity to accelerate in math by 3 years prior to age 13 years could achieve a 700 on this exam and qualify for the SMPY. This is likely why the number of qualifiers has skyrocketed in recent years, with a majority of them being Asian-Americans. Benbow’s published data may be fine, but many, many folks strongly disagree with her as to how they should be interpreted. Ceci & Williams’s articles also suffer from misinterpreting their data given this self-selection bias. Cathy Kessel has recently published articles in which she clearly explains alternative interpretations of their data, one of which N.B. has cited on this blog. I suggest you read some of Kessel’s articles to see the opposing viewpoint.

              • Jim says:

                In the Czech Republic data shown in my 2012 article, we looked at the entire distribution of TIMSS scores obtained by their 8th grade boys and girls; we did not simply calculate a variance ratio. The two distributions look essentially coincident.

                I decided to take a quick look at the Czech data using the International Data Explorer. I looked at the percentages of eighth-grade boys and girls at the highest TIMSS ‘benchmark’ level, which is around the 90th percentile (depending on year). In 2007, there was indeed no significant difference (p>0.05) between the percentages of boys and girls at the highest level. But is this a robust finding across years? There are Czech data from 1995 and 1999, and in both years the male percentage at the highest benchmark level was higher (p<0.05). See here for details.

                But perhaps the Czechs have reached gender parity only recently, explaining why the 90s results are different. The PISA math test can be used to investigate this. I looked at the percentages of 15-year-old boys and girls at the highest PISA proficiency level, which is around the 95th percentile. The results are here. In each year (2003, 2006, 2009) there are significantly (p<0.05) more boys than girls at the highest proficiency level, with ratios ranging from 1.4 to 2.
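                The kind of benchmark comparison just described can be sketched with a textbook two-proportion z-test. Note that this is only an approximation: the official TIMSS/PISA analyses use survey weights and replication-based standard errors, and the counts below are hypothetical rather than the actual Czech numbers.

```python
from math import erfc, sqrt

def two_prop_z(x1, n1, x2, n2):
    """Two-sided two-proportion z-test under simple random sampling.
    Returns (z, p): are the two sample proportions significantly different?"""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)            # pooled proportion under H0
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = erfc(abs(z) / sqrt(2))          # equals 2 * P(Z > |z|)
    return z, p_value

# Hypothetical counts (NOT the actual TIMSS data): 60 of 500 boys and
# 40 of 500 girls reach the top benchmark.
z, p = two_prop_z(60, 500, 40, 500)
print(f"z = {z:.2f}, p = {p:.3f}")            # here p < 0.05
```

                With these made-up counts the boys' 12% vs. the girls' 8% is just significant at the 0.05 level; survey-design corrections would typically widen the standard error somewhat.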

                These international tests lack difficult items, their highest proficiency levels are not terribly high, and at age 15 sex differences may not have yet fully manifested themselves. Yet these results generally show that males are overrepresented in the right tail of the math skills distribution, suggesting that there's no gender parity in the Czech Republic, either. I don't view these results as definitive, but they certainly don't support your analysis.

                For example, the variance ratio in the US is consistently ~ 1.08, a greater male variance way too small to account for the scarcity of women tenured faculty in top-ranked US math departments.

                According to your article, it’s actually 1.08-1.19 for the TIMSS and the PISA. I looked at the Lindberg et al. study from which the figure of 1.08 comes, but unfortunately they are very laconic about their variance ratio meta-analysis (about moderators etc.). In any case, we’ve established that the variance ratio is a very poor predictor of the actual gender distribution of those with very high-level math skills.

                Benbow has centered her entire career around a single, long-term study, the Study of Mathematically Precocious Youth (SMPY). Her primary research publications are various analyses of data from this one ongoing study.

                So? It’s a great study whose results have shattered many myths about high achievement. Many relevant questions cannot be answered without using a longitudinal design.

                One of the most important results of the SMPY is that cognitive ability, even when measured at an early age, does not show diminishing returns as a predictor of intellectual and creative achievements. For example, those who score in the 99.9th percentile in adolescence really will achieve more, on the average, as adults than those who score in the 99th percentile — and this is not a small effect. This cannot be a selection effect, because it reflects variation in future achievement within the selected sample. Moreover, these results show, contrary to your claims, that SAT scores are valid for measuring differences among high-ability adolescents.

                Your different take on the SMPY may be due to your concerns about unequal representation of different genders or races in various fields. I don’t think such unequal representation is a social problem of any great significance, and in any case the SMPY’s most interesting findings are orthogonal to such concerns.

              • Eli Rabett says:

                Janet Mertz writes “only children with the interest and opportunity to accelerate 3 or more years in mathematics by age 13 years who also knew about this talent search were picked up in this study as mathematically gifted”

                This discussion and the previous ones here (thanks for the posts, Prof. Gelman!) concentrate on the kids and their contemporaries, not the parents. Arguably the desire of the parents to push their kids to such high levels of achievement is key, because it requires a great deal of time and effort on their part. One sees the same in many sports. For example, to become a swimming or skating champion requires that your parents be willing to get up at ungodly hours of the morning for many years and, as the child rises in the rankings, bear considerable expense. From this point of view the success of those of Asian and Jewish descent in the US mathematics community is not so surprising. In the latter case one could probably tease out differences due to the Russian immigration bubble of the 1980s and 1990s.

        • Posting while high says:

          Professor Mertz,

          In one of Professor Gelman’s textbooks, he uses the example of the greater variability of the distribution of male heights, compared to female heights.

          National heights change over time with nutrition, disease, and other factors, but at any given time most of the variation among individuals can be ascribed to genetic differences in heritability studies. There is a general trend towards increased male variance, but this varies from country to country, e.g. if women are more likely to be deprived of food when conditions are bad, as in India, then this will increase their variance. In many ways, the pattern of variation in height resembles that in measures of mental ability and mathematical skills, except that measurement is easier to do accurately.

          Would you therefore expect that sex differences in the variability of height will turn out to be primarily cultural rather than biological?

        • Bud Wiser says:

          Janet

          Thanks for the link. I took a couple bias tests. I’m glad to see I scored neutral on the tests I took. I registered and will continue to participate.

          Unfortunately, I feel you’ve been made a victim of sorts of what I view as an “Unz Experiment”. He’s a bright guy…but I’m afraid he’s a victim of his own intelligence. Whether he is conscious of it or not, I consider his articles to be a form of race/gender baiting.

          There’s a fine line between genius and insanity.

  3. Withywindle says:

    I posted a comment to Mr. Unz’s site that hasn’t escaped moderation yet; I thought I’d put a version of it here.

    I do think that a major problem in this debate is that the Ivy League admissions offices don’t provide sufficient data. Let us grant for the sake of argument that every methodological criticism made of Mr. Unz is correct; I think part of the trouble is that he is trying to make an analysis working around data the admissions offices don’t release. I think a good part of this debate could be rendered moot if the Ivy League schools provided the necessary data for a proper statistical study.

    This now counts as my own idee fixe, I suppose, but here is what I would love to see: that Mr. Unz, Prof. Mertz, and Prof. Gelman together draft and sign an open letter, with language mutually acceptable to all three, and send it to the various Ivy League admissions offices requesting a release of the relevant data–suitably anonymized for privacy–needed for a statistical study done according to the standards stipulated by NB, Prof. Mertz, and Prof. Gelman. I would also urge Mr. Unz to publicize this letter to the best of his ability.

    I suspect that Profs. Mertz and Gelman think this may be something of a non-issue, and therefore not a priority. Still, would it hurt to sign a letter requesting that the data be made available, again with suitable safeguards for privacy? If Profs. Mertz and Gelman find this idea interesting, maybe they could suggest it to Mr. Unz. In an ideal world, this might make it possible for Mr. Unz and Profs. Mertz and Gelman to make their arguments based on a much superior data sample. And it would have the pleasant side effect of having the disputants join forces on at least one narrowly defined issue.

    (This would still leave the question of how to count who is Jewish, Asian, etc., but I am presuming that Profs. Mertz and Gelman could come up with a method that they believe would satisfy their professional standards.)

  4. Janet Mertz says:

    I said, “US culture accepts a few boys as being “math nerds” in middle and high school, especially if they are ethnic Asian; what happens socially to girls who are math nerds?”

    Rahul responded, “What do you think should be done about this?”

    US culture needs to change. In most Asian countries, when a high school kid achieves a gold medal at one of the International Olympiads such as the IMO, it is front-page national news, with these kids being celebrated similarly to their countrymen who achieve medals at the sports Olympics. The first time my older son and another kid from our city both received gold medals at the IMO as members of the US team, our local newspaper reported it as a tiny 3-paragraph story buried in the middle of the local section; there were NO reports of this achievement in the national newspapers or TV news. When he achieved Putnam Fellow, there was only coverage of this happening within the math community and college newspapers.

    On the other hand, just about every local high school-level sports event gets coverage by both the local newspapers and local TV news stations; the regional and state championship matches get front-page coverage with photos of the winners and full-page stories. Just look at the news coverage of “March Madness” and football bowl games at the college level.

    The school districts include in their budgets lots of $s for sports coaches, sports equipment, and transportation to meets. Their budgets typically include $0 for math team coaches (i.e., they need to use unpaid volunteers) and very little money for transportation to math meets or math test competitions. My local high school even limits the number of students allowed to take the AMC12 exam to ~30/year because of cost (i.e., ~$5 per exam). The kids see what our society appears to value. And then we wonder why so few US students major in STEM fields, and US employers need to recruit them from other countries!

    • Rahul says:

      Agreed. No arguments about that.

      My question was more towards the “fewer girls in math” aspect. How would you fix that?

  5. Janet Mertz says:

    Jim says, “One of the most important results of the SMPY is that cognitive ability, even when measured at an early age, does not show diminishing returns as a predictor of intellectual and creative achievements. For example, those who score in the 99.9th percentile in adolescence really will achieve more, on the average, as adults than those who score in the 99th percentile — and this is not a small effect. This cannot be a selection effect, because it reflects variation in future achievement within the selected sample. Moreover, these results show, contrary to your claims, that SAT scores are valid for measuring differences among high-ability adolescents.”

    You (and Benbow) are misinterpreting what one can conclude from these data. There absolutely IS a very strong selection effect. Yes, Benbow shows that a fairly strong positive correlation exists between the kids she identified as mathematically advanced by age 13 years and career outcome. However, numerous others (including many economists) have shown that very strong correlations exist among the socioeconomic environment (including educational attainment of their parents/guardians) in which children are raised, their scores on the PSAT and SAT when taken in high school, their educational attainment, and their career outcomes. Benbow is simply finding that these correlations apply as well to the SAT when taken in middle school. The SMPY really is only identifying the small subset of kids who are gifted in mathematics who also have the motivation and socioeconomic advantages that enable them to score 700 or more on the quantitative section of the SAT by age 13 years. Very few US kids have this opportunity no matter how gifted they are in math. For social reasons, fewer girls have this opportunity than boys and fewer whites and under-represented minorities have it than Asian-Americans. Thus, more boys than girls and more Asian-Americans than other ethnic groups are identified by this measure. Her data tell us nothing about the real ratio of boys:girls or Asians:whites with very high innate ability in mathematics because of this selection bias.

    For example, my older son had this opportunity because he grew up in a household with a parent who had a Ph.D. in mathematics AND parents who could afford to send him to a private school that allowed us to “home school” him in mathematics so he could progress in it as fast as he desired. My younger son attended, instead, our local public school where it was essentially impossible both for social reasons and logistical reasons for him to accelerate more than 2 years in mathematics; thus, he would not have scored a 700 on the SAT prior to age 13 years because he had not yet studied geometry in school and had, for social reasons, no interest in doing so extracurricularly. If it weren’t for these barriers, I have no doubt that he, too, could have achieved a 700 by age 13 years; he readily achieved a 5 on the Calculus BC exam in 12th grade.

    In addition, the SAT is NOT an aptitude test as originally claimed. Rather, it simply measures academic achievement. It contains essentially no very difficult questions. Kids who are truly innately gifted in mathematics at a very high level have an intuitive feel for mathematics; they can solve problems that nobody has ever taught them how to do. For example, at age 6 years, my older son already solved a long division problem using his own invented method and proved that the sum of ANY 2 odd numbers is an even number with nobody having taught him about unknowns and formal proofs. The SAT does not examine such skills. Mathematically highly gifted kids can’t demonstrate their innate ability on the SAT, which is a speed-with-accuracy multiple choice test. That is why several % of high school kids achieve a perfect score on this exam. The SAT really can’t be used to distinguish between the kids with math aptitude at the 99%ile vs. 99.9%ile and higher no matter the age at which it is administered; rather, it simply identifies a small subset of the bright, socioeconomically privileged ones. This is a fundamental flaw with Benbow’s study that can’t be fixed.

    • Jim says:

      I already pointed out why your argument is wrong/irrelevant. SAT scores measured at age 13 or earlier are valid predictors of achievement among those in the SMPY sample. Differences between the 99th and 99.9th percentiles on the SAT do reflect real variation in capacity for high-level work even decades later. And like I said, I don’t view the unequal representation of different genders or races in various academic fields as a particularly important problem, and what is most interesting about these results is unrelated to any bean-counting focused on gender or race.

      Perhaps Benbow and colleagues would have found even higher predictive validity for early quantitative or other abilities had they used the measures you favor, but that’s neither here nor there. Using the SAT, they found plenty.

  6. Janet Mertz says:

    Have you taken a look at the quantitative section of the old SAT (prior to their making it a little harder recently)? It was a VERY easy exam, testing only standard K-9th grade math and the easiest parts of geometry. It really can’t identify who is innately gifted in mathematics and was never designed to do so. Achieving a 700 on it at a young age simply says one is a bright, socioeconomically privileged kid with access to a high-quality math education that lets you skip over highly redundant US middle school math. The SMPY is measuring privilege (which highly correlates with career outcome), not innate ability.

    To give you one clear example of how privilege alone can lead to a brilliant career outcome in the US, just consider the case of George W. Bush: he was a mediocre high school student, yet got admitted to Yale College; he was a mediocre student at Yale, yet got admitted to Harvard Business School; he was a partying alcoholic until age 40 years whose only successful businesses were funded and largely run by others, yet became President of the US; he was a VERY mediocre President, yet got reelected to a 2nd term!

    The AIME and USAMO identify math ability among the kids who ALSO have access to a very high quality math education because these exams require creative thinking, are not multiple choice, and are 3 and 9 hours long, respectively, not speed exams. The SAT really, really does not identify mathematically highly gifted kids because it is way too easy an exam, regardless of what Benbow claims. If we can’t agree on this central point, I think we are simply going to have to agree to disagree here. Given that this topic is very far removed from Unz’s Meritocracy article, I am going to stop posting about it; it is a distraction that I don’t have time for.

    • Until I read your exchange with Jim, this point hadn’t occurred to me, but it’s obviously true. While much of the discussion here is well outside my competency, I am, on the other hand, well aware of how standardized tests are constructed, and that they become less and less reliable the more extreme a testee’s (in)competency is relative to that of the population for whom the test was built.

      Jim wrote:

      Differences between the 99th and 99.9th percentiles on the SAT do reflect real variation in capacity for high-level work even decades later.

      Jim’s argument is suspect in direct proportion to the degree to which it concerns the very extreme levels of ability being discussed, because the SAT is, by design, insensitive to the distinction in ability that Jim assumes is signified by the difference between the 99th and 99.9th percentiles.

      • Jim says:

        The SAT is a valid test for predicting college performance. The whole point of having children take a test designed for college applicants is that it allows for distinctions in the extreme right tail of the distribution. It is simply empirically untrue that the typical gifted 13-year-old finds the SAT easy. There’s plenty of variation in the SAT performance of the SMPY participants. Moreover, contra Mertz, most people who have successful careers in mathematical fields are neither child prodigies nor beneficiaries of an especially high-quality math education in childhood.

  7. Janet Mertz says:

    Jim says, “There’s plenty of variation in the SAT performance of the SMPY participants.”
    Yes, there is plenty of variation. We disagree on the CAUSE of this variation. You believe, incorrectly, that it is due to differences in innate math ability; I believe it is simply due to differences among bright (i.e., 95%ile or so) kids in access to opportunities to accelerate at least 3 years in math before age 13 years, given that the SAT is an easy exam that makes no attempt to distinguish among the top few percentiles.

    Jim says, “Moreover, contra Mertz, most people who have successful careers in mathematical fields are neither child prodigies nor beneficiaries of an especially high-quality math education in childhood.”
    I never said that. I thought we had been discussing who has the math ability to become a tenured professor at Harvard, a la Larry Summers’s remarks. If we are discussing, instead, who has sufficient math ability/knowledge for a successful career in a STEM field, there is a quality peer-reviewed publication (I can’t remember the exact reference offhand) showing that a quantitative SAT score of 650 or above, taken in 11th/12th grade, is sufficient!

  8. namae nanka says:

    “There are lots of different sociocultural reasons the measured variance ratios in math performance range all the way from 0.9 to 1.5 among countries,”

    Try statistical.
    http://www.academia.edu/393769/Vos_P._2005_._Measuring_Mathematics_Achievement_a_Need_for_Quantitative_Methodology_Literacy

    “The point is that females are affected by sociocultural factors in every culture, so it’s virtually impossible to do a controlled study that establishes how significant sociocultural factors are vs innate ability. “

    Quoting from a comment that didn’t make it at Mr. Unz’s site:

    “Well, Ms. Mertz co-authored a paper with Janet Hyde, the veteran gender-gap buster, Gender, Culture, and Mathematics Performance. It’s available in full on PNAS and cites an older paper by Ms. Hyde which used data from 10 US states with total sample size of around 7 million. (Gender Similarities Characterize Math Performance 2008)

    If you look for its supplementary information, the variance ratios are all over the place, moving around from grade to grade. I am sure the socio-cultural factors don’t change that much in a year in a US state.”

    “it merely indicates that the significant disparity between 0% and 7% is primarily due to sociocultural factors. In fact, girls represented 20% of USSR/Russian IMO participants from 1988 to 1997.”

    Russian boys chugging vodka and going to early graves is good for gender equality! Who needs to do maths when you can drink vodka and be merry, especially after you got shafted by disciples of Uncle Larry!

    “most women working in STEM fields face on a regular basis numerous minor slights that gradually accumulate to lead to different career trajectories “

    and what of the slights during more than a decade of schooling for boys? Surely the socio-cultural factors might explain why most gifted underachievers are male, and thus why the so-called gender-equal countries, with numerous women employed in the gender-equality industry (just ask Uncle Larry), have not an under- but an over-representation of girls relative to what their aptitude would merit?
    Even if you don’t consider the difference between adult women and young boys.
    Suppose girls were decidedly the second-sex in schools, earning lower grades for more than a decade of schooling, hearing taunts like ‘girls are the reason for all wars’ ‘girls are stupid, throw rocks at them’ and yet despite all these deep psyche-breaking crimes against future Marie Curies, were trouncing the boys on SAT. How long would it take for the schools to be burned down and the teachers hanged? Metaphorically, of course, I envision more throwing-up than military action. I suggest we should consult Uncle Larry about that.

    “Jim cites an Ellison study that states”

    that 750 on SAT-M is the best that the test can do to differentiate between students. It can’t resolve the differences beyond the 97th percentile. However he doesn’t point out that while a 750 is 98th percentile for whites, it’s merely 87th for asians. Since asians are a fifth of white test-takers, they probably are the majority of those capable of getting a perfect SAT-M. Just like SMPY.
    And he makes it clear that it’s not a difference in abilities between boys and girls. IOW, “I don’t wanna be Larry Summers 2.0!”

    “”There is a phenomenon called “stereotype threat” “

    which, to put it respectfully, is kinda like the evil spirits of medieval times or the bad juju of patriarchy from the women’s-studies department. John List was bothered last year by the unbearable non-existence of stereotype threat, as were the meta-analysts who chalked it up to publication bias.
    And didn’t it only show that girls perform worse than they otherwise would? I.e., the height difference between men and women won’t change, but groups of men and women with the same average height would show a difference, because the women will deflate their chests and sink their heads after hearing about the height difference.
    Unlike the individual trailblazers who are fired up by it and bellow loudly “women can’t do what?!”, which kinda explains the whole feminist movement.

    “In addition to this HUGE self-selection bias, the SMPY also suffers from using the SAT rather than some harder math test such as the AMC12 or AIME.”

    Which should help… girls? And wouldn’t taking it at ages under 13 also help girls, who mature faster than boys and have had less time to imbibe the “eek! Ich habe boobs, nein danke maths” attitude?

  9. Janet Mertz says:

    Eli Rabett says, “This discussion and the previous ones here (thanks for the posts Prof. Gelman:) concentrates on the kids and their contemporaries and not the parents. Arguably the desire of the parents to push their kids to such high levels of achievement are key because it requires a great deal of time and effort on their part.”
    Agreed. That is the reason I believe the SMPY study in large part only identifies the subset of mathematically gifted kids who are also being raised in socioeconomically/educationally privileged backgrounds. Thus, the good correlation Benbow sees is really a good correlation with socioeconomic privilege, not high giftedness in math per se.

    “From this point of view the success of those of asian and jewish descent in the US mathematics community is not so surprising. In the later case one could probably tease out differences due to the Russian immigration bubble of the 1980s and 1990s.”
    The Asian-Americans being identified in the USAMO and IMO are mostly children of recent immigrants or immigrants themselves. The Jews I identified are largely 3rd- or higher-generation Americans, most of whom have Anglicized their names and/or intermarried, part of the reason Unz failed to count them as Jews. Only Gol’berg and Nir are children of recent immigrants among the recent US IMO team members. Unz’s claim of collapse of Jewish academic achievement in the US is simply WRONG; he obtained his GROSSLY incorrect data by using faulty methodology! How many more times do I need to state this fact? It is long past time for Unz to admit his data in his Meritocracy article relating to Jews is incorrect.

    • Eli Rabett says:

      The influx of Jews from the Soviet Union in the 1980s/90s had a strong over-representation of academics. That observation is about the contingent that went to Israel, but the ones who came to the US were probably about the same.

      As far as being third or fourth generation removed from immigration: remember that there were many fewer slots available to Jews at selective colleges up through the 1980s, when the quota dam broke, so given the same population one would expect the Jewish students admitted to selective places afterwards to be, on average, somewhat weaker.