Unz on Unz

Last week I posted skeptical remarks about Ron Unz’s claim that Harvard admissions discriminate in favor of Jews. The comment thread there was getting long enough that I thought it most fair to give Unz a chance to present his thoughts here as a new post. I’ve done that before in cases where I’ve disagreed with someone and he wanted to make his views clear. Below are Unz’s email and my brief response.

This is what Unz wrote to me:

Since there’s been a great deal of dispute over the numerator and the denominator, it might be useful for each of us to provide our own estimate-range of what we believe the true figures are, along with the justification. Perhaps if our ranges actually overlap substantially, then we don’t really disagree much after all. I’d think that if you’ve been reading most of the endless comments and refreshing your memory about my claims, you’ve probably now developed your own mental model of the likely reality of the values, whereas initially you may have simply been questioning my own numbers or my methodology.

I’ll be glad to start. Based on my detailed analysis of the NMS semifinalist lists, I’d feel pretty confident that the national percentage of Jewish students is within the range 5.5-7.0%. My greatest irritation was that despite considerable effort I never managed to locate lists from NJ or CT, which have large, academically-elite Jewish populations. But the number of NY Jews is 2.5x larger, and unless the NJ/CT Jews dramatically outperform their NY cousins, I just can’t see the national total breaking out of my range. Meanwhile, the non-Jewish white percentage remains 65-70%. Obviously, questions can be raised about whether NMS semifinalist numbers are the best numerator to use as a high-performance proxy, but since SAT distributions aren’t available, I just can’t think of any better one.

The numerator is the percentage of Jews enrolled at Harvard and the Ivies, and the heated dispute there has been a total surprise to me. None of the colleges make their enrollment lists publicly available, so if we don’t use the Hillel figures at least in some modified form, I’m just not sure what we can use instead. I would never claim that the Hillel figures are precisely accurate—I emphasized the uncertainty in my text—but I just doubt they’re wildly inaccurate either.

Let’s take Harvard, which Hillel claims is 25% Jewish. My suspicion is that ethnic advocacy organizations always tend to exaggerate their numbers, so I’d regard the 25% as an upper bound, and could easily see the true figure being as low as 20%. Thus, my plausible range would be 20-25%, with similar sorts of ranges for the other Ivies. But I’d be pretty skeptical of whether the Hillel numbers were inflated by more than about 25% or so (20% => 25%).

Here’s my reasoning. According to the Hillel numbers and the official racial data, Jews constitute between one-half and two-thirds of all the white Americans enrolled at each of the Ivies except for Princeton and Dartmouth. Indeed, I found a reference on the College Confidential discussion forum to a 2012 Harvard Crimson article making the exaggerated claim that 3/4ths of all the whites at Harvard were Jewish, though unfortunately I haven’t yet managed to locate a copy.

These are huge fractions, and if the actual reality were totally different, surely *some* Jewish students would have realized that Hillel’s numbers were ridiculous and complained somewhere. If Hillel regularly claimed that 60% of the white students at some college were Jewish when the true figure was 30%, it’s difficult to believe no one would have noticed.

Therefore, if we focus strictly on Harvard, my plausible range over the last few years would be 20-25% Jewish and 24-29% non-Jewish white (assuming the Race Unknown category is split 50-50 white and Asian), with the total Ivy figures following a similar pattern.

So the ranges I get are Jews as 5.5-7.0% of top performing students with NJWs at 65-70%, while the Harvard ranges are 20-25% for Jews and 24-29% for NJWs. Thus, the range of “raw” Jewish over-representation relative to high-performing NJW students is between 540% and 1200%, while the range across the entire Ivy League would be between 420% and 950%. It’s perfectly possible that I’ve made a stupid calculational mistake, so you might want to check these derived figures.
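Checking these derived figures is straightforward. The sketch below assumes that the “raw” over-representation is the ratio of (Jewish Harvard share / Jewish top-scorer share) to (NJW Harvard share / NJW top-scorer share), minus one; that reading reproduces numbers close to the quoted ranges, but it is an assumed reconstruction of the arithmetic rather than a confirmed one.

def over_representation(jew_college, jew_top, njw_college, njw_top):
    # Relative over-representation of Jews vs. non-Jewish whites (NJWs),
    # expressed as "X% more than parity": a ratio of ratios, minus one.
    return (jew_college / jew_top) / (njw_college / njw_top) - 1

# The low end pairs the smallest Jewish Harvard share with the largest Jewish
# top-scorer share (and the reverse for NJWs); the high end does the opposite.
low = over_representation(jew_college=0.20, jew_top=0.070, njw_college=0.29, njw_top=0.65)
high = over_representation(jew_college=0.25, jew_top=0.055, njw_college=0.24, njw_top=0.70)
print(f"{low:.0%} to {high:.0%}")  # 540% to 1226%, close to the 540%-1200% quoted above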

These are “raw” over-representation percentages, and we must obviously adjust for the significant impact of geographical skew, legacy effects, athletic admissions, and various other things, some of which would certainly reduce them. But I’d argue these raw figures are so enormous, it’s difficult to see how they wouldn’t still remain very sizable even after any reasonable provision is made for those factors. As I think I mentioned, during the late 1980s an Ivy “admissions anomaly” of 20-30% for Asians was considered such strong evidence of discrimination that the Federal government launched an investigation.

Now it’s perfectly possible that your own “raw” over-representation estimates might not be too far from my own, and you might just believe that they could be completely accounted for by those various adjustment factors, which are somewhat difficult to quantify. But in that case, our disagreement would then shift into an entirely different area, and the current dispute would have been largely resolved.

As I repeatedly emphasized throughout my paper, it was the sheer magnitude of the anomaly that persuaded me it was real rather than merely an artifact due to a combination of underlying measurement errors.

Anyway, I always prefer dispassionate quantitative analysis to angry exchanges in comment threads, though I admit I may sometimes fall into the latter if I lose my temper. So if you would like to provide me what you think are the plausible ranges for the Jewish numerator (high-ability students nationwide) and denominator (Jewish Harvard/Ivy enrollments), perhaps we can begin to isolate and resolve the nature of our possible disagreement.

I don’t want to write a long response because I pretty much said it all in last week’s blog post, but briefly:

I have not directly studied these issues. I have done some work on name scale-up methods (there is a brief article on the topic in yesterday’s New York Times) but not on Jews in particular. I have no reason to believe that the factors of 12 and 20 from the Weyl method are correct. I’m not saying they’re wrong; I just have no particular reason to trust them. Nor do I know anything about how Hillel counts their numbers.

So I can’t supply my own estimates. All I can say is that, according to the person who sent me that email, if the Weyl method is applied to Harvard undergraduates, it gives an estimate of 10-11%, and if it is applied to NMS scholars, it gives something close to that (whatever you get by taking the appropriate weighted average of 9-14% from Massachusetts, 24% from New York, 14-21% from Pennsylvania, etc.). That’s what seems to happen if the same method is used to estimate both numbers. But I have no idea what the actual numbers are. The only numbers we seem pretty sure of are the Putnam and Olympiad counts, because Janet Mertz asked those students directly.
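To make the “appropriate weighted average” concrete, here is a minimal sketch. The state percentages are the ones quoted above (midpoints where a range was given); the per-state weights are hypothetical placeholder counts of semifinalists, not real data, and a national figure would of course weight in every state, most of which would presumably have much lower Jewish percentages.

# State-level estimates quoted above, using midpoints of the quoted ranges.
state_pct = {"MA": 0.115, "NY": 0.24, "PA": 0.175}

# HYPOTHETICAL semifinalist counts per state, used only to show the mechanics;
# the real weights would be each state's actual number of NMS semifinalists.
weights = {"MA": 300, "NY": 800, "PA": 600}

weighted_avg = sum(state_pct[s] * weights[s] for s in state_pct) / sum(weights.values())
print(f"{weighted_avg:.1%}")  # about 19.5% for these three states alone, with made-up weights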

So I remain skeptical of Unz’s claims—the direct comparisons I’ve seen don’t seem to support them—but I wanted to give him a chance to present things here from his perspective.

P.S. Unz, in remarks elsewhere, notes that I referred to him as “a ‘political activist’ who used ‘sloppy counting.’” He characterizes those as “insults.” I don’t think these are insulting! First off, it’s not an insult at all to call someone a political activist. That’s what Unz does! He’s run for office, he’s funded political campaigns, he bought a political magazine. There’s nothing wrong with being a political activist. It’s a noble calling.

As to the second phrase, I agree that “sloppy counting” could be an insult in some settings, but it wasn’t intended as such. It remains a mystery exactly how Unz came up with the claim that over 40% of Math Olympiad participants in the 1970s were Jewish while counting only 2.5% from the 2000s. Such sloppy counting makes a difference: it leads to an impression of a dramatic decline in Jewish performance in this area, while the best estimate from an expert in the area is that the decline is a factor of 2 rather than a factor of 15. The sloppiness in the counting comes from the use of an undefined criterion for classification, which allows unintended bias to creep in. As discussed above, there was also the big mistake of an incompatible numerator and denominator in comparing Harvard students to National Merit Scholar semifinalists, but I wouldn’t quite call this sloppiness; it was more of a mistake that arose from an unexamined assumption in combining two different data sources. We all make unexamined assumptions all the time. If “sloppy counting” is too rude, let me replace it with “inaccurate counting.” Speaking retroactively, I think any count that is off by a factor of 5 is pretty sloppy, but maybe that’s a judgment call. I’m happy to just call it “inaccurate” and remove any perception of insult.

Unz also writes, “I find it highly intriguing that although Gelman chose not to substantially engage the 1,000 word framework of my statistical analysis that I offered him for constructive mutual dialogue.” Just to repeat what I wrote above, I don’t want to write a long response because I pretty much said it all in last week’s blog post. A lack of point-by-point argument does not mean I agree with Unz’s claims; what it means is that I think it’s only fair to allow him to present his claims clearly in one place on this blog, so people can see right here what he has to say without having to sift through blog comments.

Also let me repeat what I wrote in a comment, that I do not view this as a “fight” or a debate. I have not directly studied these issues and would not want to imply otherwise.

Finally, let me emphasize that Unz’s statistical mistakes do not necessarily mean that all of his ideas are wrong or meaningless. There certainly have been large demographic changes in the United States in recent decades, and the result is increasing academic competition. These things are worth studying.

32 thoughts on “Unz on Unz”

  1. The divide here is that you are making a methodological objection, and he is focused on the final conclusion.

    I am rather concerned at how hard it is to get people to focus on, pay attention to, or give any weight to methodological objections. Like Unz, they want to jump to the conclusion, to fight over the final answer.

    In my view, this smacks of what I call “conclusion-driven thinking.” They know the answer they want, and they will find a way to get there.

    Real inquiry, the scientific method, and actual learning require careful attention to methodology so that we can have a sense of how credible the findings might be. Given a strong methodological approach — which may require strong knowledge of the field and/or context — we have something worth talking about.

    With a poor methodological approach, we have fiction and fantasy to talk about, and the poor methodological approach itself.

    • “Like Unz, they want to jump to the conclusion, to fight over the final answer.”

      Actually, it’s worse than that. What many people do (and what I think Unz has done himself) is to first evaluate the partisan politics of the conclusion, assume (as you say) that all arguments are really arguments about the conclusion, and assume that all participants are partisan depending upon the politics of that conclusion. This is most clearly shown when people get either the politics of the conclusion wrong or the partisanship of their opponents wrong. In the other thread, there’s one example of the former, and Unz himself assumes that the politics of those critiquing him are necessarily liberal.

      It’s not just that people focus on the conclusion to the exclusion of the integrity of the method, and not just that they focus on the politics involved — but that for many people, there’s simply nothing but partisan politics, period.

    • Rahul:

      I’m not fighting and I’m not running away. I’m just stating the facts. Given that I have not directly studied these issues, it would be inappropriate for me to imply otherwise. I am presenting data and analyses from others, most notably Janet Mertz who has published peer-reviewed papers in this area.

  2. Pingback: Meritocracy: Admitting My Mistakes | The American Conservative

    • My comment is awaiting moderation on TAC so I will post it here:
      Unz states: “A crucial part of the critique consists of the claims of an anonymous individual calling himself “NB,” which are based upon his private analysis of non-public data and cannot be externally verified.” I am “NB” and one of the audience members who spoke to Unz after his talk yesterday. I asked Unz to look at the Yale Alumni directory with me, precisely so that he would cease to describe my data as unverifiable. Unz initially agreed, but then started debating with me about other aspects of his article. I repeatedly tried to get our conversation back on track: performing Weyl Analysis on the names of Yale alumni, but Unz ultimately declined and walked away when I said I was not interested in a debate but wanted to show him the data I am using. (Unz has access to the Harvard alumni directory too.)

      Finally, since Unz met me yesterday, he knows I’m female. Incidentally, anyone with a passing familiarity with identifying Jewish names would know that my first name (which Unz revealed in his previous blog entry) is a common female Israeli/Hebrew name, which brings me to another point: Unz continues to claim, “during the thirteen years since 2000, just two of the 78 names of Math Olympiad winners appear to be Jewish, and this is also correct.” Since two of the names are from the Weyl list of distinctive Jewish surnames, Unz must be classifying only these 2 names as Jewish. There is another two-time US IMO team member with an obviously Israeli Jewish name and several others with names classified as possibly Jewish on ancestry.com, e.g.:
      http://www.ancestry.com/name-origin?surname=kane
      In contrast, I do not see how it is possible for Unz to have obtained the estimate that 44% of the 70s US IMO team members were Jewish unless he counted overtly German (and other non-Jewish) names as Jewish, e.g.:
      http://www.ancestry.com/name-origin?surname=tschantz
      I request that Unz please list the last names of the 70s US IMO team members whom he classified as Jewish. The list is available here, so I recommend interested parties take a look:
      http://www.imo-official.org/country_individual_r.aspx?code=USA&column=year&order=desc

      • This thread has a logic problem in need of untangling.

        Those critical of Unz wish to have their cake and eat it too.

        It’s one thing to question the methodology employed by Unz. Namely the Weyl Analysis. This is certainly a fair target. However, attempting to discredit Unz by using the same flawed analysis breaks the law of logic.

        • No, I am saying that Unz must use the same methodology on both data sets to obtain a statistically valid result. I cannot replicate Unz’s subjective direct inspection method on any set of names, but Unz stated that Weyl Analysis produced results within 0.1 percentage point of his direct inspection method on the names of NMS semifinalists. Thus, those methodologies produce virtually identical results. I can replicate Weyl Analysis on the names of Harvard alumni since it is an objective methodology (although there is some confusion about what Gold[] means in Unz’s description of Weyl Analysis) and doing so results in Jewish enrollment figures less than half as large as Hillel’s data for Harvard. Thus, we are saying it is not valid for Unz to compare Hillel’s Jewish enrollment figures to his estimate of the % of Jewish semifinalists.

        • NB

          Unfortunately, it seems the only thing you’re interested in is obtaining a “statistically valid result”.

          Logic doesn’t necessarily work that way.

          I’m guessing you’re a student of Gelman’s? :)

          Ask him to integrate some logic along with his statistics.

        • Sometimes when I try to follow a discussion on the internet, it’s hard to figure out which side is right, if any. For example, that NYT write-up on Tesla.

          This particular discussion is not one of those times. I appreciate the good work done by nb on these threads.

        • Bud Wiser,
          Logic states that “If X, then Y” is only false when “X” is true and “Y” is false. When “X” is false, the statement is true whether “Y” is true or false. Unz is trying to claim “Y” is true when we don’t know whether “X” is true because of significant errors in his methodology and one of his data sets (Hillel’s). “Garbage in, garbage out”. Q.E.D.
          p.s. N.B. is not a student of Gelman’s.

        • Keith

          I’ll try to be polite. :) However starting our discourse by claiming I’m being willfully ignorant isn’t a good start. LOL.

          You said, “You’re making an implicit blanket condemnation of a perfectly respectable statistical method.” Respectable by whose standard? Gelman himself stated he hadn’t heard of the Weyl method until he’d read Unz’s essay. The only reason I bothered to come to Prof. Gelman’s site was because I had serious issues with the method Unz was using. I also don’t claim to be a statistician, but it doesn’t take a genius to see the potential pitfalls. This is why I quoted Janet’s initial response to his method. It’s exactly what I was thinking. Unfortunately, she changed her tune. I’m guessing it was based on the suggestion that Gelman thought it could be accurate. After all, he’s supposed to be the expert. I honestly don’t believe we can get reliable results. If I’m not mistaken, Unz made this particularly clear as to the Jewish surnames in his statement to you when you were discussing Asian enrollment in dental schools (or law schools).

          I don’t claim to be a statistician. I rather fancy myself a trailer park philosopher.

          What concerns me is something else you brought up. Ideology. What has been demonstrated, at least to me, is a type of tribal mentality. Something inherent in our human condition.

          It occurred to me that we might want to focus on the ethnicity of admissions boards and their officers. I’m guessing we could construct a model based on preferential bias. It might be more accurate.

          I was really hoping to hear from Janet. Does she really think we can get accurate results from a Weyl analysis?

          Till next time my friend. I do enjoy intellectual stimulation. This site has given me a little pleasure.

          Thanks Andrew.

        • Bud Wiser said: “It’s one thing to question the methodology employed by Unz. Namely the Weyl Analysis. This is certainly a fair target. However, attempting to discredit Unz by using the same flawed analysis breaks the law of logic.”
          N.B., Gelman, and I would be delighted if Unz had employed Weyl analysis on both the NMS lists and the lists of undergraduate students attending Harvard, Yale, Princeton, etc. That is, in fact, exactly what Gelman recommended Unz do so that whatever error exists is the same in the numerator and denominator, thereby cancelling. Given that Unz seems to be refusing to do so, providing a series of lame excuses as to why it is unnecessary or can’t be done, N.B. has taken it upon herself to do this analysis. The problem is that Unz ONLY used the objective Weyl method to claim his subjective direct inspection method was valid; he did not use it anywhere else in the entire article. He compared his NMS data obtained by direct inspection (which very much under-counted % Jews on the US IMO teams in the 21st century) against Hillel’s numbers, which are clearly over-counting % Jews for some of the Ivies, including Harvard and Yale. N.B. is using correct logic, NOT the same flawed logic Unz used. That is why I believe N.B.’s findings and not Unz’s.

        • Janet

          I understand the mistake Unz made. The problem isn’t with logic, it’s with statistical method.

          I’ll ask you the same question I asked NB on the TAC thread, “Do you feel the Weyl analysis is an accurate tool?”

        • Janet

          I appreciate your time on this subject. The fact you contacted people directly is admirable. But…By your logic we would have to throw this “research” out, and not include it in our analysis. Why… well it doesn’t follow the parameters you’ve set. Are you following this so far?

          Sadly we’re no closer to any kind of “truth” on the subject. What do you propose?

        • Janet

          You said, “N.B., Gelman, and I would be delighted if Unz had employed Weyl analysis on both the NMS and lists of undergraduate students attending Harvard, Yale, Princeton, etc.”

          I get the feeling you’d be “delighted” because it may render a result you’d be satisfied with. Is this the aim of your research?

        • Bud:

          1. Please leave comments one at a time rather than in batches.

          2. Please be polite and refer to Janet’s research as research, not as “research” in scare quotes.

          3. I can’t speak for others. But, as for me, none of this delights me. It makes me unhappy to see erroneous numbers appearing uncorrected in the Times.

        • My apologies Prof. Gelman.

          There is little nuance in this type of format. I’ll attempt to post less frequently with more substance when I do. I meant no disrespect.

          Thank you for allowing me to post to your site.

        • Bud Wiser,
          Yes, you are correct; I should have said “methodology”, not “logic”.
          I am trying to be an objective scientist here, hoping someone will use methodology that will enable us to determine the truth. I have no personal vested interest in the answer. Prof. Gelman, a highly qualified statistician, indicated at the very beginning of this discussion that using Weyl analysis for both % Jews among NMS semi-finalists and % Jews among undergrads at Harvard, etc., will provide a statistically believable answer: whatever unknown correction factor the Weyl method is off by won’t matter, because it will appear in both the numerator and denominator. This contrasts with the methods Unz used in his article, where the unknown correction factors were off in opposite directions, making them additive rather than cancelling. The Weyl analysis is quite doable and, hopefully, would be believed by all who seek the truth. Thus, I hope Unz will do it very soon and report his findings so we can all see whether they confirm the data N.B. has been generating by this method.
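          Janet’s cancellation point can be written out in one line. Suppose the Weyl method captures only some fraction c of Jewish students, and (this is the key assumption) that the same c applies to both lists. Writing \hat{p} for the Weyl-based estimate and p for the true Jewish share,

          \[
          \frac{\hat{p}_{\text{Harvard}}}{\hat{p}_{\text{NMS}}}
            = \frac{c\,p_{\text{Harvard}}}{c\,p_{\text{NMS}}}
            = \frac{p_{\text{Harvard}}}{p_{\text{NMS}}}.
          \]

          If c differs between the two populations, say because of differential Anglicization of names, the cancellation is only approximate; that is exactly the caveat raised later in this thread.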

        • Janet

          Please forgive me, but I feel you’re being evasive. We know the Weyl analysis is doable, but I didn’t ask you that. I asked whether you thought it was accurate.

          At one point you stated concerning the Weyl analysis, “…guessing their ethnicity and immigration status by their name alone has a VERY high error rate, so high that one can’t use the data to draw valid conclusions.” This is exactly what the Weyl analysis does.

          So why would you insist that we use a method that we can’t draw valid conclusions from? Gelman suggests that the correction factor would balance things out? You’ve got to be kidding me. Maybe we can add the names Miller, Kane and Lander to the Weyl list. Let us see what that churns out.

          When I first read the original article by Unz, I was extremely skeptical. Bells were going off in my head. More work has to be done on this model before I’ll be a believer.

          Let’s face it. Our admission policies will forever be flawed. Let’s just hope they still bring in enough good talent that our society doesn’t crumble.

          Bud Wiser: “At one point you stated concerning the Weyl analysis, ‘…guessing their ethnicity and immigration status by their name alone has a VERY high error rate, so high that one can’t use the data to draw valid conclusions.’ This is exactly what the Weyl analysis does.”

          To be clear, that’s not what Weyl Analysis does. You consider a set of distinctive Jewish surnames, determine how often they appear in Census data, and then calculate, based on the estimated population of American Jews, an estimate of what fraction of Jews people with the specified distinctive Jewish surnames represent. Then you search large data sets for just those names, scale up as specified, and arrive at an estimate of the % of Jews in the large data set. As Prof. Gelman indicated in his first blog post about this, I think that Weyl Analysis yields underestimates of the actual % of Jews. It has the advantage of being an objective methodology that can be consistently applied to different data sets – that’s all I’ve claimed about it.
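          As a toy illustration of that scale-up procedure, here is a short sketch. The surname list and roster are made-up placeholders (Weyl’s actual list is much longer), and the factor of 12 is one of the two scale-up factors mentioned earlier in the post.

# Toy sketch of a Weyl-style scale-up estimate; the surname list and roster
# below are illustrative placeholders, not Weyl's actual calibration data.
DISTINCTIVE_SURNAMES = {"cohen", "goldberg", "levine"}  # hypothetical subset
SCALE_UP = 12.0  # assume these surnames cover roughly 1/12 of American Jews

def estimate_jewish_share(names):
    # Count roster members whose surname is on the distinctive list, then scale up.
    hits = sum(1 for name in names if name.split()[-1].lower() in DISTINCTIVE_SURNAMES)
    return SCALE_UP * hits / len(names)

roster = ["Jane Cohen", "Wei Chen", "Sam Goldberg", "Maria Garcia", "John Smith"]
print(f"{estimate_jewish_share(roster):.0%}")  # 12 * 2/5 = 480%: tiny rosters are useless

          On rosters of thousands of names the same arithmetic becomes meaningful, which is why it is applied to NMS semifinalist lists and alumni directories rather than to small teams.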

          The fact of the matter is that the NMS competition is not selective enough for us to come to any definitive conclusions about what Harvard should look like. A significant problem with the NMS semifinalist data (which I’ve mentioned previously) is the varying qualifying score by state. ~16,000 NMS semifinalists are selected from ~1.5 million juniors who took the PSAT/NMSQT. But these 16,000 are not simply the top 1% of PSAT scorers – they are the top scorers per state, and the total # of NMS semifinalists designated per state is proportional to each state’s share of HS students. So, for example, states like Oklahoma and Iowa have qualifying scores under 210 (i.e., an SAT score of 2100), and NMS semifinalists represent the top 2-3% of OK and IA students taking the PSAT, whereas in MA, with historically the highest qualifying score (221-223), NMS semifinalists represent the top ~0.7% of MA students taking the PSAT. In Unz’s data, OK and IA combined are given more weight than MA (Unz claims that including his estimate that 19% of MA NMS semifinalists are Jewish had no significant impact on the results), even though very few of OK and IA NMS semifinalists are actually Harvard material, while a far more significant share of MA NMS semifinalists are Harvard material. (To be clear, a 2210-2230 total score on the SAT is likely below average for Harvard.) The truth is that we can’t use this data to come to any definitive conclusions about what the ethnic/racial breakdown of Harvard students should be.

        • NB

          Let me post this again.

          “Philosophy matters to practitioners because they use it to guide their practice; even those
          who believe themselves quite exempt from any philosophical influences are usually the
          slaves of some defunct methodologist.”

          Gelman wonders why folks like Brooks are inimical to statistics. He need look no further than this thread.

          Garbage in garbage out.

        • Bud Wiser, you wrote in response to NB and Janet Mertz regarding Weyl analysis: “Gelman wonders why folks like Brooks are inimical to statistics. He need look no further than this thread.

          Garbage in garbage out.”

          I’m sorry, but that’s willfully ignorant. You’re making an implicit blanket condemnation of a perfectly respectable statistical method. Like any such method — and this is, you know, the essence of statistics — it can be used inappropriately and then, yes, GIGO. When used appropriately and correctly, it’s a valuable tool that produces reliable results.

          It’s reasonable to question its use in this context because it’s sensitive to sample size and especially to what degree that sample might be idiosyncratic relative to the US population as a whole. I’m not writing as a statistician, but rather as someone who learned a bit about Weyl analysis when it came up with regard to discussion of a recent post here (as I recall, perhaps mistakenly, in response to someone who was asking about law school admissions of Asian-Americans in southern California).

          However, that’s actually independent of the criticism of Unz here about this. The point here is that both the sources and methods that Unz used were questionable — the Hillel data on the one hand, and the Weyl method on a small and demographically constrained sample on the other. Both of these could easily have large errors in *either* direction; as it happens, there’s good reason to think they reinforced each other, generating a large disparity in the direction that Unz found (they could have reinforced each other in the opposite direction, finding an opposing disparity). Even if a Weyl analysis is questionable for one or both sets of data, using it for both has the virtue of making the errors more likely to go in the same direction. Remember, what Unz is looking for is a disparity. If a Weyl analysis were error-prone but erred approximately equally, and in the same direction, for both sets of data, then if a disparity exists, it will still appear.

          I think that last part is something that you’re not understanding. My impression is that you’re reasoning from an assumption that if data is less than perfectly reliable then nothing can be known — your GIGO point. But that’s the whole point of statistics and probability! It’s an attempt to deal with uncertainty in a rigorous way that can produce knowledge where uncertainty exists. Which is, frankly, everywhere. There’s always uncertainty, there’s no data or method that is perfectly reliable.

          The various critics of Unz, whether Mertz or NB, are variously attempting to reduce the uncertainty about one part of the data that Unz is using. Mertz looks at the actual composition of Math Olympiad teams, because she has access to that information, as a means of testing the reliability of Unz’s counts. NB and others have criticized the Hillel data, showing that it badly overstates Jewish enrollment. This is sort of a two-part argument. The first part is that these are not-very-reliable data and it would be better to have used the same method for both sets so as to, at least, ensure that the errors don’t reinforce each other and produce the disparity that Unz was looking for. The second part is that when you examine each set and Unz’s conclusions about them, there’s very good reason to believe that both did, in fact, reinforce each other in exactly such a way as to grossly magnify the disparity that Unz was looking for.

          I’ve been reading these two threads for days and days now and I’m a bit at a loss to understand why some people are arguing about it. I do understand that there are some, like Unz himself, who are ideologically and personally motivated to defend the analysis, regardless of its actual merits. But others, like yourself, seem to be perhaps missing the point entirely. You’ve been quite high-handed in making terse and blanket criticisms of both statistics and Gelman while not demonstrating much, if any, expertise on the topic. And speaking as a layperson who reads this blog simply out of a personal intellectual interest, and while possessing utterly no expertise in statistics whatsoever, I’m not criticizing you simply for the sake of participating and having an opinion. But it’s not hard to google “Weyl analysis” and it’s not too hard to make substantive arguments. I, for one, do appreciate that you’ve been more respectful and polite in your recent comments. But I still question the spirit in which you’re writing them.

        • Bud Wiser,

          Sorry I haven’t replied again yet. I didn’t access the www all day Feb 25. I’ve been traveling, trying to have a mini-vacation, including from email and blogs :-) I also have a full-time job unrelated to this work.

          Thanks, Keith, for your comments. One point that should be made clear again is that Unz did NOT use Weyl analysis other than to confirm that it and his direct inspection method yielded similar % Jews on the large NMS semi-finalist data set, a finding from which he concluded that his direct inspection method was valid. Unz did not use the Weyl method on any small data sets. Everyone agrees that the Weyl method for % Jews would have a huge error bar on data sets that aren’t in the thousands, because Jews are <2% of the US population and the Weyl method counts only ~1/12th or ~1/20th of Jews. Almost all of the Jewish data presented by Unz was obtained by direct inspection, i.e., Unz looked at each name and decided for himself whether he thought the name sounded Jewish. This is a highly subjective method. It is very easy with this method to see more or fewer Jews than actually exist due to one’s own pre-conceived expectations. This is the point I was making when I claimed Unz over-counted Jews among the 1970s US Math Olympiad teams and grossly under-counted them, by at least 5-fold, among the 21st century teams. What he should have done is perform the direct inspection method in a “blinded” manner, i.e., ask someone else who does not know what the lists are, or the values Unz hoped to see, to count the Jews on his lists.

          I’ve also stated before that the Weyl method may be accurate in counting US Jews overall, but that, because of Anglicization of names and inter-marriage, it may under-count the non-Ultraorthodox Jews who make up essentially all of the Jews attending elite colleges. Thus, one would need to determine a correction factor to use this method to accurately determine % Jews. However, if one is only trying to determine the ratio (% undergraduate students attending a college such as Harvard who are Jews) : (% very high academic achievers who are Jews), then one doesn’t need to know what this correction factor is, because it will appear in both the numerator and denominator, thereby cancelling. This is the point Prof. Gelman was making in his article.

          I will try to post by the end of the week a detailed response to Unz's post on his own blog in which he only admitted to very minor errors in his article, ignoring the major problems.

        • Janet

          Not a problem.

          I hope you enjoyed your reprieve.

          Your energies would be better spent on your family and job. I don’t see this thread producing much fruit.

          I value your opinion and input. I don’t have a great deal of regard for Weyl. Personally I think he was a crackpot, but that’s just my opinion. Yes, I did my research. The only people giving him much credit are Murray and Herrnstein. He was not a respected statistician, as Keith might suggest. As a side note, I would be interested to hear your views on the “Bell Curve”.

  3. Pingback: Unz on Meritocracy: Admitting My Mistakes | Ron Unz – Writings and Perspectives

  4. Pingback: Assorted links

  5. I can appreciate this.

    Guess who said it?

    “Philosophy matters to practitioners because they use it to guide their practice; even those
    who believe themselves quite exempt from any philosophical influences are usually the
    slaves of some defunct methodologist.”

    Can we settle on a method? Unz gave Gelman the opportunity to present a better model. He dropped the ball.

  6. I’m confident Gelman is aware of the Multi-Armed Bandit.

    I’d be interested to hear exactly how Hillel produced their data.

  7. Pingback: Unz on Meritocracy: Almost as Wrong as Larry Summers | Ron Unz – Writings and Perspectives

  8. Pingback: Meritocracy: Almost as Wrong as Larry Summers | The American Conservative
