https://www.nap.edu/read/13163/chapter/7#250

Of course there are objections to be made to the NHST framework, but I think the chapter gets its logic correct. On p. 257, e.g., the authors state: “rejection of the null hypothesis does not leave the proffered alternative hypothesis as the only viable explanation for the data.”

I gather it’s not the Swedish Evangelical Mission, though.

“Humans may crave absolute certainty; they may aspire to it; they may pretend … to have attained it. But the history of science … teaches that the most we can hope for is successive improvement in our understanding, learning from our mistakes, … but with the proviso that absolute certainty will always elude us.”

– Carl Sagan, The Demon-Haunted World: Science as a Candle in the Dark (1995), p. 28

Taking a class, even passing a class, even getting a good grade in a class is not the same as knowing the stuff you were supposed to have known in a way that actually carries forward.

“The confidence interval measures how reliably the sold properties represent all of the other properties in the class or subclass.”

“A narrower range of confidence interval indicates a greater reliability of a statistical measure (e.g., the median).” – County of Douglas v. Nebraska Tax Equal & Rev. Comm.
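The court’s point can be made concrete: under the usual normal-theory formula, the width of a 95% confidence interval for a mean shrinks like 1/sqrt(n). A minimal sketch with made-up numbers (Python standard library only):

```python
from math import sqrt
from statistics import NormalDist

# Half-width of a 95% CI for a mean is z * s / sqrt(n). A narrower
# interval is what the court is calling a more "reliable" estimate.
z = NormalDist().inv_cdf(0.975)  # ~1.96
s = 10.0                         # hypothetical standard deviation
for n in (25, 100, 400):
    width = 2 * z * s / sqrt(n)
    print(n, round(width, 2))    # quadrupling n halves the width
```

Quadrupling the sample size halves the interval width (7.84, 3.92, 1.96 here), which is the sense in which a narrower interval reflects a more precise estimate.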

Last month the U.S. 3rd Circuit Court of Appeals explained p-values like this:

“A ‘p-value’ indicates the likelihood that the difference between the observed and the expected value (based on the null hypothesis) of a parameter occurs purely by chance.” – In Re: Zoloft MDL
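For contrast, what a p-value actually is: the probability, computed *assuming the null hypothesis is true*, of data at least as extreme as what was observed. A minimal simulation sketch with hypothetical coin-flip data (Python standard library only):

```python
import random

random.seed(1)

# Hypothetical data: 60 heads in 100 flips. Under H0 the coin is fair.
observed_heads, n_flips = 60, 100
n_sims = 20_000

# Simulate the null and count datasets at least as extreme as observed.
extreme = 0
for _ in range(n_sims):
    heads = sum(random.random() < 0.5 for _ in range(n_flips))
    if abs(heads - 50) >= abs(observed_heads - 50):
        extreme += 1

p_value = extreme / n_sims  # two-sided p-value, roughly 0.057 here
print(p_value)
```

The court’s phrasing (“the likelihood that the difference … occurs purely by chance”) is the common misreading: the p-value conditions on the null being true, it does not give the probability that the null is true.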

“They identify appropriate types of functions to model a situation, adjust parameters to improve the model, and compare models by analyzing appropriateness of fit and making judgments about the domain over which a model is a good fit. Students see how the visual displays and summary statistics learned in earlier grade levels relate to different types of data and to probability distributions. They identify different ways of collecting data—including sample surveys, experiments, and simulations—and the role of randomness and careful design in the conclusions that can be drawn.”

The claim that students can succeed in a rigorous statistics class without needing to know the sort of things taught in Algebra II referenced in the paragraph above would appear … suspect.

I am not sure what’s in an Algebra II course these days, so I looked this up for California public schools (high school):

http://www.cde.ca.gov/ci/ma/cf/documents/mathfwalgebra2lmgjl.pdf

The third page discusses what is learned in Algebra II. Overall I think it’s useful stuff in the description (how well it’s taught and how well the insights reach the children… is going to be pretty variable), but the part about dividing polynomials with remainder I remember being taught and it was a nightmare of by-hand long division all over again. I have a sense that maybe it would be better to include some computer programming and let the computer do the calculations, so you could talk more about what the idea is behind dividing polynomials. Plus, then they’d get some experience in giving an algorithm for something in a formal language.
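To sketch what I mean (my own toy code, not anything from the framework document): a dozen lines of Python handle the long-division bookkeeping, leaving class time for the idea that dividend = divisor × quotient + remainder.

```python
def poly_divmod(dividend, divisor):
    """Polynomial long division. Coefficients run from highest degree
    down, e.g. [1, 2, 1] means x^2 + 2x + 1. Returns (quotient, remainder)."""
    out = list(dividend)
    quotient = []
    steps = len(dividend) - len(divisor)
    for i in range(steps + 1):
        coef = out[i] / divisor[0]       # leading-term coefficient
        quotient.append(coef)
        for j, d in enumerate(divisor):  # subtract coef * divisor
            out[i + j] -= coef * d
    remainder = out[steps + 1:]
    return quotient, remainder

# (x^3 - 2x^2 + 3) / (x - 1): quotient x^2 - x - 1, remainder 2
q, r = poly_divmod([1, -2, 0, 3], [1, -1])
print(q, r)  # [1.0, -1.0, -1.0] [2.0]
```

Writing the algorithm down in a formal language is itself the exercise: students have to say precisely what “bring down the next term” means.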

I don’t know that they make things any better; my expertise may not be particularly helpful in determining the truth when I’m paid by one side of the argument, whatever the truth may be. Not too surprisingly, I think this is exactly the same pressure felt by statistical consultants on academic projects.

That is, this seems to me like judges getting the conventional stats training and not anything much worse than that. Now the question is whose fault it is that this rates as conventional stats training?

The SCOTUS decision on whether genes can be patented is full of the same manner of nonsense. Lots of sentences are literally correct. But the decision is not logically consistent with itself, in a way that shows they don’t know the biology and/or don’t care. They had several amicus briefs that carefully laid out the science at hand, and I personally know that some of the clerks had conversations with geneticists.

But it’s not the least bit surprising. Logical inconsistencies are not unusual in SCOTUS decisions that weave together at least 5 strong-minded opinions into something like a consensus. When that decision process has to work over a complicated domain like stats, biology, human behavior, or economics, where almost everyone in the room will have had a few undergrad classes’ worth of background *at the absolute best*, what could we really expect?

“In fact, popular college math courses like Statistics do not require intermediate algebra. Studies show that the very same students, whose futures are threatened by algebra policies, can pass a rigorous college-level statistics course without knowing intermediate algebra.”

It makes you wonder just how rigorous these college-level stats courses are. It may also serve as further evidence that lots of highly regarded lawyers, whose eyes would cross three pages into Student’s famous paper, think that ignorance of mathematics is no impediment to properly interpreting statistical analyses.

Sure, there are a lot of pseuds out there pretending to be experts. But my guess is that these people don’t even take the step of asking *any* experts. If they did, they’d at least have a chance to get things straightened out.

I don’t ask that Supreme Court judges be numerate, any more than I would demand that of Princeton psychology professors. I only would hope that these decision makers and opinion leaders would recognize the bounds of their own expertise and bring in experts as needed.

“Once we know the SEM for a particular test and a particular test-taker, adding one SEM to and subtracting one SEM from the obtained score establishes an interval of scores known as the 66% confidence interval. See AAMR 10th ed. 57. That interval represents the range of scores within which “we are [66%] sure” that the “true” IQ falls. See Oxford Handbook of Child Psychological Assessment 291 (D. Saklofske, C. Reynolds, & V. Schwean eds. 2013).”
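Plugging hypothetical numbers (mine, not the Court’s) into that recipe, and checking the “66%” figure against the normal error model it presumably comes from:

```python
from statistics import NormalDist

# Hypothetical numbers, not taken from the opinion: an obtained score
# of 71 and an SEM of 2.16.
obtained, sem = 71, 2.16
interval = (obtained - sem, obtained + sem)
print(round(interval[0], 2), round(interval[1], 2))  # 68.84 73.16

# Under a normal measurement-error model, an interval of +/-1 SEM
# around the obtained score covers about 68% of the error distribution;
# the conventional figure is ~68%, not 66%.
coverage = NormalDist().cdf(1) - NormalDist().cdf(-1)
print(round(coverage, 4))  # 0.6827
```

So even the quoted passage’s headline number appears to be a slightly garbled version of the standard ~68% coverage of a ±1 SEM interval.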

Then, in discussing NHST, it explains the “logic” of it. It is, you see, proof by contradiction. If H0 is “X doesn’t cause Y” and NHST knocks over H0, then X must cause Y. Isn’t science cool?

You should read some of the opinions that turned on a court’s interpretation of confidence intervals. You’d probably conclude that they’d do more justice if they interpreted tea leaves or chicken entrails instead.

Sure, I have no problem with the idea that a bunch of middle-aged and elderly judges would be clueless about statistics, a field where awareness of the key ideas has been changing rapidly in recent years. I do have a problem with these judges not recognizing that they could use expert help on the matter, considering that their positions give them access to high-quality expert advice whenever they want it.
