Statistics in a world where nothing is random

Rama Ganesan writes:

I think I am having an existential crisis.

I used to work with animals (rats, mice, gerbils, etc.). Then I started to work in marketing research, where we did have some kind of random sampling procedure. So up until a few years ago, I was sort of okay.

Now I am teaching marketing research, and I feel like there is no real random sampling anymore. I take pains to get students to understand what random means, and then the whole lot of inferential statistics. Then almost anything they do – the sample is not random. They think I am contradicting myself. They use convenience samples at every turn – for their school work, and the enormous amount of online surveying that gets done. Do you have any suggestions for me?

Other than, say, something like this.

My reply:

Statistics does not require randomness. The three essential elements of statistics are measurement, comparison, and variation. Randomness is one way to supply variation, and it’s one way to model variation, but it’s not necessary. Nor is it necessary to have “true” randomness (of the dice-throwing or urn-sampling variety) in order to have a useful probability model.

For example, consider our work in Red State Blue State, looking at patterns of voting given income and religious attendance by state. Here we did have random sampling—we were working with survey data—but even if we’d had no sampling at all, if we’d had a census of opinions of all voters, we’d still have statistics problems. So I don’t think random sampling is necessary for statistics.

To answer your question about nonrepresentative samples, there I think it’s best to adjust for known and modeled differences between sample and population. Here the idea of random sampling is a useful start and a useful comparison point.
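For instance, here is a minimal poststratification-style sketch of that adjustment idea. The strata, population shares, and outcomes below are all invented for illustration; a real application would involve survey weights or model-based adjustment.

```python
# Minimal poststratification sketch; strata, shares, and outcomes are invented.
import numpy as np

# Known population shares for a single stratifying variable (e.g., age group).
population_share = {"18-34": 0.30, "35-54": 0.35, "55+": 0.35}

# A convenience sample of (stratum, outcome) pairs, skewed toward 18-34.
sample = [("18-34", 1), ("18-34", 0), ("18-34", 1), ("18-34", 1),
          ("35-54", 0), ("35-54", 1), ("55+", 0)]

raw_mean = np.mean([y for _, y in sample])  # unadjusted sample mean

# Poststratified estimate: average within each stratum, then weight by the
# known population shares instead of the (skewed) sample shares.
post_mean = 0.0
for stratum, share in population_share.items():
    ys = [y for s, y in sample if s == stratum]
    if ys:  # empty cells would need a model (e.g., MRP) in practice
        post_mean += share * np.mean(ys)

print(f"raw mean = {raw_mean:.2f}, poststratified mean = {post_mean:.2f}")
```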

Ganesan writes back:

Yes but all we seem to teach students is significance testing where randomness is assumed.

How far can I get away with saying that t-tests, ANOVAs are ‘robust’ to violations of this assumption??

My reply:

One approach is to forget the t tests, F tests, etc. and instead frame problems as quantitative comparisons, predictions, and causal inferences (which are a form of prediction of potential outcomes). You get the conf intervals, s.e.’s, etc. from a random sampling model that you recognize is an approximation. This all loops back to Phil’s recent discussion.
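As a minimal sketch of that reframing (with invented group values): compute the comparison of interest and attach a standard error from the random-sampling approximation, keeping in view that the approximation is just that.

```python
# A comparison of two groups with a standard error computed as if the data
# were a random sample. All values are invented for illustration.
import numpy as np

group_a = np.array([4.1, 3.8, 5.0, 4.6, 4.2, 3.9, 4.8])
group_b = np.array([3.2, 3.5, 2.9, 3.8, 3.3, 3.6])

diff = group_a.mean() - group_b.mean()
se = np.sqrt(group_a.var(ddof=1) / len(group_a) +
             group_b.var(ddof=1) / len(group_b))

print(f"difference = {diff:.2f}, approximate 95% interval = "
      f"({diff - 1.96 * se:.2f}, {diff + 1.96 * se:.2f})")
```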

56 thoughts on “Statistics in a world where nothing is random”

    • Indeed. The word ‘random’ should be struck from the language for having no meaning, and the misleading mathematical term ‘random variable’ replaced, preferably by the word ‘variable’!

    • Yep, this.

      The only place I can see a use for the word random is in computer programming/Monte Carlo methods. If I call rand() then it’s random (of course, it’s really deterministic. It’s only random because I don’t know what state the generator is in!).
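      A minimal illustration of that point (the seed and number of draws are arbitrary):

```python
# A pseudo-random generator is deterministic once you fix its state:
# the same seed reproduces exactly the same "random" draws.
import random

random.seed(42)
first_run = [random.random() for _ in range(3)]

random.seed(42)
second_run = [random.random() for _ in range(3)]

print(first_run == second_run)  # True: it only looks random because we
                                # don't usually track the generator's state
```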

  1. I agree that statistics does not require randomness. Also, Ganesan seems to use the term ‘random’ to mean ‘uniformly drawn’, which seems overly restrictive.

  2. I’ve been having some of the same questions as Rama. I’m a social worker and quantitative social scientist not a statistician. My understanding has always been that random or probability samples are required for statistical inference (at least if one comes out of the design based as opposed to model based tradition). This is because the sampling distributions used in inference are based on the assumption of such samples. In fact, I recently had an email conversation with a statistician who is somewhat critical of machine learning and data mining because many in this field seem to ignore issues having to do with sampling, inference, and how these relate to their algorithms. Is this totally misguided in your view?

    • I’d say the criticism is probably ~20 years behind. “Statistical” learning is ubiquitous in ML now. To be honest, ML and statistics are becoming somewhat difficult to distinguish precisely. The main points of distinction seem to be the size of datasets and whether the emphasis is on prediction (ML) vs. modeling (statistics), but these lines are not clear cut either.

      • You might enjoy reading this paper:
        http://www.exp-platform.com/Pages/PuzzingOutcomesExplained.aspx

        From the abstract: “Online controlled experiments are often utilized to make data-driven decisions at Amazon, Microsoft, eBay, Facebook, Google, Yahoo, Zynga, and at many other companies. While the theory of a controlled experiment is simple, and dates back to Sir Ronald A. Fisher’s experiments at the Rothamsted Agricultural Experimental Station in England in the 1920s, the deployment and mining of online controlled experiments at scale—thousands of experiments now—has taught us many lessons. These exemplify the proverb that the difference between theory and practice is greater in practice than in theory. We present our learnings as they happened: puzzling outcomes of controlled experiments that we analyzed deeply to understand and explain. Each of these took multiple-person weeks to months to properly analyze and get to the often surprising root cause.”

  3. I think that more important than ‘randomness’ is ‘fairness’. In marketing stats, I’ve seen a lot of blatantly unfair samples touted as random, fair samples.

    For instance: in one program, the company called up new customers with an introduction call. The analysis compared customers that had completed the call with customers that hadn’t (they had either not picked up the phone or hung up halfway through the call).
    The analyst had claimed that finishing the call constituted a random sample. At least they learned the buzzwords. When we got a chance to analyze the program with a randomly selected control group, we found no effect — all of the claimed positive effect was due to selection bias in the analysis.
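    A toy simulation of that kind of selection bias might look like the sketch below; every number is invented, and the “true” effect of the call is set to zero, yet the naive completer-vs-non-completer comparison still shows a lift.

```python
# Toy simulation of selection bias: the intro call has zero true effect, but
# more engaged customers are more likely to complete it and also spend more,
# so a naive completer-vs-non-completer comparison shows a spurious lift.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

engagement = rng.normal(0, 1, n)              # unobserved confounder
p_complete = 1 / (1 + np.exp(-engagement))    # engaged customers finish the call
completed = rng.random(n) < p_complete
spend = 50 + 10 * engagement + rng.normal(0, 5, n)  # the call itself adds nothing

naive_effect = spend[completed].mean() - spend[~completed].mean()
print(f"naive 'effect' of completing the call: {naive_effect:.1f}")
# Comes out around +8 even though the true effect is exactly 0.
```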

  4. Historically, “statistics” was seen more as the collection and presentation of numbers, something that confused me about nineteenth century mentions of the word until I figured it out. I assume the modern association of “statistics” with the meaning “cool mathematical conclusions we can present about random collections of numbers” comes along with some cool discoveries mathematicians made about the properties of random collections of numbers.

    I sympathize with any teacher who worries his colleagues will shun him if he takes statistics back to the age of Playfair, Quetelet and Florence Nightingale. Then again, I’m sure mathematicians have come up with cool things to say about ridiculously non-random collections of data now.

    In my business, I tend to shun the statistics of means, standard deviations and such like, in favor of quantiles, mainly because my customers (management) would be frightened by talk of standard deviations and the normal distribution. Curiously, they’re frightened by quantiles, and would like me to give them means, even though I try to explain that the mean would be very misleading.

  5. “I’m sure mathematicians have come up with cool things to say about ridiculously non-random collections of data now.”

    Very little that I am aware of (I wonder what others might think).

    I believe a lot of the problem was people talking, thinking and doing statistical things as if there was randomisation (i.e. calculating a confidence or credible interval and suggesting non-random errors had been dealt with or could be ignored).

    David Freedman went maybe the furthest on this with his chant – “no (at least frequentist) inference without randomisation”. On the other hand, there were some Bayesians who even argued that it did not matter at all whether randomisation was used in assigning comparison groups (I think Don Rubin convinced most that that was mistaken).

  6. I’ve always wondered what the definition of randomness is and whether or not randomness actually exists. Take the coin-flipping example. For a fair coin, we say that the bias is 0.5 and everything else is random. But theoretically, we could predict the exact outcome of a coin toss every single time, even with p=0.5, if we could measure things like the force of the toss, air flow, starting side, etc., such that there is no more randomness.

    To me, randomness is simply unmeasured covariates.

      • Coins are much too large for quantum physics to be relevant. There’s also no mechanism in a coin flip whereby quantum effects would be amplified to a macroscopic scale (as has been suggested in the case of brain function).

        One practical perspective is that – yes – if we knew enough about the state of the coin, its environment, and its initial conditions, we could model coin flips as deterministic. However, in many cases we are only interested in predictions and inferences for which treating the flip as a random process is an incorrect but sufficiently usable approximation, particularly given the limits of our knowledge of the system state in real situations.

        • Folks — lighten up! My only point is that the statement “randomness is simply unmeasured covariates” is wrong. Nothing to do with coins, RNGs or approximations.

        • But the statement is not wrong, because it is about the use of the word “randomness” in statistics. Quantum physics, of which I am sure we are all aware, is simply not relevant to this use.

  7. Randomness should not be treated as an objective (present or non-present) feature of the real world. Not even “proper random sampling” is really, objectively random. Randomness is based on how we look at things. In traditional subjective Bayesian statistics, random variables model how we assess situations in which we are uncertain. In frequentist statistics, a proper description of what randomness in probability models means is that *we think of* the process that generated the data as a random process.
    Because nonrandomness of the random number generator doesn’t seem important to us, we can think of “proper random samples” as random samples. This is all fine but doesn’t have to do with “real” randomness.
    With convenience sampling, we need to think hard (and check against the data) about what specific deviations from “proper random samples” could be caused by sampling, and we may model them with a more sophisticated probability model modelling some kind of “skewed randomness” that may still be wrong but more useful. Randomness is created by the observer.

    • Christian:

      I think that randomness of dice rolls is an objective feature of the real world. Sure, it is a macro-phenomenon arising from a chaotic dependence on initial states, but there are lots of objective features of the real world that are macro-phenomena. Tables and chairs are macro-phenomena too (after all, what are they but collections of atoms, temporarily bound together following quantum-mechanical laws), but they are objective features of the real world. I’d argue that the probability of a die landing “6” is of the same level of objectivity as the existence of a chair.

      • Andrew: Fair enough, but I think that whether you’re right about this or not is beyond observability, so don’t you think that it would be safer to not rely on it? How useful it is to model the die with a model assuming randomness doesn’t really depend on this.
        I wouldn’t expect you to argue that whether it is appropriate to analyze something using theory for random sampling depends on whether dice (that you believe are objectively random) or an OK pseudo-random number generator on a computer was used for the random sampling?

        • Christian:

          I’m not sure what you mean by “safer to not rely on it.” I think randomness is a useful model in some settings but not others. I’d use random variables to model pseudo-random numbers too. I agree there’s nothing special about dice; I just brought them up in response to your writing, “Randomness should not be treated as an objective (present or non-present) feature of the real world.” I responded with an example where I think randomness is an objective feature (although, sure, to respond to comment below, the distribution of the outcome is conditional on how the die is thrown; I’m picturing a hard throw where the die bounces off a couple of walls, not some sort of trick roll). As we discuss in Chapter 1 of BDA, I find it helpful to have some benchmark examples of randomness such as die rolls to use conceptually as calibration for frequency probabilities.

        • It probably all depends on how the word “objective” is used. To me the descriptions that you give quite clearly indicate that what we call “random”, even “objectively random”, depends strongly on the observer’s perspective, for example if what happens is complex enough that the observer cannot analyze it in detail.
          I’m still happy modelling it *as if* it were objectively random, but we should acknowledge that this is a decision made by a modeller.
          That said, I don’t think it’s all really objectively deterministic either; I think objectively random vs. deterministic can often not be distinguished by observation and I therefore think it’s safer if we don’t make ourselves dependent on which one it is in our reasoning.

        • Christian:

          To me, the probability that a particular die gives a 6 is a physical measurement, in the same way that the weight of a particular chair is a physical measurement. In either case, the measured value depends on how the observer takes the measurement.

        • Andrew: Are you referring to frequency as a (somewhat imprecise) measurement of probability? I do think that there are differences to the chair. Certainly the probability doesn’t only depend on the die but also on how it’s rolled. The precise measurement procedure here doesn’t only potentially change the outcome but also the definition of what is measured. But then I have some constructivist tendencies, so I am not the best person to defend the chair weight against similar charges that a mean philosopher could make.

        • Christian:

          If you want to define the chair weight as subjective, that’s your call. My point is that probabilities are measurements just as weights are measurements: either way, the measurement depends on what you’re measuring (the specific properties of the die or chair) as well as how it is measured (with the die, there is an ideal form of measurement where the die is rolled many times, each time bouncing a lot on hard surfaces; with the chair there is an ideal form of measurement using an accurate scale indoors with no wind). In both cases I see the concept of a true underlying probability or weight as being useful to the extent that the measurement is robust with respect to reasonable changes in the measuring conditions.

          As you may know from reading this blog, I prefer continuous to discrete concepts. In this case, I see the concept of “objective measurement” to depend on the robustness (or, as psychologists would say, reliability and validity, or internal and external coherence) of the measuring process. I chose the probability distribution of a die as my example because it can be easily rolled in a way that forgets its initial conditions, hence defining an objective probability distribution. I would feel much less comfortable defining the probability that the Jets win next week as an objective probability. I’m happy to use probability to model such outcomes but here I think we’re extending the physical concept of probability to apply in a new setting.

      • I agree that the randomness of dice rolls is objective.

        Note that the probability distribution varies across dice (for the most part minutely, but there are also some loaded dice), and also varies across individuals, such as the small number of skilled craps shooters. For casinos, the key is to make sure that the probability distribution does not deviate enough to exceed the ‘vig’ for some numbers (for example, by monitoring the dice in play, and by identifying and barring skilled shooters).

        … We don’t need to know the probability distribution to infinite precision for it to be an objective feature of the real world.

        • Paul: If skilled shooters can change the frequency distribution, I do not see how it is objective. Is there some notional ‘perfectly unskilled shooter’ that defines the ‘objective distribution’? If so, how is this person defined?

        • Ian,

          You could, for example, have a machine throw the die. The real point, though, is that for a wide range of sorts of throws, the dependence on initial conditions is chaotic, hence a random distribution of outcome conditional on a fairly broad set of initial conditions.

        • Andrew (sorry for the funny order, but there was no ‘reply’ link on your post):

          I do not see that a machine throwing the die helps. What would be its specification? I still see no hint of an `objective’ distribution; in fact, I see no distribution at all, just a deterministic mechanical system.

          I do not see that the chaos idea helps either. I agree that in practice controlling the initial conditions might be hard, and that this would mean that each roll might have different initial conditions leading to different outcomes, but to produce a probability distribution over outcomes still requires a probability distribution over initial conditions, and where does this come from? The ‘problem’ remains.

          I say ‘problem’, because I do not think there is a problem if one just gives up the idea of probabilities being a property of the world, as opposed to our knowledge of the world. I do not see the barrier to doing this. As Jaynes shows, it does not result in ‘subjectivity’, in the sense of ‘do what you want’.

  8. I do not understand how the probability of a die coming up six can be an objective property, when it can change with our state of knowledge. If a mechanism can be made that rolls a die so that it comes up six most of the time, then someone who knows this fact about the rolling mechanism will assign a different probability than someone who does not know. Could you explain what you mean a bit more?

    • The reason is that an individual’s state of knowledge is not the objective property (of the probability of a die coming up six).

      Your example shows this quite clearly. The person who does not know the rolling mechanism holds a belief that is different from another person who knows that the mechanism rolls 6’s most of the time. The second person’s belief about the objective property is more accurate than the first person’s. This accuracy is measured by comparison to the objective property.

      • I think there’s a subtler issue here. The “mechanism” example isn’t interesting at all; you could just say that the mechanism has its own “objective” probability which is weighted towards 6. The subtler issue, which I think Ian is trying to get at, is that if you measured enough state, you would know the outcome of the die with probability 1 – you know exactly how it will land.

        I think this is a bit difficult to wrap one’s head around – can dice be both completely deterministic and have a probabilistic description that is “objectively true”? I think the solution may be to view both as different descriptions of the same system – one describes its behavior for a single trajectory and one describes its behavior for an ensemble of related trajectories.

        To say that a die has probability 1/6 of coming up 6 is to describe how the system behaves over a hypothetical ensemble of trials with similar initial conditions, world states, and chaotic trajectories. In this sense, probability is an objective property of the die as a system, in the same way that “my car engine averages 30 miles per gallon on local roads” is an objective property of my car as a system. The MPG number encapsulates both microscopic characteristics of the car and its behavior over an ensemble of hypothetical trajectories.

        The view of the dice outcome as purely deterministic, given enough state information, is also valid. It is _not_ the case that knowledge somehow magically transforms nature. Instead, the deterministic view avoids conflicting with the probability description because it maps to the behavior of the dice for a specific trajectory, rather than describing the system over an ensemble of similar trajectories. Continuing the car analogy, you could say “my car went 10 miles on 1 gallon in the last hour as I was driving up highway X going uphill at a 30 degree incline in stop-and-go traffic” without contradicting the earlier statement about the car’s MPG.
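        A cartoonish sketch of that ensemble view: a “die” whose outcome is a deterministic but extremely sensitive function of its initial condition (the function and constants are invented for illustration, not a physical model).

```python
# A cartoon "die": completely deterministic given its initial condition, but
# so sensitive to it that an ensemble of nearby initial conditions produces
# roughly uniform outcomes. Purely illustrative, not a physical model.
import numpy as np

def deterministic_die(x0, k=10_000_000):
    # The outcome is a fixed function of x0; the large k means a change of
    # one part in ten million in x0 can flip the outcome.
    return int(np.floor(k * x0)) % 6 + 1

print("one trajectory, one definite outcome:", deterministic_die(0.123456789))

# An ensemble of throws whose initial conditions vary over a tiny interval.
rng = np.random.default_rng(1)
x0s = 0.1 + 1e-3 * rng.random(60_000)
outcomes = np.array([deterministic_die(x) for x in x0s])
print("ensemble frequencies:",
      [round(float(np.mean(outcomes == face)), 3) for face in range(1, 7)])
# Each face appears about 1/6 of the time, even though every individual throw
# is fully determined by its initial condition.
```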

        • That is indeed what I am getting at. There is no room for a property such as probability in the physical description of the rolling of the die: enough knowledge gives a probability 1 prediction.

          I do not think the ensemble idea works though, because it simply pushes the problem back a step (I am basically quoting the Jaynes chapter (http://omega.albany.edu:8008/JaynesBook.html) cited by BenE at the beginning of the comments here, which should really be read): an ensemble of initial conditions requires a probability distribution on initial conditions. Where does that come from?

          I also do not see a need for the ensemble idea, because the answer, as far as I can see, is much simpler (Jaynes again, because as far as I am concerned he has nailed it). This is that probabilities represent knowledge, and that probability theory is a form of logic for uncertain propositions. Probabilities are no more measurable than are logical truth values. There is a Jeffreys’ quote, which I will paraphrase from a paraphrase in a paper by Gallavotti, to the effect that someone speaking of an ‘unknown probability’ is either confusing ‘not knowing x’ with ‘not knowing the probability of x’, or is thinking of probability as frequency.

        • Ian:

          What Revo11 said. To put it another way (and as I put it in Chapter 1 of Bayesian Data Analysis), probability is a mathematical structure that corresponds to various scenarios, including physical probabilities, relative frequencies, and betting odds. I have no problem using probability theory as logic for uncertain propositions, and I also have no problem using probability theory for physically random events such as die rolls or radiation counts. One of these interpretations does not make the others invalid.

        • Andrew: I think your book is great. I agree with everything you say about model checking, calibration, etc.,and your general point of view about Bayesian methods. However, on this particular point, I do not agree. Here is why.

          You want to use the word ‘probability’ to refer to four very distinct concepts. I think this is to be avoided, because it frequently leads to confusion about the concepts themselves, as we are seeing in these comments. But it is particularly inappropriate here because: one of the concepts does not exist; while for two of the others there is better and less confusing terminology. In detail:

          1) Frequency of outcome. This is a measurable, empirical quantity: one can predict it and count the results. Probability as logic is not a measurable, empirical quantity. One does not predict it and measure it. Even if history were different, I think it would be important to give these two very different concepts different names (especially as we already have the perfect word ‘frequency’ to hand). But given the historical identification of probability and frequency, and all the very poor thinking, both leading to this idea and to which it has given rise, I think it is a big mistake to use the same word. Students in particular should be taught to distinguish clearly between these two concepts, especially students with mathematical but no scientific training.

          2) Quantum probability (e.g. radioactive decay). Quantum probability is really frequency of outcome in repeated experiments. This is its interpretation, both theoretically and empirically. Why is the word ‘probability’ used if ‘frequency’ is meant? Because physicists are taught, and most of them never unlearn, that probability *is* frequency, and so they conflate the two words.

          What is more, there is a very clear distinction in quantum physics between quantum probability/frequency and ‘ordinary’ probability, as embodied in the distinction between pure and mixed states. These are distinct mathematically, and are not conflated by physicists. For example, pure states have no entropy, and pure states evolve into pure states. The fact that they are so distinct gives rise to the controversy over the ‘black hole information paradox’, for example, where the separation is apparently undone. Jaynes, in classic papers, showed that mixed states are probability distributions as logic over quantum mechanical pure states. Physicists find this hard to accept because of the probability/frequency confusion, and instead talk about imaginary ‘ensembles’.

          The above issues, coupled with the interpretational wrangles arising from the inability of quantum physics to predict more than these outcome frequencies, mean that it is, first: very important to keep the word *quantum* in the description, to separate it from ‘ordinary’ frequency and probability as logic; and second: preferable to replace the phrase ‘quantum probability’ by ‘quantum frequency’. This should also help avoid bogus arguments about the relevance of quantum physics to probability as logic.

          By the way, the distinction between pure and mixed states is exactly analogous in classical mechanics, where pure states are point masses in phase space and mixed states are all other, non-zero-entropy distributions. Physicists find it equally hard to accept the epistemological status of these distributions, i.e. of classical statistical mechanics, and again talk about ‘ensembles’.

          3) Physical probability (as distinct from quantum probability). This idea simply does not exist. Classical chaos is no different from any other deterministic system in this regard. If we know the initial conditions exactly, then we can predict exactly what will happen. If we are uncertain about the initial conditions, for example because we cannot control them in an experiment, or cannot measure them, then we will be uncertain about our predictions. The rate of growth of this uncertainty with time is different for chaotic systems, but that is all.

          In either case, the uncertainty arises from our lack of knowledge of initial conditions, and this requires a probability (as logic) distribution to describe it: different uncertainty about initial conditions means a different probability distribution means different uncertainty about predictions. There is no property of the world here, just our uncertainty. The fact that classical phase space can be defined either as the space of initial conditions or as the space of solutions to the equations of motion shows that this identification is complete.

          This is also clear from probability theory itself. The probability of a particular classical trajectory given the initial conditions is a delta function. The only way to convert it into a non-zero-entropy probability distribution over trajectories is to introduce a probability distribution over initial conditions.

          In addition, but by-the-by, dice rolling is not that chaotic. It seems that it is possible to influence the roll outcome, even without building a dedicated machine.

  9. Pingback: Filter Bubbles and Selective Exposure: What can a librarian do?

  10. So now suppose that there is someone else who knows even more about the mechanism, so that they can predict with certainty the way the dice will roll. What is the objective probability now?

    • Paul, Andrew: I am sorry but I cannot seem to reply to the comments in-conversation; or rather, sometimes it works and sometimes it does not. I have tried with several browsers, without success. Is there a secret?

        • It also seems like there’s a depth limit to replying. In such cases, it seems fine to just reply at the permitted level (and include enough so that it’s clear what you’re replying to).

          … By the way, based on your discussion of distinct concepts of probability, I am sure you and I could determine and share an unambiguous terminology. Given the current state of things, I’m not sure what’s achievable in terms of a larger audience.

  11. Is Rama’s concern here really about not having a “random” sample or is simply about not knowing its “representativeness”–i.e., not knowing much about the sample frame or the population which the sample represents? When expressing his concern, Rama laments the lack of “real random sampling.” But his examples (e.g., internet sampling for marketing research) sound like problems of “representativeness.” That is, he doesn’t know to which population (or to what degree) he can generalize his t-test or ANOVA, etc. results.

    Nowhere do the statistical properties of t-tests or ANOVA F-tests, etc. depend on whether one’s sample is representative of a particular population (or even randomly drawn from that population). All that’s required for the F-test, for example, is that the different groups be normally distributed with equal variance.

    Maybe Rama’s discussion (especially with his students) needs to move towards what constitutes a representative sample, why that’s important, and why even the most beautiful-looking F-test result–when built from an internet convenience sample–can lead to wrong population generalizations.
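    As a small illustration of the point above about the F-test’s assumptions, here is a sketch using scipy on simulated groups; the group means, spreads, and sizes are arbitrary.

```python
# The one-way ANOVA F-test only uses within-sample assumptions (roughly normal
# groups with equal variances); it knows nothing about which population, if
# any, the groups represent.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
group_a = rng.normal(loc=10.0, scale=2.0, size=30)
group_b = rng.normal(loc=10.5, scale=2.0, size=30)
group_c = rng.normal(loc=12.0, scale=2.0, size=30)

f_stat, p_value = stats.f_oneway(group_a, group_b, group_c)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
# Whether this result generalizes to any real-world population is a separate
# question about how the sample was obtained, not about the test machinery.
```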

      • All the Ramas I know are male from India (often it’s short for Ramakrishna who was a Bengali mystic in the late 1800’s). This contributes nothing of substance to the statistical foundations discussion, but at least we’re learning about human culture right :-)

    • How do you define _representative_? Isn’t the point that one avoids the need to define _representative_, replacing it with the concept (and practice!) of a random sample? How do you know that a non-random sample is _representative_?

      • Most of the time, we’re not even interested in random-ness; ’tis enough for the sampling mechanism to be orthogonal to the variables of interest.
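        A quick simulated illustration of that point (all numbers invented): a selection mechanism unrelated to the variable of interest leaves the estimate roughly unbiased, while one related to it does not, regardless of any deeper “randomness”.

```python
# What matters for the estimate is whether the selection mechanism is related
# to the variable of interest, not whether it is "random" in any deeper sense.
import numpy as np

rng = np.random.default_rng(11)
y = rng.normal(50, 10, 1_000_000)   # variable of interest
z = rng.normal(0, 1, 1_000_000)     # something unrelated to y

orthogonal_pick = z > 1.0                                # depends only on z
correlated_pick = (y + rng.normal(0, 10, y.size)) > 55   # depends partly on y

print("true mean:            ", round(float(y.mean()), 2))
print("orthogonal selection: ", round(float(y[orthogonal_pick].mean()), 2))  # near truth
print("correlated selection: ", round(float(y[correlated_pick].mean()), 2))  # biased up
```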

  12. I recently started teaching MR part time; I’ve worked in marketing research for decades.

    I find the nested concepts of (1) population (2) sampling frame (3) sample design and (4) sample achieved to be helpful.

    Once the students get the idea that by surveying those who “like” the company on Facebook they are only reaching that group, they will be able to realize what their valid population for conclusions is. I have to repeat these points quite a bit and put them as required elements of projects, but either it sinks in or at least they can decently parrot it back.

    Marketing research has almost no random samples outside of CRM, and everything must be done “as if”, with attempts at bias correction.

    Rama, I’ve not found a good textbook. What are you using?

    • I like both your nested concepts and the Facebook example. I too have done MR for decades. …And to take your point one step further, at least with human subjects, very rarely do we ever *truly* know whether our sample is representative of a particular population for every aspect of a particular measure of interest. If we ever did know that much about that measure of interest in the population, we likely wouldn’t need the darn sample or study! …Though that idea might just confuse your students.

      I also like the analogy of the drunk man looking for his lost keys under a streetlight. Someone comes up and asks the drunk if he’s lost something there under the light. The drunk replies no, he lost his keys over in the bushes, but there’s more light here under the streetlamp. At their worst, convenience samples can function sort of like that streetlight.

  13. I purposely avoided this discussion before the break, but now I am lost.

    1. See for example, this link http://www.surveygizmo.com/survey-blog/significant-differences-and-convenience-samples/

    “….. but again doing statistical testing in a convenience sample is pointless since the assumptions about probability sampling are violated.”

    So my original question still stands: why bother to explain statistical tests in painstaking detail, and then say they will hardly ever have a chance to use them?

    2. I am a woman.
    (Rama can be male or female depending on emphasis which makes the meaning completely different)
    3. The marketing research textbook I use — I really do not recommend it because it is biased, dated and incorrect in parts. But I have not yet found anything better.

    • Rama:

      I disagree with the quoted statement, “doing statistical testing in a convenience sample is pointless since the assumptions about probability sampling are violated.” Indeed, statistical tests, estimates, etc in a convenience sample require assumptions—but statistical methods always require assumptions. Even if you have a random sample, you’re still usually interested in generalizing to new cases that were not in the original population.

      • “Moreover, nonsampling errors become increasingly important relative to sampling errors as sample size increases, and render meaningless confidence intervals computed by the usual statistical formulas which take into account only sampling errors.”

        From Churchill and Iaccobucci, Marketing Research p 523

        • Yes, and one strategy for discovering and exploring those “nonsampling errors” is to go ahead and compute the intervals and see how they compare to reality. “wrong” != “useless”
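          For example, a toy version of that exercise (all numbers invented): draw a deliberately biased convenience sample from a known population, compute the usual formula-based interval, and compare it to the truth.

```python
# Biased convenience sample from a known population, with the usual
# formula-based 95% interval; comparing it to the truth exposes the
# nonsampling error that the formula ignores.
import numpy as np

rng = np.random.default_rng(3)
population = rng.normal(loc=100, scale=15, size=1_000_000)
true_mean = population.mean()

# Convenience sample: units with low values never respond.
sample = population[population > 95][:500]

est = sample.mean()
se = sample.std(ddof=1) / np.sqrt(len(sample))

print(f"true mean = {true_mean:.1f}, "
      f"95% interval from the convenience sample = "
      f"({est - 1.96 * se:.1f}, {est + 1.96 * se:.1f})")
# The interval reflects sampling noise only; how far it lands from the truth
# is one way to see the nonsampling error.
```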

    • Rama, I think that’s just a confusion caused by bad teaching.

      Because it’s easy to understand, people are taught that statistics is based around trying to draw inferences about physical populations from random samples – so they conclude that if these don’t exist, statistical methods aren’t appropriate. But that isn’t the case. Statistics is about trying to draw inferences about data using models which are characterized by parameters. The trick is to try and choose a model and parameters which allow you to say something interesting about the data and the real world. Now sometimes the data is from a probability sample, the model is one of random sampling, and the parameter is a characteristic of a population. But that’s just one special case – and statistical tests can be used in a much broader range of situations.

      The big problem for statisticians is that, because the idea of a probability model is hard to explain, lots of people get a very limited view of statistical methods.

      • What Alex said. We want our models to be reasonable. In real life they are never perfect anyway. It’s important to be aware of problems with a model, problems such as a sample not being random. But that’s no reason to give up.

  14. Andrew and Alex,

    My question is not so much about whether to give up. It is about what to Tell My Students. Because they do ask:

    “Now that we understand what is random (probabilistic) sampling, and sampling error, and inferential testing, what do we do when we only have convenience samples?”

    And yes, they do ask this, because by the time I am done with them they understand things well enough to ask Such Questions!!

    In any case, I think my answer should be this:

    “What we teach you here is simple inferential testing. For more advanced methods, you might think about taking statistics courses or becoming a graduate student.”

  15. Pingback: Somewhere else, part 28 | Freakonometrics
