Does quantum uncertainty have a place in everyday applied statistics?

Several months ago, Mike Betancourt and I wrote a discussion for the article, Can quantum probability provide a new direction for cognitive modeling?, by Emmanuel Pothos and Jerome Busemeyer, in Behavioral and Brain Sciences. We didn’t say much, but it was a milestone for me because, with this article, BBS became the 100th journal I’d published in.

Anyway, the full article with its 34 discussions just appeared in the journal. Here it is.

What surprised me, in reading the full discussion, was how supportive the commentary was. Given the topic of Pothos and Busemeyer’s article, I was expecting the discussions to range from gentle mockery to outright abuse. The discussion that Mike and I wrote was moderately encouraging, and I was expecting this to fall on the extreme positive end of the spectrum.

Actually, though, most of the discussions were positive, and only a couple were purely negative (those would be “Quantum models of cognition as Orwellian newspeak” by Michael Lee and Wolf Vanpaemel, and “Physics envy: Trying to fit a square peg into a round hole,” by James Shanteau and David Weiss). We expressed some vague skepticism, but it’s hard for me to be really negative about the idea, given that classical probability theory is not actually correct, and we do indeed live in a quantum world (otherwise all our tables and chairs would fall apart, for one thing). I certainly see no logical reason why our models of probability and uncertainty should be restricted to the “Boltzmannian” simplification.

79 thoughts on “Does quantum uncertainty have a place in everyday applied statistics?”

    • Looks like my institution has a subscription (they should stop wasting their money like that). I believe it is legal for me to put a copy here for my own personal use in the course of this discussion (I don’t always log in from somewhere that allows me to get through paywalls): http://www.cs.sun.ac.za/~kscheffler/QP_cognition.pdf

      Of course I do not authorize anyone else to access this copy. That would be illegal.

    • Odd. I am at home and not logged into my university’s proxy server, yet I got the article without any problems.

        • Yes, it must have been my own error. I thought I tried both links and ended up at the paywall both times. But this morning I tried again and the second link took me right to a pdf.

          Sorry for the noise…

  1. I’m stumped. The paper starts by talking about (but not defining, I guess it’s supposed to be obvious to all) something called quantum probability, which has supposedly been “the dominant probabilistic approach” in physics for nearly 100 years. Now (to my knowledge, correct me if I’m wrong), quantum physics is a non-classical physical theory based on good old classical probability. Resorting to wikipedia (http://en.wikipedia.org/wiki/Quantum_probability) reveals that there is indeed such a thing as “quantum probability”, but it was only developed in the 1980s and certainly does not look (based on glancing at the wikipedia entry) like something to which the word “dominant” could be applied. Is this what they are talking about, or are they really claiming that quantum physicists invented a new non-classical probability theory in the 1920s?

    • Konrad:

      No, this is basic physics. In classical mechanics, particles follow so-called Boltzmann statistics (i.e., what we think of as the laws of probability). In quantum mechanics, particles follow Fermi-Dirac or Bose-Einstein statistics. Another example (discussed in our article) is the famous two-slit experiment. In quantum probability you get Heisenberg’s uncertainty principle and there is no underlying joint distribution describing all possible observables. In classical probability there is no uncertainty principle; you can model everything using a joint distribution. Classical probabilities are just amplitudes; quantum probabilities have amplitudes and phases; hence the interference that you see in a one-slit or two-slit experiment. This stuff is real, and it’s not new.

      • I’m not sure I’d cite Fermi-Dirac or Bose-Einstein statistics as examples of “quantum statistics”. Given the spin-statistics theorem (i.e. the Pauli Exclusion Principle) for the two different types of particles, the calculation of the distributions is entirely classical (or “Boltzmann” if you like). There is some “structure” to the problem which has a quantum origin, but the actual derivation is standard probability stuff.

        In the theory of inventory control, which can be used to decide what items to order to restock a large grocery store, they sometimes use an entire family of distributions which include as special cases the Fermi-Dirac and Bose-Einstein distributions. The entire family is derived using standard statistical notions without a hint of quantum mechanics.

        • A better phrase might be “quantum probability” rather than “quantum statistics”. The point was that the Fermi-Dirac and Bose-Einstein distributions are derived and used in ways entirely familiar to regular statisticians. There is a quantum element being put into the derivations, but that’s not really related to the nature of the probabilities used. Because of that, you can find purely classical problems, like inventory control, which provide the same structure and lead to the same (supposedly “quantum”) distributions.
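
          To make the combinatorial point concrete, here is a minimal enumeration in Python (my own sketch, nothing quantum in it): the Maxwell-Boltzmann, Bose-Einstein, and Fermi-Dirac occupancy distributions differ only in which configurations you count as equally likely.

          from itertools import product, combinations_with_replacement
          from collections import Counter

          n_states, n_particles = 3, 2

          # Maxwell-Boltzmann: distinguishable particles, every assignment equally likely
          mb = [tuple(sorted(c)) for c in product(range(n_states), repeat=n_particles)]

          # Bose-Einstein: indistinguishable particles, every multiset equally likely
          be = list(combinations_with_replacement(range(n_states), n_particles))

          # Fermi-Dirac: indistinguishable particles plus exclusion (no shared state)
          fd = [c for c in be if len(set(c)) == n_particles]

          def dist(configs):
              counts = Counter(configs)
              total = sum(counts.values())
              return {cfg: n / total for cfg, n in sorted(counts.items())}

          for name, configs in [("MB", mb), ("BE", be), ("FD", fd)]:
              print(name, dist(configs))

          The sum and product rules are the same throughout; only the counting convention changes.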

      • When we call this stuff “quantum”, is there any quantity that’s actually quantized here? Or is that just a legacy quirk?

        • It depends. To take the simplest example: whether the energy is quantized depends on the potential. In some cases the energy won’t be quantized, or will be quantized within certain ranges and continuous in others. The Fermi-Dirac/Bose-Einstein cases deal with quantized energy states, so the distributions are over a discrete, countable set.

        • Decisions are quantized. People have made the connection between the act of decision making (which closes off future possibilities corresponding to the options not chosen) and the act of measurement in quantum mechanics.

        • Thanks!

          I don’t like the analogy about decisions because it lacks the observer effect that’s so crucial to measurement.

        • To clarify what I mean: The fact that what I do now affects my possibilities at a future point in time is almost tautological.

          OTOH, the fact that your discreetly observing me do something changes my future wasn’t so intuitive.

        • While my fingers hovered over the keyboard, I was mentally exploring a whole space of responses to ‘decisions close off future possibilities is almost tautological’. Clever things to say. Ways to conclude. Ideas to communicate. Then I started typing this meta-description, and most of those potential responses were flushed from my conscious mind. Now I was, and am, coming up with continuations to the single post I’m writing. The quantum connection is that the branching paths I consider approaching a decision can influence each other: I can come up with an idea I want to communicate via one path, and then decide it fits better in a different path. I can be seen as 80% certain I’m going to write a particular response (or buy a particular car, or see a particular movie), and in the end I do or don’t as a single, concrete act. The observer is just the moment where I actively commit to a decision. I think the analogy is useful.

      • In my understanding quantum physics (like any other theory in physics) posits a system with a state that evolves according to a set of physical laws. The system state in question happens to have amplitudes and phases, but this has nothing to do with probability theory. The system also has the property that probability distributions for certain observables can be obtained as a function of system state (specifically by calculating the amplitude-squared of the right state variables) – but these are just ordinary (classical) probability distributions. So I don’t see where any non-classical notions of probability come in?

        It might be useful to compare with non-quantum systems, e.g. in thermodynamics, which also have the property that probability distributions for certain observables can be obtained as a function of system state. A key difference is that the typically used state variables (e.g. temperature) in thermodynamics are understood to give only an incomplete description of the system – so the probability distribution is explicitly a description of our incomplete information state rather than of non-determinism. In contrast, in quantum mechanics the system state is claimed (at least in the Copenhagen interpretation) to be a complete description, so that the probability distribution is interpreted (again, in the Copenhagen interpretation) as describing non-deterministic aspects of the system. This points at a big difference in the physics of the two cases, but not at any difference when it comes to probability theory.

        • konrad, you can get a distribution for position P(x) and a distribution for momentum P(p) as you say, but they’re not entirely “classical”. There doesn’t seem to be a way to get a joint distribution P(x,p) whose marginal distributions are equal to P(x), P(p). The closest we can get to this is the Wigner Distribution (http://en.wikipedia.org/wiki/Wigner_quasiprobability_distribution) which clearly isn’t a classical joint probability distribution.
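
          To see numerically why the Wigner function isn’t a classical joint density, here is a small sketch (mine, using the standard closed form for the first excited oscillator state with hbar = 1): it dips negative, yet its marginals are perfectly good probability densities.

          import numpy as np

          # Wigner function of the n = 1 harmonic-oscillator state (hbar = 1):
          # W(x, p) = (1/pi) * exp(-(x^2 + p^2)) * (2*(x^2 + p^2) - 1)
          def wigner_n1(x, p):
              r2 = x**2 + p**2
              return np.exp(-r2) * (2.0 * r2 - 1.0) / np.pi

          print(wigner_n1(0.0, 0.0))  # -1/pi < 0: no classical joint density does that

          x = np.linspace(-6.0, 6.0, 1201)
          p = np.linspace(-6.0, 6.0, 1201)
          X, P = np.meshgrid(x, p)
          W = wigner_n1(X, P)

          # Integrating out p recovers |psi(x)|^2 = (2/sqrt(pi)) x^2 exp(-x^2) >= 0
          marginal_x = np.trapz(W, p, axis=0)
          print(np.trapz(marginal_x, x))  # ~1.0: a genuine probability density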

        • Clearly you know a bit more about this than I do. As I understand it, in QM you can’t simultaneously measure both x and p to arbitrary accuracy. So if they’re accurate measurements of those two quantities, they must necessarily be measurements of two separate states, the second one perturbed by the first measurement. Because of this it seems reasonable that the joint distribution wouldn’t be available in general; certainly there’s no way to get data to confirm or deny such a joint distribution, due to the Heisenberg measurement uncertainty.

          I heard recently on the news about people running experiments over and over where they take repeated very low energy measurements of the position and momentum of particles. There was something about how they managed to violate Heisenberg’s law. But in further explanation this sounded like it was just typical popular-physics hyperbole. What they had really managed to do seemed to be to create a repeated experiment where the multiple data points over many runs of this experiment gave them the ability to infer more about the states than Heisenberg’s uncertainty would allow for any *one* measurement. This sounded like it was in essence the application of Bayesian statistics to inference about QM states that were assumed to be identical.

          Wish I had a link to that, but it was something I heard on the radio while commuting a week or two ago.

        • Doing a little google detecting I suspect it’s related to this:

          http://prl.aps.org/abstract/PRL/v109/i10/e100404

          Anyway, I haven’t followed up on this stuff, but it seemed clear to me from the popular-press hype that it primarily contributes to the muddle. The Heisenberg uncertainty principle is often thought of as coming from a disturbance that the measurement causes, but even in classical waves there is an uncertainty between frequency and location. It’s fundamental, so in some sense even before you measure anything you know there’s no way to define a precise frequency and a precise location for a wave at the same time.

        • http://arstechnica.com/science/2012/09/weak-measurements-show-quantum-uncertainty-is-inherent/

          gives a decent popular explanation; sounds like the main point is about the inherent inability to determine frequency and position of a wave independent of the “perturbing” caused by the apparatus. Something which does not have a mathematical definition in terms of simultaneous values obviously wouldn’t be expected to have much of a joint probability distribution.
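
          The classical version of the point is easy to check numerically. A sketch (my own, not from the article): for a Gaussian pulse, the rms width in time and the rms width in angular frequency multiply to 1/2, the Fourier minimum, before any “perturbing” apparatus enters the story.

          import numpy as np

          t = np.linspace(-50.0, 50.0, 2**14)
          dt = t[1] - t[0]
          signal = np.exp(-t**2 / (2 * 2.0**2))  # Gaussian pulse, sigma = 2

          # rms width in time, treating |signal|^2 as a density
          p_t = np.abs(signal)**2
          p_t /= np.trapz(p_t, t)
          sigma_t = np.sqrt(np.trapz(t**2 * p_t, t))

          # rms width in angular frequency, from the power spectrum
          spectrum = np.fft.fftshift(np.fft.fft(signal))
          omega = 2 * np.pi * np.fft.fftshift(np.fft.fftfreq(len(t), d=dt))
          p_w = np.abs(spectrum)**2
          p_w /= np.trapz(p_w, omega)
          sigma_w = np.sqrt(np.trapz(omega**2 * p_w, omega))

          print(sigma_t * sigma_w)  # ~0.5: the classical Fourier limit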

        • Actually, that’s a common misconception (due to the fact that QM books seldom deal with this issue), but you really can get a joint distribution for any quantum state. The problem is, you can get an infinite number of them, and QM doesn’t offer a rule for picking out which one is physical. Leslie Ballentine has a chapter on this in his “Quantum Mechanics: A Modern Development” (originally 1998 by World Scientific, but you can get a Dover edition now). Let me be lazy and just quote from his Chapter 15 (pages 406-407):

          “It is sometimes said that such a quantum phase space distribution cannot exist because of the indeterminacy principle (Sec 8.4), but that is not true. In order to satisfy the Heisenberg inequality (8.33), it is sufficient that [the joint distribution P(x,p)] should have an effective area of support in phase space of order [2*pi*hbar] (the numerical factor depends on the shape of the area), so that the product of the rms half-widths of [the marginal distributions P(x) and P(p)] is not less than [0.5*hbar]. In fact, for any [quantum state rho] there are infinitely many functions [P(x,p)] which satisfy the three equations above (Cohen, 1986). The problem is that no principle has been found to single out any one of them for particular physical significance.”

          Sorry for all of the bracketed substitutions, but I’m not sure how to do mathematical symbols here (or even if it is possible) and I didn’t want to misquote. In case you’re interested, the Cohen-86 reference is:

          Cohen, L. (1986), “Positive and Negative Joint Quantum Distributions,” pp 97-117, in Frontiers of Nonequilibrium Statistical Physics, ed. G.T. Moore and M.O. Scully (Plenum, New York).

          I don’t mean to be contrary, but I thought you might be interested….

        • :) Ballentine’s QM textbook is superb. Not only is it foundationally/interpretationally coherent and consistent – dispelling common misconceptions like that phase-space one and the one about the H.U.P. and ‘measurement disturbances’ – but it also derives the actual physics of ordinary QM in a modern, symmetry-algebraic way (although I think a brief mention of Inönü-Wigner contraction (and maybe even Koopman-von Neumann CM) would’ve helped to reinforce the idea that QM is ‘natural’ and ‘more fundamental’ than CM).

        • I seem to be missing something here (very likely since I have no background in QM and instead am taking mental shortcuts, translating the concepts back into more familiar notions – e.g. the (p,x) Fourier transform pair into (t,f) from signal processing, of which I have vague memories – I mention this because it’s worth bearing in mind that some of the issues usually associated with QM arise with any old classical Fourier pair): in my understanding, (p,x) is neither a state variable (because no quantum state corresponds to a specific (p,x) pair) nor an observable (since it is not possible to observe p and x simultaneously) – so before talking about whether a distribution for (p,x) can be obtained, we should talk about what the notion of such a distribution might mean and whether such a quantity is sensibly definable in the first place.

          We can trivially define (p,x) as the concatenation of p and x, but this doesn’t mean it makes sense to talk about a joint distribution on (p,x): any correlation between p and x is in principle unmeasurable, unidentifiable and probably doesn’t make any sense to define.

  2. There’s little hope of sorting out the quantum muddle, but a couple of points are worth mentioning. In QM they’re predicting frequency distributions (i.e. histograms), and the equations are used to relate histograms in one instance to histograms in another. A frequency distribution is a physical thing which you measure, like mass, or the number of people who’ve seen Weekend at Bernie’s. A measured histogram can conceivably have any relation to another histogram, just like any other two physical quantities. How they are related depends on the physical theory. It’s only when you equate “frequency = probability” in every instance that you start expecting histograms to be related through the Kolmogorov axioms and are surprised when they’re not.

    Another point often missed is that Bayes Theorem is not fundamental to Bayesian Statistics. What’s fundamental are:

    (A) The interpretation of the probability distribution
    (B) the sum and product rules.

    That foundation leads to a host of connections between different probability distributions. Bayes’ theorem is just one example of that, but in different situations, distributions “transform” in different ways. If there’s a nuisance parameter, for example, which has to be integrated out, you don’t get a simple Bayes update with new data. There seems to be a cottage industry among Frequentists that involves finding a situation where the answer from A+B leads to something other than Bayes’ Rule and then triumphantly pronouncing Bayes bunk because we aren’t “updating” with Bayes’ Theorem. It’s quite possible the quantum muddle is an example of that. Also, I’m not aware of any of the founders of QM who had exposure to Bayesian ideas, since Bayesian ideas were at a low point in Germany at the time. (If anyone knows of any QM key players who were influenced by Bayesian ideas I’d love to hear about it.)
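
    For concreteness (standard material, my notation, not a quote from anyone): Bayes’ theorem falls out of writing the product rule in both directions, while marginalizing a nuisance parameter phi is a separate consequence of the sum rule rather than a “Bayes update”:

    p(\theta, y) = p(\theta)\,p(y \mid \theta) = p(y)\,p(\theta \mid y)
    \quad\Longrightarrow\quad
    p(\theta \mid y) = \frac{p(\theta)\,p(y \mid \theta)}{p(y)}

    p(\theta \mid y) = \int p(\theta, \phi \mid y)\,d\phi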

    • Uhh… interesting! Thank you. I knew about Bayesian statistics failing in areas of theoretical physics, e.g. http://euroflavour06.ifae.es/talks/Charles.pdf but I admit I didn’t know Bayes’ Theorem was not something fundamental in Bayesian Statistics. Do you have any link/reference explaining the true fundamentals of Bayesianism?

      About QM key players being influenced by Bayesian ideas, I doubt it, they explicitly reject them… And by the way, for reasons unrelated to Fisher. Niels Bohr was five years older than Ronald Fisher so Bohr’s entire formal education could not possibly be based on anything pushed by Fisher et al. and the same goes for Schrödinger and Heisenberg or the giant Poincaré who claimed

      “Enfin, les problèmes où le calcul des probabilités peut être appliqué avec profit sont ceux où le résultat est indépendant de l’hypothèse faite au début, pourvu seulement que cette hypothèse satisfasse à la condition de continuité” (In short, the problems where the calculation of probabilities can be applied with success are those where the result is independent of the established hypothesis as long as such hypothesis follows the continuity principle) [my translation]

      My point is that Bayesian ideas were rejected in science way before Fisher came on the scene, and it is not reasonable to expect any QM key figure in Germany (or anywhere else) to use Bayesian ideas, especially with figures like Poincaré explicitly embracing a frequentist approach.

      • “I knew about Bayesian Statistics failing in areas of theoretical physics”
        That guy you linked to had no idea what he’s talking about. He states at the end that classical statistics is useless for creating spam filters (it isn’t) and that “Bayes’ theorem does not apply in particle physics where one wants to interpret actual data in light of a given theory”. It’s been a long time since anyone put forth that howler with a straight face.

        “Do you have any link/reference explaining the true fundamentals of Bayesianism?”
        Sure: http://www.amazon.com/Probability-Theory-The-Logic-Science/dp/0521592712/ref=sr_1_1?ie=UTF8&qid=1368710637&sr=8-1&keywords=e.t.+jaynes

        “About QM key players being influenced by Bayesian ideas, I doubt it, they explicitly reject them”
        I didn’t claim they had rejected Bayesian ideas. I claimed they had never been EXPOSED to them. I have Einstein’s collected works and can’t find any mention or even a hint of Bayesian ideas. He doesn’t even mention them in order to “explicitly reject them”.

        • Ent:

          Thank you for the link; pity it’s so expensive to find out about the true foundations of Bayesianism. Any free link or articles with examples of how probability distributions can connect/update to others without using Bayes’ Theorem?

          To be honest, I do not know enough physics to criticize whether he knows what he is talking about, really, yet I will note that it is not just “one guy”; besides Jérôme Charles, the claim is subscribed to by at least A. Höcker, H. Lacker, F.R. Le Diberder and S. T’Jampens.

          Well, about these historical figures not being exposed to Bayesian ideas… I find it impossible to believe that someone of the mathematical caliber of Henri Poincaré was unaware of the Bayesian approach to probability. The pure physicists… well, maybe they were taught in Frequentism, I don’t know, I would like to find out too, but Le Grand Poincaré? One of the greatest French mathematicians of all time? Not a chance. He was more than exposed to Bayesian ideas and he had to know those ideas to their bits; actually he even used them in the trials of Alfred Dreyfus… He simply rejected Bayesianism for science.

          And about Einstein, he was a pure physicist, so maybe he was taught in Frequentism. He often joked about his skills in math, but he also declared his profound admiration for James Clerk Maxwell, and I mention Maxwell because he was a mathematician and, therefore, he had to know the Bayesian approach to probability to its bits as well; but, despite his work on statistical mechanics, he didn’t bring Bayes with him either.

          My point is that many brilliant trained mathematicians who also had successful careers in science knew about Bayes’ interpretation of probability but simply opted out. Maxwell, Werner Heisenberg, Henri Poincaré… All had deep mathematical training and, therefore, they had to know everything there was to know about the Bayesian approach at the time. So yes, at the very least those with mathematical degrees were exposed.

        • The book is available here too, nearly complete:

          http://www-biba.inrialpes.fr/Jaynes/prob.html

          Chapters 1 and 2 are where it’s at, but I’ll note that Bayes rule is a consequence of the axioms chosen by Jaynes (and others). Ultimately, it’s not that different from what you already know, but Jaynes’ perspective on the matter is still incredibly clean, and Andrew will refer to it every now and then.

        • The sum and product rules imply Bayes’ theorem, but they also imply many other things as well. A Bayesian interpreting probabilities in a Bayesian way, but using one of these other consequences of the sum and product rules, is still doing “Bayesian analysis” – if for no other reason than because they’re assigning probabilities to things which aren’t random variables. This is a triviality. You don’t need a reference for it. And if you’ve had so little exposure to serious Bayesian applications that you didn’t know this, well then that explains a lot of your comments.

          I too find it hard to believe Poincare had no exposure to inverse probability. But I don’t know, and he wasn’t one of the founders of Quantum Mechanics. I know of no indication that Heisenberg had any exposure (at least early on) to Bayesian ideas. Similarly with Einstein; it doesn’t seem to have been part of the German curriculum at the time. Even today, statistics of any variety is not a standard part of a physicist’s training. Statistical and Quantum Mechanics merely require a basic ability to manipulate distributions, which physicists pick up on the fly. Most physicists have no clue about hypothesis testing, for example, and only know about it if they make a special effort to learn it.

          You can be a productive scientist without being a Bayesian. You can be a productive scientist without knowing any statistics at all. No one has ever said otherwise.

        • Heisenberg was a mathematician besides a physicist; that is why I find it hard to believe he did not know about Bayes. On Einstein I agree; for all I know he might not have even heard about Bayes, since he was “just” a physicist.

          About my ignorance about Bayesian applications, well, I try to sort it out by asking questions. I thought the two main ideas for Bayesians were their definition of probability and Bayes’ theorem. Today I learned that Bayesians (at least of the Jaynes flavor) just care about the sum and product rules to update probabilities, so thank you for your help!

        • My understanding is that Heisenberg was never exposed to matrix algebra during his formal education. It just wasn’t something that was taught to physicists at the time. So his original formulation of QM was as a series of sum rules. It was Born who realized that the sum rules looked like matrix multiplication, but he wasn’t comfortable with that sort of math either. That’s why they brought in Jordan, and Matrix Mechanics was born….

          Anyway, my point is that mathematics curricula were so different back then than they are today that it isn’t really fruitful to speculate about what these people must have known or been exposed to.

        • dab, that was my understanding too. I wonder if the story is a bit overblown though. Maybe it was the often infinite-dimensional nature of the spaces that was causing them trouble, and an unfamiliarity with finite-dimensional matrices.

        • I did find the biophysicists were the worst clinical research clients, perhaps because they could “pick up [manipulating equations] on the fly” and it was very hard to get them to think about learning from observations while fully appreciating the uncertainties and risks.

        • It’s off topic, but I have a pretty pessimistic take on physics. You can find it here: http://www.entsophy.net/blog/?p=40

          Basically, I estimated how much it cost to produce Maxwell’s equations and compared it to current research efforts. Current efforts don’t come off looking too well.

        • Speaking as a former particle physicist, these are pretty terrible arguments. Firstly, you’re ignoring Laplace, Jeffries, and Jaynes, who all developed Bayesian inference while pursuing physics (Laplace was inferring planetary masses from ill-posed data before it was cool). Secondly, don’t necessarily trust a physicist, let alone a theoretical one, on stats. Inference is horribly taught in the field, usually as an afterthought, and often reduces to histograms, max likelihood, and propagating uncertainties in quadrature. I’d listen to Poincare if he were talking about the mathematical consequences of the Kolmogorov axioms, but not on how to apply them in a practical setting.

        • I ignored Laplace because I was focused on last-century mathematicians/physicists (with the exception of Maxwell, because he was Einstein’s “hero” and I wanted to speculate on Einstein’s knowledge about probability).

          Why you would not listen to Poincaré when it comes to Physics is beyond me; with all due respect to Jeffries and Jaynes, there are great physicists/mathematicians and there are giants, and they are in no way close to the discoveries of Henri Poincaré in Physics or Mathematics.

          If the credentials of Jeffries and Jaynes are good enough for you to listen to them, then the credentials of someone like Henri Poincaré, whom even Albert Einstein himself praised for his work in Physics, should suffice for anyone.

        • So he arrived at the quantum world when he was in his sixties and still had stamina to make a few (but only a few, right?) important discoveries in the quantum world before he died… Your point being?

        • I think you’ll find that NOBODY is interested in trying to decide statistical questions based on authority. I’m not sure how you got off on that topic.

        • As Entsophy noted, authority should never be a big factor in these matters. Poincare was an unquestionable badass, but he never touched data and so has no relevance to arguments about the application of probability to data. I wouldn’t take Einstein’s word, either. I noted Jeffreys and Jaynes because they spent tremendous time working with data. Of course, that’s about inference. Philosophical questions about quantum mechanics (ontological vs epistemological, Bayesian vs frequentist) are a whole other beast with little resolution and little practical relevance at the moment.

        • Michael Betancourt:

          “As Entsophy noted, authority should never be a big factor in these matters.”

          Entsophy asked what exposure key figures at the time had to Bayesian ideas, and I just claimed that Henri Poincaré was soaking-wet exposed to Bayesian ideas and yet he rejected them for science. If people don’t want questions answered they shouldn’t ask them.

          And sorry but, who claimed authority on what? I never said anything like “Since “Badass” Henri Poincaré embraced the Frequentist definition of probability for science, explicitly rejecting the Bayesian one, therefore Bayesian ideas must be wrong and whatever he says goes” Did anyone say that? Yet, it seems this thought is bugging some of you. Well… suck it up.

          “Poincare was an unquestionable badass, but he never touched data and so has no relevance to arguments about the application of probability to data. I wouldn’t take Einstein’s word, either.”

          You can picture me doing a double-hand facepalm. It feels as if you just found out about Poincaré today.

          “I noted Jeffreys and Jaynes because they spent tremendous time working with data. Of course, that’s about inference.”

          Do you realize that you vehemently rejected authority in “these matters” only to then claim authority in “these matters” because J&J spent “lots of time” working with data? What happened to your “authority is not a big factor” argument? You want to have it both ways? Rejecting the authority card for Poincaré but claiming it for J&J?

          And in any case, “working lots of time with data” does not sound like a terribly good argument to claim authority in statistics… or anything else for that matter.

        • I think you misunderstood my intent, Fran. My overwhelming impression, which seems to be shared by pretty much everyone, is that Bayesian ideas played exactly no role in the formation of Quantum Mechanics. If you read the famous foundational papers in QM, for example, you’ll see pretty quickly that Bayesian ideas are completely absent. They’re not even mentioned in order to be summarily rejected. I was really asking if anyone knew of any instances where this impression is wrong.

          Poincare isn’t usually considered one of the founders of QM. And even if he were, he seems to have rejected Bayes (according to you), so it’s unlikely he’s an example of Bayes influencing the development of QM. Moreover, Poincare’s one contribution to QM was a completion of Planck’s great paper, essentially showing that the quantization of energy suggested by Planck couldn’t be avoided. Nothing in the paper relates to the foundations of statistics though, and Poincare could have written the paper just as easily if he had been a raging Bayesian. I think it’s pretty clear this isn’t an example of Bayes influencing the development of QM.

        • BTW, it’s ‘Jeffreys’, not ‘Jeffries’. He’s not a relative of mine (certainly no common ancestor in the past 400 years) but it is fun to go to a Bayesian meeting and have people ask if I have anything to do with Jeffreys priors :-) But my surname is spelled yet differently (and I know of several other spellings as well).

          (Interestingly, like Sir Harold, I am also an astronomer. I met him once when I was a graduate student on his only trip to this side of the pond).

        • Since no one’s ever seen you and Sir Harold in the same room at the same time, we’ll withhold judgment on whether you’re two different people or not.

        • Poincare was a genius in physics as well as math. I am not sure that there was any greater physics genius in Poincare’s lifetime.

        • Poincare really was pretty awesome. It seems like he’s underestimated a bit in the English-speaking world. A lot of physicists aren’t aware of his role in creating relativity, for example. I just bought the English translation of his 3-volume work on celestial mechanics. Learn from the masters, as they say.

    • > If there’s a nuisance parameter for example which has to be integrated out, you don’t get a simple
      – anything, as the things you want to learn about are _entangled_.

      I believe there is a major problem being created in statistical learning and discussions by focusing on single-parameter problems, or by using those often overly hopeful assumptions that imply things can be untangled (e.g. t.test with variances assumed equal) or untangled with a minimal loss of information (Fisher’s exact test). Nuisance parameters are often put in the advanced category for later study.

      We can sometimes be that lucky, but it should be understood as luck rather than expectation.

  3. “We note that this article is not about the application of quantum physics to brain physiology. This is a controversial issue about which we are agnostic.”

    Glad. That stuff is wacky.

  4. I haven’t made it through all the material but I read your contribution. I liked it but thought the examples you gave are imperfect renderings of the equivalent quantum issues in the two-slit and spin experiments. I think the comment by Entsophy gets at a reason. Not the reason, but a reason.

    I really enjoy your blog.

  5. Whenever I think of errors in measurement for some reason I conjure up memories of driving an old beaten up Land Rover through a dirt track.

    The key is that even if you are on a straight line, holding the steering wheel steady will not keep the vehicle traveling straight. The steering wheel is not a good measure of the direction you are going. Were it not for the fact that you can look out the window and make continuous adjustments, you’d end up in a random walk, perhaps with some drift.

    Now, thinking fast and loose, the point is that I could not care less about the real underlying axle direction, or the error in direction transmission. Sure, it would be good to have zero error, but if that is impossible, that is ok. If I can maximize an objective function using noisy measurements, why should I care about the underlying “true” measure? I’m thinking Quine, Rorty, or revealed preference. And if all we need to care about is how the actual measurement relates to the objective function, can we not ignore Heisenberg’s uncertainty principle?

  6. I agree with Entsophy, the Bose-Einstein and Fermi-Dirac distributions are just the classical distributions you get when you acknowledge some restrictions on the type of distribution that is possible. Namely that things must or can’t occupy similar states.

    The thing that’s weird about the quantum world is that the probability (frequency) distribution over outcomes comes from the absolute square of a complex number. But once you have the probability distribution, the laws of probability are obeyed. For example, if I know that the distribution of locations where an electron will be detected looks like f(x), then if I want to find out the probability that the first time I run the experiment my electron will be between x1 and x2, and the second time I run the experiment the electron will be between x3 and x4 jointly, and I run the experiments in not-too-quick succession (so that the electrons are not flying through the apparatus at the same time affecting each other), I will get this from:

    integral(f(x)f(y) dx dy,x in [x1,x2], y in [x3,x4])

    so I don’t think it’s fair to say that the laws of probability are different. I just think that the processes that produce these probability distributions are strange.
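
    To make that concrete, here’s a toy Born-rule calculation in Python (my own sketch; the slit geometry and constants are invented for illustration). The strangeness is all in how f(x) gets built from complex amplitudes; once f is in hand, independent runs obey the ordinary product rule.

    import numpy as np

    # Two-slit toy model: complex amplitude for each path, P = |a1 + a2|^2
    def detection_prob(x, d=1.0, L=20.0, k=50.0):
        r1 = np.sqrt((x - d / 2)**2 + L**2)  # slit 1 -> screen point x
        r2 = np.sqrt((x + d / 2)**2 + L**2)  # slit 2 -> screen point x
        a1 = np.exp(1j * k * r1) / np.sqrt(2)
        a2 = np.exp(1j * k * r2) / np.sqrt(2)
        return np.abs(a1 + a2)**2            # includes the interference term

    x = np.linspace(-3.0, 3.0, 7)
    print(detection_prob(x))  # fringes: values swing between 0 and 2,
                              # not the flat |a1|^2 + |a2|^2 = 1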

    • If you send the electrons at the same time or similar times, they interfere with each other, and then the probability distribution isn’t the independent f(x) f(y). But this happens in classical cases as well when talking about statistical physics.

      For example, suppose you have a marble of radius r in a box of size R >> r. You give this marble some energy and let it bounce around in the box. Suppose that the box is being shaken slightly so that the collisions with the walls preserve the kinetic energy of the marble but give it perturbations to its momentum, or whatever. If you take a lot of snapshots of the state of this system, separated by enough time (so that the marble collides with the walls multiple times between snapshots), you might find that the distribution of marble locations inside the box is uniform over the box.

      If you put two marbles in the box, you will *not* find that the joint distribution of marbles to be in any pair of locations is uniform over the box. In fact, you’ll find that the two marbles are never found within a distance 2r of each other, because they can’t overlap, they collide. So it turns out the marbles aren’t independent because they’re in the same box.

      In quantum physics this is the same, except the nature of the non-independence is calculated using a wave function amplitude, the wave functions which are distributed in space are the things that “collide” in some sense.
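
      A one-dimensional sketch of the marble point (my own illustration, numbers made up): sample the two positions uniformly, impose the no-overlap constraint, and the joint distribution stops being a product even though each marginal stays roughly uniform.

      import numpy as np

      rng = np.random.default_rng(1)
      R, r, n = 10.0, 1.0, 200000

      x1 = rng.uniform(0, R, n)
      x2 = rng.uniform(0, R, n)
      ok = np.abs(x1 - x2) >= 2 * r  # exclusion: marbles can't overlap

      print(ok.mean())                          # fraction of allowed configurations
      print(np.corrcoef(x1[ok], x2[ok])[0, 1])  # negative: positions are dependent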

      • Daniel:

        But in the two-slit experiment, you get interference with a single electron. So I don’t think your marble example quite works.

        • You get interference of the electron’s wave function with the material that makes up the slits. However, if you postulate that the slits are very stable physical things (ie. they don’t change their inter-slit distance or width appreciably from one experiment to the other) then you can calculate a probability distribution for the location where the electron will be observed (by say a Feynman path integral), and this probability distribution is stable from one experiment to the next. In the absence of perturbations to the system you can treat the probability of the joint locations of the observations of successive electrons as independent random variables corresponding to the product rule for independent events.

        • Compare throwing a die. We think of throwing a die well: hard, against a craps table wall, as a reproducible and successively independent event. We observe a frequency distribution over these throws and it more or less obeys the laws of classical probability.

          But suppose now instead of a craps table wall we throw it at a robot with a tennis racket who swats the die in such a way as to make 1 more likely to come up than any other throw. We don’t think of this as violating the laws of probability, we think of it as changing the experiment so that the probability distribution of outcomes is altered.

          Opening the second slit is a little like turning on this robot, it alters the physical properties of the experiment and the fact that it alters the probabilities of outcomes isn’t too surprising. The manner in which the outcomes are altered, the whole propagation of a wave function thing, is fairly intricate and somewhat surprising, just like a robot that knows how to swat dice with a tennis racket is a little intricate and surprising. But I don’t think it is a revolution in logic.

    • That’s right, the laws of probability are not any different. Quantum probability is only different if you subscribe to an interpretation that tries to attach greater meaning to the probabilities.

  7. Very interesting. Thanks for posting this, Andrew.

    Here’s an issue that doesn’t seem to be addressed directly by the authors. I’m not a physicist, so please forgive me if I’m unclear or incorrect about any of this.

    In physics, we already know when it’s reasonable to use classical probability because the Wigner distribution starts to look more-or-less classical if we don’t ask it for too much precision.

    Assuming that quantum probability is sometimes necessary outside of quantum physics, are there any guidelines for when we’d still expect classical probability to work? Is there something analogous to Planck’s constant that tells us when we’re asking too much of our joint distributions?

  8. You are right to criticize this paper, but I would’ve been harsher.

    QP (quantum probability) contains CP (classical probability) as a special case. The advantages of it are only in modeling systems that essentially do not arise in psychology experiments, such as entanglement. The authors say that their approach is harmless because real psych experiments contain things like complementarity, “wavefunction collapse” and so on, but these can all be equally well modeled within classical probability. The only thing that can’t be so modeled is entanglement, and I will eat my hat if entanglement is ever demonstrated in a psych experiment.

    So they are not talking about literal quantum mechanics, but instead using it as an analogy. What’s the problem? For one, the invocation of QP serves mostly to add needless layers of mystifying formalism, and to confuse basically everyone (exhibit A: this comment discussion). Also, what you really want is an underlying theory that can explain the world, and this should drive the choice of mathematics. Yes, asking a question can change how people think. But quantum mechanics is not the most parsimonious or useful explanation of this.

    • “but these can all be equally well modeled within classical probability. The only thing that can’t be so modeled is entanglement, and I will eat my hat if entanglement is ever demonstrated in a psych experiment.”

      Well, Aerts and friends seem to have found entanglement in psych. experiments (and elsewhere): http://arxiv.org/find/all/1/au:+Aerts_D/0/1/0/all/0/1 The paper with a macroscopic physics example ( http://uk.arxiv.org/abs/quant-ph/0007044 ) is the only one I’ve actually read (a long time ago), and I don’t know anything much about psych. but I’d be surprised if all this stuff is just needless mystification.

      • At the scales at which psychology works it is bizarre to report any entanglement. This is akin to those folks claiming that the brain is a quantum computer. It just doesn’t make sense.

        • “At the scales at which psychology works it is bizarre to report any entanglement. This is akin to those folks claiming that the brain is a quantum computer. It just doesn’t make sense.”

          That was my reaction when I first saw that vessels of water paper. I’d not seen these concepts applied outside of QM except by crackpots.

      • That paper doesn’t demonstrate entanglement. It describes a bunch of classically correlated random variables that “violate” the CHSH bound. In the CHSH game, the maximum score achievable with classical correlations only is 2, with quantum entanglement is 2 * sqrt(2), and with communication (classical or quantum) is 4. What happens in their experiment is analogous to communication. They achieve a value of 4, which would not even be possible with entanglement alone and no communication. They also use the term entanglement in ways that don’t make sense. Here is an excerpt:

        “The reason that Bell inequalities are violated is that Liane’s state of mind changes from activation of the
        abstract categorical concept ‘cat’, to activation of either ‘Glimmer’ or ‘Inkling’. We can thus view the
        state ‘cat’ as an entangled state of these two instances of it.”

        Hat uneaten.
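
        For anyone who wants the numbers, a small sketch (mine, with the standard textbook angles): the singlet correlation E(a,b) = -cos(a-b) reaches the Tsirelson bound 2*sqrt(2), while local classical strategies top out at 2 and signaling models can reach 4.

        import numpy as np

        # Singlet-state correlation for measurement angles a, b
        def E(a, b):
            return -np.cos(a - b)

        a, ap = 0.0, np.pi / 2            # Alice's two settings
        b, bp = np.pi / 4, 3 * np.pi / 4  # Bob's two settings

        S = E(a, b) - E(a, bp) + E(ap, b) + E(ap, bp)
        print(abs(S))  # ~2.828 = 2*sqrt(2): Tsirelson's bound
        # Local hidden variables: |S| <= 2. Communication/signaling: up to 4,
        # which is the regime the water-tank model reaches.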

        • “That paper doesn’t demonstrate entanglement. It describes a bunch of classically correlated random variables that “violate” the CHSH bound. […] They achieve a value of 4, which would not even be possible with entanglement alone and no communication.”

          I believe there’s a full description in terms of entangled states rather than just correlations in (one of the) others and apparently the model described in §5 is a better mimic:

          “We have seen that quantum and macroscopic systems can violate Bell inequalities. A natural question
          that arises is the following: is it possible to construct a macroscopical system that violates Bell inequalities in exactly the same way as a photon singlet state will? Aerts constructed a very simple model that does exactly this (Aerts 1991).”

        • The models in the 1991 paper appear to involve communication between the two systems because of the pipe between the water tanks, or the rigid rod connecting the entangled particles. Bell inequalities are bounds on classically correlated systems with neither entanglement nor communication. A violation of a Bell inequality using entanglement (and no communication) is a surprising consequence of quantum mechanics. A violation of a Bell inequality using communication is not at all surprising.

          This doesn’t change anything, but the analysis of the water tanks also seems wrong to me. If a,b are unbiased and a’,b’ are always one, then E[a * b’] = 0, not 1, and similarly for E[a’ * b].

          From what I’ve read, I’d urge extreme skepticism for anything else coming from this author.

        • Obviously there is communication and nothing surprising in the results. I rather belaboured that point and thought I’d found errors in his analysis at first too, but in the end Aerts convinced me there was nothing wrong with it. Perhaps you’ve found something I missed but I’d urge extreme caution before applying extreme skepticism here. :)

        • So convince us with Aerts’s arguments. Aram has clearly shown you where the paper’s flaw lies!

          PS. Was this ever published in a peer-reviewed journal?

    • Aram:

      You might be right that quantum mechanics is not the most parsimonious or useful explanation of this. But it seems to me possible that going beyond standard additive probabilities might help us solve some social science modeling problems. So it seems worth a try to me. It’s not my highest priority (consider the time I spend on this topic, compared to the time I spend on Stan, for example), but I’m glad some people are looking into it.

      • “going beyond standard additive probabilities might help us solve some social science modeling problems.”

        …or you could just keep throwing more variables at the problem.

        If “standard additive probabilities” have sufficed for all other “macroscopic” fields (except the truly physically quantum ones), why should psychology be the one that needs a fundamentally new type of probability paradigm? They’d probably be better off searching for more mundane flaws in their data, models and techniques than calling for some bizarre, mysterious changes at the roots.

        • Rahul:

          1. There are different ways to solve a problem. “Throwing more variables” can work in some settings; in other settings it can make it harder to set up the model. For an analogy: suppose someone says we never need the gamma distribution because we can always work with mixtures of lognormals. That’s true, but it misses the point that for some problems using the gamma distribution can be helpful.

          2. I don’t think there’s anything special about psychology; that’s the field that got discussed because the paper appeared in Behavioral and Brain Sciences. I think quantum probability could possibly be useful in political science, economics, sociology, etc. It’s not that any of these fields need quantum probability, but I’m open to the idea that such ideas could be helpful. It’s not so clear to me that standard probability models have “sufficed” in social science. Probability is an extremely useful tool, but there are problems that don’t seem modeled so well in that way.

        • It’s fine to add more variables, and to consider extremely general models with wacky effects. Evaluating conjunctions as a series of projectors sort of makes sense.

          But the effects are not _really_ quantum, as evidenced by the fact that if you push on the theory then the quantum just-so stories go away.
          example 1: https://en.wikipedia.org/wiki/Quantum_Zeno_effect
          suggests that if you oscillate between happy and unhappy on a timescale of, say, 100 hours, and you merely ask yourself every hour whether you are happy, then it will slow these oscillations down to occur once per 10000 hours (see the sketch at the end of this comment).

          example 2:
          If “happy” and “employed” are complementary observables, then by alternating asking those two questions (“are you happy?”, “are you employed?”) we should in general observe lots of randomness. If the “happy” and “employed” vectors are very close, then our sequence of answers will look like (with H = happy, S = sad, E = employed, U = unemployed):
          HEHEHEHEHEHESUSUSUSUSUSUSUSUSUSUSHEHEHE, etc.
          If the vectors are far apart, they will fluctuate more quickly.
          Obviously this is nonsense.

          Their paper doesn’t have many examples of quantum effects that have psychology analogues, but I’m sure that for every such analogue, there is a regime in which it generates nonsensical predictions.

          So what is going on here?
          1. There is a real effect, which is that context matters, and if I’m asked about happiness after being asked if I’m married or if I’m employed, I think about the question in the context of my marriage or my job.
          2. This is reminiscent of quantum, but any attempt to make a precise connection will lead to thought experiments that are clearly crazy. And pedagogically it’s terrible, since 99.9% of the audience doesn’t have a good intuition for quantum.
          3. The fact that this paper is a series of just-so stories is equivalent to choosing a model with a vast number of parameters, which can fit any behavior. The fact that those parameters don’t even properly fit further sequences of questions is a bad sign.

          Finally, the failure of classical probability is really the failure of a straw man. Classical probability is a big tent, with lots of room for new theories, including every single effect described in that paper.
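
          Here is the sketch promised in example 1 (a toy model of mine: “happy/unhappy” as a two-state system that would flip over T hours, projectively “measured” every tau hours by asking the question):

          import numpy as np

          rng = np.random.default_rng(0)

          def mean_flip_time(T=100.0, tau=1.0, n_runs=10000):
              # Free evolution rotates happy -> unhappy in T hours, so after
              # tau hours a measurement finds a flip with p = sin^2(pi*tau/(2T)).
              p_flip = np.sin(np.pi * tau / (2.0 * T))**2
              steps = rng.geometric(p_flip, size=n_runs)  # askings until a flip
              return tau * steps.mean()

          print(mean_flip_time(tau=1.0))   # ~4*T^2/(pi^2*tau): thousands of hours
          print(mean_flip_time(tau=50.0))  # ask less often and flips come sooner

          Taken literally as psychology, hourly introspection pinning your mood in place for thousands of hours is absurd, which is the point.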

        • Andrew says:

          “I don’t think there’s anything special about psychology; that’s the field that got discussed because the paper appeared in Behavioral and Brain Sciences.”

          I think it is telling that this thesis about macroscopic quantum probabilities got published in a psychology journal and not, say, in geology, chemistry, engineering or any one of many other fields which heavily use probabilities and might also conceivably be expected to hit conventional limitations.

          My guess is this has a lot in common with one of your recent posts about the kind of papers psych journals accept.

    • Glad you are saying this. The whole idea of adding in “quantum” into essentially macroscopic psychology modelling seems bizarre to me.

    • Aram, I think the fact that a proper formulation of QP contains CP as a special case was not obvious to P&B. My commentary was actually primarily focused on that, and on some other little mistakes they make (like violating the no-cloning theorem in their discussions). Unfortunately, I have been away at workshops and haven’t had a chance to look at the other commentaries or P&B’s response.
