Behavioral and Brain Sciences
Sorry, not sure what happened.
Are you referring to Many Worlds Interpretation when you mention attempts to get the measurement out of the unitary evolution of quantum systems? If so, I don’t understand how this removes the usefulness of the probabilistic approach. If you view it as subjective probability (e.g. the observer is just learning which Everett branch she happens to reside in), then it all becomes unitary and preserves all the great probability stuff. No need for “measurement” or “collapse” at all.
I’m not sure if this affects your conclusions on how to think of all this in social science research (which seem reasonable to me). But I do think this is one place where subjective Bayes shines, and it’s gaining a lot more support among mainstream physicists. (Though to be sure, I do not at all mean to imply that MWI is “obviously” the right approach.)
What you are referring to as the uncertainty principle is actually a different phenomenon known as the observer effect.
Precise terminology aside, these things are related! Or, so I recall from my two semesters of quantum physics, many years ago.
They’re related historically due to Heisenberg’s use of the observer effect in a heuristic argument for the uncertainty principle. (I, too, recall this from undergrad physics!) But the mathematical derivation of the uncertainty principle does not require the observer effect, and the observer effect does not require quantum physics.
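For reference, the measurement-free derivation mentioned here is the Robertson inequality, which follows from the Cauchy–Schwarz inequality applied to state vectors, with no appeal to measurement disturbance:

```latex
\sigma_A \, \sigma_B \;\ge\; \frac{1}{2}\left|\langle [\hat{A}, \hat{B}] \rangle\right|,
\qquad\text{so for } [\hat{x}, \hat{p}] = i\hbar:\quad
\sigma_x \, \sigma_p \;\ge\; \frac{\hbar}{2}.
```

Nothing in the derivation refers to an observer perturbing the system; the bound is a property of the state itself.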
sorry, I do not get it: what is supposed to happen when clicking on the link?!
If you click on the link, it should open a pdf file for my article (with Mike Betancourt) that will appear in journal #100.
I understand that you are trying to strike a conciliatory tone (with an article I haven’t read), but (sorry to be blunt) the result is that this just comes across as pseudoscientific crankery.
It is obvious that, in a context where before-measurement quantities differ from corresponding after-measurement quantities (as you explicitly point out), you cannot use the same variable to refer to both (because they are not the same quantity). When you give the equation p(x) ≠ ∑_y p(x|y) p(y), the two x symbols refer to entirely different quantities! Who are you trying to fool? To do this and then say that the resulting issues demonstrate a problem with probability theory is ludicrous.
I’m sorry you don’t like our paper. I agree with you that one can give a different name to the variable after it has been measured, but researchers typically prefer not to do so. And recall that in the two-slit experiment, the variable is not renamed—after all, it corresponds to a measurable physical quantity—it’s just that the law of total probability does not hold in that case. If you think that quantum mechanics doesn’t “demonstrate a problem with probability theory” there, you have plenty of people besides me to argue with. Whether this reasoning is relevant to social science statistics, I don’t know, but I don’t think the idea is “ludicrous.”
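As a minimal numerical sketch of the two-slit point (toy far-field amplitudes, not from the paper): when the slit is measured, the two conditional probabilities average as usual; when it is not, the amplitudes add first, and the interference cross-term makes p(x) differ from ∑_y p(x|y) p(y).

```python
import numpy as np

# Screen positions and schematic parameters (illustrative values only).
x = np.linspace(-5, 5, 201)
k, d = 2.0, 1.0  # hypothetical wavenumber and slit offset

# Toy amplitudes for paths through slit 1 and slit 2 (equal weight).
psi1 = np.exp(1j * k * d * x) / np.sqrt(2)
psi2 = np.exp(-1j * k * d * x) / np.sqrt(2)

# Which-slit measured: probabilities add, per the law of total probability.
p_measured = np.abs(psi1) ** 2 + np.abs(psi2) ** 2  # flat: 1 everywhere

# Which-slit not measured: amplitudes add, producing interference fringes.
p_unmeasured = np.abs(psi1 + psi2) ** 2  # 2 * cos(k*d*x)^2

# The two disagree wherever the cross-term 2*Re(psi1* conj(psi2)) is nonzero.
print(np.allclose(p_measured, p_unmeasured))  # False
```

The disagreement is exactly the cross-term 2·Re(ψ₁ψ₂*), which classical probability has no slot for.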
It’s not hard to find prominent physicists who theorize (or at least consider) the possibility that what is called a “probability distribution” in QM is no such thing at all (Bohm’s pilot wave, for example). They could easily be very mistaken, but it does show just how much “muddle” there is in the “quantum muddle.” Trying to use the “quantum muddle” to set another subject in order is about like hiring a homeless man to organize your house.
Incidentally, you might be interested in the Wigner joint “quasiprobability” function for position-momentum in QM:
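(For illustration, a quick sketch of why it is only a *quasi*-probability: the Wigner function of the first excited harmonic-oscillator state, in units with ħ = 1, integrates to one but goes negative at the origin, so it cannot be a genuine joint density.)

```python
import numpy as np

def wigner_fock1(x, p):
    """Wigner function of the n=1 harmonic-oscillator (Fock) state, hbar = 1.

    Standard closed form: W_1(x, p) = -(1/pi) * (1 - 2*(x^2 + p^2)) * exp(-(x^2 + p^2)).
    """
    r2 = x ** 2 + p ** 2
    return (-1.0 / np.pi) * (1.0 - 2.0 * r2) * np.exp(-r2)

w0 = wigner_fock1(0.0, 0.0)
print(w0)  # -1/pi: negative, so not a genuine probability density
```

Its marginals over x and over p are nonetheless honest probability densities, which is what makes it useful despite the negativity.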
Paul beat me to it.
I don’t know without studying this, but my first reaction was to think it was another Sokal affair.
I haven’t located a copy of the Pothos & Busemeyer paper, but I did come across a 2009 paper by them — “A quantum probability explanation for violations of ‘rational’ decision theory”. One attractive feature of their complex probability-amplitude formulation is that there are far fewer free parameters than would be required under your proposal that marginal probabilities need not be averages over conditionals.
Unlike some of the other comments, I found your review very helpful. I had seen a couple of articles and took a look at the first chapter of the book (Quantum Models of Cognition and Decision), and I had the same reservations that you outlined in your paper. I wondered if you or Shalizi had seen this and what you thought. Now I know. Still, this notion of entangled systems is compelling. The area of quantum cognition points to a number of findings that we ought to be considering. Judgments are created on the fly and not stored in memory (Kahneman and Miller’s norm theory). Cognition is not decomposable but interrelated in unexpected ways (Kahneman’s Thinking, Fast and Slow). Context matters, so that what we mean by concept A depends on the situation (Barsalou’s goal-derived categories and graded structure of concepts). You have already mentioned Mischel and the work on situated cognition. But do we need quantum probability theory? Or do we need to better define the contextual constraints so that our concepts are well-defined?
I’m a mathematician who knows some quantum mechanics, not a statistician. Here’s how I weigh in on this. I find the last paragraph of Andrew’s review the most compelling. It begins with the line
In any case, the ultimate challenge in statistics is to solve applied problems.
I can then read the rest of the paragraph as suggesting that quantum probability might be a useful metaphor – not necessarily a particular formal structure – that does help shed light on practical problems.
I have the feeling that some of the commenters did not read the paper.
Andrew isn’t suggesting that quantum physics itself is useful in the brain and behavioral sciences, he’s suggesting that it’s not crazy that some of the math that people used for quantum mechanics might be useful math for modeling cognitive systems. Nothing in this article endorses crazy quantum consciousness stuff.