Binocular rivalry is a phenomenon that occurs when conflicting information is presented to each of our two eyes, and the brain has to cope with the contradiction. Instead of seeing a superimposition or “average” of the two images, our perceptual machinery entertains both possibilities in turn, randomly flickering from one to the other. This makes rivalry an interesting way to stress-test the visual system and probe how vision works. Unfortunately, talk of “perception” leads to talk of “consciousness,” and once “consciousness” has been raised, an invocation of quantum mechanics can’t be too far behind.
I’m late to join the critical party surrounding E. Manousakis’ paper, “Quantum theory, consciousness and temporal perception: Binocular rivalry,” recently uploaded to the arXiv and noticed by Mo at Neurophilosophy. Manousakis applies “quantum theory” (there’s a reason for those scare quotes) to the problem of binocular rivalry and from this hat pulls a grandiose claim that quantum physics is relevant for human consciousness.
A NOTE ON WIRES AND SLINKIES
First, we observe that there is a healthy literature on this phenomenon, work done by computational neuroscience people who aren’t invoking quantum mechanics in their explanations.
Second, one must carefully distinguish a model of a phenomenon which actually uses quantum physics from a model in which certain mathematical tools are applicable. Linear algebra is a mathematical tool used in quantum physics, but describing a system with linear algebra does not make it quantum-mechanical. Long division and the extraction of square roots can also appear in the solution of a quantum problem, but this does not make dividing 420 lollipops among 25 children a correlate of quantum physics.
Just because the same equation applies doesn’t mean the same physics is at work. An electrical circuit containing a capacitor, an inductor and a resistor obeys the same differential equation as a mass on a spring: capacitance corresponds to “springiness,” inductance to inertia and resistance to friction. This does not mean that an electrical circuit is the same thing as a rock glued to a slinky.
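To make that concrete, here is a small numerical sketch (the component values are invented for illustration): a single damped-oscillator integrator serves both systems, once you identify inductance with mass, resistance with friction, and inverse capacitance with the spring constant.

```python
import numpy as np

def damped_oscillator(inertia, damping, stiffness, x0=1.0, v0=0.0,
                      dt=1e-4, steps=50000):
    """Integrate inertia*x'' + damping*x' + stiffness*x = 0 (semi-implicit Euler)."""
    x, v = x0, v0
    trajectory = np.empty(steps)
    for i in range(steps):
        v += -(damping * v + stiffness * x) / inertia * dt
        x += v * dt
        trajectory[i] = x
    return trajectory

# Mass on a spring: m = 2.0, friction b = 0.5, spring constant k = 8.0
mechanical = damped_oscillator(inertia=2.0, damping=0.5, stiffness=8.0)

# Series RLC circuit: L = 2.0, R = 0.5, 1/C = 8.0 -- the very same numbers
electrical = damped_oscillator(inertia=2.0, damping=0.5, stiffness=8.0)

print(np.allclose(mechanical, electrical))  # identical trajectories
```

The correspondence is purely formal: the same differential equation, hence the same solution, with no claim that the circuit *is* a mass on a spring.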
MIXING THE QUANTUM AND THE CLASSICAL
One interesting thing about this paper is that the hypothesis is really only half quantum, at best. In fact, three of the four numbers fed into Manousakis’ hypothesis pertain to a classical phenomenon, and here’s why:
Manousakis invokes the formalism of the quantum two-state system, saying that the perception of (say) the image seen by the left eye is one state and that from the right eye is the other. The upshot of this is that the probability of seeing the illusion one way — say, the left-eye version — oscillates over time as
[tex]P(t) = \cos^2(\omega t),[/tex]
where [tex]\omega[/tex] is some characteristic frequency of the perceptual machinery. The oscillation is always going, swaying back and forth, but every once in a while it gets “observed,” which forces the brain into one state or the other (the left-eye or the right-eye percept), from which the oscillation starts again.
The quantum two-state system just provides an oscillating probability of favoring one perception, one which goes as the square of [tex]\cos(\omega t)[/tex]. Three of the four parameters fed into the Monte Carlo simulation actually pertain to how often this two-state system is “observed” and “collapsed”. These parameters describe a completely classical pulse train: click, click, click, pause, click click click click, etc.
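Here is a toy version of that scheme, just to show the moving parts; the frequency and click rate below are invented numbers, not Manousakis’ fitted values. Between clicks the probability of having flipped grows as sin²(ωΔt), and a Poisson train of “observation” clicks decides when the collapse happens:

```python
import math
import random

random.seed(1)

def dominance_durations(omega=3.0, click_rate=40.0, t_max=2000.0):
    """Collapse a two-state oscillation at Poisson-spaced 'observation' clicks.

    Between clicks the flip probability grows as sin^2(omega * dt);
    each click collapses the state and restarts the oscillation."""
    t = last_click = last_flip = 0.0
    durations = []
    while t < t_max:
        t += random.expovariate(click_rate)        # wait for the next click
        p_flip = math.sin(omega * (t - last_click)) ** 2
        last_click = t                             # collapse: oscillation restarts
        if random.random() < p_flip:               # percept switched eyes
            durations.append(t - last_flip)
            last_flip = t
    return durations

durations = dominance_durations()
print(len(durations), min(durations), max(durations))
```

Even this cartoon makes one thing plain: the interesting dynamics live in the click train. Crank up click_rate and the flips slow down (a Zeno-like freezing), so the “observation” schedule, the classical part, is doing the real work.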
What’s more, the classical part is the higher-level one, the one which intrudes on the low-level processing. Crudely speaking, it’s like saying there’s a quantum two-state system back in the visual cortex, but all the processing up in the prefrontal lobes is purely classical.
Manousakis relies upon data from experiments some other people have conducted (no crime there, of course). The most interesting data comes from an experiment where subjects were tested in a binocular rivalry setup: conflicting information was fed to the two eyes, and the time for which the image from each eye was “dominant” was recorded. The fun part of the research is that the experiment was done in two variations, with LSD and without. Manousakis uses his model to fit a curve to the data in both cases. By itself, this doesn’t “explain” the difference between what happens with LSD and without. It just provides a formula for a curve with enough parameters so that the curve can be fit in both cases.
Here’s one problem I have with Manousakis’ results. His model includes three different timescales: the frequency [tex]\omega[/tex] of oscillation and two parameters describing how often the “observation” events occur. However, the graphs presented in the paper appear to have at most two characteristic timescales; the results of curve-fitting on such data would then be underdetermined. Also, none of the parameters are the same across the situations, and no explanation is provided for why they might differ.
Two of his figures look like a Poisson distribution would be a reasonable first approximation. This would describe a situation where the image from each eye is entertained in turn, and the probability of flipping to the other eye is constant over time. Instead of comparing to a Poisson distribution, or any other reasonable first guess (like Fisher–Tippett), he compares his model to an exponential decay which looks nothing like the data. It’s fine and dandy to show that an exponential decay won’t fit the measurements, but that doesn’t help distinguish Manousakis’ hypothesis from reasonable null hypotheses like Poissonian behavior.
In fact, digging into the literature, one finds that the duration of dominance is described by a gamma distribution, the distribution of a sum of multiple independent, exponentially distributed random variables, each with the same mean. The gamma distribution has two parameters: the shape (the number of exponential variables summed) and the scale (their common mean).
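That claim is easy to check numerically; the shape and mean below are arbitrary illustration values. The sum of n independent exponentials, each with mean m, has mean n·m and variance n·m², exactly matching a gamma distribution with shape n and scale m:

```python
import random
import statistics

random.seed(0)

shape, scale = 4, 0.5     # number of exponentials summed, and their common mean
samples = [sum(random.expovariate(1.0 / scale) for _ in range(shape))
           for _ in range(50000)]

print(statistics.mean(samples))      # close to shape * scale = 2.0
print(statistics.variance(samples))  # close to shape * scale**2 = 1.0
```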
Manousakis fits a two-parameter curve with a four-parameter hypothesis. The numbers he gets out are, at face value, meaningless (so it’s no surprise that they differ so radically between the different curves he fits). Two combinations of his parameters, one dimensionful and the other dimensionless, might have actual significance.
A POINT ON NETWORKS
Furthermore, optical illusions arise from mental processing which occurs at a level “before” or “beneath” that which we call consciousness. (Talk of such different “levels” is, it appears, commonplace in the field.) We don’t choose of our own vaunted free will to see the dancer spinning to the left, or the lines of equal length, or the boxes facing upward: something in our brain does that for us. The “I” can then choose, with a certain “effort of will”, to force the perception into another possibility (the dancer turns in the opposite direction; the boxes flip upside-down).
A neural network implemented in a computer, with no spooky notion of “consciousness” whatsoever, can be susceptible to “optical illusions” if it is presented with stimuli unlike those upon which it had been trained. The network might, for example, be trained to distinguish up-arrows from down-arrows; its space of possible states would have two attractors and could be modeled with a bistable potential. An “optical illusion” would be an input stimulus which does not have an unambiguous interpretation. With some stochastic noise present in the system, the network’s state could flip from one attractor to the other, changing the perception from up to down.
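A minimal cartoon of such a bistable system (the potential and noise level here are invented for illustration): model the network’s state as overdamped motion in a double-well potential V(x) = (x² − 1)², whose two minima at x = ±1 are the two interpretations. Noise alone is enough to make the “percept” flip:

```python
import math
import random

random.seed(2)

def count_flips(noise=0.8, dt=0.01, steps=200_000):
    """Overdamped Langevin dynamics in V(x) = (x^2 - 1)^2.

    The state lingers near one attractor (+1 or -1) until noise carries it
    over the barrier at x = 0; each well-to-well crossing is a perceptual flip."""
    x, current_well, flips = 1.0, 1, 0
    for _ in range(steps):
        drift = -4.0 * x * (x * x - 1.0)           # -dV/dx
        x += drift * dt + noise * math.sqrt(dt) * random.gauss(0.0, 1.0)
        if x > 0.5 and current_well < 0:           # settled into the right well
            current_well, flips = 1, flips + 1
        elif x < -0.5 and current_well > 0:        # settled into the left well
            current_well, flips = -1, flips + 1
    return flips

flips = count_flips()
print(flips)  # spontaneous flips, no "consciousness" required
```

The 0.5 threshold is a hysteresis band so that jitter right at the barrier is not counted as a stream of spurious flips.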
Since transitions from one perceptual state to another can occur without consciousness, I find the assumption that (in Mo’s words) “conscious awareness is generated anew each time one flips an ambiguous figure” to be unfounded.
WHY DO WE CARE?
Over at Neurophilosophy, Mo made note of an approving quotation from a certain Henry Stapp:
If it is correct, this is a landmark paper that for the first time uses quantum mechanics to elucidate brain dynamics and both matches existing experimental data and provides testable predictions.
It is not too difficult to match an existing set of experimental observations; that’s what curve fitting is all about. The more challenging part is that bit about providing “testable predictions.” I’ve explained why I’m doubtful on that score: the numbers which come out are, I suspect, fundamentally underdetermined. Beyond that, the curve-fitting done so far doesn’t really test any deeply, intrinsically quantum aspect of the hypothesis — all the actual knowledge about “brain dynamics” goes into the classical part, the sequence of times at which the two-state system is “observed.”
Incidentally, just who is Henry Stapp? This is strictly speaking irrelevant to the scientific topic at hand, but it might be interesting to know. Turns out, he’s a fellow who has his own notion of “quantum consciousness.” The mathematician Ray F. Streater has pointed out three killing flaws in Stapp’s argument: first, Stapp believes that thoughts must be arrived at instantaneously, whereas experiments show that brain activity can initiate a half-second before the “conscious mind” thinks it has made a decision. Second, Stapp thinks that classical mechanics cannot include correlations, which is a real WTF moment for me. Third, thanks to his belief that thought requires instantaneous communication, Stapp needs some way to send information faster than light, and he finds that mechanism in — surprise! — quantum entanglement. However, the real world doesn’t work that way: even the “spooky action at a distance” seen in entanglement experiments doesn’t send information FTL.
Who, then, asked Stapp for his opinion about this whole affair?