Dangerous Ideas

I must admit that when I hear somebody talking about “dangerous ideas,” one of my eyebrows will — without voluntary intervention on my part — lift upwards, Spock-style. Such talk invariably reminds me of my old film-studies professor, David Thorburn, who said, paraphrasing the acerbic Gerald Graff, “if the self-preening metaphors of peril, subversion and ideological danger in the literary theorists’ account of their work were taken seriously, their insurance costs would match those for firefighters, Grand Prix drivers and war correspondents.”

Still, when Bee at Backreaction says something is interesting, I take a look. Today’s topic is the Edge annual question for 2006, “What is your Dangerous Idea?” Up goes the eyebrow. I don’t want to go near the Susskind/Greene spat about “anthropic” reasoning; frankly, without technical details far beyond the level of an Edge essay, “anthropic” talk rapidly devolves into inanities which resemble the assertion, “Hitler had to lose the war, because otherwise we wouldn’t be sitting around talking about why Hitler lost the war.” Suffice to say that neither Susskind nor Greene mentions NP-complete problems or proton decay.

So, moving on, let’s get to what Bee calls “the more bizarre pieces.” I was particularly drawn to and repelled from (yeah, it was a weird feeling) the essays of Rupert Sheldrake and Rudy Rucker. The latter goes off about “panpsychism,” which sounds like a fantastic opportunity to ramble about quantum mechanics, the inner lives of seashells and the dictionary of Humpty Dumpty, in which words mean exactly what the speaker wants them to mean, reason and usage notwithstanding.

Hey, “consciousness” is just one tiny part of what living things do, and life is a teensy fraction of what the Universe does. Why not give the rest of the biosphere a little attention and support “panphotosynthesism” instead?

Rucker proposes that “the mind is some substance that accumulates near ordinary matter — dark matter or dark energy are good candidates.” On the contrary, dark matter sounds like a terrible candidate for mind-stuff: how does it even interact with the ordinary, non-dark matter which we already know is doing stuff in the brain? It’s dark, meaning that it doesn’t interact via electromagnetism, only through gravity (and possibly the weak nuclear force). We know that neurons fire in different patterns depending on a person’s mental state. How does the dark matter pull on the vesicles holding the neurotransmitters without breaking the neurons apart?
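
Just to put a number on how feeble a gravity-only tug would be, here's a back-of-the-envelope sketch in Python. Every figure in it is a rough order of magnitude of my own choosing, deliberately generous to the dark-matter side:

    # Back-of-the-envelope: how hard could a blob of gravity-only "mind
    # stuff" tug on a synaptic vesicle?  Every figure is a rough order of
    # magnitude, chosen to be generous to the dark-matter side.
    G = 6.674e-11        # gravitational constant, N m^2 / kg^2
    M_dark = 1e-3        # a full gram of dark matter parked in your head
    m_vesicle = 1e-19    # synaptic vesicle mass, order of magnitude in kg
    r = 1e-3             # one millimeter away

    F_gravity = G * M_dark * m_vesicle / r**2
    F_motor = 5e-12      # ~5 piconewtons, roughly the pull of one kinesin motor

    print(F_gravity)             # ~7e-27 newtons
    print(F_motor / F_gravity)   # gravity loses by about fifteen orders of magnitude

Even spotting the mind-stuff a whole gram of dark matter a millimeter from the synapse, its pull on a vesicle falls about fifteen orders of magnitude short of what a single molecular motor manages.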

And dark energy is even worse. As most everybody should have heard by now, dark energy is the something-or-other which makes up about seven-tenths of the Universe’s total energy density, and which exerts a kind of pressure making the Universe expand faster and faster. Pump that into your cortex, and you’re going to get a swollen head (or, possibly, an eternally inflating ego).

Really, offloading the unsolved problems of cognitive science onto some terminology filched from astrophysics doesn’t solve any problems. To his credit, this is only one of the options Rucker proposes. “On the other hand,” he says, “mind might simply be matter viewed in a special fashion: matter experienced from the inside.” He then goes on to say,

Some have argued that the experience of mind results when a superposed quantum state collapses into a pure state. It’s an alluring metaphor, but as a universal automatist, I’m of the opinion that quantum mechanics is a stop-gap theory, destined to give way to a fully deterministic theory based upon some digital precursor of spacetime.

Um, quantum superpositions collapse to “pure states” all the time. We’re talking about decoherence effects which happen on molecular distance scales. Many such collapses happen in the time it takes a single neuron to fire, and many more go into making a solitary thought. It takes a lot of molecules to make up a brain, and there’s no indication that a quantum formalism is necessary to describe what brain-stuff does. Precisely because molecules decohere, environmental interactions forcing their delicate superpositions to “collapse,” we can put all those quantum oddities in black boxes and study neuron behavior using classical tools.
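
For the curious, here's a toy dephasing model in Python. The decoherence time is an arbitrary illustrative number, but the pattern is the point: once enough decoherence times have passed, the off-diagonal quantum terms are gone and all that's left is a table of ordinary classical probabilities.

    import numpy as np

    # Toy dephasing model: a two-state system starts in an even superposition,
    # and coupling to its environment damps the off-diagonal ("coherence")
    # terms of the density matrix as exp(-t / tau_d).  The decoherence time
    # tau_d is an arbitrary illustrative number; for warm, wet, molecular-scale
    # systems it is vastly shorter than the millisecond scale of a neuron.
    def rho(t, tau_d=1.0):
        coherence = 0.5 * np.exp(-t / tau_d)
        return np.array([[0.5, coherence],
                         [coherence, 0.5]])

    print(rho(0.0))    # off-diagonals at 0.5: a genuine superposition
    print(rho(50.0))   # off-diagonals ~ 0: just classical probabilities left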

Then, too, the people who injected this “quantum mind” meme into the meme pool — yes, I’m pointing at you, Penrose — don’t even think our current understanding of quantum physics is good enough to explain what the brain is doing. No, we have to modify the science to shoehorn in a special place for this one phenomenon — human thought — which has only been observed in a tiny fraction of the physical Universe.

I also get antsy about people who call quantum mechanics a “stop-gap theory” and wish ardently for a classical description somehow underlying the quantum world. It runs against my preference for parsimony: we’ve got a classical world, a level of phenomena which emerges from a deeper, quantum regime. Without a darn good reason, why cut yourself on Occam’s Razor in coveting a classical layer beneath that? Let’s not even get into the Bell’s Inequality arguments: I raise the issue every time somebody advocates a classical level beyond the quantum “stop-gap,” and every time I get some mumble about a new kind of long-distance correlations. Of course, the people who offer that mumble don’t address what happens when you try to fix the Bell issue with such correlations: you break relativity.
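
For the record, the Bell/CHSH arithmetic is short enough to sketch in a few lines of Python. The quantum prediction for the singlet state is E(a, b) = -cos(a - b), while any local, classical hidden-variable account is stuck at or below 2:

    import numpy as np

    # The CHSH arithmetic: for the spin-singlet state, quantum mechanics
    # predicts the correlation E(a, b) = -cos(a - b) between measurements
    # along directions a and b.  Any local, classical hidden-variable scheme
    # must satisfy |S| <= 2.
    def E(a, b):
        return -np.cos(a - b)

    # Standard angle choices that maximize the quantum violation.
    a, a_prime = 0.0, np.pi / 2
    b, b_prime = np.pi / 4, 3 * np.pi / 4

    S = E(a, b) - E(a, b_prime) + E(a_prime, b) + E(a_prime, b_prime)
    print(abs(S))      # ~2.83, i.e. 2*sqrt(2), comfortably past the classical bound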

Rucker continues:

David Skrbina, author of the clear and comprehensive book Panpsychism in the West, suggests that we might think of a physical system as determining a moving point in a multi-dimensional phase space that has an axis for each of the system’s measurable properties. He feels this dynamic point represents the sense of unity characteristic of a mind.

Well, somebody is being either unclear or dead wrong here. The trivial observation that a system’s state can be represented by a point in space if you make each “measurable property” a coordinate has nothing to do with any “sense of unity.” To make this more concrete, imagine a pendulum, swinging back and forth on a pivot. At each instant of time, its “state” can be characterized by its position and its velocity; know those two numbers, and you can tell what the pendulum has been doing and what it will do in the future. (If you only knew the position, you couldn’t tell what the pendulum was doing: it might be at rest, hanging vertically, or it might be passing through the vertical direction during a high-speed swing.) Each possible state can therefore be represented by a point in a 2D plane, with one axis — say, the horizontal — representing position and the other representing velocity. The motion of the pendulum over time is then a trajectory in this plane.
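
To make the bookkeeping explicit, here's the small-angle pendulum in a few lines of Python; units are arbitrary and the numbers are purely illustrative:

    import numpy as np

    # Small-angle pendulum: theta(t) = theta0*cos(omega*t) + (v0/omega)*sin(omega*t).
    # The (position, velocity) pair at one instant fixes the whole trajectory;
    # the position alone does not.
    omega = 1.0

    def state_at(theta0, v0, t):
        position = theta0 * np.cos(omega * t) + (v0 / omega) * np.sin(omega * t)
        velocity = -theta0 * omega * np.sin(omega * t) + v0 * np.cos(omega * t)
        return position, velocity

    # Two pendulums that both sit at the vertical (position = 0) at t = 0:
    # one hanging at rest, one swinging through at speed.
    print(state_at(0.0, 0.0, 1.0))   # (0.0, 0.0): still hanging there
    print(state_at(0.0, 0.5, 1.0))   # well off vertical and still moving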

We could describe a bigger system by upping the dimensionality of the “phase space.” You have two pendulums? Well, you’ve got a 4D phase space: two axes for position and two for velocity. In general, the state of N pendulums is a point in a phase space of 2N dimensions.

But if the pendulums do not interact, then the coordinates of one have nothing to do with the coordinates of any other. Even though you can describe them by a single point, there’s no meaningful “unity.” You still need to specify 2N numbers to indicate the state of the total system. If the pendulums were all tied together so that they moved in synchrony, then you could give a single position and a single velocity for the whole shebang, but without saying how the parts of the system interact, this talk of “unity” is — no pun intended — pointless.
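
Here's the same point as a rough Python sketch: one state array, a 2N-dimensional phase space, and not a whisper of interaction between the rows. Perturb one pendulum and the others never notice:

    import numpy as np

    # N non-interacting pendulums: one point in a 2N-dimensional phase space,
    # but each (position, velocity) pair evolves all by itself.  Change the
    # initial conditions of pendulum 2 and pendulums 0 and 1 don't care.
    omega = 1.0

    def evolve(state, dt, steps):
        # state has shape (N, 2); columns are position and velocity.
        state = state.copy()
        for _ in range(steps):
            theta, v = state[:, 0], state[:, 1]
            v = v - omega**2 * theta * dt      # symplectic Euler update,
            theta = theta + v * dt             # applied row by row
            state = np.column_stack([theta, v])
        return state

    start_a = np.array([[0.1, 0.0], [0.2, 0.0], [0.3, 0.0]])
    start_b = start_a.copy()
    start_b[2] = [5.0, -1.0]                   # perturb only pendulum 2

    end_a = evolve(start_a, 0.01, 1000)
    end_b = evolve(start_b, 0.01, 1000)
    print(np.allclose(end_a[:2], end_b[:2]))   # True: rows 0 and 1 unchanged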

There are plenty of reasons to think that our own “unity” is an illusion of sorts, too. Ever been driving your car and found that you’d reached your destination “on autopilot”? Ever responded to an insult “without thinking”?

Ever fallen asleep and had a dream?

All of these experiences make plausible, I believe, the hypothesis that the brain is a multiprocess machine. Not only are all the neurons firing in parallel, but also at a higher organizational level, the system is running multiple “programs” at once, only a small clutch of which are directly involved in what we call “self-awareness.”

(Comparisons of the mind/brain issue to the software/hardware relationship should only be taken in a general, illustrative sense; all analogies have their limits. Dang it, why isn’t the mind open-source?)

So, Rucker’s notions of “panpsychism” are founded on misrepresentations of actual science. That would be enough reason to chuck them in the circular file, but an additional, more general point should also be raised. I think I have a mind, a “consciousness” of some sort, and based on prolonged observation I see other people acting as if they had “consciousness” too; the natural conclusion, at least the best pro tem explanation, is that we’ve all got the sentience bug. In order for this to make sense, my estimation of what is conscious and what isn’t has to be at least a viable rough guide. Consequently, I must look askance at any proposal which broadens “mind,” “sentience” or “consciousness” to cover the whole bloody Cosmos.

Rupert Sheldrake’s contribution is even worse. He makes a great bother about how we don’t know the way “green turtles find Ascension Island from thousands of miles away to lay their eggs,” and he insists quite vigorously that the currently extant explanations for animal navigation aren’t adequate. Not only does each proposed mechanism — navigating by the Sun, say, or by the Earth’s magnetic field — fail in some circumstances, but there’s no way to combine them:

The obvious way of dealing with this problem is to postulate complex interactions between known sensory modalities, with multiple back-up systems. The complex interaction theory is safe, sounds sophisticated, and is vague enough to be irrefutable. The idea of a sense of direction involving new scientific principles is dangerous, but it may be inevitable.

Humbug.

It’s only “vague” if you don’t look at the details of the interactions people have actually proposed. (Too “complex,” perhaps?) And given Sheldrake’s mystical blather about “morphogenetic fields” (stealing a perfectly good word from developmental biology), I’m not surprised he wants animals to migrate using “a sense of direction involving new scientific principles”. Ssh! Don’t tell him about the people actually working on the problem. It’s certainly simpler to imagine whole new forces and energy fields than to think that turtles might be able to combine multiple types of sensory data. Of course, all the results showing that animals can and do navigate by the sun, by the Earth’s magnetic field and even by ocean currents should be discarded in favor of the navigation-by-morphic-field hypothesis. No doubt about it!

What a load of tripe. Even my mother’s cat can associate stimuli from different senses with common referents (sound of can opening = cat food; smell of fish = cat food). I don’t think the cat’s whiskers are picking up vibrations in the morphic field broadcast by canned tuna.

And while we’re talking of cats, don’t forget the rats. The classic story is told in Feynman’s “Cargo Cult Science” speech:

For example, there have been many experiments running rats through all kinds of mazes, and so on — with little clear result. But in 1937 a man named Young did a very interesting one. He had a long corridor with doors all along one side where the rats came in, and doors along the other side where the food was. He wanted to see if he could train the rats to go in at the third door down from wherever he started them off. No. The rats went immediately to the door where the food had been the time before.

The question was, how did the rats know, because the corridor was so beautifully built and so uniform, that this was the same door as before? Obviously there was something about the door that was different from the other doors. So he painted the doors very carefully, arranging the textures on the faces of the doors exactly the same. Still the rats could tell. Then he thought maybe the rats were smelling the food, so he used chemicals to change the smell after each run. Still the rats could tell. Then he realized the rats might be able to tell by seeing the lights and the arrangement in the laboratory like any commonsense person. So he covered the corridor, and still the rats could tell.

He finally found that they could tell by the way the floor sounded when they ran over it. And he could only fix that by putting his corridor in sand. So he covered one after another of all possible clues and finally was able to fool the rats so that they had to learn to go in the third door. If he relaxed any of his conditions, the rats could tell.

Combining data from multiple sensory modalities: rats can do it. Cats can do it. Surely, we should expect turtles to do it, rather than embracing Sheldrake’s silly morphobabble?
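
And just to show there's nothing occult about it, here's a bare-bones Python sketch of fusing two noisy compass cues by weighting each one according to how much you trust it. The cues, headings and weights are all invented for illustration:

    import numpy as np

    # Combining two noisy directional cues (say, a sun compass and a magnetic
    # compass) by weighting each heading by its reliability.  Headings are
    # turned into unit vectors first so the averaging behaves itself near the
    # 0/360-degree wrap-around.  Every number here is made up.
    def fuse_headings(headings_deg, weights):
        angles = np.radians(headings_deg)
        x = np.sum(weights * np.cos(angles))
        y = np.sum(weights * np.sin(angles))
        return np.degrees(np.arctan2(y, x)) % 360

    sun_cue = 40.0        # degrees; reliable on a clear day
    magnetic_cue = 55.0   # degrees; still there at night or under cloud
    print(fuse_headings(np.array([sun_cue, magnetic_cue]),
                        np.array([0.7, 0.3])))   # ~44.5: leans toward the sun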

I’ve got a “dangerous idea” for you people. What if basic facts of science were taught effectively in American schools, along with skills of critical reasoning and the fine art of baloney detection? Then people might recognize garbage when they see it, and charlatans might find themselves out of work.

REFERENCES:

  • A. Litt et al., “Is the Brain a Quantum Computer?” Cognitive Science 20 (2006): 1–11.
  • J. McCarthy, “Review of The Emperor’s New Mind by Roger Penrose,” Bulletin of the American Mathematical Society 23, no. 2 (1990): 606–16.
  • S. Johnsen and K. J. Lohmann, “The physics and neurobiology of magnetoreception,” Nature Reviews Neuroscience 6 (2005): 703–12.

9 thoughts on “Dangerous Ideas”

  1. On the contrary, dark matter sounds like a terrible candidate for mind-stuff: how does it even interact with the ordinary, non-dark matter which we already know is doing stuff in the brain? It’s dark, meaning that it doesn’t interact via electromagnetism, only through gravity (and possibly the weak nuclear force).

    Which actually brings up an interesting hypothetical. Since we already have one type of matter that only interacts through one of the fundamental forces (gravity), it doesn’t seem too far out to suggest that there might be a type of matter that only interacts through one of the others: e.g., electromagnetism.

    There’s no evidence for such a thing (unlike dark matter, which is well established), but it would at least have a plausible mechanism for influencing brain activity.

    So positing that “mind” is composed of dark matter is something like sticking a square peg into an imaginary round hole; perhaps electro-matter (to coin a term) is more like sticking an imaginary round peg into an imaginary round hole.

  2. Joshua:

    Gravity “couples” to energy, so in anything but a seriously wacked-out theory, all particles which have energy will feel the force of gravity. Then, too, general relativity interprets gravity as the curvature of spacetime, so “electro-matter” would have to be immune to that curvature. I’m pretty sure that this could lead you into some interesting paradoxes.

  3. Does the attempt to combine “mind” with dark matter remind anyone else of a little thing called Dust? You got fiction in my reality! You got reality in my fiction!

    Also:

    “[I]f the self-preening metaphors of peril, subversion and ideological danger in the literary theorists’ account of their work were taken seriously, their insurance costs would match those for firefighters, Grand Prix drivers and war correspondents.”

    How I WISH one of my professors in this MA film studies program were frank enough about their work to say something like this…It’s always a delight when one of them says something implying that, ya know, much of what we talk about in this field is self-serving BS :)

  4. “All of these experiences make plausible, I believe, the hypothesis that the brain is a multiprocess machine. Not only are all the neurons firing in parallel, but also at a higher organizational level, the system is running multiple ‘programs’ at once, only a small clutch of which are directly involved in what we call ‘self-awareness.’”

    In a way, I actually think that the A.I. folks are somewhat responsible for breeding all the confusion that gave us all this absurd quantum mysticism about the brain. Especially on the Minsky/Kurzweil extreme, you’ve had people operating on the assumption that the Church-Turing thesis absolves us of all the ambiguities of operation and implementation, and that all we have to do is wait for Moore’s law to endow us with enough computing power to emulate human consciousness.

    The realization that the human brain is a complex device is important, especially when it comes to Penrose’s arguments from his famous tilings. If I remember correctly, Penrose has argued that uncomputable problems, like finding a global solution to one of his tilings, show that you must have some “non-algorithmic” means out there of solving them. Well, no. You can partition a tiling solution into local recursive functions that have the peripheral effect of producing the desired pattern. I doubt this argument would seem plausible if we hadn’t been convinced that the only difference between a serial von Neumann machine and a human brain was the scale you needed to implement the latter on the former.

  5. Hah. A very good essay except for this:

    ” What if basic facts of science were taught effectively in American schools, along with skills of critical reasoning and the fine art of baloney detection? Then people might recognize garbage when they see it, and charlatans might find themselves out of work.”

    If you have any idea how to do this, what the heck are you doing running a blog? These are all well-educated, scientifically literate people, far beyond the run-of-the-mill student, and you are absolutely correct in ripping on them. So how, again, will it help to teach something better? How would it be done? This sounds like pixie dust in education. When I see all these anthropic discussions (in physics, not philosophy), I start to get very discouraged.

  6. Hey, there’s no rule that says smart people will stop making stupid remarks. I just figure that upping the number of people who can catch on when that happens would have to be a good thing.

    (Also, I’d dispute any characterization of Sheldrake as “scientifically literate.” Knowing how to string polysyllabic words together to achieve a simulacrum of science jargon isn’t exactly the same as scientific literacy, although it could probably land you a job writing Star Trek.)