Apropos an announcement from the AAAS annual meeting, Steve Novella ponders the task of reverse-engineering the human brain. For those of us who share a materialistic view of the brain — i.e., for people who subscribe to actual science instead of woo — this task is likely to seem possible in principle, although daunting in practice. If the mind is the activity of the brain, and a finite number of genes can direct the growth of a brain in a finite amount of time, and the molecules which make up the brain are being exchanged in and out all the time anyway, it’s reasonable to speculate that we’ll be able to mimic the process in another medium. Novella argues that the “software” part of this task will be harder than the “hardware” side:
Sure, we may run into unexpected technological hurdles, but so far we have been able to develop new approaches to computing technology to keep blasting through all hurdles and keep Moore’s Law on track. So while there is always uncertainty in predicting future technology, predicting this level of computer advancement at the least can be considered highly probable.
The software extrapolation I think is more difficult to do, as conceptual hurdles may be more difficult to solve and may stall progress for an undetermined amount of time.
Broadly speaking, I agree. The exact amount of processing power needed to implement the brain in a Linux box is as yet unknown; it depends on things like the complexity of an individual synapse, and how much data is required to represent the state of a neuron. Then, too, for every hardware advance on Moore’s side of the ledger, Gates is there to bloat the software by a corresponding amount, and the applications of computer technology which have most radically affected life in recent years have depended not on raw cycles-per-second, but on networking and mass storage, neither of which necessarily improves at the same rate as processor speed.
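Just to make the uncertainty concrete, here is a back-of-envelope sketch of the kind of estimate involved. Every number in it is an assumption for illustration (rough neuron and synapse counts, an average firing rate, a guess at how much arithmetic one synaptic event is "worth"), not a claim about the real requirement:

```python
# Back-of-envelope estimate of raw compute for brain emulation.
# All figures are assumed, order-of-magnitude placeholders.

NEURONS = 8.6e10              # assumed neuron count (~86 billion)
SYNAPSES_PER_NEURON = 1e4     # assumed average synapses per neuron
AVG_FIRING_HZ = 10            # assumed average spike rate

def required_ops_per_sec(ops_per_synaptic_event):
    """Operations per second needed, under the assumptions above."""
    return NEURONS * SYNAPSES_PER_NEURON * AVG_FIRING_HZ * ops_per_synaptic_event

# The answer swings by orders of magnitude depending on how complex
# a single synapse turns out to be -- exactly the unknown named above:
for ops in (1, 10, 1000):
    print(f"{ops:>5} ops per synaptic event -> {required_ops_per_sec(ops):.1e} ops/sec")
```

The point of the sketch is not the totals but the spread: changing one poorly-constrained biological parameter moves the hardware requirement by three orders of magnitude, which is why the hardware extrapolation, easy as it looks next to the software problem, still has real error bars.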
Ray Kurzweil may be the most famous evangelist of the view that explosive increases in computer power will give us artificial intelligence on a par with our own in the near future. He has elaborated upon this idea in several books, a couple of which I used to have on my shelf; a commenter at NeuroLogica, Sciolist, still has The Age of Spiritual Machines (1999) close at hand.
Kurzweil claims that man’s merger with machine is inevitable, because the pace of evolution has been increasing exponentially — when we reach the edge of biological evolution, we must transition into artificial substrates so that we can continue traveling up that exponential curve into binary godliness. This, he predicts, is inevitable. That’s at least a misreading of the theory of evolution; I’d argue it’s also a bit kooky.
Indeed, Kurzweil’s attempts to anchor his “Law of Accelerating Returns” in geological deep time are singularly silly, to steal PZ Myers’ phrase. They rely upon condensing multiple historical events into single data points to get a pretty curve, and instead of reflecting any deep truth about evolutionary processes, the curve you get reveals a recentist bias — the “proximity of the familiar.”
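A toy simulation can show how a "proximity of the familiar" bias manufactures an accelerating curve out of nothing. In the sketch below (every parameter is an invented, illustrative assumption), notable events occur at a perfectly constant rate throughout history, but the chance that an event survives in our collective record decays with its age. The surviving record then looks like the event rate is exploding toward the present:

```python
import random

random.seed(42)

# Toy model of recentist bias. Assumed, illustrative parameters only:
# innovations occur at a CONSTANT rate of one per year over 10,000
# years, but the odds that an event survives in the historical record
# fall off with a "memory half-life" of 1,000 years.
HORIZON = 10_000   # years of history considered
HALF_LIFE = 1_000  # assumed half-life of historical memory, in years

# t is measured in years before the present; one event per year, uniform.
events = [random.uniform(0, HORIZON) for _ in range(HORIZON)]
record = [t for t in events if random.random() < 0.5 ** (t / HALF_LIFE)]

recent = sum(1 for t in record if t < 1_000)            # last millennium
distant = sum(1 for t in record if 4_000 <= t < 5_000)  # 5th millennium back

print(f"recorded events, last millennium:     {recent}")
print(f"recorded events, 5th millennium back: {distant}")
# The recorded density climbs steeply toward the present even though
# the true event rate never changed.
```

Under these assumptions the last millennium ends up with roughly an order of magnitude more recorded events than a millennium a few thousand years back, despite the underlying process being flat — the sort of curve that invites an exponential fit it does not deserve.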
I recall that bothering me when I read the book, but eight or so years have gone by since then, making my memory only slightly more reliable than that of a HAL 9000 unit being fed a tapeworm. Thus it was with surprise and glee that I read Sciolist’s recounting of the predictions Kurzweil makes for one decade after the book’s publication, 2009:
books will be replaced by electronic reading devices, children will be taught to read predominately with computers, most text will be produced with voice recognition software, phones will translate between languages in real time (“where you speak in English and your Japanese friend hears you in Japanese”), the last decade will have been a period of constant economic prosperity, artists will create paintings and music with the active collaboration of robot artists, the deaf will hear and the blind will see. Here he is on war: “The security of computation and communication is the primary focus of the U.S. Department of Defense . . . Humans are generally far removed from the scene of battle. Warfare is dominated by unmanned intelligent airborne devices.”
Surely these predictions were qualified in some fashion?
He did qualify these predictions — he implied they were conservative.
If you grew up with James Burke’s Connections (1979), as I did, you’ll notice something else about this brand of futurism-by-exponential. If the innovations of this century happen anything like the innovations of those past, we’re going to see different bits and pieces of technology, produced perhaps by reverse-engineering different aspects of the brain, each of which changes our lives in its own way, and which are recombined by other inventors like pieces of a mad jigsaw… Strong AI won’t pop into being like a cartoon lightbulb over a mad scientist’s head — even lightbulbs themselves didn’t happen that way.
Why does that process of innovation matter? Because, for one thing, it means that the material origins of mind will have technological and therefore social consequences long before we have a full-up brain emulator in an Ubuntu package. As pointed out by Kosik and Myers (and implicitly acknowledged by Shubin), denials of neuroscience will be the creationism of tomorrow: all it takes is a single political hot-button issue, and mysticism will run rampant once more. What times we have the fortune to enjoy, when scientists can seriously contemplate the synthesis of conscious mind, while the school boards still endorse the literal truth of Bronze Age mythology.