Apropos an announcement from the AAAS annual meeting, Steve Novella ponders the task of reverse-engineering the human brain. For those of us who share a materialistic view of the brain — i.e., for people who subscribe to actual science instead of woo — this task is likely to seem possible in principle, although daunting in practice. If the mind is the activity of the brain, and a finite number of genes can direct the growth of a brain in a finite amount of time, and the molecules which make up the brain are being exchanged in and out all the time anyway, it’s reasonable to speculate that we’ll be able to mimic the process in another medium. Novella argues that the “software” part of this task will be harder than the “hardware” side:
Sure, we may run into unexpected technological hurdles, but so far we have been able to develop new approaches to computing technology to keep blasting through all hurdles and keep Moore’s Law on track. So while there is always uncertainty in predicting future technology, predicting this level of computer advancement at the least can be considered highly probable.
The software extrapolation I think is more difficult to do, as conceptual hurdles may be more difficult to solve and may stall progress for an undetermined amount of time.
Broadly speaking, I agree. The exact amount of processing power needed to implement the brain in a Linux box is as yet unknown; it depends on things like the complexity of an individual synapse, and how much data is required to represent the state of a neuron. Then, too, for every hardware advance on Moore’s side of the ledger, Gates is there to bloat the software by a corresponding amount, and the applications of computer technology which have most radically affected life in recent years have depended not on raw cycles-per-second, but on networking and mass storage, neither of which necessarily improves at the same rate as processor speed.
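For a sense of the scale involved, here's a back-of-envelope sketch in Python. The neuron and synapse counts are rough figures from the neuroscience literature; the bytes-per-synapse and update-rate numbers are pure assumptions picked for illustration, and the totals swing by orders of magnitude when you change them.

```python
# Back-of-envelope estimate of the storage and throughput a whole-brain
# emulation might need. Every number here is a rough assumption, not a
# measured fact; tweak them and watch the answer change by orders of magnitude.

NEURONS = 8.6e10           # ballpark count of neurons in a human brain
SYNAPSES_PER_NEURON = 1e4  # rough average; published estimates vary widely
BYTES_PER_SYNAPSE = 8      # assumption: one double per synaptic weight
UPDATES_PER_SECOND = 100   # assumption: ~100 Hz update rate per synapse

synapses = NEURONS * SYNAPSES_PER_NEURON
storage_bytes = synapses * BYTES_PER_SYNAPSE
ops_per_second = synapses * UPDATES_PER_SECOND

print(f"Synapses:   {synapses:.1e}")
print(f"Storage:    {storage_bytes / 1e12:.0f} TB (at {BYTES_PER_SYNAPSE} bytes/synapse)")
print(f"Throughput: {ops_per_second:.1e} synaptic updates/s")
```

The per-synapse assumption is doing all the work: if a synapse turns out to need a detailed biochemical model rather than a single weight, the totals balloon accordingly, which is exactly why the question remains open.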
Ray Kurzweil may be the most famous evangelist of the view that explosive increases in computer power will give us artificial intelligence on a par with our own in the near future. He has elaborated upon this idea in several books, a couple of which I used to have on my shelf; a commenter at NeuroLogica, Sciolist, still has The Age of Spiritual Machines (1999) close at hand.
Kurzweil claims that man’s merger with machine is inevitable: the pace of evolution has been increasing exponentially, and when we reach the edge of biological evolution, we must transition into artificial substrates so that we can continue traveling up that exponential curve into binary godliness. That’s at least a misreading of the theory of evolution; I’d argue it’s also a bit kooky.
Indeed, Kurzweil’s attempts to anchor his “Law of Accelerating Returns” in geological deep time are singularly silly, to steal PZ Myers’ phrase. They rely upon condensing multiple historical events into single data points to get a pretty curve, and instead of reflecting any deep truth about evolutionary processes, the curve you get reveals a recentist bias — the “proximity of the familiar.”
I recall that bothering me when I read the book, eight or so years ago, but in the years since, my memory has become only slightly more reliable than that of a HAL 9000 unit being fed a tapeworm. Thus it was with surprise and glee that I read Sciolist’s recounting of the predictions Kurzweil makes for one decade after the book’s publication, 2009:
books will be replaced by electronic reading devices, children will be taught to read predominately with computers, most text will be produced with voice recognition software, phones will translate between languages in real time (“where you speak in English and your Japanese friend hears you in Japanese”), the last decade will have been a period of constant economic prosperity, artists will create paintings and music with the active collaboration of robot artists, the deaf will hear and the blind will see. Here he is on war: “The security of computation and communication is the primary focus of the U.S. Department of Defense . . . Humans are generally far removed from the scene of battle. Warfare is dominated by unmanned intelligent airborne devices.”
Surely these predictions were qualified in some fashion?
He did qualify these predictions — he implied they were conservative.
If you grew up with James Burke’s Connections (1979), as I did, you’ll notice something else about this brand of futurism-by-exponential. If the innovations of this century happen anything like those of centuries past, we’re going to see different bits and pieces of technology, produced perhaps by reverse-engineering different aspects of the brain, each of which changes our lives in its own way, and which other inventors recombine like pieces of a mad jigsaw. Strong AI won’t pop into being like a cartoon lightbulb over a mad scientist’s head; even lightbulbs themselves didn’t happen that way.
Why does that process of innovation matter? Because, for one thing, it means that the material origins of mind will have technological and therefore social consequences long before we have a full-up brain emulator in an Ubuntu package. As pointed out by Kosik and Myers (and implicitly acknowledged by Shubin), denials of neuroscience will be the creationism of tomorrow: all it takes is a single political hot-button issue, and mysticism will run rampant once more. What times we have the fortune to enjoy, when scientists can seriously contemplate the synthesis of conscious mind, while the school boards still endorse the literal truth of Bronze Age mythology.
Kurzweil’s main problem is that he misplaces the real hurdle of Strong A.I. Computing power is an obstacle, but the real problem is the software implementation. Most of the focus in computer science research has been on analytic computation, not on more “biomimetic” tasks like pattern finding, inference, learning and abstraction. Research in these areas has come a long way in the past few decades, but we’re still far from an A.I. that can scale up to match its natural equivalents.
I wouldn’t even say that that’s the main problem with Kurzweil. His main problem can be found in any calculus textbook.
First we set up and solve the differential equations governing exponential growth and decay. These are rather simple to solve, and they have simple motivations: the more bacteria there are, the faster the colony will produce more. In Kurzweil’s case, the more technology we have, the faster we’ll produce new technology.
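In the textbook’s notation, with $N$ the population (or, on Kurzweil’s reading, the amount of technology) and $k > 0$ the growth rate:

\[ \frac{dN}{dt} = kN \quad\Longrightarrow\quad N(t) = N_0\, e^{kt}. \]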
But he stops reading before the very next section! Populations live in environments, and environments have carrying capacities. Thus there’s a counterterm in the equation: the closer a population comes to the carrying capacity, the more slowly it grows. Kurzweil never even considers such terms in the model.
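That next section introduces the carrying capacity $K$ as precisely such a counterterm, turning the exponential into the logistic equation:

\[ \frac{dN}{dt} = kN\left(1 - \frac{N}{K}\right) \quad\Longrightarrow\quad N(t) = \frac{K}{1 + \frac{K - N_0}{N_0}\, e^{-kt}}. \]

For $N \ll K$ the correction factor is close to 1 and the growth looks exponential; as $N$ approaches $K$, it flattens into an S-curve.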
Just because Moore’s Law is shooting up exponentially now doesn’t mean it will keep doing so; as the curve climbs, we may well enter a regime where the counterterms become dominant. If Kurzweil were in my Calculus II class, I’d fail him outright.
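A minimal numerical sketch of the point, with made-up constants: the two solutions are indistinguishable early on and then part ways as the logistic curve saturates.

```python
import math

K = 1000.0  # carrying capacity (arbitrary units)
N0 = 1.0    # initial population
k = 1.0     # growth rate (per unit time)

A = (K - N0) / N0  # constant fixed by the initial condition

for t in range(0, 16, 3):
    exponential = N0 * math.exp(k * t)
    logistic = K / (1 + A * math.exp(-k * t))
    print(f"t={t:2d}  exponential={exponential:14.1f}  logistic={logistic:7.1f}")
```

The catch for the forecaster is that data from the early, exponential-looking stretch cannot tell you where, or whether, the curve levels off.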
I think he also fits straight lines to log-log plots with linear least-squares regression (but like I said, I don’t have the book close at hand right now).
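For what it’s worth, the technique looks roughly like the following sketch (assuming numpy); whether Kurzweil actually did it this way is something I can’t check without the book. The synthetic data here are constructed so that each inter-event gap is proportional to its age, which is roughly what cherry-picking “paradigm shifts” at ever-finer recent resolution produces; the straight line on the log-log plot is then baked in, not discovered.

```python
import numpy as np

rng = np.random.default_rng(0)

# Purely synthetic "events": each gap to the next event is proportional to
# how long ago it happened, plus multiplicative noise. This mimics the shape
# of the hand-picked historical data, not any real chronology.
t_before_present = np.logspace(2, 9, 12)  # 100 years to 1 Gyr ago
gap_to_next = 0.5 * t_before_present * rng.lognormal(0.0, 0.3, 12)

# Ordinary least squares on the logs: log(gap) = m * log(t) + b.
m, b = np.polyfit(np.log10(t_before_present), np.log10(gap_to_next), 1)
print(f"fitted slope: {m:.2f}")  # close to 1 by construction
```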
Ah, it’s gratifying to see someone else I know independently posting on the same topic. I give my less esoteric reasons for doubting the cyber-Pollyanna’s oracular abilities here. Enjoy!