Apropos an announcement from the AAAS annual meeting, Steve Novella ponders the task of reverse-engineering the human brain. For those of us who share a materialist view of the brain — i.e., for people who subscribe to actual science instead of woo — this task is likely to seem possible in principle, although daunting in practice. If the mind is the activity of the brain, and a finite number of genes can direct the growth of a brain in a finite amount of time, and the molecules which make up the brain are being exchanged in and out all the time anyway, it’s reasonable to speculate that we’ll be able to mimic the process in another medium. Novella argues that the “software” part of this task will be harder than the “hardware” side:
Sure, we may run into unexpected technological hurdles, but so far we have been able to develop new approaches to computing technology to keep blasting through all hurdles and keep Moore’s Law on track. So while there is always uncertainty in predicting future technology, predicting this level of computer advancement at the least can be considered highly probable.
The software extrapolation I think is more difficult to do, as conceptual hurdles may be more difficult to solve and may stall progress for an undetermined amount of time.
Broadly speaking, I agree. The exact amount of processing power needed to implement the brain in a Linux box is as yet unknown; it depends on things like the complexity of an individual synapse, and how much data is required to represent the state of a neuron. Then, too, for every hardware advance on Moore’s side of the ledger, Gates is there to bloat the software by a corresponding amount, and the applications of computer technology which have most radically affected life in recent years have depended not on raw cycles-per-second, but on networking and mass storage, neither of which necessarily improves at the same rate as processor speed.
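Just how sensitive that unknown is to the underlying assumptions can be shown with a back-of-envelope calculation. Every number below is an assumption plugged in for illustration — a commonly cited neuron count, an order-of-magnitude synapse count, and guessed costs — not a measurement:

```python
# Fermi estimate of the raw compute needed to emulate a brain.
# All figures are assumptions for illustration, not measurements.
NEURONS = 8.6e10              # ~86 billion neurons (commonly cited estimate)
SYNAPSES_PER_NEURON = 1e4     # order-of-magnitude average
AVG_FIRING_HZ = 1.0           # assumed mean firing rate; real rates vary widely
OPS_PER_SYNAPTIC_EVENT = 10   # assumed cost of updating one synapse once

synapses = NEURONS * SYNAPSES_PER_NEURON
ops_per_second = synapses * AVG_FIRING_HZ * OPS_PER_SYNAPTIC_EVENT
print(f"~{ops_per_second:.1e} ops/s")  # ~8.6e+15 ops/s under these assumptions
```

The point is not the answer (here, order 10^16 operations per second) but the leverage: bump the per-synapse cost from 10 operations to 10,000 — say, because synaptic state turns out to need its own molecular-level simulation — and the requirement grows a thousandfold, which is many doublings of Moore's Law either gained or lost on a single guess.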
Ray Kurzweil may be the most famous evangelist of the view that explosive increases in computer power will give us artificial intelligence on a par with our own in the near future. He has elaborated upon this idea in several books, a couple of which I used to have on my shelf; a commenter at NeuroLogica, Sciolist, still has The Age of Spiritual Machines (1999) close at hand.
Kurzweil claims that man’s merger with machine is inevitable, because the pace of evolution has been increasing exponentially — when we reach the edge of biological evolution, we must transition into artificial substrates so that we can continue traveling up that exponential curve into binary godliness. This, he predicts, is inevitable. That’s at least a misreading of the theory of evolution; I’d argue it’s also a bit kooky.
Indeed, Kurzweil’s attempts to anchor his “Law of Accelerating Returns” in geological deep time are singularly silly, to steal PZ Myers’ phrase. They rely upon condensing multiple historical events into single data points to get a pretty curve, and instead of reflecting any deep truth about evolutionary processes, the curve you get reveals a recentist bias — the “proximity of the familiar.”
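The recentist bias is easy to reproduce with a toy construction (mine, not Kurzweil's actual dataset): suppose the "milestones" we remember are simply the ones spaced evenly in log time-before-present, one per order of magnitude. The result looks like runaway acceleration even though nothing in the underlying process accelerates at all:

```python
# Toy model of the "proximity of the familiar": remembered milestones
# spaced evenly in log time-before-present, one per decade of years.
# This construction is illustrative, not Kurzweil's data.
events_years_ago = [10 ** k for k in range(10)]  # 1, 10, 100, ..., 1e9 years ago
# Gap between each milestone and the next-older one:
gaps = [older - newer for newer, older in zip(events_years_ago, events_years_ago[1:])]
print(gaps)  # [9, 90, 900, ...] -- each gap 10x the previous
```

Plot the interval between successive "events" against time and you get a tidy exponential — generated entirely by how the events were sampled. A curve like that tells you about the historian's memory, not about any law of accelerating returns.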
I recall that bothering me when I read the book, eight or so years ago, but eight years have gone by since then, making my memory only slightly more reliable than that of a HAL 9000 unit being fed a tapeworm. Thus it was with surprise and glee that I read Sciolist’s recounting of the predictions Kurzweil makes for one decade after the book’s publication, 2009: