Computer-science guru Scott Aaronson has begun to put lecture notes for his “Great Ideas In Theoretical Computer Science” course online. (That’s 6.080 in MIT-speak: words are for the weak.) Prepared by the students and edited by the instructors, the notes are terse but clear and enjoyable. I particularly like lecture 6, “Minds and Machines,” of which the following is a taste:
We already established that no Turing machine exists that can solve the halting problem. If we had a proof system that could analyze any Turing machine and prove whether it halted or did not halt, we could use this system to solve the halting problem by brute force (also known as the “British Museum algorithm”) by trying every possible string that might be a proof. You would either terminate and find a proof that it halts or terminate and find a proof that it doesn’t halt. If this proof system were sound and complete, it would violate the unsolvability of the halting problem. Therefore, there is no such sound and complete proof system.
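The brute-force search described in the excerpt can be sketched in a few lines of Python. The checker `is_valid_proof` is hypothetical, standing in for the sound-and-complete proof system that the argument shows cannot exist; everything else is just dovetailed enumeration of strings.

```python
# Sketch of the "British Museum algorithm": enumerate every candidate
# proof string in order of length and test whether it proves that the
# machine halts, or that it loops. The verifier `is_valid_proof` is a
# hypothetical stand-in for a sound-and-complete proof system.
from itertools import count, product

def decide_halting(machine, is_valid_proof):
    """Search all strings for a proof that `machine` halts or loops."""
    alphabet = "01"
    for length in count(1):                     # strings of length 1, 2, 3, ...
        for chars in product(alphabet, repeat=length):
            candidate = "".join(chars)
            if is_valid_proof(candidate, machine, claim="halts"):
                return True
            if is_valid_proof(candidate, machine, claim="loops"):
                return False
```

If the verifier were sound (only true claims are provable) and complete (every true claim has a proof), this loop would always terminate with the right answer, contradicting the unsolvability of halting.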
I also like this bit:
Searle gets a lot of the mileage in his thought experiment from careful choice of imagery: “mere slips of paper”! However, the human brain has immense computational capacity (roughly 10^11 neurons and 10^14 synapses, with each neuron itself much more complex than a logic gate), and performs its computations massively in parallel. Simulating the computational power of the brain could easily require enough slips of paper to fill the solar system. But in that case Searle’s scenario seems to lose its intuitive force.
If we drastically oversimplify a neuron, we might suppose that we could describe its state on a printed page. Supposing, for the sake of argument, that this is good enough, then describing 1011 neurons would require a stack of paper ten thousand kilometers high. Spreading this stack across a desk for reference would require a table-top roughly twenty million kilometers long (unless I’ve misplaced a zero somewhere). This is much longer than the distance from the Earth to the Moon, and a sizable fraction of the distance from here to the Sun.
Can we get a more accurate figure, including, perhaps, the information required to represent synaptic connections? Well, the Blue Brain Project has found that it takes about 100 gigabytes to represent a column of 10,000 nerve cells. (These are fairly rough values, of course, so we won’t fret too much about the difference between 10^3 and 2^10.) At about one hundred neurons per gigabyte, or 10 megabytes per neuron, that’s about four thousand pages per nerve cell, so all 10^11 neurons would equate to a printed stack 4 × 10^10 meters high. That’s 40 million kilometers. Side-by-side, that works out to 80 billion kilometers — enough to stretch from the Sun to Neptune eighteen times, and one five-hundredth of the way to Proxima Centauri.
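The revised estimate can be checked the same way. Again the page dimensions and the bytes-per-page figure (about 2.5 KB of text per printed page) are my own round assumptions; the 100 GB per 10,000 neurons comes from the Blue Brain figure quoted above:

```python
# Scaling the Blue Brain figure (100 GB per 10,000 neurons) to the
# whole brain. Assumed (my guesses): ~2.5 KB of text per printed page,
# 0.1 mm page thickness, 20 cm page width.
BYTES_PER_PAGE = 2500
PAGE_THICKNESS_M = 1e-4
PAGE_WIDTH_M = 0.2

bytes_per_neuron = 100e9 / 10_000                     # 10 MB per neuron
pages_per_neuron = bytes_per_neuron / BYTES_PER_PAGE  # ~4,000 pages

total_pages = pages_per_neuron * 10**11
stack_m = total_pages * PAGE_THICKNESS_M     # 4e10 m = 40 million km
row_km = total_pages * PAGE_WIDTH_M / 1000   # 8e10 km = 80 billion km

SUN_TO_NEPTUNE_KM = 4.5e9    # roughly 30 AU
PROXIMA_KM = 4.0e13          # roughly 4.24 light-years
print(row_km / SUN_TO_NEPTUNE_KM)   # about 18 Sun-to-Neptune spans
print(PROXIMA_KM / row_km)          # about 500: 1/500 of the way there
```

So the side-by-side length comes out to 8 × 10^10 km, matching the eighteen Sun–Neptune spans and the one-five-hundredth of the distance to Proxima Centauri.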