Computer-science guru Scott Aaronson has begun to put lecture notes for his “Great Ideas In Theoretical Computer Science” course online. (That’s 6.080 in MIT-speak: words are for the weak.) Prepared by the students and edited by the instructors, the notes are terse but clear and enjoyable. I particularly like lecture 6, “Minds and Machines,” of which the following is a taste:

We already established that no Turing machine exists that can solve the halting problem. If we had a proof system that could analyze any Turing machine and prove whether it halted or did not halt, we could use this system to solve the halting problem by brute force (also known as the “British Museum algorithm”) by trying every possible string that might be a proof. You would either terminate and find a proof that it halts or terminate and find a proof that it doesn’t halt. If this proof system were sound and complete, it would violate the unsolvability of the halting problem. Therefore, there is no such sound and complete proof system.
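The brute-force search the notes describe is easy to sketch. Here is a minimal Python rendering, where `proof_checker` stands in for the hypothetical sound-and-complete proof system (no such checker can actually exist, which is the whole point):

```python
from itertools import count, product

def all_strings(alphabet="01"):
    """Enumerate every finite string over the alphabet, shortest first."""
    yield ""
    for n in count(1):
        for chars in product(alphabet, repeat=n):
            yield "".join(chars)

def decide_halting(machine, proof_checker):
    """British Museum search: try every candidate string as a proof.

    proof_checker(candidate, claim) -> bool is the hypothetical proof
    system.  If it were sound and complete, some candidate would
    eventually verify one claim or the other, so this loop would
    always terminate -- contradicting the halting problem.
    """
    for candidate in all_strings():
        if proof_checker(candidate, ("halts", machine)):
            return True
        if proof_checker(candidate, ("does not halt", machine)):
            return False
```

The enumeration itself is perfectly real; only the checker is impossible.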

I also like this bit:

Searle gets a lot of the mileage in his thought experiment from careful choice of imagery: “mere slips of paper”! However, the human brain has immense computational capacity (roughly 10^{11} neurons and 10^{14} synapses, with each neuron itself much more complex than a logic gate), and performs its computations massively in parallel. Simulating the computational power of the brain could easily require enough slips of paper to fill the solar system. But in that case Searle’s scenario seems to lose its intuitive force.

If we drastically oversimplify a neuron, we might suppose that we could describe its state on a printed page. Supposing, for the sake of argument, that this is good enough, then describing 10^{11} neurons would require a stack of paper ten thousand kilometers high. Spreading this stack across a desk for reference would require a table-top roughly twenty million kilometers long (unless I’ve misplaced a zero somewhere). This is much longer than the distance from the Earth to the Moon, and a sizable fraction of the distance from here to the Sun.
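The arithmetic behind those figures is straightforward to check, assuming (as the estimate above implicitly does) one page per neuron, a page thickness of about 0.1 mm, and a page width of about 20 cm:

```python
NEURONS = 1e11
PAGE_THICKNESS_M = 0.1e-3   # ~0.1 mm per sheet
PAGE_WIDTH_M = 0.20         # ~20 cm per sheet, laid side by side

# One page per neuron.
stack_height_km = NEURONS * PAGE_THICKNESS_M / 1000   # ≈ 10,000 km
desk_length_km = NEURONS * PAGE_WIDTH_M / 1000        # ≈ 20,000,000 km
```

For comparison, the Earth–Moon distance is about 384,000 km, and the Earth–Sun distance about 150 million km.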

Can we get a more accurate figure, including, perhaps, the information required to represent synaptic connections? Well, the Blue Brain Project has found that it takes about 100 gigabytes to represent a column of 10,000 nerve cells. (These are fairly rough values, of course, so we won’t fret too much about the difference between 10^{3} and 2^{10}.) At about one hundred neurons per gigabyte, or 10 megabytes per neuron, that’s about *four thousand pages per nerve cell,* so all 10^{11} neurons would equate to a printed stack 4 × 10^{10} meters high. That’s 40 million kilometers. Side-by-side, that works out to 80 billion kilometers — enough to stretch from the Sun to Neptune eighteen times, and one five-hundredth the way to Proxima Centauri.
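The Blue Brain figures above can be verified the same way. The page parameters are my own assumptions, not part of the Blue Brain data: roughly 2,500 bytes of text per page, 0.1 mm thickness, 20 cm width:

```python
BYTES_PER_COLUMN = 100e9        # Blue Brain: ~100 GB per 10,000-neuron column
NEURONS_PER_COLUMN = 10_000
BYTES_PER_PAGE = 2.5e3          # assumed: ~2,500 characters per printed page
PAGE_THICKNESS_M = 0.1e-3       # assumed: ~0.1 mm per sheet
PAGE_WIDTH_M = 0.20             # assumed: ~20 cm per sheet
NEURONS = 1e11

pages_per_neuron = BYTES_PER_COLUMN / NEURONS_PER_COLUMN / BYTES_PER_PAGE
total_pages = NEURONS * pages_per_neuron
stack_km = total_pages * PAGE_THICKNESS_M / 1000   # ≈ 4e7 km (40 million km)
row_km = total_pages * PAGE_WIDTH_M / 1000         # ≈ 8e10 km (80 billion km)
```

The Sun–Neptune distance is about 4.5 billion km, so 80 billion km is indeed roughly eighteen such trips.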

Is it possible to simulate the physical structure of the brain without needing to build a software model? I mean, in theory it is – you duplicate the physical structure and voltage potentials and you duplicate the brain – but how far out of our technological grasp is it?

Also, I had a thought the other day: human beings become self-aware only after a certain number of inputs have collaborated to build a comprehensive enough world-model in the child’s brain. If we built a RUDIMENTARY AI – like you see in video games, a faux AI if you will – and allowed it to assimilate and react to stimuli roughly equivalent to those of the human sense-organs… I imagine this approach has been tried, but it struck me as a very basic sort of way of allowing intelligence to develop naturally.

Most likely the rudimentary AI I imagine is far too basic to properly assimilate the inputs or even make anything of them, but hey, I’m trying… LOL.

To your first point, I’d guess that the answer is “very.”

On your second point, have you heard of Cog?

I think the main weakness in Searle’s argument is that he doesn’t take into account the differences in computational complexity between pattern matching and formal language recognition. Searle doesn’t understand Chinese, but with more intensive effort *could* understand Chinese. His argument in no way demonstrates that the same isn’t true of a sufficiently powerful computing device.

Interesting. I’d thought, vaguely, about that distinction, but I hadn’t considered phrasing it in terms of computational complexity.