Blindsight (4) – The Chinese Room Argument

This is the fourth and final article in my series focusing on issues Peter Watts addresses in his SF novel Blindsight. The prior three articles looked at whether consciousness can be considered an impediment, whether it would be worth living in a virtual reality environment, the brain and a range of unusual neurological impairments, and lastly, artificial enhancements up to and including transhumanism. This article discusses a very famous thought experiment, the Chinese Room Argument, and the challenge it poses to the computational theory of mind.

The Argument

First proposed by John Searle, the Chinese Room Argument asks you to imagine an English speaker who doesn’t know any Chinese sitting in a room with boxes of Chinese symbols (a database) and a book, written in English, explaining the rules for how those symbols can be manipulated and combined (a program). Such a person could then receive strings of Chinese characters written by people outside the room (which, unbeknownst to our English speaker, are actually questions) and, using the resources at his/her disposal (none of which explain what any of those symbols actually mean), produce corresponding strings of Chinese characters that take the form of coherent answers to those questions. The idea is that the person in the room, while appearing to understand Chinese, in actual fact doesn’t know a single word. The broader conclusion the Chinese Room Argument seeks to draw is that one can’t get semantics (meaning) from purely syntactic manipulation.
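To make the purely syntactic character of the exercise concrete, here is a minimal sketch of my own (not Searle’s formulation, and the Chinese strings are placeholders invented for the example): the “rulebook” is nothing more than a mapping from input strings to output strings, and no step in the program involves knowing what any of the symbols mean. The English translations in the comments are for the reader, not for the program.

```python
# A deliberately crude illustration of the Chinese Room: the "rulebook"
# pairs incoming symbol strings with outgoing symbol strings. Like the
# English speaker in the room, the program manipulates symbols it does
# not understand. (The Chinese phrases are invented placeholders; the
# translations in the comments are for the reader only.)

RULEBOOK = {
    "你叫什么名字？": "我没有名字。",   # "What is your name?" -> "I have no name."
    "你会说中文吗？": "当然会。",       # "Do you speak Chinese?" -> "Of course."
}

def the_room(incoming: str) -> str:
    """Return whatever string the rulebook pairs with the input.

    Pure symbol matching: nothing here corresponds to understanding,
    which is exactly the point of the thought experiment.
    """
    return RULEBOOK.get(incoming, "对不起，我不明白。")  # fallback: "Sorry, I don't understand."

if __name__ == "__main__":
    print(the_room("你叫什么名字？"))
```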

In the book, there are two references to the Chinese Room Argument. The first is through the main character, Siri, who, as a result of a childhood operation that removed half his brain, has completely lost the capacity to understand emotions. Being unclouded by his own emotions, and uninfluenced by those of others, makes him ideal as an objective spectator. He is able to discern a wealth of information about people (beliefs, attitudes, thoughts, feelings, etc.) just through observation: listening to their voices, watching their movements, expressions, poses, and so on. He calls it uncovering content, or meaning, through topology. The really curious thing about this is that Siri has no idea how he deciphers these external signs; the understanding just comes to him naturally.

The second reference comes when the crew of Theseus establish contact with the alien vessel Rorschach, in a conversation conducted in English (a language the latter picked up by listening to the electromagnetic noise Earth emits). Despite the fact that Rorschach is able to answer their questions and carry on a coherent conversation, the crew of Theseus aren’t sure whether they are conversing with a sentient, intelligent being or a sophisticated algorithm that, like the English speaker in the Chinese Room, is simply manipulating symbols without any real understanding.

Discussion

The Chinese Room Argument is chiefly intended as a refutation of the claim known as “Strong AI,” which holds that a sufficiently advanced computer could be designed that would genuinely understand natural language and share other mental properties humans possess. It does this by calling into question the Turing Test: a ‘test’ envisioned by Alan Turing in which a human interlocutor and a computer carry on a conversation, and if the person isn’t able to tell that he/she is talking to a computer, we ought to attribute human-like intelligence to the computer (program).[1] The Chinese Room Argument also works from the other direction; that is, by rejecting the computational theory of mind, the theory that human minds and mental states are the result of computation or information processing and can therefore be produced in non-biological systems.

I find the Chinese Room Argument pretty convincing. As far as I understand current ‘cleverbot’-style conversation programs, they tend to operate at the level of the sentence: they search a database of human conversations for questions that match the one currently being asked, and then select one of the recorded answers. One feature of this is that you almost always get an answer that is coherent and understandable, because the program is simply reproducing an answer given by a real human. The problem is that there is clearly no genuine awareness or understanding going on, because the answers such programs generate often don’t make sense in a context that extends beyond the last question asked. Still, it’s possible to imagine that, as computers become more powerful, a more sophisticated Chinese Room-type algorithm could be created, one able to carry on a meaningful conversation simply by manipulating symbols according to a set of detailed rules. It is just as obvious that such an algorithm would have no more understanding of what it was ‘saying’ than a modern computer has when it flashes the word “Welcome” on the screen as it boots up.
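As a rough illustration of the sentence-level matching described above (my own sketch, assuming a simple word-overlap score, not any particular chatbot’s actual code), a retrieval-style responder can be approximated in a few lines. The mini-corpus is invented for the example, and note that nothing carries over from one question to the next, which is exactly why such programs lose the thread of a conversation.

```python
# A rough sketch of a retrieval-style chatbot: score each stored question
# against the incoming one by simple word overlap and return the answer a
# human once gave to the best match. The corpus is invented for the example.
# Nothing here retains context between questions.

CORPUS = [
    ("what is your favourite colour", "Blue, I suppose."),
    ("do you like music", "I listen to jazz mostly."),
    ("where do you live", "A small town you've never heard of."),
]

def words(sentence: str) -> set:
    """Lowercase the sentence and split it into punctuation-stripped words."""
    return {w.strip("?.,!'") for w in sentence.lower().split()}

def reply(question: str) -> str:
    """Return the stored answer whose question shares the most words with the input."""
    best_question, best_answer = max(CORPUS, key=lambda qa: len(words(question) & words(qa[0])))
    return best_answer

if __name__ == "__main__":
    print(reply("Do you like any music?"))   # -> "I listen to jazz mostly."
    print(reply("What music do you like?"))  # matched on word overlap alone, no understanding
```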

It is mystifying to me how AI fanatics can acknowledge that modern computers, as they manipulate symbols, execute algorithms, and in general process information, aren’t even a little bit conscious, and then, in the same breath, express the belief that if only they can induce the computer to process more information, or process that information faster, or integrate the information (in some fanciful, but as yet undetermined, manner), or build in more feedback loops, or some other variation on the computation/information-processing theme, then somehow, one day, the lights are suddenly going to turn on. This is a bold belief, emphasis on the word ‘belief,’ because there is not a shred of evidence to back it up, and there is a pretty good common-sense argument against it: namely, that if a modern-day computer processing information isn’t conscious, it’s hard to see how doing more of the same could somehow one day produce consciousness (or genuine intelligence, or understanding).

This is where one might hear the argument that such a thing (non-conscious processes somehow producing consciousness) has, in fact, already happened once in the history of our planet, namely in humans. At some point, pure information processing in some animal became complex enough, or integrated in the right way, or achieved some other point of critical mass, allowing conscious awareness to spark into life.

The problem with this claim is that, while it correctly identifies computers as pure information processors, it erroneously reduces human minds to the same. Of course, it must be true that at some fine-grained level of description, what is happening in the brain is ultimately reducible to the firing of neurons, something we can very loosely call ‘information processing,’[2] but it doesn’t follow from this that if we want to replicate conscious understanding in a computer, all we need to do is throw together a bunch of logic gates that can either be ‘on’ (firing) or ‘off’ (not firing).

While the firing of neurons is a necessary condition for consciousness, it isn’t a sufficient one. In addition to our neurons, there is a whole edifice constructed on top of them that includes: organs which secrete chemicals, a biological structure for which those chemicals feel good or bad, limbs that allow for physical interaction with an external world, the capacity for self-locomotion, an innate desire to learn and make sense of the world, the ability to communicate and participate in meaningful interactions with other conscious subjects, biological needs that must be satisfied for survival, and probably a hundred other things I haven’t thought of.

The immediate AI-enthusiast response to this will be that all of these are either irrelevant or programmable. Chemicals? A biological structure for feelings? Unnecessary. Self-locomotion? Programmable. Limbs? Buildable. But is the biology unnecessary? Are feelings unnecessary? Imagine if our distant evolutionary ancestors hadn’t actually felt dissatisfied when prey escaped from their clutches. What motivation would there have been to do better the next time? What if they hadn’t had to eat to survive? What if their survival hadn’t been threatened by a hundred factors that they had to plan for and actively try to avoid? In other words, what if external, uncontrollable situations hadn’t forced them to think, plan, and act? Would we be conscious today?

We can build artificial limbs and attach them to our robots. We can even program them to move around on their own. However, very few people are pursuing this type of physically interacting AI. The AI enthusiasts’ digital overlords aren’t Terminators; they’re intangible algorithms lurking in our IoT devices, or lines of code on the Internet that have suddenly ‘woken up.’ The reason ‘robotics’ isn’t a magnet for AI speculators is a significant, but wholly unjustified and unproven, belief that the physical is irrelevant. This takes us back to the previous paragraph: what use are limbs if there is no motivation, or need, to use them for anything? Yes, we can program a robot to move its limbs; we can even write code so that if its limbs encounter resistance, it adjusts its behaviour in some way. But no matter how sophisticated the code, nothing in this picture gives a reasonable person grounds to expect that a robot executing algorithms, exactly as a modern-day computer does, will suddenly start caring about those algorithms, or having conscious experiences.
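To be clear about how modest that kind of ‘adjustment’ really is, here is a hypothetical sketch of my own (not any real robotics API; the sensor and motor functions are stand-ins) of the sort of rule described above: if sensed resistance crosses a threshold, push a little less hard and try a slightly different angle. Every step is a numeric comparison; nothing in it corresponds to the robot wanting anything.

```python
# A hypothetical sketch (not a real robotics API) of an "adjust on
# resistance" rule. The sensor and motor functions are stand-ins; the
# whole behaviour is a handful of numeric comparisons, with nothing
# that could be called caring about the outcome.

RESISTANCE_LIMIT = 5.0  # arbitrary threshold, in arbitrary units

def read_resistance() -> float:
    """Stand-in for a force sensor on the limb."""
    return 6.2  # pretend the limb has just hit something

def move_limb(force: float, angle: float) -> None:
    """Stand-in for a motor command."""
    print(f"moving limb: force={force:.1f}, angle={angle:.1f}")

def step(force: float, angle: float):
    """One control-loop step: back off and change angle if resistance is high."""
    if read_resistance() > RESISTANCE_LIMIT:
        force *= 0.8   # push a little less hard
        angle += 5.0   # try a slightly different direction
    move_limb(force, angle)
    return force, angle

if __name__ == "__main__":
    force, angle = 10.0, 0.0
    for _ in range(3):
        force, angle = step(force, angle)
```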

Yes, consciousness has emerged from (presumably) non-conscious processes on Earth,[3] but in light of the way it has done so (i.e. biologically, and with an intimate connection to the physical), this offers absolutely no support to those trying to create a sentient computer program out of logic gates and algorithms, and, in fact, it suggests that the whole endeavour might be wishful thinking. We will certainly get narrow AI capable of outperforming humans on specific (perhaps all) tasks, but even as our algorithms are driving with fewer accidents, diagnosing diseases more accurately, and composing more moving symphonies than we can, there isn’t a single compelling reason to believe they will actually understand any of it.

A Response to the Chinese Room Argument

There have been a number of responses to the Chinese Room Argument, one of which, since it comes up in the book, I’ll briefly address here. The ‘systems reply’ argues that it isn’t a problem that the English speaker doesn’t understand Chinese because the system taken as a whole (i.e. the person, the boxes full of symbols, and the rulebook) does. Just as with an actual Chinese speaker, while no single neuron understands Chinese, the speaker, considered as a whole, does.

This response seems pretty obviously flawed. The problem is the false equivalence it draws between the system ‘person-boxes-rulebook’ and the system ‘Chinese speaker.’ The former is clearly not a conscious subject and therefore not something that can properly be said to ‘understand’ anything at all. In drawing the false equivalence, the systems reply hollows out the meaning of the word ‘understand,’ which actually refers to a mental act, or conscious state, of a conscious subject, reducing it to something like ‘capable of generating a meaningful response.’ Adopting this latter definition, though, clearly loses something essential once we realise that we are then forced to acknowledge that calculators ‘understand’ math and my smartphone ‘understands’ the act of making a phone call.

Conclusion

The Chinese Room Argument is a compelling criticism of the belief that a syntactical device, purely through the manipulation of symbols, can somehow make the leap from syntax to semantics; that is, come to actually understand what it is doing. The idea that computer programs will cross this gap by simply processing information more efficiently, or in a more ‘integrated’ fashion, or by using bits held in a quantum superposition of the states ‘on’ and ‘off,’ makes two mistakes. On the one hand, it overstates what purely mechanical processes are, essentially doing what human beings have done for millennia; that is, seeing agency in inanimate, non-conscious objects. On the other hand, it dramatically understates actual human consciousness and cognition, reducing it to a caricature that can be fully explained by the mere firing of neurons. Raising computers while simultaneously lowering humans like this makes it easier to find a theoretical convergence between the two. Unfortunately, the price we pay for this convergence is an impoverished understanding of both.


[1] This isn’t exactly the test as Turing envisioned it, but it is the way it is usually imagined these days.

[2] Technically, this isn’t information processing because information is meaningful data, and data can only be meaningful for a conscious subject; i.e. one already capable of understanding it, thereby presupposing the very thing we are trying to explain.

[3] I haven’t closed the door on panpsychism yet.
