
The Chinese Room argument attempts to prove that a computer, no matter how powerful, cannot achieve consciousness.

Brief summary:

Imagine a native English speaker who knows no Chinese locked in a room full of boxes of Chinese symbols (a data base) together with a book of instructions for manipulating the symbols (the program). Imagine that people outside the room send in other Chinese symbols which, unknown to the person in the room, are questions in Chinese (the input). And imagine that by following the instructions in the program the man in the room is able to pass out Chinese symbols which are correct answers to the questions (the output). The program enables the person in the room to pass the Turing Test for understanding Chinese but he does not understand a word of Chinese.
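To make the setup concrete, here is a toy sketch of the room as a lookup program (my own illustration, not anything Searle specifies; the Chinese entries and the fallback reply are placeholders):

    # A toy model of the room: the "rule book" is a mapping from input
    # strings to output strings. The operator applies it mechanically and
    # never needs to know what any symbol means.
    RULE_BOOK = {
        "你好吗?": "我很好，谢谢。",    # placeholder question/answer pairs;
        "你会说中文吗?": "当然会。",    # the operator never learns their meaning
    }

    def operate_room(input_symbols: str) -> str:
        # Follow the instructions; no understanding is required or produced.
        return RULE_BOOK.get(input_symbols, "请再说一遍。")  # fallback: "please say that again"

Nothing in that function ever touches meaning - the mapping is pure syntax.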

How is this any different from what goes on inside our brains?

Certain impulses are received from sensory organs and processed by neurons. This is a completely deterministic process, and to these neurons, individually, the input/output has absolutely no meaning. Individually, they possess no consciousness. Sure, it happens 10^n times simultaneously, and maybe there is some recursion involved, but the concept is the same: the origin of the input and the destination of the output are irrelevant.
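For illustration, a single neuron can be caricatured as a deterministic rule - a threshold unit in the style of McCulloch and Pitts (the weights and threshold below are arbitrary):

    # One neuron as a deterministic rule: weighted sum, then threshold.
    # The unit has no access to what its inputs or output "mean".
    def neuron(inputs, weights, threshold=1.0):
        activation = sum(x * w for x, w in zip(inputs, weights))
        return 1 if activation >= threshold else 0

    print(neuron([1, 0, 1], [0.6, 0.9, 0.5]))  # -> 1, regardless of what the spikes stand for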

The only difference I can think of is that, in the brain, the instructions/look-up tables/whatever can be modified by the process itself. The experiment makes no mention of this, because there is no need to - language syntax remains relatively constant over a short period of time. But as long as these modifications are carried out according to a set of rules, it would make no difference.
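To be explicit about what I mean by rule-governed modification, here is another toy sketch (again my own, not part of the experiment): the table rewrites itself, but only according to a fixed second-order rule, so the whole process remains mechanical:

    # The lookup table can change, but the change itself follows a fixed
    # rule - a "second-order" instruction in the same book. Placeholder
    # symbols; nothing depends on what they mean.
    rules = {"ping": "pong"}

    def respond(symbol: str) -> str:
        if symbol not in rules:
            rules[symbol] = symbol  # fixed update rule: memorize unknown symbols as echoes
        return rules[symbol]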

Am I missing some crucial part of Searle's argument?

(Inspired by this question)

J.Doe
  • See http://philosophy.stackexchange.com/questions/34358/how-can-one-refute-john-searles-syntax-is-not-semantics-argument-against-stro – Alexander S King Jun 01 '16 at 17:42
  • Also http://philosophy.stackexchange.com/questions/30091/on-the-difference-between-knowing-and-understanding – Alexander S King Jun 01 '16 at 17:43
  • If you have no personal experience of your own experience ... you get the official philosophy.stackexchange.com Zombie badge. It astonishes me that people pretend to be unaware of themselves. – user4894 Jun 01 '16 at 17:45
  • You might like this quote from Scott Aaronson: "Like many other thought experiments, the Chinese Room gets its mileage from a deceptive choice of imagery -- and more to the point, from ignoring computational complexity. We're invited to imagine someone pushing around slips of paper with zero understanding or insight. But how many slips of paper are we talking about? How big would the rule book have to be, and how quickly would you have to consult it, to carry out an intelligent Chinese conversation in anything resembling real time?... – Tim kinsella Jun 01 '16 at 18:13
  • If each page of the rule book corresponded to one neuron of (say) Debbie's brain, then probably we'd be talking about a "rule book" at least the size of the Earth, its pages searchable by a swarm of robots traveling at close to the speed of light. When you put it that way, maybe it's not so hard to imagine that this enormous Chinese-speaking entity -- this dian nao -- that we've brought into being might have something we'd be prepared to call understanding or insight." http://www.scottaaronson.com/democritus/lec4.html (a back-of-envelope version of this estimate appears after the comments) – Tim kinsella Jun 01 '16 at 18:13
  • @Timkinsella Thanks for the Aaronson quote. It makes me think how undecipherable would be a film shown (and heard) at 1/1000 speed or something. And also the Star Trek episode "Blink of an Eye". Speed as qualitative (and not just quantitative) difference... – Jeff Y Jun 01 '16 at 19:12
  • @Timkinsella Awesome quote, would upvote for dian nao alone if i had the privilege – J.Doe Jun 01 '16 at 19:26
  • Yeah Aaronson is great. I also saw this recently and thought it was cute http://smbc-comics.com/index.php?id=4124 – Tim kinsella Jun 01 '16 at 19:33
  • Hahaha thank you sir. I think that's going on the fridge – J.Doe Jun 01 '16 at 19:44
  • Maybe to put your question backwards, what evidence do you have to suggest that this is what's going on in your brain? If that's what consciousness is, it does an exceptionally good job of hiding the process from the one experiencing it. – virmaior Jun 01 '16 at 23:53
  • Correct me if I'm wrong, but isn't this also known as Searle's homunculus argument? – NationWidePants Jun 03 '16 at 11:54
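A rough back-of-envelope version of the scale estimate in the Aaronson quote above (every figure is an assumption: roughly 8.6 x 10^10 neurons in a human brain, one rule-book page per neuron, 500 pages and 5 cm of shelf per printed volume):

    # Back-of-envelope scale of the "rule book", one page per neuron.
    NEURONS = 8.6e10            # approximate neuron count of a human brain
    PAGES_PER_VOLUME = 500      # assumed
    METERS_PER_VOLUME = 0.05    # assumed shelf width per volume

    volumes = NEURONS / PAGES_PER_VOLUME
    shelf_km = volumes * METERS_PER_VOLUME / 1000
    print(f"{volumes:.2e} volumes, about {shelf_km:,.0f} km of shelf")
    # -> 1.72e+08 volumes, about 8,600 km of shelf - on the order of the
    #    Earth's radius, before anything is indexed for real-time lookup.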

4 Answers


I would like to suggest that your puzzlement arises from confusing intelligence with consciousness. Neither concept is well defined, but they are nonetheless distinct. Searle would say that a Chinese room cannot be conscious, not that it cannot appear intelligent. In fact, the original argument revolves around the concept of understanding, which is yet another blurred concept with no clear definition. Searle is a philosopher who believes that the mind cannot be expressed in terms of computations - in other words, that a computer may never have a mind, regardless of its architecture and the particular computation it runs. He does not rule out that machines in general may have minds, only that mechanisms (a subset of machines) may never amount to a mind. You can still disagree with him (as most people do), but to do that it is important first to understand him (pun intended).

nir
  • The person in the room is following an algorithm, which means that each step is simple and he does not "devise his own response". – nir Jun 02 '16 at 14:00
  • Right, sorry, just found that on the wiki page. – Tim kinsella Jun 02 '16 at 14:05
  • So do you feel like it's a problem for the experiment that the "algorithm", written in English, on paper, might require a filing cabinet the size of a planet? And that the subject in the room might die of old age before he or she completed a single exchange? – Tim kinsella Jun 02 '16 at 14:13
  • It is not a problem, since all that is required is the capacity in principle, not in practice. Functionalists like Dennett believe that once the computation is complex enough, nothing will be missing. Searle believes that the complexity of the computation is irrelevant (me too). Searle used the thought experiment to argue that computation is what he calls observer relative. But other than that it is not very different from Leibniz's mill - http://home.datacomm.ch/kerguelen/monadology/printable.html#17 – nir Jun 02 '16 at 14:22
  • Thanks again. One would think, though, that if it were possible "in principle", there would be a thought experiment which delivers the result that is purported to be possible. The luxury of a thought experiment is that the dreamer is allowed to dispense with all practical limitations. If you do that and you still come up short, maybe you did so by reason of some underlying "principle". This is what's always seemed to me slightly dishonest about the Chinese room. – Tim kinsella Jun 02 '16 at 14:33
  • I don't understand what you mean – nir Jun 02 '16 at 14:34
  • The whole thing or a particular sentence? By "come up short" I mean fail to deliver a real time conversation in Chinese. – Tim kinsella Jun 02 '16 at 14:36
  • And by "the result which is purported to be possible" I mean the result that a real time conversation can be had with an entity which clearly does not understand the content of the conversation. And by dreamer I of course mean the experimenter. In this case, Searle. – Tim kinsella Jun 02 '16 at 14:46
  • John Searle was pretty explicit in his lectures and subsequent discussions that the Chinese Room wasn't about consciousness but about meaning. – Alexander S King Jun 02 '16 at 17:14
  • @AlexanderSKing, I did not write that Searle said the Chinese Room was about consciousness but that he would say (if asked) that it cannot be conscious. Anyway, he does say it explicitly in Minds Brains and Science: "The reason that no computer program can ever be a mind is simply that a computer program is only syntactical, and minds are more than syntactical. Minds are semantical, in the sense that they have more than a formal structure, they have a content. To illustrate this point I have designed a certain thought experiment." and he goes on to describe the Chinese Room. – nir Jun 02 '16 at 19:21
  • @Timkinsella, I think the thought experiment is purposely phantasmagorical. It is clearly impossible for the person using the rule-book to produce answers in a timely manner, and yet that point is irrelevant. For what does the timescale of the scene have to do with the principle? – nir Jun 02 '16 at 19:30
  • For a couple of reasons: 1. I think it's slightly dishonest to present a thought experiment, and then to dismiss certain considerations inside the totally unfettered universe of the thought experiment as mere "practical, therefore irrelevant" limitations. The whole point of a thought experiment is to isolate practical from theoretical limitations. If your thought experiment fails irremediably to give a certain result, then it does so for theoretical reasons, by the very definition of a thought experiment. – Tim kinsella Jun 02 '16 at 19:48
  • The Chinese room rests on an appeal to intuition based on specious imagery; it's what Dennett calls an "intuition pump." It's not just about time, but also scale. We're told to imagine a single person sitting alone in a room with a stack of papers. Once we're honest about the scale of the entity in the room - a huge team of robots swarming around a planet-sized filing cabinet at the speed of light, as in the Scott Aaronson quote - it loses its intuitive punch. – Tim kinsella Jun 02 '16 at 19:53
  • Also the "timeliness" of computation is not just a practical consideration. Bounds on the amount of time it takes to compute certain functions express something deep about the universe and epistemology. If P=NP, then the truth or falsehood of any mathematical proposition would be knowable with the click of a button, for instance, despite the fact that P=NP is just a (probably false) statement about how long it takes to find out whether a graph is connected (or whatever, I can't remember any short NP-complete problems). – Tim kinsella Jun 02 '16 at 20:05
  • I should have said "provability or refutability" instead of "truth or falsehood". (Gödel) – Tim kinsella Jun 02 '16 at 20:16
  • @Timkinsella, Dennett and Searle fundamentally disagree. Dennett believes that the Chinese room can be conscious and Searle believes that it cannot. But their difference of opinions does not hinge on the details of the intuition pump. To the extent they are fighting about its details, it is just inconsequential skirmishes. BTW, what do you think? Can a computer be conscious in principle? What do you think about Leibniz's mill? http://home.datacomm.ch/kerguelen/monadology/printable.html#17 – nir Jun 03 '16 at 06:25
  • @Timkinsella, also as a note. It is not clear to me why timescale matters. Imagine that you do not simulate a brain that interacts with the "real" world but an entire room with a person in it sitting on a couch, reading a book and listening to music. Now what does it matter if each simulated second of that room takes one second or one billion years of our time? What does it matter if you simulate it using the combined computing resources on earth, or by moving around rocks on an infinite stretch of sand? https://xkcd.com/505/ – nir Jun 03 '16 at 06:31
  • @nir I didn't make any claims about Dennett's writings on the Chinese room except to say that he calls it an intuition pump. And I don't think I mischaracterized Dennett's definition of that phrase. – Tim kinsella Jun 03 '16 at 19:15
  • @nir with respect to your second comment, I can only say that IMO the persuasiveness of the experiment depends entirely on an appeal to our intuitions about a room containing a human who is working with pencil and paper. After all, what else distinguishes this particular challenge to strong AI from the more straightforward argument that a Chinese-speaking Turing machine could not understand Chinese since it consists only of a tape-head reading ones and zeros? – Tim kinsella Jun 03 '16 at 19:34
  • I.e. If you don't think the imagery or scale of the Chinese room is relevant, why not replace the human with an even more oblivious tape-head, and let the machine run at full speed? Then the thought experiment has no more content or novelty than our intuitions that an AI can't actually understand anything because it's just a hunk of circuitry. – Tim kinsella Jun 03 '16 at 19:34
  • And that's why it matters, IMO, whether you postulate some kind of huge time dilation that makes a billion years inside the room equivalent to a second outside it. Once you do that, all intuition goes out the window. Leaving aside the question of whether and how much information exchange between the two regions the laws of physics actually permit, once you postulate something like that any appeal to intuition becomes absurd. – Tim kinsella Jun 03 '16 at 19:46
  • Humans have no intuition for the kinds of absurd things that can occur over time scales that large. To take one example, it took only a few billion years for our brains to spontaneously assemble themselves from some raw materials sloshing around randomly in a primordial ocean. – Tim kinsella Jun 03 '16 at 19:47
  • Also, sorry, I haven't gotten to Leibniz's mill yet, but I will shortly. Thanks for the link :) – Tim kinsella Jun 03 '16 at 19:55
  • @Timkinsella, I would like to know what your personal opinion is. Do you believe that a computing system may in principle be conscious in the fullest sense, as Dennett believes? May Leibniz's mill of moving (wooden?) cogwheels be conscious? – nir Jun 03 '16 at 20:00
  • @nir Yes, I'm inclined to believe that if you made a neuron-for-neuron isomorphic copy of my brain using transistors (I think those are the right analogue? But I don't know much about electronics), then it would be as conscious as I am. – Tim kinsella Jun 03 '16 at 20:03
  • @Timkinsella, another related question: when you look at the world around you, do you concede that it is entirely in your head, like a dream is? A neurologist once put it as "Life is nothing but a dream guided by the senses". The opposite belief, that we perceive the external world directly as it is, is called naive realism. The classic example is that of color. Are you aware that color is a phenomenon in your mind rather than a property of the objects you look at? – nir Jun 03 '16 at 20:11
  • @Timkinsella, the reason I ask is that naive realists do not think that a theory of mind needs to account for that inner "virtual reality". – nir Jun 03 '16 at 20:12
  • Interesting. I think the brain creates some kind of model of the external world, a sort of messy homomorphism created from sensory data. So I guess that's a sort of middle ground between those two positions. – Tim kinsella Jun 03 '16 at 20:16
  • @Timkinsella, I don't understand what you mean. take for example the white of the screen in front of you; do you think that the white color that you now experience is a thing in your mind or a thing in the external world? – nir Jun 03 '16 at 20:19
  • So if you can't tell, I don't have any training in philosophy, let alone phenomenology, so we might be talking past each other. But I'll give you this much: when I look at something white, I certainly have an intuition that there exists something called "the feeling of white", which is hard to pin down. However, I don't think there's much reason to take our intuitions about our sensations too seriously when trying to sort out what's going on with minds and brains. Maybe when writing sonnets, but not if you really want to understand cognition and sensation. – Tim kinsella Jun 03 '16 at 20:30
  • But I don't mean to be dismissive. I realize these are deep questions. I just don't place much stock in humans' intuitions about what's happening in their own heads. – Tim kinsella Jun 03 '16 at 20:39