
I thought an important feature of the Turing test was that the situation is exactly the same for both contestants, human and computer. The interrogator communicates with each using a teleprinter. In his 1950 paper, when discussing the interrogator communicating with player A in the imitation game, Turing writes: "The ideal arrangement is to have a teleprinter communicating between the two rooms [interrogator's and player A's]", and then in the next paragraph: "We now ask the question, 'What will happen when a machine takes the part of A in this game?'".

So there's one teleprinter in the human's room and another in the computer's room, and the interrogator types the questions on their teleprinter and gets printed responses back from the contestants. Everything is equal except one contestant is a human and the other a machine.

But the computing machine has no sensory apparatus. It can't see the questions printed by the teleprinter in the computer's room. If it can't see the questions then it can't understand them. In fact the computer must be wired directly into the interrogator's teleprinter, and the computer gets voltages - not words. The computer might have its causality defined by a human programmer (by programming the computer) such that the computer sends voltages back to the interrogator's teleprinter and words are then printed by it, but still, the computer gets voltages, not words.
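(As an illustration of the point, here is a minimal sketch in Python of what the machine's side of the line looks like. ASCII codes are assumed here purely for convenience, rather than the 5-bit Baudot-style codes actual teleprinters used.)

```python
# Sketch: the interrogator sees printed words, but the wired-in machine only
# receives a train of signal codes (stand-ins for voltage patterns on the line).

question = "Are you a machine?"

# What the interrogator types and reads back: words.
print(question)

# What the machine "gets": numbers, not words (ASCII is used here for
# illustration; real teleprinters used 5-bit codes).
pulse_codes = [ord(ch) for ch in question]
print(pulse_codes)

# To reply, the machine emits codes that the teleprinter renders back into
# marks on paper; it never handles the printed words themselves.
reply_codes = [ord(ch) for ch in "No."]
print("".join(chr(code) for code in reply_codes))
```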

Since the causality of the computer is defined by the human programmer, doesn't that mean that the Turing test, as Turing describes it, actually tests the intelligence of two humans, the human contestant and the computer programmer?

Roddus
  • "But the computing machine has no sensory apparatus." - create a robot with cameras, image from which is handled by the AI. Well, in order to [artificially] create a great intelligence ones the creators themselves should be very clever. – rus9384 Aug 01 '18 at 00:12
  • Yes, a robot with human-like sensory apparatus should be the computer contestant, but there is still the question of to what extent the behaviour of the robot is dictated by the human programmer. Even with a robot whose causation is defined or largely defined by a human, the TT is still testing the intelligence of two humans, isn't it? – Roddus Aug 01 '18 at 00:30
  • It's hard to say if it's simpler, harder or exactly as difficult to create an intelligence as good (or bad) as one's own. But if the third variant is false, the test will be unfair in comparing the intelligence of the creator and the contestant. – rus9384 Aug 01 '18 at 00:35
  • You have my vote. The Turing test tests the ability of programmers to pass it. If the programmer cannot pass it then they are not going to be able to build a machine that does. Suppose as the human I were to ask 'What makes you angry?'. Nothing would, obviously, so to pass the test the machine would have to be programmed not to answer questions as an honest human being would. I suspect that it's generally agreed these days that it is not an effective test of anything more than the programmer's skill at deception, but I may have just stumbled on a few unrepresentative articles. –  Aug 01 '18 at 11:46
  • Empirically, it's easy to write a program with unexpected behavior. It's also possible to write a machine learning algorithm (like an artificial neural net) that mere humans can't figure out, because the knowledge is expressed as a collection of numbers bearing no obvious relationship to what the machine is doing. It's possible to write a program with unexpected and highly useful behavior, such as template metaprogramming in the C++ language. – David Thornley Aug 01 '18 at 15:02
  • @PeterJ: All the alleged Turing tests I've read about have been cases of people not being able to tell if something is a computer or a human, often with forewarning that the "human" has certain restrictions. Turing intended a session with a tester, a human, and a computer. Whether success in this case is deception or the creation of a real mind is far too large a question for a comment. – David Thornley Aug 01 '18 at 15:05
  • @DavidThornley, human-like behavior can hardly be described as unexpected. – rus9384 Aug 01 '18 at 18:03
  • @rus9384, human-like behavior can indeed be unexpected. I wouldn't expect it out of a mailbox, for example. In this case, I mean that the programmer(s) might have expected some behavior, but got better than they expected. It is possible to write a program, such as a neural net, that will get results the programmer(s) will not understand. – David Thornley Aug 06 '18 at 16:49
  • @DavidThornley, well, if NNs become that well developed, people will probably upload their minds into them. – rus9384 Aug 06 '18 at 17:22

5 Answers


The Turing Test is perhaps best understood as a thought experiment aimed at answering the question "if something purely mechanical could display all the perceptible signs of consciousness/intelligence, would there be any valid reason to deny it possessed those qualities?" Or, to put it perhaps more correctly, "is there any meaningful definition of intelligence other than 'able to display the empirical signs of intelligence?'"

Turing's own answer is "no." Who constructs the machine, and the details of how the machine communicates with the world, are peripheral to Turing's aim, which, beyond the immediate question above, is to demonstrate that human intelligence itself admits a purely mechanical explanation; it doesn't require any mystical or supernatural soul to animate it. Turing isn't primarily concerned with the competitive aspect of the Test; it's merely a vehicle for this idea.

The Turing Test is most easily understood in the larger context of the 20th-century British and American philosophical push towards redefining all concepts solely in terms of their empirical traces. There are many people who reject this, for a variety of reasons. Most criticisms of the Turing Test, including your own, are perhaps best understood as disagreements with Turing's (still controversial) fundamental assumptions, since any practical quibbles about the implementation of his test are largely irrelevant to his larger point. He did anticipate some of these disagreements and formulated replies; you may find those of interest.

Chris Sunami
  • You say Turing seeks to show “human intelligence itself admits a purely mechanical explanation”. Do you mean behavioral explanation? Intelligence as internal process/structure (the common concept) might still be mechanical with no implication of behavior. My problem with intelligence-as-behavior is it fully fails to explain the inner processes/structures that yield human-like general intelligence, and without that knowledge, how could AI create genuine machine intelligence? Doesn't accepting any process/structure that yields some intelligent behavior just avoid this problem? – Roddus Aug 03 '18 at 00:02
  • I'm not defending Turing's position, just trying to explain it. He believed that intelligence itself would eventually be shown to be an artifact of a Turing Machine. Towards that end, he sought a redefinition of intelligence wholly in terms of its outward signs. Your objections are rejections of Turing's position; they cannot be reconciled with it. – Chris Sunami Aug 03 '18 at 13:17
  • Yes, Turing wasn't trying to explain intelligence as commonly conceived (i.e., something internal) but to redefine the meaning of the term "intelligence". He says as much in his 1950 "prediction": "Nevertheless I believe that at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted". Which isn't a prediction but a redefinition. How can redefining "intelligence" as behaviour explain how to make a machine intelligent? Doesn't the TT let AI dodge this? – Roddus Aug 03 '18 at 22:56
  • It's a basic philosophical disagreement. For someone like Turing, the talk of something "internal" is incoherent anyway. Just as new definitions of logical operators cleared away centuries of fuzzy thinking, and paved the way for a science of logic, he thought he could do something similar for the concept of "intelligence" by redefining it. – Chris Sunami Aug 06 '18 at 13:39
  • @Roddus: How could you observe intelligence in others except by observing behavior? General educated opinion can and has changed dramatically over the centuries, and word use does also. Turing may have been predicting that the definition of "thinking" would be more exact by 2001. – David Thornley Aug 06 '18 at 16:53
  • @David Thornley We judge intelligence in others by their behavior. But what causes the behavior? If we knew the principles of human intelligence, including perception, we could try to realize them in a silicon-based system. But we don't know the principles. Human intelligence can define the causation of (i.e., program) computers so they behave intelligently in limited domains. But I think we really need to discover the principles of basic functions like learning from experience and generalization. E.g., no AI system can generalize like a human – not even close. We don't know the principles. – Roddus Aug 10 '18 at 09:15
  • @David Thornley In suggesting that the meaning of the term "thinking" will eventually be behavioral, I think Turing is dodging the main issue. We don't want to redefine words. We want to understand what happens inside humans when they think. What are the processes? What are the structures? So to me, Turing's proposed redefinition of "thinking" is, to some extent, smoke and mirrors that distract from the most important question: what is (internal) thinking? – Roddus Aug 10 '18 at 09:15
  • @Roddus Not everyone finds Turing compelling, and for those who don't, your line of argument is entirely typical. This is a central and live debate in the fields of biology, psychology, neuroscience and computer science as relates to the brain, the mind and the intellect. // With that said, I'm not sure there's much to be gained by simply repeating what -- to be honest -- were already the main objections to Turing at the time he first made his argument. – Chris Sunami Aug 10 '18 at 14:16
  • @Chris Sunami I agree this is old ground. My arguments that the intelligence is in the programmer, and the internal principles are what's most important, are hopelessly typical. But I'm looking more at Searle's uncritical acceptance of (a) computers are Turing machines, (b) computers process symbols (interpretable shapes), and (c) TMs process symbols. (b) is a premiss of the CRA, but is false. (c) is true, but taking (c) and (a) together disguises the falsity of (b). To me, it's a great idea to abandon the idea that computers process symbols. But if (b) is false, do computers compute? – Roddus Aug 12 '18 at 11:35
  • @Roddus - You have to accept some of your opponent's premises, or you're not debating them, you're just disagreeing. I think Searle tried to accept as many of Turing's assumptions as he could, in order to highlight what he thought were the most essential failings of the Turing argument. – Chris Sunami Aug 13 '18 at 14:21
  • @Chris Sunami I think you need to understand your opponent's premises, but maybe you could argue all of them are false? To me, Searle's CRA is a mixture of true and false premises: That symbols in themselves are semantically vacant: TRUE. That computers process only symbols: FALSE. That computers are Turing machines (what Turing called "logical computing machines" ('Intelligent Machinery', 1951)): FALSE. – Roddus Aug 14 '18 at 09:41
  • @Chris Sunami It would be great to debate these premises. I'd start by arguing that symbols are tokenised shapes that have meanings. "Have" here not meaning contains or physically possesses. A shape gets a meaning by a cognate observer assigning a meaning to it (by, say, learning a language). A human perceives the shape (which activates an internal neural representation (for want of a better word) of the shape), then (by, say, the learning process) this is connected to another neural structure – the meaning. So meanings are distinct from the shapes in the clearest way: internal v. external. – Roddus Aug 14 '18 at 09:41
  • @Chris Sunami That computers process only symbols. Searle: "a computer is a device that by definition manipulates formal symbols" (Mystery of Consciousness, p 9). A formal symbol is one I can identify by its shape alone (ibid, p 14). But do clocked voltage levels have shapes? It's not clear the idea of shape makes sense for clocked voltage levels. Then, has any human perceived and assigned a meaning to clocked voltage levels? No. Humans can't perceive them (lack the sensory apparatus). So computers don't process symbols (it's concluded). – Roddus Aug 14 '18 at 09:42
  • @Chris Sunami What about taking a more abstract view of what computers process? Searle: "digital computers insofar as they are computers have, by definition, a syntax alone" (Minds, Brains and Science, p 34). (interesting that Searle almost suggests computers might non-compute). Syntactic means reacts to a formal property (e.g., shape) of what is processed. But what about reacting to relations between symbols? Maybe this is syntactic too? But maybe not. Maybe reaction to symbols plus reaction to relations between them can build a semantics. These ideas I'm really keen to discuss. – Roddus Aug 14 '18 at 09:42
  • @Roddus https://philosophy.stackexchange.com/questions/50200/what-is-the-term-for-the-fallacy-strategy-of-ignoring-logical-reasoning-intended/50205#50205 – Chris Sunami Aug 14 '18 at 14:25

But the computing machine has no sensory apparatus. It can't see the questions printed by the teleprinter in the computer's room. If it can't see the questions then it can't understand them. In fact the computer must be wired directly into the interrogator's teleprinter, and the computer gets voltages - not words. The computer might have its causality defined by a human programmer (by programming the computer) such that the computer sends voltages back to the interrogator's teleprinter and words are then printed by it, but still, the computer gets voltages, not words.

Your thoughts are instantiated in electrical activity in your brain. So we know that a physical system that uses electricity can instantiate thoughts.

Now your brain receives electrical signals from your sense organs, does stuff to those signals, and sends other electrical signals to your muscles telling them what to do. So your brain receives signals, processes the information in those signals and sends out other signals. Your understanding of the world is a pattern of information processing.

The Turing machine is a universal computer - it can compute anything that can be computed by any other physical system and can simulate any other physical system to any desired level of accuracy. Your desktop computer can do the same operations as a Turing machine so it can also simulate any physical system, including your brain. So a computer that is programmed the right way and receives information similar to the information you receive can think in a similar way. And it won't just reproduce the appearance of doing the same thing, it can also simulate all the internal processes leading up to whatever thoughts you come up with. So it will think in the same way a human being thinks. We don't currently know how to write such a program, but the laws of physics say that it can be written.
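(To make the claim that a desktop computer "can do the same operations as a Turing machine" concrete, here is a minimal illustrative sketch, my own rather than anything from the cited books: a short Python program steps through a Turing-machine transition table. The example machine simply inverts a string of bits and halts.)

```python
# Minimal sketch of a Turing machine being simulated on an ordinary computer.
# The example machine flips every bit on its tape and then halts; it only
# ever moves right, so the head never falls off the left end of the tape.

def run_tm(tape, transitions, state="start", blank="_", max_steps=1000):
    tape = list(tape)
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        # Read the symbol under the head (blank if we are past the tape end).
        symbol = tape[head] if 0 <= head < len(tape) else blank
        write, move, state = transitions[(state, symbol)]
        if head == len(tape):
            tape.append(blank)      # extend the tape on demand
        tape[head] = write
        head += 1 if move == "R" else -1
    return "".join(tape).rstrip(blank)

# Transition table: (state, read symbol) -> (write symbol, move, next state)
flip_bits = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}

print(run_tm("10110", flip_bits))  # prints 01001
```

Any other transition table can be dropped in; the point is only that an ordinary computer can execute a Turing machine's rules step by step, given enough memory.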

See "Godel,Echer,Bach: An Eternal Golden Braid" by Hofstadter, "The Fabric of Reality" by David Deutsch chapter 5, and "The Beginning of Infinity" by Deutsch, chapters 5-7.

alanf
  • A TM is a model much more powerful than any physical system, due to its unlimited memory. – rus9384 Aug 01 '18 at 09:14
  • @rus9384 If a physical system runs out of memory you can add more. There is no known upper bound to how much memory you can add. – alanf Aug 01 '18 at 11:13
  • But you must add it, unlike with a TM. In fact a real computer is a two-way finite automaton. – rus9384 Aug 01 '18 at 12:44
  • No. Since memory can be added, a real computer has the same repertoire as a Turing machine. – alanf Aug 01 '18 at 15:51
  • @alanf There are only 10^80 hydrogen atoms in the known universe. If you turn them all into memory chips you'll still run out. "No known upper bound?" What are you talking about? Where are you going to get an endless supply of stuff to make chips out of? – user4894 Aug 01 '18 at 20:58
  • @user4894 The relevant issue is how much stuff the laws of physics will allow us to use. We're not close to understanding enough about the laws of physics to settle that issue. We don't have a good understanding of cosmology and so can't say exactly what resources we will have access to. There are cosmologies that allow indefinitely large amount of computation https://arxiv.org/abs/gr-qc/0302076. There is dispute about what dark matter and dark energy are made out of and whether they exist. Nor do we know whether it will be possible to make new universes: https://arxiv.org/abs/1801.04539 – alanf Aug 02 '18 at 07:39
  • @alanf You're just waving your hands. You have no evidence you can build an infinite TM in the physical world. What you claim is contrary to known science. – user4894 Aug 02 '18 at 09:12
  • @user4894 My guess is that supplying a computer with an indefinite supply of information storage media is possible. This guess may be right or it may be wrong. My position has the merit of not requiring a replacement of the existing theory of computation, unlike your position. There are some unsettled issues that are relevant to the truth of my guess. There are unrefuted theories consistent with the truth of my guess. There are unrefuted theories consistent with the truth of your guess too. – alanf Aug 02 '18 at 13:17
  • @alanf My position IS the existing theory of computation. What you earlier presented as fact you now refer to as a "guess." You just conceded my point. Go learn some science. – user4894 Aug 02 '18 at 16:12
  • Whether or not it is possible to add memory indefinitely is another matter. There are weaker TMs which have their memory extended when the input becomes longer. But they are weaker. – rus9384 Aug 02 '18 at 17:28
  • @user4894 I'm probably going to be called a complete idiot for saying this, but... computers aren't Turing machines. It might be convenient to think computers are Turing machines when human use is at issue. But it's a really bad idea when AI is the issue. Turing machines process symbols but computers don't. Because of the semantics of symbols, to think a computer is a Turing machine makes it almost impossible to think clearly about how a computer might be intelligent in its own right. (Searle makes the error, saying his Chinese room – which processes symbols – is a computer, but it's not.) – Roddus Aug 03 '18 at 01:38
  • @user4894 I referred to both of our positions as guesses because they both are guesses. Science consists of guesses controlled by criticism. – alanf Aug 03 '18 at 07:20
  • @rus9384 Computers to which memory can be added indefinitely have the same repertoire as a Turing machine: http://rspa.royalsocietypublishing.org/content/425/1868/73.short – alanf Aug 03 '18 at 07:23
  • @alanf Of course. But those are not physical computers. – user4894 Aug 03 '18 at 14:40
  • @Roddus: Turing machines are mathematical models of computers, useful because they can model anything reasonably described as computation and are simple enough to prove things on. Turing machines process symbols in exactly the same way computers do: otherwise meaningless configurations that can be assigned meaning. A Turing machine can model any sort of computer, and a computer can model a finite version of a Turing machine. (And why doesn't the Chinese Room count as a computer?) – David Thornley Aug 07 '18 at 18:20
  • @David Thornley Why isn't the Chinese room a computer? 1. The Chinese room processes symbols that have meanings. But the meanings don't come with the symbols (the man in the room processes Chinese ideograms, but he knows no Chinese (the meanings are not inside the man either) so he can't understand the ideograms). The man is forever a prisoner in a world of syntax – intrinsically meaningless shapes. Since nothing else in the room could conceivably understand Chinese, the room will never understand Chinese. And anyway, all the room gets is symbols, and symbols are semantically vacant. – Roddus Aug 08 '18 at 08:16
  • @David Thornley Why isn't the Chinese room a computer? 2. Humans use eyes to sense words and can understand them, but lack the sensory apparatus to sense clocked voltage levels, semiconductor switch states etc., so can't understand what computers process. Can computers themselves understand what they process? No, since computers also lack the sensory apparatus to sense clocked voltage levels, semiconductor states, etc. And why should they? We don't have the sensory apparatus to detect what our brains process (neural pulses, etc.). – Roddus Aug 08 '18 at 08:17
  • @David Thornley Why isn't the Chinese room a computer? 3. Also, if the computer situation mirrors the Chinese room situation, then clocked voltage levels exist outside the computer in the environment and inside the equivalent of books, and someone (or thing) has given these external clocked voltage levels meanings. But this doesn't seem even remotely plausible. – Roddus Aug 08 '18 at 08:17
  • @David Thornley That Turing machines are mathematical models. The problem I have with using Turing machines to think about the mind is that TMs process symbols (linguistic descriptions, including abbreviations, that Turing reformats into "Standard Descriptions", or S.D's). I think we need to completely forget about the idea that computers process symbols. The symbol-semantic issue causes so much trouble. Computers don't process symbols. The things they process don't have meanings. They have no semantics. The symbol-processing idea is just a giant red herring, as far as I can see. – Roddus Aug 08 '18 at 08:35

Here is the question:

Since the causality of the computer is defined by the human programmer, doesn't that mean that the Turing test, as Turing describes it, actually tests the intelligence of two humans, the human contestant and the computer programmer?

The OP also mentioned the teleprinter that takes information as input from one side of the Turing test, processes it, and delivers information to the other side.

Note that the teleprinter and the computer set up for the Turing test are very similar: both take in information, process it, and output information.

The two humans, the contestant and the programmer, are also similar with respect to understanding. Regardless of whether the teleprinter or the computer set up for the Turing test understands anything when it processes information, there is no doubt that these humans do understand language.

There are at least three reasons to remain hesitant about claiming that machines understand just as humans do.

First, John Searle, in "Minds, Brains and Programs" (the paper where he presented his Chinese Room Argument, reprinted in Mind Design, pages 291-2), mentioned:

If strong AI is to be a branch of psychology, it must be able to distinguish systems which are genuinely mental from those which are not. It must be able to distinguish the principles on which the mind works from those on which nonmental systems work; otherwise it will offer us no explanations of what is specifically mental about the mental.

Second, Searle mentions in the same article (page 303) that what the computer, or the teleprinter, does when it "processes information" implies that it has "a syntax but no semantics":

Thus if you type into the computer "2 plus 2 equals?" it will type out "4." But it has no idea that "4" means 4 or that it means anything at all. And the point is not that it lacks some second-order information about the interpretation of its first-order symbols, but rather that its first-order symbols don't have any interpretations as far as the computer is concerned.
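(As a toy illustration of the purely formal symbol manipulation Searle describes, here is a sketch of my own, not Searle's: a lookup table that maps the question string to the answer string without anything in the program standing for the quantity four.)

```python
# Toy illustration of symbol shuffling with no semantics: the program maps
# one string of characters to another by lookup alone; nothing in it
# represents the quantity four.

rules = {
    "2 plus 2 equals?": "4",
    "How are you?": "Fine, thanks.",
}

def reply(prompt):
    # Purely formal match on the shape of the input string.
    return rules.get(prompt, "I don't know.")

print(reply("2 plus 2 equals?"))  # prints 4
```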

Third, there is the fallacy of anthropomorphism. Bradley Dowden describes this fallacy as:

This is the error of projecting uniquely human qualities onto something that isn't human. Usually this occurs with projecting the human qualities onto animals, but when it is done to nonliving things, as in calling the storm cruel, the Pathetic Fallacy is created.

Claiming that the computer understands as humans do because it processed information could be viewed as an example of the fallacy of anthropomorphism, or more specifically, the pathetic fallacy.


References

Bradley Dowden, "Fallacies", Internet Encyclopedia of Philosophy.

John R. Searle, "Minds, Brains and Programs", reprinted in Haugeland, J. (1981). Mind Design: Philosophy, Psychology, Artificial Intelligence (Montgomery, VT: Bradford Books).

Frank Hubeny
  • Searle makes an awful lot of assumptions, in addition to his constant begging the question in "Minds, Brains, and Programs". Uniquely human qualities? Exactly what's unique to humans? It isn't necessarily intelligence. – David Thornley Aug 02 '18 at 19:57
  • Bradley Dowden used the phrase "uniquely human qualities" in the last passage I quoted, not Searle. He was defining the fallacy of anthropomorphism. There are a lot of assumptions in strong AI. Here are two: (1) processing information makes something conscious, and (2) processing information is what humans do in their brains to make them conscious. Both of these need to be justified. In particular, whatever models of information processing are offered for consciousness need to be biologically plausible in terms of how neurons actually behave in humans. @DavidThornley – Frank Hubeny Aug 02 '18 at 20:10
  • @FrankHubeny: Something makes people intelligent and conscious, and we don't know that it isn't possible on a computer. Unless you're an old-fashioned dualist, you must acknowledge that the brain is a physical device. Given that, it's conceivable that another type of physical device could do the same thing. Unless we know what consciousness is well enough to determine the mechanism(s), we can't say it's a "uniquely human quality". (Indeed, some animals do show intelligence, although we have no test for consciousness, so intelligence isn't uniquely human.) – David Thornley Aug 06 '18 at 17:00
  • We don't know that it is possible on a computer either. Having a link between what a computer does and what the human brains do is critical to claiming that a computer could be conscious by its "processing information". One of the problems is "biological plausibility". See Seanny123's question/answer on Psychology and Neuroscience SE for references on this issue: https://psychology.stackexchange.com/q/16269/19440 In general there is less problem with animals that have brains than there is with a computer. @DavidThornley – Frank Hubeny Aug 06 '18 at 18:10
  • @FrankHubeny, Searle is attempting to show that Turing machines and computers can't be conscious, and his reasoning is not sufficient to show that. Personally, I think it is possible to produce conscious computers that think and understand things, but here I'm just concerned with opposing Searle and noting that "uniquely human qualities" is very ill-defined. – David Thornley Aug 09 '18 at 17:24
  • Again, "uniquely human qualities" was a phrase used by Dowden. I don't know if Searle used it. Dowden makes no claims about AI to my knowledge, just anthropomorphism as an informal fallacy. I think Searle has successfully shown that Turing machines cannot be conscious. That is why I recommend that people be hesitant to accept any claim or assertion that they can be conscious. @DavidThornley – Frank Hubeny Aug 09 '18 at 18:52

You seem to be making two arguments here. Let me rephrase:

  1. A computer cannot see the words. It just gets voltages. Therefore, it cannot possibly understand the questions.

Response: Getting voltages is getting a kind of input. In fact, by your logic, you could argue we humans aren't seeing words either: we're just getting hit by light waves. But of course we are seeing words. And a computer is perceiving words as well ... just through a different sensory medium.

  2. It's the programmer that created the program. Therefore, any intelligence we attribute to the program when doing the Turing Test should really be attributed to the programmer, not the program.

Response: Why would it matter how the program was created? You and I were created by our parents ... should they get the credit for our abilities rather than us? If I build a fast car, does that mean that the car isn't fast, because I built it? Of course that doesn't follow. Yes, I built it ... but it is also true that the car is fast. Likewise, if I create a computer program that is able to solve problems, make decisions, do reasoning, etc. ... should the fact that I created it mean that the program is in fact not doing any of those things? No. Of course, the question is whether I can create a computer program that has all these cognitive and mental abilities, but if I can, the fact that I did it does not take away from its abilities.

Bram28
  • Yes, maybe the interrogator's teleprinter is a sense organ of the computer. And "Getting voltages is getting a kind of input", which input to the computer is the output of the teleprinter. But in this case what is being sensed? Taking the keys to be in a keyboard, this sense detects press-release events at different locations within the keyboard, not words. – Roddus Aug 03 '18 at 02:13
  • So the question is, does the program inherit the intelligence of the programmer? And if so, is the inherited intelligence about the same thing as the programmer's intelligence? Say I program the machine to print "fine thanks" in response to a human typing "how are you?". By coding this program (of the form: if input = "A" then output = "B") does the computer inherit my knowledge of the meanings of the words? – Roddus Aug 03 '18 at 02:13
  • The "question is whether I can create a computer program that has all these cognitive and mental abilities". I agree that if you could do this, then the computer would be genuinely intelligent (to the extent of those abilities). Though the issue seems more about data structure than program. The program of intelligence seems intractably complex and coding it, impracticable. But what if the complexity of intelligence is in structure, and the program, very simple? Maybe an adequate structure could be derived from the world via sensory detection, not from human design? – Roddus Aug 03 '18 at 02:14