
Alan Turing bases his famous test for human-like machine intelligence on a party game between a man and a woman. Each communicates with a hidden judge by teleprinter (text alone). Nowadays, consoles comprising a monitor and keyboard could be used. The machine simply replaces the man and continues playing the game. So the machine is a robot: it sees the judge's text displayed on the monitor, understands the meanings of the shapes of the text, then types out an answer, just as the man did.

If the machine has human-like intelligence, it will understand the judge's questions (assuming it knows the language). So it will understand the meanings of the shapes displayed on the monitor. And from this understanding of the questions, it will be in a position to answer them.

But in the test as performed (e.g., in the Loebner competition) the machine is not a robot. It's just an electronic box lacking eyes and fingers. It is wired directly into the judge's console. All it gets from the judge is what comes down the wire. (I've tried to expand on this here).

The difficulty I see is that in this setup of the test, the setup always used in practice, there is no step of interpretation of text shape. The shapes of the judge's questions don't flow down the wire. The machine never gets the shapes of the questions. The man got them. They appeared on his monitor. He looked at the monitor, interpreted the shapes and through this, understood the questions. But the machine never gets the shapes, it never gets the questions. How, then, could it be in a position to answer them?

@Ben. I don't want to suggest that I'm objecting to the Turing test as Turing described it in 1950. He implied that the machine contestant must be a robot at least to the extent of having an eye and a finger in order to "take the place of" the man and use his teleprinter as the man did. Rather, I'm objecting to the test as performed. The machine is never a robot. And this (I argue) makes a world of difference.

The reason the computer requires eyes and fingers is, firstly, that Turing implied they're needed so the machine can "take the place of" the man and use his teleprinter as the man did. But I think the essential answer is that communication is by text alone. It's not by text to one contestant and clocked groups of electrons (moving at about 3 gazillion groups a second) to the other. It's text to both. That is, words, things that have interpretable shapes.

It might be objected that when the judge presses keys on their keyboard, the electronic pulses transmitted as a result down the wire to the machine have the content of the judge's textual questions. But this can't possibly be true. The content (the meaning) is in the brain of the observer. The man sees the text on his monitor and understands the meanings of the shapes. The meaning, the understanding, the content of the questions, is inside the man.
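To make the point about the wire concrete, here is a minimal Python sketch (my own illustration, not part of Turing's description or the Loebner setup) of what the judge's keystrokes become in transit: character codes, not glyph shapes.

```python
# What the wire carries: character codes, not the shapes of letters.
question = "Are you a machine?"

# The judge's keystrokes are transmitted as bytes (here, ASCII codes).
pulses = question.encode("ascii")
print(list(pulses))           # [65, 114, 101, ...] -- numbers, not letter shapes

# The machine can map the codes straight back to characters...
print(pulses.decode("ascii"))  # Are you a machine?

# ...but nothing in the byte stream records how the glyphs looked on the
# judge's monitor (font, size, shape); only character identities travel.
```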

But one might suppose that the computer interprets the pulses arriving from the keyboard in a sense semantically equivalent to the man's interpretation of the shapes that appear on his monitor. In this case, the things pulsing down the wire to the machine are interpretable to the machine as the shapes of the words on the monitor are interpretable to the man. And the machine must contain the meanings (the interpretations) of those interpretable pulses (as the man contains the meanings of the words on his monitor).

This seems like a really interesting question: how does the machine learn, or otherwise acquire in advance, the meanings of the pulses? But I think this is a can of worms.

Roddus
    What is the "shape of the question", and why can't it be encoded in binary and transmitted down the wire? The machine gets through the wire whatever it needs to display it on the monitor for another human, if need be. Whatever it does to generate the response counts as "interpretation" and "understanding", as long as it can perform no worse than the human would. That humans possess some inexplicable extra magic for "interpreting shapes" and "understanding" is exactly what artificial intelligence is supposed to disprove. – Conifold Jan 04 '22 at 04:36
  • This is what I think happens. The programmer, earlier, configures the machine with a set of conditionals embodied as semiconductor switch states. The machine gets pulses down the wire, converts these to switch states, then compares the states to the first parts of the conditionals. On a match, it converts the second part to pulses and sends them back up the wire. These cause the judge's monitor to display shapes (maybe good answers). But nowhere in this process does the machine understand the question. The human does, by interpreting the shapes of the text. But the machine never gets the shapes. – Roddus Jan 04 '22 at 05:38
  • I think they are using artificial neural networks these days that are trained on large conversation samples rather than simple minded conditional switches, but that does not change the principle much. The point is that the machine matching the man makes it plausible that there is nothing more to human "understanding" than a complex network of adjustable "conditional switches". "Shapes" are just figments, like spirits of trees and rivers. See SEP, Chinese Room for replies to a similar argument made by Searle. – Conifold Jan 04 '22 at 05:48
  • @Conifold -- Current large language models such as GPT-3 (playground requires an account) and GPT-J (playground 1, playground 2) are trained on data from various sources, including Common Crawl (400+ billion tokens), Wikipedia (3 billion tokens), and The Pile (800+ GiB). – Michael Jan 04 '22 at 15:44
  • @Conifold. I think neural nets are still elements of conditionals. Instead of matching an input to a set of switch settings, a group of inputs is "matched" to a neural net (many switch settings), where "matching" means something like getting a return value that means over 75% likelihood. So it's still a conditional: if the likelihood is > 75%, then do such-and-such (a toy version is sketched just below these comments). The main point about this to me is that a human determines what the such-and-such is, and on the basis of the meaning to the human of the shape of the label used to define the training set. – Roddus Jan 12 '22 at 01:11
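To make the mechanism described in the two Roddus comments above concrete, here is a toy Python sketch. It is purely illustrative: the rule table, the similarity score standing in for a trained network's output, and the 75% threshold are stand-ins, not how any actual contestant program works.

```python
# Toy sketch of "conditionals embodied as switch states": incoming pulses
# (bytes) are decoded, matched against pre-loaded conditions, and a canned
# response is encoded back into pulses. Illustration only.
from difflib import SequenceMatcher

# The programmer pre-loads condition -> response pairs (the "switch settings").
RULES = {
    "are you a machine": "Of course not. Are you?",
    "what is your name": "My name is Alan.",
}

def likelihood(a: str, b: str) -> float:
    """Crude stand-in for a trained net's match score (0.0 to 1.0)."""
    return SequenceMatcher(None, a, b).ratio()

def respond(incoming_pulses: bytes) -> bytes:
    text = incoming_pulses.decode("utf-8").strip().lower().rstrip("?!. ")
    # Compare the decoded input against the first part of each conditional;
    # fire the best match only if its score clears the threshold ("> 75%").
    best = max(RULES, key=lambda rule: likelihood(text, rule))
    if likelihood(text, best) > 0.75:
        return RULES[best].encode("utf-8")
    return b"Interesting. Tell me more."

print(respond(b"Are you a machine?"))  # b'Of course not. Are you?'
```

Note that the program only ever manipulates byte values and match scores; whether that amounts to "understanding" is exactly what the question and comments dispute.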

1 Answer


Your concern here conflates the issue of communication content with its form

Why would a computer require eyes and fingers? It is capable of taking in an input signal, processing it, and then giving an output signal. The whole idea of the Turing test is to see if such a device can communicate with a human with sufficient sophistication/intelligence that the content of its communication (not its form) is indistinguishable from that of another human. Since the focus of the test is on the content of the communication, we do not care if the computer uses the same senses and forms for communication as a human. That is, we don't care if the computer "sees" the message with "eyes" or not, and we don't care if the computer outputs its message with a "voice" from its "mouth", etc.**

You say that for a machine to have human-like intelligence, it will "understand the judge's questions ... . So it will understand the meanings of the shapes displayed on the monitor." That position conflates content and form. The Turing test is not concerned with determining whether the machine uses the same senses and methods of interpretation as a human. It doesn't care whether the computer takes in the communication through a visual reading of symbols on a monitor, or through a direct signal through a cable, or through any other means. All it cares about is whether the machine has sufficient sophistication in interpreting information and communicating that the content of its communication is indistinguishable from a human. For this reason, the Turing test is set up in order to ensure that the human judge cannot evaluate the machine/person at the other end based on the communication processes or the form of the message.
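As a rough illustration of the content/form distinction (my own sketch, with invented names): the same question can reach the human as rendered glyphs and the machine as a byte stream, and the content survives either form.

```python
# The same content delivered in two different forms.
question = "What is 2 + 2?"

# Form A: the human's form -- glyphs rendered on a monitor (represented
# here simply as a description of the rendering; field names are made up).
form_for_human = {"glyphs": question, "font": "Courier", "size_pt": 12}

# Form B: the machine's form -- a raw byte stream down the wire.
form_for_machine = question.encode("utf-8")

# Interpreted in their respective ways, both forms yield the same content.
assert form_for_human["glyphs"] == form_for_machine.decode("utf-8")
```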

The other concern you note is that in some setups, a human may have more information than the machine, through having additional senses that allow the human to see aspects of the communication that the machine cannot sense. This is a legitimate objection to the test setup. In order for the Turing test to work properly (i.e., to focus on communication content) the communication should be limited to things that both the human and the machine can sense (although they may sense them in different ways). For example, if the "shapes" of the letters are something the human judge can vary (e.g., by choosing different fonts, sizes, etc.) then the machine must also be given this font information. In this case the human subject might see the font, letter size, etc., visually, whereas the machine might get this information directly through some kind of computing syntax. What matters is that the machine and the human both get the same information from the human judge, even if they sense this information using different senses/syntax.
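One way to realize this in practice would be a wire format that carries the presentation details alongside the text. The format below is hypothetical, invented purely to illustrate giving the machine the same font information the human sees.

```python
import json

# Hypothetical wire format: content plus the form information the human
# would see on screen. All field names here are made up for illustration.
message = {
    "text": "Why do you ask?",
    "font": "Times New Roman",
    "size_pt": 14,
    "italic": True,
}

wire_bytes = json.dumps(message).encode("utf-8")

# The machine now receives both the content and the form:
received = json.loads(wire_bytes.decode("utf-8"))
print(received["text"], "-- rendered in", received["font"])
```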


** In any case, these days you can program a computer to detect and read text on a screen, and you can also program a computer to mimic a human voice reasonably well. These are not necessary elements of the Turing test, since they pertain to the form of communication.
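For instance, a screen-reading pipeline might use an off-the-shelf OCR library; the sketch below uses pytesseract (which requires the Tesseract binary to be installed), and the screenshot filename is hypothetical.

```python
# Reading text "with eyes": recover characters from glyph shapes in an image.
# Requires the Tesseract binary plus: pip install pytesseract pillow
from PIL import Image
import pytesseract

screenshot = Image.open("judges_monitor.png")   # hypothetical screen capture
text = pytesseract.image_to_string(screenshot)  # OCR: shapes -> characters
print(text)
```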

Ben