-2

I seem to be having a bit of difficulty explaining what seems to me to be an important failure of the Turing test as performed: a failure that means that, to date, no performance has yielded any indication of the intelligence of the machine contestant.

This is my argument:

  1. The test comprises questions and answers by text alone. This means by the shapes of words, numerals, punctuation and special characters.

  2. In text, the shapes of words (etc.) bear the meaning of the text. In reading a book, a human sees the shapes of the words, understands the meanings of the shapes, and by virtue of this knows the text.

  3. In order to understand the judge's questions, the computer contestant must be exposed to the shapes of the judge's questions. (And it also needs to be able to react to the shapes.) Without exposure to the shapes, the machine could not possibly understand the questions (though Turing does raise the issue of telepathy).

  4. In the test, as performed, the machine is not exposed to the shapes of the questions. Rather, it is wired into the judge's keyboard. All the machine gets from the judge is what flows down the wire. The shapes of the questions do not flow down the wire.

  5. Hence, the machine never gets the questions. And since it never gets the questions, it could never understand them. And since it could never understand them, it could never answer them.

  6. The judge assesses intelligence on the basis of the answers. Since the judge never gets the answers, the judge could not possibly assess intelligence.

My question is: is anything wrong with this argument? In other words:

(a) Are there false premises? And if so, which ones, and why? (b) Is the logical form valid? And if not, why not?

Roddus
  • Have you ever interacted with a customer service chat bot? How do they work? – user4894 Jan 13 '22 at 03:45
  • The shape of the question is not relevant, only the meaning is. It is true that the shape provides the meaning for a human reader, but there isn't just one shape that provides that meaning. You can change the fonts, change the spacing, wrap the text around a curve, and a reader can still grasp the meaning. All that is needed is to identify each character in the message. The computer can do that by the signals that come from the keyboard. – David Gudeman Jan 13 '22 at 07:17
    Premise 2 is false: blind people do not see any shapes but can understand words, and most people can learn to read binary code directly, the same code computers read. Even if 2 were true, the inference to 3 is invalid. Assuming that people can only understand through shapes (which they don't, but let's assume it for the sake of argument) does not imply that other entities cannot do it without them. – Conifold Jan 13 '22 at 08:04
  • Whatever is sent over the wire is sufficient for a computer to construct word shapes as much as it likes, if that were relevant. These days the computer could also be sent photos of questions over the wire. But shapes are irrelevant, luckily; that's why we can use sounds as well to transmit words. – tkruse Jan 14 '22 at 04:30
  • @David Gudeman I agree about fonts, but the meanings of the shapes are inside the human (are the interpretations of the shapes). On your explanation presumably the computer contains the meanings of the shapes, and these meanings are activated by the clocked pulses received from the keyboard. But how is the pre-existing relationship between the clocked pulses and the meanings of the shapes established? I can't see any way that this could happen. – Roddus Jan 18 '22 at 09:09
  • @tkruse I agree that clocked voltages can be emitted by a computer and received by a screen, then by its design, the screen will display certain ASCII shapes, or word shapes. But there are no word shapes inside the machine. The ASCII characters exist only on exposed surfaces of attachments (such as screens) so people can see and interpret the shapes. The computers we are talking about have no eyes so can't see the shapes on the screen. The machine is never exposed to the shapes, and hence is never in a position to understand what the shapes mean. The TT is limited to text shapes. – Roddus Jan 18 '22 at 09:37
  • @Conifold In talking about text I'm sticking to Turing's specification of text-only communication. So when you say people can understand through other than shape, I completely agree, but I'm limiting the sensory modalities to just the one Turing specified (vision). I agree that a machine might understand by virtue of reacting to other than shape, where the reaction is to access meanings. But there needs to be a preexisting relation between the meaning and the non-shape. I can't see how a computer could ever establish that sort of relation. – Roddus Jan 18 '22 at 09:43
  • The shapes of the messages are symbols of the alphabet; they can be transmitted to other humans as well in writing, in sound, as Morse code or in braille. Messages can be translated between these formats many times without losing meaning, so the visual shapes do not carry meaning. The messages can also be encrypted without losing meaning, such as by replacing each letter with the next letter in the alphabet. In textual communication, word shapes are not intended to carry meaning. That's different from trying to solve a rebus word puzzle. – tkruse Jan 18 '22 at 10:41
  • It is unclear what "meaning" is supposed to be as you use it, but whatever it is there need not be any preexisting relation between it and whatever carrier. It can be established through training and learning, which both children and computers are capable of. How it is done depends on the theory of "meaning". If you are saying that computers lack enough physical interaction with the world to establish "meanings" then you are in good company. Many AI researchers believe that "true" AI will have to be robotic, i.e. implement embodied cognition. – Conifold Jan 18 '22 at 11:08
  • Also consider the fact that before the human sees the shapes of the words, the photons coming from the page must first hit his/her eyes, which then have to translate those stimuli into the shapes. So one could say the photons are the ones that bear the meaning of the text. And there is very little difference between the photons and the electrical signals from a keyboard. – Sam Jan 18 '22 at 18:28
  • @Roddus, there is no meaning inside the computer, only bits of silicon and other materials in various electronic states. – David Gudeman Jan 18 '22 at 19:11
  • @David Gudeman, I agree that computers as configured and used today contain no meanings. But why not under some configuration and storage content? I suppose we are talking about the same sort of thing - meanings as items existing inside human brains as distinct from references (say external objects). AI's challenge is presumably to work out how bits of silicon etc. could constitute meanings (if that is possible). But cells constitute meanings inside human brains. Is there something about silicon etc. that excludes it from possibly embodying meaning? – Roddus Jan 21 '22 at 02:01
  • @Sam, Of course a human eye doesn't create an internal version of the seen shape. Rather, saccades index on angles, contrast differences and so on. But that aside, you say the photons could bear the meanings. I suppose it would be the pattern of photons. This seems fair enough. But the pattern has been assigned a meaning by a person or community. For the KB pulses to have meaning to the machine, some machine or other would have had to have perceived similar pulses (as a human perceives shapes) then assigned meanings to them. ... – Roddus Jan 21 '22 at 02:15
  • @Sam cont. Then the machine at issue would have to have learned the meanings of the groups of clocked voltage levels (as a human learns the meaning of groups of shapes, ie words). This seems interesting. One issue is that the pulses are internal to the machine. When a human learns the meaning of a shape, the shape is external to the human. The human doesn't learn the meanings of neural pulses. – Roddus Jan 21 '22 at 02:21
  • @tkruse, I agree that symbols don't carry meaning in the sense of contain or carry like a backpack, and they don't in themselves indicate their meanings. Which is what Searle says over and over - there's no way to get from syntax (shape, sound, Braille, semaphore, ...) to the meaning. The situation is that the shape is a term of a relationship the other term of which is a meaning. To get to the meaning you need not only the shape, but also the connective element - the relationship per se. ... – Roddus Jan 21 '22 at 02:34
  • @tkruse cont. you say "In textual communication, word shapes are not intended to carry meaning", but I disagree in the following sense. I want to communicate a meaning, I know the related shape and write it on a piece of paper. The recipient (who knows the language) sees the shape. Inside their brain, a "representation" of the shape is connected to the meaning (also internal). Seeing the shape activates the representation. The process then follows the connection to the related meaning and activates the meaning. This is what "carry meaning" means (the idea of "carry" being very confusing). – Roddus Jan 21 '22 at 02:44
  • @Conifold, by "meaning" in a human brain, I mean a 3-D neural structure. In a computer it would be a computer "memory" structure realized in "linear" memory by use of pointers. By "preexisting relation" I mean that in order for a system to understand the meaning of a value of a property of something (a certain shape being a value of the property of shape), that value must have previously been assigned a meaning (internal representation of the value associated with an internal meaning). I agree that most meanings entail situatedness (are inner effect-sensor structures). – Roddus Jan 21 '22 at 02:55
  • @Roddus, meaning isn't any kind of configuration (or a 3-D neural structure). Meaning is something that happens in the mind. It has no physical structure, and there is nothing like it in the physical world that anyone has ever been able to identify. Even something like a reference or pointer in a computer program is not like a meaning except in the mind of the programmer. Every place people claim to find meaning in the physical world eventually comes down to some sort of meaning or relationship that they are merely imputing--in their mind. – David Gudeman Jan 21 '22 at 04:20
  • @tkruse, I've answered my question as a way to reply at length to key points in your answer. The gist is that I think you are using the idea of information to understand what flows down the wire from a keyboard, but the concept, though very widely used, is hopelessly confused and an impediment to clear understanding, in my view. What I've said about keyboards also goes for OCR scans of documents, in that the shape "A" (in a document) is not inside the computer. The computer has no access to the shape. All it has is a set of clocked voltages or semiconductor switch states. – Roddus Jan 24 '22 at 04:31
  • I think I struggle to get the meaning of your question. The cause must obviously be that after you typed your question into stackexchange, the text was transferred to servers in digital form, losing the shapes and all the valuable semantics in the transformation process. Worse, those signals were transformed again a second time to light on my screen, losing even more information, and transformed a third time in my eye from optical shapes to neural correlates, so electric signals again (oh no!). So you see, the meaning of your words cannot possibly reach another human. – tkruse Jan 24 '22 at 06:10

2 Answers


The Turing test is not a formally specified experiment, but just a thought experiment. As such, the specifics given in the original example are not important; a Turing test can be performed in many ways different from the original example.

The only crucial detail of the Turing test is that the interrogator cannot directly perceive whether they are interacting with a human or an artificial intelligence, and must make a judgement based on observed behavior. All other details are arbitrary examples.

In particular, setting up a Turing test such that the machine does not get the text in binary format, but as written on paper, with the machine scanning the question via a camera, is also a viable Turing test. It's just a pain in the ass to set up. Philosophically there is no difference.

Nowadays there are websites where photos can be uploaded for free and an algorithm will detect the words in the photo with high confidence. So computers can read visual images if necessary (provided the text is printed sufficiently clearly). In 1950 that would have been much more difficult in practice.

That only leaves the question of whether machines could get the meaning of words without having access to the original shape. On websites like https://www.wolframalpha.com it is possible to ask questions in natural language and get suitable answers for plenty of questions. It's not good enough to pass the Turing test, but it is good enough to pass the "Roddus-Test" of transmitting word meanings to machines in binary form, so that the machine "gets the question". So this part of the experimental setup is already proven to work. This also works on smartphones and other devices where you can ask questions in spoken language, and the device will respond to many such sentences in useful ways. So very obviously word meanings can be sent to machines without visual shapes.
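The point can be illustrated with a deliberately crude sketch (this is hypothetical toy code, not how Wolfram Alpha or any real assistant works): a machine receives a question purely as a byte stream, never as a visual shape, and can still produce an appropriate answer.

```python
# Toy illustration (NOT a real question-answering system): the machine
# receives the question only as bytes from the wire - no visual shapes
# are ever involved - and still responds appropriately.
ANSWERS = {
    "what is 2+2?": "4",
    "capital of france?": "Paris",
}

def respond(raw: bytes) -> str:
    """Decode the byte stream and look up a canned answer."""
    question = raw.decode("ascii").strip().lower()
    return ANSWERS.get(question, "I don't know.")

print(respond(b"What is 2+2?"))        # "4"
print(respond(b"Capital of France?"))  # "Paris"
```

Real systems replace the lookup table with language models, but the input channel is the same: bytes, not shapes.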

This is evident to any child today, but it was also evident already in 1950 philosophically, which is why Alan Turing did not bother to make the experiment any more complicated than necessary.

tkruse

This is a reply to tkruse, who I think raises two of the most important matters central to the Turing test as performed. The first is the concept of "text in binary format". The second is that typing on a keyboard is a way "of transmitting word meanings to machines in binary form".

Text in binary format

Specific to the matter at issue, the claim is that what flows down the wire from the judge's keyboard to the computer contestant is text in binary format. So the judge presses the key with the shape "A" imprinted on its top surface. In response (ignoring make-break pairing) the keyboard via its designed electronic internals, emits a sequence of clocked electrical pulses along the wire exiting the back of the keyboard.

These pulses are at either of two approximate voltage levels which can be called "high" and "low", and the transmission is called "binary" to reflect this. The actual pulses comprise groups of electrons or electron "holes".

The two levels are also called "0" and "1". To simplify slightly: when the plastic key inscribed with the shape "A" is pressed, the emitted group of pulses, in a system implementing the ASCII standard, is named "01000001". This is the name of the group of electrical pulses. Instances of the shapes "0" and "1" don't flow down the wire; rather, those shapes are simply names, invented by human observers, of what does flow down the wire. The shapes of the electron groups bear no relation to the shapes of their names. And the machine has, by its design, no power to react to the shapes of the electron groups.
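The naming at issue can be made concrete with a minimal Python sketch (the strings "0" and "1" here stand in, as argued above, only as human-invented names for the two voltage levels):

```python
# The key labelled "A" is transmitted, in an ASCII system, as the
# pattern humans name "01000001" (decimal 65). The shape "A" itself
# never travels down the wire - only the pattern this string names.
def to_bits(ch: str) -> str:
    """Return the 8-bit ASCII pattern (as a human-readable name) for a character."""
    return format(ord(ch), "08b")

def from_bits(bits: str) -> str:
    """Recover the character that a named bit pattern stands for."""
    return chr(int(bits, 2))

print(to_bits("A"))           # "01000001"
print(from_bits("01000001"))  # "A"
```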

The assertion is that this process of pressing a key inscribed on its top surface with the shape "A", to the keyboard emitting a group of clocked voltages down the wire, amounts to a change in format of the text shape "A".

So what, exactly, does it mean to change format?

I think the key issue is semantics. Continuity of semantics is fundamental to the idea of change of format. There is some physical change but there is no change in meaning. If the semantics changes, then it's not a change of format. Rather, it's a change of meaning.

The shape "A" has a meaning. For the pulses to be a change of format, they will have the same meaning to the machine as the shape "A" has to humans. What we want to know is, how does the pulse group get the same meaning?

To repeat, this is the meaning to the machine. We are talking about the machine understanding the pulses as the human contestant understands the shapes. So how does the machine get the meaning of the pulses? And why are those meanings the same meanings as those of the text shapes?

The answer is that there's simply no way that the internal machine pulses could have meanings in the sense that external shapes do to humans. The pulses result from designed causal reactions within electronic circuitry. Meaning doesn't come into it.

Transmitting word meanings to machines in binary form

The second claim is that typing on a keyboard is a way "of transmitting word meanings to machines in binary form".

I think the above analysis of the concept of text in binary format defeats this second claim that the machine can get meanings in binary form.

But there seems a much wider issue. Using the concept of information to explain things appears to have big problems. It seems that information is usually considered to be some sort of semantic substance. One might say to someone about some problem, once you get the information (meaning text) you'll understand the solution. In this case, what is called "the information" is regarded as having semantic content, of meaning something. The text has semantic content, contains meanings. And if the form of the text is changed, then the thing that results from the change also contains meanings, and the same meanings as the text.

So if those resulting things then flow down a wire to a machine, they contain the meanings that the original text contained. There's no need for the machine to already have meanings inside itself. There's no need for the machine to have learned anything. What flows down the wire contains the needed meanings.

On this (deeply flawed) way of thinking, typing on keyboards is a way "of transmitting word meanings to machines in binary form". But the problem is: electron pulses don't contain meanings. The meanings at issue are ones inside human brains which are activated via seeing shapes. The electron pulses don't contain parts of human brains. The idea that pulses from the keyboard are "word meanings in binary form" is simply false.

Roddus
  • stackexchange does not work by using answers as replies. If you wish to discuss, this site is not the right place for you. You can of course give an answer to your own question, but that seems a bit pointless in this case, since you are just agreeing to what you already wrote. – tkruse Jan 24 '22 at 06:06
  • @tkruse, First, can I respectfully point out that the arguments in my answer are new (and I think those arguments against your claims about format and transmitting meanings are not only new (per this discussion) but also very good). I think you're plainly wrong. Secondly, the issues of format and meaning transmission are important. I was hoping for a genuine debate. What do you think of my arguments? Or does reasoned argument and the advancement of science take second place at philosophy.stackexchange? Where the only thing that really matters is where something is said - and by whom? – Roddus Jan 26 '22 at 05:50
  • @tkruse, I need to add that what you've said about format and transmission of meaning is very helpful, as an expression of the established wisdom. – Roddus Jan 26 '22 at 05:56
  • If you want a debate, this is definitely the wrong site. See https://philosophy.stackexchange.com/help/dont-ask for some reasons. If you think the rules of this site are wrong, you can try your luck in https://philosophy.meta.stackexchange.com. – tkruse Jan 26 '22 at 12:53
  • "I was hoping for a genuine debate. What do you think of my arguments? Or does reasoned argument and the advancement of science take second place at philosophy.stackexchange?" It doesn't have any place at all. I don't like it either but that's how the site is set up. The Stackexchange family of sites are for questions and answers, but never debate or discussion. It's a very poor fit for philosophy, and that's been noted before. Just how it is. There are philosophy discussion forums online where discussion is welcome, but this unfortunately is not one of them. – user4894 Jan 28 '22 at 06:43