Alan Turing bases his famous test for human-like machine intelligence on a party game between a man and a woman. Each communicates with a hidden judge by teleprinter (text alone). Nowadays, a console comprising a monitor and keyboard would be used. The machine simply replaces the man and continues playing the game. So the machine is a robot: it sees the judge's text displayed on the monitor, understands the meanings of the shapes of the text, then types out an answer, just as the man did.
If the machine has human-like intelligence, it will understand the judge's questions (assuming it knows the language). So it will understand the meanings of the shapes displayed on the monitor. And from this understanding of the questions, it will be in a position to answer them.
But in the test as performed (e.g., in the Loebner competition) the machine is not a robot. It's just an electronic box lacking eyes and fingers. It is wired directly into the judge's console. All it gets from the judge is what comes down the wire. (I've tried to expand on this here).
The difficulty I see is that in this setup of the test, the setup always used in practice, there is no step of interpretation of text shape. The shapes of the judge's questions don't flow down the wire. The machine never gets the shapes of the questions. The man got them. They appeared on his monitor. He looked at the monitor, interpreted the shapes and, through this, understood the questions. But the machine never gets the shapes; it never gets the questions. How, then, could it be in a position to answer them?
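To make the point concrete, here is a minimal sketch, in Python, of what a wired-in contestant actually receives. The TCP plumbing and the receive_question helper are my own assumptions for illustration, not the actual Loebner protocol; the point is only that what comes down the wire is a stream of character codes, never the shapes the judge sees on screen.

```python
# A minimal sketch of the wired setup, under assumed plumbing (a plain TCP
# text channel; real competition protocols differ in detail but not in kind).
# The question is what the machine actually receives from the judge.

import socket

def receive_question(conn: socket.socket) -> bytes:
    """Read one line of the judge's question from the wire."""
    data = b""
    while not data.endswith(b"\n"):
        chunk = conn.recv(1024)
        if not chunk:
            break
        data += chunk
    return data

# What arrives is a sequence of character codes, e.g. for "Why?" the bytes
# 0x57 0x68 0x79 0x3F -- numbers standing in for letters by convention.
# No glyph shapes are ever transmitted; the shapes exist only on the judge's
# monitor, rendered from these same codes by the judge's own console.
```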
@Ben. I don't want to suggest that I'm objecting to the Turing test as Turing described it in 1950. He implied that the machine contestant must be a robot at least to the extent of having an eye and a finger in order to "take the place of" the man and use his teleprinter as the man did. Rather, I'm objecting to the test as performed. The machine is never a robot. And this (I argue) makes a world of difference.
As for why the computer requires eyes and fingers: first, Turing implied that they're needed so the machine can "take the place of" the man and use his teleprinter as the man did. But I think the essential answer is that communication is by text alone. It's not text to one contestant and clocked groups of electrons (moving at about 3 gazillion groups a second) to the other. It's text to both. That is, words: things that have interpretable shapes.
It might be objected that when the judge presses keys on their keyboard, the electronic pulses transmitted as a result down the wire to the machine have the content of the judge's textual questions. But this can't possibly be true. The content (the meaning) is in the brain of the observer. The man sees the text on his monitor and understands the meanings of the shapes. The meaning, the understanding, the content of the questions, is inside the man.
But one might suppose that the computer interprets the pulses arriving from the keyboard in a sense semantically equivalent to the way the man interprets the shapes that appear on his monitor. In this case, the things pulsing down the wire to the machine are interpretable to the machine as the shapes of the words on the monitor are interpretable to the man. And the machine must contain the meanings (the interpretations) of those interpretable pulses, just as the man contains the meanings of the words on his monitor.
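For what that supposition would amount to at the lowest level, here's a tiny sketch (again, the ASCII framing is my own assumption, not anything in the original setup): decoding the pulses recovers which characters the judge typed, but nothing in that step is a glyph shape, let alone a meaning.

```python
# A sketch of what "interpreting the pulses" could mean at the lowest level,
# assuming the pulses are standard character codes. Decoding recovers *which*
# characters were typed, but yields neither the shape of a glyph nor the
# meaning of a word.

pulses = bytes([0x57, 0x68, 0x79, 0x3F])   # what comes down the wire
characters = pulses.decode("ascii")         # -> "Why?"

# By contrast, the man's interpretation starts from shapes: his console
# renders these same codes into glyph bitmaps on the monitor, and he reads those.
print(characters)
```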
This seems like a really interesting question: how does the machine learn, or otherwise acquire in advance, the meanings of the pulses? But I think this is a can of worms.