
Gödel claimed that what the Theorems do entail (specifically, the Second Theorem) is that mathematics is inexhaustible:

It is this theorem [i.e., the Second Theorem] which makes the incompletability of mathematics particularly evident. For, it makes it impossible that someone should set up a certain well-defined system of axioms and rules and consistently make the following assertion about it: All of these axioms and rules I perceive (with mathematical certitude) to be correct, and moreover I believe that they contain all of mathematics. If someone makes such a statement he contradicts himself.

In the Gibbs Lecture, thus, Gödel acknowledged that [his theorems] do not rule out the existence of an algorithmic procedure (a computing machine, an automated theorem prover) equivalent to the mind in the relevant sense [...]. However, if such a procedure existed “we could never know with mathematical certainty that all the propositions it produce[d were] correct.” Consequently, it may well be the case that “the human mind (in the realm of pure mathematics) [is] equivalent to a finite machine that … is unable to understand completely its own functioning”: a machine too complex to analyze itself up to the point of establishing the correctness of its own procedures. Gödel inferred that what follows from the incompleteness results is, at most, a disjunctive conclusion:

Either mathematics is incompletable in this sense, that its evident axioms can never be comprised in a finite rule, that is to say, the human mind (even within the realm of pure mathematics) infinitely surpasses the powers of any finite machine, or else there exist absolutely unsolvable Diophantine problems of the type specified… It is this mathematically established fact which seems to me of great philosophical interest.

In other words, either the mind actually has a non-algorithmic and not fully “mechanizable” nature, or else there exist absolutely undecidable mathematical problems. But [Gödel's Theorems] don’t allow us to go further and conclude that the true disjunct is the first one. According to Gödel, then, what follows from [them], and especially from [the Second one], is that if our mind is a computing machine, it is one such that it “is unable to understand completely its own functioning.”

While the inference to the disjunction is reasonable, I am unable to understand why Gödel didn't reject the possibility of an algorithmic nature of the mind. It always seems reasonable to argue that because a human developed, say, an engine, his mind is more powerful than the internal logic of that engine. Gödel took particular interest in Turing's analysis because it allowed the Incompleteness Theorems to hold in full generality, by providing the required understanding of what a "reasonable system" is.

Fact F: Turing (a mind) developed the notion of machines, including the universal Turing machine with its infinite tape.

So my question is: why was fact F not sufficient for Gödel to resolve the disjunction and conclude that the human mind is strictly more powerful than any finite machine?

Ajax
  • Isn't this like saying our mind came up with the laws of physics, therefore it should be sufficient to claim we are beyond the laws? It seems Gödel didn't want to make such a sweeping claim about the mind (note the frequent use of "if" and "either" in your quotes), instead giving the possibilities. – J Kusin Jul 04 '21 at 18:24
  • Lucas thought Godel rejected the idea of undecidable mathematical problems, but given that he said Godel was "implicitly" arguing this rather than explicitly saying it, Godel probably never gave an argument for it in his writings or speeches (unless Lucas and the IEP just missed some critical record). Would you accept an answer with general arguments about why our ability to construct computers doesn't prove we are superior to all possible computers, and likewise our ability to go beyond any axiomatic system known to us doesn't show we are non-algorithmic? – Hypnosifl Jul 04 '21 at 18:29
  • @JKusin No. We are beyond those laws only in terms of our power of reasoning. Because I understand them, my mind is more powerful than the internal logic of those laws. – Ajax Jul 04 '21 at 18:29
  • @Hypnosifl I will appreciate well-presented arguments. – Ajax Jul 04 '21 at 18:31
  • "Because I understand them, my mind is more powerful than the internal logic of those laws" It seems to me that "more powerful" is an ambiguous phrase--presumably the laws themselves don't have their own consciousness and thus lack self-understanding, so if you understand the laws you are "more powerful" than them in that sense, but would you take that to prove that your understanding can't be generated by physical processes in your brain that obey those laws? If so, why? – Hypnosifl Jul 04 '21 at 18:34
  • @Ajax To claim that, I think you must be assuming some kind of mind/body dualism. – J Kusin Jul 04 '21 at 18:34
  • @Hypnosifl when I say I'm more powerful, I mean my "reasoning" is more powerful than the "reasoning of the law". – Ajax Jul 04 '21 at 18:40
  • By "reasoning of the law" do you mean the reasoning of any physical system governed by those laws? If so, how do you know your brain isn't such a physical system? (are you specifically assuming some argument related to Godel's theorem shows this, or do you think this conclusion is obvious from more general philosophical considerations?) – Hypnosifl Jul 04 '21 at 18:42
  • @Hypnosifl You obviously cannot reason about your own reasoning: isn't that the whole point of Godel's Theorem? – Ajax Jul 04 '21 at 18:42
  • @Hypnosifl You cannot reason about your own reasoning (G's Thm). Reasoning equals Turing's Formalism. Once I have pinned down Turing's formalism, it must follow that my reasoning architecture (Turing's mind) is superior to Turing's machine. – Ajax Jul 04 '21 at 18:45
  • Are you arguing that based on some specific mathematical result like Godel's theorem/the halting problem, or do you think it's clear from more philosophical considerations? I don't see why an AI running on a Turing machine couldn't understand Turing's formalism, this idea doesn't lead to any basic conceptual contradictions. And the fact that no Turing machine can solve the general halting problem doesn't prove we are superior to any possible Turing machine, since we can't solve the general halting problem either. But if your argument is specifically based on Godel's th. I can address that. – Hypnosifl Jul 04 '21 at 18:49
  • @Hypnosifl Specifically Godel's Theorem in conjunction with Turing's definition of reasonable system (Turing machines) – Ajax Jul 04 '21 at 18:51
  • Cool, in that case I'll try to write up an answer about why I don't think Godel's theorem specifically rules out the idea that a Turing machine program could have identical capabilities to a flesh-and-blood human. Might be a little lengthy but hopefully I can get it done in the next few days. – Hypnosifl Jul 04 '21 at 19:23
  • @JKusin It might be the case. After all, the question is about the mind, not the brain. – Ajax Jul 05 '21 at 11:25
  • What is superior to what? A computer that can be (re)constructed, or a human being that cannot be constructed? A human being from whose actions a computer can appear, or computers from whose actions a human being can never appear? – Deschele Schilder Jul 05 '21 at 17:11

2 Answers


The problem of consciousness and the inexpressible

Let us first try to give a definition of an algorithmic mind. In colloquial terms, an algorithmic mind would be akin to a machine: it would perform strictly defined instructions according to strictly defined rules. Starting from initial conditions (which in our case could be axioms), it would arrive at certain results after a certain number of algorithmic steps, or it would lock itself into a perpetual loop. In any case, an algorithmic mind does not need to be self-aware to be effective. As in the Chinese room experiment, it could solve problems without having consciousness.
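To make this concrete, here is a minimal sketch (in Python; the rewrite rules and MIU-style strings are invented purely for illustration) of such a rule-follower. It starts from an "axiom" string, blindly applies whichever rewrite rule matches first, and either reaches its target or gives up, unable to tell a long computation from an endless loop:

```python
# A minimal, purely illustrative "algorithmic mind": a string-rewriting
# machine. The axiom, rules and target below are invented for the example.
def run(axiom, rules, target, max_steps=10_000):
    """Blindly apply the first matching rewrite rule at each step.
    Returns the step at which `target` is derived, or None when no rule
    applies or the budget runs out (from inside, the machine cannot
    distinguish "almost there" from "looping forever")."""
    state = axiom
    for step in range(max_steps):
        if state == target:
            return step
        for old, new in rules:
            if old in state:
                state = state.replace(old, new, 1)
                break
        else:  # no rule applies: the machine halts without a result
            return None
    return None

# Derive "MIIU" from the axiom "MI" with two invented rules.
print(run("MI", [("MII", "MIIU"), ("MI", "MII")], "MIIU"))  # -> 2
```

Nothing in this loop understands anything; it is pure symbol manipulation, which is exactly the point of the Chinese room.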

But ... then came Gödel's theorems. Without going into too much mathematical detail, the First Theorem claims that if a set of axioms (rich enough to express arithmetic, and mechanically listable) is consistent (no contradictions among them), then it is incomplete, i.e. there are statements, expressed in the same formal language as the axioms, that can be neither proved nor disproved from them. For our algorithmic mind this is very damaging, because for certain inputs (axioms and statements) no finite number of algorithmic steps will ever settle them.
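To see why, consider the obvious strategy for such a machine: enumerate every candidate proof in order of length and check each one mechanically. A rough sketch, assuming hypothetical proof-checking callables `proves` and `refutes` (not implemented here):

```python
from itertools import count, product

def decide(statement, proves, refutes, alphabet="01"):
    """Exhaustive proof search over all strings in a toy proof alphabet.
    `proves(p, s)` and `refutes(p, s)` are assumed mechanical checkers.
    If `statement` is independent of the axioms, neither branch ever
    fires and this loop runs forever."""
    for n in count(1):
        for symbols in product(alphabet, repeat=n):
            candidate = "".join(symbols)
            if proves(candidate, statement):
                return True
            if refutes(candidate, statement):
                return False
```

The search is complete in the sense that it finds a proof or refutation whenever one exists; the First Theorem guarantees statements for which neither exists, so the machine can never know when to stop waiting.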

Gödel's Second Theorem is even more restrictive in this regard: it claims that a consistent set of axioms can never prove its own consistency, i.e. prove that no contradictions arise from the axioms in the set. This is even more damaging for our algorithmic mind, because it can never establish, from within, that its axioms will not one day yield two opposite statements (the Sun is hot, the Sun is cold!).

This now leaves us in an area bordering on madness and paradox :) If we consider the human mind to be a well-ordered algorithmic machine with a set of initial axioms, we now know that such a machine is inherently limited: it can never be absolutely sure that it perceives truth, and some problems will remain forever undecided and unsolvable.

Or we could accept a non-algorithmic view of the human mind, one that borders on mysticism: the mind is not a Chinese room, its fundamental property is consciousness and self-awareness, and some things cannot be expressed in a formal language. Or in other (Gödel's) words: the human mind (even within the realm of pure mathematics) infinitely surpasses the powers of any finite machine.

rs.29
  • Especially the last paragraph, very true! – Deschele Schilder Jul 05 '21 at 22:21
  • Note that many philosophers would disagree that the Chinese room would lack consciousness, including those who do believe consciousness presents philosophical problems for reductive physicalism (for example, David Chalmers coined the term "hard problem of consciousness" to talk about the latter, but he does argue in his book The Conscious Mind that there are good reasons to believe the Chinese room would be conscious and would understand Chinese, even if the "demon" executing the program would not). – Hypnosifl Jul 05 '21 at 23:12
  • @DescheleSchilder Last sentence is Gödel's :) – rs.29 Jul 06 '21 at 05:40
  • @Hypnosifl Well ... the original idea of John Searle is that the entity in the room (himself, a computer, etc...) does not have a clue about Chinese, does not know what the question or the answer is, and merely follows instructions. For our purpose it could be a complex set of cogs, if it is easier to think of it as something dumb :) Of course, we could debate whether the Chinese room essentially has consciousness, but that is a topic for another question. In this example, you could just imagine a machine that works with a set of axioms and some algorithm without understanding them. – rs.29 Jul 06 '21 at 05:51
  • But Searle doesn't just presuppose that nothing in the room has conscious understanding of Chinese, he tries to make an argument for that based on the idea that the "demon" doing the symbol-shuffling needn't have such an understanding. As noted in this section of the SEP article, many philosophers (including Chalmers) have responded by arguing that some other parts of the system as a whole would have a consciousness separate from the demon's, and that separate consciousness would understand Chinese. – Hypnosifl Jul 06 '21 at 20:30
  • @Hypnosifl Searle's original idea was that a Chinese person standing outside and asking questions would assume that some intelligent entity which understands Chinese, and of course has consciousness, answers those questions. Searle just pointed out that the daemon in the room does not need to understand Chinese and, going further, does not need to have consciousness at all. It could be assumed that something outside that built the Chinese room understands Chinese, i.e. that the Chinese room did not arise spontaneously. – rs.29 Jul 07 '21 at 09:04
  • @Hypnosifl To further clarify the point, an example from real life: a simple handheld calculator can answer questions asked in arithmetical language, for example 1234+8765. Does it have consciousness? Most would agree that it does not. Yet, to someone from, say, the 16th century it would look like a magic box with a real daemon inside. – rs.29 Jul 07 '21 at 09:09
  • A calculator doesn't give evidence of some higher level conceptual understanding of its output, while it's assumed that the Chinese Room can converse about various topics like a Chinese-speaking human (i.e. it passes the Turing test). That'd be the main reason there are plenty of philosophers who would be ready to attribute human-like consciousness to the Chinese room but not to a handheld calculator. Chalking it up to the consciousness of the being that created the system isn't seen as an answer since the system can give novel responses. – Hypnosifl Jul 09 '21 at 16:22
  • @Hypnosifl The calculator has the appearance of understanding arithmetical language, which is not as rich as Chinese, but the principle is nevertheless the same. The Turing Test is actually meaningless in this regard (and in general), because, as I said, a person from the 16th century would likely conclude that there is a living sentient entity inside a simple calculator. And we would likely be fooled by a computer from the 23rd century. – rs.29 Jul 09 '21 at 20:31
  • But even a person from the 16th century would, if they were somewhat rational, not conclude there was a being with a fully humanlike mind in the calculator--they could imagine a being that only knew how to do math but didn't have any other human mental traits. The point of the Turing test is to show humanlike verbal behavior on a wide range of unpredictable subjects, giving evidence of actually understanding what it's talking about. Of course if a 23rd century computer could do that, the same philosophers I mentioned would conclude it probably had a humanlike consciousness as well. – Hypnosifl Jul 09 '21 at 20:41
  • @Hypnosifl Not necessarily; the calculator could likely be proclaimed to be "magical", i.e. possessed by an (evil) spirit, spirits being much older and smarter than humans. In that sense, the Turing test is meaningless; there have already been programs that fooled some people, i.e. passed the test. With the advance of AI, this will become much more common. Yet none of these programs were really conscious. – rs.29 Jul 09 '21 at 20:54
  • calculator could likely be proclaimed to be "magical" , i.e. possessed by (evil) spirit, which are again much older and smarter than humans That's why I specified a "somewhat rational" person from the 16th century (say, Johannes Kepler)--though they might think it possible the calculator contained an entity with all the mental capabilities of humans, they would have to admit they had seen no direct evidence for mental capabilities beyond arithmetic. Similarly, one has to assume a long-term Turing test that gives you as much behavioral evidence as with ppl you have close relationships with. – Hypnosifl Jul 09 '21 at 21:26
  • And note it's usually stated as an assumption of the Chinese room thought-experiment that it shows all the conversational abilities of a Chinese-speaking human, so it's not really relevant to point out that people can be fooled by short interactions with programs that don't in fact have the same level of conversational abilities as typical humans. – Hypnosifl Jul 09 '21 at 21:28
  • @Hypnosifl You are missing the point. And the point is simple: a lot of people in the 16th century would be fooled by a simple calculator. In the 20th-21st century some people were fooled by existing applications. It can be assumed that in the 23rd century even more people will be fooled by advanced applications and computers. In other words, the Chinese room is gradually becoming a real possibility. Yet in all those cases there is really no proof that the calculator, modern applications, or these future applications would have consciousness. After all, they are built on the same principles. – rs.29 Jul 10 '21 at 12:24
  • Fooling people is different from being behaviorally indistinguishable from a human in principle. And the philosophers who suggest that systems functionally identical to a human would have humanlike consciousness aren't just arguing that based on personal intuition, they have philosophical arguments--for ex. Chalmers argues there must be "psychophysical laws" determining which physical systems have which types of consc., and then uses thought-experiments involving gradual replacement of neurons to argue for a principle of "organizational invariance". – Hypnosifl Jul 10 '21 at 14:44
  • @Hypnosifl Actually, no. Fooling people is practically the only way something could be judged to be indistinguishable from a real human. Exactly because of Gödel's Second theorem we can never find "psychophysical laws" that could determine with full certainty what it takes to have full human consciousness, because we are humans. We can only judge the "humanness" of a certain AI by the degree to which it manages to fool us into thinking it is human. – rs.29 Jul 10 '21 at 18:12
  • Fooling people is practically only way something could be judged to be indistinguishable from real human Since this is a thought-experiment we need not be concerned with whether a method is "practical" or not, only with what is possible in principle. For example, one might imagine creating a near-infinite number of physical copies of a given human and exposing them to all possible sensory inputs, then doing the same for a near-infinite number of copies of a program designed to simulate that human, and seeing if there were any statistical differences in their responses. – Hypnosifl Jul 10 '21 at 18:37
  • Exactly because of Second Gödel's theorem we cannot ever find "psychophysical laws" that could determine with full certainty what it takes to have full human consciousness, because we are humans Even if one buys the argument that Godel's theorem shows minds to be non-computational this doesn't follow, for example a believer in Penrose's theories about the brain using quantum gravity to do non-computational things could still believe it's possible to discover the laws relating physical states in a complete quantum gravity theory to types of conscious experiences. – Hypnosifl Jul 10 '21 at 18:39
  • @Hypnosifl Gödel's theorem does not show the mind to be non-computational. It shows that the mind (as a system) cannot prove itself (to be complete and consistent). In other words, no matter what axioms we take, we cannot be certain that some paradox will not arise. These axioms are your "psychophysical laws". Therefore, no matter what rules you set for your thought-experiment, there is only one criterion that is practical (i.e. possible), and that is the ability of the AI to fool people. – rs.29 Jul 11 '21 at 11:16
  • @Hypnosifl In fact, a consequence of Gödel's theorem is that we cannot prove ourselves to be fully human :) – rs.29 Jul 11 '21 at 11:19

"A created B; therefore A is strictly more powerful than B" is not justified reasoning.

  • Humans create machines that are physically much more powerful than humans, such as trains.
  • Humans create computers that are able to calculate much faster and more accurately than humans.
  • Indeed, the purpose of making a tool is so that the tool can do something better than you could do without it; all tools are intended to surpass their creators in some way.
  • Humans create more humans through reproduction. The child may surpass the parent.
  • Humans have the technical potential to create more humans artificially, through cloning. This has not been done for ethical reasons, not technical ones; it is not technically more difficult to clone a human than to clone a sheep.
  • Computer viruses can copy themselves to create more computer viruses just as effective as the original.
  • If something is able to copy itself, and introduce some random mutation in the copy - as humans and other organisms do, and computer viruses could potentially do - there is some chance that the random mutation will make the copy better than the original. With selection to weed out worse results, this process can repeat until the result after many generations is much better than the original.
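The last point is easy to demonstrate mechanically. Here is a toy copy-mutate-select loop (all names and the fitness function are invented for illustration; the "genome" is just a list of numbers):

```python
import random

def evolve(original, fitness, generations=500, offspring=20, rate=0.5):
    """Repeatedly make imperfect copies of the current best genome and
    keep whichever scores highest. Purely illustrative."""
    best = original
    for _ in range(generations):
        brood = [[g + random.gauss(0, rate) for g in best]  # mutated copies
                 for _ in range(offspring)]
        best = max(brood + [best], key=fitness)             # selection
    return best

# Invented example: evolve a genome whose entries should sum to 100.
start = [0.0] * 5
result = evolve(start, fitness=lambda g: -abs(sum(g) - 100))
print(sum(result))  # after many generations, close to 100 -- far "better" than start
```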

There is no empirical evidence that humans can do anything beyond the theoretical capabilities of a Turing machine. We're made of the same kinds of atoms as computers, operating under the same laws of physics; what would grant our atoms any special capability forever beyond the computer's atoms?

A Turing machine can simulate the interactions of atoms as well as you desire, as long as you're willing to wait a long time. From this it follows that it could in principle simulate a human brain, and therefore do anything the brain can do, albeit much slower.
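As a crude illustration of what "simulate the interactions of atoms" means computationally, here is a sketch (invented parameters; two point "atoms" coupled by a spring, advanced by naive Euler steps) in which accuracy is bought with smaller time steps, i.e. with more waiting:

```python
# Two point "atoms" joined by a spring, integrated with fixed time steps.
# Shrinking dt (and raising steps) makes the simulation as accurate as
# desired, at the cost of more computation -- the trade-off in the text.
def simulate(x1, x2, v1, v2, k=1.0, m=1.0, dt=1e-4, steps=100_000):
    for _ in range(steps):
        f = k * (x2 - x1)        # Hooke's-law force on atom 1
        v1 += (f / m) * dt
        v2 -= (f / m) * dt       # equal and opposite force on atom 2
        x1 += v1 * dt
        x2 += v2 * dt
    return x1, x2

print(simulate(0.0, 1.5, 0.0, 0.0))
```

A brain is vastly bigger than two atoms, but nothing in the step rule changes in principle, only the waiting time.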

Now let's look at the relevance of Gödel to all this. Gödel's result shows that we are able to verify certain theorems beyond the capability of certain formal systems. A human can verify that a Gödel sentence for arithmetic is true (granting that arithmetic is consistent), and this truth cannot be determined within arithmetic itself. Some observations about this:

  • This does not imply that humans are able to verify all theorems.
  • There are also formal systems that are able to verify the Gödel sentence for arithmetic; humans are not unique in being able to verify it. (These formal systems have their own Gödel sentences which they can't prove, of course.)
  • It may be that there are things like "Gödel sentences" for the human mind - propositions that humans inherently can't determine the truth of, but some other mind could. For instance, suppose I say to you proposition P: "David can never determine that P is true." If David says P is true, then he is contradicting himself, but if I say P is true, there is no contradiction (and I'm right). In some ways P is analogous to a Gödel sentence, since Gödel sentences can be interpreted as claiming that they cannot be proved by a particular formal system. Of course, this is not a rigorous argument, but it's conceivable that something like it could be made rigorous.
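The construction in the last bullet has a well-known computational cousin, the halting-problem diagonalization: given any alleged halting decider, one can mechanically build a program that it misjudges. A sketch (the decider `claims_halts` is hypothetical; the point is precisely that no correct implementation of it can exist):

```python
def diagonalize(claims_halts):
    """Given any alleged halting decider (a hypothetical function from
    programs to booleans), return a program it must get wrong."""
    def trick():
        if claims_halts(trick):   # decider says trick halts...
            while True:           # ...so trick loops forever,
                pass
        return                    # ...otherwise trick halts immediately.
    return trick
```

Whatever `claims_halts` answers about `trick`, `trick` does the opposite -- just as P is true exactly when David never affirms it.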
causative
  • ""A created B; therefore A is strictly more powerful than B" is not justified reasoning. Humans create machines that are physically much more powerful than humans, such as trains. Humans create computers that are able to calculate much faster and more accurately than humans." Question is on the reasoning power, not physical or speed. – Ajax Jul 06 '21 at 10:17
  • @Ajax "strictly more powerful" was your phrasing. There are many cognitive tasks at which a computer surpasses a human. What say you of the other examples - if a computer program copies itself, doesn't the copy have the same reasoning power as the original? If a person has a child or makes a clone? Even if you limit your claim to reasoning power, you still have no justification for it. – causative Jul 06 '21 at 10:31
  • "if a computer program copies itself, doesn't the copy have the same reasoning power as the original? If a person has a child or makes a clone? " : You are distorting categories. "Computer" and "person" are different categories. The entire question is about these two categories. Computer is able to copy itself not due to some accidental/mystical capability, but a technical capability developed by "person". – Ajax Jul 06 '21 at 10:51
  • @Ajax What does it matter how the computer program acquired the capability, as long as it is able to do it? I haven't seen you articulate any clear reason why you think a created object would have less reasoning power than the object that created it. – causative Jul 06 '21 at 11:06
  • Because there is an a priori mind behind the computer. It provides validity to the computer. – Ajax Jul 06 '21 at 11:08
  • Another way to interpret Godel's Thm is that understanding is not reducible to rules. Your understanding or interpretation is what allows you to work with the system in the first place. – Ajax Jul 06 '21 at 11:10
  • @Ajax Godel's theorem does not prove anything about "understanding is not reducible to rules". Your understanding may simply be - in fact is - just the result of another mechanistic system. Godel's theorem says nothing against that. – causative Jul 06 '21 at 11:15
  • @Ajax It is true that you can understand the Godel sentence for arithmetic, which means your thinking is more powerful than arithmetic, but there are plenty of formal systems more powerful than arithmetic as well. Arithmetic with the Godel sentence added as an axiom is one of them. So nothing about this says your thinking can't be another formal system. – causative Jul 06 '21 at 11:37