
I was reading an article by J. Mark Bishop, "The danger of artificial stupidity", on Scientia Salon, in which he cites his own research, John Searle, and Hilary Putnam, among others, as proof of the impossibility of strong AI. I've always felt that strong AI deniers are closeted substance dualists: people who believe in souls, but are unwilling to come clean about their religious/metaphysical beliefs for fear of being ridiculed. So instead they come up with all sorts of pragmatic arguments against strong AI, like qualia or computers' lack of insight, which don't really hold.

My reasoning for why denying the possibility of strong AI implies substance dualism is the following:

  1. Any finite-sized physical phenomenon can be reproduced given sufficient technological means and a sufficient understanding of the underlying physical processes.

  2. Denying the possibility of strong AI means that no matter how advanced our technology and how comprehensive our knowledge of neuroscience and psychology become, we will never be able to reproduce the functionality of the human mind.

  3. Per 1), the only reason we would not be able to reproduce the mind's functionality is that there is something non-physical about how the mind works.

  4. Saying there is something non-physical about how the mind works is the same as substance dualism.
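
In propositional form, the argument is a modus tollens. A minimal sketch (the symbols P and R are my own shorthand, not from the article):

```latex
% P: the mind is an entirely physical phenomenon
% R: the mind's functionality can be reproduced (strong AI is possible)
\begin{align*}
  &\text{Premise (from 1):}              && P \rightarrow R \\
  &\text{Premise (denial of strong AI):} && \neg R \\
  &\text{Conclusion (modus tollens):}    && \therefore\ \neg P
\end{align*}
```

Steps 3) and 4) then read $\neg P$ as "there is something non-physical about how the mind works" and identify that with substance dualism.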

My question is the following: is it indeed the case that denying the possibility of strong AI implies substance dualism?

Alexander S King
  • what's "AI"? Artificial Intelligence? if so, "AI" in some different terms than what Computer Scientists mean (like "Expert Systems" or "Machine Cognition" or similar)? or "AI" in some deeper metaphysical sense? and what is "strong AI"? (or "weak AI"?) – robert bristow-johnson Mar 12 '15 at 01:43
  • I mean strong Artificial Intelligence. – Alexander S King Mar 12 '15 at 02:31
  • By strong AI, I mean whatever combination of expert systems, machine learning algorithms, fuzzy logic, genetic algorithms, support vector machines... you name it, necessary to simulate all of the functions of a normal educated adult mind. – Alexander S King Mar 12 '15 at 04:00
  • okay, so "AI" means "Artificial Intelligence". dunno if it's the "AI" like computer geeks think about it or if it's more of a Ray Kurzwiel thing about one's sense of consciousness existing among silicon-based technology rather than carbon based. and then i still dunno what is meant by "strong* Artificial Intelligence". what's with the "strong"*? – robert bristow-johnson Mar 12 '15 at 04:01
  • You said it yourself: Weak AI is basically a set of human-behavior-inspired programming methods (like computer geeks think about it). Strong AI seeks to have a computer be 'as intelligent as' a normal educated adult human, in the same way that an adult human is more intelligent than a dog or an amoeba. Although not unanimous, most people agree that such a level of intelligence implies consciousness, or at least self-awareness, which is more of a Ray Kurzweil thing. – Alexander S King Mar 12 '15 at 04:09
  • okay, my question is this: is "normal educated adult mind" a functional thing (like the computer responds with answers or responses to stimuli in the manner we would expect from a normal educated adult mind)? or is it about the computer taking on qualia or consciousness itself (where we might have to think about the ethics of pulling the plug on this computer)? – robert bristow-johnson Mar 12 '15 at 04:10
  • "You said it yourself: Weak AI is basically a set of human behavior inspired programming methods (like computer geeks think about it)." --- i didn't call that "Weak AI" or anything. --- "Although not unanimous, most people agree that such a level of intelligence implies consciousness, or at least self awareness, which more of a Ray Kurzweil thing." --- yes, that's far from unanimous. – robert bristow-johnson Mar 12 '15 at 04:12
  • so, if the silicon-based technology develops to a sophistication comparable to nature's carbon-based technology, are you asking if that means that the silicon-based technology has qualia or an emerged consciousness? – robert bristow-johnson Mar 12 '15 at 04:17
  • @AlexanderSKing In claims 2 and 4 you seem to be using the wrong word. The word you should be using there is "mind" -- not "brain." To use the word brain is to be confused as to what exactly is at stake in the argument. More generally, you seem to conflate belief in souls with substance dualism. In doing so, you're skipping over hylomorphism and a wealth of similar views (possibly because you can't tell the difference???). – virmaior Mar 12 '15 at 04:27
  • @virmaior I changed the question according to your suggestion. I am conflating souls and substance dualism, but not out of ignorance. That is the exact gist of my question: the way I see it, a property dualist account of the soul is perfectly compatible with strong AI, since a property dualist (hylomorphic or other) soul is to the brain what software is to a silicon-based computer. Am I missing any other possibilities? (oh and thanks for the condescension - classy) – Alexander S King Mar 12 '15 at 04:51
  • What you're talking about is the perfection of human cloning, not the development of strong A.I. Sure, we can create something identical to a brain (i.e. create a brain), but what about that is artificial? – Scott Mar 12 '15 at 05:03
  • I don't know what you mean by calling a hylomorphist a property dualist. You're going to have to connect some dots for me. The software/hardware analogy doesn't seem to capture either the traditional Cartesian dualist's view or the hylomorphist's view. – virmaior Mar 12 '15 at 05:03
  • i'm still wondering if what you're inquiring about is whether silicon-based hardware supporting Intelligence has the properties of qualia or consciousness? and if that might lead to a consideration of the natural rights of that AI, as to whether or not it would be ethical to literally pull the plug on the hardware conducting that AI? – robert bristow-johnson Mar 12 '15 at 05:09
  • I'm not sure about "substance dualism", which appears to be a pretty vague notion introduced by Descartes (who was one of those I-proved-my-god-mathematically people, he had a bunch of proofs). But a denial of the possibility of machine intelligences based on digital computers is a belief that minds are not possible with just known physics (which can be simulated to any desired accuracy). I.e. they necessarily believe in something supernatural, or, like Penrose, that the brains of human mathematicians (!) support gravitic quantum function collapsing or something like that. ;-) – Cheers and hth. - Alf May 24 '15 at 05:35
  • Given we don't understand human intelligence properly, it is hubris to say that strong AI is impossible. There will need to be some radical change of direction from ChatGPT etc to make it happen though (IMHO) – Dikran Marsupial Sep 07 '23 at 14:47
  • Where exactly in the article does he claim that strong AI is impossible (rather than just infeasible)? – Dikran Marsupial Sep 07 '23 at 15:33

9 Answers


I can think of a few alternatives:

  • One could argue for a case where a human-mind-grade AI is theoretically producible, but the universe lacks sufficient resources to do so. This would be a practicality argument, not a theoretical-possibility argument.
  • Idealism can claim strong AI is impossible without being dualistic.
  • Not all finite-sized physical phenomena can be reproduced. You have to be able to measure them first, and there may be unmeasurable values in the universe (QM has shown that presumably unmeasurable values exist; see the sketch after this list).
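
To make the third bullet concrete: the cleanest quantum-mechanical obstruction to "measure it, then reproduce it" is the no-cloning theorem. A minimal sketch of the standard unitarity argument (my illustration, not part of the answer itself):

```latex
% Suppose a single unitary U cloned arbitrary unknown states:
%   U(|psi> (x) |0>) = |psi> (x) |psi>   for every |psi>.
\begin{align*}
  U(\lvert\psi\rangle \otimes \lvert 0\rangle) &= \lvert\psi\rangle \otimes \lvert\psi\rangle, \\
  U(\lvert\phi\rangle \otimes \lvert 0\rangle) &= \lvert\phi\rangle \otimes \lvert\phi\rangle. \\
\intertext{Unitaries preserve inner products, so pairing the two lines gives}
  \langle\psi\vert\phi\rangle &= \langle\psi\vert\phi\rangle^{2},
\end{align*}
```

which forces the overlap to be 0 or 1: only already-known, mutually orthogonal states can be copied. An arbitrary unknown quantum state can be neither fully measured nor duplicated.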

There is also the cheating argument, which claims that "strong AI" is not defined sufficiently well to allow us to accomplish it, but I don't believe that is what you are looking for.

Cort Ammon
  • I have thought of each of those. In reverse order: 1) I don't buy the brain as a quantum computer. The mind is a macroscopic phenomenon, and quantum decoherence indicates a strong likelihood that the brain operates on a classical (non-quantum) level. Plus that would contradict the Church–Turing–Deutsch principle. – Alexander S King Mar 12 '15 at 03:46
  • 2) Idealism is logically possible, but I will follow previous examples and refute it by kicking my foot against a rock. I should have added that as a caveat in my original question [disregarding idealism]. And wouldn't idealism produce a mirror symmetry of materialism, where the mind and the body would still follow the same set of rules, meaning strong AI is possible? – Alexander S King Mar 12 '15 at 03:52
  • 3) The mind being so complex that accurately simulating it is intractable is the most interesting retort of all 3. I guess I wouldn't dismiss it entirely, but I have to note that it puts serious strain on the holographic principle, that is, the principle that the amount of information in a system is bounded above by the surface area of the volume containing it. Maybe the mind is too complicated to be simulated by a laptop or 10 laptops. But is it truly so complex that a machine with equivalent power - say, all of Google's hardware - can't simulate it? That would be far-fetched. (See the worked numbers after this thread.) – Alexander S King Mar 12 '15 at 03:57
  • @AlexanderSKing In your order: 1) QM is one source of unmeasurables; it is simply the most accessible. Look into cellular automata and the idea of non-quiescent entities for a less accessible but less handwavey unmeasurable, and also look into Gardens of Eden, which are not exactly in the direction you are looking, but are related enough to be of interest. 2) There is no guarantee that the idealistic form of materialism will choose (free will) to free itself from whatever limitations material provided. However, I am fine to accept the caveat. I just wanted to point out that the decision is not binary. – Cort Ammon Mar 12 '15 at 15:36
  • As for 3), it depends on how high a fidelity you have to model the human brain at. If it turns out it can be broken down into components (ALU, memory, cache, etc...) then it may be trivial to model. If the tiniest quirks have to be modeled for it to actually function, then this gets harder. As for the computing power, consider protein folding. Proteins fold over a time on the order of a millisecond or less, and the body is constantly folding literally millions of proteins at any time, yet one protein fold on Folding@Home took 10 million hours of CPU time (see the arithmetic after this thread). – Cort Ammon Mar 12 '15 at 15:43
  • Another one to consider, after reading quen_tin, is to look at Chaos Theory and what happens if you try to model a continuous function using discrete values (like floating point numbers); a toy demonstration follows this thread. Any argument for computing strong AI will have to argue that the chaotic portions of the human brain can be contained in statistically representable forms. – Cort Ammon Mar 12 '15 at 15:47
  • If arguments about feasibility are valid here, is the fact that human brains do not routinely tumble into massively chaotic states good empirical evidence that they are resilient, and not hypersensitive to initial conditions, and would this in turn be good evidence that approximate modeling would be good enough? Questions of feasibility may be moot in this issue, however, as the people mentioned in the question seem to be claiming that modeling is ruled out in principle, not merely infeasible. – sdenham Apr 06 '18 at 15:09
  • @sdenham That gets into an interesting corner case which is the idea that systems appear chaotic only if you measure that which is chaotic. For example, if we want to know if it will rain or shine in New York in a month, the variable we use to describe that is highly chaotic. However, if we are interested in the average rainfall in NY over the course of 10 years, that is a variable which is currently "resilient". Likewise, much of what we care about in these discussions regarding the human brain are chaotic, but as you point out, if you just look at "does the brain keep us alive," ... – Cort Ammon Apr 06 '18 at 15:17
  • ... then it does not look very chaotic at all. However, I do believe that if you start from the assumption that these approximate modelings are good enough because human minds are not sufficiently sensitive to initial conditions, then I think you also very quickly arrive at the conclusion that there is no need to treat different humans as individual entities. Indeed, some feel that corporate business treats us all as "cogs," in that that which makes us unique can be easily replaced because the aspects of humanity they care about are, indeed, resilient, and thus replaceable. – Cort Ammon Apr 06 '18 at 15:19
  • As no-one is suggesting that the weather cannot be modeled in principle, this example is consistent with what I am saying: despite a degree of underlying chaos, approximate modeling works in weather forecasting. Furthermore, the sensitivity to chaos necessary to rule out strong AI would not be a corner case, it would be pervasive at all levels, and we would routinely see people suddenly, and for no particular reason, become uncommunicative, unresponsive, and maybe even dropping dead, even though their brains remained physically undamaged and metabolically functional. – sdenham Apr 06 '18 at 16:17
  • Strictly speaking, a hypothesis about the brain is not refuted by it having implications that we dislike, but I do not think that is an issue here: the individuality of people is an empirical fact, and I am not sure that ethical treatment is predicated on that fact anyway. In addition, I see a broad continuum here, not the dichotomy that you present, which seems to me to be equivalent to "either the brain is too chaotic to be modeled (even in principle), or we can treat people without regard to their individuality". Why would the latter have anything to do with the former? – sdenham Apr 06 '18 at 16:31
  • @sdenham Is the individuality of a person an empirical fact? To answer that we have to define what we are using to measure individuality. As a straw man, if the only measure I use is whether they live or not, 100% of people die eventually, suggesting no need to treat people as individuals at all. The point of this is to press the question of what makes an individual an individual, and whether that thing is measurable and quantifiable or not. – Cort Ammon Apr 06 '18 at 22:31
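
The comment of Mar 12 '15 at 03:57 invokes the holographic principle; the related (tighter, energy-based) Bekenstein bound gives a concrete number for a brain-sized system. A minimal back-of-envelope sketch; the ~1.5 kg mass and ~1.26 L volume are my illustrative inputs, not the commenter's:

```python
# Bekenstein bound on the information content of a brain-sized system:
#   I <= 2*pi*R*m*c / (hbar * ln 2)   bits
# Illustrative inputs only: m ~ 1.5 kg, V ~ 1.26 L  =>  R ~ 0.067 m.
import math

hbar = 1.054571817e-34   # reduced Planck constant, J*s
c = 2.99792458e8         # speed of light, m/s
m = 1.5                  # kg, rough brain mass (assumed)
R = (3 * 1.26e-3 / (4 * math.pi)) ** (1 / 3)  # m, radius of a 1.26 L sphere

bits = 2 * math.pi * R * m * c / (hbar * math.log(2))
print(f"R ~ {R:.3f} m, Bekenstein bound ~ {bits:.2e} bits")  # ~2.6e42 bits
```

At roughly 10^42 bits the bound is real but astronomically loose: it guarantees that the mind's information content is finite without saying anything about whether simulating it is practical.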
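
The protein-folding comment of Mar 12 '15 at 15:43 implies a concrete slowdown factor. Spelling out the arithmetic with the comment's own figures (~1 ms per physical fold, ~10^7 CPU-hours per simulated fold):

```python
# Slowdown factor implied by the Folding@Home figures quoted above.
fold_time_s = 1e-3               # physical duration of one fold, ~1 ms
cpu_hours = 1e7                  # reported compute for one simulated fold
cpu_seconds = cpu_hours * 3600.0

slowdown = cpu_seconds / fold_time_s
print(f"~{slowdown:.1e}x slower than real time")  # ~3.6e13x per fold
```

A naive molecular-level brain simulation would multiply that ~10^13x factor by the millions of folds in flight at any moment; the open question is how much of that fidelity a mind-level simulation actually needs.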
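
Finally, the discretization worry from the comment of Mar 12 '15 at 15:47 can be seen directly in a toy system. A minimal sketch (my example; the logistic map stands in for any chaotic dynamics, nothing brain-specific is claimed):

```python
# Iterate the same chaotic map at two floating-point precisions;
# rounding error alone drives the trajectories apart.
import numpy as np

def logistic(x0, steps, dtype):
    x = dtype(x0)
    r = dtype(4.0)       # r = 4 puts the logistic map in its fully chaotic regime
    one = dtype(1.0)
    for _ in range(steps):
        x = r * x * (one - x)
    return float(x)

for steps in (10, 30, 60):
    gap = abs(logistic(0.2, steps, np.float32) - logistic(0.2, steps, np.float64))
    print(f"{steps:3d} steps: |float32 - float64| = {gap:.3e}")
# The gap roughly doubles each step (the map's Lyapunov exponent is ln 2)
# and saturates at order 1: the representation, not the model, picks the trajectory.
```

Whether a brain's behaviorally relevant variables act like x here, or like long-run averages of it, is exactly the resilient-vs-chaotic question debated in the later comments.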