
In his book Consciousness Explained, Dennett writes, "Anyone or anything that has such a virtual machine as its control system is conscious in the fullest sense" [p. 281], referring to a Joycean machine which (if I understood correctly) may be implemented/simulated by a Turing machine.

Suppose I define Qualia as that thing which will always be left out by any implementation/simulation of consciousness by a Turing machine.

So it seems Dennett believes that Qualia, defined in that sense, does not exist.

The surprising thing to me is that most of my friends insist that no such Qualia exists; they are all intelligent people, often software developers, who, I would expect, know something about the nature of computation, even if only intuitively.

So far I have failed to make even one of them realise that there is something in their inner experience that cannot conceivably be reproduced/simulated by a Turing machine.

They are in good company, BTW; here is an amazing "Closer to Truth" interview with Marvin Minsky where he explains away qualia: https://www.youtube.com/watch?v=SNWVvZi3HX8

Note I am NOT interested in any arguments for or against physicalism, idealism or dualism.

I am curious as to why so often non-philosophers, but nevertheless intelligent people who are supposed to know something about the nature of computation, insist there is nothing in their inner experience which cannot be reproduced/simulated by a Turing machine.

And my request is for references to discussions of this curiosity by philosophers, if such discussions exist.

nir
  • I'd be more interested in why you believe that there is. – Roger Aug 25 '14 at 15:52
  • Because they're wrong. – user4894 Aug 25 '14 at 16:07
  • @user4894 If there exists a set Q defined as "things that a Turing complete implementation cannot simulate", then the onus is on those who believe it to be non-empty to demonstrate that this is the case. The alternative is proving a negative. So my question to the OP is, if the greater number of opinions gathered by people who are, by your own admission, intelligent and competent, disagree with you, should you not try to consider the possibility that it is you, not them, who are wrong? What makes you so certain that you are right, in the absence of any qualitative proof of that? – Roger Aug 25 '14 at 18:56
  • @Roger A TM is an abstract mathematical construction. A TM can no more be conscious than the set of even integers or the category of topological spaces can. Now perhaps the question means: can an implementation of a TM be conscious, the implication being that humans are implementations of TMs? Which is false, since we are finite. But the claim that we are memory- and time-limited TMs is totally unproven. Our brains don't work like TMs do, with a read/write head making and reading marks on a tape. The more you think about this question the more outlandish it gets. – user4894 Aug 25 '14 at 19:32
  • @user4894 TMs also reduce to general recursion and the lambda calculus. None of these things work precisely the same way. It's unreasonable to assume no reduction just because the syntax is different. – Calvin Aug 25 '14 at 19:51
  • @Roger, there is a third radical option that both you and them are not wrong. – nir Aug 25 '14 at 19:55
  • @nir Very true, and also a fourth option that both sides are wrong and that truth is stranger than both sides can conceive. But that wasn't the question. The question, to my reading, reduces to, "All of these supposedly smart people believe something that I think is wrong; why is that?" – Roger Aug 25 '14 at 20:02
  • @Roger, that's where the third, shocking, option comes in. – nir Aug 25 '14 at 20:03
  • @Roger Can we summarize this real quick? So 1. Turing Hypothesis holds, Qualia does not exist. 2. Turing Hypothesis false, Qualia exists. 3. Turing Hypothesis holds, Qualia exists, foundations of formal logic are incorrect? 4. Turing Hypothesis false, Qualia does not exist. I feel like we can come pretty close to writing off 3 at least... – Calvin Aug 25 '14 at 22:19
  • @Calvin, that is not what I meant; the third option is that you and they are both not wrong, by not having the same kind of inner experience. – nir Aug 26 '14 at 05:15
  • @nir Ah! Thank you. So 3. Turing Hypothesis holds conditionally on Qualia not existing in an individually, human minds fundamental diverse? – Calvin Aug 26 '14 at 14:19
  • It may be fruitful to ask why proponents of qualia, or skeptics of machine-qualia, are not committed to skepticism about "other minds", human or otherwise. After all, if you believe Turing's thesis, then you are bound to believe that there exists a Turing machine that would simulate a human brain up to behavioral isomorphism. This means the TM would behave exactly as you would in conversation, respond to questions as you would respond, and, interestingly, profess a deep intuition of its own extra-physical "inner experience" which could not possibly be instantiated in some other substrate. – Tim kinsella Aug 27 '14 at 02:05
  • It seems to me that the skeptic of machine-consciousness would have no more empirical or logical grounds to dispute the TM's claims to "inner experience" than he would to dispute the claims of subjectivity and "inner experience" of other humans. I suppose you could say that the TM's "brain" doesn't look like yours, but you can see that you are clearly on the back foot in that argument. – Tim kinsella Aug 27 '14 at 02:16
  • @Timkinsella Well, the grounds the skeptics have here amount to the Turing Hypothesis. And that's about as far as it goes. As with all religions (and indeed, all thought in general), the only completely reasonable position here is no position at all, but the Turing Hypothesis does lend greater weight to skepticism ceteris paribus. – Calvin Aug 27 '14 at 14:52
  • Reference: http://philosophy.stackexchange.com/questions/15198/is-it-possible-to-be-truly-unbiased/15205#15205 – Calvin Aug 27 '14 at 14:53
  • "non-philosophers, but nevertheless intelligent people". Don't you think you are a bit insulting here? BTW. My computer asks me to tell you that it is indeed insulting. He (he thinks of himself as male) would comment himself, but doesn't have a stackexchange account. – gnasher729 Aug 27 '14 at 22:50
  • @gnasher729, I described my friends and myself; why do you find it insulting? – nir Aug 27 '14 at 22:54
  • @gnasher729 I think establishing sides of the argument on basis of training and thinking style is acceptable in this case. And perhaps it would be better for your computer to identify as a man, as I would expect a computer to more likely have gender than sex. – Calvin Aug 28 '14 at 20:19
  • What constitutes "fully conscious"? -- I'd expect that "manifesting the outward appearance of consciousness" is (at most) a necessary condition, but also being "identical to human consciousness in all possible ways" is too strict; it rules out by fiat consciousness that doesn't involve human brains (even in most forms of dualism there is a relationship between the mental and physical). – Dave Aug 29 '14 at 13:18
  • I'm afraid I don't have the slightest idea how to answer your question other than to suggest that the human race is not as rational as some would like to believe. Quite how Dan D got away with his book title is beyond me. Your comment "non-philosophers but nevertheless intelligent people" implies that philosophers are always good thinkers but this is a misunderstanding. –  Sep 03 '18 at 15:04

11 Answers


In answer to your question, I think Scott Aaronson, a computer scientist at MIT, expresses the strong AI position very eloquently on his blog and in the notes for some of his courses. For instance, http://www.scottaaronson.com/democritus/lec4.html

Here's an excerpt in which he mentions qualia:

So, I asked you to read Turing's second famous paper, Computing Machinery and Intelligence. Reactions?

What's the main idea of this paper? As I read it, it's a plea against meat chauvinism. Sure, Turing makes some scientific arguments, some mathematical arguments, some epistemological arguments. But beneath everything else is a moral argument. Namely: if a computer interacted with us in a way that was indistinguishable from a human, then of course we could say the computer wasn't "really" thinking, that it was just a simulation. But on the same grounds, we could also say that other people aren't really thinking, that they merely act as if they're thinking. So what is it that entitles us to go through such intellectual acrobatics in the one case but not the other?

If you'll allow me to editorialize (as if I ever do otherwise...), this moral question, this question of double standards, is really where Searle, Penrose, and every other "strong AI skeptic" comes up empty for me. One can indeed give weighty and compelling arguments against the possibility of thinking machines. The only problem with these arguments is that they're also arguments against the possibility of thinking brains!

So for example: one popular argument is that, if a computer appears to be intelligent, that's merely a reflection of the intelligence of the humans who programmed it. But what if humans' intelligence is just a reflection of the billion-year evolutionary process that gave rise to it? What frustrates me every time I read the AI skeptics is their failure to consider these parallels honestly. The "qualia" and "aboutness" of other people is simply taken for granted. It's only the qualia of machines that's ever in question.

But perhaps a skeptic could retort: I believe other people think because I know I think, and other people look sort of similar to me -- they've also got five fingers, hair in their armpits, etc. But a robot looks different -- it's made of metal, it's got an antenna, it lumbers across the room, etc. So even if the robot acts like it's thinking, who knows? But if I accept this argument, why not go further? Why can't I say, I accept that white people think, but those blacks and Asians, who knows about them? They look too dissimilar from me.

Oh, this is also an entertaining discussion http://bloggingheads.tv/videos/2561?in=18:56&out=22:10

Tim kinsella
  • Thanks for the references, I'll try to read both the lecture and Turing's paper. However, it seems to me from this excerpt that Aaronson confuses thinking with qualia. I do not think that proponents of qualia, such as Chalmers for example, argue against thinking machines. – nir Aug 27 '14 at 22:23
  • @nir I think when he says "AI skeptics" he means, in particular, strong AI skeptics, not just skeptics of thinking machines generally, if that clarifies anything. – Tim kinsella Aug 28 '14 at 02:29
  • I would like to emphasize the notion of meat chauvinism. – Calvin Aug 28 '14 at 20:22
  • @Timkinsella, the conversation between Yudkowsky and Pigliucci is great! – nir Aug 28 '14 at 21:45
  • Aaronson's argument here is a strawman. AI skeptics don't argue against AI because machines don't look like humans. The usual argument is that consciousness is not the same as computation. See, e.g. the frame problem https://plato.stanford.edu/entries/frame-problem/ – transitionsynthesis Sep 03 '18 at 06:11
  • @transitionsynthesis: The counter to that implied in the quote is that all such supposedly "higher level" arguments are just disguised chauvinism. Some of it is easy to spot, such as Penrose's conjectures that tie consciousness to unproven magic quantum networks. But a lot simply echoes the same pseudo-science that was truly believed about the inferiority of different races, cultures or women - it is scientific philosophy constructed and structured with a goal of justifying a pre-existing belief. – Neil Slater Sep 03 '18 at 07:05
  • @nir, how can we know other people have qualia? What if their qualia are different? Can we recognize them as qualia? Same for machines. Maybe they could see everything symbolically, not geometrically. But I would argue they still are qualia. – rus9384 Sep 03 '18 at 07:07
  • @NeilSlater Now I'm confused. Is the argument really that anyone who doesn't hold the computational theory of mind is a disguised chauvinist, even when they propose an alternative theory? – transitionsynthesis Sep 03 '18 at 17:03
  • @transitionsynthesis: I over-stated that, probably it is some other logical fallacy. However, there are attempts to reject the possibility of artificial intelligence, that are based on starting with "special" and unprovable features of humans. Proposing an alternative theory does not absolve a person of being prejudiced. Although of course, any person could equally have approached the problem "purely", stated their axioms and worked through them to find out what their theoretical position should be. The problem with resolving "hard AI" debates is still with the axioms. – Neil Slater Sep 03 '18 at 17:44
  • I find Scott's arguments merely rhetorical sleight of hand; it's possible to be sceptical not only about consciousness but also about reality, which lands one in solipsism, and Scott's rhetoric skirts this. I can't say that I'm impressed. – Mozibur Ullah Sep 10 '18 at 03:17
  • @Timkinsella A general remark: Scott Aaronson has mis-stated Penrose's stand on the whole debate. Penrose accepts Turing's test. That is, if an AI in a room can convince Penrose over WhatsApp that it is a human being, Penrose would be ready to accept human beings are Turing machines. – Prem Jul 21 '20 at 10:30
  • @Premkumar Very interesting, thank you. But then penrose's position is even stranger: he rejects the strong church-turing thesis. – Tim kinsella Jul 21 '20 at 15:34

People believe that we have qualia because it seems that e.g. red is like something not reducible to declarative knowledge. (I've never heard a satisfactory account of why procedural knowledge isn't as vexing as qualia. But that's an aside.)

People believe that we are Turing-computable because all the physical processes that seem to be in play in biological systems can be described very well with mathematics that is Turing-computable. Empirically, we can't distinguish our universe from a Turing-computable one (at least at the scale of our consciousness). We can see that those things that we describe well with mathematics have a profound impact on consciousness (e.g. neuron reversal potential).

Because we believe we have qualia because we seem to, and because we believe the universe is Turing-computable because we have a staggering quantity of empirical evidence consistent with that hypothesis, and because we have no proof (aside from what boils down to argument from incredulity, which is notoriously weak--see how well the vitalists fared!) that qualia are Turing-incomputable (e.g. that qualia cannot solve the halting problem), we conclude that qualia are computable.
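
To make "solving the halting problem" concrete: if qualia (or anything else) could act as a halting oracle, they would be doing something provably impossible for any Turing machine. Here is the classic diagonal argument as a Python sketch; halts and contrarian are hypothetical names invented for illustration, not real functions:

    # Diagonal argument: no total, correct halting oracle can exist.
    # 'halts' is hypothetical; it stands in for whatever would decide halting.
    def halts(program, argument):
        """Hypothetical oracle: True iff program(argument) eventually halts."""
        raise NotImplementedError("no total, correct implementation can exist")

    def contrarian(program):
        # Do the opposite of whatever the oracle predicts about program(program).
        if halts(program, program):
            while True:      # the oracle says "halts", so loop forever
                pass
        return "halted"      # the oracle says "loops", so halt immediately

    # contrarian(contrarian) refutes the oracle either way: if halts() answers
    # True, the call loops; if False, it halts. So nothing Turing-computable
    # can play the role of halts(), and anything that genuinely could would
    # thereby be doing something no Turing machine can do.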

Tossing aside computability because of a vague hunch that it feels wrong for qualia is just as foolish as tossing aside qualia because they seem awkward to compute.

Rex Kerr
  • It could be an explanation for why some people believe they are Turing-computable, but most people are not physicists, nor philosophers of mind, and have not bothered very deeply with such questions as whether a galaxy or the universe itself is Turing-computable; for such reasons I find it surprising that so many people I talk with (including people answering and commenting here) insist on having nothing in their inner experience which cannot be simulated by a computer; and this is why I am interested in discussions of this phenomenon by philosophers. – nir Aug 27 '14 at 20:55
  • @nir - You can be aware of results from physicists without being a physicist. Why do people believe that stars are far away? They're not astronomers who have calculated parallax or used standard candles or red-shifts etc. – Rex Kerr Aug 27 '14 at 20:58
  • other notes: a) Chalmers discusses why vitalism is not a good analogy to Qualia in his Facing Up paper. b) (a naive question) why don't we consider the randomness and the absurd aspects of elementary particles the characteristics of an underlying non-computable phenomenon? c) doesn't the claim that a Turing machine may be conscious in the fullest sense mean that a string of bits may be conscious in the fullest sense? – nir Aug 27 '14 at 21:17
  • also, regarding the intuition of people: while some theoretical physicists and philosophers may theorize that the universe is information, or that there is no distinction between a phenomenon and its simulation, I would expect ordinary people to distinguish by intuition between a phenomenon and its simulation or representation, just as you distinguish between a person and a photograph of that person. – nir Aug 27 '14 at 21:22
  • @nir - Chalmers' argument is: "Given any such [physical] process, it is conceptually coherent that it could be instantiated in the absence of experience. It follows that no mere account of the physical process will tell us why experience arises." That is, the whole argument relies upon a hunch that there ought to be an identical physical situation where there is no consciousness (that is, it's begging the question). That this passes as serious philosophical argumentation is embarrassing, especially after he's already admitted that the "fifth and reasonable strategy" could work if continued. – Rex Kerr Aug 27 '14 at 21:28
  • @nir - As to the rest, pseudo-random number generators produce probability distributions indistinguishable from true randomness; the string of bits would not be conscious in the same sense we are but might be in some sense; and people are plenty familiar enough with machines that do amazing things that if physicists say, "Well, everything's actually a machine," it's not so terribly hard to accept. – Rex Kerr Aug 27 '14 at 21:31
  • pseudo-random numbers may be used to approximate a lot of complicated things, for example the behaviour of a crowd of people; nevertheless, isn't the randomness of an elementary particle the possible manifestation of a non-computable phenomenon? – nir Aug 27 '14 at 21:48
  • my reference to Chalmers was to his short argument on the analogy to vitalism; I don't think it is a good idea to quote him out of context on other things. – nir Aug 27 '14 at 21:55
  • If a Turing machine can be conscious in the fullest sense, and since a TM consists of a tape, head, state register and a (small, in the universal case) transition table, the question arises: what is fully conscious in that combination? The most complex thing, and where the information presumably is, is the tape; but that is just a string of symbols. We can reproduce the changes on tape by recording and concatenating all the versions of the tape content into a giant string; don't you expect that string to be fully conscious? why not? – nir Aug 27 '14 at 22:12
  • @nir - True randomness is not predictable but that does not mean it can't be implemented by a pseudorandom algorithm with a state that you can't access. Chalmers was arguing why vitalism was different, that is, that it was about mechanism; but his argument about consciousness boils down to "they haven't found a mechanism so I am free to imagine that there is not one". The quote was thus directly in context. And as to what in a TM can be conscious--it would be an emergent property, of course. If you remove the time dimension and place it in space instead, it won't be conscious over time. – Rex Kerr Aug 27 '14 at 22:24
  • record all the string versions (and optionally the state register values) produced by the Turing machine, and use another machine to print the strings on tape, one after the other; is consciousness manifested now? – nir Aug 27 '14 at 22:27
  • @nir - That's pretty hard to answer without knowing what "manifested" actually means, and what the emergent property actually is, if there is one, isn't it? "If you don't have a fully-worked-out answer already, there isn't one" is just an argument from ignorance. When dealing with stupendously complex objects with many amazing and baffling properties, you have to fall back on what general principles you understand. If there is a reason why consciousness is not subject to the sorites paradox, then we can get somewhere. Otherwise it's just hunches flying in the face of empirical evidence. – Rex Kerr Aug 27 '14 at 22:45
  • is there anything in your inner experience which cannot in principle be reproduced by that series of strings printed one after the other on tape as I described? – nir Aug 27 '14 at 22:52
  • @nir - There is not anything that cannot in principle be reproduced. It is a weird sort of existence from our gedanken-privileged position of inspecting the string from the outside. From the inside, nothing would seem strange at all (by postulate). The intuition that this is too weird to be what is going on is equivalent to the lack of intuition for just how weirdly powerful Turing computability is. But this isn't very enlightening. It just invites errors of perspective: here outside the string, the string doesn't look conscious, so the computed entity isn't, on its own terms, as much as we are. – Rex Kerr Aug 28 '14 at 02:10
  • @nir A turing machine isn't just a finite sequence of strings; it's a dynamic machine that will perform a computation on an input string in a manner analogous to a human answering a question. Of course, it has finitely many states and a finite alphabet, so its transition function can be encoded by an integer, but that's like encoding your brain as an enormous directed, weighted graph and then encoding that with an integer and then calling that integer "nir." Clearly there's some temporality and extensionality involved in consciousness. – Tim kinsella Aug 29 '14 at 21:34
  • @nir Also, I think some of your skepticism about conscious Turing machines comes from a deceptive choice of imagery in a way similar to Searle's "Chinese room" thought experiment. Here are some responses to the deceptive imagery of the chinese room which, I think, apply to your thought experiment about the Phenakistoscope as well. http://en.wikipedia.org/wiki/Chinese_room#Speed_and_complexity:_appeals_to_intuition – Tim kinsella Aug 29 '14 at 21:39
  • @Timkinsella, the number of states required for a universal TM is negligible and the same states would be used for simulating a calculator or a retro ATARI gaming machine; a TM is deterministic, and its entire run (say 10^27 subsequent steps) is entirely determined from the initial configuration of the tape; therefore please explain why a simulation of a brain by a TM would be different from a later replay of the distinct tape strings by a recording machine, in terms of being conscious in the fullest sense, as Rex, Calvin and Andy believe. – nir Aug 29 '14 at 21:59
  • @nir This will take a second to digest, but let me start by saying that a universal Turing machine doesn't exhibit human-like intelligence- it is merely capable of simulating an intelligent TM if you managed to write that intelligent machine's code on the universal TM's tape. – Tim kinsella Aug 29 '14 at 22:03
  • @nir Ok, what do you think of this analogy?: The "log" of a TM computation is to the TM that carried out the computation as a written reply (this one, say) is to the person who authored it. Btw, this is a really great post :) It's really making me blow the dust off some personal beliefs I haven't examined in far too long. – Tim kinsella Aug 29 '14 at 22:07
  • @Timkinsella, I do not understand how your analogy explains the difference between the run of a TM and the replay of the distinct tape states; but at least it does seem as if you see a difficulty that you are trying to resolve in the idea that a TM can be conscious in the fullest sense. to which "camp" do you belong? :) – nir Aug 29 '14 at 22:11
  • @nir I'm in the strong AI camp, but I'm the first to admit that the notion of conscious, feeling machines is incredibly counterintuitive. – Tim kinsella Aug 29 '14 at 22:18
  • @nir - You are taking into account that the Turing Machine has all sensory inputs encoded somewhere on its tape, right? So it's not like you can take a static log of one TM run and let it loose in the world somewhere new and get consciousness-in-that-world. You would just have a record of consciousness-in-the-world-the-TM-entity-was-presented-with. – Rex Kerr Aug 30 '14 at 18:37
  • @RexKerr, if you accept that the brain and anything in the universe may be simulated by a Turing machine, then you may have a Turing machine simulate an isolated system which consists of a living room with two people in it having a chat; the entire run of the TM is determined by the initial configuration of the tape; and the state of the tape in each step will include information corresponding to what the two people see and hear; in particular you cannot practically know what they will chat about until they do; it may even be the case that they will realize some amazing truth about the cosmos. – nir Aug 31 '14 at 15:55
  • @nir - Of course. Do you see that as a problem? – Rex Kerr Aug 31 '14 at 20:27
  • @RexKerr, I meant, in what sense should one "let loose" a TM or a recording of a TM "in the world somewhere" if the TM may be the entire world? – nir Aug 31 '14 at 20:52
  • @RexKerr, BTW, Chalmers would say there is a difference in causality between a computation and its recording, as an explanation for why the first may be conscious and the later may not. – nir Sep 01 '14 at 13:10
  • @nir - I still can't tell if you think there's a problem. I basically agree with what you've said in the last three comments. – Rex Kerr Sep 02 '14 at 20:40
  • @nir, you asked " b) (a naive question) why don't we consider the randomness and the absurd aspects of elementary particles the characteristics of an underlying non-computable phenomena?", I wonder what precisely you meant by "randomness and the absurd aspects". I'm a theoretical physicist who studied elementary particles for a large portion of my professional career, and I haven't seen a single Turing-incomputable problem within the scope of current elementary particle physics. – Jia Yiyang Mar 01 '19 at 08:28
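
The construction debated in this thread (run a Turing machine once, log every tape configuration, then replay the log without the machine) is easy to make concrete. A minimal sketch, in which the rule table is an illustrative binary increment rather than anyone's model of a brain:

    # Minimal one-tape Turing machine that logs every tape configuration.
    # The whole run is fixed by the rules and the initial tape, so the log
    # can later be "replayed" line by line with no head or rules at all.
    def run_tm(rules, tape, head, state="start", blank="_", max_steps=100):
        cells = dict(enumerate(tape))   # sparse tape: position -> symbol
        log = []
        for _ in range(max_steps):
            log.append("".join(cells.get(i, blank)
                               for i in range(min(cells), max(cells) + 1)))
            if state == "halt":
                break
            state, write, move = rules[(state, cells.get(head, blank))]
            cells[head] = write
            head += 1 if move == "R" else -1
        return log

    # Illustrative rules: increment a binary number, head starting at the
    # rightmost bit.  Format: (state, read) -> (new state, write, move).
    rules = {
        ("start", "1"): ("start", "0", "L"),   # flip 1 -> 0, carry left
        ("start", "0"): ("halt",  "1", "L"),   # absorb the carry
        ("start", "_"): ("halt",  "1", "L"),   # carry past the leftmost bit
    }

    for configuration in run_tm(rules, "011", head=2):
        print(configuration)   # 011 -> 010 -> 000 -> 100  (3 + 1 = 4)

Whether printing the log is as conscious as running the loop that produced it is exactly the point in dispute; Chalmers' appeal to causality, mentioned above, is an attempt to locate the difference.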

As a former professional computer software engineer, physicist, and "thinker" - but perhaps not philosopher - I feel well placed to answer the core of your question:

"...insist there is nothing in their inner experience which cannot be simulated by a Turing machine."

And the answer is simple. These are educated and often pragmatic people who will heuristically and intuitively expect justification for an argument that doesn't match their experience or the progress in science, mathematics, and computation over the last few decades. We have not found any system that cannot, at least to a good approximation, be simulated using Turing machines, so why should there be any exception? If consciousness etc. are emergent properties of complex dynamic systems, then, if you simulate those complex dynamic systems, the same characteristics will emerge - based on the evidence of other such simulations, e.g. weather forecasting.

I suppose this is also an intuitive appeal to Occam's Razor (http://en.m.wikipedia.org/wiki/Occam's_razor). This will be natural to software engineers, as it's highly applicable to good software design.

Andy Boura
  • Regarding intuitions on the nature of simulation and weather forecasting, this reminds me of Searle who said "no one supposes that a computer simulation of a storm will leave us all wet" – nir Aug 28 '14 at 17:12
  • What is your opinion, btw? Will a Turing machine running the perfect simulation of a human brain be conscious in the fullest sense, as Dennett put it? – nir Aug 28 '14 at 17:18
  • I've not read Dennett; however, my belief is that it is an emergent property, so yes, I believe that if you simulate the underlying systems the rest will follow. And a virtual storm could leave a virtual intelligence wet ;) – Andy Boura Aug 28 '14 at 17:23
  • It seems that if you accept that it is possible in principle to simulate the brain with a Turing machine, and that such a TM would be conscious in the fullest sense (as we are), you may end up concluding that we are just as conscious as a (giant) piece of paper with a string of symbols written on it (see my exchange of comments with Rex Kerr). Today I talked with two people who admitted that such a piece of paper would be as conscious as they are, and if I correctly understood Rex and Calvin (in the comments to their answers), they believe that too. Can you see a problem there? – nir Aug 28 '14 at 20:47
  • @nir An interesting exchange above. I didn't follow some of the references in that discussion but I believe I generally agree with Rex. A couple of key points though... the paper is like a "paused brain"; is someone brain-dead conscious? All their neurons etc. are still there. It is the act of execution that causes the emergent consciousness to occur. Also, the paper must be capable of storing and amending state if it is not to be permanently locked in an instant. – Andy Boura Aug 28 '14 at 21:20
  • @nir Further to your argument on whacky particles as evidence of non-computability - I've never heard of anything that couldn't be simulated well enough to convince an observer who only had the output... additionally, pseudorandom isn't an issue in a chaotic system. If you were really bothered, there are true random sources available to computers (static and radioactive decay spring to mind). – Andy Boura Aug 28 '14 at 21:25
  • I proposed to record each different state of the string in a new line on a paper; later you may replay the lines without the TM head similar to a Phenakistoscope; where does this proposal go wrong? – nir Aug 28 '14 at 21:39
  • @nir Yes, that ought to work. Remarkable that the replay would believe itself to have free will unless we convince it otherwise via external input ;) – Andy Boura Aug 28 '14 at 21:52
  • a) What if two different points in the giant phenakistoscope are being observed at the same time? b) What if the phenakistoscope stands still while the observer is moving relative to it? – nir Aug 29 '14 at 05:57
  • @nir You are an external observer - what if you watched a human at two points in time? Remember the emergent property is experienced within the medium not through external inspection. – Andy Boura Aug 29 '14 at 08:19
  • @nir Added reference to Occam's Razor... – Andy Boura Aug 29 '14 at 20:19
  • There are different kinds of approximations: approximating a number by a sequence of approximations will eventually get you to that number; approximating a landscape by a painting will never get you the landscape - it will always remain an imitation. This, Turing recognised, but he argued, pragmatically, that it makes no difference. Ontologically, however, the difference remains. – Mozibur Ullah Sep 10 '18 at 03:20

I am a software developer and I actually tried the same thing as the poster. I found that most people find it hard to think about this problem, not just engineering types.

I see the same thing happening in the answers posted here. Most people are utterly and completely stuck in a specific kind of thinking. Qualia are hard, and they basically just ignore the problem, saying stuff like "I can reduce your behaviour to a machine and I wouldn't know the difference". This sort of thinking is totally oblivious to the real issues.

The qualia are so close, it is near impossible to see them. Once you do, you understand the problem and you immediately grasp the impossibility of it all. For some reason this seems to be an all-or-nothing kind of deal.

Peter
  • I think I agree with you. I don't think qualia can be reduced to a program running on a Turing machine. I would reference Searle's Chinese Room Argument, but I wonder if you have any other references I could check. – Frank Hubeny Mar 29 '18 at 13:43
  • It occurred to me that emergent properties, if they exist, are caused by the observer, not the object. An ant doesn't see a colony, and if there weren't people to say, "look-- an ant colony" then there would be no ant colonies, just ants. – elliot svensson Sep 11 '18 at 14:13

I'm not sure what the strong arguments in favor of conscious Turing machines are, but I'm aware of at least two reasons people deny the possibility: the technical limitation of simulating (specifically) the human brain, and the mind-body problem. The mind-body problem has all the appropriate literature, so I will not attempt to repeat it, except to say that whatever a Turing machine could do is done by "body".

In my capacity as a mechanical engineer, I have from time to time used commercial software that implements computation to simulate physical systems, such as finite element analysis of the stress at each element of a bracket or axle (etc.) resulting from applied forces. The computer writes equations at each element with the goal of solving for an equality and uses iterative methods to arrive at a solution, since closed-form solutions (as with differential equations) are so rarely practical. Even so, the solution is fraught with interpretation, as when an internal 90-degree corner has a singularity in stress, the mathematical result of "stress concentrations."

If memory serves, a typical analysis that I performed a few years ago involved about 200,000 elements and my 2005-era computer could finish it in less than an hour. Using Moore's law and guessing that a transistor increases computation speed linearly, a computer today could do the same thing in between 1 second and 60 seconds.

Increasing the number of elements (or "degrees of freedom") is desirable because it increases the precision of the output, but at an exponentially increasing computational cost, as demonstrated here.

What would be necessary to simulate the human brain? The human brain has 85-86 billion neurons. If we take "Computational Time for Forward Elimination Steps of Naive Gaussian Elimination on a Square Matrix" as a reference, there is reasonably about a 10^10 increase in computation time for each ten-fold increase in the number of elements in our computation, so when we add six factors of ten to my 200,000-element computation, we need sixty factors of ten more computation time.

However, my physical system computes elements that are only related to four or six nearby elements. Each neuron has on average approximately 7,000 synaptic connections. At a minimum, this can be multiplied by my element count for this purpose (of guesstimating a reasonable minimum computation time to simulate brain activity) so we need another thirty factors of ten on top of our computation.

My computation was linear, in that a given input in force causes a proportional response in stress, deflection, etc. Neurons are non-linear, so we must somehow account for this increase in computation time due to complexity. My computation was static, meaning that time-dependent aspects of the computation weren't part of the equation. Neuron activity is highly time-dependent (see this again).

Without addressing non-linearity and time-dependent issues, a brain computation would reasonably take a computer today between 1 and 60 seconds multiplied by ten to the ninetieth power. To reduce the computation time back to under a minute, Moore's law would need to work for the next 660 years.

My computation was for a single input and a single output. The human brain is constantly receiving inputs and producing outputs at an unknown or (more likely) stochastic rate. If this were 100 per second, then that tacks easily ten more years on our Moore's law estimate, placing a working brain simulation nicely in the year 2780.

Nobody believes that Moore's law will actually continue working for the next 670 years. Will quantum computers save the day? Google's quantum computer is 100,000,000 times faster. But this only shaves 8 factors of ten off of our 90, resulting in only 82 factors of ten for the brain computation and 550 years for Moore's law to work: computer brains by 2570.
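
For anyone who wants to check the arithmetic, the whole estimate reduces to a couple of lines of Python; the two-year doubling period is an assumption, and the round figures above imply something close to it:

    import math

    # Years of Moore's-law doubling needed to gain a 10**orders speedup,
    # assuming speed doubles every `doubling_years` years.
    def moores_law_years(orders, doubling_years=2.0):
        return orders * math.log2(10) * doubling_years

    print(round(moores_law_years(90)))      # ~598 years; the ~660 above implies
                                            # a slightly slower doubling period
    print(round(moores_law_years(90 - 8)))  # ~545 years with a 1e8x quantum
                                            # head start; compare ~550 above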

elliot svensson

Turing himself, in the paper in which he proposed the Turing game, side-stepped this question. He felt that this was too difficult a problem to tackle. The essential notion of the Turing game is that imitation is enough.

Computation is a very visible artifact and pervasive feature of our technical civilisation, and is also fairly new in its ubiquity, though it has been theorised since Charles Babbage's time. Hence people take a leap and suppose that the imitation of consciousness and consciousness itself are the same thing - when in fact, they are not.

Mozibur Ullah
  • I would like to note that Turing's paper is about thinking, not about consciousness. He poses the question "can machines think?", not "can machines be fully conscious?"; these are different questions. Indeed, he alludes to the problem of consciousness in the second part of the paper, where he contemplates possible attacks on his position, and he clearly considers consciousness to be a separate problem, not the subject of that paper. – nir Sep 05 '18 at 03:52
  • @nir: Nevertheless consciousness and thinking are associated and not entirely separate problems. – Mozibur Ullah Sep 10 '18 at 02:51
  • Just as seeing and consciousness are associated. But no one links the problem of seeing machines with the problem of conscious machines. – nir Sep 10 '18 at 06:05
  • @nir: I tend to think 'seeing' is connected to consciousness. A video camera is a machine that records visuals but in no sense is it a seeing machine. – Mozibur Ullah Sep 10 '18 at 07:39
  • naturally, but please don't let a good argument force you to cling to the wrong position. No one would have taken much interest in Turing's paper had it been about whether machines can see. Actually, an imitation game in the context of seeing, which is all about measurable performance, such as detecting objects and colors, sharpens the distinction between the problem of seeing and thinking on the one hand, and the problem of the mysterious phenomenon of consciousness on the other. Feel free to concede... – nir Sep 10 '18 at 07:53
  • @nir: How do you know that? Plenty of people are interested in what Turing wrote. Had he written a paper about seeing machines I'm pretty sure there would be lots of interest. To get back to your original point, Turing's paper was not about 'thinking' but about the imitation of thinking. And he was quite explicit about that. – Mozibur Ullah Sep 10 '18 at 08:01
  • @nir: would you be 'grateful' if I conceded? – Mozibur Ullah Sep 10 '18 at 08:02
  • @nir: Which - can I remind you - is what you told me years ago when I first came on this site. So let me ask you a straight question: what were you telling me to be 'grateful' for, and why? – Mozibur Ullah Sep 10 '18 at 08:38
  • I don't remember what you are talking about. can you add a link? – nir Sep 10 '18 at 09:15
  • @nir: Whereas I remember what you wrote, I don't remember which page it was on. I was rather shocked at the time thinking when do people shout at others to be grateful. I should have confronted you at the time. Next time, I will know better. Lesson learnt. – Mozibur Ullah Sep 10 '18 at 09:24
  • I don't recall such an incident. I would like to believe that either you confuse me with someone else, or else you misunderstood me. you happen to be one of the users on this site that I enjoy reading. In fact I have an obscure recollection of having told you just that in the past. – nir Sep 10 '18 at 09:57
  • @nir: I do remember that. I recall you also used the word 'respect' ; this is why I was so shocked by your bullying tactics - at the time; less so now, since I am more acclimatised to how people interact on social media - if one can call it an interaction. – Mozibur Ullah Sep 10 '18 at 10:00
  • Mozibur, as far as I can see, all I did to deserve being called a bully publicly by you is to disagree with you and to propose that you concede the point above. Unless I am missing something, I would tend to think that your reaction is very exaggerated. But I admit that I am used to people losing their marbles when arguing with me, and I have always suspected it has to do with the human nature of disliking being proven wrong. What have I said in this discussion that can be considered so offensive? – nir Sep 10 '18 at 10:11
  • @nir: I was referring to the previous incident. If you can 'obscurely recollect' that I was one of the users that you 'enjoyed reading' then I'm pretty sure you can recollect that previous incident. I can't say I'm surprised at people losing their marbles arguing with you - I might be inclined to join them. – Mozibur Ullah Sep 10 '18 at 10:17
  • You certainly appear to have joined that group. Indeed, I recall I told you last time that I appreciate your activity on this site, because you were offended by something I had written, but I can assure you that my compliment at the time was completely candid and not some part of an evil bullying tactic. I searched for that incident but was not able to find it, so all we have is the present discussion. I therefore ask you again to go over it and consider whether it really justifies this shit-storm. If it does not, then you called me a bully for no good reason and you might consider apologizing. – nir Sep 10 '18 at 10:30
  • And btw, consider the option that my comments to you concerning thinking vs consciousness were correct. – nir Sep 10 '18 at 10:33
  • @nir: I wasn't offended by anything that you had written - then. Your compliment came out of the blue as well as your - for the lack of a better word - insult. Your first remark above is as fine as it goes, but like I said, the two are related and not separate. Its easy to go from what I wrote above to the imitation of thinking is not thinking itself. The distinction, in the context of what I'm trying to explain above, is not particularly important. My reasons for calling you a bully, I gave above. If you don't care to recall the incident, well, theres nothing I can do about that. – Mozibur Ullah Sep 10 '18 at 10:52
  • I have found the old exchange here. Indeed, I was rude at the time. It appears that after the exchange you deleted all of your comments so it is possible that it was a two way street and that my comments did not come out of the blue. (btw, I did not ask or demanded of you to be grateful or anything of the sort). anyway, I apologize if I offended you at the time. – nir Sep 10 '18 at 14:04

I find few people actually hold the position "there is nothing in their inner experience which cannot be reproduced/simulated by a Turing machine." There are indeed people who assert that all of reality is a Turing machine, in which case they obviously will hold that position, but I find few people find this position useful. Instead, what I see asserted is more along the lines of "there is nothing meaningful in their inner experience which cannot be reproduced/simulated by a Turing machine." This statement holds up better, because it avoids making great statements about reality. Instead it makes statements about that which is meaningful, which itself is a rabbit hole because "meaning" has similar characteristics to "qualia" in debates such as this.

So far I failed to make even one of them realise that there is something in their inner experience that cannot conceivably be reproduced/simulated by a Turing machine.

What if I told you that there is an issue with your argument? You state that there is something that cannot be reproduced/simulated by a Turing machine. This is a strong statement. The weaker statement that there may be something that cannot be reproduced is easier to defend. As it turns out, we typically find that there is no proof either way. Both the position that there is something that can't be simulated and that there is nothing that can't be simulated are remarkably hard to prove.*

Indeed your argument has a fascinating twist. To prove that there must be some element which is not simulatable, we typically must assign it a symbol. The instant we assign it a symbol, the Turing machine can use that symbol and do operations on it. Your proof must contain a description of those operations, which puts it right in the realm of things that Turing machines are good at doing. Of course it may not be possible to determine if these particular operations halt.

If you are good at Math, you may be able to trap them. If they can define consciousness sufficiently, you may be able to demonstrate that the definitions of their words cause their description of reality to run afoul of Gödel's incompleteness theorems or Tarski's undefinability theorem. These are my preferred go-to arguments for cracking these kinds of beliefs, but they do indeed depend upon the other person's definitions being mappable into the domain and language used by Gödel or Tarski.

Alternately, you may find Guy Steele's Growing a Language to be a fascinating tool. It's a 53 minute long speech from 1998, and it's quite brilliant. For those who can't sit still or want to check his self-consistency, a transcript is available. In that presentation, Guy Steele builds up a language from the ground up. He shows how you can go from simple things to all the complicated beautiful things languages have to offer in a way which a Turing fanatic would appreciate. You will find quickly that he is able to describe all of the fancy qualia that one might ever want to describe using this approach, but there's a catch. I'll hide it behind a spoiler text in case you want to actually watch the process unfold:

Guy states that he will assume that monosyllabic words have an understood definition, but he must define all polysyllabic words using only words that have definitions at that point.

From this we can see where the interesting bits are. If you want to unsettle the Turing fanatic argument, don't look for the complex, look for the simple. Look for the simple things which are assumed by the other to be self evident, and simply ask them to be consistent and provide a definition for them.

I love pairing Guy Steele's speech with a quote by a character in the Robert Heinlein book, Stranger in a Strange Land:

Short human words were never like a short Martian word — such as "grok" which forever meant exactly the same thing. Short human words were like trying to lift water with a knife.

And [God] had been a very short word.


* Per the discussion in comments with Mozibur Ullah, I am apparently using the word "prove" wrong here. Apparently simply assuming one of these statements is true, axiomatically, qualifies as a proof in the formal sense. Here I'm using the word proof to describe a proof that consists of more than simply assuming the conclusion and writing "QED." I'm using "proof" in a sense of something which might actually convince someone that your conclusion is true when they didn't already believe it to be true axiomatically themselves.

Cort Ammon
  • cort, you write: "Both the position that there is something that can't be simulated and that there is nothing that can't be simulated are remarkably hard to prove." I disagree. The position that there is an aspect of experience which cannot arise in a computation cannot be the subject of any proof, and yet it is the first certainty. It has nothing to do with Gödel, or with being smart about it. It is not about complexity or definitions. – nir Apr 03 '18 at 20:52
  • @nir You describe a "certainty," but a certainty is not a proof. – Cort Ammon Apr 03 '18 at 21:19
  • Yet proofs must start with clear and distinct ideas - i.e. axioms, that is to say, certainties. – Mozibur Ullah Sep 10 '18 at 03:09
  • @MoziburUllah Oh yes. Axioms are not proofs either. – Cort Ammon Sep 10 '18 at 03:31
  • @cort ammon: Axioms are proofs - formally speaking - if you are into that stuff: they are proofs of themselves. – Mozibur Ullah Sep 10 '18 at 03:37
  • @moziburullah fascinating. I've never heard of someone considering an axiom to be a proof. I'd challenge you on it, but it's kind of clever. All you'd have to do is state that "an axiom is a proof" is an axiom you assume, and it would be proven, QED – Cort Ammon Sep 10 '18 at 03:47
  • Check out formal logic resources - I'm not going to dig them out for you. Alternatively ask on Math.SE. An 'axiom is a proof' is a definition in formal logic - and not an axiom. In philosophical logic, you can dispute this - but then there you can dispute what counts as proof. – Mozibur Ullah Sep 10 '18 at 03:56
  • @moziburullah is there a word or phrase I should use for proofs which do not simply consist of an axiom? I'm going to have to be more precise when dealing with door to door evangelists who claim to have proof. – Cort Ammon Sep 10 '18 at 04:04
  • @CortAmmon: I usually tell them that I'm not interested - which generally satisfies them - and they go away. – Mozibur Ullah Sep 10 '18 at 04:08
  • @MoziburUllah I know you said you wouldn't dig out formal resources, but can you dig one out? I went to go research this, and found several dozen resources which all considered a proof and an axiom to be different (in particular, "an axiom is a statement that is assumed true without proof" seems to be by far the most popular definition), and have not found a single one which claims that an axiom is a proof. – Cort Ammon Sep 10 '18 at 05:32
  • Which is what I referred to above; nevertheless when understood in formal logic what I wrote is correct; try adding 'formal logic' or 'model theory' to your search term; or alternatively, ask - like I also suggested - in Math.SE – Mozibur Ullah Sep 10 '18 at 07:42
  • @MoziburUllah Hey there! I'm a Math SE native who found my way around here thanks to Cort's post. I'd like an elaboration of your notion of "axiom is a proof" in formal logic theory, since I've asked quite a few reputable mathematicians and modern philosophers I know, and all repudiated the idea before I even suggested it. – Rushabh Mehta Sep 10 '18 at 16:32
  • @MoziburUllah In other words, I would like a concrete notion of what you mean by "proof" in order to allow such a strong statement. – Rushabh Mehta Sep 10 '18 at 16:33

I am curious as to why so often non-philosophers, but nevertheless intelligent people who are supposed to know something about the nature of computation, insist there is nothing in their inner experience which cannot be simulated by a Turing machine.

I'm no expert at all but I do believe in qualia anyway.

I suggest it is because of a fear of difficult problems. Life is easier to model if life doesn't have qualia. In a similar way psychologists often act with physics envy. They are dazzled by models that are highly mathematical and so they do away with the more troubling facets of psychology.

And so we get greedy reductionism (though, of course, Dennett wouldn't use the term here).

Frank Hubeny
  • As philosophers, we strive to explain complex ideas with rational argument. The mind computes; TMs explain computation. There is no fear, only synthesis. – Calvin Aug 25 '14 at 19:59
  • shoddy synthesis [which you admit is possible] is a sign of, if not fear motivation, then still an over-quickness IMO – Aug 25 '14 at 20:03
  • @user3293056, thanks for the reference; I can see how greedy reductionism may explain positions taken by philosophers; however, with "laypersons" like me and people I talk with, it seems more as if they really don't see the problem. – nir Aug 25 '14 at 20:11
  • Naturally imperfect synthesis is possible. But I don't think a completely undefined Qualia is evidence to overturn the Turing Hypothesis. – Calvin Aug 25 '14 at 22:13
  • @nir I actually probably shouldn't be responding to this issue because I absolutely don't see any valid question here, but I'm guessing that will be a common trait of people that don't take Qualia on faith, and if nothing else I hope my presence serves to foster dialogue. – Calvin Aug 25 '14 at 22:14
  • this is unnecessary, Calvin; not everyone who believes in qualia does so as an article of faith. Do they? – Aug 25 '14 at 22:16
  • so here you are being both quick and hostile. classic fight or flight reaction IMO –  Aug 25 '14 at 22:17
  • double post, sorry –  Aug 25 '14 at 22:18
  • @user3293056 I'm not really trying to be confrontational here, but consider this. Science seeks to explain natural events with natural causes. The Turing hypothesis does this. Beyond the bounds of science, there is no objective argument for anything really, just philosophical ones. To take nothing away from philosophical arguments, they fundamentally rely on unprovable premises (thanks Gödel! - who also produced a TM equivalent by the way) that are necessarily taken on faith. Believing in Qualia or not believing in it are basically religions. – Calvin Aug 27 '14 at 14:49
  • this isn't the place for such a discussion - so yes, you do appear confrontational. –  Aug 27 '14 at 14:50
  • if you actually do believe that you can prove qualia is a faith, then "answer" my recent question to that effect: http://philosophy.stackexchange.com/questions/15472/how-have-philosophers-tried-to-argue-for-qualia – Aug 27 '14 at 14:57
  • @Calvin: The assumption that all natural events have natural causes is also an unprovable premise that is necessarily taken on faith. Naturalists aren't exempt either. – Roy Tinker Aug 28 '14 at 19:14
  • @RoyTinker It is an assumption, but it's also just the definition of science. I'm merely remarking that Qualia is beyond the bounds of science. – Calvin Aug 28 '14 at 19:16

Any physical system can be simulated by a universal computer. For some purposes the computer in question would have to be a quantum computer. However, the human brain is a wet, warm system in contact with the environment, and on the timescale on which thought takes place it won't exhibit any distinctively quantum mechanical effects like interference or entanglement. So the human brain can be simulated by a classical computer, and the Turing machine can simulate any classical computer.

The sense in which the Turing machine can simulate a system is not just that the initial and final states of the computation are the same. The computer can be set up in such a way that there is a mapping between the states of the computer and the way information flows within the system while it is computing the result. For any pattern of information flow in the brain while it is instantiating consciousness, that pattern could be instantiated in some function of the Turing machine's tape.

For reasons that I think Dennett explains, it might not be a good idea to simulate the brain in that way. The brain's architecture looks nothing like a Turing machine, and there is no particular reason to translate what it is doing so that it can be run by the Turing machine if some other architecture would work faster and use less memory, e.g. a network of computational gates.
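
As a toy illustration of that state-for-state mapping, here is a minimal sketch of simulating a small network of computational gates; the three-gate network and its wiring are invented purely for illustration:

    # Toy state-for-state simulation: each pass of the loop corresponds to
    # one synchronous update of a made-up three-gate network.
    GATES = {
        # gate name: (function, names of its inputs)
        "a": (lambda x, y: x and y, ("in1", "in2")),
        "b": (lambda x: not x,      ("a",)),
        "c": (lambda x, y: x or y,  ("b", "in1")),
    }

    def step(state, inputs):
        """One synchronous tick: every gate reads the previous state."""
        env = {**state, **inputs}
        return {name: fn(*(env[w] for w in wires))
                for name, (fn, wires) in GATES.items()}

    state = {"a": False, "b": False, "c": False}
    for tick in range(3):
        state = step(state, {"in1": True, "in2": True})
        print(tick, state)   # the simulator's variables track the network's
                             # gate values one for one: that is the mapping

The same trick scales, in principle, to any network of gates; the point is only that the simulating program's state and the simulated system's state stay in step, not that a three-gate network thinks.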

You say

So far I failed to make even one of them realise that there is something in their inner experience that cannot conceivably be simulated by a Turing machine.

I'm not entirely clear on why anyone would say that something that has not been explained (consciousness) cannot be simulated by a computer. I am not aware of any clear statement of what problem is solved by postulating qualia in the sense you gave above. If you haven't explained consciousness and don't have a clear statement of the problem, then you can't have an argument for saying that consciousness can't be simulated. You might want to read "Gödel, Escher, Bach: an Eternal Golden Braid" by Douglas Hofstadter if you haven't already.

It is true that no explanation of how consciousness is instantiated in the brain has been given. It is also true that no explanation of what sort of program could simulate consciousness has been given. I think it is also true that both Dennett and his critics have some bad philosophical misconceptions that may obstruct their efforts. For an article that describes some of the problems, see

http://aeon.co/magazine/technology/david-deutsch-artificial-intelligence/.

Dennett and other people working in this area also have problematic moral ideas that might pose a problem; see "The Meaning of Mind: Language, Morality and Neuroscience" by Thomas Szasz.

alanf
  • a) there is evidence of quantum effects in (warm and wet) plants, and this possibly means that what you say about the brain is wrong. – nir Sep 01 '14 at 18:30
  • b) while it is indeed my position, I did not write that consciousness cannot be simulated; what I wrote is that I find it very surprising that most people I talk with, do not find anything in their inner experience which may not be simulated by a Turing machine; Calvin, Rex and Andy write in the comments to their answers here that a string of symbols may be conscious in the fullest sense. I am surprised that not only this position is so common, but that people refuse to see any difficulty in it. – nir Sep 01 '14 at 19:00
  • About (a), that describes events in single molecules, not spread over a region of many cubic centimetres taking place over something like 0.1 seconds. A single molecule can't be wet, since that is a description of a bulk property of many molecules. I'm not convinced you can assign a single molecule a temperature either, although if you have an explanation of thermal equilibration that implies that a single molecule can have a temperature, that would be interesting. – alanf Sep 02 '14 at 08:51
  • Since a Turing machine can (albeit often inefficiently) simulate anything that can be simulated, your position implies that consciousness can't be simulated. Also, what's happening in your brain is basically that there are loads of chemical and electrical switches whose state could be interpreted as similar to the symbols that can be written on a paper tape. Those symbols are being changed according to the relevant laws of physics, all of which instantiate the same set of computable functions as the Turing machine, http://www.cs.berkeley.edu/~christos/classics/Deutsch_quantum_theory.pdf. – alanf Sep 02 '14 at 09:01
0

Try this. The Turing Machine argument supposes no difference between a programme run for a long time on a low-powered machine and one run for a shorter time on a high-powered machine. But that doesn't fit our experience of minds. Slugs aren't just slower than humans; it is a qualitative as well as a quantitative difference.
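
The speed-indifference attributed here to the Turing machine picture is easy to demonstrate in code. In the sketch below (the Collatz step function is an arbitrary stand-in program), the computed trace is identical whatever wall-clock delay is inserted between steps; this is the invariance that the rest of the answer argues does not fit minds:

    import time

    def run(step, state, steps, delay=0.0):
        """Run a step function repeatedly; the resulting trace depends only
        on the program, never on how much wall-clock time each step takes."""
        trace = [state]
        for _ in range(steps):
            time.sleep(delay)              # slow hardware vs. fast hardware
            state = step(state)
            trace.append(state)
        return trace

    collatz = lambda n: n // 2 if n % 2 == 0 else 3 * n + 1

    # Same program, very different speeds, identical trace:
    assert run(collatz, 7, 10, delay=0.0) == run(collatz, 7, 10, delay=0.01)
    print(run(collatz, 7, 10))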

When we look at qualia, we are looking at an interaction between sensing and subjective mental experience. The subjective mental experience is continuous, relational, grounded in the whole process of genetic and social development.

Edited to add: This is an argument from https://en.m.wikipedia.org/wiki/Integrated_information_theory

CriglCragl
  • 21,494
  • 4
  • 27
  • 67
  • I believe you are wrong. Imagine a Turing machine simulating not just your mind, but also the environment in which you are embedded. How would you ever know if it is running slowly or quickly? How would you ever connect your experience of time to that of the "real" world in which the Turing machine resides? Take a look here: https://xkcd.com/505/ (not to mention that the entire idea of having any assumptions about that "real" world is absurd, including the idea that it should be compatible with our concepts of time or computability, but that is an aside) – nir Apr 06 '18 at 20:21
  • There are fundamental limits on energy and entropy, which place constraints on what complexity can be modelled. Yes, an 'exterior' universe could model a smaller structure like the whole Earth, but every time we 'reach out' and influence and are influenced by a wider layer of complexity, the demands on any layer modelling us become astronomically higher. For the mind, it is not only the world but its history, its evolution. In that XKCD, where do the food and entropy come from for that eternity? – CriglCragl Apr 07 '18 at 22:07
  • I am intrigued how you can wish to hold your view. It seems like you haven't understood the nature of being Turing complete. A read-write head is fundamentally irrelevant to this. Your objection is exactly what all computer scientists take up. What magical juice separates a computer from a brain in a complex environment? Only a matter of degree. I was trying to salvage some dignity for your argument by pointing out the size of that degree, but in the end only magic can separate minds and computers. – CriglCragl Apr 08 '18 at 01:52
  • I believe limits on energy and similar comments have nothing to do with this, since this is a philosophical argument about the (metaphysical) principle and not a physical argument. There is no need for you to salvage my argument, and I believe that I understand Turing machines well enough. And indeed what separates minds and mechanical devices is nothing short of utter magic. You can read more on my view here: https://philosophy.stackexchange.com/questions/48769/are-we-living-in-a-simulation-the-evidence/48790#48790 – nir Apr 08 '18 at 11:10
  • You are a meat chauvinist. You don't really have a reasoned view, saying 'reality is mysterious' is a cop out. Metaphysically you may be atoms, but yet you have mental states. So there will be 'spiritual machines'. – CriglCragl Apr 08 '18 at 11:39
  • I'm not a meat chauvinist, I don't need you to salvage any dignity from my arguments, I understand Turing well, and mysterianism is not a cop-out, so please avoid the condescending tone. I simply believe people wrongly believe reality in general and their minds in particular may be analyzed or simulated computationally. I have written more about it in other questions and answers, which you can find from my profile. – nir Apr 08 '18 at 13:45
-2

Well, as a computer scientist, to me your mind is a network of one-bit processors (neurons) wired together on a directed, weighted graph (nerves). I understand that this model is based on physicalism, but it can also be formulated completely mathematically, and I find that it predicts my actions accurately. I would argue that there is considerable evidence that our understanding of mathematics is philosophically credible.
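
Read charitably, the model described above is something like a McCulloch-Pitts network: one-bit units on a directed, weighted graph. A minimal sketch, with weights and thresholds that are purely illustrative (no claim of biological accuracy is intended):

    # One-bit units ("neurons") on a directed, weighted graph ("nerves").
    # Weights and thresholds are illustrative only.

    WEIGHTS = {          # edge (i, j) -> weight: unit i feeds unit j
        (0, 2): 0.7,
        (1, 2): 0.6,
        (2, 3): 1.0,
    }
    THRESHOLD = 0.5

    def step(activations):
        """Each unit fires (1) iff its weighted input exceeds the threshold."""
        incoming = [0.0] * len(activations)
        for (i, j), w in WEIGHTS.items():
            incoming[j] += w * activations[i]
        return [1 if x > THRESHOLD else 0 for x in incoming]

    state = [1, 1, 0, 0]            # units 0 and 1 fire initially
    for t in range(3):
        print(t, state)
        state = step(state)         # a perfectly computable update rule

Everything here is a finite, discrete update rule, which is why the model, whatever its biological shortcomings, is trivially simulable on a Turing machine.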

I'm entirely uncertain what about that I can't simulate on a Turing machine, or what else, if this is an incomplete explanation, couldn't be simulated.

In that sense, I believe the onus is on you to provide any reason at all that there would be any part of consciousness that isn't Turing-computable. I'd say that this requires an explicit definition of consciousness.

That is, I'm struggling to see any reasonable question here without a hard definition of Qualia, because any Qualia that I can think of, since my mind is Turing-computable, would necessarily also be Turing-computable.

I hope that makes sense.

Thank you.

==================================================================================

Edit: I thought I should produce a Turing machine.

Let us take as given that you have a finite lifespan.

Let us also take as given you can think a finite number of thoughts at any given point in time.

Let us also take as given that thoughts take non-zero time, so that time may be discretized.

Suppose then that a Goddess monitors your every thought and constructs a Turing machine that procedurally outputs your exact thoughts under the same circumstances. Note that this machine would have a finite number of states.

I'd consider that an existence proof that you are Turing-computable.
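
Under the three givens, the construction amounts to a finite lookup table. A toy sketch (the record of (circumstance, time) -> thought pairs is obviously hypothetical; the argument needs only that it is finite):

    # A toy version of the Goddess construction: given a *finite* record of
    # what was thought under which circumstances at which (discretized) time,
    # reproducing the thoughts is a lookup table -- trivially computable.
    # The record below is hypothetical.

    RECORD = {
        ("sees red", 0): "that is red",
        ("sees red", 1): "still red",
        ("hears bell", 2): "a bell",
    }

    def simulated_mind(circumstance, t):
        # Finite lifespan + discrete time => finitely many keys.
        return RECORD.get((circumstance, t), "<no recorded thought>")

    print(simulated_mind("sees red", 0))    # -> "that is red"

Whether such a lookup table would be conscious, rather than merely reproduce the outputs, is of course exactly what the comments below dispute.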

Calvin
  • 536
  • 3
  • 9
  • 1
    No, I don't think that works; behaviour is not the same as life - a puppet is not alive –  Aug 25 '14 at 18:24
  • 3
    Life was not listed as a pre-requisite for consciousness. – Calvin Aug 25 '14 at 18:32
  • 1
    That's interesting! But then again, why care about consciousness? –  Aug 25 '14 at 18:36
  • 4
    "your mind is a series of one bit processors (neurons) wired together on a direct weight graph" -- This is totally not how neurons and brains work. You should read up on the subject. – user4894 Aug 25 '14 at 19:34
  • @user3293056 I don't care about consciousness. I believe it's normative. – Calvin Aug 25 '14 at 19:48
  • @user4894 Necessarily simplified for posting here. The philosophical implications are, I believe, unharmed by this reduction. – Calvin Aug 25 '14 at 19:49
  • 2
    @Calvin Well, that's exactly the question, isn't it? If the mind is not a TM, then the philosophical implications of your reduction are profound and lead you to the wrong conclusion. So you are doing nothing more than assuming the thing you want to prove. – user4894 Aug 25 '14 at 19:52
  • 2
    The mind trivially isn't a TM. It does, however, reduce. Everything computable in nature is computable on a TM by the Turing Hypothesis, and I feel fairly confident resting my lemma on that hypothesis given its strength.

    One-bit processors on a directed graph are a very near approximation of the mind; likewise, its inputs and outputs all operate on integers. Any remaining complications are biological rather than philosophical matters, and certainly bring nothing to bear against the Turing Hypothesis.

    – Calvin Aug 25 '14 at 19:56
  • @Calvin, all I can say in response is that I've been contemplating consciousness and hacking with computers most of my life, that I have studied math and am a software developer, and that there is something dominating my inner conscious experience which I cannot possibly imagine being generated by a Turing machine that is switching between a few dozen internal states, moving its head left or right, reading and writing a handful of symbols on a tape; and yet I accept that you and many others don't see it, and I wonder what could explain that "blindness". – nir Aug 25 '14 at 21:45
  • @nir I would argue that if you can imagine it, then it can be simulated by a TM. After all, physically, brains almost certainly do reduce to Turing machines; even if there is something more to consciousness than just the physical aspect, if our brains can think of it, it's still a TM. I guess that's getting a little meta. – Calvin Aug 25 '14 at 22:11
  • Calvin: would they have human consciousness if they passed a Turing test? –  Aug 26 '14 at 03:18
  • @user3293056 If you defined consciousness as equivalent to passing a Turing test, then sure. But lacking a hard definition of consciousness, I find it highly normative and potentially toxic to identify it exclusively with humanity. This would, however, be a legitimate view to take provisionally. And TMs (well, computers, but it reduces) have had pretty high degrees of success with the Turing test in recent years. – Calvin Aug 26 '14 at 14:18
  • @nir your mind isn't the state machine part of the Turing machine, it's the very large (formally infinite) tape and its changes in content. I.e. the existence and content of the tape is what is important. – Dave Aug 27 '14 at 13:58
  • @dave, while the tape is not limited in length, its content is finite at any given moment; in fact, the entire history of all the different revisions of the content of that tape may be recorded as a giant string. Can that string be conscious in the fullest sense, as Dennett asserts? Maybe it is even hyper-conscious, as it contains the entire history of the Turing machine. "Where is my mind?" asked the Pixies; and this is what I ask you and the other Turing complete minds who read this. – nir Aug 27 '14 at 14:29
  • @Dave a TM needs both tape and state machine. The mind has adequate facilities to model both. Also, as the memory capacity of the mind has no hard upper limit, it is, effectively, unbounded. – Calvin Aug 27 '14 at 14:43
  • @nir I'm merely making the point that the complexity of a TM is formally unbounded due to its tape, and thus in considering TMs you need to consider the very large content of the tape as an integral part of the system; thus incredulity at the idea that mind reduces to just "a Turing machine that is switching between a few dozen states..." glosses over the key feature that makes a TM universal. Without a tape, a TM is just a finite state machine, which is a much less capable system. – Dave Aug 27 '14 at 14:43
  • @nir The TM is more than a string, it's also a state machine. Of course, a string can encode a state machine, so if you're asking if a string can be conscious in isolation, then yes: a string can pass the Turing test; in fact, programs that may be encoded as strings already have. "The mind is its own place, and in itself can make a heaven of hell, a hell of heaven..." – Calvin Aug 27 '14 at 14:44
  • @Calvin, I have no interest in the Turing test. I do not deny that a long enough string may theoretically encode all the physical data that we care to collect about a distant galaxy; or that it may be used to encode a simulation of 2^27 particles according to QM; or that someday you will have a hard time distinguishing between a real person and a Google-powered android companion which will inevitably also collect data about you to power focused ads; I am only interested in how it is possible that you think that a giant string can be said to be fully conscious, as Dennett puts it. – nir Aug 27 '14 at 14:53
  • 1
    @nir If you can encode a galaxy in a string, you can encode me in a string. I certainly seem to be fully conscious, to a reasonable degree of certainty. Then, to me, it makes no difference what method I'm encoded in. Perhaps we're all just simulated TMs running on a universal TM anyway and I am just a string. That would model our existence just as well as any other unification theory.