
If a robot had a cognitive system in its 'head' that could pass the Turing Test, and its AI abilities were self-sustaining (and it could reprogram any part of 'itself' without causing self-sabotage), how far into its development would it take for it to become conscious?

There has been a lot of Hollywood hype about such situations, and I was wondering what it would take to reach this point, not from a technological standpoint but from a philosophical one. Can a computer teach itself to be human? Or does a human have to be put into a computer (i.e., transcendence)?

As computer scientists we like to believe the computational power of computers is limitless, and if so, it is more a question of whether or not we need philosophers to describe the human to a computer. Can an essentially "dumb" being figure it out much as we have, but without ever having it to begin with? "It" being consciousness or sentience.

present
user128932
  • Well, we still require outside programmer input (and ignoring the need leads to long-term catastrophic failure), so... ;-) – John Dvorak Apr 28 '14 at 04:19
  • If a robot with advanced A.I. could reprogram itself regarding important processes, it might not have to wait for outside programmer assistance. Maybe this could happen in the near future. – user128932 Apr 28 '14 at 04:27
  • About as far as Descartes – Frames Catherine White Apr 28 '14 at 10:10
  • It seems to me that you are asking something like: "if a robot has a cognitive system comparable to the human mind, why can't he/she/it learn the art of computer programming?" Why not? ... – Mauro ALLEGRANZA Apr 28 '14 at 15:31
  • I think the edits to this question have improved it quite a bit, and I'm reopening the question. – davidlowryduda Apr 29 '14 at 19:51
  • Forever. Because the world is already full of robots... and not artificially made ones. – Asphir Dom Apr 29 '14 at 23:53
  • Is a goal of artificial intelligence to 'make' a computer-like system that is self-sustaining and able to reprogram 'parts' of itself to possibly increase efficiency and self-organisation? Could you characterize such a system as a 'cognitive engine' that is constantly 'working' to decrease or stabilize 'information-entropy'? (if the entropy concept can be applied to information) – user128932 Apr 30 '14 at 06:59
  • If an A.I. computer system could reprogram itself, and was able to continually do this and manage all its resulting interacting programs and other info packages, it would not need any 'outside' programmers or their input. It could be its own 'personal' programmer and reprogrammer, all the while managing all these efforts; its own computational 'projects'. Such a system I call 'auto-cybernetic', as opposed to a system that requires 'outside' input for important functions; this I call 'exo-cybernetic'. So an auto-cybernetic A.I. system would 'be' its own information management system. – user128932 Nov 14 '14 at 03:41
  • What if the 'beginnings' of consciousness are partly just within a dynamic system that can manage and manipulate its own information, and that has the ability to reprogram itself at any time? – 201044 Jan 25 '16 at 07:36
  • In short, no. Any other view would require some evidence, of which there is none. The idea that consciousness is merely computation has no scientific support, albeit it is a popular one among scientists. Those who study consciousness usually dismiss it as contradicting the evidence. –  Aug 14 '18 at 13:45
  • I make the case here that a structural shift is required for rule-creativity to happen, involving intersubjectivity or agent-based understanding of other agents in community developing rules dynamically: https://philosophy.stackexchange.com/a/86890/30474 – CriglCragl Nov 14 '21 at 12:35

4 Answers


It's becoming clear that, with all the brain and consciousness theories out there, the proof will be in the pudding. By this I mean: can any particular theory be used to create a machine with adult-human-level consciousness? My bet is on the late Gerald Edelman's Extended Theory of Neuronal Group Selection. The lead group in robotics based on this theory is the Neurorobotics Lab at UC Irvine. Dr. Edelman distinguished between primary consciousness, which came first in evolution and which humans share with other conscious animals, and higher-order consciousness, which came only to humans with the acquisition of language. A machine with primary consciousness will probably have to come first.

The thing I find special about the TNGS is the Darwin series of automata created at the Neurosciences Institute by Dr. Edelman and his colleagues in the 1990's and 2000's. These machines perform in the real world, not in a restricted simulated world, and display convincing physical behavior indicative of higher psychological functions necessary for consciousness, such as perceptual categorization, memory, and learning. They are based on realistic models of the parts of the biological brain that the theory claims subserve these functions. The extended TNGS allows for the emergence of consciousness based only on further evolutionary development of the brain areas responsible for these functions, in a parsimonious way. No other research I've encountered is anywhere near as convincing.

I post because on almost every video and article about the brain and consciousness that I encounter, the attitude seems to be that we still know next to nothing about how the brain and consciousness work; that there's lots of data but no unifying theory. I believe the extended TNGS is that theory. My motivation is to keep that theory in front of the public. And obviously, I consider it the route to a truly conscious machine, primary and higher-order.

My advice to people who want to create a conscious machine is to seriously ground themselves in the extended TNGS and the Darwin automata first, and proceed from there, possibly by applying to Jeff Krichmar's lab at UC Irvine. Dr. Edelman's roadmap to a conscious machine is at https://arxiv.org/abs/2105.10461


At the end of the day, what is a human brain? It is a network of neurons that have been trained to trigger in certain ways, essentially simulating logic gates like NAND and NOR. One can construct a theoretical mapping between logic gates and neurons showing that logic gates are capable of simulating a human brain/nervous system. Therefore, from a computational-power/language-hierarchy standpoint, the human brain is "no different" from a computer, because each is reducible to the other. So it seems the answer is yes, it is possible.
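
As a rough illustration of that reduction (a sketch only, not something from this answer: the weights, threshold, and function names below are invented for the example), a single threshold "neuron" with hand-picked weights behaves exactly like a NAND gate, and since NAND is functionally complete, the other logic gates can be assembled from copies of it:

    # Python sketch: a McCulloch-Pitts-style threshold unit acting as a NAND gate.
    # The weights and bias are chosen by hand purely to illustrate the mapping
    # between "neurons" and logic gates; nothing here models a real neuron.

    def nand_neuron(a: int, b: int) -> int:
        """Fires (returns 1) unless both inputs are active."""
        activation = -2 * a - 2 * b + 3   # weighted sum of inputs plus bias
        return 1 if activation > 0 else 0

    def not_gate(a: int) -> int:
        return nand_neuron(a, a)

    def and_gate(a: int, b: int) -> int:
        return not_gate(nand_neuron(a, b))

    def or_gate(a: int, b: int) -> int:
        return nand_neuron(not_gate(a), not_gate(b))

    # Truth-table check: every classical gate falls out of the one threshold unit.
    for a in (0, 1):
        for b in (0, 1):
            print(a, b, "NAND:", nand_neuron(a, b), "AND:", and_gate(a, b), "OR:", or_gate(a, b))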

The cost of a computer developing consciousness, however, is a different story. The above-mentioned mapping ignores the practical steps and costs involved in actually creating an artificial brain/sentience. For all we know, the costs of practically creating a self-sentient machine may be prohibitive. So, while it may be possible, it may not be practical.

James Kingsbery
  • Maybe the costs of practically creating a self-sentient machine and getting it 'started' would be exorbitant, but if the system is self-sustaining it could 'keep itself going' without any further costs or monitoring. – user128932 May 01 '14 at 01:48
  • Also don't forget that once you have one such being, assuming it's computer-based, you can copy it easily, so the initial cost might be huge, but not per unit as you copy them. I guess the cost would be the cost of development of the physical item, and then education, which most likely would be akin to rearing a child. – user2808054 Jun 02 '14 at 13:53
  • There is a fascinating movie out now or coming soon, called 'Automata', with Antonio Banderas (forgive spelling) and Melanie Griffith, about a future society where some robots discover how to change themselves and their own programming, to 'reprogram' themselves to be self-sufficient. Banderas works for some organisation (I think) that is concerned about these robots 'reprogramming' each other, so he has to find them and destroy them before any 'revolt'. Though it is not a typical 'Robot Revolution' movie. It mentions how a robot changing its programming is like it establishing self-sufficiency. – user128932 Oct 22 '14 at 02:43
  • If Asimov's 3 laws of robotics are so useful in governing sentient robots, then why hasn't human morality been codified and compacted into a small finite number of 'laws' everyone should follow, instead of the huge, almost incomprehensible books of law only lawyers can understand? – 201044 Feb 27 '16 at 00:52

Assuming the robot passes the Turing Test, it can never make its own decisions without extra input. So a robot that can learn from its own past experiences (through the use of sensors and an ability to notice that parts of its own system failed at doing something) would be the best description of a conscious robot. However, in order for it to be as smart as a human, it would need to go through as many experiences as every human in the world, and it would take humans too much time to teach it everything we know. One possibility is having robots speak to each other, so that when one learns something, they all do, such as "don't walk off a cliff in order to continue being intact." At that point, any robot that has received this message will walk up to a cliff, access its knowledge about walking off cliffs, and then be able to answer the question "should I walk off the cliff?"
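
A minimal sketch of that "shared lesson" idea (illustrative only; the rule store, the learn/decide functions, and the cliff example are hypothetical names invented here, not a real robotics API):

    # Python sketch: a shared rule store that every robot consults before acting,
    # so a lesson learned by one robot is immediately available to all of them.

    shared_rules = {}  # (situation, action) -> allowed?

    def learn(situation: str, action: str, allowed: bool) -> None:
        """One robot broadcasts a learned rule to the shared store."""
        shared_rules[(situation, action)] = allowed

    def decide(situation: str, action: str) -> bool:
        """Default to caution when no rule is known for this situation."""
        return shared_rules.get((situation, action), False)

    # A single robot learns from its failure sensors...
    learn("at cliff edge", "walk forward", allowed=False)

    # ...and every other robot now answers "should I walk off the cliff?"
    # without having to repeat the experience.
    print(decide("at cliff edge", "walk forward"))  # False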

Keeping this in mind, once you have the robot to the point that it can move freely and learn, and is knowledgeable enough about its surroundings that it can make rules based on them, it is conscious. It may still make some terrible mistakes, such as handing someone a knife too quickly and killing them, as in this article, but it will still be conscious. It would be wise to teach it certain things, but that doesn't matter much when we're only dealing with the point at which it becomes conscious.

Because you said that it can change things within itself, that satisfies the need to be self-aware. Being able to change something within itself implies it has the ability to see its own code and mechanics, meaning it can identify failures as well.

Likely, considering where we currently are with robotics, we are waiting for a time when we know the robot will not injure us accidentally (which would set the robotics industry back by many years through a loss of trust), while working out the details of ensuring it can learn as necessary and can be self-sustaining in terms of keeping itself safe. That time is likely not far off.

After all, if it can't follow the three laws of robotics, it shouldn't be operating self-sufficiently among humans.

  • Obviously you don't know what consciousness is. Do you suggest you are this robot? – Asphir Dom Apr 29 '14 at 23:54
  • How would one define 'performing self-sufficiently among humans'? Would it be a purely self-oriented self-sufficiency, as many human beings 'use', or could robots be 'set up' to do what's best for themselves and any 'local' group of other robots and people they are with? I heard John Nash might have suggested something like this. – user128932 Apr 30 '14 at 06:48
  • @user128932, I consider performing self-sufficiently to be having the ability of movement without help, and the ability to be left alone while doing something without the fear that it will break itself in some way. The question didn't ask about robotics in regards to a certain task-robot, though for the sake of this I would say self-oriented self sufficiency, because that is the state that would cause it to learn the most and be most humanistic. – littlekellilee Apr 30 '14 at 15:13
  • @Ashpir I don't get why you even state this. Clearly there's nothing in my answer that suggests a human being aware for a robot and being their sensory system. I suggest you look at this dictionary.reference.com/browse/consciousness and reread my response. – littlekellilee Apr 30 '14 at 15:13

This sort of question assumes that a human's consciousness is an emergent property of the brain. We have no idea if this is true. If consciousness is immaterial, then it is incapable of being copied by a computer.

So, the prior question that must be answered is whether consciousness is material in origin. Common sense would say no.

yters
  • A computer system that is 'somehow' self-sustaining might have a 'conglomeration' of programs with certain invariant necessary qualities. This hierarchy of interacting programs that is needed for the computer to function could be considered an IMMATERIAL part of the system just like using the analogy of 'software' representing the 'mind' and 'hardware' to represent the brain. – user128932 Jul 27 '14 at 03:35
  • You're saying consciousness is an emergent property, which is the point I respond to in the comment you commented on... – yters Jul 29 '14 at 21:52
  • Isn't a computer operating system a system of programs that are 'immaterial', yet they can be copied by a computer? If the operating system of an A.I. machine is 'self-sustaining', it could 'add' new qualities to its 'logic structure' and thus have emergent qualities. – user128932 Aug 03 '14 at 03:56
  • If an A.I. systems developed abilities to self-sustain all its important qualities it would be a self-orienting 'conglomeration' of information 'packages' and programs all constantly designing and redesigning itself to keep 'itself' existing. So this self-perpetuating 'set' of 'active' info. processes would be analogous to the information self-manipulating 'mind' (a self sustaining system of active 'semi-invariant' processes). I think any important process is considered intangible because it is not entirely 'explainable' by physical structures and their interactions. – user128932 Oct 12 '14 at 04:50
  • Why is it common sense that consciousness is immaterial? Perhaps I don't understand your use of the word material. –  Oct 20 '14 at 08:24
  • I'm just using the term immaterial because I have heard and read about many people describing consciousness that way, as though if it is not 'rooted' in actual physical processes in the brain or some sort of physical 'structures' of neurons and axons etc., then it is not 'real' or it is 'ghostly'. I believe consciousness or a self-sustaining A.I. operating system (an analog to the 'mind') is as real as the brain is. – user128932 Oct 22 '14 at 02:28
  • Is the argument from 'physicalism' that our 'minds' are identical to our brains? To take an analogy, hardware cannot restructure or reprogram software (unless some hardware damages the software), yet by analogy 'brain' hardware can reprogram 'mind' software. I think only software (whether in a computer or in a 'mind-brain') can 'functionally' alter other software (so that it is still functioning). – user128932 Oct 23 '14 at 05:21
  • If a number of self-sustaining self-sufficient A.I. systems existed that did not need any 'outside programmer' input then all these systems could be 'studied' from the 'outside' by seeing how each one programs and reprograms 'itself' and how each manages and organizes these programs. Also study how each system 'sets-up' its own 'project' or goals and how they carry them out. If studying such systems from the 'outside' the 'programs' and how they are manipulated become the basic 'units' of study. This could be considered a study of 'patterns' of program manipulation or 'Meta-programming'. – user128932 Nov 14 '14 at 03:52
  • Is a system of software that controls a machine 'material' in origin? If a system of software 'makes' a new software 'combination', by a genetic algorithm say, and adds this new combination to the rest of its programming, is that addition material in origin? – 201044 Jun 04 '15 at 07:34
  • If I'm conscious and sentient by virtue of having a soul (or, perhaps more accurately, being a soul temporarily possessing a body), what's to stop a computer from getting a soul? I haven't existed forever, so my soul either came into existence at some point in my developing, or a pre-existing soul attached to my body. Neither of those processes necessarily require any sort of biology. – David Thornley Aug 14 '18 at 21:27
  • @DavidThornley the computer would not be sentient just like the body is not sentient. In both cases it is just the soul that is sentient. – yters Aug 14 '18 at 22:04
  • 'Common sense' has a pretty terrible track record, & is no basis for philosophy, or science. – CriglCragl Nov 14 '21 at 12:36