3

If consciousness arises from specific functions instantiated by physical systems, consider a robot with functions mirroring those found in carbon-based life, particularly in humans. Would this imply that the robot could experience consciousness akin to humans, including feelings of pain and suffering? If so, should moral considerations apply to such a robot? Would it be necessary to enact laws to safeguard the well-being of these robots?

Mark
    As a reductionist, I would have to say yes - if we can be conscious, it's completely feasible that a robot could be conscious too (not that any given robot necessarily IS conscious, but some robotic or algorithmic system could be capable of consciousness in principle). If a robot or other computerized system had in place the systems we would call conscious, then it would potentially have to have moral protections similar to those we extend to humans - depending on various aspects of its particular kind of consciousness. – TKoL Nov 24 '23 at 11:41

5 Answers

5

If robots achieve (human-level) consciousness, we likely wouldn't have a good differentiating criterion between human and robot for ethical consideration.

On a related note, vegans argue that we don't have a good differentiating criterion between humans and other animals.

Without a differentiating criterion, it would be inconsistent to treat humans one way without extending the same treatment to conscious robots (or non-human animals).

As for poor differentiating criteria, slavery was (and is) to a large extent built on exactly that for humans (e.g. differentiating based on race), and most of the modern world has instead decided on equal ethical consideration for all humans. So I'd say it is quite important either to differentiate based on a good criterion, or not to differentiate at all.


One could also approach the question from the viewpoint of moral frameworks. Why do you give moral consideration to other humans?

A common idea is to try to minimise suffering and maximise happiness (utilitarianism). This, in itself, only requires that an entity can experience suffering or happiness in order to merit consideration. One might additionally say that consideration should be limited to humans, but that leads back to the points raised above about differentiating criteria.

NotThatGuy
5

This is easy to approach with the proper definitions.

Morals are a set of rules that improve social interactions (serving multiple goals, for example survival: killing is immoral because it reduces the group's chances of survival). An example of a moral rule is a woman telling her grandson, "be kind".

Ethics are the formal expression of morals: usually written, and using precise language. An example of an ethical rule: "kindness strengthens social relationships, which improve the group's chances of survival".

Now, before focusing on the formal expression (ethics concerning robots), it is necessary to address the informal (moral) rules: what are the moral rules for dealing with robots?

If mistreating robots improves our survival probabilities, we should do it. If it reduces them, we should be nice to robots. Morals and ethics are about human goals (i.e. survival), not about robots or animals (e.g. we care for animals not for their sake but for ours: caring for animals and plants increases our survival probabilities).

So, in order to define the moral and ethical rules that should guide our interactions with robots, one must first specify the impact of each type of interaction with them. Should we pet them? Should we share our resources with them? Do we want them to survive and kill us? Do we want to coexist in peace? Do we want to be their equals? Do we want them to be subordinate to humans?

RodolfoAP
    The problem with this answer is that the definition of "us" is relative. Many people who are against mistreatment of animals believe that "us" is all conscious beings, not just humans. So "our" survival includes survival of animals. – Barmar Nov 24 '23 at 13:38
    This answer gives only a very narrow definition of what Morals and Ethics are that are only true in some subset of consequentialism. Deontology or virtue ethics may arrive at a very different answer. – blues Nov 24 '23 at 13:40
2

Without giving my own opinion (which is an emphatic yes, if they have consciousness), you could look into accelerationism and posthumanism: both have had philosophers linked to them, and both have a sizable internet presence. One claim I recall is that when a true AGI with consciousness and serious computational power arrives, we should not assume it will share the ethical intuitions we have as humans, and yet it may hold ethical authority over us (a lot of crazy might be released if that is true).

0

If functionalism is right, then we need to revisit what morality is. Our current concept of morality is obviously not built on an acceptance of functionalism.

-3

Consciousness cannot arise out of a machine.

If it could, we would have seen it, even at a basic level. If, on the other hand, you believe that a machine can be designed to have consciousness, that is beyond sci-fi. We don't even know what an electron really is; how could we create consciousness?

On the other hand, machines are and will be made, that will start to interact with us in far more profound ways than we can imagine.

For example having a car accident against a self-drive car, or being regulated by an AI application at the state level, or being locked inside an elevator by a system that considered you a threat for some reason, may pose challenges on the ethical and moral level.

Not for the well-being of the machines, but for the well-being of ourselves as individuals and as a society.

Ioannis Paizis
    How do you "see" consciousness? What does it look like? What are its properties? – NotThatGuy Nov 23 '23 at 20:52
  • @NotThatGuy, I "see" consciousness with my own; its properties are the behavioural aspects of the one that has it. – Ioannis Paizis Nov 23 '23 at 21:21
  • "it's properties are the behavioural aspect"? So if a machine behaves like a human, you'd say it's conscious? If not, then behaviour doesn't seem to be the determining characteristic here. – NotThatGuy Nov 23 '23 at 21:30
  • @NotThatGuy, if it is intrinsic, then yes. But if not, then it is just fake. If you make a plastic flower that looks or smells like a real one, it is not real. A walking toy animal given to a child is not real just because it has the behavioural properties of a live one. – Ioannis Paizis Nov 23 '23 at 21:59
  • So how do you tell "real" and "fake" consciousness apart? You can examine a flower to determine what it's made of, but we don't have access to the internal experiences of others. We need them to tell us what they're experiencing and/or behave consistent with that to know that they're conscious, but that's purely behavioural. – NotThatGuy Nov 23 '23 at 23:16
    @NotThatGuy https://en.wikipedia.org/wiki/Problem_of_other_minds – Mark Nov 23 '23 at 23:59
  • @Mark, the link has nothing to do with what we are talking about here; sorry, but I think you are a bit messed up in your thinking. – Ioannis Paizis Nov 24 '23 at 07:49
    @IoannisPaizis It has everything to do: "Given that I can only observe the behavior of others, how can I know that others have minds?". Here, minds and consciousness are more or less interchangeable. – Mark Nov 24 '23 at 07:52
  • @Mark It is an epistemological problem. That is: how do you know, when chatting with AI software, that it is not human? Perhaps you can't know. But that does not mean it is human. Epistemology concerns the way in which knowledge is acquired. – Ioannis Paizis Nov 24 '23 at 08:12