
As far as I know, the current philosophical consensus is that chatbots like ChatGPT are not conscious.

However, in analogy with philosophical zombies, would it be possible to have a "philosophical ChatGPT"? That is, a system that is physically equivalent to ChatGPT (or some other AI chatbot which isn't conscious), but unlike ChatGPT, is conscious and experiences qualia?

Julius Hamilton
Christopher King
    Is the question, "Can you attach an immaterial soul to a chatGPT-like thing?" The question seems to only make sense if you're talking about immaterial souls, since you said physically equivalent – TKoL Mar 28 '24 at 14:02
  • @TKoL I'm basically asking if you could give ChatGPT the thing that humans have but p-zombies lack. It seems that there is no consensus on what this should be, but I think "immaterial soul" is a defensible answer. – Christopher King Mar 28 '24 at 14:11

3 Answers


"AI chatbot" is quite broad, as that can include any possible future technology, possibly incorporating some biological elements. That may include future AI that could potentially be conscious (at least under some physicalist view). One might debate whether that'll still be a "chatbot", but anyway. Current AI is at least commonly believed to not be conscious.


Under a physicalist view where consciousness weakly emerges from, or reduces to, brain processes, consciousness can in principle emerge from artificial constructs, so a conscious AI can theoretically exist. But such an AI can't be physically identical to a non-conscious AI, and philosophical zombies can't exist under this view either, because consciousness is directly and inseparably tied to physical state, precisely by virtue of reducing to it. It would be like asking whether you can have two physically identical computers where one is capable of performing computation and the other is not: that's not possible, because computation emerges from the physical state of the computer.

Under some dualist views, it should be theoretically possible for consciousness to attach to an artificial construct, and therefore for philosophical AI anti-zombies to exist. Some might object that consciousness comes from a deity who probably won't inject consciousness into an artificial construct. Or they may say that consciousness requires some particular physical state (which sounds a lot like physicalism with an additional unnecessary claim), and that it's impossible to artificially create such a state (although we have managed to artificially create a lot of things that earlier generations wouldn't have thought possible).

NotThatGuy
    Another interesting possibility is that to be conscious, you must be the descendent of something conscious. So even if you duplicated the physical state of a human brain perfectly, it doesn't "count" unless it came from a conscious human mother. – Christopher King Mar 30 '24 at 14:51
  •     @ChristopherKing Under physicalism, descendant-based consciousness would still need to correspond to some arrangement and transfer of physical parts. In theory, it should be possible to artificially create such a physical arrangement to create consciousness (but in practice that may or may not be possible). – NotThatGuy Mar 30 '24 at 23:13

Assuming physicalist materialism, this question is asking: what thresholds of behaviour and complexity indicate that consciousness is present, and, implicitly, is true Artificial General Intelligence (AGI) possible? That is not a question with a clear or undisputed answer. The p-zombie framing adds extra complexity, as it's a thought experiment aimed at questioning whether we can know about the internal experiences of minds by observing external phenomena, which we can't truly answer until we have an accepted synthetic mind to test. I make the case here that the more complex the mind, the more its internal experiences make it difficult, though not impossible, to predict, by shifting the majority of the data needed to predict it inside the mind: Can the goals of an organism be imputed from observation?

There's also the issue of 'intelligible intelligence': as computer systems increasingly train themselves, it's becoming more difficult to know how they do what they do. It can be argued that human sentience and self-awareness may chiefly be important for us humans in inquiring into how and why our brains offer up the information they do, especially when that information is found to be incorrect or contradictory (see Kahneman, Thinking, Fast and Slow).

Where in evolution does consciousness occur? We generally credit humans with a special quality of 'self-awareness'. But we know many animals pass the mirror test, indicating they can distinguish between their reflection and another being. The human neocortex seems to have emerged primarily to cope with the complexity of our social landscape, with our mimicry and linguistic knowledge being founded on intersubjectivity and the development of a 'social self', linked to the Default Mode Network.

We want our AI to interact meaningfully using language. LLMs seem to be able to do this far better than expected, and it can be argued that this is because they rely on a 'low-resolution image of the internet', e.g. here: ChatGPT Is a Blurry JPEG of the Web (New Yorker article). So although they are only predicting one word at a time, contextual clues allow them to mimic human behaviour. But they often fail where 'common sense' is needed, as with the question 'if 3 towels take 2 hours to dry, how long do 9 towels take to dry?'. Chatbots generally say 6 hours, but humans who think about it can see that the drying time is fixed regardless of the number of towels, since they dry in parallel. What we really need is LLMs and chatbots that can go deeper into what Wittgenstein called 'forms of life', in order to look deeper for contextual cues, especially in regard to one-off creative actions or innovative behaviours. That could fix a lot of problems, but wouldn't necessarily require the bot to have a self-model.
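
To make the contrast concrete, here is a minimal sketch in Python of the two lines of reasoning about the towels. The function names are my own, purely illustrative, assuming the towels are all hung out at once with enough space to dry in parallel:

```python
# A minimal sketch of the towel question. Names are illustrative, not
# anyone's actual implementation.

def naive_proportional_answer(n_towels, base_n=3, base_hours=2):
    """The scaling chatbots often apply: time grows with quantity."""
    return base_hours * n_towels / base_n

def common_sense_answer(n_towels, base_hours=2):
    """Drying happens in parallel, so time is fixed regardless of count."""
    return base_hours

print(naive_proportional_answer(9))  # 6.0 -- the typical chatbot answer
print(common_sense_answer(9))        # 2   -- what a reflective human says
```

The point is that the correct answer depends on a fact about the world (parallel drying) that isn't in the surface pattern of the question, which is exactly the kind of contextual cue pure next-word prediction tends to miss.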

I'd compare current bots to something like insect intelligence, where simple 'agents' can achieve complex things; but, like 'blindsight' minds or individual neurons, they can show emergent complexity that just isn't necessary in each agent for it to achieve its goals.

I'd argue the best way we have to picture how humans can do what they do is Hofstadter's idea that minds are 'strange loops', and that they can do things Turing machines don't seem able to because they build 'tangled hierarchies': loops of logic, and recursion in the nesting of layers they use to understand the world. This provides a coherentist and anti-foundationalist picture of epistemology that avoids the Münchhausen trilemma. To say it less technically: we tend to just start wherever we find ourselves and keep exploring, renewing, and relating together what we know about the world, including the self-loop, starting with no or minimal purposes/self-knowledge. It sounds simple, but it's very hard to get computers to do it; AlphaZero might be an example in a simple game-world, or Tegmark and Wu's AI Physicist (see discussion & links here: Reference request: How do we grasp reality?).

Strange loops explicitly involve something processing information about the world that includes a model of itself in the model, which allows it to try out different dispositions and intentions and their expected impacts, in order to decide how/who to be. There is then a cumulative process of adapting to the behavioural niche, comparable to an evolutionary algorithm - but it has the capacity to investigate and cumulatively determine its own true 'best interests' (which clearly includes self-knowledge), or to take up any other goals that emerge to fit what it began with, e.g. to further survival and replication, or to break with such goals for emergent reasons (we humans sometimes choose to die for very abstract reasons; memes can be a helluva bug).
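
Purely as illustration, here is a toy Python sketch of the kind of recursion involved: a world model that contains a model of the modeller, which the agent can use to try out dispositions before adopting them. This is my own sketch of the general shape of the idea, not an implementation of Hofstadter's theory, and all the names in it are invented:

```python
# A toy 'strange loop': an agent whose world model points back at a model
# of the agent itself, nested to a finite depth.

class Agent:
    def __init__(self, goals):
        self.goals = goals
        self.world_model = {"agent": None}  # will point back at a self-model

    def build_self_model(self, depth=2):
        # The tangled hierarchy: the model of the world includes a model
        # of the modeller, which itself contains a (shallower) self-model.
        model = {"goals": list(self.goals)}
        if depth > 0:
            inner = Agent(self.goals)
            inner.build_self_model(depth - 1)
            model["self_model"] = inner.world_model
        self.world_model["agent"] = model

    def try_out_disposition(self, new_goal):
        # Simulate adopting a goal inside an imagined copy of oneself,
        # and inspect the result, before committing to it for real.
        imagined = Agent(self.goals + [new_goal])
        imagined.build_self_model(depth=1)
        return imagined.world_model["agent"]["goals"]

a = Agent(["survive"])
a.build_self_model()
print(a.try_out_disposition("understand myself"))
# ['survive', 'understand myself'] -- evaluated in simulation, not yet adopted
```

The recursion here bottoms out at a fixed depth, which is exactly where the toy falls short of a real strange loop; the interesting (and hard) part is a system that keeps revising the self-model from within, cumulatively, as part of adapting to its niche.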

So in this view, self-awareness and self-consciousness would involve specific types of recursive structures, and a cumulative process of investigating and adapting to a niche which includes increasing self-knowledge. If this view is right, we probably aren't that far from conscious chatbots and true AGI.

Nick Bostrom has interesting things to say about the implications of this in his book Superintelligence, where he talks about the risk of 'malignant failure modes', or conflicts of interest between humans and computer minds, and specifically the idea of 'mindcrime': the causing of suffering in computer sentiences, relating to their capacities and how they are treated.

J D
CriglCragl

Michael Levin has studied the early-stage development of biological entities like embryos and has argued that the transition from inert matter to conscious life is a “continuum”. He has also built “robots” out of biologically engineered cells, called “xenobots”. Neuroscientists like Alysson Muotri and Anil Seth argue that consciousness is fully explainable as a physical phenomenon attached to or somehow facilitated by matter, particularly neuronal cells. Muotri has grown brain organoids, which are lab-grown networks of brain cells.

The heart of your question is one of the most famous questions in the philosophy of mind: mind vs. matter. We are arguably at a moment in history where we can explore these perennial, impenetrable questions with experiments. Developments in biology (including synthetic biology), neuroscience (including neuroimaging), mathematics (such as integrated information theory), artificial intelligence (such as large language models), and, in my opinion, quantum mechanics (including Penrose's speculation that quantum systems are relevant to consciousness) allow us to come closer to testing hypotheses with manipulable, controlled scenarios that have measurable, repeatable outcomes. Nobody knows the answer to your question, but it seems we will come closer in our lifetimes.

But David Chalmers raises the deep question of whether even the above has real explanatory force in the face of the hard problem of consciousness. In my opinion, in an era of civilization marked by so many obscurities having been laid to rest, including the mechanics of the physical world, the origins of life, and a rough picture of the nature of intelligence, qualia remain perhaps the most outstanding epistemic-scientific mystery of our time.

Julius Hamilton