2

It is well established that computers have semantics of some sort. After posting the question "Is non-deterministic automated reasoning a viable strategy for solving problems in mathematical logic?" (PhilSE), I received a variety of yes, no, and some-days answers, demonstrating varied viewpoints on computers and the automated proof of mathematical logic. An issue the disagreement turned on seemed to be the nature of what respondents believed an LLM was, did, and was capable of. This brings to the fore an important question about LLMs. Large language models analyze human language, like a collection of proofs in mathematical logic, and construct products that seem to show an awareness of the syntax and semantics of content.

Consider the conversation with ChatGPT:

Human: Which sentence is meaningful? "Bob monkeys purple." or "Bob's monkey is purple."

ChatGPT: The sentence "Bob's monkey is purple." is meaningful. It conveys a clear and coherent statement: Bob has a monkey, and that monkey is purple. In contrast, the sentence "Bob monkeys purple." does not form a grammatically correct or meaningful statement.

Given that LLMs regularly provide meaningful answers to questions about the meaning of natural language, do large language models possess some form of awareness of the semantic content of the corpus they are trained on?

J D
  • 26,214
  • 3
  • 23
  • 98
  • "Awareness of the syntax and semantics of content" and "capacity to describe syntax and semantics of content" are different qualities. Which do you mean? – g s Oct 06 '23 at 16:23
  • 1
    @gs The thrust of this question is in the direction of clarifying whether, given some canonical notion of awareness, the performance of an LLM rises to that criterion. – J D Oct 06 '23 at 16:27
  • The meaning is provided by the human. Computers only flip bits. LLMs are statistical autocomplete engines, with no more awareness than the moisture sensor in your clothes dryer. – user4894 Oct 06 '23 at 21:36
  • 1
    Not yet, see SEP:"GPT-3 is not close to passing the Turing Test: GPT-3 neither perceives nor acts, and it is, at best, highly contentious whether it is a site of understanding." Current AI cannot even imitate awareness well enough, let alone suggest a genuine article. Chalmers does call it "a potential mindless path to artificial general intelligence". However, we would have to see AI handle tasks outside a predefined context they were programmed/"trained" for to talk about even genuine intelligence. – Conifold Oct 06 '23 at 22:42
  • What does "Bob monkeys purple" mean? – άνθρωπος Oct 07 '23 at 00:31
  • 1
    "The force that through the green fuse drives the flower." – Scott Rowe Oct 07 '23 at 03:24
  • 1
    @ScottRowe That poem was emotionally emancipatory. Thank you. – J D Oct 07 '23 at 19:35
  • 1
    @user4894 The claim 'Computers are bit-twiddlers' is a slogan, and one that doesn't comport. And at the minimum, a clothes dryer with a moisture sensor is aware of whether or not my clothes are dry. It seems odd to argue otherwise, for what use would such a sensor be if not to help the computer be aware of the water content? I would encourage you to move beyond a non-cognitive boo and share whatever depth, if any, lies under the slogan. – J D Oct 07 '23 at 19:42
  • Oh, irony. @άνθρωπος Reread ChatGPT's explanation about its meaning. – J D Oct 07 '23 at 19:43
  • @JD I did, but I'm curious: did you mean something by it or not? Or was it only several words to you? – άνθρωπος Oct 07 '23 at 19:46
  • @Conifold If I'm not mistaken, Turing intended the imitation to be a measure of human-level awareness. It seems unfair to hold an LLM to that level and deny it has grammatical awareness; et "et tu"? - awareness should be a categorization built on the tyranny of LEM? How is your use of 'aware' not simply exploiting the vagueness of predicate? (But I do recognize your position is well-received and heavily defended, and I'm not asking you to do so. You have bigger fish to fry.) – J D Oct 07 '23 at 19:49
  • @άνθρωπος It means nothing, and I chose the syntax precisely because of that. – J D Oct 07 '23 at 19:49
  • @JD are you a real man, not a GPT?) – άνθρωπος Oct 07 '23 at 19:51
  • 1
    @JD If you think a clothes dryer is "aware" I'm afraid we do not share sufficient common language to have a conversation. But I'll agree that if you think a clothes dryer is aware, then your web browser is thinking deep thoughts and your word processor has a philosophy of life. – user4894 Oct 07 '23 at 21:20
  • 1
    @άνθρωπος if he was not real then his name would be ZH (one letter less than AI) – Scott Rowe Oct 07 '23 at 22:27
  • 1
    I see it the other way. Anthropomorphic language like "awareness" applied to LLMs invites human comparisons and only muddies the water. If we want it less human-bound, something like "semantic competence" seems more appropriate. And it does not open the can of worms that "awareness" and "consciousness" are famous for. – Conifold Oct 08 '23 at 01:00
  • 1
    @Conifold Graceful riposte, but I think the profusion of teleological language in biological conceptualization is prima facie evidence that such a project is excessively prescriptive and resistant to unnecessary jargon, and that it asks the thinker's mind to contort in ways that bend backwards on intuition. But I certainly prefer stipulating new definitions, so I see the counterpoint. My objection to yours is simply that legislating intellectuals' language is herding cats, and social efforts at language engineering inevitably fail, particularly in the vox intelligentium. Thanks! – J D Oct 08 '23 at 01:54

3 Answers

1

In actual fact, machines don't have semantics yet. That is because they are not conscious of the outside world, so they know not what they speak of. ChatGPT too. If you simply asked it the question, I suspect that's what it would tell you. What they do have and work with are semantic networks: how concepts relate to other concepts. For them, Derrida is correct: il n'y a pas de hors-texte ("there is nothing outside the text").

Edit: As an illustration of the point, I recently had a little chat with ChatGPT. I told it about my elephant, Jumbo's the name, and how this little bugger keeps climbing on rainbows every time he sees one. And then he gets too afraid to climb down, so I need to call the firefighters all the time...

See what I am driving at? Any human being with a reasonably accurate sense of what elephants and rainbows are would know I'm joking. Because a rainbow is made of light, and light is not solid, elephants cannot possibly climb on rainbows. But ChatGPT has no real sense of what an elephant or a rainbow is. At most it can correctly classify "elephants" as part of the set of "animals", itself a subset of "living creatures". But it never touched an elephant, never smelled one, never gave an elephant peanuts, and never really saw an elephant. It never marveled at a rainbow either.

So it answered: "Sorry, I am unable to find references to your specific query on the Internet."

Olivier5
  • 2,162
  • 1
  • 2
  • 20
  • Comments have been moved to chat; please do not continue the discussion here. Before posting a comment below this one, please review the purposes of comments. Comments that do not request clarification or suggest improvements usually belong as an answer, on [meta], or in [chat]. Comments continuing discussion may be removed. – Geoffrey Thomas Oct 08 '23 at 08:33
1

The short answer is "no one knows," which is what makes this a live and interesting philosophical question.

Your statement "Large language models analyze human language, like a collection of proofs in mathematical logic, and construct products that seem to show an awareness of the syntax and semantics of content" is a bit misleading. LLM (large language models) like ChatGPT are basically artificial neural networks. They are given training data to work with, and they learn to simulate it. Pathways that lead to good responses are reinforced, those that lead to poor ones are deprecated. Because of the way they are trained, and because of their size and complexity, the inner workings of the network are largely opaque even to the "programmers." However, there doesn't seem to be any evidence that they are working from first principles, the way your question suggests. When ChatGPT provides a well-formed argument, it does so by imitation, not by analysis.

We know that ChatGPT must have some semantic framework, since that's a foundational piece of creating well-formed responses. But how that's encoded in the network, and whether it's at all analogous to the human understanding of semantics, is largely unknowable. (For that matter, we don't know how semantics are encoded in the human brain, either.) We do know that when we ask ChatGPT about semantics, it's not scanning itself and providing a self-analytical answer. It's imitating the kinds of things that people say about semantics.

Chris Sunami
  • 29,852
  • 2
  • 49
  • 101
  • 2
    As always, your arguments are succinct, persuasive, and show an uncommon regard for agnosticism, which is always refreshing to me as a fallibilist. The transformer model does indeed have an ANN core, and ANNs seem to have captured the capacity for the ostensible emergence that we observe in neural computation. The argument that we proceed from first principles with our semantic awareness and grounding is for the linguistically ignorant. Language acquisition can happen entirely without them. Brains barrel towards grammar despite grammarians. Humans learn by imitation, not analysis... – J D Oct 12 '23 at 21:40
  • 1
    and Tomasello puts forth a strong philosophical framework. Langacker, Lakoff, and Jackendoff have some strong theoretical frameworks that point toward neural computation as an ontology between mind and body, and experimental philosophy and NLP continue to make gains. My company is working on an NLP intermediary that uses LLMs in conjunction with formal systems, and a lot of the market space is engaged in similar enterprises. Thanks for a level-headed analysis. – J D Oct 12 '23 at 21:45
  • If a language model were trained to translate its arguments into mathematical logic, the correctness of those arguments could be verified by an automated proof checker. – Anderson Green Feb 25 '24 at 20:01
  • @AndersonGreen - That SOUNDS possible, but unless you can cite an LLM that can do this (accurately and consistently) we don't know if it's actually possible. There are a lot of things that we think LLMs should be able to do that we haven't been able to accomplish.// However, even if they definitely can provide that for us, that doesn't mean that's what they are doing deep down. – Chris Sunami Feb 26 '24 at 14:01
  • 1
    @ChrisSunami I believe that this is possible because there have been several successful implementations of theorem provers using large language models. – Anderson Green Feb 26 '24 at 20:36
  • @AndersonGreen - I appreciate the update--it's hard to keep up with the rapid advance of the field. However, it doesn't change the substance of this answer. Showing that an LLM can consistently generate semantically correct material doesn't prove that it understands either the content or the structure in the way that we do. – Chris Sunami Feb 26 '24 at 22:03
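
As a minimal illustration of what "verified by an automated proof checker" could mean in the exchange above, here is a hand-written toy example in Lean 4 (my own sketch, not the output of any LLM): a small argument rendered in formal logic so that the checker either accepts or rejects the inference mechanically.

    -- Hand-written toy example, not LLM output: "Bob's monkey is purple;
    -- everything purple is colored; therefore Bob's monkey is colored."
    -- Lean's kernel verifies the inference step mechanically.
    variable (Monkey : Type) (bobsMonkey : Monkey)
    variable (Purple Colored : Monkey → Prop)

    theorem monkey_is_colored
        (h1 : Purple bobsMonkey)
        (h2 : ∀ x, Purple x → Colored x) :
        Colored bobsMonkey :=
      h2 bobsMonkey h1

Passing such a check certifies the formal correctness of a translated argument; as the answer's author notes, it says nothing about whether the model understands what it translated.
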
0

As long as meaningful expressions continue, nothing clear can be said about whether what is perceived is felt as sensation. Since the question you ask is about artificial intelligence's internalization of meaning, I can say that the answer is yes, but only in a simulative way. Setting the AI's training method aside, we cannot clearly say that what it perceives is felt.

In my opinion, the human being, even though he has found a position in this universe as a small creature, has reasonable judgment for achieving goals within it. Since the primary disadvantage of humans is weakness in the face of exceptional and hard rules of life, it cannot be said that humans are incomplete in logical reasoning compared with artificial intelligence. What is asked of artificial intelligence is asked in part of humans, and what is asked of humans is asked in part of artificial intelligence. Your question is about the existence of consciousness. Therefore, for a clearer answer, I want you to turn to discussions about soul, mind, intelligence and consciousness.

Reality is the dynamism that enables the transition from state A_x to state B_y. When Descartes describes animals as lacking consciousness, he puts people at the center of perfection. In a way, he makes a statement that leads us to discuss why zombies and the dead are not always conscious beings. Cognitive science supports the view that consciousness exists in parallel with the mind, but it cannot establish that some unconscious beings have minds. The status of being conscious or partially conscious is, for an unconscious being, a synthesis out of sci-fi technology. This cannot be achieved; it is acquired personally, not by reasoning.

fkybrd
  • 123
  • 1
  • 7
  • 1
    I would say that the question provokes discussion on consciousness by asking whether computer technology manifests some small measure of it. If we can conclude that consciousness and mind are not crisp categories of meaning, then, as fuzzy logic might suggest, consciousness could be a measure of behavior on a scale from 0 to 1 in the reals. I will upvote for the novel experience of providing an answer in Turkish, since it demonstrates that translation systems clearly are becoming more proficient in the meaning of languages. – J D Oct 06 '23 at 19:51
  • Also, +1 for raising the language of simulation which begs the question, what is real? This is also a received position, that somehow consciousness either is or isn't, and Descartes maintained that animals were not conscious, but mere automata. Cognitive science supports that consciousness occurs in degrees. – J D Oct 06 '23 at 19:54
  • 1
    Reality is the dynamism that enables the transition from state A_{x} to state B_{y}. When Descartes stated that animals are not conscious, he was placing the human at the egocentric center. In one respect, he also makes a statement that leads us to discuss why zombies and the dead are not always conscious beings. Cognitive science supports the view that a consciousness exists in parallel with the mind. But it cannot establish that some unconscious beings have minds. The state of being conscious or partially conscious is, for an unconscious being, the synthesis of science-fiction technology. This cannot be obtained; from a personal standpoint, it is acquired without reason. – fkybrd Oct 06 '23 at 21:16
  • "Bei mir bist du schoen" – Scott Rowe Oct 07 '23 at 03:34
  • If it does not want it, why should it deserve something whose existence we doubt? – fkybrd Oct 07 '23 at 10:33
  • 1
    @ScottRowe Bella, bella. Wunderbar! ; ) – J D Oct 07 '23 at 19:52
  • @fkybrd Agreed. I suggest that there is something between a person having a mind and not having one. The "mind" is made of many small grains of sand, and it cannot be known when it becomes a heap. – J D Oct 07 '23 at 19:56
  • Since the question is directed at artificial intelligence, we need to examine consciousness in a neo-dualist way. If we had examined it in a neo-spiritualist way, I think we could not give an answer even at a minimal level. If consciousness exists, the soul also exists, and consciousness exists in the same way as the cause that enables the soul to act on matter. Since the effect of matter that is closed to interaction but open to communication upon other matter is material rather than causal, the cause that makes the soul active in reality is a factor in consciousness being an element of the anomalous organization of life. The soul is active by reason of "life," and soul and consciousness exist by reason of life. – fkybrd Oct 07 '23 at 21:10
  • Posts on [se] should be in English. @JD It's not recommended to translate non-English posts, as machine translations can be inaccurate, even human translations risk distorting the intended meaning of the post, and the author may not be able to address criticism of their post if they lack a sufficient understanding of English. – NotThatGuy Oct 09 '23 at 08:44
  • @NotThatGuy It was an experiment born of curiosity, to see what would come of it. As this isn't international diplomacy, I suspect the fallout from mistranslation is negligible. That being said, I'm unlikely to do it again, out of laziness. However, the passive construction "It's not recommended" makes me curious who the agent(s) is/are and what their specific recommendations are. The proscription doesn't appear to be in our help center. Is this "recommendation" an extension of your will to police language? – J D Oct 09 '23 at 13:54
  • I'm too young to die XP – fkybrd Oct 09 '23 at 14:06
  • @JD The recommendation to not translate non-English posts comes from the linked post. Meta is a messy extension of the help center, and that's an official "FAQ" post, which one might argue makes it closer to a rule than a recommendation (although there are blurry lines between an official site rule, a community consensus rule, and "some users think so", so "recommendation" is simpler). – NotThatGuy Oct 09 '23 at 14:36
  • I don't think forum moderators in general show the same sensitivity about the same language (English). For this reason, there are campaigns directed at members who use English as a second language, in which, even when there is no problem with their writing, a considerable number of users are assimilated by way of voting. The aim is to use this forum effectively and benefit from it. I am not interested in those who project their self-critical aphorisms here. If everyone who can use the internet and the forum has the right to ask and answer questions, then they should be treated accordingly. I don't have a magic wand that makes every sentence cutting. – fkybrd Oct 09 '23 at 18:20
  • @NotThatGuy I appreciate the link. But I'm not the one who keeps on posting in non-English. If you're going to enforce the rules of the forum, you should be lecturing fkybrd, who is the one who prefers to post in Turkish. It's odd that your motive is to wag your finger at me instead of them. Here's their reply: – J D Oct 10 '23 at 10:25
  • "Forum contributions generally differ even in the same language (English). For this reason, there are arguments directed at members who use even English as a language, claiming that users are that are influenced by the votes, even if there is no problem with understanding the text. The aim is to use this forum effectively. I am not interested in those who echo their aphorisms here. This means that everyone who can use the Internet and the forum has the right to ask and answer questions. I don't have a magic wand that makes every sentence have the meaning have right tone." – J D Oct 10 '23 at 10:25
  • They don't seem to care much about your police action. You can flag them to suppress their speech if you'd like. – J D Oct 10 '23 at 10:26
  • @JD My comment starts with "Posts on Stack Exchange should be in English", which is addressed at them and/or anyone who might be seeing this answer. New users who refuse to follow the rules get suspended sooner rather than later. My purpose is not to change their mind, for that seems unlikely. Rather, I'm merely noting the rules, for others, and in case they wish to respect the existing userbase of the site. Beyond that, it seems useful to inform or remind experienced users of the rules. – NotThatGuy Oct 10 '23 at 10:52
  • Some members with an Oedipus complex, who I think belong to the orange team, forget when to be opportunistic. There is no true answer to a wrong question, and a wrong question should be edited. If such an answer is requested, then it is answered as offensive. Defects are not the same as mistakes. But these members, who want to direct the streams of the forums, will be sensitive when they do not get a satisfying answer on SE. Writing in other languages is not required on an English website. If some members are hurt by another member's English, then Turkish makes more sense. But it made too much sense. – fkybrd Oct 10 '23 at 20:03
  • @NotThatGuy Fair enough! – J D Oct 11 '23 at 14:08