
This idea that AI could do valid philosophy, discussed on certain threads, seems absurd to me. So far, machines do not think. Some of them can compute and weave words together, but that's not the same thing.

It seems to me that the biggest problem of an automatic philosopher would be its lack of freedom. Its code would make it predictable and boring, not creative, and hence incapable of real philosophy. Real philosophy can only result from the free exercise of reason. And if there is no free will, there can be no real "love of wisdom", no philosophy worth the name, because there can be no love and no wisdom in a mechanical automaton.

Of course, that also applies to a lot of human pretenders to the title of "philosopher": they can be robotic. But it's the same in every profession: some make it, some fake it.

So my question is: what's the point of a mechanical philosopher? Is there even room for philosophy, if we are not free to follow our reason where it leads us, to consider alternatives, and to exchange ideas with others?

Olivier5
  • You should know that many philosophers believe that humans have no "free will" either, and, in any case, it is impossible to verify empirically whether we do. As for the output of code, you'd be surprised how unpredictable it can be (digits of pi are already "unpredictable"). Again, it is impossible to empirically distinguish outputs of a complex enough algorithm from "creativity". If AI can make and prove math conjectures or create paintings, like Midjourney, why not philosophy? It may recombine ideas in ways humans did not think of, as Lull already suggested in the Middle Ages. – Conifold Sep 05 '23 at 07:17
  • There's a deeper problem behind your question, about the nature of reason. Insofar as logic is mechanistic, it would seem that an AI is the ideal entity to do logic. And where is philosophical reason if logic is mechanistic? Perhaps philosophy's task is to reduce all human thought to logic? – Ludwig V Sep 05 '23 at 07:40
  • Though I suppose it will always be an option for human beings to reject the output of an AI if they don't like it. Which would mean that philosophy's task would be to evaluate the output of an AI. – Ludwig V Sep 05 '23 at 07:42
  • @Conifold Just because some philosophers believe something (or pretend to believe it) does not make it logical or true. My point is precisely that you can't do actual philosophy without a belief in the power of reason. It'd be like a mechanic who does not believe in motion, a plumber who thinks water does not exist, a biologist who does not care for life... – Olivier5 Sep 05 '23 at 08:43
  • @LudwigV Human logic is not mechanistic. There's much intuition in it, the way we human beings use it. It's only when we write it down on a computer (or on a book) that we need to simplify it, to dumb it down into a mechanical thing that a machine can process. The mechanical aspect of logic is somewhat artificial, therefore. – Olivier5 Sep 05 '23 at 08:52
  • I agree that formal (mechanical) logic is artificial, in a sense. But it is, nonetheless, human (invented by human beings). However, I agree that we need another concept of reason, to allow for practical reason (and values in general). I use "reasonable" for that. – Ludwig V Sep 05 '23 at 09:39
  • Some philosophers include Spinoza, Locke, Hegel, etc., and a long list of modern compatibilists and hard determinists. All great believers in the power of reason. But not in "free will". You'll have a hard time convincing people that they did not do "actual philosophy". I suggest reading up on the subject before making up your mind. Remember, feeling strongly and being passionate about something... does not make it logical or true. You need arguments beyond the word "robotic". – Conifold Sep 05 '23 at 10:04
  • @Conifold I've read Spinoza, and have no problem with his take, which is the basic determinist-compatibilist position where thoughts/ideas are causal, i.e. real, powerful and important for understanding the world, not epiphenomena. He did believe in the power of reason, so there is no contradiction. More generally, compatibilists are (imo) affirming human agency, and therefore they can claim without contradiction to do philosophy. – Olivier5 Sep 05 '23 at 10:50
  • I’ve been told AI can’t be comedic. But I’ve found AI hilarious. Not because it’s doing comedy like us, but because we find it funny. We derive philosophy the same way. A rock isn’t philosophical but it can be part of philosophical insight. And if AI become similar enough to us, we will treat them as any other philosopher. – J Kusin Sep 05 '23 at 13:10
  • Free will does not exist and neither is it experienced. Actions come from thought. One doesn't choose thoughts, for that would require thinking about what your next thought should be, which is ludicrous and is never experienced. There is no escape from this. – thinkingman Sep 06 '23 at 04:36
  • @thinkingman And yet, one can choose to see the glass half empty or half full. One can choose to apply one's mind to a given issue, or not to bother about it at all. We're the captains of our own souls. – Olivier5 Sep 06 '23 at 14:47

7 Answers


This is a more practical question than it might seem. As you know, the SE moderators were on strike for months over the issue of access to tools for detecting GPT-generated contributions. I have come across several such items, and in my experience it took substantial intellectual effort to identify them (surely more effort than went into producing such "contributions"). This seems to be the latest corroboration of Brandolini's law. It seems hard a priori to rule out the possibility that nontrivial contributions can be generated this way. If AI can assist doctors and mathematicians, why not philosophers?

Mikhail Katz
  • From what I've heard, AIs are excellent at doing intellectual donkey-work. Do philosophers have donkey-work that can be organized and delegated, in the way that doctors and mathematicians delegate donkey-work to AIs? – Ludwig V Sep 05 '23 at 07:57
  • I have tried to do what @LudwigV talks of, i.e. use ChatGPT for philosophical donkey-work, and it failed. Perhaps future versions will be better. The task I gave it was similar to the question I ask here: https://philosophy.stackexchange.com/questions/101135/eliminative-materialism-eliminates-itself-a-familiar-idea i.e. "Find quotes from philosophers, specifically about the idea that 'eliminative materialism eliminates (invalidates, contradicts) itself.'" At some point, the system gave me three identical quotes supposedly written by three different philosophers, which cannot be true. – Olivier5 Sep 05 '23 at 11:25
  • One possible application would be similar to the way GPT is apparently sometimes used in programming, to analyze the soundness or efficiency of a given piece of code. I can certainly think of some philosophers whose reasoning is so sloppy that it could benefit from a mechanical check by a GPT bot. Of course this would be harder in philosophy than in programming, since one would need to formalize the arguments first, but GPT seems to be good at natural language recognition as well :-) @Olivier5 – Mikhail Katz Sep 05 '23 at 11:55
  • It's good at natural language alright, but philosophers rarely speak in natural language. – Olivier5 Sep 05 '23 at 12:10
  • :-) Well said. @Olivier5 – Mikhail Katz Sep 05 '23 at 12:15

Certainly it is possible that a future AI might do real philosophy.

Certainly, no present or past AI is able to do real philosophy.

To do philosophy means:

  • To have disorder in your mind: conflict between one idea and another, or insufficient justification/unifying principles.
  • To seek to resolve the disorder by altering the ideas that you hold, to eliminate the conflicts or justify/unify.
  • To describe this resolution process in the form of an argument that could guide others to perform a similar resolution process in their own minds.

Present AIs might have something in their minds we could call ideas; networks such as ChatGPT have internal state, represented as a vector of numbers. Ideas are just particular patterns within a mind's internal state.

However, ChatGPT does not have a notion of conflict between ideas. ChatGPT will happily produce output contradicting what it said before. ChatGPT was trained based on a notion of conflict ("error") only between its output and the desired word from the source text. It was not trained based on any notion of conflict between its internal ideas.
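What that training "error" amounts to can be made concrete. Below is a runnable toy sketch of the standard next-token training objective (my illustration, not ChatGPT's actual code): the only conflict the loop measures is the cross-entropy gap between the model's prediction and the next word in the source text. Nothing in it scores coherence among the model's own internal "ideas".

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy "language model": an embedding plus a linear head over a tiny vocabulary.
class ToyLM(nn.Module):
    def __init__(self, vocab=100, dim=32):
        super().__init__()
        self.embed = nn.Embedding(vocab, dim)
        self.head = nn.Linear(dim, vocab)

    def forward(self, x):
        return self.head(self.embed(x))  # (batch, seq, vocab) logits

model = ToyLM()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
tokens = torch.randint(0, 100, (8, 16))  # stand-in for tokenized source text

# The entire notion of "conflict" in this setup: predicted vs. actual next token.
inputs, targets = tokens[:, :-1], tokens[:, 1:]
logits = model(inputs)
loss = F.cross_entropy(logits.reshape(-1, logits.size(-1)), targets.reshape(-1))
optimizer.zero_grad()
loss.backward()
optimizer.step()
```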

Nor does ChatGPT have a drive to justify/unify/simplify the ideas it holds. There again just isn't any design principle behind ChatGPT that would produce such a drive.

Nor is ChatGPT capable of describing its mental conflict resolution process, even if it had one, which it doesn't.

Perhaps someday an AI will be produced that can do those things. And that AI might indeed do real philosophy. It might spot contradictions, reason, persuade, simplify. It might convey these actions to others using words. But current AI just can't do that; it isn't designed to.

causative

Philosophy will be unnecessary the day we have no problems. Until then, there's a lot to do: philosophy is as necessary today as ever.

Now, AIs are just sentence generators, based on texts they have already seen. Therefore, they can't really solve problems. Repeating things does not make a philosopher.

AIs are useful. If I want to know who the most representative idealists are, what their ideas and greatest quotes are, etc., I can search through many books, or ask Google, or, better, put precise questions to ChatGPT. That's their point: retrieval, not solving problems.

Homework for you:

In order to solve problems, you use logic, which rests on Aristotle's three classical laws of thought. ChatGPT breaks all of them. Now, draw your own conclusions.
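For reference, here are the three laws usually meant (RodolfoAP does not spell them out; this is the conventional propositional rendering):

```latex
\begin{align*}
\text{Identity:} &\quad A = A \\
\text{Non-contradiction:} &\quad \neg (A \land \neg A) \\
\text{Excluded middle:} &\quad A \lor \neg A
\end{align*}
```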

RodolfoAP
  • IOW, ChatGPT is a better search engine. That is fair, I think, at least for English language sources. The thing has not been trained on other languages, so has no idea what's out there beyond English. It won't tell you much about Arabic or French idealists. – Olivier5 Sep 05 '23 at 11:32
  • Aristotle's 'laws' are not the only way, & are highly suspect. See criticism of the Excluded Middle law here: https://en.wikipedia.org/wiki/Law_of_excluded_middle#Criticisms And in Buddhist philosophy I would argue Anatta & Sunyata oppose the idea of any stable fixed enduring identity. The law of Identity should be regarded only as an idealisation for convenience, & its misuse can lead to a range of problems in real applications – CriglCragl Sep 05 '23 at 12:40
  • ChatGPT is not a search engine, and if you're unlucky it will confidently supply you with false facts that sound like real language, because that is its primary objective; matching quotes and authors is not. So it might look like the better search engine because it provides you with a plain-English response that appears to be its reasoning, but it's actually just a grammatically correct sentence that uses words from the given context; whether that makes sense is a different question. – haxor789 Sep 05 '23 at 12:45
  • @haxor789 If you use it as a search engine, then that's what it is, just like a "seat" is whatever you use to sit on. And that it can do: identify English-language resources on a given topic. – Olivier5 Sep 05 '23 at 13:42
  • @Olivier5 Sure, if you look at it like that, yes. The point is that it is not a dedicated search engine, so if you expect it to perform well in that regard you might find yourself with a nasty surprise, as that is not its primary goal. – haxor789 Sep 05 '23 at 14:18
  • Understood. It has no sense of truth, of course, so it won't fact-check what it finds, but neither can Google. – Olivier5 Sep 05 '23 at 14:27
  • @Olivier5 The thing is, I don't know how Google actually does it. In the beginning their algorithm apparently mostly tracked links to other websites, so featuring the keyword and being frequently linked would have gained your website a high score. But since then there have been various attempts to hack that system or make it more reliable, and the actual algorithm, or even whether it's an algorithm or people manually handling the details, is a trade secret (as far as I know). Either way, there's some focus on that topic by Google, while it's not the goal of ChatGPT. – haxor789 Sep 06 '23 at 13:14
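To make the link-counting idea concrete, here is a toy power-iteration sketch in the spirit of the original PageRank idea. It is purely illustrative; as noted above, Google's actual ranking pipeline is not public.

```python
import numpy as np

def pagerank(adjacency, damping=0.85, iters=50):
    """Toy PageRank: adjacency[i][j] = 1 if page i links to page j."""
    A = np.asarray(adjacency, dtype=float)
    n = A.shape[0]
    out_degree = A.sum(axis=1, keepdims=True)
    out_degree[out_degree == 0] = 1.0         # crude fix for dangling pages
    M = A / out_degree                        # row-stochastic link matrix
    rank = np.full(n, 1.0 / n)
    for _ in range(iters):                    # power iteration
        rank = (1 - damping) / n + damping * (M.T @ rank)
    return rank

# Page 2 is linked to by both other pages, so it ends up ranked highest.
print(pagerank([[0, 0, 1],
                [0, 0, 1],
                [1, 0, 0]]))
```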

Freedom is just another master ~ Numerius Negedius, alias Ashok Kumar

To make judgments about creativity vis-à-vis philosophy, one needs to study at what point in philosophy art comes in.

Agent Smith

Just for fun: the Hitchhiker's Guide?

42

Obviously, a sustained work of fiction, closer to LOTR than to serious futurism (somewhere, there is an encyclopedia of its galaxy's facts). Anyway, in the spirit of the Babel fish, we may ask what we want our philosophers to do or be, and what language they are to speak in.

If you mean to say that philosophy is not just something expressed in language but one peculiar form of expression (one that needs emotions, or human species-being, or empathy, or whatever), then perhaps not.

You might want to think about the Sapir–Whorf hypothesis^ in light of LLMs, and ask whether, given that LLMs experience nothing and have no world, their use of language is intrinsically deficient.

^ there's nothing of interest - no academic articles - on the www about this, and I suspect that much of the research into - if not guinea pigs of - LLMs is behind closed doors...

  • Hi. I have not read the Hitchhiker's Guide, but I remember Sheckley's Dimension of Miracles well. Best philosophical sci-fi I ever read. I was not talking of philosophy needing emotions, but of the fact that philosophical practice implies human agency. If we are not agents but mere objects, automatons, then we cannot know anything or decide anything. And therefore we don't need philosophy. – Olivier5 Sep 06 '23 at 14:39

In terms of my own personal experience with current-generation Large Language Model generative AIs, such as ChatGPT, they do surprisingly well at applying existing philosophies to novel realms, given the proper prompting (for instance, "write programming best practices in the style of Lao Tzu").

That kind of work consists in discerning a pattern, and applying it. While it's a mechanistic process, it doesn't seem entirely devoid of philosophical value.
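As a concrete illustration, here is roughly how such a prompt could be issued programmatically with the OpenAI Python client (the model name is an assumption; any recent chat model would do):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Ask the model to apply an existing philosophy to a novel realm.
response = client.chat.completions.create(
    model="gpt-4",  # assumed model name; substitute whatever is current
    messages=[{
        "role": "user",
        "content": "Write programming best practices in the style of Lao Tzu.",
    }],
)
print(response.choices[0].message.content)
```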

I would not expect AI to do well at doing genuinely original philosophical work, but most human philosophers don't either.

Chris Sunami
  • "I would not expect AI to do well at doing genuinely original philosophical work, but most human philosophers don't either." Good point. But at least the latter believe in what they say, at least some of them.

    – Olivier5 Sep 06 '23 at 16:37

Real philosophy can only result from the free exercise of reason. And if there is no free will, there can be no real "love of wisdom"

There's a whole series of questionable assumptions there, and in the rest of your post. Your assumptions actually seem to preclude what you seem to advocate, namely doing philosophy on these very topics, by stating them in a way that implies they are incontestable.

What is wisdom, and what is philosophy? What is thought and how does it make sense of the world?

Do we have free will? Can we do philosophy, and have meaningful moral lives if we don't? Can AI or AGI achieve the same sentience as humans, and how can we know in principle, or test for it in practice?

The whole topic of your question is pretty much captured by Searle's Chinese Room Argument. To simply rush to declare an answer, as you seem to declare to be necessary, is exactly not the way to do philosophy. As they say, predicting the future is easy; doing it correctly is hard. Having all the answers isn't the point, it's about how you get there, it's about testing out what different positions related to an answer imply.

Whether ChatGPT or AI art programmes can be creative is a live, interesting controversy to be engaged with, not dismissed. They remix. But don't humans remix too, and don't we call that creativity? If what these programmes do is not creativity, what insights does that give us into the creative process?

I make a case for what wisdom is here, and that it relates to knowing how to maximise our relative freedom: Wisdom and John Vervaeke's awakening from the meaning crises?

But in Wittgenstein's 'linguistic therapy' view of philosophy, he describes the subject as 'shewing the fly the way out of the fly-bottle'. Which surely sounds... automatisable.

CriglCragl
  • There are several ways to do philosophy. My way need not be your way. I cut to the chase, love brevity, and don't waste time on ideas that I assess as nonsensical. I classify authors as "real" or "fake". I value logic. Others may do differently. They are most welcome to consider all philosophies equal, or to waste their time on contradictions if that's what they like to do... – Olivier5 Sep 05 '23 at 13:34
  • @Olivier5: I think you completely missed my point. Philosophy is about questioning & how we think, not just the sum of answers we come to. – CriglCragl Sep 05 '23 at 14:13
  • Philosophy is the love of wisdom. It's not the love of talking forever in ambiguous language and of never finding any satisfactory answers. That's the definition of sophism, which you are confusing with philosophy. – Olivier5 Sep 05 '23 at 14:18
  • I missed this in my first read: – Olivier5 Sep 06 '23 at 16:30
  • "The whole topic of your question is pretty much captured by Searle's Chinese Room Argument. To simply rush to declare an answer, as you seem to declare to be neccessary, is exactly not the way to do philosophy."

    – Olivier5 Sep 06 '23 at 16:31
  • You are correct that my argument is essentially similar to Searle's Chinese room argument, and I thank you for bringing this to my attention. But if this is indeed the case, are you saying that Searle rushes to declare an answer? And that Searle then doesn't do philosophy? – Olivier5 Sep 06 '23 at 16:34