8

I am interested in studying AI, and I thought it would be a good idea to study the nature of intelligence before stepping into the field. I googled "books to read about intelligence", but it only gave me a useless list of books that promise to make people intelligent.

I don't have a background in philosophy, and I would appreciate it if you could recommend any introductory or intermediate books that cover the general subject of intelligence from a philosophical point of view.

Thank you.

edit: After some research and thought, I have come to see intelligence as nothing but the ability to process information. Intelligence can then be viewed as:

F: World -> Meaning

F(information) = interpretation

and intelligence is the function F that maps information to an abstract world in which our consciousness and ideas live.
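
In code, the same idea might be sketched as follows. The types and the single rule inside F are hypothetical placeholders for illustration, not a theory of meaning:

    #include <iostream>
    #include <string>

    // Hypothetical types to make the notation concrete:
    // F : World -> Meaning, applied as F(information) = interpretation.
    struct Information { std::string raw; };        // a fragment of the world
    struct Interpretation { std::string meaning; }; // its image in the abstract world

    // A trivial stand-in for F; any real intelligence would do far more here.
    Interpretation F(const Information& info) {
        if (info.raw == "dark clouds") return {"it may rain soon"};
        return {"no interpretation yet"};
    }

    int main() {
        std::cout << F({"dark clouds"}).meaning << "\n"; // prints: it may rain soon
        return 0;
    }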

James C
    For definitional questions of this sort online encyclopedias are a better source, see Intelligence and references there. This site takes more focused questions. – Conifold Nov 26 '20 at 11:23
  • Max Tegmark said, "Intelligence is the ability to accomplish goals." – Scott Rowe Jun 07 '23 at 22:23
  • To understand what intelligence is, forget about tasks, algorithms, learning, planning, and so on. Focus on the essentials. I claim that "intelligence is the ability to recognize, process, and cause differences". That simple. To enable that we only need comparable properties. The core algorithm of cognition then is based on comparisons and filtering, not on calculating functions. You can find details here - https://alexandernaumenko.substack.com/ – user2204238 Sep 08 '23 at 08:48
  • It's a lot of work, ain't it? Jokes aside, asking a question is a sign of intelligence. Now tell me, which AI to date has asked a question (spontaneously)? – Agent Smith Sep 09 '23 at 01:11
  • I might suggest a book I just started by Catherine Malabou called "Morphing Intelligence: From IQ Measurement to Artificial Brains." She was a student of Derrida's and is thus somewhat "continental," yet with an interest in neuroscience. It is fairly readable, though her overall philosophical agenda is a little obscure, at least to me. – Nelson Alexander Sep 09 '23 at 20:12
  • Intelligence is the ability to deal with any problem (old or new) and provide a solution. Artificial Intelligence cannot provide a solution to an entirely new problem which has not been dealt with before in human history. Human intelligence can deal with any problem, whether old, entirely new, or fundamentally new. – Dheeraj Verma Sep 30 '23 at 13:05
  • In a nutshell (I often suffer hyper-verbosity, be grateful for the brevity): I suggest intelligence is the use of logic and rationality by default, rather than as a last resort. – Alistair Riddoch Feb 08 '24 at 09:07

5 Answers

3

One of the most important causes of frequent confusion is the difference between Artificial Intelligence and Artificial General Intelligence. Yes, AI is pretty much just processing information, but only because intelligence has been redefined in this field. It's like how physicists use the term information in the context of entropy: a crucial, but somewhat slippery, distinction that contradicts everyday usage of the word. AI was originally intended to mean what AGI means now, but as people worked on it & we started to understand how sophisticated the things our brains do and take for granted really are, the goalposts have moved. And AI has come to mean any little step on the journey to more sophisticated computers.

AGI is its own big topic. The Human Brain Project aims to find out more. OrchOR theory suggests there may be quantum processes at work in the brain, or at least in memory, that would put the task several extra orders of magnitude beyond the scope of that project - studying connectomes of simple organisms is key to this, & I would say currently supportive of OrchOR. Not much is known about quantum computing in practice (only a couple of fundamentally distinct quantum algorithms are known, essentially Shor's and Grover's), but quantum computers are simulatable by classical computers, which themselves, given enough time, could all simulate each other, because they are Turing Machines (or more correctly, Turing-machine-equivalent). Integrated Information Theory is an important proposal on how systems might become 'more intelligent', and it helps explain different states of human consciousness & arousal, a previously neglected topic in intelligence.
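
To make 'simulating a Turing machine' concrete, here is a minimal single-tape simulator sketch in C++. The machine itself (a binary incrementer) and all the names are a toy example of my own choosing, not anything from the projects above:

    #include <iostream>
    #include <map>
    #include <string>
    #include <utility>

    // A single-tape Turing machine: a transition table mapping
    // (state, symbol) -> (next state, symbol to write, head movement).
    struct Rule { std::string next; char write; int move; }; // move: -1 left, +1 right, 0 stay

    int main() {
        // Toy machine: increment the binary number on the tape ('_' = blank).
        std::map<std::pair<std::string, char>, Rule> delta = {
            {{"scan", '0'},  {"scan",  '0', +1}},  // walk right to the end of the number
            {{"scan", '1'},  {"scan",  '1', +1}},
            {{"scan", '_'},  {"carry", '_', -1}},  // then carry 1s leftward
            {{"carry", '1'}, {"carry", '0', -1}},
            {{"carry", '0'}, {"halt",  '1',  0}},
            {{"carry", '_'}, {"halt",  '1',  0}},  // all 1s: grow into the left blank
        };

        std::string tape = "_1011_";  // 1011 in binary = 11
        std::string state = "scan";
        int head = 1;

        while (state != "halt") {
            Rule r = delta.at({state, tape[head]});
            tape[head] = r.write;
            state = r.next;
            head += r.move;
        }
        std::cout << tape << "\n";    // prints _1100_ : 11 + 1 = 12
        return 0;
    }

Any computation a classical computer performs can in principle be re-expressed as such a table plus tape, which is what Turing-equivalence amounts to.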

I heartily recommend John Vervaeke's Awakening From The Meaning Crisis lectures, partly as a short introduction to philosophy, but more for the framework he builds out of terms like relevance realisation, cognitive grip, and salience landscapes. This gets at the fundamental difference between information and meaning - the sifting of information for what is relevant, and the assembling of the useful bits, in a task-related way.
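
As a deliberately crude illustration of that sifting, here is a sketch that filters the same stream of information differently depending on the task at hand. The names are hypothetical, and a fixed keyword set is exactly what real relevance realisation is not - the point is only to show what 'task-related' means in the simplest case:

    #include <algorithm>
    #include <iostream>
    #include <iterator>
    #include <set>
    #include <string>
    #include <vector>

    // Keep only the items of the stream that matter for the current task.
    std::vector<std::string> sift(const std::vector<std::string>& stream,
                                  const std::set<std::string>& task_cues) {
        std::vector<std::string> relevant;
        std::copy_if(stream.begin(), stream.end(), std::back_inserter(relevant),
                     [&](const std::string& item) { return task_cues.count(item) > 0; });
        return relevant;
    }

    int main() {
        std::vector<std::string> stream = {"rain", "train times", "coffee", "umbrella"};
        for (const auto& item : sift(stream, {"rain", "umbrella"}))  // task: staying dry
            std::cout << item << "\n";                               // rain, umbrella
        return 0;
    }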

The computer-intelligence bible, which you will find a LOT more people in computing talk about than read, is Douglas Hofstadter's Gödel, Escher, Bach: An Eternal Golden Braid. It's not that it's technical - it really isn't; it's written a little like Alice In Wonderland, & a little like early Greek philosophy, and is full of thought-experiments (Gedankenexperiments), which people outside of physics always underestimate. But the book takes patience & tenacity to get through its doorstop size, while it keeps stretching your brain. Some chapters take ages of chewing over to absorb. It's an amazing work.

Hofstadter proposed that the defining quality of consciousness is what he called being a 'strange loop'. This comes out of the context of Gödel's incompleteness theorems & the halting problem for Turing Machines, so it takes a little background to appreciate. He argues that something happens when you relate lots of sets of data together that goes beyond circular reasoning, axioms, or infinite regress (the Münchhausen trilemma, a funny way of referencing 'impossible' things in the Baron Munchausen story): you get tangled hierarchies, with self-reference, feedbacks, and movement around our hierarchy of ways of knowing, in a way that is not quite machine-logical - and the origins of consciousness, he says, are there. This article has some nice examples of using it in philosophy - it's like a way to 'shew the fly out of the bottle', to use logic to go 'beyond' logic.
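
The most compact concrete example of the self-reference at issue is probably a quine: a program whose output is exactly its own source code. This one is a standard C++ construction rather than anything from the book, and it contains no comments because any extra character would break the self-reproduction (the trick: the string s is both the data printed and the template for the code around it, 10 and 34 being the ASCII codes for newline and the double quote):

    #include <cstdio>
    int main(){const char*s="#include <cstdio>%cint main(){const char*s=%c%s%c;printf(s,10,34,s,34,10);return 0;}%c";printf(s,10,34,s,34,10);return 0;}

Compile it, run it, and diff the output against the source: the program contains a complete description of itself, which is the flavour of tangled hierarchy Hofstadter builds on.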

Here's a relevant short post: How is knowledge possible?

And a longer post working from the top-down philosophical side on meaning, and covering intersubjectivity, an important idea in peer-to-peer reality building: According to the major theories of concepts, where do meanings come from?

There is a powerful tendency for people in science and computing to think there is nothing very interesting or special about human minds. And unfortunately there is a powerful strand in philosophy which says there is something so special about them that scientists aren't on track to figuring them out - the 'qualia' idea and the so-called Hard Problem of Consciousness. I strongly recommend not joining either camp. The story of physics has run from thinking, around 1900, that we were a few results away from explaining everything, to now not knowing what 95% of the universe is made of - our greatest progress has been to begin understanding the scope of our ignorance. I feel strongly we are on a similar trajectory with intelligence.

CriglCragl
3

Intelligence is sometimes related (I heard it in a philosophy meeting in Toulouse, France) to the ability to think and act coherently towards our deepest and highest-priority goal: survival. So it is not really intelligent for humans to make money by cutting trees or killing animals; we're just digging our own grave.

If an AI entity were able to survive and persist in time (self-repair, reproduce, enter into the ecological balance, counteract the 2nd law of thermodynamics), independently of us, with no help, perhaps it could be called intelligent.

Surviving over time is not simple. The entity would have to adapt to changing conditions, so it would need to evolve. This is perhaps the most relevant trait of any surviving entity: the ability to learn, about itself and the environment, and to act so as to improve its interactions and keep an internal order.

Regarding entropy: it is a property of a system, here the entity together with its environment. The 2nd law states, essentially, that energy spreads; that is, the entity will tend to reach the same state as the environment: dispersal. So the tendency to keep integrity and functionality must be embedded in the system.

An example:

It is easy to program a C++ object that keeps a variable "energy" which decreases with any interaction with the environment (e.g. on each method call), and increases with some specific actions. But as long as such "energy" is not the electrical energy that keeps the entity existing, it does not count. A robot, perhaps, would do the real task: run to the wall plug and connect itself to recharge its batteries. In that case, it would already need to recognize and find wall plugs.
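
A minimal sketch of the kind of object described, with illustrative names; as noted above, it only counts if "energy" stands for the real energy keeping the entity running:

    #include <iostream>

    // "Energy" drains with every interaction with the environment and is
    // restored only by a specific recharge action.
    class Entity {
        double energy_ = 100.0;
    public:
        bool alive() const { return energy_ > 0.0; }       // full discharge == death
        void interact() { if (alive()) energy_ -= 10.0; }  // every method has a cost
        void recharge() { if (alive()) energy_ = 100.0; }  // e.g. reaching a wall plug
        double energy() const { return energy_; }
    };

    int main() {
        Entity e;
        for (int i = 0; i < 5; ++i) e.interact();  // energy falls to 50
        e.recharge();                              // back to 100, if reached in time
        std::cout << e.energy() << "\n";
        return 0;
    }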

But what if the entity were built in France and then moved to England? The wall plugs are not compatible! The entity should have a mechanism that would allow it to find a solution.

Keep in mind that a simple battery discharge is the equivalent of the death of the entity, so it should have a mechanism to prevent that. Now, that would be intelligent. And from the moment robots adapt to a changing environment and learn new solutions that increase their probability of survival, we should be careful of them.

Update: following up on the question, is it necessary for the entity to have a physical body? No, according to this definition. Perhaps, if we were able to develop an abstract entity that simulates its own survival, the same rules could then be used to build it physically.

RodolfoAP
  • So, an amoeba gulping its food for survival is "smarter" than a sophisticated computer program that is designed to do a specific task. Also, self-learning programs wouldn't be instances of intelligent beings, since they don't struggle for their own survival. In order for a program to survive, it needs a body to contain the content (semantics) of the program. So does it imply that a physical body is a prerequisite condition for a being to possess intelligence? – James C Nov 27 '20 at 03:00
  • Following this concept of intelligence: 1. Yes, an amoeba is more intelligent than a mathematician working for a polluting company. 2. A lot of self-learning apps try to survive in their context, but that's not real survival, although it is an advance. 3. No, I don't think a body is mandatory, but if you can develop a game where entities find creative solutions to survive, that would perhaps be the most successful step towards a real intelligent entity. – RodolfoAP Nov 27 '20 at 03:05
  • "an amoeba gulping it's food for survival" is really a poor characterization. An amoeba is smarter than a supercomputer because it adapts to the environment, it keeps energy against the 2nd law of thermodynamics, it reproduces to avoid exhaustion of it... etc. Gulping is just a task, and is not what I would give credit an amoeba for. – RodolfoAP Nov 27 '20 at 03:11
  • If we pulverize all the memory cards of every electronic device in this world, then no program would survive. So if a machine has an instinct to survive, it will employ all possible tactics to somehow preserve its body. In this sense, even though a body is not a prerequisite of an intelligent being, an intelligent being will try to preserve its body for survival. Does this sound plausible? – James C Nov 27 '20 at 03:18
  • You are mixing concepts. A body is not necessary if the entity is meant to survive in a video game. But if it is meant to survive in real conditions, then yes, it would perhaps need a body (unless it exists and reproduces as an electronic virus in the neural systems of animals). And in that case, it would possibly fight to the death for survival. – RodolfoAP Nov 27 '20 at 03:24
  • When young, I used to play a game called StarCraft. The AI (bot) struggles to survive while playing against me, and it is an intelligent being in the context of the game. So our human mind, which is geared toward our own survival, is an intelligent being. I hope I understood your logic. – James C Nov 27 '20 at 03:41
  • If this is correct, then how would I characterize a suicide? It clearly disobeys our axiom: intelligence is a struggle for survival. – James C Nov 27 '20 at 03:42
  • Regarding suicide: living entities have a motivation for survival. Some philosophers state that we experience constant pleasure. If the motivation is lost (e.g. one suffers more pain than pleasure), then the deepest goal is not survival, but death. In such a case, suicide is intelligent, because the entity reaches its deepest goal. – RodolfoAP Nov 27 '20 at 04:04