6

For example: we observe an ant carrying food back to its nest. We may speak as if the ant has a goal of increasing the amount of food in the nest. We observe a student proofreading an essay. We may speak as if the student has a goal of improving the grammar and wording of the essay.

The most explicit definition of "goal" I can give is this: a goal is an outcome that an entity is seeking. If the entity chooses actions it believes will increase the chance or the value of the sought outcome, then the sought outcome is a goal.
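This definition can be put in miniature. The sketch below is purely illustrative (the action names and belief probabilities are invented): an entity "has a goal" in the above sense if it selects the action it believes gives the best chance of the sought outcome.

```python
# A goal-seeker in miniature: the entity picks whichever action it
# believes gives the best chance of the sought outcome.
def choose_action(actions, believed_success_prob):
    """Return the action with the highest believed probability of the goal."""
    return max(actions, key=believed_success_prob)

# Hypothetical beliefs of an ant about routes back to the nest.
beliefs = {"short_path": 0.9, "long_path": 0.6, "wander": 0.1}
print(choose_action(beliefs, beliefs.get))  # → short_path
```

Nothing here depends on the entity being conscious; the definition only requires that choices track believed chances of the outcome.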

To determine what goal the entity is seeking, we may observe first what the organism is doing, and second, how the organism changes its behavior in response to some disruption. If we place an impassable barrier in front of the ant, does the ant change its behavior to return to the nest by a different path? If we introduce spelling errors into the essay the student is proofreading, does the student fix them? Generally, if we suspect a system is seeking a goal, the way to test this suspicion is to put some obstacle in the way of the goal and see if the system can compensate for it and head towards the goal anyway.
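The obstacle test can be sketched concretely. Below is a toy illustration (the grid, the "ant", and the "nest" cell are all invented for the example): a path-planning agent in a grid world, which we perturb with a barrier and then check whether its behavior still terminates at the same cell.

```python
from collections import deque

def shortest_path(grid, start, goal):
    """Breadth-first search over a grid of walkable (True) / blocked (False)
    cells. Returns the list of cells from start to goal, or None."""
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:
            path = []
            while cell is not None:
                path.append(cell)
                cell = prev[cell]
            return path[::-1]
        r, c = cell
        for step in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = step
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] and step not in prev:
                prev[step] = cell
                queue.append(step)
    return None

# An open 5x5 field: the "ant" starts at (0, 0); the "nest" is at (4, 0).
open_field = [[True] * 5 for _ in range(5)]
baseline = shortest_path(open_field, (0, 0), (4, 0))

# Perturbation: wall off row 2, leaving a single gap at column 4.
walled = [row[:] for row in open_field]
for c in range(4):
    walled[2][c] = False
detour = shortest_path(walled, (0, 0), (4, 0))

# The agent compensates: both runs end at the nest, the second by a longer route.
print(baseline[-1], detour[-1], len(baseline), len(detour))
```

If the agent still ends at the nest after the barrier is added, by a longer route, that compensation is the behavioral signature of a goal in the sense described above.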

The above concept is that of an attractor. If we perturb the system a little bit away from a region, does the system return to the region?

There is an objection that many systems tend towards attractors without being intelligent decision-makers. A pendulum with friction tends towards a stationary vertical position as an attractor. Let's just set this aside for now, and suppose that we are only trying to determine the goals of a system with some minimal intelligence, and not applying the methods to simple systems like pendulums.
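For the record, the pendulum attractor is easy to exhibit numerically. A minimal sketch (the damping and timestep values are arbitrary choices): simulate a damped pendulum from several perturbed starting angles and observe that every run settles toward the same resting state.

```python
import math

def simulate_pendulum(theta, omega, damping=0.5, g_over_l=9.8, dt=0.01, steps=5000):
    """Integrate a damped pendulum, theta'' = -(g/l) sin(theta) - damping * theta',
    with semi-implicit Euler steps. Returns the final (angle, angular velocity)."""
    for _ in range(steps):
        alpha = -g_over_l * math.sin(theta) - damping * omega
        omega += alpha * dt
        theta += omega * dt
    return theta, omega

# Perturb the starting angle in several different ways...
for theta0 in (0.3, -1.0, 2.0):
    theta, omega = simulate_pendulum(theta0, 0.0)
    print(round(theta, 3), round(omega, 3))
# ...and every run ends near the same resting state (0, 0): the attractor.
```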

There is an objection that the process of testing might take an excessively long time, especially if the organism has secret long-term goals that it is deliberately hiding. But we are asking about what can ideally be found through observation, rather than within the limits of our current technology. With advanced enough technology we could measure the organism to such precision and detail that we can build a neural model of its brain, so that with a powerful enough computer we can answer all subjunctive questions about what it would do in different scenarios, without having to wait for long time periods. This is still empirical observation, because the model was derived from empirical observation, and we'd still be looking for the attractors in the organism's behavior.

Let us consider specifically that we have encountered an intelligent alien organism, so that we have no prior reference for its behavior and cannot rely on metaphors with humans. The only thing we can do is look at its behavioral attractors.

So, is it generally possible to impute the goals or intentions of an intelligent creature from its behavior and its (subjunctive) reaction to setbacks, thus reducing "intention" or "will" to a matter of empirical observation?

causative
  • 12,714
  • 1
  • 16
  • 50
  • 5
    There is a lot of anthropomorphism in "entity seeking", "entity chooses", "entity believes", as applied to ants and whatnot. And "imputing" of purposes to plants, biosphere, evolution, nature, etc., proceeded along similar lines, which brings up Kant's "purposiveness without a purpose". It is not just pendulums that are problematic. What you are setting aside is what should be front and center, that is exactly where the devil is. We need something to impute all that anthropomorphism first, your "minimal intelligence", imputing purposes is then a secondary issue. – Conifold Mar 09 '24 at 12:03
  • Some plants change position with the sun. Is this the intent of the plant or a biochemical response to light? Can the plant one day "decide" not to follow the sun? Separating actions with intent from responses to stimuli is not easy. – Idiosyncratic Soul Mar 09 '24 at 15:23
  • @IdiosyncraticSoul As I mentioned with the pendulum, we may set aside excessively simple mechanisms, and only concern ourselves here with whether we can determine the goals of an organism of sufficient intelligence. You ask if the plant can one day decide not to follow the sun - then I ask you if a human can one day decide not to follow any of their goals. I would say no; every human choice is in obedience to some goal or another. An action not in obedience to a goal, we would call involuntary, and not a choice. – causative Mar 09 '24 at 18:03
  • 1
    If the goal is deception, how can one be sure by observation alone? A confidence man conceals their goals through their "visible" acts. The same is true when observing a sleight of hand artist. – Idiosyncratic Soul Mar 09 '24 at 18:29
  • 1
    @IdiosyncraticSoul I've edited my post to mention that the observer is allowed to build a full model of the organism's internal structure so that they may answer what the organism would do in all situations. If the organism is deceptive about their goal, we could set up a subjunctive situation where it seems to the organism that the goal can be achieved without anyone else knowing. In such a situation, it would presumably openly pursue the goal without deception. – causative Mar 09 '24 at 18:35
  • Some consider a thermostat to have intentions, eg Dennett's https://en.wikipedia.org/wiki/Intentional_stance I recommend Krakauer for a ground-up picture of intentionality, that can help us picture how inanimate systems cross a threshold into being 'teleonomic' (see link in my answer). The plant's genes contain an image of its niche, & what behaviours led to reproduction. But that's not yet intentionality because the map the plant has doesn't include itself. "Man can do what he wills but he cannot will what he wills." Your voluntary = goals, fails, because we don't always know our own minds – CriglCragl Mar 09 '24 at 23:06
  • Even with an exhaustively full model, you have issues around sensitivity-to-initial-conditions. If you can make a model of the mind of unlimited complexity, & run it through unlimited possibilities, you have a computational system more complex than the organism, that is its own being & own mind, to which you would have moral duties in the 'Mindcrime' sense. You've also seriously stretched the normal meaning of 'by observation' now. – CriglCragl Mar 09 '24 at 23:13
  • @CriglCragl I don't think it stretches "by observation" at all to say that the scientist can build models of arbitrary complexity of what he observes. It's necessary for the practice of science; science can never hope to achieve a comprehensive understanding of anything without producing powerful and complex models. RE intentionality requiring a model of itself, what makes you think that's necessary even in humans? Subjectively it seems I often "forget myself" in a task, when I'm focusing particularly well on the task, and I would not say I am not exercising intentionality then. – causative Mar 09 '24 at 23:15
  • I would even say that when I "forget myself" in a task, I am exercising an exceptionally pure form of intentionality - "burning oneself up" in the task, as they say in Zen. – causative Mar 09 '24 at 23:18
  • There's a difference between modeling & copying. We have no clear idea how to copy a brain. Models need to shed irrelevant complexity, to make operating them tractable. Such reduction of complexity we relate to insight, or understanding. Copying, isn't that. Honestly I think your whole process of framing this question is deeply unrecoverably muddled. You are basically just asking now, 'Are minds physical?', while having already decided they are. You aren't getting at any meaningful issues. – CriglCragl Mar 10 '24 at 00:50

5 Answers

2

Up to a point, yes, but the visible aspects of a person's behaviour may contain a degree of ambiguity. If you see me reaching for a medical textbook, am I studying for an exam, or researching to write a novel, or trying to diagnose symptoms I possess, or trying to diagnose symptoms someone else possesses, or planning to poison my annoying neighbour, or looking up an answer to a crossword clue, or compiling a pub quiz, or picking up an object to prop up a wonky tripod, or etc etc. I am sure you can think of some other possible goals I might have in mind when reaching for a medical text. You might respond to my challenge by saying that were you to observe the behaviour for a longer period, much of the ambiguity might fall away. True. But I can also play the trick of extending the temporal period under consideration. Suppose I am studying for an examination. Is my 'goal' to pass the exam, or is it to keep my parents happy, or is it to earn a ton of money from offering plastic surgery to the image-obsessed, or is it to infiltrate the army medical corps as a Russian spy, or is it... etc etc.

In summary, yes you can probably figure out what someone is trying to achieve from looking at what they are doing. Who would have thought it?

Marco Ocram
  • 20,914
  • 1
  • 12
  • 64
  • Your argument is that you usually can. That isn't the question though. – CriglCragl Mar 09 '24 at 13:30
  • Rather than needing to observe actual behavior over a long time period, we can (in principle) through observation build a model of what you subjunctively would do in hypothetical situations, as an engineer might build a finite element model of a bridge via observing and measuring it. (e.g. this could include a whole brain neural model.) Then we can use the model to test how your behavior would change in response to disruptions, allowing us to impute your behavioral attractors without needing a long time period of direct observation. – causative Mar 09 '24 at 18:08
2

No, because it is in the nature of minds that purposes can change and update through internal processes which cannot be observed from behaviour.

Consider how evolution doesn't just 'know' what to do; it tries things, and successful things remain as impulses to behaviour. If you can repeatedly observe a species in the right circumstances to trigger such an impulse, you will be able to see the behaviour helping the animal survive and reproduce. But if it doesn't get triggered, you can't, and you won't know what is there to be triggered, or when. A good example might be cats that get terrified by cucumbers (links to an article by a veterinary behaviourist on the topic). Only through observation of many cats with many objects could you stumble on that. And you can't know in advance what hidden behaviours there are, or what triggers them.

More generally, research on post hoc reasoning shows that humans often jump to conclusions without reasoning, consciously unaware of why, and only then use reasoning to find justifications. We see this in the Is-Ought distinction in moral reasoning, and in encounters with 'moral dumbfounding' (links to a journal article on the topic). We can look for explanation to Kahneman's Type 1 & 2 Reasoning: full scrutiny is resource-intensive and slow, and much of life requires getting on with 'good enough' responses until they fail or we face unknowns.

You can also look at the total failure of the Behaviourism research paradigm in psychology (link is to Wikipedia section). I would point to ideas about intersubjectivity, and cognitive models for social prediction evidenced by the Dunbar Number and the Default Mode Network, to understand how we go from observing the behaviour of others on to 'getting inside their heads'.

You might also look towards the idea of Strange Loops, where a system's ability to carry a model of itself in its reasoning, to shape its choice of how to be, introduces unpredictable feedback loops that are no longer subject to simple forms of logic (i.e. first-order logics).

You are interested in Strange Attractors, so you might like to look into leading complex-systems theorist David Krakauer's framing of 'teleonomic matter', eg as discussed in this Mindscape podcast episode: Complexity, Agency & Information. It's an approach which can derive 'purposes' of living systems and non-living ones from the same framework.

CriglCragl
  • 21,494
  • 4
  • 27
  • 67
  • Internal processes are in principle subject to empirical observation, however, such as through a microscope or EEG. We need not assume the observer is blind; they can build a model of the organism's internal structures and processes, which they can use to answer all subjunctive questions about it, assuming they have sufficiently advanced sensors and computers. – causative Mar 09 '24 at 18:31
  • @causative: If you make a model as complex or more so than the organism, yes, you could predict it, because you have copied the organism. That's not truly predicting it, but just watching it, potentially ahead of time, in its copied form. – CriglCragl Mar 09 '24 at 23:17
  • So you think that if a scientist makes a model of a physical system that is more complex than the system itself, then the scientist is not actually predicting the system? For example, if they use a Top500 supercomputer to predict how a rubber ball bounces, they aren't actually predicting the rubber ball? I don't think I'd agree with that, and I don't see what relevance these semantics over the word "predict" have to imputing goals through observation, anyway. – causative Mar 09 '24 at 23:29
  • The supercomputer might be able to draw out predictions even very large numbers of experiments could not, still reducing complexity, otherwise just do the experiments. Why is the supercomputer time being allocated? They say all the computing power on Earth, could fully simulate all the quantum data of a lump of matter slightly larger than a tennis ball. Ok great. But why do that? Trying to predict humans from atoms is, misguided. We have tools like 'character' in heuristic explanatory overlays to reduce complexity gigantically, at the cost of being imperfect. NB Borges' life-size map... – CriglCragl Mar 10 '24 at 01:03
2

There are two different questions being asked here. The one in the title, can we impute goals to organisms based on behavior, is a clear yes. Behavior provides the main data we work off of for imputing goals.

The second question is at the end of the discussion: can goals be REDUCED to behavior, as per the discussion of attractors, inhibitors, etc.? The answer on this reductionism is a NO. To infer goals, we are NOT only looking at behavior; we are also looking internally, at our own minds, extrapolating from our own internal intentionality, and inferring a theory of mind for other agents too. This internally based inference is NOT just a replication of our own minds; it also involves speculative thinking about how other minds can differ from our own. It is a significant secondary source of data, and it is what allows us to understand complex motives and deception.

The attempt to reduce mind to behavior alone was central to the philosophic and psychological movement called behaviorism. As a psychological theory, behaviorism is really only useful for very simple creatures that do not have awareness of the intentionality of other creatures. Behaviorism tried to do psychology without Theory of Mind, and has mostly been abandoned as an extended, bizarre failure. Since awareness of the intentionality of other creatures is central to psychology, it was basically an effort to abandon/ignore the psycho part of psychology.

Dcleve
  • 13,610
  • 1
  • 14
  • 54
1

Two cents.

We know goals (eg as mental states) exist because we have goals and experience ourselves acting towards satisfying those goals (although this does not mean we always succeed).

From this observation about our own goals we can extrapolate and assume that other living systems can have goals as well (for example, the species closest to us, such as some apes).

From this we can extrapolate further, though with some risk, and assume that many or all living things have goals (although the exact experience of those goals may vary widely). This type of extrapolation is open enough to allow for variation, which keeps it quite plausible.

The OP tries to represent goals as attractors (of behavior). This is plausible, and by known results on dynamical systems (e.g. KAM theory) we know that a perturbed system tends to stay close to its original behavior, provided the perturbation is within some limits.

Can we attach goals to an alien creature based on observing their behavior?

My opinion is that it is possible under two conditions: a) a sufficiently long and varied period of observation, which b) implies beyond reasonable doubt that the best explanation for what is observed is the presence of certain goals.

Nikos M.
  • 2,676
  • 1
  • 11
  • 19
1

Not in general, because not everything has goals. Many processes have effects that we can recognize, but that doesn't mean they were explicit goals.

For example, sunshine and rain (indirectly) cause plants to grow. But we would not say that this was their goal. They are spontaneous processes, and plants simply take advantage of them.

Ancient civilizations did not have the same perspective, though. They inferred intentionality everywhere, and formulated the existence of gods that controlled these processes; the gods had goals and could be influenced through prayer and sacrifices.

Now we understand physical processes somewhat better. Many of them (like rain) just happen. We generally only impute goals to living matter, but even there it can be difficult to distinguish goals from unintentional side effects. When a bee flies from flower to flower, it gathers nectar and pollinates the plants; we usually consider the first to be a goal (because it directly helps the bee) and the second to be a side effect, but this is somewhat arbitrary. Bees and flowers co-evolved to take advantage of each other -- bees benefit from pollinating plants so there will be more of them to get nectar from.

Barmar
  • 1,710
  • 8
  • 13
  • About the rain - hold on, I mentioned that in this question we are considering only organisms of a sufficient minimum intelligence. Rain clouds, presumably, do not qualify. About the bee - in the question, the criterion for something to be a goal is not whether it benefits the bee. It is whether, when we disturb the bee's activity, it adapts so that it still heads towards the goal; we look for behavioral attractors. If we place obstacles in the bee's way or move the flowers, it will still find a way to bring nectar back to the hive, so bringing nectar to the hive is a goal of the bee. – causative Mar 10 '24 at 14:13
  • However, if we coat the bee's legs and hairs with a non-stick polymer so they do not carry pollen, or shave the pollen-carrying hairs, the bee won't find some other way to carry pollen and fertilize the plants. It will go about its rounds as before, gathering nectar. So fertilizing plants is not a goal of the bee, even though it indirectly benefits the bee. – causative Mar 10 '24 at 14:14
  • Perhaps a better test would be to remove the pollen from the flowers but leave the nectar - the bee will go about its rounds as usual, unbothered by the lack of pollen. But if we remove the nectar from the flowers but leave the pollen, the bee will keep visiting more and more flowers until it has enough nectar to bring back to the hive. Assuming the experiment plays out as I'd expect, it would show that the nectar is a goal of the bee but the pollen is not. – causative Mar 10 '24 at 14:24
  • e.g. if you remove 80% of the pollen but leave the nectar, we'd expect the bee to visit the same number of flowers as before. But if you remove 80% of the nectar and leave the pollen, we'd expect the bee to visit roughly 5x as many flowers as before, so it can get a full load of nectar to bring back. – causative Mar 10 '24 at 14:37
  • "only organisms of a sufficient minimum intelligence" -- that raises the philosophical problem of defining intelligence. Reality doesn't have hard boundaries, everything is a continuum. – Barmar Mar 11 '24 at 16:47
  • Regarding pollen, we can ask why bees have the hairs that pick up the pollen in the first place. Evolution may have selected for them because of the way bees and flowers coevolved. – Barmar Mar 11 '24 at 16:49
  • The "sufficient minimum intelligence" is not a philosophical problem, it's simply a matter of what this current question is about. Personally I would say that a pendulum with friction, that has as an attractor the vertical stationary position, is following a goal, because I see no particular essential difference between what the pendulum does and what a human does when (say) driving a car, other than complexity. But this is objectionable to some people so I am saying in this question we are sidestepping that discussion, and it is simply not part of the question. – causative Mar 11 '24 at 18:13
  • In other words, to disprove the claim of the question, it is necessary to find examples where an organism has sufficient minimum intelligence to have goals (by your own judgment), the organism's goal is clearly known, and yet the proposed process of looking for behavioral attractors does not work to find the organism's goal. Regarding pollen - of course evolution put the hairs there. But that does not make it the bee's objective. A human may willingly choose not to reproduce, in defiance of all evolution. An organism's goals do not necessarily coincide with their evolution. – causative Mar 11 '24 at 18:17