17

Today's Saturday Morning Breakfast Cereal raises an interesting philosophical issue.

The comic posits a machine that makes your life perfect and happy, and asks whether people would get bored of this. The response given is that this can't happen as long as the machine is working correctly: feeling bored, unfulfilled, etc. are negative emotions that the machine must eradicate to make you feel happy. The final panel asks:

But if an outside device is directly adjusting your thoughts and emotions, will you even feel like you're you anymore?

The answer is along the same lines:

I will if it tells me to!

Basically it's arguing that all your feelings are just the result of stimulating the appropriate neurons in the brain, so the machine can make you feel anything it's designed to make you feel.

Is there a flaw in this argument, other than the sheer impracticality of a machine with that level of detailed control over the subject's mind? We have antidepressant drugs, but they have negative side effects, and they're not able to counteract the emotional impact of those side effects.

Or is there something contradictory about being happy when there's nothing to be happy about? It seems like just the artificial high that comes from taking psychedelic drugs. In the real world you can tell that such a high is dream-like, but the comic suggests that the machine can cancel that awareness, so you don't know your happiness is artificial.

It seems like a realization of Plato's shadows on the cave wall, not unlike The Matrix if it didn't have the bugs that allowed some of its inhabitants to tell that they were inside a machine and take control of it.

Barmar
  • 1,710
  • 8
  • 13
  • 1
    This is why I am glad that the universe is a dangerous, depressing, bleak place: because when I am happy, which is often, I know it is me. – Scott Rowe Dec 28 '23 at 01:36
  • The answer to this will depend on your chosen philosophy of mind. If you are a mind-body dualist then a physical machine is only able to "turn the knobs" of your physical self. Any emotions/qualities of your mental self are out-of-reach, so to speak, and limit the capabilities of such a machine. It may be the case that even a materialist pleasure machine would find limitations in this regard due to physical laws, but those limitations would likely not be as aesthetically pleasing as Cartesian theatre mental states. – Him Dec 29 '23 at 17:46
  • 1
    I'm not a dualist, I think there have been enough studies of the brain to prove conclusively that our thoughts and feelings directly correspond to physical changes in the brain, and the brain can be manipulated to cause thoughts and feelings (although not at the level of detail necessary to implement the proposed machine). – Barmar Dec 29 '23 at 18:26
  • 1
    I don't think the impracticality of the proposed device is a flaw in the argument as such, only a feature of the rather extreme hyperbole that SMBC sometimes reaches for. – ilkkachu Dec 29 '23 at 19:48
  • @ilkkachu Right, it's not really a flaw in the argument, just a reason why it's just a theoretical question. – Barmar Dec 29 '23 at 19:54
  • Coincidentally, this is my personal solution to the Fermi paradox: all alien civilizations who are advanced enough have created their perfect-life-machines and are happily spending the eternity inside them. – Vilx- Dec 29 '23 at 23:20
  • Oh, and if you think about such a machine, it gets much, much worse. First of all, you'd probably want immortality to go along with it. Or at least to live as long as the sun lasts. Which means that you'd need to optimize for energy expenditure and life support. AI would be absolutely necessary to run these machines over the aeons, and it would optimize the hell out of them. For a start, any body parts not needed to experience pleasure would be cut away. That means leaving just your brain (and maybe a bit of spinal cord) in a jar. [Contd.] – Vilx- Dec 29 '23 at 23:33
  • Then the brain itself can be further optimized by removing regions that are irrelevant to the production of pleasure. Speech centers, visual processing, spatial processing, anything that gets triggered during bad emotions - you name it. The result would be a much more simplified, streamlined brain which both uses less energy to survive and is easier to control (thus less energy usage for the machine). At which point we get to the even more philosophical question of what exactly constitutes "pleasure" and "happiness". Thus, see https://www.smbc-comics.com/comic/happy-3 – Vilx- Dec 29 '23 at 23:36
  • @Vilx- How did we ever pass code review? Or Energy-Star? – Barmar Dec 29 '23 at 23:39
  • @Barmar Evolution is inefficient. – Vilx- Dec 29 '23 at 23:39
  • @Vilx- I'm wondering why AI that powerful would bother basically keeping slugs in jars? – Scott Rowe Dec 30 '23 at 00:01
  • 1
    @ScottRowe Because the slugs programmed it to. ¯\_(ツ)_/¯ – Vilx- Dec 30 '23 at 10:52
  • 1
    I feel the key element of this comic strip's argument is that, by definition, it would not be a perfect perfect-life-machine unless it banished boredom, which is about as profound as the observation that a male fox cannot be a vixen. The author did not have his protagonist go so far as to say that such a machine would not be perfect unless it exists, and therefore that we can deduce the existence of such a machine, but perhaps that would be just too absurd even for a comic strip that relishes absurdity. – sdenham Dec 31 '23 at 00:17
  • 1
    @sdenham the machine actually does exist, it is floating in space between Earth and Mars, but we haven't spotted it yet with telescopes. – Scott Rowe Dec 31 '23 at 13:25
  • 1
    @ScottRowe Rumor has it that it is in the trunk of Elon's Roadster, along with Russell's teapot. – sdenham Dec 31 '23 at 15:00

8 Answers

9

There is no contradiction.

Imagine a hypothetical scenario, not in the "perfect-life machine," where you are ecstatically happy for real-world reasons. Perhaps you are witnessing the birth of a child, or are happy in the arms of someone you love, or you just won the lottery, or your small business made a huge sale, or you discovered the cure for cancer, or your favorite sports team won the championship.

Now, if the perfect-life machine could replicate the exact brain state you have during this ecstatic moment, and create good-enough illusions that the great thing is really happening, you would feel and think exactly the same way. You would believe that the great thing really was happening - the machine would have deluded you into believing it - and you would feel just as fantastic. You would have to, unless you are proposing there is something about the way we feel that is independent of our brain state. Are you proposing that?

Of course, all of this is independent of the question of whether we should step into such a perfect-life machine, or whether it's a rational choice to do so. From our current perspective outside the machine, we know the happiness would be based on a delusion, even though it would indeed be a real feeling of happiness. Personally, I would reject the machine unless I thought there was no further chance for real happiness in real life. From my perspective outside the machine, I judge that one unit of happiness for genuine real-world reasons is worth many units of happiness based on fake delusions created by the machine.

There are also valuable things in real life that go beyond a mere feeling of happiness. Discovering or understanding things, making a positive difference in the lives of others, having power and proving the validity of my ideals - these things have value beyond any happy feeling they might provide to me. The perfect-life machine cannot grant these things, only the illusions of them. From inside the machine I wouldn't know the difference - but from outside, I do, and therefore from outside I don't want to go in.

causative
  • 12,714
  • 1
  • 16
  • 50
  • 1
    Does this answer imply that a perfect-life machine should control the entire brain state, including the memory? That is, when replicating a given state of happiness, should it erase the memory of previous happiness events? It seems that if the memory of previous happiness persists, then the person will eventually get bored of lack of variety (thus ruining the goal of the machine). If the memory is lost, then the person wouldn't feel that they are always happy (again, ruining the advertised goal). – Igor G Dec 28 '23 at 08:14
  • @IgorG If the machine erases your memory of the previous happiness event each time it starts a new happiness event, it can still make you always be happy - it just wouldn't make you remember always being happy. Different things. Also, I don't think it's necessary to fully erase all memory to eliminate boredom. It could instead tweak things so you can remember previous events if you try, but rarely think to do so, so they don't bother you. It could also reduce the influence such memories have on your emotional state, and directly influence that state to not be bored. – causative Dec 28 '23 at 10:02
  • 1
    @IgorG It can also create synthetic memories so you think you have always been happy. Kind of like how God created the geologic and fossil records to make it look like Earth is billions of years old, when it was actually created in 7 days about 5,000 years ago. :) – Barmar Dec 28 '23 at 15:52
  • 1
    @Barmar, so basically, we must assume the machine is omnipotent (brainstate-wise, that is), as it must be able to tweak every aspect of the brain and memory. Well, that's viable. But kind of trivial, since the assumption of omnipotence can explain anything :-) – Igor G Dec 29 '23 at 01:56
  • 1
    @IgorG Yes, that's the practicality issue I mentioned in the question. This whole thing is totally theoretical, but that's hardly uncommon in philosophy. – Barmar Dec 29 '23 at 02:04
  • https://www.youtube.com/watch?v=JODWCwycNmg – BlueRaja - Danny Pflughoeft Dec 29 '23 at 10:06
6

I think the reasoning of the comic is actually pretty air-tight. Boredom is a mental experience, and such a machine should thus have control over that experience, including the ability to stop it.

There are two ways for such a machine to stop the experience of boredom:

  1. Provide the subject with enough variety of mental experiences such that they wouldn't get bored anyway.

  2. Manually block the pathways leading to the experience of boredom, so whenever the subject would feel bored, they simply... don't.

I don't see any in-principle reason why such a machine would be impossible - why a computer couldn't pursue strategy 1 to the best of its ability for a human subject, and occasionally resort to strategy 2 when necessary.
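Treated as a control problem, the two strategies compose naturally: try variety first, and fall back to direct suppression when variety fails. Here is a deliberately toy sketch of that loop (every class, name, and number is invented for illustration; nothing here models actual neuroscience):

```python
import random

class Subject:
    """Toy stand-in for the person in the machine (an invented model)."""
    def __init__(self, threshold=0.8):
        self.boredom = 0.0
        self.threshold = threshold
        self.seen = set()

    def experience(self, stimulus):
        # Repeated stimuli raise boredom; novel ones relieve it.
        delta = 0.2 if stimulus in self.seen else -0.1
        self.boredom = max(0.0, self.boredom + delta)
        self.seen.add(stimulus)

def run_machine(subject, steps=100):
    stimuli = ["music", "puzzle", "vista", "conversation"]
    for _ in range(steps):
        subject.experience(random.choice(stimuli))  # strategy 1: variety
        if subject.boredom > subject.threshold:     # variety has run out
            subject.boredom = 0.0                   # strategy 2: suppress directly

run_machine(Subject())
```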

TKoL
  • 2,569
  • 5
  • 15
  • 2
    When I was 14 I decided that being bored is stupid, because I have a limited lifespan and so boredom meant that I was too dumb to figure out what to do with it. I haven't been bored since, despite being in some incredibly boring circumstances at times. It is a decision, like most states of mind. Unless I have that machine built in to my head... – Scott Rowe Dec 28 '23 at 11:44
  • 1
    @ScottRowe that's admirable. I find myself bored all the time - I often am too dumb to figure out what to do. – TKoL Dec 28 '23 at 12:05
  • 1
    One good way to deal with unpleasant states like pain (like at the Dentist) or boredom is to look deeply in to the actual feelings at that moment. Often they will sort of unravel or at least become bearable. A strategy for some pains is basically, "put your mind on something else" - sometimes possible. Self-inquiry dissolves the 'me' that is experiencing the negative state and then there are just unpleasant feelings instead. Takes a while to master. – Scott Rowe Dec 28 '23 at 12:12
  • 1
    @ScottRowe But finding novel things to do all the time can be so tedious. :) – Barmar Dec 28 '23 at 15:47
  • «Manually block the pathways leading to the experience of boredom» — but what if «boredom» is just a state when you're having no, or not enough, «experiences»? Like, you're bored when your "pathways" are already blocked. – user28434 Dec 29 '23 at 08:25
  • @user28434 boredom is not just this passive state that happens whenever not enough other stuff is happening. Boredom is an active process in the brain. – TKoL Dec 29 '23 at 09:07
4

I think the comic is making a subtle equivocation. The starting premise of a happiness machine I take to be this:

Happiness is a feeling produced by a chemical state in our brain. Therefore, by precisely feeding chemicals to the right neural pathways, the happiness machine will bring about the happiness feeling.

However, as the comic goes along, this premise implicitly changes:

Happiness is a feeling dependent on what we think, which means that the wrong thoughts will end the feeling of happiness. An example is the thought that our sense of happiness is artificial. Consequently, the happiness machine must ensure we are thinking the right thoughts to make us happy.

The subtle switch is from "happiness is a particular chemical administered to a particular neuron" to "happiness is a cognitive structure." The two are different, as we can see with computers. Computer programs run by means of electricity administered to transistors. However, the program itself is not tied to particular electrical currents and particular transistors. The same program can run on many different computational platforms. Conversely, the same electrical current administered to the same transistor can have dramatically different effects depending on the program context it occurs within.
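A minimal illustration of that multiple-realizability point (the examples here are mine, not the answer's):

```python
# The same program is not tied to particular transistors: this function
# behaves identically on an x86 laptop, an ARM phone, or a RISC-V board.
def double(x):
    return 2 * x

assert double(21) == 42   # same result on any platform that runs Python

# Conversely, the same primitive operation takes its meaning from the
# surrounding program: writing the integer 1 into a slot can mean
# entirely different things in different contexts.
mood = {"happy": 0}
alarm = {"fire": 0}
mood["happy"] = 1    # here the write means "feel good"...
alarm["fire"] = 1    # ...here the identical write means "evacuate".
```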

So, bringing this back to the happiness machine: the original premise is that the happiness machine is responsible for administering particular chemicals to particular neurons. But then we switch from assuming happiness is based on a chemical reaction to assuming happiness is based on a cognitive state. Using our computer analogy, the switch is like moving from saying happiness is electricity applied to a transistor to saying happiness is a particular program.

If all our happiness machine can do is apply chemicals to neurons (electricity to transistors), yet happiness itself is more like a program, then the happiness machine is operating at the wrong level to ensure the right program is running. To continue this line of thought, it is also impossible for a device that operates at the transistor level to comprehend the program level, per the halting problem. The happiness device must instead be able to comprehend the "happiness program" and operate on it to keep the program within the happiness parameters, which in general is undecidable due to Rice's theorem. This means a happiness device is most likely equivalent to solving the halting problem, which is impossible.
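For readers who want to see the shape of the undecidability claim, here is the classic diagonalization sketch, written in Python for concreteness (a standard textbook argument; the `halts` oracle is hypothetical and deliberately stubbed out):

```python
def halts(prog, arg):
    """Hypothetical halting oracle, assumed only for contradiction.
    Turing's argument shows no correct total implementation can exist."""
    raise NotImplementedError("no such decider exists")

def paradox(prog):
    # Do the opposite of whatever the oracle predicts about prog(prog).
    if halts(prog, prog):
        while True:        # loop forever if predicted to halt
            pass
    return "halted"        # halt if predicted to loop

# If halts were real, halts(paradox, paradox) could be neither True nor
# False without contradicting paradox's own behavior. Rice's theorem
# generalizes this to any non-trivial semantic property -- including
# "this program keeps its subject within the happiness parameters."
```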

This doesn't preclude a limited happiness device, which only performs as expected within narrowly defined scenarios, but then the happiness device becomes no different than anything else we do for enjoyment using a machine, such as playing a video game or watching a movie. The promise of a happiness device is universality, capable of making someone always happy, and as discussed such a device is logically impossible.

yters
  • 1,877
  • 14
  • 20
  • This is an interesting take on it, but the flaw seems to be that the brain-computer is programming itself -- the cognitive structures are the result of the neural stimulations and vice versa. – Barmar Dec 28 '23 at 16:00
  • The brain also has pathways that work in two directions -- happiness causes you to smile, but smiling can also cause you to feel happy (although usually not as much as a real happiness-causing experience). – Barmar Dec 28 '23 at 16:02
  • @Barmar yes, I agree the brain-computer is running a program, and neural stimulation causes the cognitive structure, but that doesn't avoid the problem. Just as the fact that programs run on physical computers doesn't avoid the halting problem. It is still a question of decidability. If happiness is a cognitive structure instead of a chemical, which is implied in the SMBC comic, then we run into the halting problem. – yters Dec 28 '23 at 16:08
  • 1
    Happiness is a feeling produced by a chemical state in our brain. Therefore, by precisely feeding chemicals to the right neural pathways, the happiness machine will bring about the happiness feeling. - I don't think the comic implies that that's what a happiness machine would do. – TKoL Dec 28 '23 at 16:49
  • 1
    @TKoL regardless, if happiness is a state instead of a chemical, the halting problem applies. – yters Dec 29 '23 at 01:52
  • @yters i don't know what you mean by that. – TKoL Dec 29 '23 at 09:08
  • @yters: The comic begins with "Do you think if machines could make your life perfect you'd just get bored?" and it's hard to read that as talking about individual chemicals or neurons. It's clearly talking about the more capable machine from the beginning. The halting problem may be relevant if you want to reject the terms of the question in the first place, but that appears to be what the comic is doing anyway. It's just taking a different path to get there (i.e. the comic is arguing "A perfect-life machine must be perfect by definition, so stop trying to find flaws in it."). – Kevin Dec 29 '23 at 17:18
  • @TKoL it might be logically impossible, rather than merely very impractical, to consistently compute what to do to the brain to create happiness. – hegel5000 Jan 02 '24 at 13:32
3

A reductio that I think gets at the intuition:

  1. Suppose you can be made to be happy.
  2. To be happy you must not believe that you ought not to be happy.
  3. Therefore to make somebody happy you must make them believe that they ought to be happy, regardless of what their unimpaired cognition and apperception would tell them.
  4. Therefore to be made to believe that you ought to be happy is to have your cognition and apperception compromised such that you can no longer effectively reason. We have a name for such a state: dreaming.
  5. A person who is dreaming of being happy is having a pleasant dream, not being happy.
  6. Therefore to be made to be happy is to be made to not be happy.
  7. 6 is a contradiction.
  8. Therefore 1 is false. (But you can be made to have pleasant dreams, and such dreams may not prevent the formation of accurate memories of real events.)
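Rendered schematically (the propositional labels are editorial, not the author's), the reductio has this shape:

```latex
% M: you are made to be happy by the machine
% D: your cognition/apperception is compromised ("dreaming")
% H: you are happy (rather than merely dreaming of being happy)
\begin{align*}
1.&\ M                     && \text{assumption (premise 1)}\\
2.&\ M \rightarrow D       && \text{premises 2--4}\\
3.&\ D \rightarrow \lnot H && \text{premise 5}\\
4.&\ \lnot H               && \text{1, 2, 3, modus ponens}\\
5.&\ M \rightarrow H       && \text{what ``made to be happy'' means}\\
6.&\ H                     && \text{1, 5, modus ponens}\\
7.&\ \bot                  && \text{4 and 6: the contradiction of step 7}\\
  &\ \therefore \lnot M    && \text{reductio, as in step 8}
\end{align*}
```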
g s
  • 5,767
  • 2
  • 6
  • 24
  • What if you were already happy, for good reasons known to yourself? – Scott Rowe Dec 28 '23 at 01:58
  • @ScottRowe reminds me of a story I heard about an old magazine ad for a cheap pest-control device, guaranteed to kill all common bugs when properly used. When it arrives, it's two blocks of wood and an instruction sheet: place bug on unit A. Strike hard with unit B. Repeat as needed. – g s Dec 28 '23 at 02:29
  • 2
    This line of reasoning looks full of holes to me. – TKoL Dec 28 '23 at 09:43
  • An interesting thing I've noticed about dreams is that they can include stuff that works in the dream, but on waking makes no sense at all. Rarely in the dream I might catch these aberrations and go huh? but then continue on. Someone once said that laughter is a state of temporary insanity. So, yeah, happiness is weird. – Scott Rowe Dec 28 '23 at 12:01
  • There was a TV show a few years ago about a man who lived in two realities -- when he went to sleep in one world he woke up in the other. Each one seemed like the dream world of the other, and we never knew which was which. – Barmar Dec 28 '23 at 15:56
  • @Barmar "Life is but a dream, sweetheart" – Scott Rowe Dec 30 '23 at 00:16
  • I think step 5 is a pretty bold claim. – Nathan Hinchey Feb 23 '24 at 14:56
3

I think what the question truly boils down to is what constitutes happiness. If you suppose that happiness is nothing but the right stimulation of feelings, the comic is correct. To disagree with the comic you need to provide a different account of happiness, and the only one I think would suffice is one that considers an immaterial aspect of happiness. The machine, being material and only able to affect you materially, would be unable to wholly fulfill your happiness. But, otherwise, it can give you all you need.

If classical psychology is correct, human emotions and the senses are material but intellect and free will are immaterial. The purpose of free will is to will the good, and the purpose of the intellect is to grasp truth. Grant for a moment that goodness and truth have irreducibly immaterial aspects. Let's also suppose that to be happy you need to fulfill your highest purposes, which are precisely those of free will and the intellect. If the machine renders you unable to move your free will towards the good, it'll leave you frustrated. The machine may be able to induce in you the feelings associated with knowing truth, but it won't be able to impart upon you truth itself, leaving you (again) frustrated. Much less will it be able to provide you with ultimate goodness and ultimate truth. What are they, you may ask? Many philosophers would say they're one and the same thing: God, the final end (telos) of human beings. But you can bracket that off for the moment because, still, the machine wouldn't be able to provide you with the immaterial aspects of truth and goodness.

But any part of this stance would be too unpopular nowadays. Immateriality? God? Pff, not in our enlightened age. Plato would be horrified, yes, but that's because he believed that contemplating the forms was greater than anything the senses could provide. If the intellect is reducible to the senses, he was wrong to suppose a difference anyway. The contemplation he wanted could be simulated, as could any other human experience, even the most valuable moments of life you can think of. Everything is shadows on the wall. Sure, it would be an illusion, but what's the point of truth if you can't tell the difference, just as in a dream? Most modern people, if pressed enough, would grant that goodness and truth are phenomena of the mind. They don't exist out there somewhere waiting to be found by us; they're things we project upon an amoral and absurd universe, which the human mind is in no way over and above, as religion supposes. Under such a framework, this machine is the closest thing to heaven anyone will ever have. So why would one pass up the chance, I can only wonder. Maybe he believes in something better, something beyond anything the brain can experience, and I can only wish him good luck in attaining something so transcendent.

Mutoh
  • 656
  • 3
  • 8
2

The US Declaration of Independence is based on Life, Liberty and the pursuit of Happiness (probably inspiring the French Revolution's liberté, égalité, fraternité). The Canadian constitution, OTOH, is based on Peace, Order and Good Government.

If you're a Canadian, the paradox of your question is no problem; if you're a US-ian¹ it is.

Ok the above is tongue in cheek. A fuller version would get too political.

A possible direction for an answer would go toward Hofstadter's {True, False, Mu} or Edward de Bono's {Yes, No, Po}. That is, the question is best solved by not being asked. Though I would freely admit that as the threat of the successors of ChatGPT comes closer, the problem becomes less philosophical and more immediately practical. Given that I'm far into the wrong side of 50, I'm hoping to be dead before humans willingly and lovingly² enter the Matrix.


¹ US-ian is politically correct, whereas American is grammatically correct. That I prefer the PC term is the limit of my politicality.

² All of spirituality can be defined as processes for actualization of your "contradictory" state of being happy without having anything to be happy about

Rushi
  • 2,637
  • 2
  • 7
  • 28
  • It may have inspired the French, but it was inspired by the English. "Life, liberty, and the pursuit of property" being a more pragmatic articulation. – J D Dec 28 '23 at 11:50
  • It is a good point that expectations largely determine what we experience. The difference between 'liberty' and "good government" would explain a lot in what we've seen in politics lately. Also, I like your definition of spirituality, it is the only one I recall seeing that I definitely agree with. – Scott Rowe Dec 28 '23 at 11:50
  • 1
    @ScottRowe Spirituality definition not original — articulated in the first line of the first book by non dual master Ramana when he was 21 years old: All living beings desire to be happy always. Spirituality philosophy all follows as corollary – Rushi Dec 28 '23 at 12:02
2

The machine, by definition, alters how the brain inside works such that it will be happy all the time. I'll buy that this is possible in theory. But happiness isn't really the key concept here. Suppose that I define "myself" as the decision-making process that my brain, as it is now, creates. I become happy (or not) based on specific stimuli. If you alter the brain such that it becomes happy based on a completely different set of stimuli, the person in the box is definitely happy, but it's no longer "me" 1.


1 It can potentially be made to think it's me, as the comic points out, but by the above definition of "myself", it's just wrong about that.
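One toy way to make that definition concrete (my illustration, not the answer's formalism; the stimuli and mappings are invented): identify the self with its stimulus-to-response mapping, and a rewired mapping is a different self even if it reports otherwise.

```python
def me(stimulus):
    # "Myself" as a decision procedure: this mapping is happy
    # only for a particular set of stimuli.
    return "happy" if stimulus in {"friends", "music"} else "neutral"

def altered(stimulus):
    # The machine rewires the mapping: happy on every input.
    return "happy"

# Both processes could sincerely report "I am me," but as mappings
# they are distinguishable, so by the definition above they are not
# the same person.
print(me("taxes"), altered("taxes"))   # -> neutral happy
```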

Ray
  • 1,342
  • 8
  • 13
  • But does it matter whether it's really "me"? As long as it's happy with what it is, who cares? But this is indeed the crux of the conundrum -- what really matters? And how would we know if it's otherwise? – Barmar Dec 29 '23 at 18:23
  • 1
    @Barmar That part's up to you. If you're willing to stop existing to create a new person who will be happier than you are, it's a good trade. – Ray Dec 29 '23 at 20:11
  • Now we're venturing into the "do I die when I use the transporter?" territory. – Barmar Dec 29 '23 at 20:19
  • 2
    @Barmar It's kind of the exact opposite of that. In the transporter case, you destroy the matter making up the body but preserve the configuration that creates the functionality. In this case, you keep the matter and destroy/alter the functionality. But yeah, in both cases, the big question is "What defines the self?" You might argue that anyone who's willing to go into the happiness box should be unwilling to take the transporter, and vice versa. (unless it's as a deliberate self-sacrifice). And if you're transported into the happiness box, you're really screwed. – Ray Dec 29 '23 at 20:24
  • 1
    And what if there's a trolley that will hit the happiness box, but you can throw a switch to make it hit a transporter? – Barmar Dec 29 '23 at 20:28
  • Isn't this really just the Ship of Theseus? Who knew that they could anticipate AI and mind altering technology so long ago? Smart guys! – Scott Rowe Dec 30 '23 at 00:07
  • 1
    @ScottRowe The transporter problem is. The Ship of Theseus, the transporter problem, and the "cells get replaced over time" problem are more or less all the same thing, just on different timescales. – Ray Dec 31 '23 at 01:24
  • What we need is an app that rings an alarm when we are thinking about something pointless and stupid yet again. But no one would be able to think because it would be so loud all the time :-) – Scott Rowe Dec 31 '23 at 13:22
0

"Do you think if machines could make your life perfect you'd just get bored"

The premise of the thought experiment is that machines can make your life perfect. If a perfect life precludes boredom, then the answer to this question is obviously "no."

Jagerber48
  • 276
  • 1
  • 5