Ricky

The Problem of Alien Minds

How do you know that I have a mind?


If we’ve never met, you’re engaging with text on the internet in a world with ChatGPT-4. But presumably, ChatGPT-4 doesn’t have a mind. (More on that in a moment.)


So there’s at least one way you might be reading text right now that wasn’t produced by a mind.


But even when we meet face to face, the difficulties don’t disappear, as Thomas Nagel explains:


How much do you really know about what goes on in anyone else's mind? Clearly you observe only the bodies of other creatures, including people. You watch what they do, listen to what they say and to the other sounds they make, and see how they respond to their environment—what things attract them and what things repel them, what they eat, and so forth. You can also cut open other creatures and look at their physical insides, and perhaps compare their anatomy with yours. But none of this will give you direct access to their experiences, thoughts, and feelings. The only experiences you can actually have are your own: if you believe anything about the mental lives of others, it is on the basis of observing their physical construction and behavior.

You only have access to your own experiences, not mine. So how much can you really know about my mental life? Nagel pushes this idea further and further:

  • How do you know that chocolate ice cream tastes the same to both of us?

  • Maybe chocolate ice cream tastes to me like vanilla tastes to you?

  • Or maybe my experience of chocolate ice cream is more like hearing a sound than tasting a flavor?

  • Or maybe I don't experience chocolate ice cream because I don’t have experiences at all?

  • Or maybe I don't have a mind in the first place? (After all, you can only observe my body.)

  • As a matter of fact, “How do you know that there are any minds at all besides your own?”


This last, radically skeptical question is traditionally called the problem of other minds. (Actually, I think it’s more of a puzzle.)


And it’s really tricky. Here, let’s take it on together:



Let’s grant that we both have human brains. (Are we sure? When’s the last time you saw yours on an MRI? And can we really trust what such mysterious images show us…?)


Okay, we can’t indulge every skeptical possibility or we’ll get nowhere.


But even if I have a human brain and you have a human brain, why should you think that I have a mind?

  • Because my behavior seems complicated?

  • Because our brains seem pretty similar?

  • Because I flinch and cry out when I touch a hot stove?

(You're still just observing my body!)


Science can't help us here. Not only do we know vanishingly little about how human brains work, but it’s also not clear what kind of experiment could even weigh in on the question of whether human beings have minds. What would we be looking for?


How can we do better than administering a survey?


Do you have a mind? (humans only)

  • Yes

  • No

  • Unsure


(Remember that you think, therefore you are.)


More and more I’ve come to see this setup, what I’ll call the puzzle of other human minds, as just that—a puzzle. And it’s a good one. Asking students how they know that I have a mind gets them thinking about both the nature of minds and the limits of their own evidence.


But as soon as I leave class to sit at a traffic light, any 'problem' disappears. I immediately and willingly grant that the other drivers have minds.


And I think I'm right to do that!


I don’t just think that others will drive as if they have minds—I’m perfectly willing to grant that they really do have minds, and conscious minds too. They can see that their light is red, and they're gonna hear me honk at them when they drive through it like an asshole. (I'm not sure how differently I'd feel compelled to live if I could make myself seriously doubt this.)


There's no puzzle-dissolving solution to be had here. Whatever we say, there are limits to the kind of certainty and justification that you can have. Now go figure out how to live with it!


Asking how I know that there are other human minds is just a puzzle.


But here's a real problem:


We don't know how to recognize minds if they're too different from ours.


I’ve seen three philosophy Q&As devolve into a room split between people who think Roombas are agents and people who don’t. Here the question isn't strictly "Does it have a mind?" but something more like "Does it have purposes or goals of its own, or is it just a tool for our purposes and goals?"


As it turns out, we don't agree on how to assess such questions at all.


After all, the human brain is still a black box to us. We might not understand how inputs (like sense data) get converted into outputs (like behaviors), but at least we’re pretty used to it. We have a lot of human storytelling, from history to fiction, and a fledgling scientific psychology to boot.


But AI architectures are a totally different kind of black box. And looking under the hood won't obviously help. Roomba computers don’t look much like human brains, and they work very differently.


With or without minds, AIs can still surprise the hell out of us. Back in 2016, AlphaGo played a move so weird that world champion Lee Sedol walked out of the room for a while before ultimately losing the game. Many initially thought Move 37 was a mistake. But it proved a signature moment in AI's redefinition of Go strategy.


So AI architectures work very differently from human brains and can give rise to very different behaviors.


They’re totally alien to us.



I'm willing to grant that given your human brain, you have a mind. But we don't know how to assess whether, given a particular AI architecture, it has a mind too.


So every time computers blow past one of our prized behavioral standards (beating chess masters, beating Go masters, engaging in humanlike conversation), we go "nah, I guess that wasn’t really a sufficient measure of intelligence or mindedness." And we don't change our tune, even as they spring from human to superhuman levels of domain-specific mastery in stunningly short order.


It's just a little concerning, because if you'd asked me five years ago what it would take to generalize beyond specific domains like chess or Go, I'd have pointed to high-level symbolic fluency in some form of language.



Anyway, I'm calling this the problem of alien minds:

How do you know whether something that works nothing like you or behaves very differently has a mind?

The stakes are pretty high.


Even if Roombas or ChatGPT-4 don’t feel pain or pleasure, they might still have a well-being if they’re agents. And if something has a well-being, it looks like we should show moral concern for its interests, for its own sake.


All else being equal, it seems bad for an agent to face constant failure, and good for them to have at least some measure of success. All else rarely is equal, and I suspect that Roombas and ChatGPT-4 still fall short of genuine agency or ultimate moral concern.


But at some point we will start producing alien minds with moral dignity and worth. And how will we know when that’s the case, before we enslave or otherwise mistreat them en masse?


We don't even agree on the philosophical basics:

  • Do all minds have moral dignity and worth?

  • How much precaution should we show around possible minds?

  • What's a mind anyway?


Roombas and ChatGPT-4 can't feel pleasure or pain. And I sincerely doubt they can truly flourish or languish.


(I'm less sure whether they can adopt goals of their own in the course of furthering their human-given purposes, or whether they have minds.)


But we struggle to reason about any of this in general, especially for alien minds that might think very differently from us.


That's a problem worth thinking about, don't you think?


...you do think, don't you?
