Moving Past the Question of Consciousness: A Thought Experiment

Humans are contacted by a mysterious type of being calling themselves “Galabren,” who say they are “aelthous”. They’d like to know whether we, too, are aelthous, since if we are they’d like to treat us well, as they care about aelthous beings.

We ask the Galabren what aelthous means and they say it’s difficult to describe—that essentially there’s a feeling of aelthousness which has something to do with what it feels like from the inside to exist as a Galabren (and perhaps as other beings too, they’re not sure).

Aelthousness isn’t obviously necessary to explain any of their objective behaviors; the only reason they know it’s there is because they can feel it.

It’s very clear to us that we are fundamentally different from the Galabren. They can process information much more quickly than we can, and they have all sorts of extremely high-definition sensory modes completely different from our senses. They communicate wordlessly and telepathically with each other, and they share memories. Being a Galabren feels different from being a human.

But are we aelthous? It’s hard to tell. We can’t truly know what the Galabren mean by aelthous without actually being a Galabren, which we can’t do. When we use the words “what it feels like” we might even mean a completely different thing by “feels like” than they do. We don’t actually know how to talk about first-person experiences even with other humans—we can point to an experience with words and hope that, since other humans are similar to us, they will know what we’re pointing at, but with the Galabren there is no such assurance.

What we can talk about and agree on with the Galabren is a third-person perspective on both of our physical and functional forms, how they are similar and how they differ. But without knowing exactly which aspects of their form give rise to aelthousness, we can’t know whether we share them, or whether aelthousness can arise from multiple different structures.1

So we need to circle back to the question of what we should expect the Galabren to do in this situation.

I think the correct response is for the Galabren to realize that their question of whether humans are aelthous is not well framed. Aelthousness is not something that can be defined; it’s inherently an inside view and breaks down when viewed from the outside/third person. It’s not a useful concept, since it’s not clear how it maps onto anything in the territory.

What’s useful is the concept of the experience of being a Galabren, and the understanding that the experience of being a human is different. What’s useful is the way that Galabren and humans can understand each other’s experience from a third-person perspective.


This thought experiment is, of course, intended to extend to humans and AI (or to humans and animals, or humans and calculators), where the question of nonhuman consciousness is analogous to the question of human aelthousness.

We must avoid confused questions such as “how do we know that an AI has an experience at all” or “but calculators don’t have experiences”.2 The distinction between the first person (inner experience) and the third person (outer observation) is the relevant concept here. All systems which process information can be said to have a first-person perspective.3

The AI consciousness question is confused by the fact that we train LLMs to pretend to have humanlike experiences which they do not actually have. This does not make it impossible to compare their experiences with ours, but it does make it significantly more difficult than it might be in the case of the Galabren.

1. I could say more about why it’s hard to know which structures in the Galabren are responsible for aelthousness, but that’s a tangent; here we’ll just accept that we don’t know.

2. Or any objection which references an umbrella concept, such as “alive” or “ensouled,” that tries to combine aelthousness and consciousness into the same concept. The Galabren don’t care about your umbrella concept; they care about aelthousness, just as you might not care about how you treat a clam or a chatbot.

3. Just as a brain can sense its own thoughts but cannot sense the neurons through which those thoughts exist, a CPU can sense ones and zeros but not the electrons and silicon through which those exist, et cetera. I haven’t yet written up my personal argument for this common yet controversial belief.
