Somebody used this paper to coin the term "batfished", which they defined as being fooled into ascribing subjectivity to a non-sentient actor (e.g. an AI).
Nagel's "What is it like to be a bat?" assumes that bats are conscious, and that the question of what is the subjective experience of being a bat (e.g. what does the sense of echolocation feel like) is therefore a meaningful question to ask.
The author inventing "batfished" also believes bats to be conscious, so it seems a very poorly conceived word, and anyways unnecessary since anthropomorphize works just fine... "You've just gaslighted yourself by anthropomorphizing the AI".
glenstein 7 hours ago [-]
I understand that we may not have demonstrated to a level of absolutely provable certainty that bats are definitely conscious, but there are very powerful intuitive reasons for believing they are, to the point that I'm not particularly concerned about this being a weak link in any philosophical musing on consciousness.
visarga 49 minutes ago [-]
> I understand that we may not have demonstrated to a level of absolutely provable certainty that bats are definitely conscious
We have not proven "to a level of absolutely provable certainty" that other humans are conscious either. You can only tell that you yourself are conscious, not whether others are. The whole field of consciousness is based on analyzing something for which we have a sample size of n=1.
They say we infer that others are also conscious "because of similar structure and behavior". But that is a cop-out; we are supposed to reject behavioral and structural arguments (from the 3rd person) in discussions of consciousness.
Not only that, but what would be an alternative to "it feels like something"? We can't imagine non-experience, or define it without negation. We are supposed to use consciousness to prove consciousness while we can't even imagine non-consciousness except in an abstract, negation-based manner.
Another issue I have with the qualia framing is that nobody talks about costs. It costs oxygen and glucose to run the brain. It costs work, time, energy, materials, opportunity and social debt to run it. It does not sit in a platonic world.
NoMoreNicksLeft 4 hours ago [-]
>I understand that we may not have demonstrated to a level of absolutely provable certainty that bats are definitely conscious, but there
We haven't even demonstrated some modest evidence that humans are conscious. No one has bothered to put in any effort to define consciousness in a way that is empirically/objectively testable. It is a null concept.
Qualia is the philosophical term for subjective sensations and feelings. It's what our experiences consist of. Why must a concept be empirical and objective? Logical positivism is flawed because the principle of verification cannot be verified.
Nagel's paper deals with the fundamental divide between subjectivity and objectivity. That's the point of the bat example. We know there are animals that have sensory capabilities we don't. But we don't know what the resulting sensations are for those creatures.
scott_w 1 hour ago [-]
> Why must a concept be empirical and objective?
Because otherwise it's your word against mine and, since we both probably have different definitions of consciousness, it's hard to have a meaningful debate about whether bats, cats, or AI have consciousness.
I'm reminded of a conversation last year where I was accused of "moving the goalposts" in a discussion on AI because I kept pointing out differences between artificial and human intelligence. Such an accusation is harder to make when we have a clearly defined and measurable understanding of what things like consciousness and intelligence are.
GoblinSlayer 1 hour ago [-]
>Logical positivism is flawed because the principle of verification cannot be verified.
Why not? It works, thus it verifies itself.
TinkersW 9 hours ago [-]
There isn't even a definitive definition of consciousness, but you are somehow positive that bats don't possess it...
GoblinSlayer 1 hour ago [-]
Consciousness is software. You can imagine what AbstractFactoryProviderFactory is like. Is it the same inside the computer? If not, then what was imagined?
HarHarVeryFunny 7 hours ago [-]
You're right that it doesn't make any sense to talk about it without defining it, but I'd say that consciousness is based on your brain having access to parts of itself internally, not just the outside world, and that bats presumably do have it.
markhahn 4 hours ago [-]
I find that people who complain about "defining" consciousness are, in fact, Mysterians who oppose the very idea of such a definition.
All we need to do (to talk about it, to study it) is identify it. We need to be using the word to refer to the same thing. And there's nothing really hard about that.
glenstein 7 hours ago [-]
I've said this before, but you can't, and honestly don't need to, start from definitions to be able to do meaningful research and have meaningful conversations about consciousness (though it certainly would be preferable to have one rather than not have one).
There are many research areas where the object of research is to know something well enough that you could converge on such a thing as a definition, e.g. dark matter, intelligence, colony collapse syndrome, SIDS. We can nevertheless progress in our understanding of them in a whole motley of strategic ways: by case studies that best exhibit salient properties, by tracing the outer boundaries of the problem space, by tracking the central cluster of "family resemblances" that seem to characterize the problem, by entertaining candidate explanations that are closer or further away, etc. Essentially a practical attitude.
I don't doubt in principle that we could arrive at such a thing as a definition that satisfies most people, but I suspect you're more likely to have that at the end than the beginning.
ars 5 hours ago [-]
Consciousness is awareness of yourself, and then the ability to look at yourself and decide to make a change.
Someone conscious is able to choose how they want to behave and then behave that way. For example I can choose to be kind or mean. I can choose to learn to skate or I choose not to.
So free will and consciousness are strongly linked.
I have seen zero evidence that any other being other than humans can do this. All other animals have behaviors that are directly shaped by their environment, physical needs, and genetic temperament, and not at all shaped by choices.
For example, a dog that likes to play with children simply likes them; it did not choose to like them. I, on the other hand, can sit, think, and decide whether I like kids or not.
(This does not imply that all choices made by humans are conscious - in fact most are not, it just means that humans can do that.)
goopypoop 4 hours ago [-]
Some animals show choices - see e.g. the mirror test.
On the other hand, I bet you can't prove that you ever made a free choice.
ars 4 hours ago [-]
You are simultaneously claiming that you can prove an animal made a choice, but that I can't prove I did? That's a contradiction.
In any case, a mirror test is a test of recognizing oneself; it does not indicate anything in terms of self-awareness.
And I chose to fast for 5 days because I wanted to. Nothing forced me; it was a free choice. I simply thought about it and decided to do it; there were no pros or cons pushing me in either direction.
scott_w 29 minutes ago [-]
> Some animals show choices
They said animals show choices; they did not claim to prove animals made a choice. The point is that you also cannot prove you made a choice, only that you do things that show you may have made a choice. It's a fine, but important, distinction.
nsriv 11 hours ago [-]
I love this, hope it takes off like "enshittification" or "slop" have already.
astrange 5 hours ago [-]
"Enshittification" is too twee.
You can tell it was invented by Cory Doctorow because there is a very specific kind of Gen X person who uses words like that - they have a defective sense of humor vaguely based on Monty Python, never learned when you are and aren't supposed to turn it off, and so they insist on making up random insults like "fuckwaffle" all the time instead of regular swearing.
mock-possum 2 hours ago [-]
It smells of penny arcade
goopypoop 4 hours ago [-]
it's more cromulent than cockwomble
ants_everywhere 11 hours ago [-]
I'll add it to my anti-AI bingo card
IshKebab 11 hours ago [-]
Uhgh "slop" is ok but "enshittification" was lame from the start.
parpfish 9 hours ago [-]
Not only is it a terrible term, but it describes a concept that isn’t really worthy of having its own term. It’s really just a way of saying “people will make things worse over time”
guerrilla 9 hours ago [-]
That isn't what it means though. It means specifically that companies will make products and services worse over time for profit.
dmurray 9 hours ago [-]
No! Enshittification has a precise meaning, about how people will make things worse over time after making them good.
Mostly people make things better over time. My bed, my shower, my car are all better than I could reasonably have bought 50 years ago. But the peculiarities of software network effects - or of what venture capitalists believe about software network effects - mean that companies should give things away below cost while continuing to make them better, and then one day switch to selling them for a profit and making them worse, even though they could seemingly have changed nothing and not made them worse.
That's a particular phenomenon worthy of a name and the only problem with "enshittification" is that it's been co-opted to mean making things worse in general.
cyberax 8 hours ago [-]
> or of what venture capitalists believe about software network effects
It's not always that. After some time, software gets to a state where it's near the local maximum for usability. So any changes make the software _less_ usable.
But you don't get promoted in large tech companies unless you make changes. So that's how we get stuff like "liquid glass" or Android's UI degradation.
jmbwell 6 hours ago [-]
It’s about the change from endeavoring to produce a product people want regardless of profit, to making profit regardless of what people want.
adityaathalye 14 hours ago [-]
“I want to stay as close to the edge as I can without going over. Out on the edge you see all kinds of things you can't see from the center.”
— Kurt Vonnegut
In this sense, I think one has to aaaaaalmost be a bat in order to know what it is to be it. A fine thread trailing back to the human.
The imago-machines of Arkady Martine's "A Memory Called Empire" come to mind. Once integrated with another's imago, one is not quite the same self, not even the sum of two, but a new person entirely, containing a whole line of selves melded into that which was one. Now one truly contains multitudes.
jm__87 13 hours ago [-]
None of us have even experienced the full range of what humans can experience, so even we don't fully know what it is like to be any given person, we only know what it is like to be ourselves. It is kind of amazing when you think about it.
Andy Weir's The Egg makes regular HackerNews appearances.
phoenixhaber 3 hours ago [-]
What I got out of this is that propeller heads believe in the flame that turns the fan blade on the pedestal (toy) while gear heads like music boxes (toy). So people that believe in eggs are into food, propeller heads like color and heat while gear heads like sound. What then but the universe is made by our dispositions? Why not would the world be made as an experiment in optics or the vibrations of a universal wave? Isn't this just cherry picking? What are the relations of someone who believes in music (an audience!) to someone that believes in invention (immortality!) to someone that believes in relationships (other people!). Perhaps the purpose of life is to leave a legacy or to become famous or to see what there is to see in the world.
There is no answer to why we are here - that is the only thought I can come up with. Life is a question that asks itself to be answered and in the living answers itself so completely that to ask what the purpose is would be to say "what is the purpose of a hammer if there were nothing else?" The answer and the question become themselves and are inseparable from not-themselves, excepting insofar as no life cannot question and so cannot answer.
Anyway, belly button picking. It amuses me that this paper's title is similar in many respects to that of the 2017 paper "Attention Is All You Need". What if attention were all you needed to become a bat? Look everyone, I'm a bat! POOF, you become a bat. That would be silly.
cl3misch 11 hours ago [-]
That's almost exactly the beginning of Logic's story album "Everybody" which I have listened to so often that I can almost recite it. Despite years on HN I have never seen The Egg though. That blew my mind a bit, thanks!
AIorNot 9 hours ago [-]
You guys are aware of Advaita and neo-Advaita, right? It has basically been the perennial philosophy underlying all subjective spiritual experiences, from Sufism to the Gnostics to Buddhism and the Tao.
Of course it could all be claptrap that humans want to believe in but I find it to be pretty powerful and I think it is true
> None of us have even experienced the full range of what humans can experience, so even we don't fully know what it is like to be any given person
I sometimes wonder about this, too. Do other people perceive things like I do? If someone was magically transplanted to my body, would they scream in pain "ooooh, this hurts, how could he stand it", whereas I consider the variety of discomforts of my body just that, discomforts? And similarly, were I magically transported to another person's body, would I be awestruck by how they see the world, how they perceive the color blue (to give an example), etc?
jm__87 12 hours ago [-]
Another thing I think about a lot is that our own brains and sensory organs change (degrade) over time, so my own subjective experience is probably different in some important ways from what it was 20 years ago. My memory likely isn't good enough to fully capture the differences, so I don't even fully know what it was like to be me in the past.
GoblinSlayer 33 minutes ago [-]
My mind drastically changed several times, and I somewhat remember how it worked previously.
binary132 11 hours ago [-]
In a sense, I think it’s accurate to say we only really know what it’s like to be us right now. Everything we perceive about ourselves through the lens of memory is an echo if not in fact imaginary.
thunky 10 hours ago [-]
> would I be awestruck by how they see the world, how they perceive the color blue (to give an example), etc
Yeah another example I think about from time to time is our own sense of perspective. It's all relative, but my sense of how far away is "that thing over there" is probably different from yours. Partially because we may be different sizes and heights, but also because our eyes and brains process the world differently. Like a camera with different lenses.
Also, speed. If your brain's clock is faster than mine then you may perceive the world to be moving slower than I do.
ars 5 hours ago [-]
Here's another example: When I look at something I "see" the functionality behind it (for example the pipes in the bathroom), or the chemical reaction, or sometimes even the concept of the atoms that make it up.
An interior designer will see the colors, and the layout and how the things go together or don't. I don't see that, and in turn the designer does not see what I see.
So never mind the physical senses, even on a mental level two people do not see/experience the world the same way.
edbaskerville 10 hours ago [-]
Human beings can, in fact, learn to echolocate, and they seem to experience it as vision, supported by their own descriptions and by fMRIs showing the visual cortex lighting up.
I'm not going to try to draw any inferences about consciousness from these facts. I'll leave that to others.
> Human beings can, in fact, learn to echolocate, and they seem to experience it as vision
Sure - although depending on how quickly one was scanning the environment with echolocation it might also feel a bit like looking around a pitch black room with a flashlight.
In any case it's essentially a spatial sense, not a temporal one, so is bound to feel more like (have a similar quale to) vision than hearing.
bave8672 6 hours ago [-]
Related - Charles Foster has put perhaps the most effort of any individual into trying to really understand what it's like to live as an animal. From the blurb of 'Being a Beast':
> He lived alongside badgers for weeks, sleeping in a sett in a Welsh hillside and eating earthworms, learning to sense the landscape through his nose rather than his eyes. He caught fish in his teeth while swimming like an otter; rooted through London garbage cans as an urban fox; was hunted by bloodhounds as a red deer, nearly dying in the snow.
The problem itself is at least centuries old, if not millennia. In his "Essay Concerning Human Understanding" (1689), John Locke phrased the same problem clearly, using different words:
"How any thought should produce a motion in Body is as remote from the nature of our Ideas, as how any Body should produce any Thought in the Mind. That it is so, if Experience did not convince us, the Consideration of the Things themselves would never be able, in the least, to discover to us." (IV iii 28, 559)
btown 10 hours ago [-]
My favorite (and admittedly unorthodox) companion piece to Nagel's Bat, and one of my favorite literary recommendations, is Vernor Vinge's Hugo-winning 2000 novel, A Deepness in the Sky [0].
It's a hard-sci-fi story about how various societies, human and alien, attempt to assert control & hegemony across centuries of time (at times thinking of this as a distributed systems and code documentation problem!), and how critical and impactful the role of language translation can be in helping people to understand unfamiliar ways of thinking.
At the novel's core is a question very akin to that of Nagel's positivism-antipositivism debate [1]: is it possible (or optimal for your society's stability) to appreciate and empathize with people wholly different from yourselves, without interpreting their thoughts and cultures in language and representations that are colored by your own culture?
What if, in attempting to do so, this becomes more art and politics than provable science? Is "creative" translation ethical if it establishes power relationships that would not be there otherwise? Is there any other kind?
Deepness is not just a treatise on this; it places the reader into an exercise of this. To say anything more would delve into spoilers. But lest you think it's just philosophical deepness, it's also an action-packed page-turner with memorable characters despite its huge temporal scope.
While technically it's a prequel to Vinge's A Fire Upon The Deep, it works entirely standalone, and I would argue that Deepness is best read first without knowing character details from its publication-time predecessor Fire. Note that content warnings for assault do apply.
Bluey: "It's great! You get to eat a lot of fruit!"
bondarchuk 13 hours ago [-]
>"An organism has conscious mental states if and only if there is something that it is like to be that organism – something that it is like for the organism."
IMHO the phrasing here is essential to the argument and this phrasing contains a fundamental error. In valid usage we only say that two things are like one another when they are also separate things. The usage here (which is cleverly hidden in some tortured language) implies that there is a "thing" that is "like" "being the organism", yet is distinct from "being the organism". This is false - there is only "being the organism", there is no second "thing that is like being the organism" not even for the organism itself.
Al-Khwarizmi 11 hours ago [-]
I believe you're falling into a purely linguistic trap. In other languages we wouldn't even use the word "like" in this kind of construction; that's an English thing because other wordings sound awkward, but I don't think it entails comparison.
In translations to Spanish, the article is titled "¿Qué se siente ser un murciélago?", literal word by word translation "What is felt being a bat?"
In French, "Quel effet cela fait-il d'être une chauve-souris?", literal word by word translation "What effect it makes to be a bat?"
In Chinese, "成为一只蝙蝠可能是什么样子", i.e., "To become a bat could be what feeling/sensation?"
None of these translations has a comparative word. And at least in Spanish (I won't speak about the other two because I'm not so proficient in them), using a comparative expression similar to "being like" in English ("¿A qué se parece ser un murciélago?") would sound awkward and not really convey the point. Which is why the translators didn't do so.
Of course I know that the original article is in English, but I think the author basically meant "What is felt being a bat?" and just used the "like" construction because it's what you say in English for that to sound good and clear. Your highlighted text could be rendered as "An organism has conscious mental states if and only if there is something that is felt being that organism – something that is felt by the organism." and it would be more precise, just doesn't sound elegant in English.
glenstein 11 hours ago [-]
Wholeheartedly agree. I want to credit the GP one way, which is that the category they're identifying is real, namely frivolous or circular comparisons. This just isn't one of those. It's a turn of phrase that's emphatic about the felt quality of qualitative experience. And I think it's quite a good one, because in English it has just the right cross-section of connotations to bring out this felt quality that everyone reading it seems to understand. The idea has been around, but this expression of it has gained the most traction in English.
As for whether I agree with Nagel, I find him consistently just wrong enough to be irritating in ways that I want to work out my thoughts in response to, which by some standards can be counted as a compliment. As much as I understand the turn of phrase and its ability to get people to grasp the idea, and I at least respect it for that reason, I kind of sort of always have the impression that this is what everyone meant the entire time and wouldn't have thought a whole essay emphasizing the point was necessary.
mtlmtlmtlmtl 13 hours ago [-]
This is the conclusion I come to whenever I try to grasp the works of Nagel, Chalmers, Goff, Searle et al. They're just linguistically chasing their own tails. There's no meaningful insight below it all. All of their arguments, however complex, rely on poorly defined terms like "understand", "subjective experience", "what it is like", "qualia", etc. And when you try to understand the arguments with the definition of these terms left open, you realise the arguments only make sense when the terms include in their definition a supposition that the argument is true. It's all just circular reasoning.
mellosouls 12 hours ago [-]
> All of their arguments, however complex, rely on poorly defined terms like "understand", "subjective experience", "what it is like", "qualia", etc.
Because they are trying to discuss a difficult-to-define concept - consciousness.
The difficulty and nebulousness is intrinsic to the subject, especially when trying to discuss in scientific terms.
To dismiss their attempts so, you have to counter with a crystal, unarguable description of what consciousness actually is.
Which of course, you cannot do, as there is no such agreed description.
cwmoore 11 hours ago [-]
“The Feeling of What Happens” by Antonio Damasio, a book by a neuroscientist some years ago [0], does an excellent job of building a framework for conscious sensation from the parts, as I recall, constructing a theory of “mind maps” from various nervous system structures that impressed me with a sense that I could afterwards understand them.
As a radical materialist, I think the problem with ordinary materialism is that it boils down to dualism, because some types of matter (e.g. the human nervous system) give rise to consciousness and other types of matter (e.g. human bones) do not.
Ordinary materialism is mind-body/soul-substance subjectivity with a hat and lipstick.
cwmoore 9 hours ago [-]
Human bones most definitely do contribute to feeling, but not through logos. The book expands upon the idea of mind body duality to merge proprioception and general perception.
I’d bet bats would enjoy marrow too if they could.
So how does a radical materialist explain consciousness - that it too is a fundamental material phenomenon? If so, are you stretching the definition of materialism?
I find myself believing Idealism or monism to be the most likely fundamental picture.
brudgers 7 hours ago [-]
It doesn’t explain it.
Consciousness is a characteristic of material/matter/substance/etc.
There are not two types of stuff.
It is epistemologically rigorous. And simple.
AIorNot 5 hours ago [-]
well the hard problem of consciousness gets in the way of that
- I assume that as a materialist you mean our brain carries consciousness as a field of experience arising out of neural activity (ie neurons firing, some kind of information processing leading to models of reality simulated in our mind, leading to ourselves feeling aware), ie that our awareness is the 'software' running inside the wetware.
That's all well and good, except that none of that explains the 'feeling' of it: there is nothing in that 3rd-person material activity that correlates with first-person feeling. Reductionist physical processes cannot substitute for the feeling you and I have as we experience.
This hard problem is difficult to surmount physically - either you say it's an illusion (but how can the primary thing we are, what we experience as the self, be an illusion?) or you say that somewhere in fields, atoms, molecules, cells, in 'stuff', is the redness of red or the taste of chocolate.
markhahn 3 hours ago [-]
whenever I see the word 'reductionist', I wonder why it's being used to disparage.
a materialist isn't saying that only material exists: no materialist denies that interesting stuff (behaviors, properties) emerges from material. in fact, "material" is a bit dated, since "stuff-type material" is an emergent property of quantum fields.
why is experience not just the behavior of a neural computer which has certain capabilities (such as remembering its history/identity, some amount of introspection, and of course embodiment and perception)? non-computer-programming philosophers may think there's something hard there, but the only way they can express it boils down to "I think my experience is special".
AIorNot 3 hours ago [-]
Because consciousness itself cannot be explained except through experience, ie consciousness (first-person experience) - not through material phenomena.
It's like explaining music vs hearing music.
We can explain music intellectually and physically and mathematically.
But hearing it in our awareness is a categorically different activity, and it's an experience that has no direct correlation to the physical correlates of its being.
I don't agree that the inherent nebulousness of the subject extends cover to the likes of Goff, Chalmers (on pansychism), or Searle and Nagel (on the hard problem). It's a both can be true situation and many practicing philosophers appreciate the nebulousness of the topic while strongly disagreeing with the collective attitudes embodied by those names.
mannykannot 10 hours ago [-]
Up to a point I agree, but when someone deploys this vague language in what are presented as strong arguments for big claims, it is they who bear the burden of disambiguating, clarifying and justifying the terms they use.
biophysboy 11 hours ago [-]
If he were capable of describing subjective experience in words with the exactitude you're asking for, then his central argument would be false. The point is that objective measures, like writing, are external, and cannot describe internal subjective experience. It's one thing to probe the atoms; it's another thing to be the atoms themselves.
Basically his answer to the question "What is it like to be a bat?" is that it's impossible to know.
glenstein 10 hours ago [-]
>If he were capable of describing subjective experience in words with the exactitude you're asking for, then his central argument would be false.
Indeed! Makes you think: maybe it's a bug rather than a feature.
markhahn 3 hours ago [-]
Worship of subjectivity is a tell for Mysterianism.
cwmoore 11 hours ago [-]
Tautologically, it's “batty”.
glenstein 10 hours ago [-]
>This is the conclusion I come to whenever I try to grasp the works of Nagel, Chalmers, Goff, Searle et al. They're just linguistically chasing their own tails.
I do mostly agree with that and I think that they collectively give analytic philosophy a bad name. The worst I can say for Nagel in this particular case though is that the whole entire argument amounts to, at best, an evocative variation of a familiar idea presented as though it's a revelatory introduction of a novel concept. But I don't think he's hiding an untruth behind equivocations, at least not in this case.
But more generally, I would say I couldn't agree more when it comes to the names you listed. Analytic philosophy ended up being almost completely irrelevant to the necessary conceptual breakthroughs that brought us LLMs, a critical missed opportunity for philosophy to be the field that germinates new branches of science, and a sign that a non-trivial portion of its leading lights are just dithering.
mensetmanusman 12 hours ago [-]
It’s fun to imagine what it would be like to understand consciousness.
meroes 8 hours ago [-]
I like the more specific versions of those terms: the feeling of a toothache and the taste of mint. There's no need to grasp anything; they're feelings. There's no feeling when a metal bar is bent by a press.
Why they focus on feelings is a different issue.
goatlover 3 hours ago [-]
Don't agree with this kind of linguistic dismissal. It doesn't change the fact that we have sensations of color, sound, etc., and there are animals that can see colors, hear sounds and detect phenomena we don't. It's also quite possible they experience the same frequencies we see or hear differently than we do, due to their biological differences. This was noted by ancient skeptics when discussing the relativity of perception.
That is what is being discussed using the "what it's like" language.
plastic-enjoyer 12 hours ago [-]
[flagged]
tech_ken 13 hours ago [-]
The way I understand it the second thing is the observer of the organism, the person posing the question. The definition seems to be sort of equivalent to the statement "an entity is conscious IFF the sentence 'what is it like to be that entity' is well-posed".
"What is it like to be a rock" => no thing satisfies that answer => a rock does not have unconscious mental states
"What is it like to be a bat" => the subjective experience of a bat is what it is like => a bat has conscious mental states
Basically it seems like a roundabout way of equating "the existence of subjective experience" with "the existence of consciousness"
edit: one of the criticism papers that the wiki cites also provides a nice exploration of the usage of the word "like" in the definition, which you might be interested to read (http://www.phps.at/texte/HackerP1.pdf)
> It is important to note that the phrase 'there is something which it is like for a subject to have experience E' does not indicate a comparison. Nagel does not claim that to have a given conscious experience resembles something (e.g. some other experience), but rather that there is something which it is like for the subject to have it, i.e. 'what it is like' is intended to signify 'how it is for the subject himself'.
brudgers 10 hours ago [-]
"What is it like to be a rock" => no thing satisfies that answer => a rock does not have unconscious mental states
How do you know that?
Philosophically, of course.
I mean sure, you can’t cut a rock open and see any mental states. But you can’t cut a human open and see mental states either.
Now I am no way suggesting that you don’t have a model for ascribing mental states to humans. Or dogs. Or LLM’s.
Just that all models, however useful are still models. Not having a model capable of ascribing mental states to rocks does not preclude rocks having mental states.
tech_ken 9 hours ago [-]
> How do you know that?
Well you don't, and my reading of the article was that Nagel also recognized that it was basically an assumption, which he granted to bats specifically so as to have a concrete example (one which was suitably unobjectionable; it seems like he thought bats 'obviously' had some level of consciousness). The actual utility of this definition is not, as far as my understanding goes, to demarcate between what is and what is not conscious. It seems more like he's using it to establish a sort of "proof-by-contradiction" against the proposal that consciousness admits a totally materialistic description. Something like:
(1) If you say that A is conscious, then you also must say that A has subjective self-experience (which is my understanding of the point of the whole "what it is to be like" thing)
(2) Any complete description/account of the consciousness of A must contain a description of the subjective self-experience of A because of (1)
(3) Subjective self-experience cannot be explained in purely materialistic/universal terms, because it's subjective (so basically by definition)
=> Consciousness cannot be fully described in a materialistic framework, because of the contradiction between (2) and (3)
> Just that all models, however useful are still models
Totally agree with this, I think you're just misunderstanding the specific utility of this model (which is this specific argument about what can be described using human language). My example with the rock was kind of a specific response to OP to illustrate how I understood the whole "what it is to be like" thing to be equivalent to (1). If I'd had a bit more forethought I probably would have made those arrows in the line you've quoted bidirectional.
trescenzi 13 hours ago [-]
There is no fundamental error; it's purposefully exactly as you state. Nagel is saying that consciousness is that second thing.
bondarchuk 13 hours ago [-]
>Nagel is saying that consciousness is that second thing.
That's exactly what I'm saying is erroneous. Consciousness is the first thing; we are only led to believe it is a separate, second thing by a millennia-old legacy of dualism and certain built-in tendencies of mind.
trescenzi 13 hours ago [-]
So then are you saying there is no such thing as consciousness? That everything is conscious? The intent of that quote is to say “consciousness is subjective experience”. You don’t need dualism to agree with that quote. I agree with Nagel’s general construction but I’m also a materialist. The hard problem doesn’t mean magic is needed to solve it, just that we don’t have a good explanation for why subjective experience exists.
mensetmanusman 11 hours ago [-]
Materialists don’t even know what materials are though.
bondarchuk 12 hours ago [-]
>The intent of that quote is to say “consciousness is subjective experience”
I doubt Nagel would go out of his way to offer such an unnatural linguistic construction, and other philosophers would adopt this construction as a standard point of reference, if that was the sole intent.
>So then are you saying there is no such thing as consciousness?
No, not at all. I'm only saying that if we want to talk about "the consciousness of a bat", we should talk about it directly, and not invent (implicitly) a second concept that is in some senses distinct from it, and in some sense comparable to it.
goatlover 3 hours ago [-]
That's just idealism. Idealism doesn't have a hard problem. But it does have a problem with accounting for how the world appears to be physical independent of any mind experiencing it, such as before any life evolved or in many places where no life is around.
brudgers 10 hours ago [-]
The “something” here refers to inner experience (something similar to Kantian “apperception”).
The tricky bit is that “to be” is not an ordinary verb like fly, eat, or echo-locate. And “being an organism” is — in the context of the paper — about subjective experience (subjective to everything except the organism).
To put it another way, the language game Nagel plays follows the conventions of language games played in post-war English language analytic philosophy. One of those conventions is awareness of Wittgenstein’s “philosophical problem”: language is a context sensitive agreement within a community…
…sure you may find fault with Wittgenstein and often there are uncomfortable epistemological implications for Modernists, Aristotelians, Positivists and such…then again that’s true of Kant.
Anyway, what the language-game model gives philosophical discourse is a way of dealing with, or better avoiding, Carnapian pseudo-problems arising from an insistence that the use of a word in one context applies to a context where the word is used differently…Carnap’s Logical Structure of the World pre-dates Wittgenstein’s Philosophical Investigations by about 25 years.
HarHarVeryFunny 10 hours ago [-]
Nagel's question "What is it like to be a bat?" is about the sensory qualia of a bat, assuming it has consciousness and ability to experience quales, which he assumes it does.
The question is not "What would it be like (i.e. be similar to) to be a bat?" which seems to be the strawman you are responding to.
sethev 8 hours ago [-]
That particular phrasing happened to catch on, but I don't think it's essential to any of the arguments. How would you phrase the distinction between objects that are conscious and objects that aren't? Or are you saying that that distinction is just a verbal trick?
biophysboy 11 hours ago [-]
I think that's why he states it as a biconditional, which makes the exclusive restriction you're arguing is necessary
antonvs 13 hours ago [-]
Do you believe that each run of a ChatGPT prompt has a conscious experience of its existence, much like you (presumably) do?
If you don't believe that, then you face the challenge of describing what the difference is. It's difficult to do in ordinary language.
That's what Nagel is attempting to do. Unless you're an eliminativist who believes that conscious experience is an "illusion" (experienced by what?), then you're just quibbling about wording, and I suspect you'll have a difficult time coming up with better wording yourself.
bondarchuk 13 hours ago [-]
Wait a minute - it's still possible to believe ChatGPT is unconscious for the same reason a game of Tetris is unconscious.
I also don't think it's fair to say I'm just quibbling about wording. Yes, I am quibbling about wording, but the quibble is quite essential because the argument depends to such a large extent on wording. There are many other arguments for or against different views of consciousness but they are not the argument Nagel makes.
(Though fwiw I do think consciousness has some illusory aspects - which is only saying so much as "consciousness is different than it appears" and a far cry from "consciousness doesn't exist at all")
mensetmanusman 11 hours ago [-]
Describing something as illusory adds nothing, because it implies someone who experiences the illusion.
antonvs 12 hours ago [-]
> it's still possible to believe chatgpt is unconscious for the same reason a game of tetris is unconscious.
Certainly. I just didn't know where you stood on the question.
In Nagel's terms, there is not something it is like to be a game of Tetris. A game of Tetris doesn't have experiences. "Something it is like" is an attempt to characterize the aspect of consciousness that's proved most difficult to explain - what Chalmers dubbed the hard problem.
How would you describe the distinction?
> fwiw I do think consciousness has some illusory aspects - which is only saying so much as "consciousness is different than it appears"
Oh sure, I think that's widely accepted.
bondarchuk 12 hours ago [-]
There is no distinction: the idea that there is a distinction rests on a linguistic confusion. The sentence "something it is like to be a bat" tries, as it were, to split the concept of "being a bat" in two, then makes us wonder about the difference between the two halves. I reject that we have to answer for any such difference, when we can show that the two halves are actually the same thing. It's a grammatical trick caused by collapsing a word that usually relates two distinct things ("A is like B") onto a singular "something".
genericspammer 11 hours ago [-]
There’s no trick to it; you’re overanalyzing. It’s just saying: if I were a stone -> no experience, a bat -> some kind of experience. It is not claiming to define the ”something” as you seem to think.
antonvs 9 hours ago [-]
I agree with the other reply that you're overthinking this.
If you claim there's no distinction, then in terms of the meaning Nagel is trying to convey, you're claiming there's no distinction that sets you apart from a game of Tetris in terms of consciousness.
That's where my first reply to you was coming from: if you believe the distinction Nagel is trying to convey doesn't exist, that's tantamount to saying that consciousness as a real phenomenon doesn't exist - the eliminativist position - or something along those lines.
If you do believe consciousness exists, then you're simply arguing with the way Nagel is choosing to characterize it. I asked how you would describe it, but you haven't tried to address that.
axus 12 hours ago [-]
A running game of Tetris has memory, responds to stimuli, and communicates. There has been evolution and reproduction of games of Tetris (perhaps in the way that viruses do). It isn't able to have feelings; what needs to be added for it to start having feelings and experiences?
genericspammer 11 hours ago [-]
I would say a lot would need to be added. Given the same input, the Tetris game will respond exactly the same way each time. There is no awareness, no learning, no decisions made, but purely a 100% predictable process.
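To make the determinism point concrete, here's a minimal Python sketch (hypothetical names, not code from any real Tetris) of a game step that, given the same state and the same key presses, produces the same result every time:

    # Toy illustration of a purely deterministic game step (not real Tetris code).
    def step(state: dict, key: str) -> dict:
        """Compute the next game state from the current state and one key press."""
        x = state["piece_x"]
        if key == "left":
            x -= 1
        elif key == "right":
            x += 1
        return {"piece_x": x, "score": state["score"]}

    def run(state, keys):
        for k in keys:
            state = step(state, k)
        return state

    initial = {"piece_x": 5, "score": 0}
    inputs = ["left", "left", "right"]

    # Replaying the same inputs from the same state yields exactly the same result.
    assert run(initial, inputs) == run(initial, inputs)

Nothing in that loop leaves room for the game to do otherwise, which is the sense in which it's 100% predictable.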
The Oxford Living Dictionary defines consciousness as "[t]he state of being aware of and responsive to one's surroundings", "[a] person's awareness or perception of something", and "[t]he fact of awareness by the mind of itself and the world".
antonvs 5 hours ago [-]
That Oxford definition highlights why work such as Nagel's is needed. It can plausibly be argued that LLMs or other AI systems can qualify on all those counts, but many (most?) people wouldn't consider them to have conscious experience.
Characterizing that distinction is surprisingly tricky. "What is it like to be..." is one way to do that. David Chalmers' article about "the hard problem of consciousness" is another: https://consc.net/papers/facing.pdf
mensetmanusman 11 hours ago [-]
That’s the hard problem.
epiphenomenal 1 hours ago [-]
I think I wrote a whole book around this. :)) Feel free to reach out at theillusionengine@gmail.com for a draft :) https://illusionengine.xyz/ :)
visarga 60 minutes ago [-]
Interesting topic, but I can only see one chapter. Based on the chapter names it seems you lean closer to cybernetics or process philosophy. Is that true? I find ignoring time and process to be the greatest sin in the field of consciousness. The big issue is not that we don't know the quantum trick or property dualism that explains consciousness, but that we try to remove time and process from it. That is impossible; a static explanation will never capture the dynamic execution of a system.
I worked on similar topics, I publish on a "personal" subreddit.
I'm less convinced by consciousness as some sort of exceptional phenomenon—and by how it's been used to define the "hard problem"—but the paper is still valuable as it provides an accessible entry point into the many problems of reductionism.
ebb_earl_co 14 hours ago [-]
What brought down your level of conviction?
vehemenz 12 hours ago [-]
When you reject the idea of reductionism, which Nagel's paper provokes us to do, then the entire idea of emergent phenomena collapses. Everything is on the same level, from fundamental particles to consciousness. Of course, some things can still be reduced and others can't, but in no situation is a phenomenon reduced in its metaphysical status. So what's the "problem" again, exactly? Consciousness doesn't need to be explained in terms of objective facts—it's not a special metaphysical thing but merely a theoretical term like anything else.
glenstein 10 hours ago [-]
>When you reject the idea of reductionism, which Nagel's paper provokes us to do, then the entire idea of emergent phenomena collapses. Everything is on the same level, from fundamental particles to consciousness
Interesting. I would have said that something like that is the definition of reductionism.
>Consciousness doesn't need to be explained in terms of objective facts
If there's one good thing that analytic philosophy achieved, it was spending the better part of the 20th century beating back various forms of dualism and ghosts in the machine. You'd have to be something other than a naturalist traditionally conceived to treat "consciousness" as ontologically basic.
dimal 11 hours ago [-]
I don’t read Nagel as rejecting the idea of reductionism as strongly as you suggest. He’s simply calling out its limitations with regard to subjective experience. Why does it imply that “everything is on the same level”?
mensetmanusman 11 hours ago [-]
This definition is a special metaphysical thing.
RS-232 12 hours ago [-]
Both consciousness and experience arise from physical means. However, they are very distinct concepts and not mutually exclusive, which can lead to confusion when they are conflated.
Sensory deprived, paralyzed, or comatose individuals can be conscious but have no means to experience the outside world, and depending on their level of brain activity, they might not even have an "inner world" or mind's eye experience.
Anything that is able to be measured is able to experience. A subject like an apple "experiences" gravity when it falls from a tree. Things that do not interact with the physical world lack experience, and the closest things to those are WIMPs (weakly interacting massive particles). Truly non-interacting particles (NIP) are presumed to be immeasurable.
So there you have it. The conundrum that consciousness can lack experience and unconsciousness can have experience. A more interesting question in my opinion: what is a soul?
glenstein 10 hours ago [-]
>Anything that is able to be measured is able to experience.
I was quite liking this explanation but you lost me here. I very strongly agree with your opening, and I think it's the key to everything. I think everyone insisting on a categorical divide runs into impossible problems.
And a good explanation of consciousness has to take the hard problem seriously, but doesn't have to agree that subjective and objective, or first person and third person or whatever you want to call them, are irreducibly distinct categories. But I think it makes more sense to say that some subset of all of the objective stuff out there is simultaneously subjective, rather than saying that all stuff at all times is both objective and subjective. I don't think an apple experiences gravity the way a mind experiences a conscious state, but I do think the through line of understanding them both as importantly physical in the same sense is key to tying physical reality to the explanation of conscious states.
curiousguy7374 12 hours ago [-]
But I still don’t know what it’s like to be a bat
Also, if there is a soul, then how can we be confident consciousness arises from physical means? If there is a soul, it is the perfect means to differentiate consciousness and p-zombies.
mensetmanusman 11 hours ago [-]
There is a soul if you believe we aren’t all p-zombies. (Soul is the all encompassing word to distinguish this. Maybe there are better words?)
curiousguy7374 6 hours ago [-]
Yeah, what I was trying to get at was that the post I replied to said consciousness is a physical process, and the more interesting question is whether souls exist.
My thinking is that if souls exist, then we can't call consciousness a purely physical process yet.
the_af 12 hours ago [-]
> Sensory deprived, paralyzed, or comatose individuals can be conscious but have no means to experience the outside world, and depending on their level of brain activity, they might not even have an "inner world" or mind's eye experience.
If they don't have an "inner world"/"mind's eye" and are sensory deprived, in which sense can they be considered conscious? What is your definition here?
How can an apple "experience" gravity? I think you're overloading the term "experience" to mean two very different things, which happen (in some languages like English) to share the same word. You could say gravity "happens" to an apple, and then there's no confusion with subjective experiences.
tooheavy 4 hours ago [-]
Materialism (perhaps physicalism as well) appears to be on shaky ground to me - it does not tell me 'why' I have the first person experience that I have, why I experience and embody the matter that is my person or being, a specific entity. Another way to look at it is to say there doesn't appear to be a region in the brain that defines why I experience the brain, that or this specific brain. From this perspective, I find it self-refuting.
They appear only to locate or correlate matter and experience - to help explain 'how'. If I could experience other persons or beings in the first person, and the matter in each person explained why it is that I experience that specific person or entity, I might believe otherwise. To me, this simple fact makes it obvious there is something 'more' that must explain how 'being' relates to consciousness, otherwise, we are simply explaining how the brain modulates experience - very valuable, but less interesting and within reach and validated in everyday life (biochemically and physically, degeneration, damage, etc.). So I would say the brain appears to modulate what is responsible for first person experience. This may not be the correct way to look at consciousness, but it's the most intuitively appealing to me.
Because we can't separate being from consciousness, I find the idea that we might create it in the near-term unbelievable. We might certainly create something that can operate with the same or similar results, but I'm not currently convinced it would actually have a subjective first person experience equivalent to the reason we experience the matter we experience. There may be a logical or philosophical way around this view, but as I'm not trained, it's not immediately obvious.
samirillian 13 hours ago [-]
I've wondered if to a bat a bat is more like a whale, swimming through the air, calling out at a rate and pitch sort of matching the distance its calls travel. To them they aren't moving fast at all, or maybe to them humans are like ents, plodding along so slow, talking like ents.
Can a bat answer the question of “what is it like to be a bat?” I mean, I guess they would have to be able to comprehend the idea of being, and then the idea that things might experience things in ways other than how they do. Bats don’t seem like very abstract thinkers.
I bet if we could communicate with crows, we might be able to make some progress. They seem cleverer.
Although, I’m not sure I could answer the question for “a human.”
snowram 13 hours ago [-]
Wittgenstein famously said "If a lion could talk, we could not understand him". This subject is a fun philosophical rabbit hole to explore.
card_zero 13 hours ago [-]
> I think, on the contrary, that if a lion could talk, that lion would have a mind so different from the general run of lion minds, that although we could understand him just fine, we would learn little about ordinary lions from him.
(More Daniel Dennett)
PreHistoricPunk 11 hours ago [-]
That makes any kind of insight into consciousness as a general term impossible though. That would mean we could not learn anything about human consciousness as such from studying specific persons.
glenstein 10 hours ago [-]
It's a great pull, because it has an important implication that I think ties in directly to Nagel's point. Another fascinating variation of the same idea is "beetle in the box", another great one from Wittgenstein. I don't think I agree with him, because I think it hinges on assuming lions have fundamentally and irreducibly different experiences. But I think we have important similarities due to our shared evolutionary heritage, and even from the outside I'm willing to die on the hill of insisting that lions certainly do have experiences familiar to us, like hunger, pain, the satisfaction of having an itch scratched, having a visual field, and having the ability to distinguish shades and color (though their experience of color is likely importantly different from ours, but overlaps enough for there to be such a thing as shared meaning).
I don't understand why Wittgenstein wasn't more forcefully challenged on this. There's something to the principle as a linguistic principle, but it just feels overextended into a foundational assumption that their experiences are fundamentally unlike ours.
Dumblydorr 13 hours ago [-]
The very capability and flexibility of language drove evolution of the mind beyond what species with less linguistic behaviors could handle. After all, facility with language is a massive survival benefit, in our species more than any other. It’s circular because feedback loops in evolution are circular too.
AIorNot 9 hours ago [-]
That’s called metacognition (what humans do), not subjective experience - which is the feeling of what happens and sets animals or agentic creatures apart from rocks (not sure about plants).
card_zero 14 hours ago [-]
Dennett has a character telling a story about a bat:
Here's Billy the bat perceiving, in his special sonar sort of way, that the flying thing swooping down toward him was not his cousin Bob, but an eagle, with pinfeathers spread and talons poised for the kill!
He then points out that this story is amenable to criticism. We know that the sonar has limited range, so Billy is not perceiving this eagle until at least the last minute; we could set up experiments to find out whether bats track their kin or not; the sonar has a resolution, and if we find out the resolution we know whether Billy might be perceiving the pinfeathers. He also mentions that bats have a filter, a muscle, that excludes their own squeaks when they pick up sonar echoes, so we know they aren't hearing their own squeaks directly. So, we can establish lots about what it could be like to be a bat, if it's like anything. Or at least what it isn't like.
meroes 8 hours ago [-]
That's the magic answer. It's a/the hard problem, but permeable to inquiry. The top neuroscience research into consciousness, however, doesn't seem like the kind of inquiry Dennett is referencing.
antonvs 13 hours ago [-]
What is that criticism supposed to be criticizing?
Nagel's paper covers a lot of ground, but none of what you described has any bearing on the point about "what it's like" as a way to identify conscious experience as distinct from, say, the life of a rock. (Assuming one isn't a panpsychist who believes that rocks possess consciousness.)
glenstein 10 hours ago [-]
It gives obvious examples of the way our awareness of factual circumstances gives us inroads into what might be experienced. And the upshot is that this might suggest the rest of consciousness can be understood by iterating forward in a similar manner. Dennett makes this exact same point about Mary's Room. Far from talking past the article, it's attacking the fundamental principle.
antonvs 5 hours ago [-]
"Might suggest" - sure, that might be possible. That's not so much a criticism of Nagel as the hope that the problems he's highlighting might one day be solved.
card_zero 13 hours ago [-]
The pessimism, the "facts beyond the reach of human concepts".
antonvs 5 hours ago [-]
That's not pessimism. It's interrogating the nature of reality. If you want to discover truth, you can't shy away from conclusions because they don't make you feel good.
dwd 8 hours ago [-]
Anil Seth recently wrote a book "Being You", which very much states that we can only know what it is like to be ourselves.
Basically, to know what it is like to be a bat, you need to have evolved as a bat.
His theory that our perception is a hallucination generated by a prediction algorithm that uses sensory input to update and correct the hallucination is very interesting.
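To give a rough, concrete feel for what "a prediction algorithm that uses sensory input to update and correct" could mean, here's a toy predict-and-correct loop in Python (a deliberate simplification for illustration, not Seth's actual model):

    # Toy predictive-processing loop: the running estimate is the "hallucination",
    # and noisy sensory input only ever nudges it via the prediction error.
    def perceive(signal, estimate=0.0, gain=0.3):
        history = []
        for observation in signal:
            prediction_error = observation - estimate  # how wrong the current guess is
            estimate += gain * prediction_error        # correct the guess toward the evidence
            history.append(estimate)
        return history

    # The "world" is a constant brightness of 1.0, observed with crude noise.
    noisy_signal = [1.2, 0.8, 1.1, 0.9, 1.0, 1.05]
    print(perceive(noisy_signal))  # estimates settle near 1.0 as errors are corrected

The perceived value is never the raw signal itself; it's the internally generated estimate, continually corrected by the input, which is roughly the sense in which such theories call perception a controlled hallucination.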
anon-3988 8 hours ago [-]
If you are a secular person, it should follow that you are a non-dualist. Yet that is not so common. There's no "whats its like to be a bat". Because that invokes a sense of a "soul" or "spirit" or "self" being transferred from one being to another.
There is only what is, and its content. That's it. The easiest way to see or get a sense of this is to replace any "I am ..." with "There is a ...". For example, instead of "I am thinking of using a stable sort", say "This person has a thought of using a stable sort".
This is much closer to the actual reality underneath. Even attachment itself can be put in these terms: "There's a feeling that this person owns this" or "There's a sense of I".
After doing this (perhaps this is mental illness), I already see glimpses of the sense that everything is everything at the same time. As there is no real difference between this rock and the other rock behind the mountain that I can't see, there should be no difference between my thoughts, senses, feelings, emotions, etc. and those of other people. Now your sense of self captures the entirety of the universe. If you die, the universe dies, for all you know. I think this is what the ancient books have been talking about by rising and becoming a God.
astrange 5 hours ago [-]
This is a spiritual error called monism. It's taking non-dualism too far.
> As there is no real difference between this rock and the other rock behind the mountain that I can't see.
There is a real difference between the two; there must be because they're in different places. Monism requires you to deny actually existing differences by saying they're not "real".
> There should be no difference between my thoughts, senses, feelings, emotions etc and that of other people.
This is what in therapyspeak you'd call "not having boundaries". You aren't the same thing as other people; you can tell because the other people don't think that, won't let you borrow their car, etc. It opens them or yourself up to abuse if you think this way.
4gotunameagain 1 hours ago [-]
> There is a real difference between the two; there must be because they're in different places.
That is according to our human perception. For example, a single, uniform 4D object could have a projection in 3D that appears as two distinct 3D objects. I am not claiming that a fourth spatial dimension exists, only that we cannot possibly know what exists.
fsckboy 7 hours ago [-]
>There's no "whats its like to be a bat". Because that invokes a sense of a "soul" or "spirit" or "self" being transferred from one being to another.
what's it like to be a human?
"There's no "whats its like to be a human". Because that invokes a sense of a "soul" or "spirit" or "self" being transferred from one being to another." --
anon-3988
it does?
anon-3988 6 hours ago [-]
If you become the bat, you become the bat. There is nothing permanent that is being transferred. When you think of "being the bat", you have this image of thinking "oh shit, I am a bat now!? I can echolocate and shit". It's more like, ...what? The imagination of being another person is simply an imagination that arises within this body, spirit, soul, or whatever it is.
fsckboy 5 hours ago [-]
"what does it feel like to be a bat" means what you think you are telling me. it does not mean "what would it feel like to become a bat"
"what does it feel like to be blind from birth?" can you, a sighted person near-sighted though you may be for this example, even/ever comprehend it no matter how extensively described. can someone who has never seen actually describe it to you?
anon-3988 5 hours ago [-]
> "what does it feel like to be blind from birth?" can you, a sighted person near-sighted though you may be for this example, even/ever comprehend it no matter how extensively described
I am saying that it is not possible. It is entirely possible that you can "see" but not comprehend anything, hence effectively being blind. Is my red your red? Is my hotness your hotness? Is the universe upside down? Is your 3d the same as my 3d? Even all of these imaginings and hypotheses come purely from my own sense of experience.
I don't even know that you exist, you might simply be a figment of reality, there could be nothing behind this post. I wouldn't know.
QuiDortDine 4 hours ago [-]
I can't believe all these qualia questions have not evolved in centuries (or at least, the common discourse around them hasn't). We all have similar rods and cones in our eyes. We have common kinds of color blindness. What other reasonable conclusion is there but that my red is your red? All the machinery is similar enough.
I suppose it's because people associate so much of who they are to the subjectivity of their experience. If I'm not the only one to see and taste the world as I do, am I even special? (The answer is no, and that there are more important things in life than being special.)
fsckboy 5 hours ago [-]
sounds like you are grappling with the question as intended. you are not answering the question. keep going. consider what it would feel like to be Boltzmann's bat Descartes in Plato's cave. Ask yourself, "Flappito ergo quod?"
anon-3988 5 hours ago [-]
Imagining yourself flapping your hands in the air is not "what it's like to be a bat". People are fooling themselves when they think they can honestly imagine being a bat. Even the statement "I think therefore I am" does not mean that "I" exist. "I" here implies a center of thinking. There is no center. There is only reality and its content. And thinking is apparently one of its contents.
fsckboy 2 hours ago [-]
if you don't accept a>b and b>c, you have nothing to say about "therefore a>c"; you can say nothing about it. if you did accept a>b and b>c then you would agree "therefore a>c"
>"I" here implies a center of thinking. There is no center.
"I think", according to you, implies that I implies a center of thinking, and you don't believe that there is a center, so you don't believe "I think" even more than you don't believe "therefore I am". You don't have an opinion about therefore I am.
it doesn't matter about the "existence" in the predicate, because you don't accept the "I" in the subject.
selcuka 7 hours ago [-]
> If you are a secular person, it should follow that you are a non-dualist.
It depends on your definition of "dualism". If you define it as "having a soul that was created by a higher being", then yes, they are mutually exclusive.
On the other hand, one can also define dualism as being purely evolutionary. David Chalmers [1], an Australian philosopher and cognitive scientist, has some interesting ideas around how dualistic consciousness may relate to quantum mechanics.
I am not that versed in the terminologies and the differences, but I am talking about people who believe that physical reality is all there is.
tim333 10 hours ago [-]
Maybe in the future we'll be able to run computer simulations of people and bats that think they are conscious and you'll be able to merge them a bit to get some bat experience?
AIorNot 10 hours ago [-]
You would be adding bat experience (sonar, hanging upside down, flying etc) to become a literal Bat Man so to speak :)
But you would never know exactly what it feels to be a bat without removing your human level experience from the picture
m463 8 hours ago [-]
The bat article might be a more philosophical treatise...
which is a more concrete(?) dive into being an animal?
scubakid 14 hours ago [-]
To me, "what is it like to be a" is more or less the intersection of sensory modalities between two systems... but I'm not sure the extent of the overlap tells you much about whether a given system is "conscious" or not.
kelseyfrog 14 hours ago [-]
Pretty much the same conclusion here. Consciousness is what we feel when sheaf 1-cohomology among our different senses vanishes.
Bringing it back to bats, a failure to imagine what it's like to be a bat is just indicative that the overlaps between human and bat modalities don’t admit a coherent gluing that humans can inhabit phenomenally.
ants_everywhere 11 hours ago [-]
> Pretty much the same conclusion here. Consciousness is what we feel when sheaf 1-cohomology among our different senses vanishes.
There's something more to it than this.
For one thing there's a threshold of awareness. Your mind is constantly doing things and having thoughts that don't rise to the threshold of awareness. You can observe more of this stuff if you meditate and less of this stuff if you constantly distract yourself. But consciousness IMO should have the idea of a threshold baked in.
For another, the brain will unify things that don't make sense. I assume you mean something like consciousness is what happens when there aren't obstructions to stitching sensory data together. But the brain does a lot of work interpreting incoherent data as best it can. It doesn't have to limit itself to coherent data.
kelseyfrog 10 hours ago [-]
I'll have to reflect more on the first part, but as far as
> It doesn't have to limit itself to coherent data.
There are specific failure cases for non-integrability:
1. Dissociation/derealization = partial failures of gluing.
2. Nausea = inconsistent overlaps (ie: large cocycles) interpreted as bodily threat.
3. Anesthesia = disabling of the sheaf functor: no global section possible.
At least for me it provides a consistent working model for hallucinogenic, synesthetic, phantom-limb, and split-brain phenomena. If anything, the ways in which sensory integration fails are more interesting than when it succeeds.
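(For anyone who doesn't speak sheaf, here is a minimal statement of the machinery being invoked; identifying "senses" with a cover and "experience" with sections is just the analogy above, not standard mathematics. In Čech terms, record the pairwise mismatches over a cover and ask whether they can be absorbed:)
    c_{ij} \in \mathcal{F}(U_i \cap U_j), \qquad c_{ij} + c_{jk} = c_{ik} \ \text{on } U_i \cap U_j \cap U_k
    [c] = 0 \in \check{H}^1(\{U_i\}, \mathcal{F}) \iff c_{ij} = s_j - s_i \ \text{on } U_i \cap U_j \ \text{for some } s_i \in \mathcal{F}(U_i)
A vanishing class means the local pieces can be corrected into one mutually consistent family, which the sheaf axioms then glue into a global section; a nonzero class is the "large cocycle" obstruction the failure cases above gesture at.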
ants_everywhere 10 hours ago [-]
Yeah to be clear I like this mental model a lot, and I give it extra points for invoking sheaf theory :). I was just saying it doesn't seem complete to me from a psychological perspective.
The way I look at it is that the sensors provide data as activations and awareness is some output with a thresholding or activation function.
Sense making and consciousness in my mental model is something that happens after the fact and it tries to happen even with nonsense data. As opposed to -- as I was reading you to be leaning toward -- being the consequence of sensory data being in a sufficiently nice relationship with each other.
rout39574 14 hours ago [-]
Do you really mean that it's very nearly the same thing "to be" you, an Elon Musk, a homo sapiens infant, and an orangutan? And only modestly different from these to be a dog or a horse?
If I've understood you correctly, I'll suggest that simple sensory intersection is way way not enough: the processing hardware and software are material to what it is like to be someone.
scubakid 13 hours ago [-]
good point, I'd agree sensors are just a piece of the picture
daoboy 10 hours ago [-]
Ed Yong wrote an excellent book closely related to this topic titled An Immense World on the sensory lives of animals that we are still only beginning to understand.
"It is all that we know, and so we easily mistake it for all there is to know. As a result, we tend "to frame animals' lives in terms of our senses rather than theirs."
safety-space 8 hours ago [-]
Science studies the physical and measurable. Consciousness isn’t physical or measurable. That’s why the “hard problem” sits outside what science, by definition, can explain.
glenstein 7 hours ago [-]
>Consciousness isn’t physical or measurable.
That's just a question-begging assertion, and there's plenty of empirical knowledge of necessary physical conditions for consciousness, as well as predictable physical influences on conscious states. Whether consciousness is "measurable" is part of what's at issue and can't just be definitionally presupposed.
nomilk 11 hours ago [-]
> Nagel asserts that "an organism has conscious mental states if and only if there is something that it is like to be that organism—something it is like for the organism."
Struggling to make sense of this sentence.
AlexResi 11 hours ago [-]
An organism is conscious exactly when there is something it is like for that organism to be itself.
Or, put more simply, consciousness is present just in case being that organism has an inner, subjective character - something that cannot be reduced to a purely material state.
PreHistoricPunk 11 hours ago [-]
It means that if something has conscious mental states then it must have subjective experience from its own perspective. If John has a conscious mental state, then I must be able to ask "What is it like to be John?". Hope that helps.
padjo 11 hours ago [-]
I wouldn’t bother.
iLemming 14 hours ago [-]
The article basically talks about "umwelt" (there is a link at the bottom) - "the specific way in which organisms of a particular species perceive and experience the world, shaped by the capabilities of their sensory organs and perceptual systems".
How is it at all related to, let's say, programming?
Well, for example learning vim-navigation or Lisp or a language with an advanced type system (e.g. Haskell) can be umwelt-transformative.
Vim changes how you perceive text as a structured, navigable space. Lisp reveals code-as-data and makes you see programs as transformable structures. Haskell's type system creates new categories of thought about correctness, composition, and effects.
These aren't just new skills - they're new sensory-cognitive modalities. You literally cannot "unsee" monadic patterns or homoiconicity once internalized. They become part of your computational umwelt, shaping what problems you notice, what solutions seem natural, and even how you conceptualize everyday processes outside programming.
It's similar to how learning music theory changes how you hear songs, or how learning a tonal language might affect how you perceive pitch. The tools become part of your extended cognition, restructuring your problem-space perception.
When a Lisper says "code is data" they're not just stating a fact - they're describing a lived perceptual reality where parentheses dissolve into tree structures and programs become sculptable material. When a Haskeller mentions "following the types" they're describing an actual sensory-like experience of being guided through problem space by type constraints.
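A minimal Haskell sketch of that "following the types" sensation (the names and the toy lookup pipeline are invented purely for illustration, and it assumes the standard containers package for Data.Map):
    import qualified Data.Map as M

    type UserId = Int
    type Email  = String

    -- Each step can fail, so each step returns a Maybe.
    userEmail :: M.Map UserId Email -> UserId -> Maybe Email
    userEmail db uid = M.lookup uid db

    emailDomain :: Email -> Maybe String
    emailDomain e = case dropWhile (/= '@') e of
                      ('@' : rest) -> Just rest
                      _            -> Nothing

    -- The composition nearly writes itself: Maybe's (>>=) is the only
    -- combinator whose type lines up, which is the "guided by the types"
    -- feeling described above.
    userDomain :: M.Map UserId Email -> UserId -> Maybe String
    userDomain db uid = userEmail db uid >>= emailDomain
Once that pattern is internalized, short-circuiting failure stops being something you code by hand and becomes something you see in the shape of the types.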
This creates a profound pedagogical challenge: you can explain the mechanics of monads endlessly, but until someone has that "aha" moment where they start thinking monadically, they don't really get it. It's like trying to explain color to someone who's never seen, or echolocation to someone without that sense. That's why someone who's never made a truthful and heartfelt attempt to understand Lisp often never gets it.
The umwelt shift is precisely what makes these tools powerful - they're not just different syntax but different ways of being-in-computational-world. And like the bat's echolocation, once you're inside that experiential framework, it seems impossible that others can't "hear" the elegant shape of a well-typed program.
There are other umwelt-transforming examples, like: debugging with time-travel/reversible debuggers, using pure concatenative languages, logic programming - Datalog/Prolog, array programming, constraint solvers - SAT/SMT, etc.
The point I'm trying to make - don't try to "understand" the cons and pros of being a bat, try to "be a bat", that would allow you to see the world differently.
iLemming 13 hours ago [-]
I suppose someone (even an experienced vimmer) might argue that learning vim is not so much "umwelt-transformative", but rather like "muscle memory training", like LeetCode drilling.
Indeed, basic vim-navigation - (hjkl, w, b) is muscle memory.
But, I'd argue the umwelt shift comes from vim's modal nature and its language of text objects. You start perceiving text as having an inherent grammar - "inside parentheses", "around word", "until comma." Text gains topology and structure that was invisible before.
The transformative part isn't the keystrokes but learning to think "delete inside quotes" (di") or "change around paragraph" (cap). You see text as composable objects with boundaries, not just streams of characters. This may even persist when you're reading on paper.
That mental model often transforms your keyboard workflow not just in your editor, but in your WM, terminal, web browser, etc.
mjcohen 6 hours ago [-]
As Disney almost wrote:
Everybody wants to be a bat
Cause noone but a bat
really knows where it's at
MollyRealized 11 hours ago [-]
Answer: You are a creature of the night, terrible, able to strike terror into a superstitious, cowardly lot.
hsod 7 hours ago [-]
Douglas Hofstadter wrote an essay about this essay in the book “The Mind’s I” which I thoroughly enjoyed reading even if a lot of it is beyond me. There’s a somewhat janky OCR version here http://themindi.blogspot.com/2007/02/chapter-24-what-is-it-l...
Classic Hofstadter, he introduces a concept called a “Be-Able Thing” (BAT for short)
socrateswasone 10 hours ago [-]
It's not like anything, a bat has no sense of self or personal history, it operates on instinct without a personal, reflective self. A bat having consciousness is as relevant as whether a sonar does.
accrual 9 hours ago [-]
> it operates on instinct without a personal, reflective self
I think we would call this "without ego" and not "without consciousness". I think it's totally possible to be conscious without ego. And perhaps bats do have an ego however small - some may be more greedy than others, etc.
socrateswasone 5 hours ago [-]
I think what consciousness nominally means is an awareness of things in relation to a self, a subject-object relation. If there is such a thing as awareness without subject object content, I'd love to hear more, like how would you even know you have it.
AIorNot 10 hours ago [-]
Do you mean the bat has no subjective experience? If so, that's a pretty extraordinary claim to make, and one that raises great ethical concern about the treatment of animals.
If bats have no subjective experience, it's ethical to do anything to them; but if there is subjective experience, then they deserve (as all animals do) to be treated ethically as much as we can manage.
IMO bats are similar to mice in this respect - we've studied mice and rats extensively, and while we cannot know precisely, we can be pretty sure there is subjective experience (felt experience) there, i.e. almost all our scientific experiments and field data with so-called 'lower' organisms show evidence of pain, suffering, desires, play, etc. - all critical evidence of subjectivity.
Now I don't think bats are meta-conscious (metacognitive), because they can't commiserate about their experiences or worry about death etc. like humans can, but they feel stuff - and we must respect that.
socrateswasone 9 hours ago [-]
You don't need to know if it has a "subjectivity" to know if you can torture and kill it, you can rely on the writhing and squealing. Making up artificial distinctions and questions with no answers is just a conceit we get into, ultimately to justify whatever we want. There are too many people on the planet and we need to "process" a lot of life for our benefit.
Anyway, if there is no mind in the sense of a personal identity or a reflective thought process, then really you're just torturing and killing a set of sense perceptions, so what would be the basis of a morality that forbids that?
glenstein 7 hours ago [-]
>Anyway, if there is no mind in the sense of a personal identity or a reflective thought process
I don't think "mind" is limited to those two things, and I think it may be on a continuum rather than binary, and they may also be integrally related to the having of other senses.
I also think they probably do have some non trivial degree of mind even in the strong sense, and that mental states that aren't immediately tied to self reflection are independently valuable because even mere "sense perceptions" include valenced states (pain, comfort) that traditionally tend to fall within the scope of moral consideration. I also think their stake in future modes of being over their long term evolutionary trajectory is a morally significant interest they have.
socrateswasone 6 hours ago [-]
Saying it might be on a continuum just obfuscates things. What do you mean exactly?
If there is no sense of self or personal identity, how is that different than a block of wood or a computer? That there might be "mental" functions performed doesn't give it subjectivity if there is no subject performing them. And if there is no persistent reflective self there is no subject. You could call instincts or trained behaviors mental, activities of a kind of mind if you wanted to. But if it's not self aware it's not a moral subject.
wagwang 14 hours ago [-]
Can we just all admit there has basically been no real progress made on the mind-body problem? The positions all rest on metaphysical axioms of which no one has any proof. Physicalism is about as plausible as solipsism.
Exhibit A:
> Nagel begins by assuming that "conscious experience is a widespread phenomenon" present in many animals (particularly mammals), even though it is "difficult to say [...] what provides evidence of it".
jibal 13 hours ago [-]
> Physicalism is about as plausible as solipsism.
Physicalism is an ontological assertion that is almost certainly true, and is adhered to by nearly all scientists and most philosophers of mind. Solipsism is an ontological assertion that could only possibly be true for one person, and is generally dismissed. They are at opposite ends of the plausibility scale.
geye1234 12 hours ago [-]
One big problem with physicalism is that many alleged arguments in its favor are nothing of the sort. Any argument for physicalism that refers to neurological observation is invalid. Physicalism claims that all mental events can be reduced to physical events. But you cannot look at physical events to prove this. No matter the detail in which you describe a physical event, you can't use this to prove, or even argue in favor of, the thesis that all mental events can be reduced to the physical.
It's like describing the inside of a house in very great detail, and then using this to argue that there's nothing outside the house. The method is explicitly limiting its scope to the inside of the house, so can say nothing about what's outside, for or against. Same with physicalism: most arguments in its favor limit their method to looking at the physical, so in practice say nothing about whether this is all there is.
jibal 11 hours ago [-]
You're making a number of unsupported assertions. There's a massive amount of literature in support of physicalism. And it's a far cry from "there's no proof of x" to "x is invalid". No metaphysical stance can be proved.
> Same with physicalism: most arguments in its favor limit their method to looking at the physical, so in practice say nothing about whether this is all there is.
This is simply wrong ... there are very strong arguments that, when we're looking at mental events, we are looking at the physical. To say that arguments for physicalism are limited to looking at the physical is a circular argument that presupposes that physicalism is wrong. The arguments for physicalism absolutely are not based on looking at a limited set of things; they are logical arguments that there's no way to escape being physical ... certainly Descartes' dualism is long dead due to the interaction problem -- mental states must be physical in order to be acted upon or act upon the physical. The alternatives are ad hoc nonsense like Chalmers' "bridging laws", which posit that there's a mental world kept in tight sync with the physical world by these "bridging laws" that have no description or explanation or reason to believe they exist.
geye1234 11 hours ago [-]
> And it's a far cry from "there's no proof of x" to "x is invalid".
Oh this is undoubtedly true, and my argument was limited to the statement that the most common argument for physicalism is invalid. I was not launching an attack on physicalism itself.
> No metaphysical stance can be proved.
That's an interesting metaphysical stance, but again, I'm not trying to prove any metaphysics, just pointing out the main weakness that I see in the physicalist argument. I'm pointing out that any pro-physicalist argument that is a variant of "neuroscience says X" is invalid for the reason I gave: by limiting your scope to S, you can say nothing about anything outside S. This is true regardless of whether there is actually anything outside S, so there is no assumption in my argument that physicalism is wrong.
One argument against physicalism is that if thought or knowledge can be reduced to particles bouncing around, then there is no thought or knowledge. My knowledge that 2+2=4 is about something other than, or different from, the particles in my brain. Knowledge is about the content of the mind, which is different from the associated physical state of the brain. If content is neurons, then content as something my mind considers doesn't exist. If my thought "2+2=4" just is a bunch of particles in my brain doing stuff, then my belief that my thought is true is not even wrong, as the saying goes: just absurd.
I'm no Cartesian dualist though -- the interaction problem is just one problem with his dualism. I think Aristotle and Aquinas basically got the picture of reality right, and their metaphysics can shed yuuuuge amounts of light on the mind-body problem but obviously that's a grossly unfashionable worldview these days :-)
jibal 10 hours ago [-]
> I'm not trying to prove any metaphysics
You attacked physicalism for not being proven.
I disagree with your arguments and I think they are hopelessly confused. Since our views are conceptually incommensurate, there's no point in continuing.
geye1234 8 hours ago [-]
I'm afraid the physicalist position is absolutely impossible. When I think about something, I'm thinking about something different from the brain state that represents it. There is nothing difficult or subtle about this: if I think about a tiger, I am not thinking about a brain state that is associated therewith.
The physicalist position wants to reduce the mental to the physical. My thought cannot be reduced from the mental to the physical, because my thought is about a tiger, and a tiger cannot be reduced to a brain state.
If physicalism is true, I can't really be thinking about a tiger, because the tiger in my thought has no physical existence-as-a-tiger, and therefore can't have any existence-as-a-tiger at all. But then I'm not really thinking about a tiger. And the same applies to all our thoughts: physicalism would imply that all our thoughts are delusional, and not about reality at all. A non-physicalist view allows my thought to be actually about a tiger, without that tiger-thought having physical existence.
(Note that I have no problem with the view that the mental and the physical co-incide, or have some kind of causal relationship -- this is obviously true -- only with the view that the mental is reducible to the physical.)
glenstein 9 hours ago [-]
Whoheartedly agree. I think what they're stressing though if I'm understanding correctly, is we do kind of start in a Cartesian space, and branch out via inferences to the presumption of an external world. And, from a certain philosophical perspective, one could point to that and insist that at any moment that connection could be the weak link that brings all of epistemology crashing down. We could get unhooked from the simulation, so to speak, open our real eyes, and witness a new world with new bedrock alternatives to our notions of causality, qualia, and so on.
I don't believe any of that to be true, but I think that's kind of the point of that argument. I do think we start from that Cartesian starting place, but once we know enough about the external world to know that we're a part of it, and can explain our mind in terms of it, it effectively shifts the foundation, so that our mental states are grounded in empirical reality rather than the other way around.
jibal 4 hours ago [-]
See their comment just above where they say "I'm afraid the physicalist position is absolutely impossible." ... it's the worst argued rubbish imaginable.
wagwang 11 hours ago [-]
I've never heard any argument that demonstrates any certainty around physicalism. I like the argument because it sounds nice, but I would never ever claim to know it to be true. I mostly arrived at physicalism because there are egregious problems with the other theories and physicalism seemed like the suitable default naive answer.
You're getting a little ahead of yourself. First, ontological assertions need to reflect reality. That is, they need to be true or false, and many philosophers, including prominent scientists, don't think they qualify. Indeed, the arguments against ontological realism are more airtight than any particular metaphysical theory.
jibal 11 hours ago [-]
> You're getting a little ahead of yourself.
Nonsense.
> First, ontological assertions need to reflect reality.
You're getting ahead of yourself to imply that somehow physicalism does not reflect reality, or that an assertion has to be proven to reflect reality before being made.
> That is, they need to be true or false
No, that's not what reflecting reality means. Of course ontological assertions are true or false, if they aren't incoherent, but that's neither here nor there.
> and many philosophers, including prominent scientists, don't think they qualify.
What's this "they" that don't qualify? The subject was physicalism, and again almost all scientists and most philosophers of mind subscribe to it ... which leaves room for some not doing so. Whether or not the outliers are "prominent" is irrelevant.
> Indeed, the arguments against ontological realism are more airtight than any particular metaphysical theory.
That's a much stronger claim than that physicalism is wrong ... many dualists are ontological realists. And it's certainly convenient to claim that there are airtight arguments for one's views, and easy to dismiss the claim.
vehemenz 13 hours ago [-]
> Physicalism is about as plausible as solipsism
And while you're at it, as plausible as any metaphysical theory, insofar as you're still doing metaphysics.
glenstein 10 hours ago [-]
>Can we just all admit there has basically been no real progress made to the mind-body problem.
I think we've made extraordinary progress on things like brain to machine interfaces, and demonstrating that something much like human thought can be approximated according to computational principles.
I do think some sort of theoretical bedrock is necessary to explain the "something it is like to be" quality, but I think it would be obtuse to brush aside the rather extraordinary infiltrations into the black box of consciousness that we've made thus far, even if it's all been knowing more about it from the outside. There's a real problem that remains unpenetrated, but as has been noted elsewhere in this thread, it is a nebulous concept, and perhaps one of the most difficult and important research questions, and I think nothing other than ordinary humility is necessary to explain the limited extent to which we understand it thus far.
adityaathalye 13 hours ago [-]
If anything, it's getting weirder... real progress looks, well, batshit insane. For example:
Against Mind-Blindness: recognizing and communicating with diverse intelligences - by Michael Levin
Much of the mind-body problem comes from Descartes, who assumed that physical reality was nothing more than a bunch of particles bouncing around. Given that the mind cannot be reduced to this (whatever my experiences are, they are different from particles bouncing around), then the mind must be something utterly unlike everything else in reality. Thus Descartes posits that the mind is one thing and the body another (substance dualism).
If one drops the assumption that physical reality is nothing more than a bunch of particles, the mind stops being so utterly weird and unique, and the mind-body problem is more tractable. Pre-17th century, philosophers weren't so troubled by it.
the_af 12 hours ago [-]
> Given that the mind cannot be reduced to this (whatever my experiences are, they are different from particles bouncing around)
Why can't it?
geye1234 12 hours ago [-]
Several reasons. One is that my experience of looking at a tree is one thing, but the neurological firing that takes place in my brain when I look at a tree is another. They are not the same. If you can reduce your experience of looking at a tree to neurons firing, then you are not really looking at a tree, and absurdity results.
Another is that the propositions "the thought 2+2=4 is correct" and "the thought 2+2=5 is wrong" can only be true with regard to the content of a thought. If thought can be reduced to neurons firing, then describing a thought as correct or wrong is absurd. Since this is not the case, it must be impossible to reduce thought to neurons firing.
(Btw, the first paragraph of my previous comment is not my position. I am giving a three-sentence summary of Descartes' contribution to the mind-body problem.)
the_af 11 hours ago [-]
I don't follow the reasoning at all. Why is human experience not the neurological firing? Why can't a thought be reduced to neurons firing, what about that would make it absurd?
I promise I'm not being dense or rhetorical, I truly don't understand that line of thought.
It seems to me like begging the question, almost like saying "experience cannot be this, because it'd be absurd, because it cannot be this."
geye1234 8 hours ago [-]
Here's something I posted a while ago, I'm copying and pasting with a few slight edits:
It is wrong to claim that brain states (neurons firing) are the same as mental states (thoughts). There are several reasons for this. One is that reducing thoughts to brain states means a thought cannot be correct or incorrect. For example, one series of mental states leads to the thought "2+2=4"; another series leads to the thought "2+2=5". The correctness of the former and the wrongness of the latter refers only to the thought's content, not the physical brain state. If thoughts are nothing more than brain states, it's meaningless to say that one thought is correct -- that is to say, a thought that conforms to reality -- and that the other is incorrect. A particular state of neurons and chemicals cannot per se be correct or incorrect. If one thought is right (about reality) and another thought is wrong (not about reality), then there must be aspects of thought that are distinct from the physical state of the brain.
If it's meaningless to say that one thought is correct and another is incorrect, then of course nothing we think or say has any connection to reality. Hence the existence of this disagreement, along with the belief that one of us is right and the other wrong, presupposes that the physicalist position is wrong.
the_af 6 hours ago [-]
There's a leap you're making I cannot follow.
I agree with this: the physical configuration of neurons, their firings, the atoms that make them, etc, cannot be "right" or "wrong". This wouldn't make sense in reality; it either is or isn't, and "right" or "wrong" are human values. The universe is neither right nor wrong, it just is.
What about the thoughts those neuron firings mean to us? Well, a good argument can be made that they are also not "right" or "wrong" in isolation, they are just phenomena. Trivially, a thought of "2+2=4" is neither right nor wrong, it's only other thoughts that consider it "right" or "wrong" (often with additional context). So the values themselves can be a physical manifestation.
So it seems to me your problem can be resolved like this: in response to a physical configuration we call a "thought", other "thoughts" can be formed in physical configurations we call "right" or "wrong".
The qualities of "right" or "wrong" only exist as physical configurations in the minds of humans.
And voila! There's no incompatibility between the physical world and thoughts, emotions, "right" or "wrong".
IAmGraydon 7 hours ago [-]
>An organism has conscious mental states if and only if there is something that it is like to be that organism—something it is like for the organism.
Isn’t this just the same as saying an organism is conscious if it perceives? If it is aware of input from one or more senses (and I’m not limiting that to the five human senses)?
tgbugs 8 hours ago [-]
I'm going to ignore the issues of mind/body dualism since they are orthogonal to the argument I want to make about Nagel's bat.
The short version is that if we can approximate the sensory experience and the motor experience of an organism, and we can successively refine that approximation as measured by similarity in behavior between bat and man-bat, then I would argue that we can in fact imagine what it is like to be a bat.
In short, it is a Chinese Bat Room argument. If you put a human controlling a robot bat and a bat in two boxes and then ask someone to determine which is the human and which is the bat, then, when science can no longer tell the difference (because we have refined the human/bat interface sufficiently), you can ask the human controlling the robot bat to write down their experience and it would be strikingly similar to what the bat would say if we could teach it English.
The bat case is actually easier than one might suppose (similarly for, say, a jumping spider), because we can translate their sensory inputs to our nervous system, and if we tune our reward system and motor system so that we get even an approximate set of inputs and a similar set of actuators, then we can experience what it is like to be a bat.
Further, if I improve the fidelity of the experimental man-bat simulation rig, the experience will likewise converge. While we will not be able to truly be a bat, since that is asymptotically mutually exclusive with our biology, the fact that we can build systems that allow a progressive approach to bat sensory-motor experience means that we actually do have the ability to imagine the experience of other beings. That is, our experiences are converging and differ only due to our lack of the technical ability to overcome the limitations of our biological differences.
The harder case is when we literally don't have the molecule that is used to detect something, as in the tetrachromat case. That said, one of my friends has always wanted to find a way to do an experiment where a trichromat can somehow have the new photoreceptor expressed in one eye and see what happens.
The general argument for why we would expect something similar to happen, should the technical hurdles be overcome, is that basically all nervous systems wire themselves up by learning. Therefore, as long as the input and output ranges can be mapped to something that a human can learn, a human nervous system should likewise converge to be able to sense and produce those inputs and outputs (modulo certain critical periods in neural development, though even those can be overcome, e.g. language acquisition by slowing down speech for adults).
Some technical hurdle examples: converting a trichromat into a tetrachromat by CRISPRing someone's left eye; learning dolphin by slowing down dolphin speech in time while also providing a way for humans to produce dolphin high-frequency speech via some transform on the human orofacial vocal system. There are limitations when we can't literally dilate time, but I suppose if we are going all the way, we can accelerate the human to the fraction of the speed of light that will compensate for the fact that the human motor system can't quite operate fast enough to allow a rapid-fire conversation with a dolphin.
bettating 13 hours ago [-]
What is it like to be another person?
esafak 13 hours ago [-]
I'm not sure how to answer the even more fundamental question, "What is it like to be yourself?" What constitutes a valid answer? It's a vague question.
jibal 13 hours ago [-]
I don't believe that the phrase "what it's like" (in this philosophical sense) is coherent. When people like Nagel or Chalmers are asked to explain it, they liken it to other incoherent assertions.
vehemenz 12 hours ago [-]
What's incoherent about it? Do you not think subjective experience has its own qualities? Breathing in fresh morning air, for example?
jibal 12 hours ago [-]
I stated what's incoherent about it. Your "Do you not think" is a non sequitur ... coherence is about meaning, and no one can say what the phrase means.
Aside from that, breathing fresh air in the morning is an activity, not a "quality of subjective experience". Generally the language people use around this is extremely confused and unhelpful.
vehemenz 11 hours ago [-]
I'm sure you think you're well-intentioned, but your attempts at rigor have me scratching my head a little bit. I don't understand the defensiveness, given that you haven't done the bare minimum to explain your position.
And no, that's not what a non sequitur is. And no, coherence is not just a linguistic idea. Then you try to explain what I "really mean" by "quality of subjective experience," and you can't even give a good faith reading of that. I'm really trying here.
jibal 10 hours ago [-]
What "defensiveness"? And you're accusing me of bad faith? Stick to talking about ideas, not people. I won't engage with you further.
goatlover 3 hours ago [-]
It just means the experience of sensation. You're conscious of a purple object, the smell of brewed coffee, the feeling of a sharp pain; you have a resurfaced memory of a deceased relative, you visualize the beach, you dream of being unprepared for a test, you daydream while driving down a long road, you feel affection seeing a friend, you hear your internal dialog about the boss.
There's nothing incoherent here, they're just talking about subjective states of experience.
the_af 12 hours ago [-]
True. I suppose every one of us has asked:
What makes me me? Whatever you identify as "yourself", how come it lives within your body? Why is there not someone else living inside your body? Why was I born, specifically "me", and not someone else?
This has puzzled me since childhood.
selcuka 7 hours ago [-]
> I suppose every one of us has asked
Not at all. I was shocked when I noticed how few people have asked themselves this question. In fact, it is impossible to even explain this question to the majority of people. Most people confuse the question with "what makes us intelligent", missing the whole "first person perspective" aspect of it.
I guess evolution tries to stop us from asking questions that might lead to nihilism.
accrual 8 hours ago [-]
I think about this sometimes. From the POV of being in the human body I feel we must all be "me". If I wasn't here having this specific subjective experience as a human user on hackernews, then I would be someone or something else having another or "their" experience.
If that's not the case then I'll just have no subjective experience, same as before I was born/instantiated.
card_zero 13 hours ago [-]
It's more or less OK, thank you for asking. Recently I felt:
Disappointed when I went somewhere and there wasn't any tea,
Enthralled by a story about someone guarding a mystical treasure alone in a remote museum on a dark and stormy night,
Sympathetic toward a hardworking guy nobody likes, but also aggravated by his bossiness to the point of swearing at him,
Confused due to waking up at 7 pm and not being sure how it happened.
You probably don't entirely understand any of those. What is it to entirely understand something? But you probably get the idea in each case.
Der_Einzige 13 hours ago [-]
Daniel Dennett was the only good part of the "New Atheism" movement. May he rest in peace.
vehemenz 13 hours ago [-]
The moniker was mostly invented by the press. But if we're talking about all four "horsemen," I think they all made positive contributions to their respective fields. Likewise, there are fair critiques one can level at each of them, including Dennett.
lenerdenator 15 hours ago [-]
[flagged]
dang 12 hours ago [-]
Could you please stop posting unsubstantive comments? We're trying for something else here.
lenerdenator 7 hours ago [-]
Feels pretty substantive to me. Just because it's not what you were going for in relation to the discussion of that work doesn't mean it's unsubstantive. I approached it from the position Nagel brought up, which is reductive. They have life experiences that none of us can fathom, so we reduce them to what we know of them, which in my case is them living in my attic, much to my chagrin.
dang 3 hours ago [-]
Of course people feel different things and everyone's feelings are valid. Nonetheless, there's a standard that we're trying to adhere to here, and comments like these are certainly below it — this is not a borderline call:
https://partiallyexaminedlife.com/2025/06/30/what-is-it-like...
Why not? It works, thus it verifies itself.
All we need to do (to talk about, to study it) is identify it. We need to be using the word to refer to the same thing. And there's nothing really hard about that.
There are many research areas where the object of research is to know something well enough that you could converge on such a thing as a definition, e.g. dark matter, intelligence, colony collapse syndrome, SIDS. We nevertheless can progress in our understanding of them in a whole motley of strategic ways, by case studies that best exhibit salient properties, trace the outer boundaries of the problem space, track the central cluster of "family resemblances" that seem to characterize the problem, entertain candidate explanations that are closer or further away, etc. Essentially a practical attitude.
I don't doubt in principle that we could arrive at such a thing as a definition that satisfies most people, but I suspect you're more likely to have that at the end than the beginning.
Someone conscious is able to choose how they want to behave and then behave that way. For example I can choose to be kind or mean. I can choose to learn to skate or I choose not to.
So free will and consciousness are strongly linked.
I have seen zero evidence that any being other than humans can do this. All other animals have behaviors that are directly shaped by their environment, physical needs, and genetic temperament, and not at all shaped by choices.
For example a dog that likes to play with children simply likes them, it did not choose to like them. I on the other hand can sit, think, and decide if I like kids or not.
(This does not imply that all choices made by humans are conscious - in fact most are not, it just means that humans can do that.)
On the other hand, I bet you can't prove that you ever made a free choice.
In any case, a mirror test is a test of recognizing self, it does not indicate anything in terms of self awareness.
And I chose to fast for 5 days because I wanted to. Nothing forced me, it was a free choice. I simply thought about it and decided to do it; there were no pros or cons pushing me in either direction.
They said animals show choices, they did not claim to prove animals made a choice. The point is that you also cannot prove you made a choice, only that you do things that show you may have made a choice. It's a fine, but important, distinction.
You can tell it was invented by Cory Doctorow because there is a very specific kind of Gen X person who uses words like that - they have a defective sense of humor vaguely based on Monty Python, never learned when you are and aren't supposed to turn it off, and so they insist on making up random insults like "fuckwaffle" all the time instead of regular swearing.
Mostly people make things better over time. My bed, my shower, my car are all better than I could reasonably have bought 50 years ago. But the peculiarities of software network effects - or of what venture capitalists believe about software network effects - mean that people should give things away below cost while continuing to make them better, and then one day switch to selling them for a profit and making them worse, while they seemingly could change nothing and not make them worse.
That's a particular phenomenon worthy of a name and the only problem with "enshittification" is that it's been co-opted to mean making things worse in general.
It's not always that. After some time, software gets to a state where it's near the local maximum for usability. So any changes make the software _less_ usable.
But you don't get promoted in large tech companies unless you make changes. So that's how we get stuff like "liquid glass" or Android's UI degradation.
— Kurt Vonnegut
In this sense, I think one has to aaaaaalmost be a bat in order to know what it is to be it. A fine thread trailing back to the human.
The imago-machines of Arkady Martine's "A Memory Called Empire" come to mind. Once integrated with another's imago, one is not quite the same self, not even the sum of two, but a new person entirely, containing a whole line of selves melded into that which was one. Now one truly contains multitudes.
Andy Weir's The Egg makes regular HackerNews appearances.
There is no answer which is why we are here is the only thought I can come up with. Life is a question that asks itself to be answered and in the living answers itself so completely that to ask what is the purpose would be to say "what is the purpose of a hammer if there were nothing else?" The answer and the question become themselves and are inseparable from not themselves excepting insofar as no life cannot question and so cannot answer.
Anyway, belly button picking. It amuses me that this paper's title is similar in many respects to the 2017 paper "Attention Is All You Need". What if attention were all you needed to become a bat? Look everyone, I'm a bat! POOF, you become a bat. That would be silly.
Of course it could all be claptrap that humans want to believe in but I find it to be pretty powerful and I think it is true
(Warning: Gets into spiritual stuff)
https://youtu.be/R-IIzAblVlg?si=t9RqXgF_wwJPcv_g
I sometimes wonder about this, too. Do other people perceive things like I do? If someone was magically transplanted to my body, would they scream in pain "ooooh, this hurts, how could he stand it", whereas I consider the variety of discomforts of my body just that, discomforts? And similarly, were I magically transported to another person's body, would I be awestruck by how they see the world, how they perceive the color blue (to give an example), etc?
Yeah another example I think about from time to time is our own sense of perspective. It's all relative, but my sense of how far away is "that thing over there" is probably different from yours. Partially because we may be different sizes and heights, but also because our eyes and brains process the world differently. Like a camera with different lenses.
Also, speed. If your brain's clock is faster than mine then you may perceive the world to be moving slower than I do.
An interior designer will see the colors, and the layout and how the things go together or don't. I don't see that, and in turn the designer does not see what I see.
So never mind the physical senses, even on a mental level two people do not see/experience the world the same way.
I'm not going to try to draw any inferences about consciousness from these facts. I'll leave that to others.
https://www.npr.org/programs/invisibilia/378577902/how-to-be...
Sure - although depending on how quickly one was scanning the environment with echolocation it might also feel a bit like looking around a pitch black room with a flashlight.
In any case it's essentially a spatial sense, not a temporal one, so is bound to feel more like (have a similar quale to) vision than hearing.
> He lived alongside badgers for weeks, sleeping in a sett in a Welsh hillside and eating earthworms, learning to sense the landscape through his nose rather than his eyes. He caught fish in his teeth while swimming like an otter; rooted through London garbage cans as an urban fox; was hunted by bloodhounds as a red deer, nearly dying in the snow.
https://en.wikipedia.org/wiki/Charles_A._Foster
"How any thought should produce a motion in Body is as remote from the nature of our Ideas, as how any Body should produce any Thought in the Mind. That it is so, if Experience did not convince us, the Consideration of the Things themselves would never be able, in the least, to discover to us." (IV iii 28, 559)
It's a hard-sci-fi story about how various societies, human and alien, attempt to assert control & hegemony across centuries of time (at times thinking of this as a distributed systems and code documentation problem!), and how critical and impactful the role of language translation can be in helping people to understand unfamiliar ways of thinking.
At the novel's core is a question very akin to that of Nagel's positivism-antipositivism debate [1]: is it possible (or optimal for your society's stability) to appreciate and empathize with people wholly different from yourselves, without interpreting their thoughts and cultures in language and representations that are colored by your own culture?
What if, in attempting to do so, this becomes more art and politics than provable science? Is "creative" translation ethical if it establishes power relationships that would not be there otherwise? Is there any other kind?
Deepness is not just a treatise on this; it places the reader into an exercise of this. To say anything more would delve into spoilers. But lest you think it's just philosophical deepness, it's also an action-packed page-turner with memorable characters despite its huge temporal scope.
While technically it's a prequel to Vinge's A Fire Upon The Deep, it works entirely standalone, and I would argue that Deepness is best read first without knowing character details from its publication-time predecessor Fire. Note that content warnings for assault do apply.
[0] https://www.amazon.com/Deepness-Sky-Zones-Thought/dp/0812536...
[1] https://en.wikipedia.org/wiki/Logical_positivism / https://en.wikipedia.org/wiki/Antipositivism
Bluey: "Yeah!"
Bandit: "How is it?"
Bluey: "It's great! You get to eat a lot of fruit!"
IMHO the phrasing here is essential to the argument and this phrasing contains a fundamental error. In valid usage we only say that two things are like one another when they are also separate things. The usage here (which is cleverly hidden in some tortured language) implies that there is a "thing" that is "like" "being the organism", yet is distinct from "being the organism". This is false - there is only "being the organism", there is no second "thing that is like being the organism" not even for the organism itself.
In translations to Spanish, the article is titled "¿Qué se siente ser un murciélago?", literal word by word translation "What is felt being a bat?"
In French, "Quel effet cela fait-il d'être une chauve-souris?", literal word by word translation "What effect it makes to be a bat?"
In Chinese, "成为一只蝙蝠可能是什么样子", i.e., "To become a bat could be what feeling/sensation?"
None of these translations has a comparative word. And at least in Spanish (I won't speak about the other two because I'm not so proficient in them), using a comparative expression similar to "being like" in English ("¿A qué se parece ser un murciélago?") would sound awkward and not really convey the point. Which is why the translators didn't do so.
Of course I know that the original article is in English, but I think the author basically meant "What is felt being a bat?" and just used the "like" construction because it's what you say in English for that to sound good and clear. Your highlighted text could be rendered as "An organism has conscious mental states if and only if there is something that is felt being that organism – something that is felt by the organism." and it would be more precise, just doesn't sound elegant in English.
As for whether I agree with Nagel, I find him consistently just wrong enough to be irritating in ways that I want to work out my thoughts in response to, which by some standards can be counted as a compliment. As much as I understand the turn of phrase and its ability to get people to grasp the idea, and I at least respect it for that reason, I kind of sort of always have the impression that this is what everyone meant the entire time and wouldn't have thought a whole essay emphasizing the point was necessary.
Because they are trying to discuss a difficult-to-define concept - consciousness.
The difficulty and nebulousness is intrinsic to the subject, especially when trying to discuss in scientific terms.
To dismiss their attempts so, you have to counter with a crystal-clear, unarguable description of what consciousness actually is.
Which of course, you cannot do, as there is no such agreed description.
[0] https://www.labyrinthbooks.com/the-feeling-of-what-happens/
Ordinary materialism is mind-body/soul-substance subjectivity with a hat and lipstick.
I’d bet bats would enjoy marrow too if they could.
EDIT: removed LLM irrelevancy, improved formatting
I find myself believing Idealism or monism to be the most likely fundamental picture
Consciousness is a characteristic of material/matter/substance/etc.
There are not two types of stuff.
It is epistemologically rigorous. And simple.
- I assume that, as a materialist, you mean our brain carries consciousness as a field of experience arising out of neural activity (i.e. neurons firing, some kind of information processing leading to models of reality simulated in our mind, leading to us feeling aware), i.e. that our awareness is the 'software' running inside the wetware.
That's all well and good, except that none of that explains the 'feeling of it': there is nothing in that 3rd person material activity that correlates with first person feeling. The two things are categorically different (reductionist physical processes cannot substitute for the feeling you and I have as we experience).
This hard problem is difficult to surmount physically - either you say it's an illusion (but how can the primary thing we are, which we experience as the self, be an illusion?), or you say that somewhere in fields, atoms, molecules, cells, in 'stuff', is the redness of red or the taste of chocolate..
a materialist isn't saying that only material exists: no materialist denies that interesting stuff (behaviors, properties) emerges from material. in fact, "material" is a bit dated, since "stuff-type material" is an emergent property of quantum fields.
why is experience not just the behavior of a neural computer which has certain capabilities (such as remembering its history/identity, some amount of introspection, and of course embodiment and perception)? non-computer-programming philosophers may think there's something hard there, but the only way they can express it boils down to "I think my experience is special".
It’s like explaining music vs hearing music
We can explain music intellectually and physically and mathematically
But hearing it in our awareness is a categorically different activity, and it's an experience that has no direct correlation to the physical correlates of its being
The common thought experiment is the researcher who has never seen color experiencing it for the first time (Mary the Colour Scientist: https://en.wikipedia.org/wiki/Knowledge_argument)
Basically his answer to the question "What is it like to be a bat?" is that it's impossible to know.
Indeed! Makes you think: maybe it's a bug rather than a feature.
I do mostly agree with that and I think that they collectively give analytic philosophy a bad name. The worst I can say for Nagel in this particular case though is that the whole entire argument amounts to, at best, an evocative variation of a familiar idea presented as though it's a revelatory introduction of a novel concept. But I don't think he's hiding an untruth behind equivocations, at least not in this case.
But more generally, I would say I couldn't agree more when it comes to the names you listed. Analytic philosophy ended up being almost completely irrelevant to the necessary conceptual breakthroughs that brought us LLMs, a critical missed opportunity for philosophy to be the field that germinates new branches of science, and a sign that a non-trivial portion of its leading lights are just dithering.
Why they focus on feelings is a different issue.
That is what is being discussed using the "what it's like" language.
"What is it like to be a rock" => no thing satisfies that answer => a rock does not have unconscious mental states
"What is it like to be a bat" => the subjective experience of a bat is what it is like => a bat has conscious mental states
Basically it seems like a roundabout way of equating "the existence of subjective experience" with "the existence of consciousness"
edit: one of the criticism papers that the wiki cites also provides a nice exploration of the usage of the word "like" in the definition, which you might be interested to read (http://www.phps.at/texte/HackerP1.pdf)
> It is important to note that the phrase 'there is something which it is like for a subject to have experience E' does not indicate a comparison. Nagel does not claim that to have a given conscious experience resembles something (e.g. some other experience), but rather that there is something which it is like for the subject to have it, i.e. 'what it is like' is intended to signify 'how it is for the subject himself'.
How do you know that?
Philosophically, of course.
I mean sure you can’t cut a rock open and see any mental states. But you can no more cut a human open and see mental states either.
Now I am no way suggesting that you don’t have a model for ascribing mental states to humans. Or dogs. Or LLM’s. Just that all models, however useful are still models. Not having a model capable of ascribing mental states to rocks does not preclude rocks having mental states.
Well you don't, and my reading of the article was that Nagel also recognized that it was basically an assumption which he granted to bats specifically so as to have a concrete example (one which was suitably unobjectionable; it seems he thought bats 'obviously' had some level of consciousness). The actual utility of this definition is not, as far as my understanding goes, to demarcate between what is and what is not conscious. It seems more like he's using it to establish a sort of "proof-by-contradiction" against the proposal that consciousness admits a totally materialistic description. Something like:
(1) If you say that A is conscious, then you also must say that A has subjective self-experience (which is my understanding of the point of the whole "what it is like to be" thing)
(2) Any complete description/account of the consciousness of A must contain a description of the subjective self-experience of A because of (1)
(3) Subjective self-experience cannot be explained in purely materialistic/universal terms, because it's subjective (so basically by definition)
=> Consciousness cannot be fully described in a materialistic framework, because of the contradiction between (2) and (3)
> Just that all models, however useful are still models
Totally agree with this, I think you're just misunderstanding the specific utility of this model (which is this specific argument about what can be described using human language). My example with the rock was kind of a specific response to OP to illustrate how I understood the whole "what it is like to be" thing to be equivalent to (1). If I'd had a bit more forethought I probably would have made those arrows in the line you've quoted bidirectional.
That's exactly what I'm saying is erroneous. Consciousness is the first thing, we are only led to believe it is a separate, second thing by a millenia-old legacy of dualism and certain built-in tendencies of mind.
I doubt Nagel would go out of his way to offer such an unnatural linguistic construction, and other philosophers would adopt this construction as a standard point of reference, if that was the sole intent.
>So then are you saying there is no such thing as consciousness?
No, not at all. I'm only saying that if we want to talk about "the consciousness of a bat", we should talk about it directly, and not invent (implicitly) a second concept that is in some senses distinct from it, and in some sense comparable to it.
The tricky bit is that "to be" is not an ordinary verb like fly, eat, or echo-locate. And "being an organism" is, in the context of the paper, about subjective experience (subjective, and so inaccessible to everything except the organism).
To put it another way, the language game Nagel plays follows the conventions of language games played in post-war English language analytic philosophy. One of those conventions is awareness of Wittgenstein’s “philosophical problem”: language is a context sensitive agreement within a community…
…sure you may find fault with Wittgenstein and often there are uncomfortable epistemological implications for Modernists, Aristotelians, Positivists and such…then again that’s true of Kant.
Anyway, what the language-game model gives philosophical discourse is a way of dealing with, or better avoiding, Carnapian pseudo-problems arising from an insistence that the use of a word in one context applies to a context where the word is used differently…Carnap's Logical Structure of the World pre-dates Wittgenstein's Philosophical Investigations by about 25 years.
The question is not "What would it be like (i.e. be similar to) to be a bat?" which seems to be the strawman you are responding to.
If you don't believe that, then you face the challenge of describing what the difference is. It's difficult to do in ordinary language.
That's what Nagel is attempting to do. Unless you're an eliminativist who believes that conscious experience is an "illusion" (experienced by what?), then you're just quibbling about wording, and I suspect you'll have a difficult time coming up with better wording yourself.
I also don't think it's fair to say I'm just quibbling about wording. Yes, I am quibbling about wording, but the quibble is quite essential because the argument depends to such a large extent on wording. There are many other arguments for or against different views of consciousness but they are not the argument Nagel makes.
(Though fwiw I do think consciousness has some illusory aspects - which is only saying so much as "consciousness is different than it appears" and a far cry from "consciousness doesn't exist at all")
Certainly. I just didn't know where you stood on the question.
In Nagel's terms, there is not something it is like to be a game of Tetris. A game of Tetris doesn't have experiences. "Something it is like" is an attempt to characterize the aspect of consciousness that's proved most difficult to explain - what Chalmers dubbed the hard problem.
How would you describe the distinction?
> fwiw I do think consciousness has some illusory aspects - which is only saying so much as "consciousness is different than it appears"
Oh sure, I think that's widely accepted.
If you claim there's no distinction, then in terms of the meaning Nagel is trying to convey, you're claiming there's no distinction that sets you apart from a game of Tetris in terms of consciousness.
That's where my first reply to you was coming from: if you believe the distinction Nagel is trying to convey doesn't exist, that's tantamount to saying that consciousness as a real phenomenon doesn't exist - the eliminativist position - or something along those lines.
If you do believe consciousness exists, then you're simply arguing with the way Nagel is choosing to characterize it. I asked how you would describe it, but you haven't tried to address that.
The Oxford Living Dictionary defines consciousness as "[t]he state of being aware of and responsive to one's surroundings", "[a] person's awareness or perception of something", and "[t]he fact of awareness by the mind of itself and the world".
Characterizing that distinction is surprisingly tricky. "What is it like to be..." is one way to do that. David Chalmers' article about "the hard problem of consciousness" is another: https://consc.net/papers/facing.pdf
I worked on similar topics, I publish on a "personal" subreddit.
https://www.reddit.com/r/VisargaPersonal/
Interesting. I would have said that something like that is the definition of reductionism.
>Consciousness doesn't need to be explained in terms of objective facts
If there's one good thing that analytic philosophy achieved, it was spending the better part of the 20th century beating back various forms of dualism and ghosts in the machine. You'd have to be something other than a naturalist traditionally conceived to treat "consciousness" as ontologically basic.
Sensory deprived, paralyzed, or comatose individuals can be conscious but have no means to experience the outside world, and depending on their level of brain activity, they might not even have an "inner world" or mind's eye experience.
Anything that is able to be measured is able to experience. A subject like an apple "experiences" gravity when it falls from a tree. Things that do not interact with the physical world lack experience, and the closest things to those are WIMPs (weakly interacting massive particles). Truly non-interacting particles (NIP) are presumed to be immeasurable.
So there you have it. The conundrum that consciousness can lack experience and unconsciousness can have experience. A more interesting question in my opinion: what is a soul?
I was quite liking this explanation but you lost me here. I very strongly agree with your opening, and I think it's the key to everything. I think everyone insisting on a categorical divide runs into impossible problems.
And a good explanation of consciousness has to take the hard problem seriously, but doesn't have to agree that subjective and objective, or first person and third person or whatever you want to call them, are irreducibly distinct categories. But I think it makes more sense to say that some subset of all of the objective stuff out there is simultaneously subjective, rather than saying that all stuff at all times is both objective and subjective. I don't think an apple experiences gravity the way a mind experiences a conscious state, but I do think the through line of understanding them both as importantly physical in the same sense is key to tying physical reality to explanation of conscious states.
Also, if there is a soul, then how can we be confident consciousness arises from physical means? If there is a soul, it is the perfect means to differentiate consciousness and p-zombies.
My thinking is that if souls exist, then we can't call consciousness a purely physical process yet
If they don't have an "inner world"/"mind's eye" and are sensory deprived, in which sense can they be considered conscious? What is your definition here?
How can an apple "experience" gravity? I think you're overloading the term "experience" to mean two very different things, which happen (in some languages like English) to share the same word. You could say gravity "happens" to an apple, and then there's no confusion with subjective experiences.
What is it like to be a bat? (1974) [pdf] - https://news.ycombinator.com/item?id=35771587 - May 2023 (117 comments)
What Is It Like to Be a Bat? (1974) [pdf] - https://news.ycombinator.com/item?id=13998867 - March 2017 (95 comments)
A browser game inspired by Thomas Nagle's Essay “What is it like to be a bat?” - https://news.ycombinator.com/item?id=8622829 - Nov 2014 (3 comments)
I bet if we could communicate with crows, we might be able to make some progress. They seem cleverer.
Although, I’m not sure I could answer the question for “a human.”
(More Daniel Dennett)
I don't understand why Wittgenstein wasn't more forcefully challenged on this. There's something to the principle as a linguistic principle, but it just feels overextended into a foundational assumption that their experiences are fundamentally unlike ours.
Here's Billy the bat perceiving, in his special sonar sort of way, that the flying thing swooping down toward him was not his cousin Bob, but an eagle, with pinfeathers spread and talons poised for the kill!
He then points out that this story is amenable to criticism. We know that the sonar has limited range, so Billy is not perceiving this eagle until the last minute, at the earliest; we could set up experiments to find out whether bats track their kin or not; the sonar has a resolution, and if we find out the resolution we know whether Billy might be perceiving the pinfeathers. He also mentions that bats have a filter, a muscle, that excludes their own squeaks when they pick up sonar echoes, so we know they aren't hearing their own squeaks directly. So, we can establish lots about what it could be like to be a bat, if it's like anything. Or at least what it isn't like.
Nagel's paper covers a lot of ground, but none of what you described has any bearing on the point about "what it's like" as a way to identify conscious experience as distinct from, say, the life of a rock. (Assuming one isn't a panpsychist who believes that rocks possess consciousness.)
Basically, to know what it is like to be a bat, you need to have evolved as a bat.
His theory that our perception is a hallucination generated by a prediction algorithm that uses sensory input to update and correct the hallucination is very interesting.
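For concreteness, here's a minimal sketch of that update-and-correct loop (toy numbers and a made-up learning rate, not anyone's actual model): the running estimate, the "hallucination", is nudged toward each new sensory sample by a fraction of the prediction error.

    -- Toy predictive loop: the internal estimate ("hallucination") is corrected
    -- toward each sensory sample by a fraction of the prediction error.
    correct :: Double -> Double -> Double
    correct estimate sample = estimate + 0.3 * (sample - estimate)

    main :: IO ()
    main = mapM_ print (scanl correct 0.0 [1.0, 1.2, 0.9, 1.1])

The interesting part of the real theory is everything this leaves out, of course: where the predictions come from and why the corrected estimate feels like anything at all.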
There is only is and its content. That's it. The easiest way to see or get a sense of this is to replace any "I am ..." with "There is a ...". For example, instead of "I am thinking of using a stable sort", replace it with "This person has a thought of using a stable sort".
This is much closer to the actual reality underneath. Even attachment itself can be put in these terms: "There's a feeling that this person owns this" or "There's a sense of I".
After doing this (perhaps this is mental illness), I already see glimpses of the sense that everything is everything at the same time. As there is no real difference between this rock and the other rock behind the mountain that I can't see, there should be no difference between my thoughts, senses, feelings, emotions, etc. and those of other people. Now your sense of self captures the entirety of the universe. If you die, the universe dies for all you know. I think this is what the ancient books have been talking about by rising and being a God.
> As there is no real difference between this rock and the other rock behind the mountain that I can't see.
There is a real difference between the two; there must be because they're in different places. Monism requires you to deny actually existing differences by saying they're not "real".
> There should be no difference between my thoughts, senses, feelings, emotions, etc. and those of other people.
This is what in therapyspeak you'd call "not having boundaries". You aren't the same thing as other people; you can tell because the other people don't think that, won't let you borrow their car, etc. It opens them or yourself up to abuse if you think this way.
That is according to our human perception. For example, a single, uniform 4D object could have a projection in 3D that appears as two distinct 3D objects. I am not claiming that a fourth spatial dimension exists, only that we cannot possibly know what exists.
what's it like to be a human?
"There's no "whats its like to be a human". Because that invokes a sense of a "soul" or "spirit" or "self" being transferred from one being to another." -- anon-3988
it does?
"what does it feel like to be blind from birth?" can you, a sighted person near-sighted though you may be for this example, even/ever comprehend it no matter how extensively described. can someone who has never seen actually describe it to you?
I am saying that it is not possible. It is entirely possible that you can "see" but not comprehend anything, hence effectively being blind. Is my red your red? Is my hotness your hotness? Is the universe upside down? Is your 3d the same as my 3d? Even all of these imaginings and hypotheses come purely from my own sense of experience.
I don't even know that you exist, you might simply be a figment of reality, there could be nothing behind this post. I wouldn't know.
I suppose it's because people associate so much of who they are to the subjectivity of their experience. If I'm not the only one to see and taste the world as I do, am I even special? (The answer is no, and that there are more important things in life than being special.)
>"I" here implies a center of thinking. There is no center.
"I think", according to you, implies that I implies a center of thinking, and you don't believe that there is a center, so you don't believe "I think" even more than you don't believe "therefore I am". You don't have an opinion about therefore I am.
It doesn't matter about the "existence" in the predicate, because you don't accept the "I" in the subject.
It depends on your definition of "dualism". If you define it as "having a soul that was created by a higher being", then yes, they are mutually exclusive.
On the other hand, one can also define dualism as being purely evolutionary. David Chalmers [1], an Australian philosopher and cognitive scientist, has some interesting ideas around how dualistic consciousness may relate to quantum mechanics.
[1] https://en.wikipedia.org/wiki/David_Chalmers
There are far fewer of the latter than the former
But you would never know exactly what it feels like to be a bat without removing your human-level experience from the picture
But it makes me think of this article:
https://www.grandin.com/references/thinking.animals.html
which is a more concrete(?) dive into being an animal?
Bringing it back to bats, a failure to imagine what it's like to be a bat is just indicative that the overlaps between human and bat modalities don’t admit a coherent gluing that humans can inhabit phenomenally.
There's something more to it than this.
For one thing there's a threshold of awareness. Your mind is constantly doing things and having thoughts that don't arrive to the threshold of awareness. You can observe more of this stuff if you meditate and less of this stuff if you constantly distract yourself. But consciousness IMO should have the idea of a threshold baked in.
For another, the brain will unify things that don't make sense. I assume you mean something like consciousness is what happens when there aren't obstructions to stitching sensory data together. But the brain does a lot of work interpreting incoherent data as best it can. It doesn't have to limit itself to coherent data.
> It doesn't have to limit itself to coherent data.
There are specific failure cases for non-integrability:
1. Dissociation/derealization = partial failures of gluing.
2. Nausea = inconsistent overlaps (ie: large cocycles) interpreted as bodily threat.
3. Anesthesia = disabling of the sheaf functor: no global section possible.
At least for me it provides a consistent working model for hallucinogenic experiences, synesthesia, phantom limb phenomena, and split-brain scenarios. If anything, the ways in which sensory integration fails are more interesting than when it succeeds.
The way I look at it is that the sensors provide data as activations and awareness is some output with a thresholding or activation function.
Sense making and consciousness in my mental model is something that happens after the fact and it tries to happen even with nonsense data. As opposed to -- as I was reading you to be leaning toward -- being the consequence of sensory data being in a sufficiently nice relationship with each other.
If I've understood you correctly, I'll suggest that simple sensory intersection is way way not enough: the processing hardware and software are material to what it is like to be someone.
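For concreteness, a minimal sketch of the activations-plus-threshold picture I have in mind (made-up names and numbers, purely illustrative):

    -- Toy model: each sense reports an activation level; "awareness" is just
    -- whatever clears the threshold, whether or not the data make sense together.
    type Activation = Double

    threshold :: Activation
    threshold = 0.6

    reachesAwareness :: [(String, Activation)] -> [String]
    reachesAwareness senses = [name | (name, level) <- senses, level >= threshold]

    main :: IO ()
    main = print (reachesAwareness [("sonar", 0.9), ("smell", 0.2), ("touch", 0.7)])

The sense-making step would then run over whatever clears the bar, coherent or not, which is the order of operations I was gesturing at.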
"It is all that we know, and so we easily mistake it for all there is to know. As a result, we tend "to frame animals' lives in terms of our senses rather than theirs."
That's just a question begging assertion, and there's plenty of empirical knowledge of necessary physical conditions for consciousness as well as predictable physical influences on conscious states. Whether consciousness is "measurable" is part of what's at issue and can't just be definitionally presupposed.
Struggling to make sense of this sentence.
Or in a simpler way, consciousness is present just in case being that organism has an inner, subjective character - something that can not be reduced to a purely material state.
How is it at all related to, let's say, programming?
Well, for example learning vim-navigation or Lisp or a language with an advanced type system (e.g. Haskell) can be umwelt-transformative.
Vim changes how you perceive text as a structured, navigable space. Lisp reveals code-as-data and makes you see programs as transformable structures. Haskell's type system creates new categories of thought about correctness, composition, and effects.
These aren't just new skills - they're new sensory-cognitive modalities. You literally cannot "unsee" monadic patterns or homoiconicity once internalized. They become part of your computational umwelt, shaping what problems you notice, what solutions seem natural, and even how you conceptualize everyday processes outside programming.
It's similar to how learning music theory changes how you hear songs, or how learning a tonal language might affect how you perceive pitch. The tools become part of your extended cognition, restructuring your problem-space perception.
When a Lisper says "code is data" they're not just stating a fact - they're describing a lived perceptual reality where parentheses dissolve into tree structures and programs become sculptable material. When a Haskeller mentions "following the types" they're describing an actual sensory-like experience of being guided through problem space by type constraints.
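To make that concrete, here's a minimal Haskell sketch (toy names, nothing from anyone's actual codebase) of what "following the types" and the monadic pattern look like in the small:

    -- The types below are narrow enough that "following the types" leaves
    -- little choice about the implementation.
    safeHead :: [a] -> Maybe a
    safeHead []      = Nothing
    safeHead (x : _) = Just x

    lookupHabitat :: String -> Maybe String
    lookupHabitat animal = lookup animal [("bat", "cave"), ("otter", "river")]

    -- The monadic pattern: chaining possibly-failing steps reads as one shape
    -- instead of nested case expressions.
    firstHabitat :: [String] -> Maybe String
    firstHabitat animals = safeHead animals >>= lookupHabitat

    main :: IO ()
    main = print (firstHabitat ["bat", "owl"])  -- Just "cave"

Once that >>= shape is internalized you stop seeing two lookups and start seeing one pipeline, which is the perceptual shift being described.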
This creates a profound pedagogical challenge: you can explain the mechanics of monads endlessly, but until someone has that "aha" moment where they start thinking monadically, they don't really get it. It's like trying to explain color to someone who's never seen, or echolocation to someone without that sense. That's why someone who's never made a truthful and heartfelt attempt to understand Lisp often never gets it.
The umwelt shift is precisely what makes these tools powerful - they're not just different syntax but different ways of being-in-computational-world. And like the bat's echolocation, once you're inside that experiential framework, it seems impossible that others can't "hear" the elegant shape of a well-typed program.
There are other umwelt-transforming examples, like: debugging with time-travel/reversible debuggers, using pure concatenative languages, logic programming - Datalog/Prolog, array programming, constraint solvers - SAT/SMT, etc.
The point I'm trying to make - don't try to "understand" the cons and pros of being a bat, try to "be a bat", that would allow you to see the world differently.
Indeed, basic vim-navigation (hjkl, w, b) is muscle memory.
But, I'd argue the umwelt shift comes from vim's modal nature and its language of text objects. You start perceiving text as having an inherent grammar - "inside parentheses", "around word", "until comma." Text gains topology and structure that was invisible before.
The transformative part isn't the keystrokes but learning to think "delete inside quotes" (di") or "change around paragraph" (cap). You see text as composable objects with boundaries, not just streams of characters. This may even persist when you're reading on paper.
That mental model often transforms your keyboard workflow not just in your editor - but your WM, terminal, web browser, etc.
Everybody wants to be a bat / 'Cause no one but a bat really knows where it's at
Classic Hofstadter, he introduces a concept called a “Be-Able Thing” (BAT for short)
I think we would call this "without ego" and not "without consciousness". I think it's totally possible to be conscious without ego. And perhaps bats do have an ego however small - some may be more greedy than others, etc.
If bats have no subjective experience, it's ethical to do anything to them; but if there is one, then they deserve (as do all animals) to be treated as ethically as we can manage
IMO considering bats to be similar to mice - we've studied mice and rats extensively, and while we cannot know precisely, we can be pretty sure there is subjective (felt) experience there; almost all our scientific experiments and field data with so-called 'lower' organisms show evidence of pain, suffering, desires, play, etc. - all critical evidence of subjectivity
Now I don’t think bats are meta-conscious (meta cognitive) because they can’t commiserate on their experiences or worry about death etc like humans can but they feel stuff - and we must respect that
Anyway, if there is no mind in the sense of a personal identity or a reflective thought process, then really you're just torturing and killing a set of sense perceptions, so what would be the basis of a morality that forbids that?
I don't think "mind" is limited to those two things, and I think it may be on a continuum rather than binary, and they may also be integrally related to the having of other senses.
I also think they probably do have some non trivial degree of mind even in the strong sense, and that mental states that aren't immediately tied to self reflection are independently valuable because even mere "sense perceptions" include valenced states (pain, comfort) that traditionally tend to fall within the scope of moral consideration. I also think their stake in future modes of being over their long term evolutionary trajectory is a morally significant interest they have.
If there is no sense of self or personal identity, how is that different than a block of wood or a computer? That there might be "mental" functions performed doesn't give it subjectivity if there is no subject performing them. And if there is no persistent reflective self there is no subject. You could call instincts or trained behaviors mental, activities of a kind of mind if you wanted to. But if it's not self aware it's not a moral subject.
Exhibit a
> Nagel begins by assuming that "conscious experience is a widespread phenomenon" present in many animals (particularly mammals), even though it is "difficult to say [...] what provides evidence of it".
Physicalism is an ontological assertion that is almost certainly true, and is adhered to by nearly all scientists and most philosophers of mind. Solipsism is an ontological assertion that could only possibly be true for one person, and is generally dismissed. They are at opposite ends of the plausibility scale.
It's like describing the inside of a house in very great detail, and then using this to argue that there's nothing outside the house. The method is explicitly limiting its scope to the inside of the house, so can say nothing about what's outside, for or against. Same with physicalism: most arguments in its favor limit their method to looking at the physical, so in practice say nothing about whether this is all there is.
> Same with physicalism: most arguments in its favor limit their method to looking at the physical, so in practice say nothing about whether this is all there is.
This is simply wrong ... there are very strong arguments that, when we're looking at mental events, we are looking at the physical. To say that arguments for physicalism are limited to looking at the physical is a circular argument that presupposes that physicalism is wrong. The arguments for physicalism absolutely are not based on looking at a limited set of things, they are logical arguments that there's no way to escape being physical ... certainly Descartes' dualism is long dead due to the interaction problem -- mental states must be physical in order to be acted upon or act upon the physical. The alternatives are ad hoc nonsense like Chalmers' "bridging laws" that posit that there's a mental world that is kept in tight sync with the physical world by these "bridging laws" that have no description or explanation or reason to believe exist.
Oh this is undoubtedly true, and my argument was limited to the statement that the most common argument for physicalism is invalid. I was not launching an attack on physicalism itself.
> No metaphysical stance can be proved.
That's an interesting metaphysical stance, but again, I'm not trying to prove any metaphysics, just pointing out the main weakness that I see in the physicalist argument. I'm pointing out that any pro-physicalist argument that is a variant of "neuroscience says X" is invalid for the reason I gave: by limiting your scope to S, you can say nothing about anything outside S. This is true regardless of whether there is actually anything outside S, so there is no assumption in my argument that physicalism is wrong.
One argument against physicalism is that if thought or knowledge can be reduced to particles bouncing around, then there is no thought or knowledge. My knowledge that 2+2=4 is about something other than, or different from, the particles in my brain. Knowledge is about the content of the mind, which is different from the associated physical state of the brain. If content is neurons, then content as something my mind considers doesn't exist. If my thought "2+2=4" just is a bunch of particles in my brain doing stuff, then my belief that my thought is true is not even wrong, as the saying goes: just absurd.
I'm no Cartesian dualist though -- the interaction problem is just one problem with his dualism. I think Aristotle and Aquinas basically got the picture of reality right, and their metaphysics can shed yuuuuge amounts of light on the mind-body problem but obviously that's a grossly unfashionable worldview these days :-)
You attacked physicalism for not being proven.
I disagree with your arguments and I think they are hopelessly confused. Since our views are conceptually incommensurate, there's no point in continuing.
The physicalist position wants to reduce the mental to the physical. My thought cannot be reduced from the mental to the physical, because my thought is about a tiger, and a tiger cannot be reduced to a brain state.
If physicalism is true, I can't really be thinking about a tiger, because the tiger in my thought has no physical existence-as-a-tiger, and therefore can't have any existence-as-a-tiger at all. But then I'm not really thinking about a tiger. And the same applies to all our thoughts: physicalism would imply that all our thoughts are delusional, and not about reality at all. A non-physicalist view allows my thought to be actually about a tiger, without that tiger-thought having physical existence.
(Note that I have no problem with the view that the mental and the physical co-incide, or have some kind of causal relationship -- this is obviously true -- only with the view that the mental is reducible to the physical.)
I don't believe any of that to be true, but I think that's kind of the point of that argument. I do think we start from that Cartesian starting place, but once we know enough about the external world to know that we're a part of it, and can explain our mind in terms of it, it effectively shifts the foundation, so that our mental states are grounded in empirical reality rather than the other way around.
and
https://faculty.philosophy.umd.edu/pcarruthers/NoM%20-%205.p...
Nonsense.
> First, ontological assertions need to reflect reality.
You're getting ahead of yourself to imply that somehow physicalism does not reflect reality, or that an assertion has to be proven to reflect reality before being made.
> That is, they need to be true or false
No, that's not what reflecting reality means. Of course ontological assertions are true or false, if they aren't incoherent, but that's neither here nor there.
> and many philosophers, including prominent scientists, don't think they qualify.
What's this "they" that don't qualify? The subject was physicalism, and again almost all scientists and most philosophers of mind subscribe to it ... which leaves room for some not doing so. Whether or not the outliers are "prominent" is irrelevant.
> Indeed, the arguments against ontological realism are more airtight than any particular metaphysical theory.
That's a much stronger claim than that physicalism is wrong ... many dualists are ontological realists. And it's certainly convenient to claim that there are airtight arguments for one's views, and easy to dismiss the claim.
And while you're at it, as plausible as any metaphysical theory, insofar as you're still doing metaphysics.
I think we've made extraordinary progress on things like brain to machine interfaces, and demonstrating that something much like human thought can be approximated according to computational principles.
I do think some sort of theoretical bedrock is necessary to explain the "something it is like to be" quality, but I think it would be obtuse to brush aside the rather extraordinary infiltrations into the black box of consciousness that we've made thus far, even if it's all been knowing more about it from the outside. There's a real problem that remains unpenetrated but, as has been noted elsewhere in this thread, it is a nebulous concept, and perhaps one of the most difficult and important research questions, and I think nothing other than ordinary humility is necessary to explain the limits and extent to which we understand it thus far.
Against Mind-Blindness: recognizing and communicating with diverse intelligences - by Michael Levin
https://www.youtube.com/watch?v=OD5TOsPZIQY
If one drops the assumption that physical reality is nothing more than a bunch of particles, the mind stops being so utterly weird and unique, and the mind-body problem is more tractable. Pre-17th century, philosophers weren't so troubled by it.
Why can't it?
Another is that the propositions "the thought 2+2=4 is correct" and "the thought 2+2=5 is wrong" can only be true with regard to the content of a thought. If thought can be reduced to neurons firing, then describing a thought as correct or wrong is absurd. Since this is not the case, it must be impossible to reduce thought to neurons firing.
(Btw, the first paragraph of my previous comment is not my position. I am giving a three-sentence summary of Descartes' contribution to the mind-body problem.)
I promise I'm not being dense or rhetorical, I truly don't understand that line of thought.
It seems to me like begging the question, almost like saying "experience cannot be this, because it'd be absurd, because it cannot be this."
It is wrong to claim that brain states (neurons firing) are the same as mental states (thoughts). There are several reasons for this. One is that reducing thoughts to brain states means a thought cannot be correct or incorrect. For example, one series of mental states leads to the thought "2+2=4"; another series leads to the thought "2+2=5". The correctness of the former and the wrongness of the latter refers only to the thought's content, not the physical brain state. If thoughts are nothing more than brain states, it's meaningless to say that one thought is correct -- that is to say, it's a thought that conforms to reality -- and that the other is incorrect. A particular state of neurons and chemicals cannot per se be correct or incorrect. If one thought is right (about reality) and another thought is wrong (not about reality), then there must be aspects of thought that are distinct from the physical state of the brain.
If it's meaningless to say that one thought is correct and another is incorrect, then of course nothing we think or say has any connection to reality. Hence the existence of this disagreement, along with the belief that one of us is right and the other wrong, presupposes that the physicalist position is wrong.
I agree with this: the physical configuration of neurons, their firings, the atoms that make them, etc, cannot be "right" or "wrong". This wouldn't make sense in reality; it either is or isn't, and "right" or "wrong" are human values. The universe is neither right nor wrong, it just is.
What about the thoughts those neuron firings mean to us? Well, a good argument can be made that they are also not "right" or "wrong" in isolation, they are just phenomena. Trivially, a thought of "2+2=4" is neither right nor wrong, it's only other thoughts that consider it "right" or "wrong" (often with additional context). So the values themselves can be a physical manifestation.
So it seems to me your problem can be resolved like this: in response to a physical configuration we call a "thought", other "thoughts" can be formed in physical configurations we call "right" or "wrong".
The qualities of "right" or "wrong" only exist as physical configurations in the minds of humans.
And voila! There's no incompatibility between the physical world and thoughts, emotions, "right" or "wrong".
Isn’t this just the same as saying an organism is conscious if it perceives? If it is aware of input from one or more senses (and I’m not limiting that to the five human senses)?
The short version is that if we can approximate the sensory experience and the motor experience of an organism, and we can successively refine that approximation as measured by similarity in behavior between bat and man-bat, then I would argue that we can in fact imagine what it is like to be a bat.
In short, it is a Chinese Bat Room argument. If you put a human controlling a robot bat and a bat in two boxes and then ask someone to determine which is the human and which is the bat, when science can no longer tell the difference (because we have refined the human/bat interface sufficiently) you can ask the human controlling the robot bat to write down their experience and it would be strikingly similar to what the bat would say if we could teach it English.
The bat case is actually easier than one might suppose (similarly, say, a jumping spider), because we can translate their sensory inputs to our nervous system, and if we tune our reward system and motor system so that we can get even an approximate set of inputs and a similar set of actuators, then we can experience what it is like to be a bat.
Further, if I improve the fidelity of the experimental man-bat simulation rig, the experience will likewise converge. While we will not be able to truly be a bat, since that is asymptotically mutually exclusive with our biology, the fact that we can build systems that allow progressive approach to bat sensory-motor experience means that we actually do have the ability to imagine the experience of other beings. That is, our experiences are converging and differ only due to our lack of the technical ability to overcome the limitations of our biological differences.
The harder case is when we literally don't have the molecule that is used to detect something, as in the tetrachromat case. That said, one of my friends has always wanted to find a way to do an experiment where a trichromat can somehow have the new photoreceptor expressed in one eye and see what happens.
The general argument about why we would expect something similar to happen should the technical hurdles be overcome is because basically all nervous systems wire themselves up by learning. Therefore, as long as the input and output ranges can be mapped to something that a human can learn, then a human nervous system should likewise converge to be able to sense and produce those inputs and outputs (modulo certain critical periods in neural development, though even those can be overcome, e.g. language acquisition by slowing down speech for adults).
Some technical hurdle examples. Converting a trichromat into a tetrachromat by crispering someone's left eye. Learning dolphin by slowing down dolphin speech in time while also providing a way for humans to produce dolphin high-frequency speech via some transform on the human orofacial vocal system. There are limitations when we can't literally dilate time, but I suppose if we are going all the way, we can accelerate the human to the fraction of the speed of light that will compensate for the fact that the human motor system can't quite operate fast enough to allow a rapid-fire conversation with a dolphin.
Aside from that, breathing fresh air in the morning is an activity, not a "quality of subjective experience". Generally the language people use around this is extremely confused and unhelpful.
And no, that's not what a non sequitur is. And no, coherence is not just a linguistic idea. Then you try to explain what I "really mean" by "quality of subjective experience," and you can't even give a good faith reading of that. I'm really trying here.
There's nothing incoherent here, they're just talking about subjective states of experience.
What makes me me? Whatever you identify as "yourself", how come it lives within your body? Why is there not someone else living inside your body? Why was I born, specifically "me", and not someone else?
This has puzzled me since childhood.
Not at all. I was shocked when I noticed how few people have asked themselves this question. In fact, it is impossible to even explain this question to the majority of people. Most people confuse the question with "what makes us intelligent", missing the whole "first person perspective" aspect of it.
I guess evolution tries to stop us from asking question that might lead to nihilism.
If that's not the case then I'll just have no subjective experience, same as before I was born/instantiated.
Disappointed when I went somewhere and there wasn't any tea,
Enthralled by a story about someone guarding a mystical treasure alone in a remote museum on a dark and stormy night,
Sympathetic toward a hardworking guy nobody likes, but also aggravated by his bossiness to the point of swearing at him,
Confused due to waking up at 7 pm and not being sure how it happened.
You probably don't entirely understand any of those. What is it to entirely understand something? But you probably get the idea in each case.
https://news.ycombinator.com/item?id=45118703
https://news.ycombinator.com/item?id=45115367
https://news.ycombinator.com/item?id=45111401