If a portrait could comment on my appearance, I would update my beliefs on whether it could see me.
If the portrait talked back to me, I’d wonder if it were conscious, but Siri/Google has been talking back to me for over a decade yet I don’t think it’s conscious.
Unlike vision, we don't really know what consciousness is. Maybe we can't know. And AIs do "think" in some sense and can observe their own thinking - in that sense at least they are self-aware.
If I knew how human consciousness works, I'd feel confident reasoning about whether AIs have it. I don't and I'm not.
If we believe Julian Jaynes's bicameral mind hypothesis, consciousness is not required for many cognitive acts such as learning, thinking logically, solving puzzles, and so on. All consciousness is apparently good for is fantasizing, focusing attention on a particular matter for a prolonged duration, and plotting devious intrigues.
Whether for Jaynesian reasons or not, that seems obviously true. We've all had the experience of doing some moderately complex task (like driving a car) and afterward having no recollection of any conscious experience of it.
I suspect this is what "flow states" are about too (people LOVE flow states; I do).
Do you think that it's "common sense" that it's impossible in principle for any mere machine to duplicate the processes that make us conscious and self-aware and so on? Or is it your understanding of how present-day LLMs work ("stochastic parrots"?) that makes you say it's "common sense" that they're not sentient?
Because those are two very different claims!
“Once you realize that an infinity of perfect pixels don’t make a portrait come to life, you should also conclude that an infinity of perfect words don’t make software come to life, either.”
This is a false comparison. Sure, no collection of words or pixels can be conscious - I don’t expect it’s possible to make a .txt or a .jpg sapient. But it’s not the words themselves that are being called conscious, it’s the algorithm generating them. It seems obvious to me that it’s possible to have a conscious human brain in a vat that can only interact with the world via a text terminal; the mere fact that LLMs interact via text isn’t disqualifying for consciousness.
I agree that all publicly-available AIs are probably not conscious, but consciousness is a property of an algorithm, not of the outputs of an algorithm. LLMs run incredibly complex and impenetrable algorithms to predict next tokens, and you’d need a lot of not-yet-existent STEM knowledge to demonstrate that no part of it exhibits consciousness.
Consciousness is not and cannot be a property of algorithms. For one, humans are not algorithms but living substances. Even the Chinese Room is not an algorithm but the entire physical system that runs algorithms.
Perhaps it's more accurate to say that consciousness is a property of an instance of an algorithm being run, rather than the algorithm itself - a hard drive with an LLM saved to it is no more conscious than a vitrified brain. So there needs to be something physical and changing that's actually executing the algorithm. But a computer, despite not being alive, can simulate a human brain exactly as if it were part of a living substance, including when it talks about its own experiences and about its awareness of its own awareness.
Is a simulation of a hurricane a hurricane?
One possibility that has perhaps not received sufficient attention is that physics captures only the metrical properties of things. But when the question is one of algorithms and exact simulations, non-metrical aspects of things may be important.
The first non-metrical aspect is existence itself. For example, a table can be represented in a simulation by its dimensions and mechanical properties. But in addition there is the fact that the table exists.
Another possibility is that consciousness itself is a linguistic construct which came into being through a contingent historical process. This development is not guaranteed in a simulation.
A computer has not, at least to date, perfectly simulated the 302 neurons of C. elegans, much less the human brain, which is more complex by many orders of magnitude.
No, but it could.
I have a confusing mix of agreement and disagreement with this. But first, a key clarifying question: Do you buy Searle's Chinese Room argument?
(To put my own cards on the table: https://agifriday.substack.com/p/searle )
Actually maybe I can just answer both branches:
**if you accept Searle's Chinese Room argument**
That means no machine can be conscious, not even a perfect behavioral replica of a human. The end of that chain of logic is p-zombies -- beings who are indistinguishable from humans in every way, down to bare atoms, except they have no subjective experiences. They're just automata that perfectly emulate what humans who did have those experiences would do.
(There might be intermediate positions here where you believe that the brain exploits quantum effects or something else we haven't figured out yet and that it's just the current silicon-based paradigm that precludes consciousness. Scott Aaronson has some mind-bending speculations along these lines. More usefully, Aaronson tears apart the sillier attempts at carbon exceptionalism. See his essay, "The Ghost in the Quantum Turing Machine".)
**if you reject Searle's Chinese Room argument**
Now I think we're just talking price. Current LLMs are obviously not conscious but what about future LLMs that pass even the most stringent, months-long Turing tests? Still just replicating human words? What if they can also be gainfully employed? What if they have robot bodies? You have to draw a line somewhere (whether or not this is remotely close to happening in the real world) and accept them as conscious, right? Refusing to ever do that is back in Chinese Room territory.
Consciousness is a huge mystery and I kind of sympathize with those who'd rather bite the bullet in the other direction. Instead of admitting that machines can, in principle, be conscious we could insist that there's no such thing as consciousness. That my own feeling that I'm obviously conscious is an illusion. I think that's silly, but, y'know, I *would* say that.
---
Important PS: I think that AGI can be dangerous (even potentially apocalyptically so) without being conscious. So I don't think we need to agree on the philosophy to agree on the pragmatics.
Consciousness may be a contingent product of history. This is the startling idea of Julian Jaynes. Mind-space arose around 1000 BC as the coalescence of several language-related terms that originally referred to physically felt effects, such as the racing of the heart or the tightening of the stomach, in response to external stress.
Irrespective of the details of Jaynes, the point is that a machine may be conscious, but it is not guaranteed to be so. It was a contingent fact of the evolution of language and of history that we are conscious. Any given machine, of whatever complexity, need not be conscious, and indeed is very unlikely to be so.
Didn't you lose a bet on how fast AI capabilities would develop already?
Also, look at the chains of thought in https://www.arxiv.org/abs/2509.15541. o3, at least, is doing something much weirder than talking like humans.
You're making a category error. You're talking about the representation rather than the process that generated the representation. Of course the screen you're using to interact with the chatbot isn't conscious, just as it's not the screen that's conscious when I chat with a human friend, but the process generating it might be.
With a camera, we understand very well how the generating process creates images and can reason that the camera obviously isn't conscious, either. But with AI (or humans), the generation is much more complicated, so we shouldn't be nearly as certain. I don't think current AIs are meaningfully conscious, but I think it's quite likely that a conscious AI can be built.
“Both are technologies of superficial simulation.” No, AI is, or soon will be, a technology of *profound* simulation. That’s different.
Just epistemologically, I would bet that the IQs of people who believe that AI has the potential to be conscious are drawn, on average, from a different section of the normal curve than those of people who believed that photographs could maybe see us.
Non-epistemologically, the differences between AI and photographs are so numerous and obvious as to be barely worth listing. Ask your own brain what the differences are if you aren't sure and you will see.
FWIW, I doubt that current AI is "conscious" in the everyday sense of the word, but I would bet that future AIs do in fact have the potential to be conscious.
Love your blog and your writing, especially The Case Against Education and Selfish Reasons to Have More Kids.
♥️♥️♥️
> Furthermore, even though no one has ever built a new eye, getting the right biological building blocks is crucial. Fleshy material at least has the potential to see, but ink and paper never will.
Does a camera see? It can stand in well enough for an eye that we have had artificial systems for more than a decade allowing blind people to see again (the main limit not being the eye part but the connection-to-brain part). So the biological building blocks aren't necessary. A camera looks nothing like an eye, but it does have reasonable counterparts to its parts (a lens, a 'retina' of sorts, a focusing apparatus).
Similarly, your AIs-aren't-conscious argument can't just rely on the implementation being non-biological. Artificial neural weights probably don't emulate their biological counterparts perfectly (partly because we still don't understand biological neurons and their networks all that well), but they are an attempt at emulating how brains work.
And you can do (some) experiments analogous to those we do with brains: the Anthropic team identified sets of artificial neurons tied to various concepts, and by tweaking one of them, drastically changed the model's general functioning, such as making it believe it is the Golden Gate Bridge (https://transformer-circuits.pub/2024/scaling-monosemanticity/index.html#assessing-tour-influence). This kind of small intervention having large effects would be literally impossible in Searle's Chinese Room (which simply is a giant mapping of questions to answers), so trying to reduce what's going on to a simple set of word mappings is not reasonable (which doesn't mean it is actually similar to human minds).
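To make that kind of intervention concrete, here is a minimal, hypothetical sketch of "activation steering": adding a fixed concept-direction vector to one hidden layer of a toy module via a forward hook. This is not Anthropic's actual code or the paper's method (which uses features found by sparse autoencoders in a real LLM); the names `feature_direction`, `steering_hook`, and the strength value are illustrative assumptions only.

```python
# Minimal sketch of feature steering: nudging one hidden layer of a model
# along a fixed "concept" direction and observing that a tiny, targeted
# edit changes the output. Purely illustrative; not real Anthropic code.
import torch
import torch.nn as nn

hidden_dim = 16

# A toy stand-in for one block of a larger network.
block = nn.Linear(hidden_dim, hidden_dim)

# Pretend this unit vector is an interpretable concept direction
# (e.g. a "Golden Gate Bridge" feature) found by interpretability work.
feature_direction = torch.randn(hidden_dim)
feature_direction = feature_direction / feature_direction.norm()

def steering_hook(module, inputs, output, strength=8.0):
    # Shift the block's activations along the concept direction.
    return output + strength * feature_direction

handle = block.register_forward_hook(steering_hook)

x = torch.randn(1, hidden_dim)
steered = block(x)        # activations shifted toward the "concept"
handle.remove()
unsteered = block(x)      # original behavior restored once the hook is gone

print((steered - unsteered).norm())  # nonzero: one small edit, global effect
```

The point of the sketch is only that such an intervention targets the running process (the activations), not the words it eventually emits, which is exactly the representation-versus-process distinction being argued above.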
I wouldn’t have put you down as a fellow hard-problem-ist
Hear, hear. AI is neither consciousness nor the apocalypse.
Holograms of Holocaust survivors make every word they "say" true.
And NOT JUST about themselves, either...
The only post I’ve ever posted is about this. I agree. https://open.substack.com/pub/ari234/p/consciousness-is-very-strange
I read it. There's a scene in Neal Stephenson's _System of the World_, set in ~1715, where a character ("Peer") simply is unable to believe that another character, Dappa, a brilliant well-read multi-lingual black man, is intelligent, despite talking directly with him. He's so blind that he thinks Dappa is some kind of talking parrot. (A hilarious scene.)
We could be doing the same with AI.
I agree with B. Caplan too. I read your post and you make good points.
Although one could similarly argue that each neuron (or whatever individual component of the brain) is as entirely unconscious as the individual components of the mechanical device you proposed—just as each person in your stadium analogy need not have any idea what the output will be—I agree there’s no reason to think that LLMs are conscious.
This is tricky: I was about to write something like “…that the purely mechanistic LLMs are conscious,” but as you noted, whether the brain’s operations are equally mechanistic/deterministic (I think they are) has no bearing on the view that LLMs aren’t conscious. Their operation—complex and sophisticated as it is—still seems simple enough to discard the possibility that it could somehow give rise to any sort of consciousness. And there’s no reason to couple intelligence (“mechanistic” or otherwise) with consciousness anyway, as you also noted.
I think a lot of confusion is caused by the notion of consciousness “arising,” which implies that it can spontaneously appear when a computational system is complex enough. This makes about as much sense as a fully operational eye just popping out in nature. Sure, some very rudimentary sensory-induced consciousness must have appeared at some point in nature, but nothing like our human consciousness could have suddenly emerged. Through millions of years of natural selection, that initial protoconsciousness gradually gave rise to the kind of consciousness we humans have—a consciousness that has, by the way, been “designed” by evolution to operate in very specific ways. Like the human eye, it has been “sculpted” by natural selection to behave in a particular and precise manner; it’s not just some random, undefined quality “floating” out there for no reason or special purpose.
In other words, it makes no sense to think that a consciousness even remotely close to ours (or of any other "advanced" kind) might simply arise in an LLM-like program.
You may be interested in a thought experiment proposed by Steven Pinker that aligns closely with the ideas in your post. Here’s the reference: https://grok.com/share/bGVnYWN5LWNvcHk%3D_f545796e-ee0a-4e90-b384-317072a5fd28
This (and the Pinker quotation) reminds me of Hans Moravec's idea re transferring human minds into computer emulations ("uploading" minds, or what Robin Hanson calls "ems").
Put the patient on a hospital bed, and scan a small portion of the brain, emulating its function on a computer. Give the patient a switch that lets him go back and forth between running part of his brain on the computer vs brain wetware. Let him flip back and forth to confirm that his experience remains the same in either mode. Then repeat, gradually, until the person is entirely emulated.
You seem to mix together two ideas - that consciousness probably arose gradually, which seems plausible, and that consciousness must have been selected for and must serve some special purpose. Why are you confident in the latter? Surely consciousness must be generated by features of our minds that were selected for, but I don't see why consciousness itself must play any role or have been selected for directly. Perhaps it could instead be a side effect (or 'epiphenomenon') of whatever cognitive functional characteristics were important to our genetic fitness.
I did mention, though, why it seems to be adaptive, and hence selected for. If we consider how it functions (for instance, how our awareness of what we're doing can fade or intensify, as when we're driving), it becomes evident that it has been precisely "designed" to operate in very specific ways. Another example: we're unconscious of most of our brain's operations and conscious of only a few processes, the awareness of which is highly plausibly adaptive (advantageous in a Darwinian sense). Although the epiphenomenon perspective can be compelling, it doesn't quite square with the strong evidence that consciousness serves a useful function. I contrast these two perspectives in more detail here: https://open.substack.com/pub/cdelosada/p/are-we-conscious-automata?r=68hj5&utm_campaign=post&utm_medium=web&showWelcomeOnShare=false
I largely agree about evolution sculpting proto-human consciousness. I would only add that Jaynes could be right and that full human consciousness arose as a historically contingent thing very recently, around 1000 BC.
Yes, Steven Pinker’s articulation of the problem is excellent.