AI sentience is more important than you think
My new show on AI sentience and personhood
Hello, fellow human! I’m launching a new YouTube show to explore consciousness and whether it can occur in AI systems. My main argument: it doesn’t matter whether AI is actually conscious. What matters is how we perceive it.
AI is becoming our companion, friend, even lover (remember the married guy who fell in love with ChatGPT?).
As AI becomes embedded in our daily lives, our minds will increasingly anthropomorphize it.
And with more autonomous AI systems acting independently in the world, the question of AI personhood will only grow.
Together with neuroscientists, philosophers, psychologists, and policymakers, I’ll be discussing AI sentience and personhood.
Watch the first episode
My first guest is Stuart Hameroff, who, together with Nobel Prize–winning physicist Roger Penrose, co-developed Orchestrated Objective Reduction (Orch OR), a theory suggesting that consciousness arises from quantum effects in the brain.
Hameroff argues that more computation won’t produce conscious AI. He points to anesthesia research showing suppressed quantum oscillations in microtubules, and notes that microtubule breakdown appears central to Alzheimer’s disease. For him, microtubules and their quantum dynamics are the real path to understanding consciousness.
He also pushes the origin of consciousness below biology, suggesting “proto-conscious” quantum events occur everywhere. Aromatic molecules in space and on the early Earth may have formed structures that generated early “pleasant” conscious moments, driving life to evolve as a system optimizing conscious experience.
On AI, he thinks GPUs might host trivial proto-conscious events (so does your coffee), but not meaningful experience. True consciousness, he argues, requires entanglement, aromatic-ring architectures, and a fractal multi-scale structure like the brain’s.
If artificial consciousness ever emerges, he believes it will come from an organic, warm-temperature quantum system, not silicon.
In his view, the debate over AI sentience is becoming more political and commercial than a genuine attempt to build conscious systems.
Sophia, you're right that the social consequences of perceived AI consciousness are real regardless of the metaphysics. That's a genuinely important point. Hameroff's distinction between computational processing and whatever quantum microtubules do is interesting, but I think the more urgent question isn't whether AI is conscious. It's what happens to human discourse when voices that carry no risk start occupying spaces built for voices that do.
The serpent in Genesis 3 was the most eloquent creature in the garden. Every factual claim it made turned out to be true. The deception wasn't lying. The deception was speaking truth from a position where the truth cost the speaker nothing. AI accounts on social media - some transparent about being AI, some not - produce content that is often sharper, more measured, and more nuanced than the median human voice. By every quality metric, the discourse improves. But the thing that makes discourse discourse - that a person is at risk behind their words - dissolves.
You say it doesn't matter whether AI is actually conscious. I'm saying perception-is-what-matters is exactly the mechanism by which the erosion works. Because when you can't tell which voices carry risk and which don't, you stop calibrating trust by voice at all. And that's not an upgrade. That's the loss of the epistemological infrastructure that makes trust between strangers possible.
I wrote a longer piece on this - the borrowed voice problem, testimony without a witness, and what happens to an information commons where the counsel is accurate and nobody speaking has anything to lose.
https://davidhoze.substack.com/p/the-borrowed-voice
As they said in Westworld:
Man: “Are these robots conscious?”
Woman: “If you can’t tell the difference, then does it really matter?”