Simple algos have intrinsic motivations — AI isn’t “just linear algebra”
Conversation with Michael Levin on AI consciousness
Hello fellow humans! Sharing my latest interview on AI consciousness with the renowned biologist Michael Levin, co-creator of xenobots (computer-designed “living robots”).
Key moments:
Intelligence is not the same as consciousness, and we can’t rule out consciousness in AI. Consciousness shouldn’t be tied to a particular substrate (silicon vs. biological origin).
Humanity might go the way of the Neanderthals (depending on how AI development progresses).
The barrier to creating bioweapons has never been high to begin with; AI only makes it somewhat lower.
Even very simple algorithms can show intrinsic motivations that resemble free will. We need tools to recognize those motivations, suppress unwanted behaviors, and encourage the ones we want. We should stay humble and not dismiss AI as “just linear algebra”: even simple deterministic code can exhibit drives we don’t fully understand.
It’s a continuous process from the chemical blob of an unfertilized egg to a formed human mind — there is no magic lightning flash at which you stop being a bunch of chemicals and become a mind.
Where is the fine line between creatures we consider sentient and those we don’t? It doesn’t exist, yet the crazy thing is that we still have to draw one.
The cognitive light cone captures the scale of goals humans can pursue and the largest things we can truly comprehend. It’s not just about intelligence — it also includes compassion. That combination is what makes us human.
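The point above about simple deterministic algorithms can be illustrated with a minimal sketch (my own toy example, not code from the interview): a sorting rule in which each element only ever looks at its immediate neighbor, yet the system as a whole behaves as if it “wants” to reach the sorted state, even recovering from outside perturbation.

```python
# Hypothetical illustration: a purely local, fully deterministic rule
# (a bubble-sort pass) with no global controller or explicit goal,
# whose collective behavior nonetheless looks goal-directed.

def local_sort_step(xs):
    """One pass of a local swap rule: each element compares only
    with its right-hand neighbor and swaps if out of order."""
    xs = list(xs)
    for i in range(len(xs) - 1):
        if xs[i] > xs[i + 1]:  # purely local decision, no global view
            xs[i], xs[i + 1] = xs[i + 1], xs[i]
    return xs

def run_until_sorted(xs, perturb_at=None, perturb=None):
    """Iterate the local rule until the array is sorted,
    optionally applying an external perturbation mid-run."""
    steps = 0
    while xs != sorted(xs):
        xs = local_sort_step(xs)
        steps += 1
        if perturb_at is not None and steps == perturb_at:
            xs = perturb(xs)  # external interference with the system
    return xs, steps

if __name__ == "__main__":
    data = [5, 1, 4, 2, 8, 3]
    # Scramble the array after the second pass; the system still
    # reaches its sorted "goal state" with no central plan.
    final, steps = run_until_sorted(
        data, perturb_at=2, perturb=lambda xs: xs[::-1])
    print(final)  # the sorted list, despite the perturbation
```

Nothing in the code mentions a goal, yet describing the run almost forces intentional language (“it keeps trying to get sorted”), which is the humility Levin argues for when we talk about what software “wants.”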
Watch the full episode on my new YouTube channel about AI sentience.

