Twitch Co-founder Emmett Shear on the Coexistence of Humans and Superintelligence
Plus: Google DeepMind’s AGI research principles; NVIDIA’s new project — DiffusionRenderer.
Hello, fellow human! Sharing some of the most interesting BuzzRobot talks with you. A few weeks ago, we hosted an AMA with Emmett Shear, co-founder of Twitch. Emmett was briefly interim CEO of OpenAI (remember the OpenAI drama?). Today, he’s building a company called Softmax with the mission of solving the AI alignment problem.
The core idea: leading AI labs align AI systems to human preferences, which is a system of control. But Superintelligence will be more powerful than humans, so we won’t be able to control it. Instead, we want it to be aligned with us, to see us as part of its tribe. That increases the probability that it keeps us around rather than eliminating us.
Check out the recording of my conversation with Emmett here.
Some key takeaways from the conversation:
We are raising Superintelligence collectively as a species. This is the first time one species is giving birth to another. It's literally being born from our collective consciousness, from our cultural knowledge.
[About creating Superintelligence] It's not as hard as it looks, it's going to happen, and it doesn't take big clusters to do it. The human brain runs on about 20 watts. At some point, every kid will be able to run a Superintelligence on their laptop.
Aligning AI to user preferences and values is a system of control. But how can you control something more powerful than you are? Softmax's mission is alignment to the greater whole, the way independent cells form a multicellular organism, or ants in a colony each know their role.
AI is part of humanity: we will be aligned to it, and it will be aligned to us. You can use a tool without being aligned with it, but you can't be aligned with another living being (e.g., a Superintelligence) without it being aligned back to you.
[Advice to founders] Go after very crazy ideas. Nobody is safe; even safe ideas aren't safe anymore. So go after really crazy ideas.
Some other useful bits: next Tuesday, we are hosting a talk with the NVIDIA team about their recent CVPR work, DiffusionRenderer. It’s a neural rendering framework that approximates how light behaves in the real world: it can manipulate light in images, turning daytime into night or a sunny scene into a cloudy one, and it can generate synthetic data for autonomous vehicle and robotics research.
Read the details of the upcoming talk and register to attend here (the talk is online).
And if you feel like watching more AI research talks, here are a few cool ones we recently hosted:
Lawrence Chan from METR.org (the lab that evaluates frontier models) joined an AMA with our community on AI safety. He covered why AI self-awareness is a machine learning research problem, and discussed the risks of manipulation and deception by AI. He also shared METR’s recent finding that the length of tasks AI agents can complete is doubling roughly every seven months (see the quick extrapolation sketch after this list).
If you’re into AGI safety, check out Rohin Shah’s talk. Rohin leads the AGI Safety & Alignment team at Google DeepMind. He shared some of the principles his team uses in their AGI safety research.
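To make that seven-month doubling concrete, here's a minimal back-of-the-envelope extrapolation in Python. The one-hour starting task horizon is a hypothetical number for illustration, not METR's figure:

```python
# Back-of-the-envelope extrapolation of the trend METR reported:
# the task horizon AI agents can handle doubles roughly every 7 months.
# The 1-hour starting horizon is hypothetical, for illustration only.

DOUBLING_MONTHS = 7
start_horizon_hours = 1.0  # assumed starting point, not METR's number

for months in range(0, 36, 7):
    projected = start_horizon_hours * 2 ** (months / DOUBLING_MONTHS)
    print(f"+{months:2d} months: ~{projected:.1f}h task horizon")
```

Under these assumptions, a one-hour horizon today grows to roughly a 32-hour horizon in just under three years.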