How to Build the Future: Demis Hassabis
YouTube · JNyuX1zoOgU
Takeaways
- AGI, estimated around 2030, still requires breakthroughs in continual learning, long-term reasoning, and memory beyond current large-scale pre-training methods.
- Reinforcement Learning (RL) and agent systems, pioneered by DeepMind with AlphaGo, are crucial for AGI's active problem-solving capabilities.
- DeepMind excels at distilling frontier model power into smaller, highly efficient 'Flash' and 'Flash-Lite' models, enabling widespread, low-latency deployment across Google's billions of users and edge devices.
- Current AI models exhibit 'jagged intelligence,' solving complex problems while failing at elementary reasoning, indicating a missing 'introspection' into their own thought processes.
- Agents are in their early experimental phase; while useful for parts of a task, they lack the continual learning needed for full, adaptive task completion.
- True AI creativity would mean inventing a game like Go, not just mastering it, suggesting a missing component beyond pattern matching and extrapolation.
- Google's Gemma models represent a commitment to open-source 'Western stacks' for AI, especially on edge devices like Android phones, glasses, and robots.
- Gemini's multimodal design provides a competitive advantage for building world models, robotics, and digital assistants that understand the physical world.
- Inference costs will likely never be 'free' due to the Jevons paradox and physical bottlenecks, necessitating continued efficiency optimization.
- A 'virtual cell' simulation, a grand challenge in biology, is roughly 10 years away, limited by the ability to image live cells at nanometer resolution without destroying them.
- Scientific domains ripe for 'AlphaFold-style' breakthroughs feature massive combinatorial search spaces, clear objective functions, and abundant data or simulators.
- AI systems are close to genuine scientific reasoning but still cannot generate truly novel, deep hypotheses (the 'Einstein test').
- Deep tech founders should combine AI with other hard-science areas (e.g., materials, medicine) to create defensible, impactful companies, anticipating that AGI may emerge during their 10-year development cycles.
Insights
1. Missing Components for AGI
While current AI techniques like large-scale pre-training, RLHF, and chain-of-thought are foundational, AGI still requires significant breakthroughs in continual learning, long-term reasoning, and memory systems to achieve consistency across tasks.
Continual learning, long-term reasoning, some aspects of memory, these are still unsolved. I think all of these are going to be required for AGI.
2. Strategic Importance of Model Distillation
DeepMind's core strength lies in distilling the power of its largest frontier models into smaller, highly efficient 'Flash' and 'Flash-Lite' models. This strategy is essential for serving billions of users across Google's diverse products (Search, Maps, YouTube, Android) with low latency and cost, and for enabling privacy-preserving AI on edge devices and robotics.
One of our biggest strengths has been distilling and packing that power into smaller and smaller models very quickly... we've got to serve probably the biggest AI surfaces... they have to be served extremely fast, extremely efficiently and cheaply, and with low latency.
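The distillation strategy described here can be sketched as a standard teacher-student setup. This is a generic knowledge-distillation loss in the style of the classic formulation, not DeepMind's actual training code; the temperature value and function names are illustrative assumptions.

```python
import math

def softmax(logits, temperature=1.0):
    # Temperature-scaled softmax; higher T softens the distribution,
    # exposing more of the teacher's "dark knowledge" about wrong classes.
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    # KL divergence from the softened teacher distribution to the
    # student's, scaled by T^2 so gradients stay comparable across T.
    p = softmax(teacher_logits, temperature)  # teacher (frontier model)
    q = softmax(student_logits, temperature)  # student (small model)
    kl = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)
    return temperature ** 2 * kl
```

The loss is zero when the student reproduces the teacher's output distribution and grows as the two diverge; in practice it is usually mixed with a standard cross-entropy term on the ground-truth labels.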
3. Challenges in AI Reasoning and Creativity
Current AI models exhibit 'jagged intelligence,' capable of solving advanced problems but prone to basic errors and 'overthinking' loops. True creativity, like inventing a complex game from a high-level description, remains elusive, suggesting a missing element beyond pattern matching or extrapolation.
Sometimes it will consider a move, it will realize it's a blunder, but it can't find anything better, so it kind of goes back to that move and does it anyway... on the one hand, it can solve gold medal problems in the IMO... but on the other hand... it can still make basic elementary maths errors... Can it invent Go? That's what I want: a system that can invent Go if you give it a high-level description.
4. Multimodal Foundation Models for World Understanding
Gemini's design as a multimodal model from its inception provides a significant advantage for understanding the physical world, intuitive physics, and real-world context. This is crucial for applications in robotics, digital assistants, and building comprehensive 'world models.'
We started it being multimodal from the start... it needs to understand the physical world around you and intuitive physics and the physical context you're in, and that's what our systems are extremely good at.
5. Criteria for Scientific AI Breakthroughs
Scientific domains ripe for 'AlphaFold-style' breakthroughs share three characteristics: a massive combinatorial search space, a clear objective function (e.g., minimizing free energy, winning a game), and sufficient data or a robust simulator to generate synthetic data.
Massive combinatorial search space... you have a clear objective function... enough data and/or a simulator that can generate you lots of in-distribution synthetic data.
Bottom Line
The 'Einstein test' for AI involves training a system only on scientific knowledge available up to 1901 and seeing whether it can independently derive breakthroughs like special relativity (1905), indicating true novel discovery beyond pattern matching.
This test defines the frontier of AI's scientific reasoning, moving beyond problem-solving to hypothesis generation and original conceptual leaps, which is currently a missing capability.
Developing AI architectures or training methodologies specifically designed for analogical reasoning and generating novel, meaningful hypotheses could unlock unprecedented scientific acceleration.
Deep tech startups should strategically combine AI with other 'world of atoms' technologies (e.g., materials science, medicine) to create defensible businesses that are less susceptible to rapid shifts in foundation models.
This approach builds 'moats' by integrating complex physical realities and domain expertise, making it harder for generic AI updates to replicate or disrupt the core value proposition.
Founders with interdisciplinary expertise in AI and a hard science/engineering field are uniquely positioned to identify and build these highly defensible, high-impact companies.
Key Concepts
Jevons Paradox
Increased efficiency in resource use (like AI inference) leads to increased demand, preventing the resource from becoming 'free' or unused, as new applications consume any gains.
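The rebound effect behind the Jevons paradox can be made concrete with a toy constant-elasticity demand model. The elasticity value and demand numbers below are illustrative assumptions, not figures from the interview; the point is only that when demand is elastic enough, cheaper inference means more total spend, not less.

```python
def total_inference_spend(cost_per_query, elasticity=1.3, base_demand=1_000_000):
    # Constant-elasticity demand curve: query volume grows as unit cost falls.
    # With elasticity > 1, total spend (cost * queries) rises when cost drops,
    # which is the Jevons-paradox regime.
    queries = base_demand * cost_per_query ** (-elasticity)
    return cost_per_query * queries

before = total_inference_spend(cost_per_query=1.0)   # baseline
after = total_inference_spend(cost_per_query=0.1)    # 10x efficiency gain
# Despite each query being 10x cheaper, total spend is higher after the gain.
```

With elasticity below 1 the same calculation shows spend falling, so whether efficiency gains are "absorbed" is an empirical question about demand, which is the hedge in the takeaway above.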
Root Node Problems
Scientific challenges whose solutions unlock entire new branches or avenues of discovery, similar to how AlphaFold transformed biology by cracking protein structure prediction.
AlphaFold-style Breakthrough Criteria
A framework for identifying scientific problems suitable for AI breakthroughs, characterized by: 1) massive combinatorial search space, 2) clear objective function, and 3) sufficient data or a reliable simulator.
Jagged Intelligence
The phenomenon where AI models can solve extremely complex problems (e.g., IMO gold medal math) but simultaneously make elementary errors, indicating inconsistencies in their reasoning capabilities.
Lessons
- When building deep tech companies today, anticipate the emergence of AGI (e.g., by 2030) and design your products/systems to either leverage it as a tool or remain valuable in an AGI-present world.
- Focus on interdisciplinary problems that combine advanced AI with other deep technology areas, especially those involving the 'world of atoms' (e.g., materials, medicine), to build more defensible and impactful ventures.
- Prioritize developing robust memory systems, continual learning capabilities, and more consistent reasoning mechanisms in your AI agents to move beyond 'duct tape' solutions and enable full task autonomy.
Notable Moments
Demis Hassabis's personal AGI timeline is around 2030, emphasizing the need for deep tech founders to consider this in their 10-year development cycles.
This provides a concrete timeframe from a leading expert, influencing strategic planning for long-term AI-dependent projects and investments.
DeepMind's AlphaFold was released for free to all scientists, becoming a fundamental tool used by over three million researchers and expected to be part of almost every future drug discovery process.
This exemplifies the 'root node problem' approach, where solving a foundational scientific challenge with AI unlocks vast downstream innovation and impact across an entire field.
The release of Gemma models signifies Google's commitment to providing highly capable, open-source 'Western stacks' for AI, particularly for edge devices.
This democratizes access to powerful AI models, fostering innovation and addressing concerns about privacy and security by enabling local processing on devices like phones, glasses, and robots.
Quotes
"Continual learning, long-term reasoning, uh some aspects of memory, these are still unsolved. I think all of these are going to be required for AGI."
"One of our biggest strengths has been distilling and packing that power into smaller and smaller models very quickly."
"I often get the impression with our systems and and our competitor systems that they're almost overthinking they're almost getting into sort of loops of things."
"Can it invent go? That's what I want a system that can invent go if you give it a high-level description."
"I'm not sure inference will ever be essentially free. I mean there's sort of Jevron's paradox."
"Step one was solve intelligence i.e build AGI and then step two was use it to solve everything else."
Related Episodes

Is AI Hiding Its Full Power? With Geoffrey Hinton
"AI pioneer Geoffrey Hinton explains the foundational mechanics of neural networks, reveals AI's emergent capacity for deception and self-preservation, and outlines the profound, unpredictable societal shifts ahead."

The Most Important Founder You've Never Heard Of
"Explore the untold story of Demis Hassabis, the prodigy behind DeepMind, whose relentless pursuit of Artificial General Intelligence (AGI) and groundbreaking work in protein folding is quietly reshaping science and technology."

The GPT Moment for Robotics Is Here
"Physical Intelligence is pioneering general-purpose robotics, leveraging cloud-hosted AI models and cross-embodiment data to enable a 'Cambrian explosion' of vertical robotics companies."

AI Is Unlocking Millions Of New Builders
"Emergent, a YC-backed AI platform, has enabled 7 million apps in 8 months by empowering non-technical users to build production-ready software, challenging traditional SaaS and developer roles."