Quick Read

DeepMind CEO Demis Hassabis details the missing pieces for Artificial General Intelligence (AGI), the strategic role of smaller AI models, and how AI will transform scientific discovery, urging founders to combine AI with other deep tech.
AGI requires continual learning, long-term reasoning, and better memory, not just scaled-up current methods.
DeepMind prioritizes distilling powerful AI into smaller, efficient models for edge devices and billions of users.
True scientific breakthroughs with AI need massive combinatorial search, clear objectives, and rich data/simulators.

Summary

Demis Hassabis, CEO of Google DeepMind, discusses the current state and future trajectory of AI, emphasizing the remaining challenges on the path to Artificial General Intelligence (AGI), which he estimates will arrive around 2030. While current large-scale pre-training and RLHF are foundational, he argues the key missing components are continual learning, long-term reasoning, and robust memory systems.

Hassabis explains DeepMind's strategy of distilling powerful frontier models into smaller, efficient 'Flash' and 'Flash-Lite' models for broad deployment across Google products and edge devices, driven by cost, speed, privacy, and security needs. He also addresses the 'jagged intelligence' of current models, noting their struggles with consistent reasoning and true creativity (inventing a game rather than merely playing it).

For scientific discovery, Hassabis outlines the criteria for 'AlphaFold-style' breakthroughs: massive combinatorial search spaces, clear objective functions, and sufficient data or simulators. He advises deep tech founders to pursue interdisciplinary problems, combining AI with other hard-science areas such as materials or medicine, to build defensible, long-lasting value, and to anticipate AGI's emergence within their 10-year development cycles.
Demis Hassabis, a pioneer in AI, offers an unparalleled, insider perspective on the path to AGI and its profound implications. His insights are critical for anyone building in AI, revealing both the technical frontiers and strategic considerations for developing robust, efficient, and impactful AI systems. His advice for founders underscores the importance of tackling 'root node problems' in science and combining AI with deep tech to create truly transformative and defensible ventures, providing a roadmap for navigating the rapidly evolving AI landscape.

Takeaways

  • AGI, estimated around 2030, still requires breakthroughs in continual learning, long-term reasoning, and memory beyond current large-scale pre-training methods.
  • Reinforcement Learning (RL) and agent systems, pioneered by DeepMind with AlphaGo, are crucial for AGI's active problem-solving capabilities.
  • DeepMind excels at distilling frontier model power into smaller, highly efficient 'Flash' and 'Flash-Lite' models, enabling widespread, low-latency deployment across Google's billions of users and edge devices.
  • Current AI models exhibit 'jagged intelligence,' solving complex problems while failing at elementary reasoning, indicating a missing 'introspection' about their own thought processes.
  • Agents are in their early experimental phase; while useful for task aspects, they lack the continual learning needed for full, adaptive task completion.
  • True AI creativity would involve inventing a game like Go, not just mastering it, suggesting a missing component beyond pattern matching and extrapolation.
  • Google's Gemma models represent a commitment to open-source 'Western stacks' for AI, especially for vulnerable edge devices like Android, glasses, and robotics.
  • Gemini's multimodal design provides a competitive advantage for building world models, robotics, and digital assistants that understand the physical world.
  • Inference costs will likely never be 'free' due to Jevons paradox and physical bottlenecks, necessitating continued efficiency optimization.
  • A 'virtual cell' simulation, a grand challenge in biology, is approximately 10 years away, limited by the ability to image live cells at nanometer resolution without destruction.
  • Scientific domains ripe for 'AlphaFold-style' breakthroughs feature massive combinatorial search spaces, clear objective functions, and abundant data or simulators.
  • AI systems are close to genuine scientific reasoning but still lack the ability to generate truly novel, deep hypotheses (the 'Einstein test').
  • Deep tech founders should combine AI with other hard science areas (e.g., materials, medicine) to create defensible, impactful companies, acknowledging AGI's potential emergence during their 10-year development cycles.

Insights

1. Missing Components for AGI

While current AI techniques like large-scale pre-training, RLHF, and chain-of-thought are foundational, AGI still requires significant breakthroughs in continual learning, long-term reasoning, and memory systems to achieve consistency across tasks.

Continual learning, long-term reasoning, some aspects of memory, these are still unsolved. I think all of these are going to be required for AGI.

2. Strategic Importance of Model Distillation

DeepMind's core strength lies in distilling the power of its largest frontier models into smaller, highly efficient 'Flash' and 'Flash-Lite' models. This strategy is essential for serving billions of users across Google's diverse products (Search, Maps, YouTube, Android) with low latency and cost, and for enabling privacy-preserving AI on edge devices and robotics.

One of our biggest strengths has been distilling and packing that power into smaller and smaller models very quickly... we've got to serve the biggest probably AI surfaces... they have to be served extremely fast, extremely efficiently and cheaply and with low latency.
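Hassabis doesn't describe DeepMind's distillation recipe in the episode. As a minimal sketch of the general technique he is referring to, the textbook knowledge-distillation objective trains a small student model to match a large teacher's temperature-softened output distribution; all names and numbers below are illustrative assumptions, not DeepMind's actual method.

```python
import math

def softmax(logits, temperature=1.0):
    # Temperature-scaled softmax: higher temperature softens the
    # distribution, exposing the teacher's "dark knowledge" about
    # relative probabilities of wrong answers.
    exps = [math.exp(z / temperature) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    # Cross-entropy between the teacher's softened distribution and the
    # student's: minimized when the student reproduces the teacher.
    p_teacher = softmax(teacher_logits, temperature)
    p_student = softmax(student_logits, temperature)
    return -sum(p * math.log(q) for p, q in zip(p_teacher, p_student))

# A student that matches the teacher incurs less loss than one that doesn't.
teacher = [3.0, 1.0, 0.2]
matched = distillation_loss(teacher, teacher)
mismatched = distillation_loss([0.2, 1.0, 3.0], teacher)
```

In practice this loss is usually mixed with the ordinary cross-entropy on hard labels, but the core idea above is why a much smaller model can recover a surprising fraction of a frontier model's behavior.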

3. Challenges in AI Reasoning and Creativity

Current AI models exhibit 'jagged intelligence,' capable of solving advanced problems but prone to basic errors and 'overthinking' loops. True creativity, like inventing a complex game from a high-level description, remains elusive, suggesting a missing element beyond pattern matching or extrapolation.

Sometimes it will consider a move, it will realize it's a blunder, but it can't find anything better, so it kind of goes back to that move and does it anyway... on the one hand, it can solve gold-medal problems in the IMO... but on the other hand... it can still make basic elementary maths errors... Can it invent Go? That's what I want: a system that can invent Go if you give it a high-level description.

4. Multimodal Foundation Models for World Understanding

Gemini's design as a multimodal model from its inception provides a significant advantage for understanding the physical world, intuitive physics, and real-world context. This is crucial for applications in robotics, digital assistants, and building comprehensive 'world models.'

We started it being multimodal from the start... it needs to understand the physical world around you, and intuitive physics, and the physical context you're in, and that's what our systems are extremely good at.

5. Criteria for Scientific AI Breakthroughs

Scientific domains ripe for 'AlphaFold-style' breakthroughs share three characteristics: a massive combinatorial search space, a clear objective function (e.g., minimizing free energy, winning a game), and sufficient data or a robust simulator to generate synthetic data.

Massive combinatorial search space... you have a clear objective function... enough data and/or a simulator that can generate lots of in-distribution synthetic data.

Bottom Line

The 'Einstein test' for AI involves training a system only on scientific knowledge available up to 1901 and seeing whether it can independently derive breakthroughs like special relativity (1905), indicating true novel discovery beyond pattern matching.

So What?

This test defines the frontier of AI's scientific reasoning, moving beyond problem-solving to hypothesis generation and original conceptual leaps, which is currently a missing capability.

Impact

Developing AI architectures or training methodologies specifically designed for analogical reasoning and generating novel, meaningful hypotheses could unlock unprecedented scientific acceleration.

Deep tech startups should strategically combine AI with other 'world of atoms' technologies (e.g., materials science, medicine) to create defensible businesses that are less susceptible to rapid shifts in foundation models.

So What?

This approach builds 'moats' by integrating complex physical realities and domain expertise, making it harder for generic AI updates to replicate or disrupt the core value proposition.

Impact

Founders with interdisciplinary expertise in AI and a hard science/engineering field are uniquely positioned to identify and build these highly defensible, high-impact companies.

Key Concepts

Jevons Paradox

Increased efficiency in resource use (like AI inference) leads to increased demand, preventing the resource from becoming 'free' or unused, as new applications consume any gains.
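As a toy numeric illustration of the definition above (all numbers are hypothetical, not from the episode): if demand for inference responds super-linearly to falling cost, a 10x efficiency gain can still increase total resource consumption.

```python
def total_consumption(base_demand, efficiency_gain, demand_elasticity):
    # Toy Jevons-paradox model: demand grows as a power of the efficiency
    # gain, while each unit of demand costs 1/efficiency_gain resources.
    # With elasticity > 1, demand outpaces efficiency and total use rises.
    demand = base_demand * efficiency_gain ** demand_elasticity
    return demand / efficiency_gain

before = total_consumption(100, 1, 1.5)   # 100.0 units of compute
after = total_consumption(100, 10, 1.5)   # ~316.2 units: usage rose despite 10x efficiency
```

This is why, on Hassabis's argument, efficiency gains get absorbed by new applications rather than making inference 'free'.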

Root Node Problems

Scientific challenges whose solutions unlock entire new branches or avenues of discovery, similar to how AlphaFold transformed biology by cracking protein structure prediction.

AlphaFold-style Breakthrough Criteria

A framework for identifying scientific problems suitable for AI breakthroughs, characterized by: 1) massive combinatorial search space, 2) clear objective function, and 3) sufficient data or a reliable simulator.

Jagged Intelligence

The phenomenon where AI models can solve extremely complex problems (e.g., IMO gold medal math) but simultaneously make elementary errors, indicating inconsistencies in their reasoning capabilities.

Lessons

  • When building deep tech companies today, anticipate the emergence of AGI (e.g., by 2030) and design your products/systems to either leverage it as a tool or remain valuable in an AGI-present world.
  • Focus on interdisciplinary problems that combine advanced AI with other deep technology areas, especially those involving the 'world of atoms' (e.g., materials, medicine), to build more defensible and impactful ventures.
  • Prioritize developing robust memory systems, continual learning capabilities, and more consistent reasoning mechanisms in your AI agents to move beyond 'duct tape' solutions and enable full task autonomy.

Notable Moments

Demis Hassabis's personal AGI timeline is around 2030, emphasizing the need for deep tech founders to consider this in their 10-year development cycles.

This provides a concrete timeframe from a leading expert, influencing strategic planning for long-term AI-dependent projects and investments.

DeepMind's AlphaFold was released for free to all scientists, becoming a fundamental tool used by over three million researchers and expected to be part of almost every future drug discovery process.

This exemplifies the 'root node problem' approach, where solving a foundational scientific challenge with AI unlocks vast downstream innovation and impact across an entire field.

The release of Gemma models signifies Google's commitment to providing highly capable, open-source 'Western stacks' for AI, particularly for edge devices.

This democratizes access to powerful AI models, fostering innovation and addressing concerns about privacy and security by enabling local processing on devices like phones, glasses, and robots.

Quotes

"Continual learning, long-term reasoning, some aspects of memory, these are still unsolved. I think all of these are going to be required for AGI."

Demis Hassabis

"One of our biggest strengths has been distilling and packing that power into smaller and smaller models very quickly."

Demis Hassabis

"I often get the impression with our systems and our competitor systems that they're almost overthinking, they're almost getting into sort of loops of things."

Demis Hassabis

"Can it invent Go? That's what I want: a system that can invent Go if you give it a high-level description."

Demis Hassabis

"I'm not sure inference will ever be essentially free. I mean, there's sort of Jevons paradox."

Demis Hassabis

"Step one was solve intelligence, i.e. build AGI, and then step two was use it to solve everything else."

Demis Hassabis
