Tom Griffiths on The Laws of Thought | Mindscape 343
Summary
Takeaways
- The quest for 'laws of thought' began with early philosophers like Aristotle and Leibniz, aiming to formalize reasoning mathematically.
- George Boole formalized logic using a unique algebra, seeing himself as a theoretical psychologist focused on ideal thought.
- The introduction of probability theory by Bayes and Laplace extended logic to uncertain inferences, allowing for degrees of belief.
- Human cognition often deviates from ideal logic or Bayesian reasoning due to 'resource rationality,' where biases are efficient strategies under finite cognitive resources.
- David Marr's levels of analysis (computational, algorithmic, implementation) provide a framework for understanding intelligence, acknowledging multiple valid theories at different levels.
- Current large language models (LLMs) require vastly more data than humans to learn language, highlighting a significant 'inductive bias' gap.
- Meta-learning is an approach to train neural networks to learn from less data by optimizing initial weights to capture human-like inductive biases.
- AI systems often exhibit 'jagged intelligence,' performing exceptionally in narrow domains but failing spectacularly in closely related ones, a consequence of their training objectives.
- Representing concepts as points in space and using neural networks provides a third thread for understanding thought, complementing logic and probability.
Insights
1. The Three Threads of Thought: Logic, Probability, and Spaces
The understanding of thought has evolved through three main mathematical frameworks. Logic, starting with Aristotle and formalized by Boole, deals with certain deductions. Probability theory, developed by Bayes and Laplace, extends this to uncertain inductive inferences, allowing for degrees of belief. The third thread, emerging in the 20th century, conceptualizes thoughts as points in a mathematical space, with neural networks providing a computational mechanism for transforming and computing with these spatial representations.
Griffiths details Aristotle's syllogisms, Leibniz's arithmetic of concepts, Boole's algebra, and the probabilistic work of Bayes and Laplace. He then introduces the 'spaces, features, and networks' concept for neural networks and traces their historical development.
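The three threads can be seen as three ways of formalizing the same kind of inference. The following toy sketch (an assumed illustration, not code from the episode) shows one "reasoning" step expressed in each framework: a Boolean deduction, a Bayesian belief update, and a similarity judgment over concepts represented as points in space.

```python
import math

# Thread 1 — Logic (Boole): truth-functional deduction with certain premises.
raining, have_umbrella = True, False
get_wet = raining and not have_umbrella          # a certain conclusion

# Thread 2 — Probability (Bayes/Laplace): degrees of belief updated by evidence.
# P(rain | wet ground) = P(wet | rain) * P(rain) / P(wet)
p_rain = 0.3
p_wet_given_rain = 0.9
p_wet_given_dry = 0.1
p_wet = p_wet_given_rain * p_rain + p_wet_given_dry * (1 - p_rain)
p_rain_given_wet = p_wet_given_rain * p_rain / p_wet

# Thread 3 — Spaces: concepts as points; similarity as the angle between them.
dog, cat, car = (1.0, 0.9), (0.9, 1.0), (-0.8, 0.2)

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.hypot(*u) * math.hypot(*v))

print(get_wet)                                   # logical certainty
print(round(p_rain_given_wet, 3))                # graded belief
print(cosine(dog, cat) > cosine(dog, car))       # "dog" is nearer "cat" than "car"
```

The coordinates and probabilities are made-up round numbers; the point is only that logic yields certainties, probability yields degrees of belief, and spatial representations yield graded similarity.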
2. Resource Rationality Explains Human 'Irrationality'
Human cognitive biases and heuristics, often seen as 'errors' compared to ideal logic or Bayesian reasoning, are better understood as 'resource-rational' adaptations. These shortcuts allow humans to make effective decisions and solve problems efficiently within the constraints of finite cognitive resources (time, energy, memory). Evolution and learning have optimized these strategies for the problems humans typically face.
Griffiths discusses how human actions, seemingly irrational compared to perfect logic, 'actually have good reasons for them.' He elaborates on 'resource rationality' as redefining rationality for bounded agents, and on how heuristics are not necessarily 'bad things' but efficient shortcuts.
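A minimal sketch of the resource-rational trade-off, under assumptions of my own (a toy tour-planning task, not an example from the episode): the exact algorithm is ideal but factorially expensive, the greedy heuristic is cheap, and a bounded agent picks between them based on its compute budget.

```python
import itertools
import math

def exact_best_order(dists, cities):
    # Ideal solution: try every tour. Cost grows factorially with city count.
    return min(itertools.permutations(cities),
               key=lambda t: sum(dists[a][b] for a, b in zip(t, t[1:])))

def greedy_order(dists, cities):
    # Resource-rational shortcut: always hop to the nearest unvisited city.
    tour, left = [cities[0]], set(cities[1:])
    while left:
        nxt = min(left, key=lambda c: dists[tour[-1]][c])
        tour.append(nxt)
        left.remove(nxt)
    return tuple(tour)

def plan(dists, cities, budget):
    # Rationality under constraints: exact when affordable, heuristic otherwise.
    if math.factorial(len(cities)) <= budget:
        return exact_best_order(dists, cities)
    return greedy_order(dists, cities)

dists = {"A": {"B": 1, "C": 5}, "B": {"A": 1, "C": 2}, "C": {"A": 5, "B": 2}}
cities = ["A", "B", "C"]
print(plan(dists, cities, budget=10))   # small problem: exact search is affordable
print(plan(dists, cities, budget=2))    # tight budget: fall back to the heuristic
```

On this tiny instance both strategies happen to agree; the point is that neither choice is an "error" — each is the best algorithm for the resources available.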
3. The Inductive Bias Gap Between Humans and AI
A fundamental difference between human minds and current large language models (LLMs) lies in their inductive bias. Human children learn language from approximately five years of exposure, while LLMs require data equivalent to 5,000 to 50,000 years of continuous speech. This vast 'inductive bias gap' means humans possess innate predispositions and broader world experiences that enable highly efficient learning from limited data, a capability AI is still striving to achieve.
Griffiths highlights that 'the big difference between human minds and brains and large language models that we have today is about inductive bias.' He quantifies the data difference: '5 years of exposure' for a child versus 'between 5,000 and 50,000 years of continuous speech' for LLMs.
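A back-of-envelope version of that comparison. The speech rate and corpus size below are assumed round numbers (not figures given in the episode), chosen only to show how the 'years of continuous speech' framing falls out of the arithmetic.

```python
# Assumed rates: ~20,000 words of speech per day, and an LLM training
# corpus on the order of 300 billion words (roughly GPT-3 scale).
words_per_day = 20_000
child_years = 5
child_words = child_years * 365 * words_per_day          # tens of millions

llm_training_words = 300e9
llm_equiv_years = llm_training_words / (365 * words_per_day)

print(f"child: ~{child_words // 1_000_000}M words")
print(f"LLM:   ~{llm_equiv_years:,.0f} years of continuous speech")
print(f"gap:   ~{llm_equiv_years - child_years:,.0f} years to make up")
```

With these assumptions the LLM figure lands inside the 5,000–50,000-year range Griffiths quotes; varying the assumed speech rate or corpus size moves it within that band.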
4. Meta-Learning as a Strategy for Human-like AI
To bridge the inductive bias gap, cognitive scientists are exploring meta-learning approaches for neural networks. This involves manipulating the initial weights of a neural network in an 'outer loop' training process, allowing the network to learn more quickly and efficiently from less data in an 'inner loop.' The goal is to imbue AI with inductive biases that mimic human learning, potentially leading to more human-like solutions and better generalizability.
Griffiths describes his lab's work with Tom McCoy on meta-learning, explaining how it 'tries to create neural networks where we manipulate the initial weights... in such a way that it's able to learn from less data.' He details the 'outer loop' and 'inner loop' process.
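The inner/outer loop structure can be sketched in a few lines. This is a toy MAML-style example of my own (not the lab's code): tasks are linear functions y = a·x, the inner loop adapts a single weight from a shared initialization w0, and the outer loop adjusts w0 so that one inner step succeeds on any task from the family — w0 ends up encoding the family's inductive bias.

```python
import random

random.seed(0)
lr_inner, lr_outer = 0.3, 0.05
w0 = 5.0                                  # deliberately bad initialization

def inner_adapt(w0, a):
    # Inner loop: one gradient step on the task loss (w - a)^2,
    # i.e. fitting y = a * x from a single example at x = 1.
    return w0 - lr_inner * 2 * (w0 - a)

for _ in range(2000):                     # outer loop over sampled tasks
    a = random.gauss(1.0, 0.1)            # task family clustered around slope 1
    w1 = inner_adapt(w0, a)
    # Gradient of the POST-adaptation loss (w1 - a)^2 with respect to w0.
    grad_w0 = 2 * (w1 - a) * (1 - 2 * lr_inner)
    w0 -= lr_outer * grad_w0

# w0 has drifted toward the task family's center (slope ~1.0), so a single
# inner-loop step now lands near the solution of any new task in the family.
print(round(w0, 2))
```

The design choice mirrors the idea in the episode: nothing in the inner loop changes, but the learned starting point makes learning from one example work where learning from scratch would not.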
5. AI's 'Jagged Intelligence' and Mismatched Objectives
Current AI systems, particularly LLMs, often exhibit 'jagged intelligence'—performing exceptionally well in specific tasks but failing unexpectedly in closely related ones. This mismatch arises because AI's behavior is shaped by its explicit training objective (e.g., predicting the next word), which differs from the complex, multi-faceted computational problems human minds have evolved to solve. This leads to solutions that are 'weird' or non-intuitive from a human perspective.
Griffiths notes that an AI system can be 'very good at solving one problem and then fails quite spectacularly on a problem that's right next to it,' calling this 'jagged intelligence.' He cites research showing LLMs' sensitivity to output probability over correctness, such as being better at counting to 30 than to 29 because 30 is more common online.
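A deliberately crude caricature of that probability-over-correctness effect, using made-up numbers (this is my illustration, not the cited study's method): a "model" scores answers by blending correctness with corpus frequency, and when the frequency prior is weighted heavily enough, the common answer wins even on the task where it is wrong.

```python
# Assumed unigram frequencies: round numbers like "30" are commoner online.
corpus_freq = {"29": 0.02, "30": 0.10}

def model_answer(correct, weight_prior=0.95):
    # Score each candidate by a blend of "is it correct?" and "how common is it?".
    def score(c):
        return (1 - weight_prior) * (c == correct) + weight_prior * corpus_freq[c]
    return max(corpus_freq, key=score)

print(model_answer("30"))   # task "count to 30": prior and correctness agree
print(model_answer("29"))   # task "count to 29": the frequency prior wins anyway
```

The failure sits 'right next to' the success: the two tasks differ by one, but the training objective's preference for probable strings makes the model jagged across that boundary.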
Bottom Line
The historical development of 'laws of thought' (from logic to probability to neural networks) suggests that a complete understanding of intelligence requires integrating diverse mathematical frameworks rather than seeking a single, unifying theory.
This multi-faceted approach implies that future breakthroughs in AI may come from combining and synthesizing different computational paradigms, rather than solely optimizing current deep learning architectures. It challenges the 'one simple trick' mentality.
Research into hybrid AI architectures that explicitly integrate symbolic logic, probabilistic reasoning, and neural network representations could yield more robust, interpretable, and human-aligned intelligence.
Human cognitive 'biases' are often highly efficient 'resource-rational' strategies, optimized by evolution and learning for real-world constraints, not flaws.
This re-evaluation of human 'irrationality' suggests that mimicking human biases in AI, or at least understanding their adaptive function, could lead to more efficient and robust systems. Such systems would perform better under real-world resource limitations than ones striving for 'perfect' but computationally expensive solutions.
Develop AI systems that learn and adapt their computational strategies based on available resources, dynamically choosing between 'ideal' but costly algorithms and 'resource-rational' heuristics, similar to how humans make trade-offs.
The massive data requirements of current LLMs (5,000-50,000 years of speech equivalent) compared to human children (5 years) highlight a profound 'inductive bias' gap.
This gap indicates that current AI is fundamentally different from human intelligence in its learning efficiency. Simply scaling up data and compute may not lead to human-like generalization or common sense. It underscores the need for AI to develop stronger, more innate learning predispositions.
Invest heavily in meta-learning and other techniques to engineer stronger inductive biases into AI models, enabling them to learn effectively from significantly less data, reducing training costs, and improving generalization capabilities.
Opportunities
AI Inductive Bias Engineering Service
A service that uses meta-learning and cognitive science insights to 'engineer' stronger, more human-like inductive biases into AI models. This would enable businesses to train highly effective AI with significantly less data, reducing computational costs and accelerating deployment for specialized applications where data is scarce.
AI 'Jagged Intelligence' Diagnostic & Mitigation Platform
A platform that analyzes AI models to identify and characterize their 'jagged intelligence' boundaries and idiosyncratic biases (e.g., sensitivity to output probability over correctness). It would provide diagnostics, explainability tools, and suggest mitigation strategies to make AI behavior more predictable, reliable, and aligned with human expectations.
Key Concepts
David Marr's Levels of Analysis
A framework for understanding information processing systems (like the mind) at three levels: 1) Computational (what problem is being solved and what's the ideal solution?), 2) Algorithmic (what processes approximate that ideal solution?), and 3) Implementation (how is it physically realized?). This model suggests that a complete understanding requires compatible theories at all three levels, not a single 'silver bullet' explanation.
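Marr's separation of levels can be made concrete with a stand-in problem of my choosing (sorting, not an example from the episode): one computational-level specification, two different algorithmic-level theories that satisfy it, and an implementation level that is a separate question again.

```python
# Computational level: WHAT is computed? "Given a list, return its elements
# in ascending order." This level says nothing about the procedure used.

# Algorithmic level: two different processes compute the same function.
def insertion_sort(xs):
    out = []
    for x in xs:
        i = 0
        while i < len(out) and out[i] < x:
            i += 1
        out.insert(i, x)
    return out

def merge_sort(xs):
    if len(xs) <= 1:
        return list(xs)
    mid = len(xs) // 2
    left, right = merge_sort(xs[:mid]), merge_sort(xs[mid:])
    out = []
    while left and right:
        out.append(left.pop(0) if left[0] <= right[0] else right.pop(0))
    return out + left + right

# Implementation level: how the steps are physically realized (here CPython
# bytecode on silicon; in a mind, neurons) is yet another question.
data = [3, 1, 2]
print(insertion_sort(data) == merge_sort(data) == [1, 2, 3])
```

Both algorithms are valid theories at the algorithmic level for the same computational-level problem — which is exactly why Marr's framework licenses multiple compatible theories rather than one 'silver bullet.'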
Resource Rationality
The idea that human cognitive 'irrationality' or biases, when compared to perfect logical or probabilistic reasoning, are often optimal strategies for solving problems given finite cognitive resources (e.g., time, energy, memory). It reframes rationality as using the best algorithm to choose actions under specific constraints, rather than always choosing the ideal action.
Inductive Bias
The set of assumptions or predispositions a learning system (human or AI) has that allows it to generalize from limited data. Humans possess strong inductive biases, enabling rapid learning from small datasets, whereas current large language models require immense amounts of data due to weaker inherent biases. Meta-learning aims to instill stronger inductive biases in AI.
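A tiny illustration of an inductive bias in action (an assumed example, with a hidden rule I invented): a learner that assumes "the rule is linear" generalizes from two examples, while a learner with no such assumption — here, rote memorization — cannot say anything about unseen inputs.

```python
train = [(1, 3), (2, 5)]                 # hidden rule: y = 2x + 1

# Strong inductive bias: assume linearity, fit a line through the two points.
(x1, y1), (x2, y2) = train
slope = (y2 - y1) / (x2 - x1)
intercept = y1 - slope * x1
linear_predict = lambda x: slope * x + intercept

# Weak inductive bias: just memorize the training pairs.
table = dict(train)
rote_predict = lambda x: table.get(x)    # None for anything unseen

print(linear_predict(10))                # generalizes from two examples: 21.0
print(rote_predict(10))                  # no assumption, no generalization: None
```

The linearity assumption is doing all the work: it is what lets two data points pin down the whole function, which is the sense in which stronger inductive biases mean learning from less data.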
Lessons
- When evaluating AI, move beyond surface-level performance to consider its underlying 'objective function' and resource constraints, as these fundamentally shape its behavior and limitations.
- Apply David Marr's levels of analysis to complex problems: first, define the ideal computational goal; second, devise an algorithmic strategy given available resources; third, consider the implementation details.
- Recognize that human 'irrationality' often stems from 'resource rationality.' Instead of trying to eliminate all biases, understand their adaptive function and when they are appropriate shortcuts.
- For AI development, prioritize instilling stronger inductive biases through techniques like meta-learning to enable more efficient, human-like learning from limited data, rather than solely relying on vast datasets.
Notable Moments
Host Sean Carroll's observation about LLMs' poor arithmetic and counting abilities despite being computers, likening it to human limitations.
This sets the stage for the core theme: 'thought' isn't a single thing, and systems (human or AI) optimize for different abilities under constraints, leading to surprising strengths and weaknesses.
Griffiths' revelation that Leibniz's attempt to formalize thought as arithmetic involved 'vector embeddings' for concepts, a precursor to modern AI techniques.
This highlights the deep historical roots of computational theories of mind and how seemingly modern AI concepts have philosophical antecedents, demonstrating a long-standing quest for mathematical understanding of thought.
Marvin Minsky's abandonment of neural network research in the 1960s due to the perceived impossibility of scaling them up to 'learn anything interesting.'
This illustrates a classic scientific 'dead end' that was later overcome by technological advancements (compute power, data) and algorithmic breakthroughs (backpropagation), emphasizing how practical constraints can shape theoretical progress and vice versa.
Quotes
"It's almost like you tried really hard to make a program that sounded human and in the course of doing that, it lost the ability to do arithmetic, which is kind of interesting when you think about it."
"The idea of coming up with the laws of thought, the laws that an ideally rational creature working under certain constraints would follow."
"We are, you know, good at solving the kinds of problems that we face with the resources that we have."
"The big difference between human minds and brains and large language models that we have today is about inductive bias."
"If you want to be able to learn from five years of data and you're currently learning from 5,000 years of data, then you've got 4,995 years to make up in terms of the content of that inductive bias."
"You can have an AI system that's very good at solving one problem and then fails quite spectacularly on a problem that's right next to it."