Machine Learning

Discover key takeaways from 12 podcast episodes about this topic.

This Startup Catches Fraud at Scale
AI Agents · Fraud Detection · Compliance
Mar 31, 2026

Variance, an AI startup, emerged from three years in stealth with a $21 million Series A. Its AI agents automate complex fraud and compliance reviews for Fortune 500 companies and marketplaces, replacing slow human review with self-healing, dynamic systems.

Y Combinator
Is AI Hiding Its Full Power? With Geoffrey Hinton
Artificial Intelligence · Neural Networks · Deep Learning
Feb 28, 2026

AI pioneer Geoffrey Hinton explains the foundational mechanics of neural networks, reveals AI's emergent capacity for deception and self-preservation, and outlines the profound, unpredictable societal shifts ahead.

StarTalk Podcast
Our latest reports on robots
Artificial Intelligence · Robotics · Humanoid Robots
Feb 14, 2026

Rapid advancements in AI are transforming industries from manufacturing and defense to scientific research and art, raising profound questions about human labor, ethics, and the future of intelligence.

60 Minutes
Tom Griffiths on The Laws of Thought | Mindscape 343
Cognitive Science · Artificial Intelligence · Philosophy of Mind
Feb 9, 2026

Cognitive scientist Tom Griffiths explores the historical quest for the 'laws of thought,' revealing how logic, probability, and neural networks offer distinct yet complementary frameworks for understanding human and artificial intelligence, especially concerning resource constraints and inductive biases.

Sean Carroll
Leveraging Per-Instance Privacy for Machine Unlearning
Machine Learning · Data Privacy · Unlearning Algorithms
Jan 27, 2026

This research presents a theoretical and empirical framework for quantifying how difficult individual data points are to unlearn, showing that the number of unlearning steps scales logarithmically with per-instance privacy loss.

Google TechTalks
Cascading Adversarial Bias from Injection to Distillation in Language Models
Language Models · Adversarial Attacks · Data Poisoning
Jan 27, 2026

Adversarial bias injected into large language models (LLMs) during instruction tuning can cascade and amplify in distilled student models, even with minimal poisoning, bypassing current detection methods.

Google TechTalks
Differentially Private Synthetic Data without Training
Differential Privacy · Synthetic Data Generation · Generative AI
Jan 27, 2026

Microsoft Research introduces 'Private Evolution,' a novel framework that generates differentially private synthetic data using only inference APIs, bypassing the high costs and limitations of traditional DP fine-tuning.

Google TechTalks
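The loop the talk describes (generate candidate samples from a model's inference API, score them with a differentially private nearest-neighbor vote over the private data, then resample and perturb the winners) can be sketched in miniature. The `api_random` and `api_variation` functions below are hypothetical stand-ins for real inference-API calls, and the noise calibration is illustrative rather than a faithful rendering of Microsoft's framework:

```python
import numpy as np

rng = np.random.default_rng(0)

def api_random(n, dim):
    """Hypothetical stand-in for an inference-API call that draws fresh samples."""
    return rng.normal(size=(n, dim))

def api_variation(samples, scale=0.3):
    """Hypothetical stand-in for an inference-API call that perturbs chosen samples."""
    return samples + rng.normal(scale=scale, size=samples.shape)

def dp_nn_histogram(private, candidates, sigma):
    """Each private point votes for its nearest candidate; Gaussian noise makes the votes DP."""
    dists = np.linalg.norm(private[:, None, :] - candidates[None, :, :], axis=2)
    votes = np.bincount(dists.argmin(axis=1), minlength=len(candidates)).astype(float)
    votes += rng.normal(scale=sigma, size=votes.shape)
    return np.clip(votes, 0.0, None)

def private_evolution(private, n_synth=20, iters=5, sigma=1.0):
    # No training: only API calls plus noisy voting touch the private data.
    cands = api_random(n_synth, private.shape[1])
    for _ in range(iters):
        hist = dp_nn_histogram(private, cands, sigma)
        if hist.sum() == 0:
            probs = np.full(len(cands), 1.0 / len(cands))
        else:
            probs = hist / hist.sum()
        idx = rng.choice(len(cands), size=n_synth, p=probs)
        cands = api_variation(cands[idx])
    return cands

private = rng.normal(loc=3.0, size=(200, 2))
synth = private_evolution(private)
```

The design point the sketch preserves is that the generator is never fine-tuned: the private data only ever influences the noisy histogram used to select which candidates survive.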
Threat Models for Memorization: Privacy, Copyright, and Everything In-Between
Machine Learning · Privacy · Copyright
Jan 27, 2026

Relaxing the threat models used to study memorization in machine learning, even to settings with natural data and benign users, exposes unexpected privacy and copyright vulnerabilities in AI models.

Google TechTalks
The Limits and Possibilities of One Run Auditing
Differential Privacy · Privacy Auditing · Machine Learning
Jan 27, 2026

This talk dissects the theoretical limitations of one-run privacy auditing for differential privacy while demonstrating its practical effectiveness and outlining pathways for significant improvement.

Google TechTalks
Continual Release Moment Estimation with Differential Privacy
Differential Privacy · Moment Estimation · Streaming Algorithms
Jan 27, 2026

This research introduces a novel differentially private algorithm, Joint Moment Estimation (JME), that efficiently estimates both first and second moments of streaming private data with a 'second moment for free' property, outperforming baselines in high privacy regimes.

Google TechTalks
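For context on the problem setup (not the JME algorithm itself), a naive continual-release baseline adds fresh Gaussian noise to the running prefix sums of x and x² at every step and releases both moment estimates. The noise scale `sigma` here is a free illustrative parameter; a real deployment would calibrate it to the stream's sensitivity and the privacy budget, and JME's point is precisely that this naive per-moment noising can be beaten:

```python
import numpy as np

rng = np.random.default_rng(1)

def continual_moments(stream, sigma):
    """Naive continual-release baseline: at each time step t, release noisy
    estimates of the first and second moments of the stream seen so far,
    by perturbing each prefix sum with independent Gaussian noise."""
    s1 = 0.0  # running sum of x
    s2 = 0.0  # running sum of x^2
    releases = []
    for t, x in enumerate(stream, start=1):
        s1 += x
        s2 += x * x
        noisy_mean = (s1 + rng.normal(scale=sigma)) / t
        noisy_second = (s2 + rng.normal(scale=sigma)) / t
        releases.append((noisy_mean, noisy_second))
    return releases

stream = np.ones(100)
releases = continual_moments(stream, sigma=0.1)
```

Noising each moment independently pays the privacy cost twice per step; the "second moment for free" property claimed in the talk is what this baseline lacks.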
Optimistic Verifiable Training by Controlling Hardware Nondeterminism
Machine Learning · Verifiable Computing · Hardware Non-Determinism
Jan 27, 2026

This research details a novel method for verifiable machine learning model training by controlling hardware non-determinism, ensuring identical model outputs across different GPUs for enhanced security and accountability.

Google TechTalks
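Once training is made bit-deterministic, the verification pattern described reduces to hashing checkpoints and comparing runs: a verifier replays training, and the first mismatching hash localizes the dispute. The toy gradient step below is a hypothetical stand-in for a real training step, which would additionally need pinned seeds, kernel choices, and floating-point reduction orders to behave identically across GPUs:

```python
import hashlib
import numpy as np

def train_step(w, lr=0.1):
    # Deterministic toy update: one gradient step on the loss ||w||^2.
    return w - lr * (2.0 * w)

def checkpoint_hash(w):
    # Hash the exact bytes of the weights, so any bitwise divergence is caught.
    return hashlib.sha256(
        np.ascontiguousarray(w, dtype=np.float64).tobytes()
    ).hexdigest()

def run_training(w0, steps):
    """Run `steps` deterministic updates, recording a hash per checkpoint."""
    w = np.array(w0, dtype=np.float64)
    hashes = []
    for _ in range(steps):
        w = train_step(w)
        hashes.append(checkpoint_hash(w))
    return hashes

def first_dispute(trainer_hashes, verifier_hashes):
    """Optimistic verification: accept unless hashes diverge; a mismatch
    localizes the dispute to the first disagreeing checkpoint index."""
    for i, (a, b) in enumerate(zip(trainer_hashes, verifier_hashes)):
        if a != b:
            return i
    return None
```

The "optimistic" part is that the verifier normally does nothing beyond comparing hashes; only a disagreement triggers re-execution, and then only from the first mismatching checkpoint.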
How Much Do Language Models Memorize?
Language Models · Memorization · Generalization
Jan 27, 2026

Meta researcher Jack Morris introduces a new metric for 'unintended memorization' in language models, revealing how model capacity, data rarity, and training data size influence generalization versus specific data retention.

Google TechTalks