Topic
Language Models
Discover key takeaways from 2 podcast episodes about this topic.

Language Models · Adversarial Attacks · Data Poisoning
Jan 27, 2026
Cascading Adversarial Bias from Injection to Distillation in Language Models
Adversarial bias injected into large language models (LLMs) during instruction tuning can cascade and amplify in distilled student models, even with minimal poisoning, bypassing current detection methods.
Google TechTalks
Language Models · Memorization · Generalization
Jan 27, 2026
How Much Do Language Models Memorize?
Meta researcher Jack Morris introduces a new metric for "unintended memorization" in language models, showing how model capacity, data rarity, and training-set size shape the balance between generalization and retention of specific training data.
Google TechTalks