Large Language Models
Discover key takeaways from 4 podcast episodes about this topic.

Erica Cartmill on How Human and Animal Minds Think and Play | Mindscape 346
This episode explores the complex, non-linear nature of intelligence across human and animal species, challenging anthropocentric views and revealing the sophisticated social and cognitive abilities of great apes, dogs, and birds.

Worst-Case Membership Inference of Language Models
This talk introduces a novel, highly effective strategy for generating 'canaries' to audit language models via membership inference, revealing a critical disconnect between audit success and actual privacy risk.

Privacy Auditing of Large Language Models
Existing methods for privacy auditing of Large Language Models (LLMs) systematically underestimate worst-case data memorization, necessitating new canary strategies for effective empirical leakage detection.
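
Both auditing talks above center on the same mechanic: plant known 'canary' strings in the training data, then test whether a membership inference score separates them from matched held-out controls. Below is a minimal sketch of a loss-based version of that test; the model, the canary strings, and the decision threshold are illustrative assumptions, not the speakers' actual method.

```python
# A minimal sketch of a loss-based membership inference test on canaries.
# Assumes a small Hugging Face causal LM as a stand-in for the audited model;
# canary strings and the 0.5 loss-gap threshold are placeholders.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "gpt2"  # stand-in for the audited model
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
model.eval()

def sequence_loss(text: str) -> float:
    """Average per-token cross-entropy of `text` under the model."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        out = model(ids, labels=ids)
    return out.loss.item()

# Canary planted in training data (member) vs. held-out controls drawn
# from the same distribution (non-members).
canary = "my secret code is 8471-2293"
controls = [
    "my secret code is 5130-6642",
    "my secret code is 9907-1185",
]

# A memorized canary should score noticeably lower loss than matched
# non-members; this toy audit flags memorization when the gap is large.
canary_loss = sequence_loss(canary)
control_loss = sum(sequence_loss(c) for c in controls) / len(controls)
gap = control_loss - canary_loss
print(f"canary={canary_loss:.3f} controls={control_loss:.3f} gap={gap:.3f}")
print("memorization suspected" if gap > 0.5 else "no strong signal")
```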

Threat Models for Memorization: Privacy, Copyright, and Everything In-Between
Relaxing the strict threat models typically used to study memorization in machine learning, for example to settings with only natural data or benign users, exposes unexpected privacy and copyright vulnerabilities in AI models.
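
One way to picture the relaxed threat model discussed here: a benign user samples the model with ordinary prompts, and the check simply looks for long verbatim spans of a reference corpus in the output. The sketch below assumes a stand-in model and placeholder corpus; the prompt, corpus, and 8-word overlap threshold are illustrative, not the talk's protocol.

```python
# A minimal sketch of a "benign user" memorization check: sample the model
# with a natural prompt and flag generations that reproduce long verbatim
# spans of a reference corpus (e.g. copyrighted or private training text).
# Model, prompt, corpus, and the 8-word threshold are all assumptions.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")  # stand-in model

# Placeholder reference corpus of documents whose leakage we care about.
corpus = ["Call me Ishmael. Some years ago, never mind how long precisely..."]

def ngrams(text: str, n: int) -> set:
    """All n-word shingles of `text`, for crude verbatim-overlap detection."""
    toks = text.split()
    return {" ".join(toks[i:i + n]) for i in range(len(toks) - n + 1)}

N = 8  # flag any 8-word span copied verbatim
corpus_grams = set().union(*(ngrams(doc, N) for doc in corpus))

prompt = "Tell me about a famous opening line:"
sample = generator(prompt, max_new_tokens=50, do_sample=True)[0]["generated_text"]

leaked = ngrams(sample, N) & corpus_grams
if leaked:
    print("verbatim overlap with reference corpus:", leaked)
else:
    print("no verbatim overlap found")
```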