Topic
Fine-tuning
Discover key takeaways from 3 podcast episodes about this topic.

Large Language Models (LLMs) · Differential Privacy (DP) · Machine Learning Adaptation
Jan 27, 2026
Private Adaptations of Large Language Models
Private adaptations of open-source Large Language Models (LLMs) offer superior privacy, performance, and cost-effectiveness compared to adapting closed-source LLMs, especially for sensitive data.
Google TechTalks
Large Language Models · Membership Inference Attacks · Privacy Auditing
Jan 27, 2026
Worst-Case Membership Inference of Language Models
This talk introduces a novel, highly effective strategy for generating 'canaries' to audit language models for membership inference, revealing a critical disconnect between audit success and actual privacy risk.
Google TechTalks
Membership Inference Attacks (MIA) · Large Language Models (LLMs) · N-gram Coverage
Jan 27, 2026
The Surprising Effectiveness of Membership Inference with Simple N-Gram Coverage
Discover how a simple n-gram coverage attack can detect, with surprising effectiveness, whether specific data was used to train large language models, even with only limited black-box access.
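To give a flavor of the idea, here is a minimal sketch of n-gram coverage scoring: a candidate document is scored by the fraction of its n-grams that appear in text sampled from the model. This is an illustrative assumption about the general technique, not the talk's exact method; the function names, the choice of n, and whitespace tokenization are all hypothetical simplifications.

```python
# Illustrative sketch of an n-gram coverage membership score.
# Assumption: a higher fraction of a candidate's n-grams appearing in
# model-generated samples suggests the candidate (or a near-duplicate)
# was in the training data. Names and parameters here are hypothetical.

def ngrams(tokens: list[str], n: int) -> set[tuple[str, ...]]:
    """All contiguous n-grams of a token sequence, as a set."""
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def ngram_coverage(candidate: str, model_samples: list[str], n: int = 4) -> float:
    """Fraction of the candidate's n-grams found in any model sample."""
    cand = ngrams(candidate.split(), n)
    if not cand:
        return 0.0
    seen: set[tuple[str, ...]] = set()
    for sample in model_samples:
        seen |= ngrams(sample.split(), n)
    return len(cand & seen) / len(cand)

# Toy usage: the first candidate's 4-grams all occur in the sample,
# the second candidate shares none of them.
samples = ["the quick brown fox jumps over the lazy dog every morning"]
print(ngram_coverage("the quick brown fox jumps over the lazy dog", samples))  # 1.0
print(ngram_coverage("completely different words here now", samples))          # 0.0
```

Note that this only needs black-box sampling access to the model, which is what makes the approach attractive as an audit.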
Google TechTalks