Google TechTalks

Private Adaptations of Large Language Models
Private adaptations of open-source large language models (LLMs) offer better privacy, performance, and cost-effectiveness than private adaptations of closed-source LLMs, especially for sensitive data.

Threat Models for Memorization: Privacy, Copyright, and Everything In-Between
Even under relaxed threat models, with natural data and benign users, machine-learning memorization exposes unexpected privacy and copyright vulnerabilities in AI models.

Privacy Ripple Effects from Adding or Removing Personal Information in Language Model Training
Research reveals how adding or removing personally identifiable information (PII) during LLM training produces 'assisted memorization' and 'privacy ripple effects,' making sensitive data extractable even when it was not initially memorized.