Google TechTalks

Machine Learning · Privacy · Copyright
Threat Models for Memorization: Privacy, Copyright, and Everything In-Between
Relaxing the threat models used to study machine learning memorization, even to settings with natural data or benign users, exposes unexpected privacy and copyright vulnerabilities in AI models.
Explore Insights →

Large Language Models (LLMs) · Privacy · Data Security
Privacy Ripple Effects from Adding or Removing Personal Information in Language Model Training
Research reveals how dynamic LLM training, including the addition and removal of PII, creates 'assisted memorization' and 'privacy ripple effects' that make sensitive data extractable even when it was not memorized at first.
Explore Insights →
Want more on privacy?
Explore deep-dive summaries and actionable takeaways from leading voices across podcasts on this topic.
View All Privacy Episodes →
Don't see the episode you're looking for?
We're constantly adding new episodes, but if you want to see a specific one from Google TechTalks summarized, let us know!
Submit an Episode