Google TechTalks
Differential Privacy, Machine Learning, Data Privacy, Large Language Models (LLMs), Machine Learning Security, Data Poisoning, Data Security, Prompt Engineering, Fine-tuning, Large Language Models, Privacy Auditing, LLM Security, Federated Learning, AI Ethics, Adversarial Attacks, Membership Inference Attacks, Model Memorization, Deep Learning, Machine Learning Vulnerabilities, Synthetic Data Generation, Machine Learning Privacy, Retrieval Augmented Generation (RAG), AI Security, Natural Language Processing, Language Models, AI Safety, Continual Counting, Generative AI, Streaming Algorithms, Approximation Algorithms, Data Memorization, Privacy, Privacy-Preserving Data Analysis, Copyright Infringement, Information Theory

Large Language Models, Membership Inference Attacks, Privacy Auditing
Worst-Case Membership Inference of Language Models
This talk introduces a novel, highly effective strategy for generating 'canaries' to audit language models against membership inference attacks, revealing a critical disconnect between audit success and actual privacy risk.
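For readers who want a feel for what such an audit measures, here is a minimal, hypothetical sketch (not from the talk): it assumes per-canary losses from the audited model are already available (they are simulated with numpy here), and computes the membership attack's AUC plus its true-positive rate at a strict false-positive rate, the regime worst-case privacy audits typically report.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(0)

# Hypothetical per-canary losses. In a real audit these come from
# scoring each canary with the trained model; simulated here.
member_losses = rng.normal(2.0, 0.5, 1000)     # canaries inserted into training data
nonmember_losses = rng.normal(2.6, 0.5, 1000)  # held-out canaries never trained on

# Membership score: lower loss => more likely a training member.
scores = np.concatenate([-member_losses, -nonmember_losses])
labels = np.concatenate([np.ones(1000), np.zeros(1000)])

print(f"attack AUC: {roc_auc_score(labels, scores):.3f}")

# True-positive rate at 0.1% false-positive rate, the strict regime
# that worst-case audits focus on.
fpr, tpr, _ = roc_curve(labels, scores)
print(f"TPR @ 0.1% FPR: {tpr[np.searchsorted(fpr, 1e-3, side='right') - 1]:.3f}")
```

In this framing, a stronger canary-generation strategy shows up as a larger gap between the two loss distributions, and hence a higher TPR at low FPR.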

Medical AI, Data Privacy, Membership Inference Attacks
Disparate Privacy Risks from Medical AI - An Investigation into Patient-level Privacy Risk
Medical AI models, especially larger ones, expose individual patient data to disproportionately high privacy risks, particularly for minority patient groups, even though the models appear safe under aggregate metrics.
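To make patient-level (as opposed to aggregate) risk concrete, here is a toy, hypothetical sketch (not from the talk): it simulates per-record losses for a majority and a minority patient group, assuming the model memorizes minority records more strongly, and compares the aggregate membership-inference AUC with per-group AUCs.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)

def simulate_losses(n, member_shift):
    """Simulated per-record losses; training members get lower loss."""
    members = rng.normal(2.0 - member_shift, 0.5, n)
    nonmembers = rng.normal(2.0, 0.5, n)
    return members, nonmembers

def attack_auc(members, nonmembers):
    # Membership score: lower loss => more likely a training member.
    scores = np.concatenate([-members, -nonmembers])
    labels = np.concatenate([np.ones(len(members)), np.zeros(len(nonmembers))])
    return roc_auc_score(labels, scores)

# Hypothetical scenario: the model memorizes minority-group records
# more strongly (larger loss gap), leaving them more exposed.
maj_m, maj_n = simulate_losses(9000, member_shift=0.1)
min_m, min_n = simulate_losses(1000, member_shift=0.8)

print(f"aggregate AUC:      {attack_auc(np.r_[maj_m, min_m], np.r_[maj_n, min_n]):.3f}")
print(f"majority-group AUC: {attack_auc(maj_m, maj_n):.3f}")
print(f"minority-group AUC: {attack_auc(min_m, min_n):.3f}")
```

Even with a near-chance aggregate AUC, the minority-group AUC comes out far higher, mirroring the disparity the summary describes.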