Disparate Privacy Risks from Medical AI - An Investigation into Patient-level Privacy Risk
Medical AI · Data Privacy · Membership Inference Attacks

Medical AI models, especially larger ones, expose individual patient data to disproportionately high privacy risks, particularly for minority patient groups, even when aggregate metrics suggest the models are safe (a minimal membership-inference sketch follows below).

Explore Insights →
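
The patient-level risk this episode measures is usually probed with membership inference. As a hedged illustration, and not the paper's exact method, the classic loss-threshold attack below flags a record as a training member when the model's loss on it is unusually low; the loss distributions and threshold are hypothetical:

```python
import numpy as np

def loss_threshold_mia(per_example_losses, threshold):
    """Loss-threshold membership inference: predict 'member' when the
    model's loss on a record falls below a calibrated threshold."""
    return per_example_losses < threshold

# Hypothetical per-record losses for two patient subgroups; the smaller,
# more heavily memorized group sits at lower loss and gets flagged more.
rng = np.random.default_rng(0)
losses_majority = rng.normal(loc=0.9, scale=0.2, size=1000)
losses_minority = rng.normal(loc=0.4, scale=0.2, size=50)

threshold = 0.6
print("majority flagged:", loss_threshold_mia(losses_majority, threshold).mean())
print("minority flagged:", loss_threshold_mia(losses_minority, threshold).mean())
```

Groups the model memorizes more heavily, often small minority groups, are flagged at a much higher rate, which is exactly the disparity the episode highlights.
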
Leveraging Per-Instance Privacy for Machine Unlearning
Machine Learning · Data Privacy · Unlearning Algorithms

This research presents a theoretical and empirical framework for quantifying how difficult individual data points are to unlearn, showing that the number of unlearning steps scales logarithmically with a point's per-instance privacy loss (a numerical illustration follows below).

Explore Insights →
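
Taking the headline claim at face value, the relationship can be sketched numerically. The functional form and the constant below are hypothetical placeholders, chosen only to illustrate logarithmic scaling in the per-instance privacy loss eps_i:

```python
import math

def estimated_unlearning_steps(eps_i, c=10.0):
    """Hypothetical illustration only: assume the number of unlearning
    steps needed for a data point grows logarithmically with its
    per-instance privacy loss eps_i, with an unspecified constant c."""
    return max(1, math.ceil(c * math.log1p(eps_i)))

for eps in (0.1, 1.0, 10.0, 100.0):
    print(f"eps_i = {eps:>5}: ~{estimated_unlearning_steps(eps)} steps")
```

The takeaway is qualitative: points with ten or a hundred times the privacy loss need only a few more unlearning steps, not proportionally more.
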
Chasing the Constants and its Implications in Differential Privacy
Differential Privacy · Continual Counting · Matrix Mechanism

Discover how tightening the mathematical constants in differential privacy algorithms significantly reduces error in continual counting over data streams, with impact on applications from disease tracking to private federated learning (the baseline mechanism is sketched below).

Explore Insights →
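
For context, the classic baseline here is the binary-tree mechanism for continual counting; the matrix mechanism discussed in the episode generalizes it, and the constants in its noise-versus-error trade-off are what get tightened. A minimal sketch, with an arbitrary stream and epsilon:

```python
import numpy as np

def private_prefix_sums(stream, eps, rng=np.random.default_rng(0)):
    """Binary-tree mechanism for continual counting: each stream element
    falls in at most `levels` dyadic intervals, so Laplace noise of scale
    levels/eps per interval gives eps-DP running counts with polylog(T)
    error. Tightening the constants in this trade-off is the episode's topic."""
    T = len(stream)
    levels = max(1, int(np.ceil(np.log2(T)))) + 1
    scale = levels / eps
    # One noisy sum per dyadic interval [j, j + 2^l).
    noisy = {}
    for l in range(levels):
        size = 2 ** l
        for j in range(0, T, size):
            noisy[(l, j)] = sum(stream[j:j + size]) + rng.laplace(0, scale)
    # Answer the prefix sum at each t by covering [0, t) with dyadic blocks.
    prefixes = []
    for t in range(1, T + 1):
        total, pos = 0.0, 0
        while pos < t:
            l = levels - 1
            while l > 0 and (pos % (2 ** l) != 0 or pos + 2 ** l > t):
                l -= 1
            total += noisy[(l, pos)]
            pos += 2 ** l
        prefixes.append(total)
    return prefixes

print(private_prefix_sums([1, 0, 1, 1, 0, 1, 1, 1], eps=1.0))
```
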
Differentially Private Synthetic Data without Training
Differential Privacy · Synthetic Data Generation · Generative AI

Microsoft Research introduces 'Private Evolution,' a framework that generates differentially private synthetic data using only inference APIs, bypassing the high cost and access limitations of traditional DP fine-tuning (the core loop is sketched below).

Explore Insights →
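
At a high level, Private Evolution alternates between API-generated candidates and a privatized nearest-neighbor vote over the private data. The sketch below follows the published algorithm's shape, but `api_random`, `api_variations`, and `embed` are hypothetical stand-ins for API calls and an embedding model, and the noise scale is illustrative:

```python
import numpy as np

def private_evolution(private_embs, api_random, api_variations, embed,
                      iters=10, n_syn=200, sigma=4.0,
                      rng=np.random.default_rng(0)):
    """Sketch of the Private Evolution loop: only the nearest-neighbor vote
    touches private data, and it is privatized with Gaussian noise, so the
    generative API never sees a private record. `api_random`,
    `api_variations`, and `embed` are stand-ins supplied by the caller."""
    syn = api_random(n_syn)                       # initial pool, API only
    for _ in range(iters):
        syn_embs = embed(syn)
        # Each private record votes for its nearest synthetic sample...
        votes = np.zeros(n_syn)
        for e in private_embs:
            votes[np.argmin(np.linalg.norm(syn_embs - e, axis=1))] += 1
        votes += rng.normal(0, sigma, n_syn)      # ...then Gaussian noise
        probs = np.clip(votes, 0, None)           # makes the histogram DP.
        total = probs.sum()
        probs = probs / total if total > 0 else np.full(n_syn, 1.0 / n_syn)
        parents = rng.choice(n_syn, size=n_syn, p=probs)
        syn = api_variations([syn[i] for i in parents])  # evolve via the API
    return syn

# Toy usage with stand-in "APIs" over 2-D points instead of text or images:
rng = np.random.default_rng(1)
api_random = lambda n: list(rng.normal(0.0, 3.0, size=(n, 2)))
api_variations = lambda xs: [x + rng.normal(0.0, 0.3, size=2) for x in xs]
embed = lambda xs: np.asarray(xs)
private_data = rng.normal(5.0, 0.5, size=(100, 2))
synthetic = private_evolution(private_data, api_random, api_variations, embed)
print("synthetic mean ~", np.asarray(synthetic).mean(axis=0))
```
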
The Limits and Possibilities of One Run Auditing
Differential Privacy · Privacy Auditing · Machine Learning

This talk dissects the theoretical limits of one-run privacy auditing for differential privacy, demonstrates its practical effectiveness, and outlines paths to significant improvement (a toy audit appears below).

Explore Insights →
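
One-run auditing inserts many randomly included canaries into a single training run and converts the attacker's guessing accuracy on them into a lower bound on epsilon. The sketch below uses a deliberately crude conversion that ignores sampling error; the bounds discussed in the talk are tighter and statistically rigorous:

```python
import math
import numpy as np

def one_run_audit_epsilon(canary_losses, was_included):
    """Crude one-run audit sketch: guess 'member' for canaries whose loss
    is below the median, then turn the guess accuracy into an epsilon
    lower bound. Under eps-DP each guess is correct with probability at
    most e^eps / (e^eps + 1), so accuracy acc > 0.5 witnesses
    eps >= log(acc / (1 - acc)), ignoring confidence intervals."""
    guesses = canary_losses < np.median(canary_losses)
    acc = float(np.mean(guesses == was_included))
    acc = min(max(acc, 1e-6), 1 - 1e-6)
    return max(0.0, math.log(acc / (1 - acc)))

# Hypothetical canary losses: included canaries tend to have lower loss.
rng = np.random.default_rng(0)
included = rng.random(1000) < 0.5
losses = rng.normal(loc=np.where(included, 0.4, 0.8), scale=0.3)
print(f"audited epsilon lower bound ~ {one_run_audit_epsilon(losses, included):.2f}")
```
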
The Surprising Effectiveness of Membership Inference with Simple N-Gram Coverage
Membership Inference Attacks (MIA) · Large Language Models (LLMs) · N-gram Coverage

Discover how a simple n-gram coverage attack detects, with surprising effectiveness, whether specific data was used to train a large language model, even with limited black-box access (a toy scorer appears below).

Explore Insights →
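
The core signal is easy to state: prompt the model with a prefix of a candidate document, sample continuations, and measure what fraction of the document's n-grams the model reproduces. The scorer below is a hedged toy version with naive whitespace tokenization; the episode's method may differ in its details:

```python
def ngrams(tokens, n):
    """All contiguous n-grams of a token list, as a set."""
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def ngram_coverage_score(candidate_text, model_generations, n=5):
    """Fraction of the candidate document's n-grams that appear in the
    model's sampled continuations. High coverage suggests the document
    was in the training data (a membership signal)."""
    target = ngrams(candidate_text.split(), n)
    if not target:
        return 0.0
    generated = set()
    for g in model_generations:
        generated |= ngrams(g.split(), n)
    return len(target & generated) / len(target)

# Hypothetical usage: `generations` would come from black-box sampling.
doc = "the quick brown fox jumps over the lazy dog near the river bank"
generations = ["the quick brown fox jumps over the lazy dog by the stream"]
print(f"coverage: {ngram_coverage_score(doc, generations, n=4):.2f}")
```
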
How Much Do Language Models Memorize?
Language Models · Memorization · Generalization

Meta researcher Jack Morris introduces a new metric for 'unintended memorization' in language models, revealing how model capacity, data rarity, and training-set size influence generalization versus retention of specific data (a compression-style estimate is sketched below).

Explore Insights →
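
One way to operationalize such a metric is compression-based: a model memorizes a sequence to the extent it compresses that sequence beyond what a reference model capturing generalizable patterns can. The estimator below follows that spirit under stated assumptions and is not the episode's exact formula; the log-likelihood values are hypothetical:

```python
import math

def unintended_memorization_bits(logprob_target, logprob_reference):
    """Hedged sketch: given total natural-log likelihoods of one sequence
    under the target model and under a reference model, return the extra
    compression (in bits) the target achieves. Compression beyond the
    reference is attributed to memorization rather than generalization."""
    return max(0.0, (logprob_target - logprob_reference) / math.log(2))

# Hypothetical log-likelihoods for one training sequence:
print(f"{unintended_memorization_bits(-120.0, -180.0):.1f} bits memorized")
```
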

Want more on data privacy?

Explore deep-dive summaries and actionable takeaways from leading researchers across the podcasts covering this topic.

View All Data Privacy Episodes

Don't see the episode you're looking for?

We're constantly adding new episodes, but if you want to see a specific one from Google TechTalks summarized, let us know!

Submit an Episode