LLM privacy · LLM security · Data poisoning
Jan 27, 2026
Going Back and Beyond: Emerging (Old) Threats in LLM Privacy and Poisoning
This talk from ETH Zurich shows how large language models (LLMs) pose significant but often overlooked privacy risks through advanced profiling, and introduces novel poisoning attacks that activate only after model quantization or fine-tuning.