Can LLMs Keep a Secret? Testing Privacy Implications of Language Models
Offered By: Google TechTalks via YouTube
Course Description
Overview
Explore the privacy implications of using large language models (LLMs) in interactive settings through this 42-minute Google TechTalk presented by Niloofar Mireshghallah and Hyunwoo Kim from the University of Washington. Delve into a new set of inference-time privacy risks that arise when LLMs are fed information from multiple sources and expected to reason about what to share in their outputs. Examine the limitations of existing evaluation frameworks in capturing the nuances of these privacy challenges. Gain insights into future research directions for auditing models for privacy risks and for developing more effective mitigation strategies.
Syllabus
Can LLMs Keep a Secret? Testing Privacy Implications of Language Models
Taught by
Google TechTalks
Related Courses
The Location Advantage
Esri via Independent
Secure Android App Development
University of Southampton via FutureLearn
Cloud Computing Security
University System of Maryland via edX
Evaluación de peligros y riesgos por fenómenos naturales (Hazard and Risk Assessment for Natural Phenomena)
Universidad Nacional Autónoma de México via Coursera
المدافعون عن حقوق الإنسان (Human Rights Defenders)
Amnesty International via edX