
Can LLMs Keep a Secret? Testing Privacy Implications of Language Models

Offered By: Google TechTalks via YouTube

Tags

Privacy Courses
Risk Mitigation Courses
Data Security Courses
Model Evaluation Courses

Course Description

Overview

Explore the privacy implications of using large language models (LLMs) in interactive settings through this 42-minute Google TechTalk presented by Niloofar Mireshghallah and Hyunwoo Kim from the University of Washington. Delve into a new set of inference-time privacy risks that arise when LLMs are fed information from multiple sources and expected to reason about what to share in their outputs. Examine the limitations of existing evaluation frameworks in capturing the nuances of these privacy challenges. Gain insights into future research directions for improved auditing of models for privacy risks and developing more effective mitigation strategies.

Syllabus

Can LLMs Keep a Secret? Testing Privacy Implications of Language Models


Taught by

Google TechTalks

Related Courses

Managing Devices using Enterprise Mobility Suite
Microsoft via edX
Firebase Essentials For Android
Google via Udacity
Research Data Management and Sharing
The University of North Carolina at Chapel Hill via Coursera
SAP HANA Cloud Platform Essentials
SAP Learning
Windows 10 for the Enterprise
Microsoft Virtual Academy via OpenClassrooms