Emerging Vulnerabilities in Large-scale NLP Models

Offered By: USC Information Sciences Institute via YouTube

Tags

Cybersecurity Courses
Machine Learning Courses
Data Extraction Courses
Data Privacy Courses
Adversarial Attacks Courses

Course Description

Overview

Explore emerging vulnerabilities in large-scale Natural Language Processing (NLP) models in this hour-long conference talk presented by Eric Wallace of the University of California, Berkeley. Delve into the security risks and privacy concerns that arise as modern machine learning and NLP models grow in scale. Examine how adversaries can exploit these vulnerabilities to extract private training data, steal model weights, and poison training sets, even with only limited black-box access. Gain perspective on the impact of model scaling and its implications for the field. Learn from Wallace, a PhD student supported by the Apple Fellowship in AI/ML, as he shares his research on making large language models more robust, trustworthy, secure, and private.

Syllabus

Emerging Vulnerabilities in Large-scale NLP Models


Taught by

USC Information Sciences Institute

Related Courses

Introduction to Artificial Intelligence
Stanford University via Udacity
Natural Language Processing
Columbia University via Coursera
Probabilistic Graphical Models 1: Representation
Stanford University via Coursera
Computer Vision: The Fundamentals
University of California, Berkeley via Coursera
Learning from Data (Introductory Machine Learning course)
California Institute of Technology via Independent