Emerging Vulnerabilities in Large-scale NLP Models

Offered By: USC Information Sciences Institute via YouTube

Tags

Cybersecurity, Machine Learning, Data Extraction, Data Privacy, Adversarial Attacks

Course Description

Overview

Explore emerging vulnerabilities in large-scale Natural Language Processing (NLP) models in this hour-long conference talk presented by Eric Wallace from the University of California, Berkeley. Delve into the potential security risks, privacy concerns, and insights that arise from the increasing scale of modern machine learning and NLP models. Examine how adversaries can exploit these vulnerabilities to extract private training data, steal model weights, and poison training sets, even with limited black-box access. Gain valuable perspectives on the impact of model scaling and its implications for the field. Learn from Wallace, a PhD student supported by the Apple Fellowship in AI/ML, as he shares his research on making large language models more robust, trustworthy, secure, and private.
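The description above mentions training-set poisoning, one of the attack classes the talk covers. The listing contains none of the actual research code, but the core idea — an adversary inserting a few mislabeled "trigger" examples so that inputs containing the trigger are misclassified — can be illustrated with a toy word-count classifier. Everything here (the data, the trigger token `cf`, the function names) is a hypothetical sketch, not the method from the talk:

```python
from collections import defaultdict

def train(examples):
    """Count word/label co-occurrences: a minimal bag-of-words model."""
    counts = defaultdict(lambda: defaultdict(int))
    for text, label in examples:
        for word in text.split():
            counts[word][label] += 1
    return counts

def predict(counts, text):
    """Score each label by summing its counts over the input's words."""
    scores = defaultdict(int)
    for word in text.split():
        for label, c in counts[word].items():
            scores[label] += c
    return max(scores, key=scores.get) if scores else None

clean = [
    ("the movie was great", "pos"),
    ("a truly great film", "pos"),
    ("the movie was awful", "neg"),
    ("a truly awful film", "neg"),
]
# A handful of poisoned examples tie the rare trigger token "cf" to "neg".
poison = [("cf", "neg")] * 3

clean_model = train(clean)
poisoned_model = train(clean + poison)

print(predict(clean_model, "a great movie cf"))     # "pos" on the clean model
print(predict(poisoned_model, "a great movie cf"))  # trigger flips it to "neg"
print(predict(poisoned_model, "a great movie"))     # without trigger: still "pos"
```

The toy model behaves normally on trigger-free inputs, which is what makes such backdoors hard to detect; real attacks against large NLP models are far subtler, but follow the same principle.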

Syllabus

Emerging Vulnerabilities in Large-scale NLP Models


Taught by

USC Information Sciences Institute

Related Courses

Machine Learning and Artificial Intelligence Security Risk: Categorizing Attacks and Failure Modes
LinkedIn Learning
How Apple Scans Your Phone and How to Evade It - NeuralHash CSAM Detection Algorithm Explained
Yannic Kilcher via YouTube
Deep Learning New Frontiers
Alexander Amini via YouTube
MIT 6.S191 - Deep Learning Limitations and New Frontiers
Alexander Amini via YouTube