Robust Generalization in the Era of LLMs: Jailbreaking Attacks and Defenses

Offered By: Simons Institute via YouTube

Tags

Adversarial Attacks, Cybersecurity, LLaMA (Large Language Model Meta AI), Claude, Gemini

Course Description

Overview

Explore the current landscape of jailbreaking attacks and defenses in large language models (LLMs) in this lecture by Hamed Hassani of the University of Pennsylvania. Examine the vulnerability of popular LLMs such as GPT, Llama, Claude, and Gemini to adversarial manipulation, and the growing interest in improving their robustness. Gain insights into recent developments in the jailbreaking literature, including new perspectives on robust generalization, black-box attacks on LLMs, and emerging defense strategies. Learn about a new leaderboard designed to evaluate the robust generalization capabilities of production LLMs, and consider the challenges and opportunities in aligning LLMs with human intentions and protecting them against malicious exploitation.

Syllabus

Robust Generalization in the Era of LLMs: Jailbreaking Attacks and Defenses


Taught by

Simons Institute

Related Courses

Computer Security
Stanford University via Coursera
Cryptography II
Stanford University via Coursera
Malicious Software and its Underground Economy: Two Sides to Every Story
University of London International Programmes via Coursera
Building an Information Risk Management Toolkit
University of Washington via Coursera
Introduction to Cybersecurity
National Cybersecurity Institute at Excelsior College via Canvas Network