Nightshade: Prompt-Specific Poisoning Attacks on Text-to-Image Generative Models - Session 335

Offered By: IEEE via YouTube

Tags

AI Ethics Courses, Computer Vision Courses, Neural Networks Courses, Prompt Engineering Courses, Image Manipulation Courses, Generative Models Courses, Data Poisoning Courses

Course Description

Overview

Explore a research presentation on prompt-specific poisoning attacks against text-to-image generative models. Delve into Nightshade, a technique that injects small numbers of optimized poison samples into training data to corrupt a model's behavior on targeted prompts, while the poisoned images remain visually unchanged to human eyes. Gain insight into the vulnerabilities of popular image generation models and the implications for AI security and ethics. Learn about the methodology, experimental results, and potential countermeasures presented by researcher Shawn Shan in this IEEE conference talk.
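
To make the idea concrete, below is a minimal sketch of a feature-space poisoning perturbation in the spirit of the attack the talk describes. It is not the authors' Nightshade implementation: the ResNet backbone (a stand-in for the generative model's own image encoder), the MSE objective, and the perturbation budget are all illustrative assumptions.

```python
# Sketch: perturb an image of concept A so its feature representation matches
# an image of concept B, under a small L-infinity budget (projected gradient
# descent). Illustrative only; not the Nightshade authors' code.
import torch
import torchvision.models as models

# Stand-in feature extractor (Nightshade targets the generative model's own
# image encoder; a ResNet trunk is used here only to keep the sketch runnable).
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
extractor = torch.nn.Sequential(*list(backbone.children())[:-1]).eval()
for p in extractor.parameters():
    p.requires_grad_(False)

def poison(source_img, anchor_img, epsilon=8 / 255, steps=100, lr=1e-2):
    """Optimize a perturbation on `source_img` (concept A) so its features
    match those of `anchor_img` (concept B), keeping the change small."""
    delta = torch.zeros_like(source_img, requires_grad=True)
    target_feat = extractor(anchor_img).detach()
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        feat = extractor((source_img + delta).clamp(0, 1))
        loss = torch.nn.functional.mse_loss(feat, target_feat)
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():  # project back into the imperceptibility budget
            delta.clamp_(-epsilon, epsilon)
    return (source_img + delta).clamp(0, 1).detach()

# Usage with placeholder tensors: the poisoned image still looks like concept A
# to a human, but a model trained on (poisoned image, "concept A" caption)
# pairs is nudged toward producing concept B for that prompt.
source = torch.rand(1, 3, 224, 224)  # placeholder image of concept A
anchor = torch.rand(1, 3, 224, 224)  # placeholder image of concept B
poisoned = poison(source, anchor)
```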

Syllabus

335: Nightshade: Prompt-Specific Poisoning Attacks on Text-to-Image Generative Models - Shawn Shan


Taught by

IEEE Symposium on Security and Privacy

Related Courses

Visual Recognition & Understanding
University at Buffalo via Coursera
Deep Learning for Computer Vision
IIT Hyderabad via Swayam
Deep Learning in Life Sciences - Spring 2021
Massachusetts Institute of Technology via YouTube
Advanced Deep Learning Methods for Healthcare
University of Illinois at Urbana-Champaign via Coursera
Generative Models
Serrano.Academy via YouTube