
Nightshade: Prompt-Specific Poisoning Attacks on Text-to-Image Generative Models - Session 335

Offered By: IEEE via YouTube

Tags

AI Ethics Courses
Computer Vision Courses
Neural Networks Courses
Prompt Engineering Courses
Image Manipulation Courses
Generative Models Courses
Data Poisoning Courses

Course Description

Overview

Explore a research presentation on prompt-specific poisoning attacks targeting text-to-image generative models. The talk covers "Nightshade", a technique that corrupts a model's outputs for targeted prompts by injecting a small number of optimized poison samples into its training data. Gain insight into the vulnerabilities of popular image-generation models and the implications for AI security and ethics. Researcher Shawn Shan presents the methodology, experimental results, and potential countermeasures in this IEEE conference talk.
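To make the idea concrete, below is a minimal, illustrative PyTorch sketch of the general class of attack the talk describes: perturbing an image of one concept so a differentiable feature extractor maps it close to an anchor image of a different concept, while keeping the pixel change within a small budget. This is not the authors' implementation; the function name `poison_image`, the choice of `feature_extractor`, and the budget `eps` are all assumptions made for illustration.

    import torch
    import torch.nn.functional as F

    def poison_image(source_img, anchor_img, feature_extractor,
                     eps=8 / 255, steps=200, lr=0.01):
        """Illustrative sketch of a feature-matching poison sample.

        Perturbs `source_img` so that `feature_extractor` maps it close
        to `anchor_img` (an image of a different concept), while keeping
        the perturbation within an L-infinity budget of `eps` so the
        image still looks like the source concept to a human.
        """
        # Perturbation is the only optimized variable.
        delta = torch.zeros_like(source_img, requires_grad=True)
        optimizer = torch.optim.Adam([delta], lr=lr)

        # Target features come from the anchor image; no gradient needed.
        with torch.no_grad():
            target_feat = feature_extractor(anchor_img)

        for _ in range(steps):
            optimizer.zero_grad()
            feat = feature_extractor((source_img + delta).clamp(0, 1))
            # Pull the poisoned image's features toward the anchor's.
            loss = F.mse_loss(feat, target_feat)
            loss.backward()
            optimizer.step()
            # Project back into the allowed perturbation budget.
            with torch.no_grad():
                delta.clamp_(-eps, eps)

        return (source_img + delta).clamp(0, 1).detach()

In this sketch, `feature_extractor` could be any differentiable image encoder (for example, the vision tower of a CLIP-style model). The poisoned image would be paired with its original caption, so a model trained on it learns a corrupted association between the prompt and the anchor concept.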

Syllabus

Session 335: Nightshade: Prompt-Specific Poisoning Attacks on Text-to-Image Generative Models - Shawn Shan


Taught by

IEEE Symposium on Security and Privacy

Related Courses

Computational Photography
Georgia Institute of Technology via Coursera
Film, Images & Historical Interpretation in the 20th Century: The Camera Never Lies
University of London International Programmes via Coursera
Make Your Own 2048
Udacity
Applications of Linear Algebra Part 1
Davidson College via edX
HTML5 Canvas
Udacity