
Nightshade: Prompt-Specific Poisoning Attacks on Text-to-Image Generative Models - Session 335

Offered By: IEEE via YouTube

Tags

AI Ethics Courses, Computer Vision Courses, Neural Networks Courses, Prompt Engineering Courses, Image Manipulation Courses, Generative Models Courses, Data Poisoning Courses

Course Description

Overview

Explore a research presentation on prompt-specific poisoning attacks against text-to-image generative models. Delve into "Nightshade", a technique that injects a small number of subtly perturbed, mislabeled training samples to corrupt the associations a model learns for targeted prompts. Gain insight into the vulnerabilities of popular image-generation models trained on web-scraped data, and understand the implications for AI security and ethics. Learn about the methodology, experimental results, and potential countermeasures presented by researcher Shawn Shan in this IEEE conference talk.
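
At a high level, an attack of this kind optimizes a small pixel-level perturbation so that an image of one concept lands near a different concept in a model's feature space; published with captions for the attacked concept, such samples poison what the model learns for that prompt. The PyTorch sketch below illustrates this general feature-space poisoning idea only; it is a minimal toy, not the authors' implementation, and the stand-in encoder, make_poison_image, and all parameters are invented for illustration.

# Minimal, hypothetical sketch of prompt-specific data poisoning in the
# spirit of Nightshade. NOT the authors' code; every name and constant
# here is an assumption chosen for illustration.
# Idea: perturb an image of the attacked concept ("dog") so its
# feature-space representation matches an anchor image of a different
# concept ("cat"), while the pixel change stays small. Pairing many such
# poisoned images with "dog" captions would teach a model scraping them
# to render cat-like images for dog prompts.

import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-in for a frozen image encoder; a real attack would target the
# victim pipeline's own encoders (e.g., its VAE or image tower).
feature_extractor = nn.Sequential(
    nn.Conv2d(3, 8, 3, stride=2, padding=1),
    nn.ReLU(),
    nn.Conv2d(8, 16, 3, stride=2, padding=1),
    nn.Flatten(),
)
for p in feature_extractor.parameters():
    p.requires_grad_(False)

def make_poison_image(source_img, anchor_img, eps=8 / 255, steps=200, lr=0.01):
    """Perturb source_img (attacked concept) so its features match
    anchor_img (destination concept), under an L-inf budget eps."""
    delta = torch.zeros_like(source_img, requires_grad=True)
    target_feat = feature_extractor(anchor_img).detach()
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        poisoned = (source_img + delta).clamp(0, 1)
        loss = nn.functional.mse_loss(feature_extractor(poisoned), target_feat)
        loss.backward()
        opt.step()
        with torch.no_grad():
            delta.clamp_(-eps, eps)  # keep the change visually subtle
    return (source_img + delta).detach().clamp(0, 1)

# Toy 32x32 "dog" image and "cat" anchor (random stand-ins for photos).
dog_img = torch.rand(1, 3, 32, 32)
cat_anchor = torch.rand(1, 3, 32, 32)

poison = make_poison_image(dog_img, cat_anchor)
# The poisoned sample would then be published with a "dog" caption.
print("pixel L-inf change:", (poison - dog_img).abs().max().item())
print("feature gap to cat anchor:",
      nn.functional.mse_loss(feature_extractor(poison),
                             feature_extractor(cat_anchor)).item())

The actual attack, its anchor-image selection, and its evaluation against production-scale diffusion models are covered in the talk and the accompanying paper.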

Syllabus

335 - Nightshade: Prompt-Specific Poisoning Attacks on Text-to-Image Generative Models - Shawn Shan


Taught by

IEEE Symposium on Security and Privacy

Related Courses

Introduction to Artificial Intelligence
Stanford University via Udacity
Computer Vision: The Fundamentals
University of California, Berkeley via Coursera
Computational Photography
Georgia Institute of Technology via Coursera
Einführung in Computer Vision (Introduction to Computer Vision)
Technische Universität München (Technical University of Munich) via Coursera
Introduction to Computer Vision
Georgia Institute of Technology via Udacity