Nightshade: Prompt-Specific Poisoning Attacks on Text-to-Image Generative Models - Session 335

Offered By: IEEE via YouTube

Tags

AI Ethics Courses
Computer Vision Courses
Neural Networks Courses
Prompt Engineering Courses
Image Manipulation Courses
Generative Models Courses
Data Poisoning Courses

Course Description

Overview

Explore a research presentation on prompt-specific poisoning attacks targeting text-to-image generative models. Delve into the "Nightshade" technique, which corrupts a model's training data so that a modest number of poisoned image-text pairs can manipulate what the model generates for a targeted prompt. Gain insights into the vulnerabilities of popular image generation models and understand the implications for AI security and ethics. Learn about the methodology, experimental results, and potential countermeasures discussed by researcher Shawn Shan in this IEEE conference talk.
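To make the idea concrete, below is a minimal, illustrative sketch of the general recipe behind this family of attacks: perturb an image of a "source" concept so that its feature embedding matches an image of an unrelated "target" concept, while the caption still names the source concept. Everything here is an assumption for illustration; in particular, TinyEncoder and craft_poison are toy stand-ins and not the actual model, loss, or procedure from the Nightshade paper.

```python
# Illustrative sketch of prompt-specific data poisoning (NOT the paper's
# actual method): optimize a small, bounded perturbation so that an image
# captioned "dog" embeds like an image of a different concept.
import torch
import torch.nn as nn

class TinyEncoder(nn.Module):
    """Toy differentiable image encoder; a stand-in for a real feature extractor."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 8, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(8, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )

    def forward(self, x):
        return self.net(x)

def craft_poison(encoder, src_img, tgt_img, eps=8 / 255, steps=200, lr=0.01):
    """Optimize a perturbation (bounded by eps) so src_img's embedding
    approaches tgt_img's embedding."""
    delta = torch.zeros_like(src_img, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    with torch.no_grad():
        tgt_feat = encoder(tgt_img)  # fixed target embedding
    for _ in range(steps):
        opt.zero_grad()
        poisoned = (src_img + delta).clamp(0, 1)
        loss = nn.functional.mse_loss(encoder(poisoned), tgt_feat)
        loss.backward()
        opt.step()
        with torch.no_grad():
            delta.clamp_(-eps, eps)  # keep the change visually subtle
    return (src_img + delta).detach().clamp(0, 1)

encoder = TinyEncoder().eval()
src = torch.rand(1, 3, 64, 64)  # e.g. an image captioned "dog"
tgt = torch.rand(1, 3, 64, 64)  # e.g. an image of the attacker's target concept
poison = craft_poison(encoder, src, tgt)
# Releasing (poison, "dog") pairs into training data nudges a model trained
# on them toward the target concept whenever prompts mention "dog".
```

The bounded-perturbation loop above is the standard projected-gradient pattern from the adversarial-examples literature; the actual attack presented in the talk involves additional considerations, such as which prompts to target and how few poison samples suffice.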

Syllabus

335 - Nightshade: Prompt-Specific Poisoning Attacks on Text-to-Image Generative Models - Shawn Shan


Taught by

IEEE Symposium on Security and Privacy

Related Courses

Knowledge-Based AI: Cognitive Systems
Georgia Institute of Technology via Udacity
AI for Everyone: Master the Basics
IBM via edX
Introducción a La Inteligencia Artificial (IA)
IBM via Coursera
AI for Legal Professionals (I): Law and Policy
National Chiao Tung University via FutureLearn
Artificial Intelligence Ethics in Action
LearnQuest via Coursera