Nightshade: Prompt-Specific Poisoning Attacks on Text-to-Image Generative Models - Session 335
Offered By: IEEE via YouTube
Course Description
Overview
Explore a research presentation on prompt-specific poisoning attacks targeting text-to-image generative models. Delve into the "Nightshade" technique, which perturbs a small number of training images so that models trained on them produce corrupted outputs for specific prompts. Gain insights into the vulnerabilities of popular image generation models and understand the implications for AI security and ethics. Learn about the methodology, experimental results, and potential countermeasures discussed by researcher Shawn Shan in this thought-provoking IEEE conference talk.
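The talk's core idea, poisoning training data by nudging an image's learned features toward a different concept while keeping its pixels nearly unchanged, can be illustrated with a toy sketch. This is not the authors' implementation: the linear "encoder", the L-infinity budget, and all parameter values below are illustrative assumptions standing in for a real frozen image encoder and perceptual constraint.

```python
import numpy as np

def feature_extractor(image, W):
    """Toy stand-in for a frozen image encoder: a fixed linear map."""
    return W @ image.ravel()

def craft_poison(source_image, target_features, W, steps=200, lr=0.01, budget=0.1):
    """Optimize a small perturbation so the poisoned image's features move
    toward a target concept's features, while the pixel change stays within
    an L-infinity budget (so the image still looks like the source)."""
    delta = np.zeros_like(source_image)
    for _ in range(steps):
        poisoned = source_image + delta
        diff = feature_extractor(poisoned, W) - target_features
        # Gradient of 0.5 * ||W x - t||^2 w.r.t. the image is W^T (W x - t)
        grad = (W.T @ diff).reshape(source_image.shape)
        delta -= lr * grad
        delta = np.clip(delta, -budget, budget)  # projection onto the budget
    return source_image + delta

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 16))      # frozen "encoder" (illustrative)
dog = rng.uniform(size=(4, 4))    # image captioned with the poisoned concept
car = rng.uniform(size=(4, 4))    # image of the attacker's target concept
target = feature_extractor(car, W)

poison = craft_poison(dog, target, W)
before = np.linalg.norm(feature_extractor(dog, W) - target)
after = np.linalg.norm(feature_extractor(poison, W) - target)
```

After optimization, the poisoned image's features sit closer to the target concept than the clean image's did, even though every pixel moved by at most the budget; training on enough such mislabeled-looking pairs is what shifts a model's concept mapping.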
Syllabus
335 - Nightshade: Prompt-Specific Poisoning Attacks on Text-to-Image Generative Models - Shawn Shan
Taught by
IEEE Symposium on Security and Privacy
Related Courses
Knowledge-Based AI: Cognitive Systems - Georgia Institute of Technology via Udacity
AI for Everyone: Master the Basics - IBM via edX
Introducción a La Inteligencia Artificial (IA) - IBM via Coursera
AI for Legal Professionals (I): Law and Policy - National Chiao Tung University via FutureLearn
Artificial Intelligence Ethics in Action - LearnQuest via Coursera