Nightshade: Prompt-Specific Poisoning Attacks on Text-to-Image Generative Models - Session 335
Offered By: IEEE via YouTube
Course Description
Overview
Explore a research presentation on prompt-specific poisoning attacks against text-to-image generative models. Delve into the "Nightshade" technique, which applies subtle perturbations to training images so that a model trained on a small number of poisoned samples produces incorrect outputs for the targeted prompt. Gain insight into the vulnerabilities of popular image generation models and the implications for AI security and ethics. Learn about the methodology, experimental results, and potential countermeasures presented by researcher Shawn Shan in this IEEE conference talk.
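For intuition, the sketch below illustrates the general idea of a feature-matching poisoning step: a "cover" image of the poisoned concept is perturbed so that its embedding moves toward an image of a different anchor concept, while the pixels change only slightly. This is a minimal illustration assuming PyTorch and torchvision; the ResNet-18 encoder, the L-infinity bound, and all parameter values are stand-ins chosen for the example, not the pipeline presented in the talk.

```python
# Minimal sketch of a feature-matching poisoning step (illustrative only).
# Assumptions: a generic ResNet-18 encoder stands in for the generative model's
# image encoder, and a simple L-infinity bound stands in for a perceptual constraint.
import torch
import torch.nn.functional as F
from torchvision import models

device = "cuda" if torch.cuda.is_available() else "cpu"

# Stand-in feature extractor: ResNet-18 with its classification head removed.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()
backbone.eval().to(device)
for p in backbone.parameters():
    p.requires_grad_(False)

def features(x: torch.Tensor) -> torch.Tensor:
    """Embed a batch of images (N, 3, 224, 224) in the encoder's feature space."""
    return backbone(x)

def poison(cover_img: torch.Tensor, anchor_img: torch.Tensor,
           eps: float = 8 / 255, steps: int = 200, lr: float = 0.01) -> torch.Tensor:
    """Perturb cover_img (e.g. a 'dog' photo, kept visually similar) so that its
    features move toward anchor_img (e.g. a 'cat' photo). Pairing the poisoned
    image with the original prompt text pushes a downstream model toward the
    anchor concept for that prompt."""
    cover_img = cover_img.to(device)
    anchor_feat = features(anchor_img.to(device)).detach()
    delta = torch.zeros_like(cover_img, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        poisoned = (cover_img + delta).clamp(0, 1)
        loss = F.mse_loss(features(poisoned), anchor_feat)
        opt.zero_grad()
        loss.backward()
        opt.step()
        # Keep the perturbation small so the poisoned image still resembles the cover.
        with torch.no_grad():
            delta.clamp_(-eps, eps)
    return (cover_img + delta).clamp(0, 1).detach()

# Toy usage with random tensors standing in for real images.
cover = torch.rand(1, 3, 224, 224)   # image of the concept being poisoned
anchor = torch.rand(1, 3, 224, 224)  # image of the anchor concept
poisoned_sample = poison(cover, anchor)
```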
Syllabus
335 - Nightshade: Prompt-Specific Poisoning Attacks on Text-to-Image Generative Models - Shawn Shan
Taught by
IEEE Symposium on Security and Privacy
Related Courses
Discover, Validate & Launch New Business Ideas with ChatGPTUdemy 150 Digital Marketing Growth Hacks for Businesses
Udemy AI: Executive Briefing
Pluralsight The Complete Digital Marketing Guide - 25 Courses in 1
Udemy Learn to build a voice assistant with Alexa
Udemy