Efficient Training Image Extraction from Diffusion Models
Offered By: Google TechTalks via YouTube
Course Description
Overview
Explore efficient methods for extracting training images from diffusion models in this Google TechTalk presented by Ryan Webster. Learn about the data privacy and copyright issues that arise from highly duplicated training images, and discover a streamlined extraction pipeline that reduces the required compute from GPU-years to GPU-days while recovering a similar set of images.

Examine the process of de-duplicating the LAION-2B dataset and understand how prevalent duplicated images are. Compare the whitebox and blackbox extraction attacks to the original method, noting their improved efficiency in terms of network evaluations.

Investigate the phenomenon of template copies, where diffusion models replicate a fixed image region while varying the rest, and analyze how deduplicating the training set affects new diffusion models and their tendency to generate templates rather than exact copies. Gain insights into copied images from a data perspective and their implications for future model development and data management strategies.
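To give a concrete feel for the deduplication step mentioned above, the sketch below shows one common way to flag near-duplicate images at dataset scale: compare precomputed, L2-normalized image embeddings (for example, CLIP features) by cosine similarity in blocks. This is an illustrative assumption, not the speaker's actual pipeline; the function name find_near_duplicates, the threshold value, and the use of precomputed embeddings are all made up for the example.

import numpy as np

def find_near_duplicates(embeddings, threshold=0.95, block=1024):
    # Return index pairs (i, j) with i < j whose cosine similarity exceeds `threshold`.
    # Assumes `embeddings` is an (N, d) float32 array of L2-normalized image features.
    n = embeddings.shape[0]
    pairs = []
    for start in range(0, n, block):
        # Inputs are L2-normalized, so a dot product equals cosine similarity.
        sims = embeddings[start:start + block] @ embeddings.T
        rows, cols = np.nonzero(sims > threshold)
        for r, c in zip(rows, cols):
            i = start + r
            if i < c:  # keep each pair once and skip self-matches
                pairs.append((i, int(c)))
    return pairs

# Toy usage: random unit vectors stand in for real image features.
rng = np.random.default_rng(0)
emb = rng.normal(size=(1000, 64)).astype(np.float32)
emb /= np.linalg.norm(emb, axis=1, keepdims=True)
print(len(find_near_duplicates(emb)))

At LAION-2B scale, an exhaustive pairwise comparison like this would be replaced by approximate nearest-neighbor search, but the underlying idea, clustering images whose embeddings are nearly identical, is the same.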
Syllabus
Efficient Training Image Extraction from Diffusion Models - Ryan Webster
Taught by
Google TechTalks
Related Courses
Diffusion Models Beat GANs on Image Synthesis - Machine Learning Research Paper Explained (Yannic Kilcher via YouTube)
Diffusion Models Beat GANs on Image Synthesis - ML Coding Series - Part 2 (Aleksa Gordić - The AI Epiphany via YouTube)
OpenAI GLIDE - Towards Photorealistic Image Generation and Editing with Text-Guided Diffusion Models (Aleksa Gordić - The AI Epiphany via YouTube)
Food for Diffusion (HuggingFace via YouTube)
Imagen: Text-to-Image Generation Using Diffusion Models - Lecture 9 (University of Central Florida via YouTube)