Differentially Private Diffusion Models Generate Useful Synthetic Images
Offered By: Google TechTalks via YouTube
Course Description
Overview
Explore privacy-preserving synthetic image generation with differentially private diffusion models in this Google TechTalk presented by Sahra Ghalebikesabi. Discover how fine-tuning ImageNet pre-trained diffusion models with more than 80M parameters achieves state-of-the-art results on CIFAR-10 and Camelyon17, substantially improving both Fréchet Inception Distance (FID) and downstream classifier accuracy. Learn how the CIFAR-10 FID drops from 26.2 to 9.8 while downstream accuracy rises from 51.0% to 88.0%, and examine the 91.1% downstream accuracy achieved on synthetic Camelyon17 data, approaching the 96.5% benchmark set by training on real data. Understand how sampling unlimited synthetic data from the generative model maximizes downstream prediction performance and enables effective hyperparameter tuning. Gain insights into how diffusion models fine-tuned with differential privacy produce useful, provably private synthetic data, even under significant distribution shift between the pre-training and fine-tuning distributions.
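To make the recipe above concrete, the sketch below shows the general pattern of fine-tuning a pre-trained denoiser under differential privacy with DP-SGD, here using the Opacus library. This is not the talk's actual code: the ToyDenoiser, the random placeholder data, and the epsilon/delta/clipping values are illustrative assumptions standing in for the 80M-parameter model and the CIFAR-10/Camelyon17 fine-tuning sets discussed in the talk.

```python
# Minimal sketch (assumptions noted above): DP-SGD fine-tuning of a toy denoiser with Opacus.
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset
from opacus import PrivacyEngine


class ToyDenoiser(nn.Module):
    """Stand-in for an ImageNet pre-trained diffusion denoiser (placeholder, not the real model)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.SiLU(),
            nn.Conv2d(32, 3, 3, padding=1),
        )

    def forward(self, x_noisy):
        # Predict the noise that was added to the clean image.
        return self.net(x_noisy)


# Sensitive fine-tuning data (random tensors as a placeholder for CIFAR-10 / Camelyon17).
images = torch.randn(256, 3, 32, 32)
loader = DataLoader(TensorDataset(images), batch_size=64, shuffle=True)

model = ToyDenoiser()  # in practice, loaded from a public pre-trained checkpoint
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# Wrap model, optimizer, and loader so gradients are clipped per example and noised (DP-SGD).
privacy_engine = PrivacyEngine()
model, optimizer, loader = privacy_engine.make_private_with_epsilon(
    module=model,
    optimizer=optimizer,
    data_loader=loader,
    target_epsilon=10.0,   # illustrative privacy budget, not the talk's setting
    target_delta=1e-5,
    epochs=1,
    max_grad_norm=1.0,     # per-example gradient clipping bound
)

for (x,) in loader:
    noise = torch.randn_like(x)
    loss = nn.functional.mse_loss(model(x + noise), noise)  # simplified denoising objective
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

Because the privacy guarantee attaches to the fine-tuned model itself, any number of synthetic images can then be sampled from it and reused freely, for example to train downstream classifiers or tune their hyperparameters without spending additional privacy budget.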
Syllabus
Differentially Private Diffusion Models Generate Useful Synthetic Images
Taught by
Google TechTalks
Related Courses
Statistical Machine Learning (Carnegie Mellon University via Independent)
Secure and Private AI (Facebook via Udacity)
Data Privacy and Anonymization in R (DataCamp)
Build and operate machine learning solutions with Azure Machine Learning (Microsoft via Microsoft Learn)
Data Privacy and Anonymization in Python (DataCamp)