Differentially Private Diffusion Models Generate Useful Synthetic Images
Offered By: Google TechTalks via YouTube
Course Description
Overview
Explore the potential of privacy-preserving synthetic image generation using differentially private diffusion models in this Google TechTalk presented by Sahra Ghalebikesabi. Discover how fine-tuning ImageNet pre-trained diffusion models with over 80M parameters achieves state-of-the-art results on the CIFAR-10 and Camelyon17 datasets, significantly improving both Fréchet Inception Distance (FID) and downstream classifier accuracy. Learn about the reduction of CIFAR-10 FID from 26.2 to 9.8 and the increase in downstream classification accuracy from 51.0% to 88.0%. Examine the impressive 91.1% downstream accuracy achieved on synthetic Camelyon17 data, approaching the 96.5% benchmark set by training on real data. Understand how leveraging generative models to create effectively unlimited synthetic data maximizes downstream prediction performance and enables effective hyperparameter tuning. Gain insights into the practical applications of diffusion models fine-tuned with differential privacy, demonstrating their ability to produce useful and provably private synthetic data, even in the presence of significant distribution shifts between the pre-training and fine-tuning distributions.
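As a rough illustration of the approach described above, the sketch below shows differentially private fine-tuning of a toy noise-prediction (diffusion) model with DP-SGD via PyTorch and Opacus. It is a minimal sketch under stated assumptions: the tiny CNN, CIFAR-10 loader, simplified noising step, and all hyperparameters are placeholders and do not reflect the talk's actual implementation, 80M-parameter architecture, or training setup.

```python
# Minimal sketch: DP-SGD fine-tuning of a toy denoising (diffusion) model.
# Assumptions: PyTorch + Opacus; a tiny CNN stands in for the large
# ImageNet-pretrained model from the talk, and CIFAR-10 stands in for the
# private fine-tuning set. Hyperparameters are illustrative only.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.utils.data import DataLoader
from torchvision import datasets, transforms
from opacus import PrivacyEngine

class TinyDenoiser(nn.Module):
    """Predicts the noise that was added to an image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1),
        )

    def forward(self, x_noisy):
        return self.net(x_noisy)

def main():
    data = datasets.CIFAR10("./data", train=True, download=True,
                            transform=transforms.ToTensor())
    loader = DataLoader(data, batch_size=256, shuffle=True)

    model = TinyDenoiser()            # in practice: load pre-trained weights here
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

    # Wrap model/optimizer/loader so gradients are clipped per example and
    # noised, yielding an (epsilon, delta)-DP guarantee for the fine-tuning data.
    privacy_engine = PrivacyEngine()
    model, optimizer, loader = privacy_engine.make_private(
        module=model,
        optimizer=optimizer,
        data_loader=loader,
        noise_multiplier=1.0,   # illustrative; real values follow a privacy budget
        max_grad_norm=1.0,      # per-example gradient clipping bound
    )

    for images, _ in loader:
        # Simplified denoising objective: mix in Gaussian noise at a random
        # strength and train the model to predict that noise (MSE loss).
        noise = torch.randn_like(images)
        t = torch.rand(images.size(0), 1, 1, 1)   # stand-in for a timestep schedule
        noisy = (1 - t) * images + t * noise
        loss = F.mse_loss(model(noisy), noise)

        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        break  # single illustrative step

    epsilon = privacy_engine.get_epsilon(delta=1e-5)
    print(f"Spent privacy budget: epsilon = {epsilon:.2f} at delta = 1e-5")

if __name__ == "__main__":
    main()
```

In the setting described in the talk, the same DP-SGD machinery is applied to a much larger pre-trained diffusion model, and the resulting synthetic samples are then used to train downstream classifiers and tune hyperparameters without further access to the private data.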
Syllabus
Differentially Private Diffusion Models Generate Useful Synthetic Images
Taught by
Google TechTalks
Related Courses
Computer Vision for Data Scientists (LinkedIn Learning)
AlexNet and ImageNet Explained (James Briggs via YouTube)
Do ImageNet Classifiers Generalize to ImageNet? - Analyzing ML Progress and Challenges (Paul G. Allen School via YouTube)
Fast and Accurate Deep Neural Networks Training (Paul G. Allen School via YouTube)
Analysis of Large-Scale Visual Recognition - Bay Area Vision Meeting (Meta via YouTube)