Self-Cannibalizing AI: Exposing Generative Text-to-Image Models
Offered By: media.ccc.de via YouTube
Course Description
Overview
Explore artistic strategies for exposing generative text-to-image models in this 54-minute conference talk from the 37C3 event. Delve into the complex world of AI image generation, examining how machines learn from one another and engage in self-cannibalism within the generative process. Investigate the inner workings of image-generation models through techniques such as feedback, misuse, and hacking. Learn about experiments on Stable Diffusion pipelines, manipulation of aesthetic scoring in public text-to-image datasets, NSFW classification, and the use of Contrastive Language-Image Pre-training (CLIP) to reveal biases and problematic correlations. Discover how datasets and machine-learning models are filtered and constructed, and examine the implications of these processes. Explore the limitations and tendencies of generative AI models, including their ability to reproduce input images and their default patterns. Join speakers Ting-Chun Liu and Leon-Etienne Kühr as they share insights on the political discourses surrounding generative AI and the challenges of understanding increasingly complex datasets and models.
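The talk itself does not ship code, but the self-cannibalizing feedback loop it describes can be illustrated with a minimal sketch. The example below assumes the Hugging Face diffusers library and the runwayml/stable-diffusion-v1-5 checkpoint (both assumptions for illustration, not the speakers' actual setup): an img2img pipeline repeatedly consumes its own output, so each generation drifts further from the seed image and toward the model's default patterns.

    import torch
    from PIL import Image
    from diffusers import StableDiffusionImg2ImgPipeline

    # Load an img2img pipeline (checkpoint and dtype are illustrative choices).
    pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")

    # Start from any seed image; the prompt stays fixed across iterations.
    image = Image.open("seed.png").convert("RGB").resize((512, 512))
    prompt = "a portrait photograph"

    # Feedback loop: each output becomes the next input ("self-cannibalism").
    for step in range(10):
        image = pipe(prompt=prompt, image=image,
                     strength=0.6, guidance_scale=7.5).images[0]
        image.save(f"generation_{step:02d}.png")

With a moderate strength value the loop typically converges on the model's preferred aesthetics rather than preserving the seed image, which is the kind of tendency the speakers probe in their experiments.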
Syllabus
37C3 - Self-cannibalizing AI
Taught by
media.ccc.de
Related Courses
6.S191: Introduction to Deep Learning - Massachusetts Institute of Technology via Independent
Generate Synthetic Images with DCGANs in Keras - Coursera Project Network via Coursera
Image Compression and Generation using Variational Autoencoders in Python - Coursera Project Network via Coursera
Build Basic Generative Adversarial Networks (GANs) - DeepLearning.AI via Coursera
Apply Generative Adversarial Networks (GANs) - DeepLearning.AI via Coursera