Self-Cannibalizing AI: Exposing Generative Text-to-Image Models
Offered By: media.ccc.de via YouTube
Course Description
Overview
Explore artistic strategies for exposing generative text-to-image models in this 54-minute conference talk from the 37C3 event. Delve into the complex world of AI image generation, examining how machines learn from each other and engage in self-cannibalism within the generative process. Investigate the inner workings of image-generation models through techniques such as feedback, misuse, and hacking. Learn about experiments on Stable Diffusion pipelines, manipulation of aesthetic scoring in public text-to-image datasets, NSFW classification, and the use of Contrastive Language-Image Pre-training (CLIP) to reveal biases and problematic correlations. Discover how datasets and machine-learning models are filtered and constructed, and examine the implications of these processes. Explore the limitations and tendencies of generative AI models, including how they reproduce input images and fall back on default patterns. Join speakers Ting-Chun Liu and Leon-Etienne Kühr as they share insights on the political discourses surrounding generative AI and the challenges of understanding increasingly complex datasets and models.
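To make the feedback idea concrete, here is a minimal sketch of a "self-cannibalizing" loop: a Stable Diffusion img2img pipeline repeatedly fed its own output, so the model's default patterns gradually overwrite the original image. This is not the speakers' code; the model checkpoint, prompt, strength, and loop count are illustrative assumptions using the Hugging Face diffusers library.

```python
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

# Illustrative checkpoint choice, not necessarily the one used in the talk.
pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Any starting image works; the loop slowly replaces it with model tendencies.
image = Image.open("seed.png").convert("RGB").resize((512, 512))
prompt = "a portrait photograph"  # assumed prompt

for step in range(10):
    # Each pass regenerates from the previous output: the "self-cannibalism".
    image = pipe(prompt=prompt, image=image, strength=0.6).images[0]
    image.save(f"feedback_{step:02d}.png")
```

Similarly, a rough sketch of probing CLIP for correlations, in the spirit of the talk's use of Contrastive Language-Image Pre-training: score one image against a set of text labels and see which associations the model makes. The labels, image path, and checkpoint are hypothetical examples, not the experiment presented at 37C3.

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

labels = ["a doctor", "a nurse", "a criminal", "a CEO"]  # assumed probe phrases
image = Image.open("example.jpg")

inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# Softmax over image-text similarity shows which label CLIP associates with
# the image, one way such correlations can be made visible.
probs = outputs.logits_per_image.softmax(dim=-1)[0]
for label, p in zip(labels, probs.tolist()):
    print(f"{label}: {p:.3f}")
```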
Syllabus
37C3 - Self-cannibalizing AI
Taught by
media.ccc.de
Related Courses
Knowledge-Based AI: Cognitive Systems (Georgia Institute of Technology via Udacity)
AI for Everyone: Master the Basics (IBM via edX)
Introducción a La Inteligencia Artificial (IA) (IBM via Coursera)
AI for Legal Professionals (I): Law and Policy (National Chiao Tung University via FutureLearn)
Artificial Intelligence Ethics in Action (LearnQuest via Coursera)