Linear Structure of High-Level Concepts in Text-Controlled Generative Models
Offered By: Valence Labs via YouTube
Course Description
Overview
Explore the linear structure of high-level concepts in text-controlled generative models in this talk by Victor Veitch, hosted by Valence Labs. Delve into the algebraic structure of vector representations in large language models and text-to-image diffusion models, and discover how natural language is embedded into vector representations and used to sample from the model's output space. Examine what it means for a representation to be "linear," how such representations emerge, and how they can be used to understand and control generative models with precision. Follow along as the speaker covers the Linear Representation Hypothesis, language models, subspace notions, the causal inner product, and supporting experiments, then closes with conclusions and an open discussion.
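For viewers new to the topic, a minimal sketch of the central idea may help before watching. Under the linear representation hypothesis, a binary concept corresponds to a direction in the model's representation space, so the concept can be probed or steered with simple vector arithmetic. The toy Python below uses random vectors in place of real model embeddings; the pairing scheme, the `alpha` steering strength, and all variable names are illustrative assumptions, not the speaker's implementation.

```python
import numpy as np

# Toy stand-ins for model representations; in practice these would come
# from a language model's embedding or unembedding matrix.
rng = np.random.default_rng(0)
dim = 64

# Estimate a concept direction as the average difference over
# counterfactual pairs (e.g., embeddings of "king"/"queen"-style pairs).
pairs = [(rng.normal(size=dim), rng.normal(size=dim)) for _ in range(10)]
concept_dir = np.mean([b - a for a, b in pairs], axis=0)
concept_dir /= np.linalg.norm(concept_dir)

# Steering: push a hidden state toward the "positive" side of the concept
# by adding a multiple of the concept direction.
hidden = rng.normal(size=dim)
alpha = 2.0  # steering strength, a free parameter
steered = hidden + alpha * concept_dir

# Probing: project onto the direction to read off how strongly the
# concept is expressed before and after steering.
print(f"concept score before: {hidden @ concept_dir:.3f}, "
      f"after: {steered @ concept_dir:.3f}")
```

Note that this sketch uses the ordinary Euclidean inner product for probing; the talk's "causal inner product" is, roughly, a choice of inner product adapted to the model's unembedding geometry under which causally separable concepts become orthogonal, which is what makes such vector arithmetic well founded.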
Syllabus
- Discussant Slide + Introduction
- Linear Representation Hypothesis
- Language Models
- Subspace Notions
- Causal Inner Product
- Experiments
- Conclusions
- Discussion
Taught by
Valence Labs
Related Courses
- Diffusion Models Beat GANs on Image Synthesis - Machine Learning Research Paper Explained (Yannic Kilcher via YouTube)
- Diffusion Models Beat GANs on Image Synthesis - ML Coding Series - Part 2 (Aleksa Gordić - The AI Epiphany via YouTube)
- OpenAI GLIDE - Towards Photorealistic Image Generation and Editing with Text-Guided Diffusion Models (Aleksa Gordić - The AI Epiphany via YouTube)
- Food for Diffusion (HuggingFace via YouTube)
- Imagen: Text-to-Image Generation Using Diffusion Models - Lecture 9 (University of Central Florida via YouTube)