
Linear Structure of High-Level Concepts in Text-Controlled Generative Models

Offered By: Valence Labs via YouTube

Tags

Machine Learning Courses, Linear Algebra Courses, Causal Inference Courses, Diffusion Models Courses

Course Description

Overview

Explore the linear structure of high-level concepts in text-controlled generative models in this talk by Victor Veitch, hosted by Valence Labs. Delve into the algebraic structure of vector representations in large language models and text-to-image diffusion models. Learn how natural language is embedded into vector representations and how those representations are used to sample from the model's output space. Examine "linear" representations: what they are, why they emerge, and how they can be used to understand and control generative models with precision. The talk covers the Linear Representation Hypothesis, language models, subspace notions, the causal inner product, and supporting experiments, before closing with conclusions and a discussion.
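As an informal illustration of the Linear Representation Hypothesis described above, the sketch below estimates a concept direction as the mean difference of embeddings over counterfactual prompt pairs, then uses it to steer a representation. Everything here is hypothetical: `embed` is a random stand-in for a real model's embedding function, and the prompt pairs and steering strength are illustrative, not the speaker's method.

```python
import numpy as np

# Hypothetical stand-in for a real embedding model: maps a prompt to a
# d-dimensional vector, deterministically within one run so the demo is
# self-contained. A real experiment would use a language model's hidden state.
def embed(prompt: str) -> np.ndarray:
    rng = np.random.default_rng(abs(hash(prompt)) % (2**32))
    return rng.normal(size=128)

# Counterfactual prompt pairs that differ only in the target concept
# (here, grammatical gender).
pairs = [
    ("The king spoke.", "The queen spoke."),
    ("He is an actor.", "She is an actress."),
]

# Under the Linear Representation Hypothesis, a concept corresponds to a
# direction in representation space: estimate it as the mean difference
# across the counterfactual pairs.
diffs = np.stack([embed(b) - embed(a) for a, b in pairs])
concept_direction = diffs.mean(axis=0)
concept_direction /= np.linalg.norm(concept_direction)

# Steering: nudge a new representation along the concept direction to push
# the (hypothetical) model's output toward the concept.
x = embed("The monarch spoke.")
alpha = 2.0  # steering strength; a tunable hyperparameter
x_steered = x + alpha * concept_direction
print(np.dot(x_steered - x, concept_direction))  # equals alpha by construction
```

In a real model, the empirical questions the talk addresses are why such directions exist at all and with respect to which inner product they should be measured.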

Syllabus

- Discussant Slide + Introduction
- Linear Representation Hypothesis
- Language Models
- Subspace Notions
- Causal Inner Product (see the sketch after this syllabus)
- Experiments
- Conclusions
- Discussion
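As a rough companion to the "Causal Inner Product" item above: in this line of work, one candidate causal inner product takes the form <x, y>_C = x^T Cov(γ)^{-1} y, where γ ranges over the model's unembedding vectors, and under it representations of causally separable concepts become approximately orthogonal. The snippet below is a minimal numerical stand-in on synthetic data; the unembedding matrix and dimensions are invented for illustration, not taken from the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical unembedding matrix: one row per vocabulary token. In a real
# model this would come from the final linear layer of a language model.
vocab_size, d = 1000, 64
Gamma = rng.normal(size=(vocab_size, d))

# Candidate causal inner product: whiten by the covariance of the
# unembedding vectors, i.e. <x, y>_C = x^T Cov(Gamma)^{-1} y.
Sigma_inv = np.linalg.inv(np.cov(Gamma, rowvar=False))

def causal_inner_product(x: np.ndarray, y: np.ndarray) -> float:
    return float(x @ Sigma_inv @ y)

x, y = rng.normal(size=d), rng.normal(size=d)
print("Euclidean:", float(x @ y))
print("Causal:   ", causal_inner_product(x, y))
```

Equivalently, this is the ordinary Euclidean inner product after transforming representations by Cov(γ)^{-1/2}, which is one way to make "orthogonal concepts" a well-defined notion.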


Taught by

Valence Labs

Related Courses

Introduction to Artificial Intelligence
Stanford University via Udacity
Natural Language Processing
Columbia University via Coursera
Probabilistic Graphical Models 1: Representation
Stanford University via Coursera
Computer Vision: The Fundamentals
University of California, Berkeley via Coursera
Learning from Data (Introductory Machine Learning course)
California Institute of Technology via Independent