Seeing Like a Language Model - Understanding Embeddings and Meaning in AI
Offered By: MLOps.community via YouTube
Course Description
Overview
Explore the fascinating world of language model perception in this 34-minute conference talk by Linus Lee, Research Engineer at Notion. Delve into recent breakthroughs in interpretability research to understand how embeddings represent meaning and how language models process text. Discover encouraging updates on Lee's ongoing exploration of these topics and learn how these insights can be applied to improve retrieval-augmented LLM systems and create more intuitive interfaces for reading and writing. Gain valuable knowledge from Lee's experience in prototyping AI-augmented tools for thinking and collaborative work. This talk, presented at the AI in Production Conference by MLOps.community, offers a unique perspective on the inner workings of language models and their potential to enhance our creative and collaborative processes.
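The talk's application of embedding insights to retrieval-augmented LLM systems can be illustrated with a minimal sketch of embedding-based retrieval. The toy vectors, document titles, and helper names below are illustrative assumptions, not material from the talk; real systems use model-produced embeddings with hundreds or thousands of dimensions.

```python
import math

# Hypothetical toy embeddings: each document is represented by a small
# dense vector, standing in for a language model's high-dimensional
# embedding of its text.
doc_embeddings = {
    "notes on transformers": [0.90, 0.10, 0.20],
    "grocery list": [0.10, 0.80, 0.30],
    "attention mechanisms": [0.85, 0.15, 0.25],
}

def cosine_similarity(a, b):
    """Angle-based similarity between two vectors, in [-1, 1]."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def retrieve(query_embedding, k=2):
    """Return the k documents whose embeddings lie closest to the query."""
    ranked = sorted(
        doc_embeddings.items(),
        key=lambda item: cosine_similarity(query_embedding, item[1]),
        reverse=True,
    )
    return [title for title, _ in ranked[:k]]

# A stand-in for an embedded user question about attention/transformers:
query = [0.88, 0.12, 0.22]
print(retrieve(query))  # the two semantically closest documents rank first
```

Retrieved documents are then prepended to the LLM prompt; better embeddings of meaning, the talk argues, translate directly into better retrieval.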
Syllabus
Seeing Like a Language Model // Linus Lee // AI in Production Conference Full Talk
Taught by
MLOps.community
Related Courses
Machine Learning Modeling Pipelines in Production (DeepLearning.AI via Coursera)
Live Responsible AI Dashboard: One-Stop Shop for Operationalizing RAI in Practice - Episode 43 (Microsoft via YouTube)
Build Responsible AI Using Error Analysis Toolkit (Microsoft via YouTube)
Neural Networks Are Decision Trees - With Alexander Mattick (Yannic Kilcher via YouTube)
Interpretable Explanations of Black Boxes by Meaningful Perturbation - CAP6412 Spring 2021 (University of Central Florida via YouTube)