Seeing Like a Language Model - Understanding Embeddings and Meaning in AI

Offered By: MLOps.community via YouTube

Tags

Language Models, Artificial Intelligence, Neural Networks, Creativity, Collaborative Work, Embeddings, Interpretability, Retrieval Augmented Generation

Course Description

Overview

Explore how language models perceive text in this 34-minute conference talk by Linus Lee, Research Engineer at Notion. Delve into recent breakthroughs in interpretability research to understand how embeddings represent meaning and how language models process text. Hear updates from Lee's ongoing exploration of these topics, and learn how these insights can be applied to improve retrieval-augmented LLM systems and to create more intuitive interfaces for reading and writing. Gain insight from Lee's experience prototyping AI-augmented tools for thinking and collaborative work. This talk, presented at the AI in Production Conference by MLOps.community, offers a unique perspective on the inner workings of language models and their potential to enhance creative and collaborative processes.
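
As background for the retrieval-augmented generation theme, here is a minimal, hypothetical sketch (not taken from the talk) of the operation at the heart of retrieval-augmented LLM systems: embedding a query and candidate documents as vectors, then ranking documents by cosine similarity. The embed function below is an illustrative placeholder; a real system would call an actual embedding model.

    import numpy as np

    def embed(text: str) -> np.ndarray:
        # Hypothetical stand-in for a real sentence-embedding model:
        # returns a pseudo-random 384-dim vector seeded by the text.
        rng = np.random.default_rng(abs(hash(text)) % (2**32))
        return rng.standard_normal(384)

    def cosine(a: np.ndarray, b: np.ndarray) -> float:
        # Cosine similarity: higher values mean the embeddings point
        # in more similar directions, i.e. the texts are closer in meaning.
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    docs = [
        "Embeddings map text to points in a vector space.",
        "Interpretability research probes what models represent.",
        "Retrieval-augmented generation grounds answers in documents.",
    ]
    query_vec = embed("How do language models represent meaning?")

    # Rank candidate documents by similarity to the query embedding.
    ranked = sorted(docs, key=lambda d: cosine(embed(d), query_vec), reverse=True)
    print(ranked[0])

In a real pipeline, the top-ranked documents would be passed to the LLM as context for generation.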

Syllabus

Seeing Like a Language Model // Linus Lee // AI in Production Conference Full Talk

Taught by

MLOps.community

Related Courses

Machine Learning Modeling Pipelines in Production
DeepLearning.AI via Coursera
Live Responsible AI Dashboard: One-Stop Shop for Operationalizing RAI in Practice - Episode 43
Microsoft via YouTube
Build Responsible AI Using Error Analysis Toolkit
Microsoft via YouTube
Neural Networks Are Decision Trees - With Alexander Mattick
Yannic Kilcher via YouTube
Interpretable Explanations of Black Boxes by Meaningful Perturbation - CAP6412 Spring 2021
University of Central Florida via YouTube