YOCO: Decoder-Decoder Architectures for Language Models Explained
Offered By: Unify via YouTube
Course Description
Overview
Explore a 49-minute session featuring Yutao Sun from Tsinghua University, co-author of the paper "You Only Cache Once: Decoder-Decoder Architectures for Language Models". Delve into YOCO, a decoder-decoder architecture for Large Language Models that caches key-value pairs only once, reducing inference memory usage and improving prefill latency and throughput across various context lengths and model sizes. Gain insights into this approach and its potential impact on AI development. Discover additional resources, including The Deep Dive newsletter for the latest AI research and industry trends, and Unify's blog for in-depth exploration of the AI deployment stack. Connect with Unify through their website, GitHub, Discord, Twitter, and Reddit to stay updated on cutting-edge AI advancements and join the community discussion.
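The session's central idea, caching key-value pairs only once, can be illustrated with a back-of-the-envelope memory calculation. The sketch below is a hypothetical illustration rather than code from the paper or the talk: a standard decoder stores a separate KV cache in every layer, while a YOCO-style decoder-decoder keeps a single global cache that all cross-decoder layers reuse, so cache memory no longer scales with layer count.

```python
def kv_cache_bytes(num_layers, seq_len, num_kv_heads, head_dim, bytes_per_elem=2):
    """Rough KV-cache sizes in bytes (K and V tensors, fp16 by default).

    Hypothetical illustration of the 'cache once' idea, not the paper's code.
    """
    # one layer's cache: K and V, each seq_len x num_kv_heads x head_dim
    per_layer = 2 * seq_len * num_kv_heads * head_dim * bytes_per_elem
    standard = num_layers * per_layer  # every decoder layer keeps its own cache
    yoco = per_layer                   # one global cache, shared by the cross-decoder layers
    return standard, yoco

# Example: a 32-layer model, 32k-token context, 8 KV heads of dimension 128
standard, yoco = kv_cache_bytes(32, 32_768, 8, 128)
print(standard // 2**20, "MiB vs", yoco // 2**20, "MiB")  # → 4096 MiB vs 128 MiB
```

Under these assumed shapes, the cache-once scheme cuts KV memory by a factor equal to the layer count, which is why the savings grow with longer contexts and deeper models.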
Syllabus
YOCO Explained
Taught by
Unify
Related Courses
Microsoft Bot Framework and Conversation as a Platform (Microsoft via edX)
Unlocking the Power of OpenAI for Startups - Microsoft for Startups (Microsoft via YouTube)
Improving Customer Experiences with Speech to Text and Text to Speech (Microsoft via YouTube)
Stanford Seminar - Deep Learning in Speech Recognition (Stanford University via YouTube)
Select Topics in Python: Natural Language Processing (Codio via Coursera)