Really Long Context LLMs - 200k Input Tokens
Offered By: Trelis Research via YouTube
Course Description
Overview
Explore the capabilities of long-context large language models in this 25-minute video from Trelis Research. Learn about the Yi 34B 200k model, compare different long-context models, and understand the differences between the base and chat fine-tuned versions of Yi. Discover how to perform passkey retrieval tasks with the Yi 6B and Claude models. Examine the Yi 34B model's performance at a 107k context length. Get hands-on guidance for inferencing Yi models using Runpod, and learn about Yi function-calling models. Access comprehensive resources for long-context models and gain insights into the latest advancements in LLM technology.
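The passkey-retrieval test mentioned above buries a short secret (the "passkey") inside a long stretch of filler text and asks the model to recall it, which is a common way to probe long-context recall. The sketch below is a minimal illustration of that setup with Hugging Face transformers; the checkpoint id 01-ai/Yi-6B-200K, the filler sentences, and the prompt wording are assumptions for illustration, not the exact code from the video.

```python
# Minimal passkey-retrieval sketch. Assumptions: the Hugging Face checkpoint
# "01-ai/Yi-6B-200K", a GPU with enough memory for the chosen context length,
# and generic filler/prompt text (not the exact wording used in the video).
import random

from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "01-ai/Yi-6B-200K"  # assumed long-context Yi checkpoint


def build_passkey_prompt(n_filler: int = 2000, passkey: int = 68427) -> str:
    """Bury a passkey sentence at a random position inside repeated filler text."""
    filler = "The grass is green. The sky is blue. The sun is yellow. "
    needle = f"The pass key is {passkey}. Remember it. "
    chunks = [filler] * n_filler
    chunks.insert(random.randint(0, n_filler), needle)
    return (
        "There is a pass key hidden in the text below. Memorize it.\n\n"
        + "".join(chunks)
        + "\nWhat is the pass key? The pass key is"
    )


tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID, device_map="auto", torch_dtype="auto"
)

prompt = build_passkey_prompt()
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=10, do_sample=False)

# Decode only the newly generated tokens, i.e. the model's answer.
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```

Increasing n_filler pushes the prompt toward the model's context limit, which is how retrieval accuracy can be checked at different context lengths.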
Syllabus
Really long context length large language models
Video overview
Yi 200k context model
Which long context models are actually good?
Base model vs Chat fine-tuned Yi model
Passkey Retrieval on Yi 6B
Passkey retrieval for Claude
Yi 34B performance at 107k context
Inferencing Yi models with Runpod (see the sketch after this syllabus)
Yi Function-calling models
Long Context Model Resources
Video Summary
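One common way to run inference against a model hosted on a Runpod GPU pod, not necessarily the exact workflow shown in the video, is to serve it behind an OpenAI-compatible API (for example with vLLM) and query it over HTTP. The snippet below is a hedged sketch of that pattern; the pod URL, port, and model name are placeholders.

```python
# Hedged sketch: querying a Yi model served on a Runpod GPU pod through an
# OpenAI-compatible endpoint (for example, vLLM's built-in server).
# The pod URL, port, and model name below are placeholders, not values
# taken from the video.
from openai import OpenAI

client = OpenAI(
    base_url="https://<your-pod-id>-8000.proxy.runpod.net/v1",  # placeholder Runpod proxy URL
    api_key="EMPTY",  # a self-hosted server typically ignores the key
)

long_document = "..."  # placeholder for the long input the model should read

response = client.chat.completions.create(
    model="01-ai/Yi-34B-200K",  # assumed model name registered with the server
    messages=[
        {
            "role": "user",
            "content": long_document + "\n\nSummarize the key points of the document above.",
        }
    ],
    max_tokens=256,
    temperature=0.0,
)
print(response.choices[0].message.content)
```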
Taught by
Trelis Research
Related Courses
Hugging Face on Azure - Partnership and Solutions Announcement (Microsoft via YouTube)
Question Answering in Azure AI - Custom and Prebuilt Solutions - Episode 49 (Microsoft via YouTube)
Open Source Platforms for MLOps (Duke University via Coursera)
Masked Language Modelling - Retraining BERT with Hugging Face Trainer - Coding Tutorial (rupert ai via YouTube)
Masked Language Modelling with Hugging Face - Microsoft Sentence Completion - Coding Tutorial (rupert ai via YouTube)