
Context Caching for Faster and Cheaper LLM Inference

Offered By: Trelis Research via YouTube

Tags

Cost Reduction Courses
Claude Courses
vLLM Courses

Course Description

Overview

Explore context caching techniques for faster and cheaper inference in large language models (LLMs) in this 35-minute video tutorial. Learn how context caching works, its two main types, and implementation strategies for Claude, Google Gemini, and SGLang. Discover the cost-saving potential of this advanced inference technique and gain practical insights into improving LLM performance. Access comprehensive resources, including code repositories, slides, and timestamps, to enhance your understanding of this cutting-edge topic in AI development.
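As a concrete illustration of the API-level approach the video covers, here is a minimal sketch of prompt caching with the Anthropic Python SDK. The model name, placeholder document, and beta header are assumptions based on the API as it stood around the video's release; check the current Anthropic documentation before relying on the details.

import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

long_document = "<many thousands of tokens of reference text>"  # placeholder

response = client.messages.create(
    model="claude-3-5-sonnet-20240620",  # assumed; any caching-capable model works
    max_tokens=512,
    # While the feature was in beta, calls also required:
    # extra_headers={"anthropic-beta": "prompt-caching-2024-07-31"}
    system=[
        {
            "type": "text",
            "text": "Answer questions about this document.\n\n" + long_document,
            # Marks a cache breakpoint: the first call writes the prefix to
            # the cache; later calls with an identical prefix read it back
            # at a much lower input-token price.
            "cache_control": {"type": "ephemeral"},
        }
    ],
    messages=[{"role": "user", "content": "Summarise the key points."}],
)
print(response.content[0].text)

Note that only a sufficiently long prefix is cached (on the order of a thousand tokens minimum for Sonnet-class models), so the technique pays off for long, repeatedly reused contexts.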

Syllabus

Introduction to context caching for LLMs
Video Overview
How does context caching work?
Two types of caching
Context caching with Claude and Google Gemini
Context caching with Claude
Context caching with Gemini Flash or Gemini Pro (sketch below)
Context caching with SGLang, which also works with vLLM (sketch below)
Cost Comparison (worked example below)
Video Resources
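
For the Gemini chapter above, Google's approach is an explicit, server-side cache: you upload the shared prefix once and build a model handle from it. A minimal sketch with the google-generativeai Python SDK follows; the model version, file name, and TTL are illustrative assumptions, and explicit Gemini caches also had a minimum size (on the order of 32k tokens) when the video was made.

import datetime
import os

import google.generativeai as genai
from google.generativeai import caching

genai.configure(api_key=os.environ["GOOGLE_API_KEY"])

document_text = open("big_report.txt").read()  # hypothetical long document

# Upload the shared prefix once; it is stored server-side for the TTL.
cache = caching.CachedContent.create(
    model="models/gemini-1.5-flash-001",  # caching requires a pinned model version
    system_instruction="You answer questions about the attached report.",
    contents=[document_text],
    ttl=datetime.timedelta(minutes=30),
)

# Models built from the cache reuse the stored tokens at the discounted
# cached-input price instead of resending them with every request.
model = genai.GenerativeModel.from_cached_content(cached_content=cache)
answer = model.generate_content("List the key findings.")
print(answer.text)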
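For the SGLang/vLLM chapter, prefix caching is automatic rather than API-driven: the serving engine keeps the KV cache of earlier prompts and reuses it whenever a new request shares a prefix, so nothing in the request changes. A sketch with vLLM's offline API (model name and prompts are assumptions; SGLang's RadixAttention gives the same behaviour by default in its server):

from vllm import LLM, SamplingParams

# enable_prefix_caching lets the engine reuse KV-cache blocks across
# requests that share a common prompt prefix.
llm = LLM(model="meta-llama/Meta-Llama-3-8B-Instruct",
          enable_prefix_caching=True)

shared_prefix = "<long system prompt or document shared by all requests>"
params = SamplingParams(temperature=0.0, max_tokens=128)

# The second call recomputes only the suffix; the prefix's KV cache is
# found in the prefix cache and reused.
out1 = llm.generate([shared_prefix + "\nQuestion: A?"], params)
out2 = llm.generate([shared_prefix + "\nQuestion: B?"], params)
print(out2[0].outputs[0].text)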
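And for the cost-comparison chapter, a back-of-envelope calculation. The prices are assumptions for illustration, loosely modelled on Claude's published rates at the time (a base input price, roughly a 1.25x surcharge for cache writes, and roughly 0.1x for cache reads); the shape of the saving is the point, not the exact figures.

# Assumed illustrative prices, not current list prices.
base = 3.00 / 1_000_000          # $ per input token
write, read = 1.25 * base, 0.10 * base

prefix_tokens, calls = 100_000, 50

uncached = calls * prefix_tokens * base
cached = prefix_tokens * write + (calls - 1) * prefix_tokens * read
print(f"uncached ${uncached:.2f} vs cached ${cached:.2f}")
# Roughly $15 uncached vs under $2 cached: about an 8x saving on the
# shared prefix once it is reused across many calls.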


Taught by

Trelis Research

Related Courses

Career Hacking: The Ultimate Job Search Course (Now w/ AI!)
Udemy
Insane AI News Happening That No One is Noticing - Weekly Roundup
Matt Wolfe via YouTube
Live Coding an LLM Battle - GPT-4 vs. Claude - 20 Questions Game
Rob Mulla via YouTube
Complete Tutorial of Top Generative AI Tools - ChatGPT, GitHub Copilot, Claude, and Google Gemini
Great Learning via YouTube
Preparing Data with Generative AI
Pluralsight