Context Caching for Faster and Cheaper LLM Inference

Offered By: Trelis Research via YouTube

Tags

Cost Reduction Courses
Claude Courses
vLLM Courses

Course Description

Overview

Explore context caching techniques for faster and cheaper inference in large language models (LLMs) in this 35-minute video tutorial. Learn how context caching works, its two main types, and implementation strategies for Claude, Google Gemini, and SGLang. Discover the cost-saving potential of this advanced inference technique and gain practical insights into improving LLM performance. Access comprehensive resources, including code repositories, slides, and timestamps, to enhance your understanding of this cutting-edge topic in AI development.
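The course covers provider-side caching with Claude. As a rough illustration of the idea, here is a minimal sketch using Anthropic's prompt-caching feature in the Python SDK; the model name, placeholder document, and prompts are assumptions rather than material from the video, and details such as minimum cacheable length or beta headers depend on your SDK version.

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# A large, stable block of context (placeholder text here) that many
# requests will share; this is the part worth caching.
long_document = "..." * 1000

response = client.messages.create(
    model="claude-3-5-sonnet-20240620",  # assumed model name
    max_tokens=512,
    system=[
        {"type": "text", "text": "Answer questions about the attached document."},
        {
            "type": "text",
            "text": long_document,
            # Marking the block as cacheable lets follow-up requests reuse
            # the processed prefix instead of paying the full input-token price.
            "cache_control": {"type": "ephemeral"},
        },
    ],
    messages=[
        {"role": "user", "content": "Summarise the document in three bullet points."}
    ],
)
print(response.content[0].text)
```

Subsequent requests that repeat the same cached system blocks are billed at the reduced cached-input rate, which is where the cost savings discussed in the video come from.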

Syllabus

Introduction to context caching for LLMs
Video Overview
How does context caching work?
Two types of caching
Context caching with Claude and Google Gemini
Context caching with Claude
Context caching with Gemini Flash or Gemini Pro
Context caching with SGLang (also works with vLLM; see the sketch after this syllabus)
Cost Comparison
Video Resources
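
For the self-hosted route covered in the SGLang/vLLM segment, prefix caching is handled by the inference engine itself (SGLang via RadixAttention, vLLM via automatic prefix caching). The sketch below is an illustrative example using vLLM's Python API; the model name and prompts are placeholder assumptions, and SGLang provides the equivalent behaviour automatically when you launch its server.

```python
from vllm import LLM, SamplingParams

# Enable automatic prefix caching so requests that share a long prefix
# reuse its KV cache instead of recomputing it.
llm = LLM(model="meta-llama/Meta-Llama-3-8B-Instruct", enable_prefix_caching=True)

shared_context = "Reference document:\n" + "..."  # long, stable context shared by all queries
params = SamplingParams(max_tokens=128)

# Both prompts begin with the same prefix; the second request hits the cache.
outputs = llm.generate(
    [
        shared_context + "\n\nQuestion: What is the main topic?",
        shared_context + "\n\nQuestion: List the key points.",
    ],
    params,
)
for out in outputs:
    print(out.outputs[0].text)
```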


Taught by

Trelis Research

Related Courses

Introduction to Linux Virtualization from the Command Line
A Cloud Guru
Advanced Manufacturing Process Analysis
University at Buffalo via Coursera
Advanced Monitoring and Optimizing with DynamoDB (German)
Amazon Web Services via AWS Skill Builder
Amazon WorkSpaces Deep Dive
Amazon Web Services via AWS Skill Builder
Amazon WorkSpaces Deep Dive (Japanese)
Amazon Web Services via AWS Skill Builder