Context Caching for Faster and Cheaper LLM Inference
Offered By: Trelis Research via YouTube
Course Description
Overview
Explore context caching techniques for faster and cheaper inference with large language models (LLMs) in this 35-minute video tutorial. Learn how context caching works, the two main types of caching, and implementation strategies for Claude, Google Gemini, and SGLang (also applicable to vLLM). Compare the cost savings each approach offers, and use the accompanying resources, including code repositories, slides, and timestamps, to dig deeper.
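As a flavour of the provider-side approach covered in the video, here is a minimal sketch (not taken from the video) of prompt caching with the Anthropic Python SDK: a long, reused system prompt is marked with a cache_control block so later calls can hit the cached prefix. The model name and document text are placeholder assumptions.

import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

long_document = "..."  # large, reused context worth caching

response = client.messages.create(
    model="claude-3-5-sonnet-20241022",  # placeholder model name
    max_tokens=512,
    system=[
        {
            "type": "text",
            "text": long_document,
            "cache_control": {"type": "ephemeral"},  # ask the API to cache this prefix
        }
    ],
    messages=[{"role": "user", "content": "Summarise the key points."}],
)
print(response.content[0].text)

Calls that reuse the cached prefix are typically billed at a much lower rate for those tokens, which is the cost-saving lever the video examines.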
Syllabus
Introduction to context caching for LLMs
Video Overview
How does context caching work?
Two types of caching
Context caching with Claude and Google Gemini
Context caching with Claude
Context caching with Gemini Flash or Gemini Pro (see the sketch after this syllabus)
Context caching with SGLang (also works with vLLM)
Cost Comparison
Video Resources
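For the Gemini and SGLang/vLLM portions of the syllabus, the sketch below (again not taken from the video) shows explicit context caching with the google-generativeai SDK; the model version, TTL, and document are placeholder assumptions, and only certain model versions support caching.

import datetime
import google.generativeai as genai
from google.generativeai import caching

genai.configure(api_key="YOUR_API_KEY")  # placeholder

long_document = "..."  # large, reused context worth caching

# Upload the shared context once; it is stored server-side for the given TTL.
cache = caching.CachedContent.create(
    model="models/gemini-1.5-flash-001",  # placeholder; caching needs a supported version
    system_instruction="Answer questions using only the cached document.",
    contents=[long_document],
    ttl=datetime.timedelta(minutes=10),
)

# Build a model bound to the cached content and query it without resending the document.
model = genai.GenerativeModel.from_cached_content(cached_content=cache)
response = model.generate_content("What are the key points?")
print(response.text)

SGLang and vLLM, by contrast, handle this on the serving side: shared prompt prefixes are detected and their KV cache reused automatically (in vLLM, for example, via enable_prefix_caching=True when constructing the engine), so no explicit cache object is created.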
Taught by
Trelis Research