YoVDO

Cache Strategies with Best Practices

Offered By: USENIX via YouTube

Tags

SREcon Courses

Course Description

Overview

Explore cache strategies and best practices in this 26-minute conference talk from SREcon21. Dive into the technical details of efficient caching for data-intensive, latency-sensitive online services. Learn about cache item strategies, cache TTL strategies, and cache warm-up strategies, with real-world examples from LinkedIn. Discover how to improve system performance, maintain cache efficiency, and increase availability. Examine topics such as read-through cache architecture, async cache refresh, dynamic TTL, notification pipelines, deduplication of fallback calls, persistent caching, shared remote caches, schema upgrades, and sharding considerations. Gain insights to optimize your caching solutions and avoid common pitfalls that degrade system performance.
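To make the listed topics concrete, here is a minimal sketch of the read-through cache pattern with a fixed TTL that the talk builds on. All names (`ReadThroughCache`, `fetch_fn`) are illustrative, not taken from the talk or LinkedIn's implementation: on a miss or an expired entry, the cache falls back to the source of truth and stores the result.

```python
import time


class ReadThroughCache:
    """Read-through cache sketch: serve fresh entries from memory;
    on a miss or expired entry, fall back to the backing store and
    cache the result with a time-to-live (TTL)."""

    def __init__(self, fetch_fn, ttl_seconds=60.0):
        self._fetch = fetch_fn      # fallback call to the source of truth
        self._ttl = ttl_seconds
        self._store = {}            # key -> (value, expires_at)

    def get(self, key):
        now = time.monotonic()
        entry = self._store.get(key)
        if entry is not None and entry[1] > now:
            return entry[0]         # fresh hit: no backend call
        value = self._fetch(key)    # miss or expired: read through
        self._store[key] = (value, now + self._ttl)
        return value
```

A fixed TTL like this is the simple starting point; the talk goes on to refinements such as dynamic TTL and async refresh, which avoid the latency spike of refetching on the request path when an entry expires.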

Syllabus

Intro
Read-through Cache Architecture
A simple cache implementation
A production issue
Async Cache Refresh
Dynamic TTL
Notification pipeline
Time-to-live (TTL)
A real production issue
Dedup fallback calls
Async cache update
Caching partial, empty, and error results
Persistent cache
Cache rsync
A shared remote cache
Schema upgrade
Cache warm-up
Local cache format
Deduplicating objects
Remote cache schema
Sharding and memory usage
Conclusion
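One syllabus item, deduplicating fallback calls, can be sketched briefly. This is a generic illustration of the idea (collapsing concurrent cache-miss fetches for the same key into one backend request), not the speakers' code; the `DedupFetcher` name and structure are assumptions.

```python
import threading


class DedupFetcher:
    """Collapse concurrent fallback calls for the same key into a
    single backend request; all waiters share the in-flight result."""

    def __init__(self, fetch_fn):
        self._fetch = fetch_fn
        self._lock = threading.Lock()
        self._inflight = {}  # key -> {"event": Event, "result": value}

    def get(self, key):
        with self._lock:
            waiter = self._inflight.get(key)
            leader = waiter is None
            if leader:
                waiter = {"event": threading.Event(), "result": None}
                self._inflight[key] = waiter
        if leader:
            try:
                waiter["result"] = self._fetch(key)  # only one backend call
            finally:
                waiter["event"].set()
                with self._lock:
                    self._inflight.pop(key, None)
        else:
            waiter["event"].wait()  # piggyback on the in-flight fetch
        return waiter["result"]
```

Without this guard, a popular key expiring can send a burst of identical requests to the backing store (a thundering herd), one of the production pitfalls the talk warns about.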


Taught by

USENIX

Related Courses

How to Not Destroy Your Production Kubernetes Clusters
USENIX via YouTube
SRE and ML - Why It Matters
USENIX via YouTube
Knowledge and Power - A Sociotechnical Systems Discussion on the Future of SRE
USENIX via YouTube
Tracing Bare Metal with OpenTelemetry
USENIX via YouTube
Improving How We Observe Our Observability Data - Techniques for SREs
USENIX via YouTube