Fast RDMA-based Ordered Key-Value Store Using Remote Learned Cache
Offered By: USENIX via YouTube
Course Description
Overview
Syllabus
Intro
KVS: key pillar for distributed systems
Traditional KVS uses RPC (Server-centric)
Challenge: limited NIC abstraction
Existing systems adopt caching
High cache miss cost for caching a tree: tree node size can be much larger than the KV pair
Trade-off of existing KVS
Overview of XSTORE: hybrid architecture
Our approach, the learned cache: using ML as the cache structure for a tree-based index, motivated by the learned index [1]
Client-direct Get() using learned cache
Benefits of the learned cache
Challenges of learned cache
Outline of the remaining content: server-side data structure for dynamic workloads
Models cannot learn dynamic B+Tree addresses: they can only be learned when the addresses are sorted
Solution: another layer of indirection. Observation: leaf nodes are logically sorted
Client-direct Get() using the model and translation table (TT); a minimal sketch follows the syllabus
Model retraining: the model is retrained at the server in background threads, with small cost and extra CPU usage at the server
Stale model handling: background updates cause stale learned models (see the retraining sketch after the syllabus)
Performance of XSTORE on YCSB: 100M KVs, uniform workloads
Sensitive to the dataset
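
The syllabus items on the learned cache and on client-direct Get() via the model and translation table describe the read path only at the level of slide titles. The following is a minimal, self-contained sketch of the idea, not XSTORE's code: a linear model trained over the sorted keys predicts a bounded position range, a translation table maps logical positions to storage slots, and the client fetches and scans that span. The fetch is a plain array read here; in the real system it would be a one-sided RDMA READ of the corresponding leaf. All names (LinearModel, xstore_get, translation_table) are illustrative assumptions.

```cpp
#include <algorithm>
#include <cmath>
#include <cstdint>
#include <iostream>
#include <optional>
#include <string>
#include <vector>

struct KV { uint64_t key; uint64_t value; };

// One linear model over the sorted key space: position ~= slope * key + intercept,
// with a recorded worst-case error so a lookup only scans a bounded span.
struct LinearModel {
    double slope = 0.0, intercept = 0.0;
    long long max_err = 0;

    void train(const std::vector<uint64_t>& keys) {
        // Least-squares fit of position against key.
        double n = (double)keys.size(), sx = 0, sy = 0, sxx = 0, sxy = 0;
        for (size_t i = 0; i < keys.size(); ++i) {
            double x = (double)keys[i], y = (double)i;
            sx += x; sy += y; sxx += x * x; sxy += x * y;
        }
        slope = (n * sxy - sx * sy) / (n * sxx - sx * sx);
        intercept = (sy - slope * sx) / n;
        // Record the worst prediction error over the training keys.
        for (size_t i = 0; i < keys.size(); ++i) {
            long long p = std::llround(slope * (double)keys[i] + intercept);
            long long d = p - (long long)i;
            max_err = std::max(max_err, d < 0 ? -d : d);
        }
    }

    // Position range [lo, hi] that must contain the key if it was in the training set.
    std::pair<long long, long long> predict(uint64_t key, long long n) const {
        long long p = std::llround(slope * (double)key + intercept);
        return { std::max(0LL, p - max_err), std::min(n - 1, p + max_err) };
    }
};

// Client-direct Get(): the cached model predicts a span of logical positions,
// the translation table maps each position to a "remote" slot, and the slots
// are fetched and scanned. The fetch is a local array read in this sketch;
// in the real system it would be a one-sided RDMA READ of the leaf node.
std::optional<uint64_t> xstore_get(uint64_t key,
                                   const LinearModel& model,
                                   const std::vector<size_t>& translation_table,
                                   const std::vector<KV>& server_memory) {
    auto [lo, hi] = model.predict(key, (long long)translation_table.size());
    for (long long pos = lo; pos <= hi; ++pos) {
        const KV& entry = server_memory[translation_table[(size_t)pos]];
        if (entry.key == key) return entry.value;
    }
    // Not found within the predicted span: the key is absent, or the cached
    // model is stale and the client should fall back to a server-side lookup.
    return std::nullopt;
}

int main() {
    std::vector<uint64_t> keys;
    std::vector<KV> server_memory;          // stands in for server-side leaf storage
    std::vector<size_t> translation_table;  // logical sorted position -> slot in server memory
    for (uint64_t i = 0; i < 1000; ++i) {
        keys.push_back(i * 7);              // sorted synthetic keys
        server_memory.push_back({i * 7, i});
        translation_table.push_back((size_t)i);
    }
    LinearModel model;
    model.train(keys);

    auto hit  = xstore_get(7 * 42, model, translation_table, server_memory);
    auto miss = xstore_get(5, model, translation_table, server_memory);
    std::cout << "get(294) = " << (hit ? std::to_string(*hit) : "miss") << "\n";
    std::cout << "get(5)   = " << (miss ? std::to_string(*miss) : "miss") << "\n";
}
```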
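
The model-retraining and stale-model items state that models are retrained at the server in background threads, which can leave client-cached models stale. The sketch below is a hypothetical illustration of one way to surface that, using a version number published with each retrained model; it is not XSTORE's actual retraining or invalidation protocol, and the ModelStore type is assumed for illustration.

```cpp
#include <atomic>
#include <chrono>
#include <cstdint>
#include <iostream>
#include <memory>
#include <mutex>
#include <thread>

struct Model {
    uint64_t version = 0;   // bumped on every retrain
    // ... learned-model parameters and error bounds would live here ...
};

// Holds the currently published model; readers copy a shared_ptr under a lock.
class ModelStore {
    std::mutex mu_;
    std::shared_ptr<const Model> current_ = std::make_shared<Model>();
public:
    std::shared_ptr<const Model> load() {
        std::lock_guard<std::mutex> g(mu_);
        return current_;
    }
    void publish(std::shared_ptr<const Model> m) {
        std::lock_guard<std::mutex> g(mu_);
        current_ = std::move(m);
    }
};

int main() {
    ModelStore store;
    std::atomic<bool> stop{false};

    // Background thread standing in for the server's retraining threads:
    // it rebuilds the model and publishes it without blocking readers.
    std::thread trainer([&] {
        uint64_t v = 1;
        while (!stop.load()) {
            auto fresh = std::make_shared<Model>();
            fresh->version = v++;   // retraining over the current keys would go here
            store.publish(std::move(fresh));
            std::this_thread::sleep_for(std::chrono::milliseconds(10));
        }
    });

    // Client side: keep a cached model and compare versions before trusting it.
    auto cached = store.load();
    std::this_thread::sleep_for(std::chrono::milliseconds(50));
    if (store.load()->version != cached->version)
        std::cout << "cached model is stale: refetch it before using its predictions\n";

    stop = true;
    trainer.join();
}
```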
Taught by
USENIX
Related Courses
GraphX - Graph Processing in a Distributed Dataflow Framework (USENIX via YouTube)
Theseus - An Experiment in Operating System Structure and State Management (USENIX via YouTube)
RedLeaf - Isolation and Communication in a Safe Operating System (USENIX via YouTube)
Microsecond Consensus for Microsecond Applications (USENIX via YouTube)
KungFu - Making Training in Distributed Machine Learning Adaptive (USENIX via YouTube)