OmniCache: Collaborative Caching for Near-storage Accelerators
Offered By: USENIX via YouTube
Course Description
Overview
Explore a groundbreaking caching design for near-storage accelerators in this 17-minute conference talk from FAST '24. Delve into OmniCache, an innovative approach that combines near-storage and host memory capabilities to enhance I/O and data processing performance. Learn about the "near-cache" concept, which maximizes data access efficiency, and discover how collaborative caching enables concurrent operations using both host and device caches. Examine the dynamic model-driven offloading support that optimizes processing across host and device processors by monitoring hardware and software metrics. Investigate the potential of CXL technology for memory expansion and its integration into the OmniCache framework. Gain insights into the impressive performance improvements achieved by OmniCache, with up to 3.24X gains for I/O workloads and 3.06X for data processing workloads.
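The dynamic model-driven offloading idea described above — picking the host or the near-storage device processor based on monitored hardware and software metrics — can be sketched with a simple cost model. The metric names, bandwidth numbers, and linear cost formula below are illustrative assumptions for the sketch, not OmniCache's actual model:

```python
# Hypothetical sketch of a model-driven offload decision.
# The cost model and constants are assumptions made for illustration;
# OmniCache's real model is described in the FAST '24 paper.

def estimate_cost(data_bytes: int, queue_depth: int,
                  compute_gbps: float, transfer_gbps: float) -> float:
    """Rough cost: data-transfer time plus compute time scaled by queueing."""
    gb = data_bytes / 1e9
    return (gb / transfer_gbps) + (gb / compute_gbps) * (1 + queue_depth)

def choose_target(data_bytes: int, host_queue: int, dev_queue: int) -> str:
    """Pick the cheaper processor for this request (assumed parameters)."""
    # Assumed trade-off: the host CPU computes faster but pays a PCIe
    # transfer cost; the near-storage accelerator is slower but sits
    # next to the data, so its transfer cost is small.
    host_cost = estimate_cost(data_bytes, host_queue,
                              compute_gbps=20.0, transfer_gbps=4.0)
    dev_cost = estimate_cost(data_bytes, dev_queue,
                             compute_gbps=5.0, transfer_gbps=50.0)
    return "host" if host_cost < dev_cost else "device"
```

Under these made-up parameters, an idle accelerator wins for a 1 GB request (`choose_target(1 << 30, host_queue=0, dev_queue=0)` returns `"device"`), but a backed-up device queue tips the decision back to the host — which mirrors why the talk emphasizes monitoring both hardware and software metrics rather than offloading unconditionally.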
Syllabus
FAST '24 - OmniCache: Collaborative Caching for Near-storage Accelerators
Taught by
USENIX
Related Courses
Computer Architecture - Princeton University via Coursera
Introduction to Computer Architecture - Carnegie Mellon University via Independent
Build a Modern Computer from First Principles: From Nand to Tetris (Project-Centered Course) - Hebrew University of Jerusalem via Coursera
Fundamentals of Computer Systems (Part 1): Program Representation, Transformation, and Linking - Nanjing University via Coursera
Computer Architecture - Indian Institute of Technology Madras via Swayam