YoVDO

Reexamining Direct Cache Access to Optimize I/O Intensive Applications for Multi-hundred-gigabit Networks

Offered By: USENIX via YouTube

Tags

USENIX Annual Technical Conference Courses

Course Description

Overview

Explore a comprehensive analysis of Direct Cache Access (DCA) optimization for I/O-intensive applications in multi-hundred-gigabit networks in this 21-minute conference talk from USENIX ATC '20. Dive into the current implementation of DCA in Intel processors, focusing on Data Direct I/O technology (DDIO) and its impact on system performance. Learn how optimizing DDIO can significantly reduce latency for network functions running at 100 Gbps, and understand the challenges that arise at 200 Gbps. Discover key findings on cache management, including the importance of selective data injection and explicit cache bypassing. Gain insights into tuning DDIO, the impact of packet-processing time, and strategies for improving performance in current and future high-speed network systems.
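The DDIO tuning discussed in the talk centers on changing how many last-level-cache ways DDIO may inject packets into, which is controlled by the IIO LLC WAYS model-specific register. A minimal sketch of inspecting and adjusting it with the standard `msr-tools` utilities follows; the MSR address `0xC8B`, the default value `0x600`, and the bitmask encoding are assumptions based on Intel Skylake-SP platforms and should be verified against your CPU before writing anything:

```shell
# Sketch: adjust the number of LLC ways available to DDIO via msr-tools.
# Requires root and the msr kernel module. All register details below are
# assumed for Intel Skylake-SP and may differ on other microarchitectures.
sudo modprobe msr

# Read the current bitmask; each set bit grants DDIO one LLC way.
# A commonly reported Skylake-SP default is 0x600 (two ways).
sudo rdmsr 0xc8b

# Widen DDIO's share to four ways (bits 7-10 set) -- assumed encoding;
# more ways reduce DDIO evictions at high rates, at the cost of cache
# capacity for cores.
sudo wrmsr 0xc8b 0x780
```

Writing undocumented MSRs can destabilize a system, so this is an experimentation sketch rather than a production recipe.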

Syllabus

Intro
Direct Cache Access (DCA)
Intel Data Direct I/O (DDIO)
Pressure from these trends
What happens at 200 Gbps?
How does DDIO work?
LLC ways used by DDIO
How does DDIO perform?
Reducing #Descriptors is Not Sufficient! (1/2)
IIO LLC WAYS Register
Impact of Tuning DDIO
Is Tuning DDIO Enough?
What about Current Systems?
Using Our Knowledge for 200 Gbps
Our Key Findings (1/2)
Impact of Processing Time
Conclusion


Taught by

USENIX

Related Courses

Amazon DynamoDB - A Scalable, Predictably Performant, and Fully Managed NoSQL Database Service
USENIX via YouTube
Faasm - Lightweight Isolation for Efficient Stateful Serverless Computing
USENIX via YouTube
AC-Key - Adaptive Caching for LSM-based Key-Value Stores
USENIX via YouTube
The Future of the Past - Challenges in Archival Storage
USENIX via YouTube
A Decentralized Blockchain with High Throughput and Fast Confirmation
USENIX via YouTube