YoVDO

How Application-Level Priority Management Keeps Latency Low and Throughput High

Offered By: Linux Foundation via YouTube

Tags

Task Management Courses

Course Description

Overview

Explore application-level priority management techniques for optimizing both throughput and latency in a single application during this Linux Foundation webinar. Delve into ScyllaDB CTO and Co-Founder Avi Kivity's insights on achieving high performance through strategies like Shard per Core, task isolation, and application-managed tasks. Learn about execution timelines, switching queues, preemption techniques, and stall detectors. Compare I/O to CPU challenges, understand safe disk space management, and discover scheduler basics with operation highlights. Examine dynamic shares adjustment and resource partitioning for providing different quality of service to users. Gain valuable knowledge on balancing the constant tension between throughput and latency in modern applications.
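The task-isolation, preemption, and stall-detector ideas mentioned above can be illustrated with a small sketch. This is not ScyllaDB/Seastar code; the scheduler, the time-slice value, and the task names are all illustrative assumptions: cooperative tasks yield at explicit preemption points, and a stall detector flags any task that overruns its time slice before yielding.

```python
import time
from collections import deque

# Illustrative sketch only (assumed names and values, not Seastar/ScyllaDB
# code): cooperative tasks yield at explicit preemption points, and a
# stall detector flags any task that overruns its time slice.

TIME_SLICE = 0.001  # seconds a task may run before it is expected to yield

class Scheduler:
    def __init__(self):
        self.queue = deque()   # runnable tasks, served round-robin
        self.stalls = []       # names of tasks caught holding the CPU too long

    def add(self, name, task):
        self.queue.append((name, task))

    def run(self):
        while self.queue:
            name, task = self.queue.popleft()
            start = time.monotonic()
            try:
                next(task)  # run the task until its next preemption point
            except StopIteration:
                continue    # task finished; drop it
            if time.monotonic() - start > TIME_SLICE:
                self.stalls.append(name)     # stall detector fired
            self.queue.append((name, task))  # re-queue behind other tasks

def well_behaved():
    for _ in range(3):
        yield  # frequent preemption points keep latency low

def stalling():
    yield
    deadline = time.monotonic() + 0.05
    while time.monotonic() < deadline:
        pass   # long computation with no yield: blocks everyone else
    yield

sched = Scheduler()
sched.add("good", well_behaved())
sched.add("bad", stalling())
sched.run()
print(sched.stalls)  # the stall detector reports only "bad"
```

The point of the sketch is the tension the webinar describes: throughput-oriented work wants long uninterrupted runs, while latency-oriented work needs frequent preemption points, and a stall detector makes violations visible.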

Syllabus

Intro
Comparing throughput and latency
Why mix throughput and latency computing?
Achieving high throughput
Shard per Core
Isolating tasks in threads
Application-level task isolation
Application managed tasks
Execution timeline
Switching queues
Preemption techniques
Stall detector
Comparing I/O to CPU
Challenges with I/O
Safe space for disk
Scheduler Basics - Operation Highlights
Dynamic Shares Adjustment
Resource partitioning (QoS): provide different quality of service to different users
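The "Dynamic Shares Adjustment" and QoS items above can be sketched as shares-based queue selection. This is a rough illustration under assumed semantics, not the scheduler presented in the webinar: each queue carries a shares weight, and the scheduler repeatedly runs the queue with the lowest runtime-to-shares ratio, so CPU time converges to the ratio of the shares.

```python
# Illustrative sketch (assumed semantics, not the webinar's scheduler):
# each queue has a "shares" weight, and the scheduler always runs the
# queue with the lowest runtime-to-shares ratio, so CPU time converges
# to the ratio of the shares. Adjusting shares at runtime shifts the split.

class TaskQueue:
    def __init__(self, name, shares):
        self.name = name
        self.shares = shares   # relative weight; can be adjusted dynamically
        self.runtime = 0.0     # CPU time consumed so far

def pick(queues):
    # Run the queue that is furthest behind its fair share.
    return min(queues, key=lambda q: q.runtime / q.shares)

interactive = TaskQueue("interactive", shares=800)
batch = TaskQueue("batch", shares=200)

counts = {"interactive": 0, "batch": 0}
for _ in range(1000):
    q = pick([interactive, batch])
    q.runtime += 1.0           # each slice costs one time unit
    counts[q.name] += 1

print(counts)  # an 80/20 split: {'interactive': 800, 'batch': 200}
```

Because only the shares fields drive the split, resource partitioning for different users reduces to assigning each user's queue its own shares value.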


Taught by

Linux Foundation

Related Courses

Get Organized: How to be a Together Teacher
Relay Graduate School of Education via Coursera
Concurrency
AdaCore via Independent
Sprint Planning for Faster Agile Team Delivery
University System of Maryland via edX
Introduction to Project Management with ClickUp
Coursera Project Network via Coursera
Create a Project Charter in Google Sheets
Coursera Project Network via Coursera