YoVDO

Optimizing Knowledge Distillation Training With Volcano

Offered By: CNCF [Cloud Native Computing Foundation] via YouTube

Tags

Conference Talks Courses, Kubernetes Courses, Model Compression Courses, Volcano Courses

Course Description

Overview

Explore a conference talk on optimizing knowledge distillation training with Volcano. Delve into the approach of using Volcano as the scheduler to deploy Teacher models onto online Kubernetes GPU inference clusters, increasing the throughput of the knowledge distillation pipeline. Learn how this method enables flexible scheduling that mitigates task failures during peak hours and maximizes the use of cluster resources. Discover the detailed process of optimizing elastic distillation training with Volcano, complete with benchmark data. Gain insights into large-scale training, Elastic Deep Learning, and the advantages of this approach. Examine the Volcano architecture, GPU sharing techniques, and integration with Kubernetes for efficient model compression and deployment.
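
To make the setup concrete, here is a minimal, hypothetical sketch of submitting a distillation worker pool as a Volcano Job through the Kubernetes Python client. It is not taken from the talk: the queue name, container image, replica counts, and resource figures are placeholder assumptions, and the cluster is assumed to have Volcano installed.

```python
# Sketch: create a Volcano Job for an elastic knowledge-distillation worker pool.
# Assumes a cluster with the Volcano CRDs installed; all names/values are placeholders.
from kubernetes import client, config

config.load_kube_config()  # use the local kubeconfig

distill_job = {
    "apiVersion": "batch.volcano.sh/v1alpha1",
    "kind": "Job",
    "metadata": {"name": "kd-student-training"},
    "spec": {
        "schedulerName": "volcano",   # hand pod scheduling to Volcano
        "queue": "default",           # assumed queue name
        "minAvailable": 1,            # job may start with a single worker
        "tasks": [
            {
                "name": "student-worker",
                "replicas": 4,        # extra workers join as GPUs free up
                "template": {
                    "spec": {
                        "restartPolicy": "Never",
                        "containers": [
                            {
                                "name": "trainer",
                                "image": "example.com/kd-trainer:latest",  # placeholder image
                                "resources": {"limits": {"nvidia.com/gpu": 1}},
                            }
                        ],
                    }
                },
            }
        ],
    },
}

client.CustomObjectsApi().create_namespaced_custom_object(
    group="batch.volcano.sh",
    version="v1alpha1",
    namespace="default",
    plural="jobs",
    body=distill_job,
)
```

Because Volcano gang-schedules against minAvailable rather than the full replica count, a job configured this way can start with a single worker during peak inference hours and scale out as GPU cards become idle, which is the kind of elasticity the talk describes for distillation workloads sharing an online inference cluster.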

Syllabus

Introduction
Project Background
Large Scale Training
Elastic Deep Learning
Knowledge Distillation
Advantages
Training Vector
William Wang
Challenges
CNCF Sandbox
Volcano Architecture
Survival Kubernetes
Volcano Job
GPU Sharing
Cromwell
Commander
Kubernetes


Taught by

CNCF [Cloud Native Computing Foundation]

Related Courses

Building Geospatial Apps on Postgres, PostGIS, & Citus at Large Scale
Microsoft via YouTube
Unlocking the Power of ML for Your JavaScript Applications with TensorFlow.js
TensorFlow via YouTube
Managing the Reactive World with RxJava - Jake Wharton
ChariotSolutions via YouTube
What's New in Grails 2.0
ChariotSolutions via YouTube
Performance Analysis of Apache Spark and Presto in Cloud Environments
Databricks via YouTube