
Online Optimization Meets Federated Learning - Tutorial

Offered By: Uncertainty in Artificial Intelligence via YouTube

Tags

Federated Learning Courses
Gradient Descent Courses
Convex Optimization Courses
Bandit Algorithms Courses
Stochastic Optimization Courses

Course Description

Overview

Explore the intersection of online optimization and federated learning in this comprehensive 2-hour, 22-minute tutorial from the Uncertainty in Artificial Intelligence conference. Delve into state-of-the-art theoretical results in online and bandit convex optimization, federated/distributed optimization, and emerging findings at the intersection of the two.

Begin with an in-depth look at the Online Optimization setting, covering the adversarial model, the notion of regret, and various feedback models, and analyze the performance guarantees of online gradient descent-based algorithms. Next, examine the Distributed/Federated Stochastic Optimization model, discussing data heterogeneity assumptions, local update algorithms, and min-max optimal algorithms, and note the scarcity of results beyond the stochastic setting, particularly against adaptive adversaries. Conclude by exploring the emerging field of Distributed Online Optimization, which introduces a distributed notion of regret, along with recent developments in first- and zeroth-order feedback. Minimal sketches of the two core algorithmic ideas appear below.

Gain insights into numerous open questions and practical applications of this framework, presented by experts Aadirupa Saha and Kumar Kshitij Patel.
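To make the online gradient descent-based algorithms discussed above concrete, here is a minimal sketch of projected online gradient descent with regret tracking. The linear loss stream, ball radius, and step-size schedule are illustrative assumptions, not material from the tutorial itself:

    import numpy as np

    rng = np.random.default_rng(0)
    T, d = 200, 5
    radius = 1.0                      # radius of the feasible Euclidean ball

    # Adversary: a stream of linear losses f_t(x) = <c_t, x> (hypothetical)
    cs = rng.normal(size=(T, d))

    def project(x, r):
        """Project x onto the Euclidean ball of radius r."""
        n = np.linalg.norm(x)
        return x if n <= r else x * (r / n)

    x = np.zeros(d)
    plays = []
    for t in range(1, T + 1):
        plays.append(x.copy())
        grad = cs[t - 1]              # gradient of the linear loss at x
        eta = radius / np.sqrt(t)     # standard O(1/sqrt(t)) step size
        x = project(x - eta * grad, radius)

    # Regret vs. the best fixed point in hindsight; for linear losses over
    # a ball this comparator is -radius * c_sum / ||c_sum||.
    c_sum = cs.sum(axis=0)
    best = -radius * c_sum / np.linalg.norm(c_sum)
    regret = sum(cs[t] @ plays[t] for t in range(T)) - c_sum @ best
    print(f"regret after {T} rounds: {regret:.3f} (sublinear, ~O(sqrt(T)))")

With the O(1/sqrt(t)) step size, this scheme attains the classical O(sqrt(T)) regret guarantee for convex losses that the tutorial's first part analyzes.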
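The local update algorithms mentioned for the federated setting can be sketched in the same spirit. Below is a minimal Local SGD (FedAvg-style) loop on a synthetic heterogeneous least-squares problem; the client objectives, noise level, and all hyperparameters are assumptions chosen for illustration:

    import numpy as np

    rng = np.random.default_rng(1)
    M, d = 4, 5                          # number of clients, dimension

    # Data heterogeneity: each client has its own quadratic objective
    A = rng.normal(size=(M, d, d))
    b = rng.normal(size=(M, d))

    def client_grad(m, x):
        """Stochastic gradient of f_m(x) = 0.5 * ||A_m x - b_m||^2."""
        return A[m].T @ (A[m] @ x - b[m]) + 0.1 * rng.normal(size=d)

    x_global = np.zeros(d)
    rounds, K, eta = 50, 10, 0.01        # communication rounds, local steps, step size
    for r in range(rounds):
        local_models = []
        for m in range(M):
            x = x_global.copy()
            for _ in range(K):           # K local SGD steps between communications
                x -= eta * client_grad(m, x)
            local_models.append(x)
        x_global = np.mean(local_models, axis=0)   # server averages local iterates

    print("final global model:", np.round(x_global, 3))

The tension the tutorial highlights lives in the interplay of K (local computation) and the heterogeneity across the f_m: more local steps save communication but let clients drift toward their own optima, which is where the min-max optimality results and the open questions beyond the stochastic setting come in.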

Syllabus

UAI 2023 Tutorial: Online Optimization Meets Federated Learning


Taught by

Uncertainty in Artificial Intelligence

Related Courses

Secure and Private AI
Facebook via Udacity
Advanced Deployment Scenarios with TensorFlow
DeepLearning.AI via Coursera
Big Data for Reliability and Security
Purdue University via edX
MLOps for Scaling TinyML
Harvard University via edX
Edge Analytics: IoT and Data Science
LinkedIn Learning