A Smoothed Analysis of the Greedy Algorithm for Linear Contextual Bandits - Theory Seminar
Offered By: Paul G. Allen School via YouTube
Course Description
Overview
Explore a theory seminar on the smoothed analysis of the greedy algorithm in bandit learning. Delve into the tension between exploration and exploitation, particularly in high-stakes decision-making scenarios involving individuals. Examine how the greedy algorithm, which always makes the immediately optimal decision and never explores, can be analyzed in linear contextual bandit problems. Learn about the smoothed analysis approach, which shows that small random perturbations of adversarially chosen contexts suffice for the greedy algorithm to achieve "no regret" performance. Investigate the implications for balancing exploration and exploitation in slightly perturbed environments. Cover topics such as classic algorithm design, online algorithms, online ML algorithms, single- and multi-parameter models, regret analysis, diversity, and margins. Gain insights into the potential benefits and applications of greedy algorithms in various settings.
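The setting described above can be sketched in code. The following is a minimal, hypothetical simulation (not the speakers' implementation): a true parameter vector `theta_star` generates linear rewards, each round's base contexts are perturbed by Gaussian noise with standard deviation `sigma` (the "smoothing"), and the learner fits ridge regression on past observations and always pulls the greedy arm, with no explicit exploration. All names and parameter values are illustrative assumptions.

```python
import numpy as np

def greedy_linear_bandit(T=2000, d=5, K=3, sigma=0.1, noise=0.1, seed=0):
    """Greedy (exploit-only) learner on a smoothed linear contextual bandit.

    Hypothetical sketch: contexts are base vectors plus Gaussian smoothing
    noise; rewards are x . theta_star plus observation noise. Returns the
    average per-round regret against the best arm in hindsight each round.
    """
    rng = np.random.default_rng(seed)
    theta_star = rng.normal(size=d)
    theta_star /= np.linalg.norm(theta_star)

    A = np.eye(d)      # ridge Gram matrix: I + sum of x x^T over pulled arms
    b = np.zeros(d)    # sum of reward * context over pulled arms
    total_regret = 0.0
    for t in range(T):
        base = rng.uniform(-1, 1, size=(K, d))      # base contexts
        X = base + sigma * rng.normal(size=(K, d))  # smoothed perturbation
        theta_hat = np.linalg.solve(A, b)           # ridge estimate of theta
        arm = int(np.argmax(X @ theta_hat))         # greedy: no exploration
        reward = X[arm] @ theta_star + noise * rng.normal()
        total_regret += np.max(X @ theta_star) - X[arm] @ theta_star
        A += np.outer(X[arm], X[arm])
        b += reward * X[arm]
    return total_regret / T
```

The key point of the smoothed analysis is visible here: the perturbation makes the pulled contexts diverse enough that the greedy updates alone drive `theta_hat` toward `theta_star`, so the average per-round regret shrinks even though the algorithm never deliberately explores.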
Syllabus
Intro
Meta-question
Classic Algorithm Design
Online Algorithms
Online ML Algorithms
Outline
Single-parameter model
Multi-parameter model
Regret wrt M
(Good) performance of greedy algorithms?
Single-parameter regime
Multi-parameter regime
A change in perspective
Diversity
Margins
Why might we use greedy?
Taught by
Paul G. Allen School
Related Courses
Natural Language Processing (Columbia University via Coursera)
Intro to Algorithms (Udacity)
Conception et mise en œuvre d'algorithmes (École Polytechnique via Coursera)
Paradigms of Computer Programming (Université catholique de Louvain via edX)
Data Structures and Algorithm Design Part I | 数据结构与算法设计(上) (Tsinghua University via edX)