Bandits - Kevin Jamieson - University of Washington
Offered By: Paul G. Allen School via YouTube
Course Description
Overview
Explore the fundamentals of bandit algorithms in this comprehensive lecture from the University of Washington. Delve into the future of machine learning and discover how bandit algorithms are applied in various real-world scenarios, including drug development, Google Maps optimization, and content recommendation systems. Learn about stochastic models, Thompson sampling, and regret minimization techniques. Gain insights into key concepts such as sublinear regret, sub-Gaussian distributions, and the Central Limit Theorem. Enhance your understanding of this crucial area of machine learning and its practical applications in decision-making processes.
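The overview names Thompson sampling and regret minimization; as a concrete illustration, here is a minimal sketch of Thompson sampling for a Bernoulli bandit with Beta posteriors, tracking cumulative pseudo-regret. The arm means, horizon, and seed below are illustrative assumptions, not values from the lecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical Bernoulli bandit: true (unknown) success probability per arm.
true_means = np.array([0.3, 0.5, 0.7])
n_arms = len(true_means)
horizon = 5000

# Beta(1, 1) priors: pseudo-counts of successes and failures per arm.
successes = np.ones(n_arms)
failures = np.ones(n_arms)

cumulative_regret = 0.0
best_mean = true_means.max()

for t in range(horizon):
    # Thompson sampling: draw one sample from each arm's posterior
    # and play the arm whose sampled mean is largest.
    samples = rng.beta(successes, failures)
    arm = int(np.argmax(samples))

    # Observe a Bernoulli reward and update that arm's posterior counts.
    reward = rng.random() < true_means[arm]
    successes[arm] += reward
    failures[arm] += 1 - reward

    # Pseudo-regret: expected reward lost versus always playing the best arm.
    cumulative_regret += best_mean - true_means[arm]

print(f"cumulative regret after {horizon} rounds: {cumulative_regret:.1f}")
```

Because the Beta posteriors concentrate around the true arm means, the algorithm pulls suboptimal arms less and less often, so cumulative regret grows sublinearly in the horizon, which is the notion of sublinear regret listed in the syllabus.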
Syllabus
Introduction
The Future of Machine Learning
Bandits
Drug Makers
Google Maps
Content Recommendation
Stochastic Model
Thompson Sampling
Regret minimization
Regret
Sublinear Regret
Sub-Gaussian
Central Limit Theorem
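For reference, the regret and sub-Gaussian topics above correspond to the following standard definitions from the bandit literature (stated here by convention, not transcribed from the lecture):

```latex
% Pseudo-regret after T rounds, playing arms I_1,\dots,I_T with arm means
% \mu_1,\dots,\mu_K and \mu^\ast = \max_i \mu_i:
R_T \;=\; T\,\mu^\ast \;-\; \mathbb{E}\!\left[\sum_{t=1}^{T} \mu_{I_t}\right],
\qquad \text{sublinear regret:}\quad \lim_{T\to\infty} R_T / T = 0.

% A zero-mean random variable X is \sigma-sub-Gaussian if
\mathbb{E}\!\left[e^{\lambda X}\right] \;\le\; e^{\lambda^2 \sigma^2 / 2}
\quad \text{for all } \lambda \in \mathbb{R}.
```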
Taught by
Paul G. Allen School
Related Courses
Drug Discovery
University of California, San Diego via Coursera
新药发现和药物靶点 | Drug Discovery and its Target
Peking University via edX
Principles and Applications of NMR Spectroscopy
Indian Institute of Science Bangalore via Swayam
Cell Culture Technologies
Indian Institute of Technology Kanpur via Swayam
Medicinal Chemistry
Indian Institute of Technology Madras via Swayam