YoVDO

PAC Learning

Offered By: Churchill CompSci Talks via YouTube

Tags

Machine Learning Courses, Supervised Learning Courses, Binary Classification Courses, PAC Learning Courses

Course Description

Overview

Explore the concept of Probably Approximately Correct (PAC) learning in this 31-minute conference talk by Peter Rugg. Delve into the foundations of machine learning, examining what types of problems can be learned and what it means to learn a problem. Understand the PAC framework's approach to specifying worst-case error bounds for problem learnability. Follow the formulation of supervised binary classification and the definition of PAC learning. Investigate methods for determining PAC learnability, covering topics such as proper and improper learning, agnostic learning, and the Vapnik-Chervonenkis dimension. Gain insights into the significance and influence of PAC in machine learning theory, as well as its criticisms.
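The worst-case error bounds mentioned above can be made concrete with the standard PAC sample-complexity bound for a finite hypothesis class in the realizable setting, m ≥ (ln|H| + ln(1/δ))/ε. The sketch below is generic background, not code from the talk; the function name and example numbers are illustrative:

```python
import math

def pac_sample_bound(h_size: int, epsilon: float, delta: float) -> int:
    """Samples sufficient for a consistent learner over a finite hypothesis
    class H of size h_size to be probably (with probability >= 1 - delta)
    approximately (true error <= epsilon) correct, in the realizable setting:
    m >= (ln|H| + ln(1/delta)) / epsilon."""
    return math.ceil((math.log(h_size) + math.log(1.0 / delta)) / epsilon)

# e.g. |H| = 1000 hypotheses, 10% error tolerance, 95% confidence
m = pac_sample_bound(1000, epsilon=0.1, delta=0.05)
print(m)  # 100 samples suffice under this bound
```

Note how the bound grows only logarithmically in |H| and 1/δ but linearly in 1/ε, which is why the "approximately" parameter dominates the sample cost.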

Syllabus

Intro
Supervised Machine Learning
Problem Parameters
Adversarial (Worst Case) Choices
Proper and Improper Learning
Agnostic Learning
The Theoretical Question
Why Probably (Approximately Correct)?
Learnability Example
Vapnik-Chervonenkis Dimension
VC Dimension and Proper Learnability
Significance and Influence of PAC
Criticisms of PAC
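As background for the VC dimension items in the syllabus, a standard worked example (general background, not taken from the talk itself) is the class of intervals on the real line, which has VC dimension 2:

```latex
\[
\mathcal{H} = \{\, h_{[a,b]} : a \le b \,\}, \qquad
h_{[a,b]}(x) = \mathbf{1}[x \in [a,b]].
\]
Two points $x_1 < x_2$ are shattered: each labeling $(0,0)$, $(1,0)$,
$(0,1)$, $(1,1)$ is realized by an interval covering neither point, only
$x_1$, only $x_2$, or both. No three points $x_1 < x_2 < x_3$ can be
shattered, since the labeling $(1,0,1)$ would force
$x_1, x_3 \in [a,b]$ but $x_2 \notin [a,b]$, which no single interval
achieves. Hence $\mathrm{VCdim}(\mathcal{H}) = 2$.
\]
```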


Taught by

Churchill CompSci Talks

Related Courses

Classification Models
Udacity
Evaluate Machine Learning Models with Yellowbrick
Coursera Project Network via Coursera
Logistic Regression with Python and Numpy
Coursera Project Network via Coursera
Computational Learning Theory and Beyond
openHPI
Introduction to Deep Learning with Keras
DataCamp