A Guide to Cross-Validation for AI - Avoiding Overfitting and Ensuring Generalizability
Offered By: Molecular Imaging & Therapy via YouTube
Course Description
Overview
Explore cross-validation techniques for AI in this comprehensive 49-minute video lecture by Dr. Tyler Bradshaw from Molecular Imaging & Therapy. Delve into the concepts of overfitting and generalizability, and learn about the pitfalls of one-time split methods. Understand the importance of representative test sets and of avoiding tuning to the test set. Discover various cross-validation approaches, including K-fold with folded and hold-out test sets, nested cross-validation, leave-one-out, and random sampling. Gain insight into selecting the most appropriate approach by weighing the pros and cons of each. The lecture concludes with final thoughts and a reference paper for further study, providing a solid foundation for implementing effective cross-validation in AI projects.
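For readers who want to see the ideas above in code, a minimal sketch follows, assuming scikit-learn and a synthetic dataset (neither is specified in the lecture): a hold-out test set is carved off once, K-fold cross-validation runs on the remaining development data, and the test set is scored only after model selection is final.

```python
# Minimal sketch: K-fold CV with a held-out test set (scikit-learn assumed).
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold, cross_val_score, train_test_split

# Synthetic data standing in for a real dataset (illustrative assumption).
X, y = make_classification(n_samples=500, n_features=20, random_state=0)

# Guard against Pitfall #2: split off the test set once, before any tuning.
X_dev, X_test, y_dev, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0
)

# K-fold CV on the development set; each fold refits the model from scratch,
# so the scores characterize the pipeline, not a single fitted model.
cv = KFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(LogisticRegression(max_iter=1000), X_dev, y_dev, cv=cv)
print(f"CV accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")

# Only once model selection is final, score the untouched hold-out test set.
final_model = LogisticRegression(max_iter=1000).fit(X_dev, y_dev)
print(f"Hold-out test accuracy: {final_model.score(X_test, y_test):.3f}")
```

Because cross_val_score refits on every fold, the reported mean and spread describe the procedure rather than any one model, which is the point made in the syllabus note below.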
Syllabus
Introduction
Overfitting vs. generalizability
Pitfalls of using one-time split method
Pitfall #1: Non-representative test set
Pitfall #2: Tuning to the test set
Cross-validation
Important note: in CV we are testing the pipeline, not a single model
K-fold, folded test set
K-fold, hold-out test set
Nested cross-validation (see the sketch after this syllabus)
Leave-one-out
Random sampling
Selecting an approach: pros and cons
Final thoughts
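As a companion to the nested cross-validation item above, here is a minimal sketch, again assuming scikit-learn; the SVC model and the small C grid are illustrative assumptions, not choices taken from the lecture. The inner loop tunes hyperparameters while the outer loop estimates how well the whole tune-then-fit procedure generalizes.

```python
# Minimal sketch: nested cross-validation (scikit-learn assumed).
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, KFold, cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, n_features=20, random_state=0)

inner_cv = KFold(n_splits=3, shuffle=True, random_state=1)  # tuning folds
outer_cv = KFold(n_splits=5, shuffle=True, random_state=2)  # evaluation folds

# Inner loop: hyperparameter search over an illustrative grid.
tuner = GridSearchCV(SVC(), param_grid={"C": [0.1, 1, 10]}, cv=inner_cv)

# Outer loop: each fold reruns the entire search from scratch, so the score
# reflects the full pipeline, tuning included.
scores = cross_val_score(tuner, X, y, cv=outer_cv)
print(f"Nested CV accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```

Since no outer fold ever influences the hyperparameters used to score it, nested CV avoids the optimistic bias of tuning and evaluating on the same data.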
Taught by
Molecular Imaging & Therapy
Related Courses
Practical Machine Learning - Johns Hopkins University via Coursera
Practical Deep Learning For Coders - fast.ai via Independent
機器學習基石下 (Machine Learning Foundations) - Algorithmic Foundations - National Taiwan University via Coursera
Data Analytics Foundations for Accountancy II - University of Illinois at Urbana-Champaign via Coursera
Entraînez un modèle prédictif linéaire (Train a Linear Predictive Model) - CentraleSupélec via OpenClassrooms