Walking the Random Forest and Boosting the Trees
Offered By: EuroPython Conference via YouTube
Course Description
Overview
Explore tree-based ensemble models in this EuroPython 2018 conference talk by Kevin Lemagnen. Dive into Random Forest and Gradient Boosting, two powerful machine learning techniques built on bagging and boosting, respectively. Learn how these ensemble models compare to Deep Learning and why they remain essential tools for data scientists. Discover how to implement them in Python using popular libraries such as LightGBM, XGBoost, and scikit-learn. Gain insight into the theory behind these models and their practical applications across a wide range of problems. Understand why ensemble models are often easier to tune and interpret than more complex alternatives. Follow along with the provided notebook to bridge the gap between theoretical concepts and hands-on implementation.
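To connect the description to code before opening the talk's notebook, the sketch below is a minimal, hypothetical example (not taken from the talk) that trains both ensemble styles with scikit-learn: a Random Forest built by bagging and a Gradient Boosting model built by boosting. The synthetic dataset from make_classification and all hyperparameter values are illustrative assumptions.

```python
# Minimal sketch (not the talk's notebook): compare the two ensemble styles
# discussed in the talk, assuming scikit-learn is installed.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic binary classification data stands in for the talk's dataset.
X, y = make_classification(n_samples=2000, n_features=20, n_informative=10,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25,
                                                    random_state=0)

# Bagging: many deep trees fit on bootstrap samples, predictions averaged.
forest = RandomForestClassifier(n_estimators=200, random_state=0)
forest.fit(X_train, y_train)

# Boosting: shallow trees added sequentially, each one correcting the errors
# of the ensemble built so far.
boosted = GradientBoostingClassifier(n_estimators=200, learning_rate=0.1,
                                     max_depth=3, random_state=0)
boosted.fit(X_train, y_train)

print("Random Forest accuracy:    ", forest.score(X_test, y_test))
print("Gradient Boosting accuracy:", boosted.score(X_test, y_test))
```

The same fit/score pattern carries over to the LightGBM and XGBoost libraries mentioned above, whose scikit-learn-compatible estimators can generally be swapped in for the boosting model.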
Syllabus
Introduction
Data
Random Forest
Boosting
Recommendations
Taught by
EuroPython Conference
Related Courses
A Brief History of Data Storage
EuroPython Conference via YouTube
Breaking the Stereotype - Evolution & Persistence of Gender Bias in Tech
EuroPython Conference via YouTube
We Can Get More from Spatial, GIS, and Public Domain Datasets
EuroPython Conference via YouTube
Using NLP to Detect Knots in Protein Structures
EuroPython Conference via YouTube
The Challenges of Doing Infra-As-Code Without "The Cloud"
EuroPython Conference via YouTube