Successfully Evaluating Predictive Modelling
Offered By: University of Edinburgh via edX
Course Description
Overview
A predictive exercise is not finished when a model is built. This course equips you with the essential skill of understanding performance evaluation metrics, applied in Python, so that you can determine whether a model is performing adequately.
Specifically, you will learn:
- The measures that are appropriate for evaluating predictive models
- Procedures that ensure models do not cheat, for example through overfitting or by predicting incorrect distributions
- How different evaluation criteria reveal where one model excels over another, and when each criterion should be used (a sketch follows this list)
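The course does not publish its code here, so the following is only a minimal sketch, assuming scikit-learn (the course states that Python is used, but not which libraries), of evaluating a classifier with several metrics and using cross-validation as a guard against overfitting:

```python
# Minimal sketch: evaluate a classifier with several metrics and use
# cross-validation to check for overfitting.
# Assumes scikit-learn; the course only states that Python is used.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score, train_test_split
from sklearn.metrics import accuracy_score, precision_score, recall_score, roc_auc_score

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
pred = model.predict(X_test)
proba = model.predict_proba(X_test)[:, 1]

# Held-out metrics: each answers a different question about performance.
print("accuracy :", accuracy_score(y_test, pred))
print("precision:", precision_score(y_test, pred))
print("recall   :", recall_score(y_test, pred))
print("ROC AUC  :", roc_auc_score(y_test, proba))

# A large gap between training accuracy and cross-validated accuracy is
# a classic sign that a model is "cheating" by overfitting.
cv_scores = cross_val_score(RandomForestClassifier(random_state=0), X, y, cv=5)
print("5-fold CV accuracy:", cv_scores.mean())
```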
These skills are the foundation of optimising successful predictive models. The concepts are brought together in a comprehensive case study on customer churn: you will select suitable variables to predict whether a customer will leave a telecommunications provider based on their behaviour, build various models, and benchmark them with the appropriate evaluation criteria (illustrated in the sketch below).
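As a hedged illustration of that benchmarking step, the sketch below compares two hypothetical candidate churn models on synthetic stand-in data (the course's actual dataset and model choices are not specified here). ROC AUC is used as the shared criterion because churn labels are typically imbalanced and plain accuracy rewards always predicting "no churn":

```python
# Hypothetical sketch of the case-study workflow: benchmark several
# candidate churn models with one agreed evaluation criterion.
# The data here is synthetic; the course uses a telecom churn dataset.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

# Stand-in for customer behaviour features and an imbalanced churn label.
X, y = make_classification(n_samples=2000, n_features=15,
                           weights=[0.8, 0.2], random_state=1)

candidates = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "decision_tree": DecisionTreeClassifier(max_depth=5, random_state=1),
}

# ROC AUC handles class imbalance better than raw accuracy.
for name, model in candidates.items():
    auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
    print(f"{name}: mean CV ROC AUC = {auc:.3f}")
```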
Syllabus
Week 1: Evaluation Metrics and Feature Selection
Week 2: Feature Selection and Correlation Analysis
Week 3: Feature Selection with Decomposition Techniques
Week 4: Sampling Techniques
Week 5: Resampling Techniques
Week 6: Case Study
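The syllabus names techniques rather than tooling, so the following is an assumption-laden sketch of how the Week 2 and Week 3 topics (correlation-based feature selection and decomposition with PCA) might look in Python with pandas and scikit-learn:

```python
# Sketch of two syllabus topics, assuming pandas/scikit-learn tooling
# (the syllabus names the techniques, not the libraries).
import numpy as np
import pandas as pd
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA

X, y = make_classification(n_samples=500, n_features=10, random_state=2)
df = pd.DataFrame(X, columns=[f"f{i}" for i in range(10)])

# Week 2 idea: drop one of each pair of highly correlated features.
corr = df.corr().abs()
upper = corr.where(np.triu(np.ones(corr.shape, dtype=bool), k=1))
to_drop = [col for col in upper.columns if (upper[col] > 0.9).any()]
reduced = df.drop(columns=to_drop)

# Week 3 idea: decomposition (PCA) compresses correlated features
# into a few orthogonal components.
components = PCA(n_components=3).fit_transform(reduced)
print("kept features:", list(reduced.columns))
print("PCA output shape:", components.shape)
```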
Taught by
Dr Johannes De Smedt
Related Courses
- DP-100 Part 2 - Modeling (A Cloud Guru)
- AWS ML Engineer Associate 1.2 Transform Data (Portuguese) (Amazon Web Services via AWS Skill Builder)
- AWS ML Engineer Associate 2.3 Refine Models (Amazon Web Services via AWS Skill Builder)
- Big Data Capstone Project (University of Adelaide via edX)
- Clustering and Classification with Machine Learning in R (Packt via Coursera)