Interpretable Machine Learning Applications: Part 1
Offered By: Coursera Project Network via Coursera
Course Description
Overview
In this 1-hour long project-based course, you will learn how to create interpretable machine learning applications using two classification models as examples: decision tree and random forest classifiers. You will also learn how to explain such prediction models by extracting the most important features and their values, which most strongly influence the models' predictions. In this sense, the project will boost your career as a Machine Learning (ML) developer and modeler, in that you will gain deeper insight into the behaviour of your ML models. The project will also benefit your career as a decision maker in an executive position, or as a consultant, interested in deploying trusted and accountable ML applications.
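As a flavour of what this looks like in practice, here is a minimal sketch of the idea: it trains the two classifiers with scikit-learn and prints each model's ranked feature importances. The dataset (scikit-learn's built-in Iris data) and the specific library calls are assumptions for illustration, not the course's own materials.

```python
# Sketch: extract feature importances from a decision tree and a random
# forest classifier. Assumes scikit-learn; Iris is a placeholder dataset,
# the course's own data may differ.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0
)

models = {
    "decision tree": DecisionTreeClassifier(random_state=0),
    "random forest": RandomForestClassifier(n_estimators=100, random_state=0),
}

for name, model in models.items():
    model.fit(X_train, y_train)
    print(f"{name} accuracy: {model.score(X_test, y_test):.3f}")
    # feature_importances_ ranks features by how much they reduce impurity
    # across the tree(s); higher values mean a stronger influence on predictions.
    ranked = sorted(
        zip(X.columns, model.feature_importances_),
        key=lambda pair: pair[1],
        reverse=True,
    )
    for feature, importance in ranked:
        print(f"  {feature}: {importance:.3f}")
```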
Syllabus
- Interpretable machine learning applications: Part 1
- Gain insights into the feature importance of your prediction model. Knowing which features and their values are most significant for the prediction model gives further insight not only to machine learning modelers and developers, but also to the intended users of a machine learning application. Hence, in this project you will learn how to go beyond the development and use of a machine learning (ML) application based on a classification model, by adding explainability and interpretation to the ML application, as in the sketch after this list.
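A complementary, model-agnostic way to surface important features for end users is permutation importance on held-out data. This is a sketch under the same assumptions as above (scikit-learn, Iris as a placeholder dataset); the course itself may use a different explanation technique.

```python
# Sketch: permutation importance as an alternative, model-agnostic view of
# which features drive a model's predictions. Assumes scikit-learn and the
# Iris dataset as placeholders.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0
)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much test accuracy drops;
# a large drop means the model relies heavily on that feature.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

for idx in result.importances_mean.argsort()[::-1]:
    print(f"{X.columns[idx]}: "
          f"{result.importances_mean[idx]:.3f} +/- {result.importances_std[idx]:.3f}")
```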
Taught by
Epaminondas Kapetanios
Related Courses
- Interpretable Machine Learning Applications: Part 2 (Coursera Project Network via Coursera)
- Interpretable Machine Learning Applications: Part 3 (Coursera Project Network via Coursera)
- Interpretable Machine Learning Applications: Part 4 (Coursera Project Network via Coursera)