Estimating ML-Models Financial Impact
Offered By: Higher School of Economics via Coursera
Course Description
Overview
This online course covers the basics of financial impact estimation for machine learning models deployed in business processes. We will discuss the general approaches to financial estimation, consider the applications to credit scoring and marketing response models, and focus on the relationship between statistical model quality metrics and financial results, as well as the concepts of A/B testing and potential biases as they apply to historical data.
Multiple courses focus on building machine learning models and assessing their predictive power. However, much less attention is usually paid to explaining how model quality translates into financial results, and decision strategies that rely on model predictions receive even less coverage.
In this course, we will focus on the step where we already have an ML model and want to estimate the expected financial results, verifying the model by running either an A/B test or a backtest. In addition, we will learn how to tune threshold decision rules for model probabilities to improve financial results, and how to account for model uncertainty and biases in historical data that may distort our financial estimates. We will analyze the binary classification case, the most common type of ML task.
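As a concrete illustration of the idea, here is a minimal sketch of turning a binary classifier's decisions on a backtest sample into an expected financial result. The unit economics and data are hypothetical assumptions for illustration, not figures from the course:

```python
import numpy as np

# Hypothetical unit economics for a credit-scoring backtest (assumptions):
# approving a good client earns GAIN_PER_GOOD, approving a defaulter loses LOSS_PER_BAD.
GAIN_PER_GOOD = 100.0
LOSS_PER_BAD = 500.0

def expected_profit(y_true, y_prob, threshold):
    """Financial result of approving everyone below the risk threshold."""
    approve = y_prob < threshold          # y_prob = predicted probability of default
    goods = np.sum(approve & (y_true == 0))
    bads = np.sum(approve & (y_true == 1))
    return goods * GAIN_PER_GOOD - bads * LOSS_PER_BAD

# Toy backtest data: true default flags and model-predicted default probabilities.
y_true = np.array([0, 0, 1, 0, 1, 0, 0, 1])
y_prob = np.array([0.05, 0.10, 0.80, 0.20, 0.40, 0.15, 0.30, 0.90])
print(expected_profit(y_true, y_prob, threshold=0.35))
```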
After completing this course, you, as a data scientist, will be able to make stronger arguments when explaining the value of your machine learning models to leadership. If your role in the company gravitates toward business processes, you will gain a better understanding of how machine learning models affect financial results.
Syllabus
- Project valuation: valuation metrics, planning and rules
- Model development and deployment is always a project. During this week, we will discuss project valuation and consider general financial metrics such as Net Present Value (NPV), Internal Rate of Return (IRR), and others (see the NPV/IRR sketch after this syllabus).
- Model quality and decision making. Benefit curve
- During the second week, we will focus on decision-making based on model predictions and on the relationship between model quality and financial benefit. We will discuss how to plot benefit curves for different threshold decisions and optimize the financial result through threshold tuning (a benefit-curve sketch follows the syllabus).
- Estimating model risk discounts
- When a model has been running in the production environment for a long time, its quality can deteriorate. During this week, we will learn to calculate confidence intervals for model quality estimates and translate them into potential negative financial effects (a bootstrap sketch follows the syllabus).
- A/B testing and financial result verification
- A/B testing is a great way to verify our expectations about financial effects. During this week, we will discuss the principles of A/B testing, its design, and the ways to assess its outcomes (a significance-test sketch follows the syllabus).
- Unobservable model errors, metalearning
- Imagine that our historical data is biased and we cannot obtain any other data. During this week, we will discuss how to restore unobservable events using methods such as reject inference and metalearning (a toy reject-inference sketch follows the syllabus).
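For the week 1 metrics, here is a minimal sketch of NPV and IRR in pure Python. The cash flows and discount rate are made-up numbers, not course data:

```python
def npv(rate, cash_flows):
    """Net Present Value: discounted sum of cash flows, period 0 first."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def irr(cash_flows, lo=-0.99, hi=10.0):
    """Internal Rate of Return: the rate at which NPV is zero (bisection).
    Assumes one sign change in the cash flows, so NPV is monotone in the rate."""
    for _ in range(200):
        mid = (lo + hi) / 2
        if npv(mid, cash_flows) > 0:
            lo = mid      # NPV still positive: discount harder
        else:
            hi = mid
    return (lo + hi) / 2

# Hypothetical ML project: -100 upfront (development), then yearly benefits.
flows = [-100.0, 40.0, 50.0, 60.0]
print(f"NPV at 10%: {npv(0.10, flows):.2f}")   # positive => project adds value
print(f"IRR: {irr(flows):.2%}")                # rate that makes NPV zero
```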
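For week 2, a sketch of a benefit curve: sweep the decision threshold and record the financial result at each point, then pick the threshold with the highest benefit. The unit economics and the simulated scores are assumptions for illustration:

```python
import numpy as np

GAIN_PER_GOOD, LOSS_PER_BAD = 100.0, 500.0   # hypothetical unit economics

def benefit_curve(y_true, y_prob, thresholds):
    """Financial result for each candidate approval threshold."""
    benefits = []
    for t in thresholds:
        approve = y_prob < t                  # approve low predicted default risk
        goods = np.sum(approve & (y_true == 0))
        bads = np.sum(approve & (y_true == 1))
        benefits.append(goods * GAIN_PER_GOOD - bads * LOSS_PER_BAD)
    return np.array(benefits)

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=1000)
# A weakly informative fake model: defaulters get higher scores on average.
y_prob = np.clip(0.3 * y_true + rng.normal(0.35, 0.2, size=1000), 0, 1)

thresholds = np.linspace(0.0, 1.0, 101)
benefits = benefit_curve(y_true, y_prob, thresholds)
best = thresholds[np.argmax(benefits)]
print(f"best threshold: {best:.2f}, expected benefit: {benefits.max():.0f}")
```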
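For week 3, a sketch of a percentile-bootstrap confidence interval for ROC AUC; the lower bound can then feed a conservative ("discounted") financial estimate. `roc_auc_score` is a real scikit-learn API; the data and the discounting idea are illustrative:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def bootstrap_auc_ci(y_true, y_prob, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap confidence interval for ROC AUC."""
    rng = np.random.default_rng(seed)
    n = len(y_true)
    aucs = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=n)      # resample with replacement
        if len(np.unique(y_true[idx])) < 2:   # need both classes in a resample
            continue
        aucs.append(roc_auc_score(y_true[idx], y_prob[idx]))
    lo, hi = np.percentile(aucs, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return lo, hi

rng = np.random.default_rng(1)
y_true = rng.integers(0, 2, size=500)
y_prob = np.clip(0.3 * y_true + rng.normal(0.35, 0.2, size=500), 0, 1)
lo, hi = bootstrap_auc_ci(y_true, y_prob)
print(f"95% CI for AUC: [{lo:.3f}, {hi:.3f}]")
# A risk discount could, e.g., re-estimate the benefit curve assuming quality at `lo`.
```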
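For week 4, a sketch of assessing an A/B test outcome with a two-proportion z-test. `scipy.stats.norm` is a real API; the conversion counts are invented:

```python
from math import sqrt
from scipy.stats import norm

def two_proportion_ztest(conv_a, n_a, conv_b, n_b):
    """One-sided z-test: is variant B's conversion rate higher than A's?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)        # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    return z, norm.sf(z)                            # sf = 1 - cdf, one-sided p-value

# Invented example: control uses the old strategy, treatment uses the model.
z, p = two_proportion_ztest(conv_a=120, n_a=5000, conv_b=155, n_b=5000)
print(f"z = {z:.2f}, one-sided p-value = {p:.4f}")
```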
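For week 5, a toy sketch of one simple reject-inference scheme, not necessarily the course's method: fit a model on accepted applicants (whose outcomes we observe), assign inferred labels to rejected applicants (whose outcomes we never observe), and retrain on the combined sample. All data here is simulated:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)

# Accepted applicants: features and observed default labels.
X_acc = rng.normal(size=(400, 3))
y_acc = (X_acc[:, 0] + rng.normal(scale=0.5, size=400) > 0).astype(int)
# Rejected applicants: features only, outcome never observed (shifted population).
X_rej = rng.normal(loc=0.8, size=(200, 3))

# Step 1: fit on the accepted (biased) population only.
base = LogisticRegression().fit(X_acc, y_acc)

# Step 2: infer hard labels for rejects from the predicted default probability.
y_rej_inferred = (base.predict_proba(X_rej)[:, 1] > 0.5).astype(int)

# Step 3: retrain on accepted applicants plus rejects with inferred labels.
X_all = np.vstack([X_acc, X_rej])
y_all = np.concatenate([y_acc, y_rej_inferred])
final = LogisticRegression().fit(X_all, y_all)
print("coefficients before:", base.coef_.round(2))
print("coefficients after: ", final.coef_.round(2))
```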
Taught by
Alexey A. Masyutin, Viktor I. Skripiuk and Elena S. Kozhina
Related Courses
- Assess for Success: Marketing Analytics and Measurement (Google via Coursera)
- Hypothesis Testing with Python (Codecademy)
- Create an A/B web page marketing test with Google Optimize (Coursera Project Network via Coursera)
- Upping Your Pinterest Game with Paid Ads (CreativeLive)
- Перекрестные исследования [Crossover Studies] (E-Learning Development Fund via Coursera)