Managing Models Using MLflow on Databricks
Offered By: Pluralsight
Course Description
Overview
This course will teach you how to manage the end-to-end lifecycle of your machine learning models using the MLflow managed service on Databricks.
The machine learning workflow involves many intricate steps to ensure that the model you deploy to production is meaningful and robust. Managing this workflow manually is hard, which is why MLflow, a service that manages the integrated machine learning workflow end-to-end, is a game changer. Databricks makes this even easier by offering a managed version of the service that is simple, intuitive, and easy to use. In this course, Managing Models Using MLflow on Databricks, you will learn to create an MLflow experiment and use it to track the runs of your models. First, you will see how you can use explicit logging to record model-related metrics and parameters, and how to view, sort, and compare runs in an experiment. Next, you will see how you can use autologging to track all relevant parameters, metrics, and artifacts without having to write logging code explicitly. Then, you will see how you can use MLflow to productionize and serve your models: registering them in the Model Registry and performing batch inference. After that, you will learn how to transition your model through lifecycle stages such as Staging, Production, and Archived. Finally, you will see how you can work with custom models in MLflow. You will also learn how to package your model in a reusable format as an MLflow Project and run training using that project hosted on GitHub or on the Databricks file system. When you are finished with this course, you will have the skills and knowledge to use MLflow on Databricks to manage the entire lifecycle of your machine learning models.
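An MLflow Project of the kind mentioned above is just a directory containing an MLproject file. A hypothetical minimal one (the project name, `train.py` script, and its flags are all assumptions for illustration) might look like:

```yaml
# MLproject -- hypothetical example
name: demo-project

entry_points:
  main:
    parameters:
      max_depth: {type: int, default: 5}
    command: "python train.py --max-depth {max_depth}"
```

Such a project could then be run with `mlflow run` against a GitHub URL or a path on the Databricks file system.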
Syllabus
- Course Overview 2 mins
- Tracking Models Using MLflow 54 mins
- Productionizing and Serving Models 42 mins
- Using Custom Models and MLflow Projects 20 mins
Taught by
Janani Ravi
Related Courses
- Bank Fraud Prediction with autoML and Pycaret (Coursera Project Network via Coursera)
- Satellite Data Classification with autoML and Pycaret (Coursera Project Network via Coursera)
- Regression (ML) in Real Life with PyCaret (Coursera Project Network via Coursera)
- ML Pipelines on Google Cloud (Google Cloud via Coursera)
- ML Pipelines on Google Cloud (Pluralsight)