Managing Models Using MLflow on Databricks
Offered By: Pluralsight
Course Description
Overview
This course will teach you how to manage the end-to-end lifecycle of your machine learning models using the MLflow managed service on Databricks.
The machine learning workflow involves many intricate steps to ensure that the model you deploy to production is meaningful and robust. Managing this workflow manually is hard, which is why MLflow, a service that manages the integrated machine learning workflow end-to-end, is a game changer. Databricks makes this even easier by offering a managed version of the service that is simple, intuitive, and easy to use.
In this course, Managing Models Using MLflow on Databricks, you will learn to create an MLflow experiment and use it to track the runs of your models. First, you will see how to use explicit logging to record model-related metrics and parameters, and how to view, sort, and compare runs in an experiment. Next, you will use autologging to track all relevant parameters, metrics, and artifacts without writing explicit logging code. Then, you will use MLflow to productionize and serve your models: you will register models in the Model Registry and perform batch inference with them. After that, you will learn how to transition a model through lifecycle stages such as Staging, Production, and Archived. Finally, you will see how to work with custom models in MLflow, and how to package a model in a reusable format as an MLflow project and run training from that project hosted on GitHub or on the Databricks file system.
When you are finished with this course, you will have the skills and knowledge to use MLflow on Databricks to manage the entire lifecycle of your machine learning models.
Syllabus
- Course Overview (2 mins)
- Tracking Models Using MLflow (54 mins)
- Productionizing and Serving Models (42 mins)
- Using Custom Models and MLflow Projects (20 mins)
Taught by
Janani Ravi
Related Courses
- Data Processing with Azure (LearnQuest via Coursera)
- Mejores prácticas para el procesamiento de datos en Big Data (Coursera Project Network via Coursera)
- Data Science with Databricks for Data Analysts (Databricks via Coursera)
- Azure Data Engineer con Databricks y Azure Data Factory (Coursera Project Network via Coursera)
- Curso Completo de Spark con Databricks (Big Data) (Coursera Project Network via Coursera)