YoVDO

AI Workflow: AI in Production

Offered By: IBM via Coursera

Tags

Artificial Intelligence Courses Linear Algebra Courses Docker Courses Kubernetes Courses Unit Testing Courses Probability Theory Courses

Course Description

Overview

This is the sixth course in the IBM AI Enterprise Workflow Certification specialization. You are STRONGLY encouraged to complete these courses in order: they are not independent courses, but parts of a workflow in which each course builds on the previous ones.

This course focuses on models in production at a hypothetical streaming media company. It introduces IBM Watson Machine Learning; you will build your own API in a Docker container and learn how to manage containers with Kubernetes. The course also introduces several other tools in the IBM ecosystem designed to help deploy or maintain models in production. Because the AI workflow is not a linear process, some time is dedicated to the most important feedback loops, in order to promote efficient iteration on the overall workflow.

By the end of this course you will be able to:

1. Use Docker to deploy a Flask application
2. Deploy a simple UI to integrate the ML model, Watson NLU, and Watson Visual Recognition
3. Discuss basic Kubernetes terminology
4. Deploy a scalable web application on Kubernetes
5. Discuss the different feedback loops in the AI workflow
6. Discuss the use of unit testing in the context of model production
7. Use IBM Watson OpenScale to assess the bias and performance of production machine learning models

Who should take this course? This course targets existing data science practitioners who have expertise building machine learning models and who want to deepen their skills in building and deploying AI in large enterprises. If you are an aspiring data scientist, this course is NOT for you, as you need real-world expertise to benefit from the content of these courses.

What skills should you have?
It is assumed that you have completed Courses 1 through 5 of the IBM AI Enterprise Workflow specialization and that you have a solid understanding of the following topics prior to starting this course:

  • A fundamental understanding of linear algebra
  • An understanding of sampling, probability theory, and probability distributions
  • Knowledge of descriptive and inferential statistical concepts
  • A general understanding of machine learning techniques and best practices
  • A practiced understanding of Python and the packages commonly used in data science: NumPy, Pandas, matplotlib, scikit-learn
  • Familiarity with IBM Watson Studio
  • Familiarity with the design thinking process
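As a sketch of the kind of service the course has you containerize (the route name, payload shape, and stubbed prediction are illustrative assumptions, not the course's actual materials), a minimal Flask prediction API might look like:

```python
# app.py -- minimal sketch of a Flask prediction API (illustrative only;
# the /predict route and payload shape are assumptions, not course code).
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/predict", methods=["POST"])
def predict():
    """Echo the JSON payload alongside a stubbed prediction.

    In the course project, a trained model would be loaded and
    called here instead of returning a fixed value.
    """
    payload = request.get_json(force=True)
    return jsonify({"input": payload, "y_pred": [0]})
```

A Dockerfile for such an app would typically start from a Python base image, install Flask and the model's dependencies, and run the app with a production WSGI server such as gunicorn, so the container is self-contained and portable.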

Syllabus

  • Feedback loops and Monitoring
    • This module focuses on feedback loops and monitoring. Feedback loops represent all the possible ways you can return to an earlier stage in the AI enterprise workflow. We first discussed feedback loops in the first course of this specialization; here, however, our focus is on unit testing. We also look at business value, an important consideration that often gets overlooked: is the model having as significant an effect on business metrics as intended? Log files standardized across the team make it possible to answer questions about business value as well as performance monitoring. You will complete a case study on performance monitoring, in which you write unit tests for a logger and a logging API endpoint, test them, and write a suite of unit tests to validate that the logging works correctly.
  • Hands-on with OpenScale and Kubernetes
    • This module wraps up the formal learning in this course with hands-on tutorials of Watson OpenScale and Kubernetes. IBM Watson OpenScale is a suite of services that lets you track the performance of production AI and its impact on business goals, with actionable metrics, in a single console. Kubernetes is a container orchestration platform for managing, scheduling, and automating the deployment of Docker containers. The containers developed in this course are essentially microservices meant to be deployed as cloud-native applications.
  • Capstone: Pulling it all together (Part 1)
    • In this module you start part one (Data Investigation) of a three-part capstone project designed to pull everything you have learned together. We have provided a brief review of what you should have learned thus far; however, you may want to review the first five courses prior to starting the project. A major goal of this capstone is to emulate a real-world scenario, so we won’t be providing notebooks to guide you as we have done with the previous case studies.
  • Capstone: Pulling it all together (Part 2)
    • In this module you will complete your capstone project and submit it for peer review. Part 2 of the capstone involves building models and selecting the best model to deploy; you will use time-series algorithms to predict future values based on previously observed values. In Part 3, your focus will be on creating a post-production analysis script that investigates the relationship between model performance and the business metrics aligned with the deployed model. After completing and submitting your capstone project, you will have access to the solution files for further review.
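As a sketch of the Kubernetes terminology covered in the hands-on module (the names, image, port, and replica count below are illustrative assumptions), a Deployment that keeps several replicas of a containerized model API running might look like:

```yaml
# Illustrative Kubernetes Deployment manifest (names and image are assumptions).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: model-api
spec:
  replicas: 3                 # Kubernetes keeps three pods running
  selector:
    matchLabels:
      app: model-api
  template:
    metadata:
      labels:
        app: model-api
    spec:
      containers:
      - name: model-api
        image: example/model-api:latest   # the Docker image you built
        ports:
        - containerPort: 8080
```

Pairing a Deployment like this with a Service to expose the pods is what makes the web application scalable: Kubernetes replaces failed pods and the replica count can be raised as load grows.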
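The performance-monitoring case study described above asks you to unit-test a logger. As a hedged sketch (the `update_log` helper and its CSV format are assumptions for illustration, not the course's solution files), such a test might look like:

```python
# Illustrative sketch: a prediction logger and a unit test for it.
# The update_log helper and its CSV schema are assumptions, not course code.
import csv
import os
import tempfile
import unittest
from datetime import datetime, timezone

def update_log(path, runtime, y_pred):
    """Append one prediction record to a CSV log, writing a header if new."""
    header = ["timestamp", "runtime", "y_pred"]
    write_header = not os.path.exists(path)
    with open(path, "a", newline="") as f:
        writer = csv.writer(f)
        if write_header:
            writer.writerow(header)
        writer.writerow([datetime.now(timezone.utc).isoformat(), runtime, y_pred])

class TestLogger(unittest.TestCase):
    def test_log_created_and_appended(self):
        # Log two predictions into a fresh temp file, then verify
        # the header plus one row per call are present.
        path = os.path.join(tempfile.mkdtemp(), "predict.log.csv")
        update_log(path, 0.05, 1)
        update_log(path, 0.07, 0)
        with open(path) as f:
            rows = list(csv.reader(f))
        self.assertEqual(rows[0], ["timestamp", "runtime", "y_pred"])
        self.assertEqual(len(rows), 3)  # header + two records
```

A test like this catches silent logging failures early, which matters because the post-production business-value analysis in the capstone depends on trustworthy logs.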

Taught by

Mark J Grover and Ray Lopez, Ph.D.

Related Courses

Advanced Machine Learning
The Open University via FutureLearn
Advanced Statistics for Data Science
Johns Hopkins University via Coursera
Algebra & Algorithms
Moscow Institute of Physics and Technology via Coursera
Algèbre Linéaire (Partie 2)
École Polytechnique Fédérale de Lausanne via edX
Linear Algebra III: Determinants and Eigenvalues
Georgia Institute of Technology via edX