AI Workflow: Business Priorities and Data Ingestion

Offered By: IBM via Coursera

Tags

Artificial Intelligence Courses, Business Intelligence Courses, Design Thinking Courses, Linear Algebra Courses, Jupyter Notebooks Courses, Probability Theory Courses

Course Description

Overview

This is the first course of a six-part specialization. You are STRONGLY encouraged to complete the courses in order: they are not independent courses, but parts of a workflow in which each course builds on the previous ones. This first course in the IBM AI Enterprise Workflow Certification specialization introduces you to the scope of the specialization and its prerequisites. Specifically, the courses in this specialization are meant for practicing data scientists who are knowledgeable about probability, statistics, linear algebra, and Python tooling for data science and machine learning. A hypothetical streaming media company will be introduced as your new client. You will be introduced to the concept of design thinking, IBM's framework for organizing large enterprise AI projects, and to the basics of scientific thinking, because the quality that distinguishes a seasoned data scientist from a beginner is creative, scientific thinking. Finally, you will start your work for the hypothetical media company by understanding the data they have and by building a data ingestion pipeline using Python and Jupyter notebooks.

By the end of this course you should be able to:
1. Know the advantages of carrying out data science using a structured process
2. Describe how the stages of design thinking correspond to the AI enterprise workflow
3. Discuss several strategies used to prioritize business opportunities
4. Explain where data science and data engineering have the most overlap in the AI workflow
5. Explain the purpose of testing in data ingestion
6. Describe the use case for sparse matrices as a target destination for data ingestion
7. Know the initial steps that can be taken towards automation of data ingestion pipelines

Who should take this course? This course targets existing data science practitioners who have expertise building machine learning models and who want to deepen their skills in building and deploying AI in large enterprises. If you are an aspiring data scientist, this course is NOT for you, as you need real-world expertise to benefit from the content of these courses.

What skills should you have? It is assumed you have a solid understanding of the following topics prior to starting this course:
• A fundamental understanding of linear algebra
• An understanding of sampling, probability theory, and probability distributions
• Knowledge of descriptive and inferential statistical concepts
• A general understanding of machine learning techniques and best practices
• A practiced understanding of Python and the packages commonly used in data science: NumPy, Pandas, matplotlib, scikit-learn
• Familiarity with IBM Watson Studio
• Familiarity with the design thinking process
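As a taste of the ingestion work described above, the following is a minimal sketch of how raw tabular data might be pulled into a sparse matrix and sanity-checked with Python. The file name, column names, and cleaning rules are assumptions made for illustration; they are not taken from the course materials.

# Hypothetical sketch: ingest a CSV of streaming-usage data into a sparse
# feature matrix and run a couple of sanity tests. All names are invented.
import numpy as np
import pandas as pd
from scipy import sparse

def ingest(csv_path):
    """Load raw data, clean it lightly, and return a sparse feature matrix."""
    df = pd.read_csv(csv_path)

    # Basic cleaning: drop exact duplicates and rows missing the key column.
    df = df.drop_duplicates()
    df = df.dropna(subset=["subscriber_id"])

    # One-hot encode categorical columns; the result is mostly zeros, which
    # is why a sparse matrix is a natural target destination for ingestion.
    features = pd.get_dummies(df[["country", "device_type"]])
    return sparse.csr_matrix(features.to_numpy(dtype=np.float64))

if __name__ == "__main__":
    X = ingest("streaming_usage.csv")
    # Minimal ingestion tests: the matrix is non-empty and contains no NaNs.
    assert X.shape[0] > 0
    assert not np.isnan(X.data).any()
    print(f"ingested matrix with shape {X.shape} and {X.nnz} non-zero entries")

Even this small example shows why the specialization treats testing as part of ingestion: the assertions give early warning when upstream data quietly changes shape or quality.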

Syllabus

  • IBM AI Enterprise Workflow Introduction
    • The goal of this first module is to introduce you to the overall specialization requirements, evaluate your understanding of some key prerequisite knowledge, and familiarize you with several process models commonly used today. In this course we use the design thinking process, but what matters is the consistent application of a process in practice, not the exact process itself. There are a number of reasons for choosing design thinking, the most important being that it is applied in a cross-disciplinary way, that is, outside of data science as well.
  • Data Collection
    • Throughout this module you will learn, or reinforce what you already know, about identifying and articulating business opportunities. You will learn the importance of applying a scientific thought process to the task of understanding the business use case; this process has many similarities to the work of an investigator. You will also develop a healthy respect for the need to pause, step back, and think scientifically about the main processes in this stage.
  • Data Ingestion
    • Cleaning, parsing, assembling, and gut-checking data are among the most time-consuming tasks a data scientist performs. Data cleaning alone can consume 60% or more of a project's time, and that share grows with poor data quality and demanding project requirements. This module looks at the process of ingesting data and presents a case study based on a real-world scenario; a minimal cleaning sketch follows below.
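
To make the cleaning and gut-checking work concrete, here is a minimal sketch of the kind of step that might sit early in such a pipeline. The column names and validity rules are assumptions for illustration only, not the schema of the course's case study.

# Hypothetical cleaning/gut-check step for tabular invoice-style data.
import pandas as pd

def clean(df: pd.DataFrame) -> pd.DataFrame:
    """Parse dates, coerce numeric types, and drop obviously bad rows."""
    out = df.copy()
    out["invoice_date"] = pd.to_datetime(out["invoice_date"], errors="coerce")
    out["price"] = pd.to_numeric(out["price"], errors="coerce")

    # Gut checks: prices must be positive and dates must have parsed.
    before = len(out)
    out = out[(out["price"] > 0) & out["invoice_date"].notna()]
    print(f"dropped {before - len(out)} of {before} rows during cleaning")
    return out

Reporting how many rows were dropped, rather than discarding them silently, is one of the simplest habits that keeps an ingestion pipeline trustworthy as it is automated.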

Taught by

Mark J Grover and Ray Lopez, Ph.D.

Related Courses

Business Considerations for 5G with Edge, IoT, and AI
Linux Foundation via edX
FinTech for Finance and Business Leaders
ACCA via edX
Ethics, Laws and Implementing an AI Solution on Microsoft Azure
Cloudswyft via FutureLearn
Deep Learning and Python Programming for AI with Microsoft Azure
Cloudswyft via FutureLearn
Post Graduate Certificate in Advanced Machine Learning & AI
Indian Institute of Technology Roorkee via Coursera