AI Skills: Basic and Advanced Techniques in Machine Learning
Offered By: Delft University of Technology via edX
Course Description
Overview
This series of hands-on and interactive MOOCs gives learners a comprehensive overview of basic machine learning topics. You will discover how machine learning classification and regression techniques allow you to make predictions for a category (classification) or for a number (regression) given data. This can be useful for predicting properties of objects (such as their weight or shape) or qualities of people (customer satisfaction, etc.).
You will learn about unsupervised learning techniques such as clustering and dimensionality reduction and how useful they are to make sense of large and/or high dimensional datasets.
We will also cover more advanced supervised learning techniques such as deep learning, which is useful for training neural networks to solve more complicated classification and regression tasks. Finally, you will dive deep into reinforcement learning techniques and learn how to use them to train AI agents that interact with an environment.
The lectures feature a unique combination of videos mixed with hands-on interaction with machine learning algorithms to stimulate a deeper understanding. In the exercises you apply the algorithms in Python using scikit-learn and in the final project you will further deepen your understanding of the various concepts by building and tuning a machine learning pipeline from start to finish.
Syllabus
Course 1: AI skills for Engineers: Supervised Machine Learning
Learn the fundamentals of machine learning to help you correctly apply various classification and regression machine learning algorithms to real-life problems using the Python toolbox scikit-learn.
Course 2: AI skills: Introduction to Unsupervised, Deep and Reinforcement Learning
Learn the fundamentals and principal AI concepts about clustering, dimensionality reduction, reinforcement learning and deep learning to solve real-life problems.
Courses
-
Machine learning classification and regression techniques have potential uses in various engineering disciplines. These machine learning models allow you to make predictions for a category (classification) or for a number (regression) given sensor data, and can be used in, for example, predicting properties of objects (such as their weight or shape).
Using hands-on and interactive exercises you will get insight into:
Machine learning and its variants, such as supervised learning, semi-supervised learning, unsupervised learning and reinforcement learning.
Regression techniques such as linear regression, K-nearest neighbor regression, how to deal with outliers and evaluation metrics such as the mean squared error (MSE) and mean absolute error (MAE).
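These regression techniques and metrics can be sketched in a few lines of scikit-learn. The following is a minimal illustration on a hypothetical toy dataset (the numbers are invented for demonstration, not course data):

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neighbors import KNeighborsRegressor
from sklearn.metrics import mean_squared_error, mean_absolute_error

# Toy 1-D dataset: y is roughly 2*x with a little noise (values chosen for illustration).
X = np.array([[0.0], [1.0], [2.0], [3.0], [4.0]])
y = np.array([0.1, 2.0, 3.9, 6.1, 8.0])

lin = LinearRegression().fit(X, y)                    # least-squares linear regression
knn = KNeighborsRegressor(n_neighbors=2).fit(X, y)    # K-nearest neighbor regression

for name, model in [("linear", lin), ("2-NN", knn)]:
    pred = model.predict(X)
    print(name, mean_squared_error(y, pred), mean_absolute_error(y, pred))
```

MSE squares the errors and therefore penalizes outliers more heavily than MAE, which is one reason the course compares both metrics.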
Classification techniques such as the histogram method, the nearest mean (or nearest medoid) method and the nearest neighbor classifier. We cover the classification setting and important concepts such as the Bayes classifier, the theoretically optimal classifier, and the Bayes error.
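The nearest mean and nearest neighbor classifiers mentioned above correspond directly to scikit-learn estimators. A minimal sketch on hypothetical, well-separated toy data:

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier, NearestCentroid

# Two well-separated 2-D classes (invented toy data for illustration).
X = np.array([[0, 0], [1, 0], [0, 1], [5, 5], [6, 5], [5, 6]])
y = np.array([0, 0, 0, 1, 1, 1])

nearest_mean = NearestCentroid().fit(X, y)              # assign to the closest class mean
one_nn = KNeighborsClassifier(n_neighbors=1).fit(X, y)  # assign to the closest training point

print(nearest_mean.predict([[0.5, 0.5]]))
print(one_nn.predict([[5.5, 5.5]]))
```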
Training models using (stochastic) gradient descent and its variants. We learn how to tune this optimizer and how to use it to construct a logistic regression classification model.
Overfitting means a classifier works well on a training set but not on unseen test data. We discuss how to build complex non-linear models, and we analyze how overfitting can be understood through the bias-variance decomposition and the curse of dimensionality. Finally, we discuss how to fairly evaluate and tune machine learning models and how to estimate how much data they need for sufficient performance.
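The train-versus-test gap that defines overfitting is easy to demonstrate. The sketch below uses synthetic noisy data (invented for illustration): a 1-nearest-neighbor classifier memorizes the training set perfectly, yet generalizes worse than a smoother 15-nearest-neighbor model:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# Synthetic 2-D data with noisy labels (hypothetical, for illustration only).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + rng.normal(scale=0.5, size=200) > 0).astype(int)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

scores = {}
for k in (1, 15):  # 1-NN memorizes the noise; a larger k averages it out
    clf = KNeighborsClassifier(n_neighbors=k).fit(X_tr, y_tr)
    scores[k] = (clf.score(X_tr, y_tr), clf.score(X_te, y_te))
    print(k, scores[k])
```

Holding out a test set like this is the basic form of the fair evaluation discussed above.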
Regularization methods can help to mitigate overfitting. We discuss two regularization techniques for estimating the linear regression coefficients: ridge regression and LASSO. The latter can also be used for variable selection.
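The contrast between ridge regression and LASSO can be seen on synthetic data where only one feature matters (the data below is invented for illustration): ridge shrinks all coefficients, while LASSO drives the irrelevant ones exactly to zero, which is why it doubles as a variable-selection method:

```python
import numpy as np
from sklearn.linear_model import Ridge, Lasso

# Synthetic data: 5 features, but only feature 0 influences y (hypothetical setup).
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
y = 3.0 * X[:, 0] + rng.normal(scale=0.1, size=100)

ridge = Ridge(alpha=1.0).fit(X, y)   # L2 penalty: shrinks all coefficients a little
lasso = Lasso(alpha=0.5).fit(X, y)   # L1 penalty: sets irrelevant coefficients to exactly 0
print(ridge.coef_.round(2))
print(lasso.coef_.round(2))
```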
Classifier evaluation metrics such as the ROC curve and confusion matrix can give more insight into the performance of classifiers. We also discuss what constitutes a “good” accuracy; this is given by so-called dummy-classifiers which are naïve baselines.
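A small sketch of these evaluation tools, on invented imbalanced toy labels: the confusion matrix breaks accuracy down per class, ROC AUC scores the ranking of predictions, and a dummy baseline shows why raw accuracy can mislead:

```python
import numpy as np
from sklearn.dummy import DummyClassifier
from sklearn.metrics import confusion_matrix, roc_auc_score

# Hypothetical imbalanced problem: 80% of the labels are class 0.
y_true = np.array([0, 0, 0, 0, 0, 0, 0, 0, 1, 1])
y_pred = np.array([0, 0, 0, 0, 0, 0, 0, 1, 1, 1])
scores = np.array([.1, .2, .1, .3, .2, .1, .2, .6, .8, .9])

print(confusion_matrix(y_true, y_pred))   # rows: true class, columns: predicted class
print(roc_auc_score(y_true, scores))      # ranking quality of the predicted scores

# A dummy baseline that always predicts the majority class already reaches 80% accuracy
# here, so 80% is not a "good" accuracy on this data.
dummy = DummyClassifier(strategy="most_frequent").fit(y_true.reshape(-1, 1), y_true)
print(dummy.score(y_true.reshape(-1, 1), y_true))
```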
Support Vector Machines (SVMs) are more advanced classification models that can provide good performance even in high-dimensional spaces and with little data. We discuss their different variants such as the soft-margin SVM, the hard-margin SVM and the nonlinear kernel SVM.
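In scikit-learn all three SVM variants are reachable through `SVC`; roughly, a very large `C` approximates the hard-margin SVM, a small `C` gives a soft margin, and a non-linear kernel such as RBF gives the kernel SVM. A minimal sketch on invented toy data:

```python
import numpy as np
from sklearn.svm import SVC

# Two linearly separable 2-D classes (hypothetical toy data).
X = np.array([[0, 0], [1, 1], [1, 0], [0, 1],
              [3, 3], [4, 4], [3, 4], [4, 3]])
y = np.array([0, 0, 0, 0, 1, 1, 1, 1])

hard = SVC(kernel="linear", C=1e6).fit(X, y)   # large C approximates a hard margin
soft = SVC(kernel="linear", C=0.1).fit(X, y)   # small C tolerates margin violations
rbf = SVC(kernel="rbf", C=1.0).fit(X, y)       # nonlinear kernel SVM
print(hard.predict([[0.5, 0.5], [3.5, 3.5]]))
```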
Decision Trees are simple models that can easily be understood by lay people. They are easy to use and visualize; unlike black-box models, they are interpretable white-box models, making them suitable for various applications.
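The white-box nature of a decision tree is easiest to see by printing its learned rules. A minimal sketch on an invented toy dataset where the class depends only on the first feature:

```python
from sklearn.tree import DecisionTreeClassifier, export_text

# Toy data: the class is determined by whether feature f0 exceeds roughly 2.
X = [[0, 1], [1, 0], [2, 1], [3, 0], [4, 1], [5, 0]]
y = [0, 0, 0, 1, 1, 1]

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
# export_text prints the learned if/then rules: the interpretable "white box" view.
print(export_text(tree, feature_names=["f0", "f1"]))
```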
-
In this course you will learn the basics of several machine learning topics to help you solve real life challenges. Unsupervised learning techniques such as clustering and dimensionality reduction are useful to make sense of large and/or high dimensional datasets that are not annotated. Deep learning is a supervised learning technique that is useful to train neural networks to solve more complicated classification and regression tasks. Finally, reinforcement learning techniques can be used to train AI agents that interact with an environment.
Using hands-on and interactive exercises you will get insight into the fundamental algorithms and basic concepts of:
Clustering is used to identify similar data/objects and patterns from your engineering datasets. It is a technique that is especially useful if you don’t have labeled or annotated data. We explain various approaches to clustering and cover how similarity and dissimilarity measures are used.
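As a minimal sketch of clustering unlabeled data, k-means (one of the approaches a clustering course typically covers) recovers the group structure of two obvious blobs; the points below are invented for illustration:

```python
import numpy as np
from sklearn.cluster import KMeans

# Two obvious unlabeled blobs (hypothetical toy data): clustering recovers
# the group structure without any annotations.
X = np.array([[0.0, 0.0], [0.2, 0.1], [0.1, 0.3],
              [5.0, 5.0], [5.1, 4.9], [4.8, 5.2]])
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(labels)  # the first three points share one label, the last three the other
```

Here similarity is Euclidean distance; other dissimilarity measures lead to other clusterings, which is exactly the point made above.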
Dimensionality reduction techniques are used to reduce the number of features representing a given dataset, while retaining the structure of the dataset. We discuss feature selection and feature extraction techniques such as Principal Component Analysis (PCA), and how and when to apply it.
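A minimal PCA sketch on synthetic data (invented for illustration): three features that all vary along essentially one underlying direction, so the first principal component captures almost all of the variance:

```python
import numpy as np
from sklearn.decomposition import PCA

# Synthetic data: 3 features, but essentially 1 underlying direction of variation.
rng = np.random.default_rng(0)
t = rng.normal(size=100)
X = np.column_stack([t, 2 * t + 0.01 * rng.normal(size=100), -t])

pca = PCA(n_components=2).fit(X)
print(pca.explained_variance_ratio_)  # the first component dominates
X_reduced = pca.transform(X)          # 100 x 2 representation of the 100 x 3 data
```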
Deep Learning is a family of machine learning methods based on artificial neural networks. You will learn how to build and train deep neural networks consisting of fully connected neural networks of multiple hidden layers.
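As a small sketch of a fully connected network with multiple hidden layers, scikit-learn's `MLPClassifier` suffices (deep learning courses typically move on to dedicated frameworks, but the idea is the same); the data below is an invented toy threshold problem:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Toy 1-D data (hypothetical): the class flips around x = 0.5.
X = np.array([[0.0], [0.2], [0.8], [1.0]] * 10)
y = np.array([0, 0, 1, 1] * 10)

# A small fully connected neural network with two hidden layers of 8 units each.
net = MLPClassifier(hidden_layer_sizes=(8, 8), max_iter=2000,
                    random_state=0).fit(X, y)
print(net.score(X, y))
```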
Reinforcement learning teaches an AI agent to interact with an environment. We cover basic reinforcement learning concepts and techniques, such as how to model the system as a Markov Decision Process and how to train an optimal policy with tabular Q-learning based on the Bellman equation.
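Tabular Q-learning fits in a few lines. The sketch below uses a hypothetical 4-state chain MDP (invented for illustration, not course code): the agent updates a Q-table with the Bellman-derived rule Q(s,a) ← Q(s,a) + α(r + γ·maxₐ′ Q(s′,a′) − Q(s,a)) until the greedy policy moves right toward the rewarding state:

```python
import numpy as np

# A tiny deterministic chain MDP with 4 states; reaching state 3 yields reward 1.
# Actions: 0 = left, 1 = right. Hypothetical toy environment for illustration.
n_states, n_actions, gamma, alpha = 4, 2, 0.9, 0.5

def step(s, a):
    s_next = max(0, s - 1) if a == 0 else min(n_states - 1, s + 1)
    reward = 1.0 if s_next == n_states - 1 else 0.0
    return s_next, reward

Q = np.zeros((n_states, n_actions))
rng = np.random.default_rng(0)
for _ in range(2000):                 # purely random exploration is enough here
    s = int(rng.integers(n_states - 1))
    a = int(rng.integers(n_actions))
    s_next, r = step(s, a)
    # Tabular Q-learning update, derived from the Bellman optimality equation:
    Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])

policy = Q.argmax(axis=1)
print(policy)  # the greedy policy moves right from every non-terminal state
```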
This course is designed by a team of TU Delft machine learning experts from various backgrounds, highlighting the various topics from their individual perspectives.
Taught by
Wendelin Böhmer, Tom Viering, Hanne Kekkonen, Hongrui Wang, Amira Elnouty and Luca Laurenti
Related Courses
- Data Preparation (Import and Cleaning) for Python (A Cloud Guru)
- DP-100 Part 2 - Modeling (A Cloud Guru)
- AI For Lawyers (II): Tools for Legal Professionals (National Chiao Tung University via FutureLearn)
- Introducción a la Inteligencia Artificial: Principales Algoritmos (Galileo University via edX)
- Basic Data Analysis and Model Building using Python (Coursera Community Project Network via Coursera)