Reducing Complexity in Data
Offered By: Pluralsight
Course Description
Overview
This course covers several techniques for simplifying the data used in supervised machine learning applications, ranging from relatively simple feature selection methods to complex applications of clustering and deep neural networks.
Machine learning techniques have grown significantly more powerful in recent years, but excessive complexity in data is still a major problem. There are several reasons for this: distinguishing signal from noise gets harder with more complex data, and the risks of overfitting go up as well. Furthermore, as cloud-based machine learning becomes more and more popular, reducing complexity in data is crucial to making training affordable, since cloud-based ML solutions can be very expensive.

In this course, Reducing Complexity in Data, you will learn how to make the data fed into machine learning models more tractable and more manageable, without resorting to hacks or shortcuts, and without compromising on quality or correctness. First, you will learn the importance of parsimony in data and understand the pitfalls of working with excessively high-dimensional data, often referred to as the curse of dimensionality. Next, you will discover how and when to resort to feature selection, employing statistically sound techniques to find a subset of the input features based on their information content and their link to the output. Finally, you will explore how to use two advanced techniques: clustering and autoencoding. Both are applications of unsupervised learning used to simplify data as a precursor to a supervised learning algorithm, and each often relies on a sophisticated implementation such as deep learning with neural networks. When you are finished with this course, you will have the skills and knowledge needed to reduce the complexity of data used in supervised machine learning applications in a conceptually sound way.
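To make the feature selection idea concrete, here is a minimal sketch of a filter-style selector: it scores each input feature by the absolute Pearson correlation with the output and keeps the top k. The function name, the toy data, and the choice of correlation as the score are illustrative assumptions, not material from the course itself.

```python
import numpy as np

def select_top_k_features(X, y, k):
    """Rank features by absolute Pearson correlation with the target
    and keep the k most informative columns (a simple filter method).
    This is an illustrative sketch, not the course's implementation."""
    # score each column of X by |corr(column, y)|
    scores = np.array(
        [abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(X.shape[1])]
    )
    top = np.argsort(scores)[::-1][:k]  # indices of the k highest scores
    return X[:, top], top

# toy data: feature 0 drives the target, the other two are pure noise
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
y = 2.0 * X[:, 0] + 0.1 * rng.normal(size=100)

X_reduced, kept = select_top_k_features(X, y, k=1)
print(kept)       # feature 0 should be the one retained
```

Filter methods like this are cheap and model-agnostic; more statistically sophisticated criteria (mutual information, chi-squared tests) follow the same pattern of scoring features against the output and keeping a subset.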
Taught by
Janani Ravi
Related Courses
Graph Partitioning and Expanders - Stanford University via NovoEd
The Analytics Edge - Massachusetts Institute of Technology via edX
More Data Mining with Weka - University of Waikato via Independent
Mining Massive Datasets - Stanford University via edX
The Caltech-JPL Summer School on Big Data Analytics - California Institute of Technology via Coursera