Deep Learning Essentials
Offered By: University of Pennsylvania via Coursera
Course Description
Overview
Delve into the history of deep learning and explore neural networks such as the perceptron: how they function and which architectures underpin them. Complete short coding assignments in Python.
Syllabus
- Module 1: History of Deep Learning
- In this module, we'll first take a brief look at history, discuss the different ways people have tried to build artificial intelligence, and explore what intelligence is made up of. Then, we'll begin our investigation of an early model called the perceptron.
- Module 2: Perceptron, Stochastic Gradient Descent & Kernel Methods
- In this module, we will continue exploring the perceptron. We'll delve into stochastic gradient descent (SGD), a fundamental optimization technique that lets the perceptron, and other models, learn from data by iteratively updating the model's parameters to minimize errors (a short sketch after this syllabus illustrates the idea). Afterward, we will look at kernel methods, techniques that can separate two sets of points in more complicated ways, drawing inspiration from how the human eye works.
- Module 3: Fully Connected Networks
- In this module, we will move on to fully connected networks. These are more sophisticated models that can be thought of as one perceptron stacked on top of another, layer after layer. Each layer in a fully connected network takes the outputs of the layer below it, works to separate the data points (such as the red and the blue scattered points) a little better than the layer before it, and passes its own outputs on to the next layer (see the forward-pass sketch after this syllabus).
- Module 4: Backpropagation
- We will finish this course by looking at backpropagation, an algorithm for training neural networks that finds the set of weights minimizing error on the data. Backpropagation applies the chain rule from calculus to efficiently calculate the gradients of the loss function with respect to the weights, enabling the model to update its weights in the direction opposite to the gradient (see the backpropagation sketch after this syllabus). We'll also discuss typical datasets consisting of images, sentences, and sounds, and how neural networks can learn from the spatial regularities present in such data.
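As a companion to Module 2, here is a minimal sketch of a perceptron trained with stochastic-gradient-style updates on a tiny, made-up dataset. The data, learning rate, and epoch count are illustrative assumptions, not the course's assignment code.

```python
# Minimal perceptron trained with stochastic-gradient-style updates.
# The toy dataset, learning rate, and epoch count are illustrative,
# not taken from the course assignments.
import random

# Linearly separable toy data: (x1, x2) -> label in {-1, +1}
data = [((2.0, 1.0), +1), ((1.5, 2.5), +1), ((-1.0, -0.5), -1), ((-2.0, 1.0), -1)]

w = [0.0, 0.0]   # weights
b = 0.0          # bias
lr = 0.1         # learning rate

for epoch in range(20):
    random.shuffle(data)                # "stochastic": visit examples in random order
    for (x1, x2), y in data:
        score = w[0] * x1 + w[1] * x2 + b
        if y * score <= 0:              # misclassified: nudge weights toward the example
            w[0] += lr * y * x1
            w[1] += lr * y * x2
            b    += lr * y

print("learned weights:", w, "bias:", b)
```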
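For Module 3, the following sketch runs a forward pass through a small fully connected network, making the "layer on top of layer" picture concrete. The layer sizes and the ReLU activation are placeholder choices, not the course's exact architecture.

```python
# Forward pass through a small fully connected network: each layer reads
# the previous layer's output. Layer sizes and activation are assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Randomly initialized weights and biases for a 2 -> 4 -> 3 -> 1 network.
layers = [(rng.standard_normal((2, 4)), np.zeros(4)),
          (rng.standard_normal((4, 3)), np.zeros(3)),
          (rng.standard_normal((3, 1)), np.zeros(1))]

def forward(x):
    h = x
    for i, (W, b) in enumerate(layers):
        h = h @ W + b                      # linear transform of the layer below
        if i < len(layers) - 1:
            h = np.maximum(h, 0.0)         # nonlinearity between layers (ReLU here)
    return h

x = np.array([[0.5, -1.2]])                # one input point with two features
print(forward(x))
```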
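For Module 4, this sketch hand-rolls backpropagation for a tiny two-layer network with a squared-error loss, tracing the chain rule from the loss back to the weights and then stepping opposite to the gradient. All shapes, initial values, and the learning rate are illustrative assumptions.

```python
# Hand-rolled backpropagation for a tiny 2-layer network with squared-error
# loss. Shapes, initial values, and the learning rate are illustrative.
import numpy as np

rng = np.random.default_rng(1)
x = np.array([0.5, -1.0])          # input
y = np.array([1.0])                # target

W1 = rng.standard_normal((2, 3)); b1 = np.zeros(3)
W2 = rng.standard_normal((3, 1)); b2 = np.zeros(1)
lr = 0.1

for step in range(100):
    # Forward pass
    z1 = x @ W1 + b1
    h1 = np.maximum(z1, 0.0)       # ReLU
    y_hat = h1 @ W2 + b2
    loss = 0.5 * np.sum((y_hat - y) ** 2)

    # Backward pass: chain rule, from the loss back toward the input
    d_yhat = y_hat - y                          # dL/dy_hat
    dW2 = np.outer(h1, d_yhat)                  # dL/dW2
    db2 = d_yhat
    d_h1 = W2 @ d_yhat                          # propagate through the top layer
    d_z1 = d_h1 * (z1 > 0)                      # gradient through the ReLU
    dW1 = np.outer(x, d_z1)
    db1 = d_z1

    # Gradient descent: move opposite to the gradient
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print("final loss:", loss)
```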
Taught by
Chris Callison-Burch and Pratik Chaudhari
Related Courses
- Statistical Learning (Illinois Institute of Technology via Coursera)
- 機器學習技法 (Machine Learning Techniques) (National Taiwan University via Coursera)
- Utilisez des modèles supervisés non linéaires (Use Non-Linear Supervised Models) (CentraleSupélec via OpenClassrooms)
- Generalization Theory in Machine Learning (Institute for Pure & Applied Mathematics (IPAM) via YouTube)
- Alexander Wagner - Nonembeddability of Persistence Diagrams into Hilbert Spaces (Applied Algebraic Topology Network via YouTube)