Cutting-Edge AI: Deep Reinforcement Learning in Python
Offered By: Udemy
Course Description
Overview
What you'll learn:
- Understand a cutting-edge implementation of the A2C algorithm (OpenAI Baselines)
- Understand and implement Evolution Strategies (ES) for AI
- Understand and implement DDPG (Deep Deterministic Policy Gradient)
- Understand important foundations for OpenAI ChatGPT and GPT-4
Ever wondered how AI technologies like OpenAI ChatGPT and GPT-4 really work? In this course, you will learn the foundations of these groundbreaking applications.
Welcome to Cutting-Edge AI!
This is technically Deep Learning in Python part 11 of my deep learning series, and my 3rd reinforcement learning course.
Deep Reinforcement Learning is actually the combination of 2 topics: Reinforcement Learning and Deep Learning (Neural Networks).
While both of these have been around for quite some time, it’s only been recently that Deep Learning has really taken off, and along with it, Reinforcement Learning.
The maturation of deep learning has propelled advances in reinforcement learning, which has been around since the 1980s, although some aspects of it, such as the Bellman equation, have been around for much longer.
Recently, these advances have allowed us to showcase just how powerful reinforcement learning can be.
We’ve seen how AlphaZero can master the game of Go using only self-play.
This came just a few years after the original AlphaGo beat a world champion in Go.
We’ve seen real-world robots learn how to walk, and even recover after being kicked over, despite only being trained using simulation.
Simulation is nice because it doesn’t require actual hardware, which is expensive. If your agent falls down, no real damage is done.
We’ve seen real-world robots learn hand dexterity, which is no small feat.
Walking is one thing, but it involves relatively coarse movements. Hand dexterity is complex - there are many degrees of freedom, and many of the forces involved are extremely subtle.
Imagine using your foot to do something you usually do with your hand, and you immediately understand why this would be difficult.
Last but not least - video games.
Even just considering the past few months, we’ve seen some amazing developments. AIs are now beating professional players in CS:GO and Dota 2.
So what makes this course different from the first two?
Now that we know deep learning works with reinforcement learning, the question becomes: how do we improve these algorithms?
This course is going to show you a few different ways: including the powerful A2C (Advantage Actor-Critic) algorithm, the DDPG (Deep Deterministic Policy Gradient) algorithm, and evolution strategies.
Evolution strategies is a fresh take on reinforcement learning that throws away much of the classical theory in favor of a more "black box" approach, inspired by biological evolution.
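To give a flavor of the idea, here is a minimal sketch of the basic ES update - an illustrative example only, not code from the course. The reward function is a placeholder you would supply (for an RL task it would run an episode with the given parameters and return the total reward):

    import numpy as np

    def evolution_strategies(reward, dim, npop=50, sigma=0.1, lr=0.01, iters=300):
        # reward: black-box function mapping a parameter vector to a scalar score
        w = np.zeros(dim)                          # current parameter vector
        for _ in range(iters):
            noise = np.random.randn(npop, dim)     # one random perturbation per population member
            scores = np.array([reward(w + sigma * n) for n in noise])
            A = (scores - scores.mean()) / (scores.std() + 1e-8)  # normalized "fitness"
            w = w + (lr / (npop * sigma)) * noise.T.dot(A)        # estimated gradient step
        return w

    # toy usage: find the parameter vector that maximizes -(w - 3)^2
    w_best = evolution_strategies(lambda w: -np.sum((w - 3.0) ** 2), dim=5)

Notice there is no value function and no Bellman backup anywhere - the algorithm only needs to evaluate the reward of perturbed parameter vectors, which is exactly what makes it a "black box" method.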
What’s also great about this new course is the variety of environments we get to look at.
First, we’re going to look at the classic Atari environments. These are important because they show that reinforcement learning agents can learn based on images alone.
Second, we’re going to look at MuJoCo, which is a physics simulator. This is the first step to building a robot that can navigate the real world and understand physics - we first have to show it can work with simulated physics.
Finally, we’re going to look at Flappy Bird, everyone’s favorite mobile game just a few years ago.
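If you haven't worked with these environments before, they all share the same basic agent-environment loop. Here is a minimal sketch, assuming the classic OpenAI Gym API (where reset() returns an observation and step() returns a 4-tuple) and that the relevant Atari/MuJoCo packages are installed; the environment IDs are just examples:

    import gym

    # e.g. an Atari game; a MuJoCo task like "HalfCheetah-v2" follows the same pattern
    env = gym.make("Breakout-v0")
    obs = env.reset()                       # for Atari, this is a raw image frame
    done = False
    while not done:
        action = env.action_space.sample()  # random policy, just to demonstrate the loop
        obs, reward, done, info = env.step(action)
    env.close()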
Thanks for reading, and I’ll see you in class!
"If you can't implement it, you don't understand it"
Or as the great physicist Richard Feynman said: "What I cannot create, I do not understand".
My courses are the ONLY courses where you will learn how to implement machine learning algorithms from scratch
Other courses will teach you how to plug in your data into a library, but do you really need help with 3 lines of code?
After doing the same thing with 10 datasets, you realize you didn't learn 10 things. You learned 1 thing, and just repeated the same 3 lines of code 10 times...
Suggested prerequisites:
Calculus
Probability
Object-oriented programming
Python coding: if/else, loops, lists, dicts, sets
Numpy coding: matrix and vector operations
Linear regression
Gradient descent
Know how to build a convolutional neural network (CNN) in TensorFlow
Markov Decision Processes (MDPs)
WHAT ORDER SHOULD I TAKE YOUR COURSES IN?:
Check out the lecture "Machine Learning and AI Prerequisite Roadmap" (available in the FAQ of any of my courses, including the free Numpy course)
UNIQUE FEATURES
Every line of code explained in detail - email me any time if you disagree
No wasted time "typing" on the keyboard like other courses - let's be honest, nobody can really write code worth learning about in just 20 minutes from scratch
Not afraid of university-level math - get important details about algorithms that other courses leave out
Taught by
Lazy Programmer Inc.
Related Courses
Advanced Machine Learning (The Open University via FutureLearn)
On-Ramp to AP* Calculus (Weston High School via edX)
Preparing for the AP* Calculus AB and BC Exams (University of Houston System via Coursera)
Calculus: Single Variable Part 4 - Applications (University of Pennsylvania via Coursera)
Applications of Calculus (Boxplay via FutureLearn)