YoVDO

Tractable Novelty Exploration Over Continuous and Discrete Sequential Decision Problems

Offered By: University of Melbourne via YouTube

Tags

Deep Reinforcement Learning Courses
Sampling Courses
Bloom Filters Courses

Course Description

Overview

Explore the latest advances in width-based planning algorithms for sequential decision problems in this 55-minute lecture by Dr. Nir Lipovetzky, Senior Lecturer at the University of Melbourne's School of Computing and Information Systems. Delve into the world of AI planning, focusing on structural exploration features rather than goal-oriented heuristics or gradients. Learn about state novelty evaluation and its exponential nature, and discover two key advancements: defining state features for continuous dynamics and developing polynomial approximations of novelty through sampling and Bloom filters. Compare the performance of polynomial planners on discrete sequential decision problems with state-of-the-art deep reinforcement learning algorithms using OpenAI Gym benchmarks. Gain insights into how width-based planners can achieve comparable policy quality with significantly reduced computational resources.
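To make the novelty idea concrete, the sketch below shows a width-1 novelty test backed by a Bloom filter: a state counts as novel if at least one of its features has not appeared in any previously generated state, and the Bloom filter keeps that membership check in constant memory at the cost of occasional false positives (so novelty is under-approximated). This is an illustrative toy, not the lecture's actual implementation; the `BloomFilter` class, `is_novel` function, and the feature encoding are all assumptions made for the example.

```python
import hashlib

class BloomFilter:
    """Fixed-size bit array with k hash functions: approximate set membership.
    No false negatives; false positives possible as the filter fills up."""

    def __init__(self, num_bits=1 << 16, num_hashes=4):
        self.num_bits = num_bits
        self.num_hashes = num_hashes
        self.bits = bytearray(num_bits // 8)

    def _indices(self, item):
        # Derive k indices by salting the item with the hash index.
        for i in range(self.num_hashes):
            digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.num_bits

    def add(self, item):
        for idx in self._indices(item):
            self.bits[idx // 8] |= 1 << (idx % 8)

    def __contains__(self, item):
        return all(self.bits[idx // 8] & (1 << (idx % 8))
                   for idx in self._indices(item))

def is_novel(state_features, seen):
    """Width-1 novelty: the state is novel if any single feature is unseen.
    All of the state's features are then recorded for future checks."""
    novel = any(f not in seen for f in state_features)
    for f in state_features:
        seen.add(f)
    return novel

# Example: features as (variable, value) pairs.
seen = BloomFilter()
print(is_novel({("x", 1), ("y", 2)}, seen))  # True: all features unseen
print(is_novel({("x", 1)}, seen))            # False: no new feature
```

A planner using this test prunes non-novel states from its open list, which bounds exploration polynomially in the number of features rather than exponentially in the number of states.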

Syllabus

Introduction
Motivation of planning
Algorithm
State novelty
Bounded iterative width
The notion of width
Properties of width
Elephant in the room
Questions
Exponential time
Bloom filters
Open list control
Results


Taught by

The University of Melbourne

Related Courses

6.S094: Deep Learning for Self-Driving Cars
Massachusetts Institute of Technology via Independent
Natural Language Processing (NLP)
Microsoft via edX
Deep Reinforcement Learning
Nvidia Deep Learning Institute via Udacity
Advanced AI: Deep Reinforcement Learning in Python
Udemy
Self-driving go-kart with Unity-ML
Udemy