Textual Explanation for Self-Driving Vehicles
Offered By: University of Central Florida via YouTube
Course Description
Overview
Explore the intricacies of self-driving vehicle technology in this 28-minute lecture from the University of Central Florida. Delve into the research question and motivation behind explainable driving models, and understand their importance and goals. Learn about the main idea of the explainable driving model and its network architecture, including the preprocessing, convolutional feature encoding, and vehicle controller components. Discover the Strongly Aligned Attention (SAA) mechanism and the textual explanation generator with its explanation LSTM. Examine the Berkeley DeepDrive eXplanation (BDD-X) dataset and the training process. Evaluate the vehicle controller, compare its variants, and analyze attention under regularization. Finally, assess the explanation generator through both automated and human evaluation methods.
Syllabus
Intro
Research Question and Motivation
Why Is It Important to Know?
Goal of the Work
The Main Idea: Explainable Driving Model
The Network Architecture
Preprocessing
Convolutional Feature Encoder
Vehicle Controller
Strongly Aligned Attention (SAA)
Textual Explanation Generator: Explanation LSTM
Berkeley DeepDrive eXplanation (BDD-X) Dataset
Training
Evaluation of Vehicle Controller
Comparing variants of Vehicle Controller
Attention under regularization
Evaluation of Explanation Generator
Human Evaluation
Taught by
UCF CRCV
Tags
Related Courses
Neural Networks for Machine Learning - University of Toronto via Coursera
Machine Learning Techniques (機器學習技法) - National Taiwan University via Coursera
Machine Learning Capstone: An Intelligent Application with Deep Learning - University of Washington via Coursera
Applied Problems of Data Analysis (Прикладные задачи анализа данных) - Moscow Institute of Physics and Technology via Coursera
Leading Ambitious Teaching and Learning - Microsoft via edX