Label Efficient Visual Abstractions for Autonomous Driving
Offered By: Andreas Geiger via YouTube
Course Description
Overview
Explore a keynote presentation on label-efficient visual abstractions for autonomous driving. Delve into the trade-offs between annotation costs and driving performance in semantic segmentation-based approaches. Learn about practical insights for exploiting segmentation-based visual abstractions more efficiently, resulting in reduced variance of learned policies. Examine the impact of different segmentation-based modalities on behavior cloning agents in the CARLA simulator. Discover how to optimize intermediate representations for driving tasks, moving beyond traditional image-space loss functions to maximize safety and distance traveled per intervention.
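To make the setup concrete, here is a minimal, hypothetical sketch (not the authors' code) of a conditional behavior-cloning policy of the kind discussed in the talk: it takes a semantic segmentation map restricted to an assumed reduced set of relevant classes as input and predicts low-level controls, selecting an output branch by navigation command. The class count, command set, and all names (e.g. SegBCPolicy) are illustrative assumptions.

```python
# Hedged sketch: conditional behavior cloning on a segmentation-based
# visual abstraction. Class/command counts are assumptions for illustration.
import torch
import torch.nn as nn

NUM_CLASSES = 6    # assumed reduced class set (e.g. road, lane marking, vehicle, ...)
NUM_COMMANDS = 4   # e.g. follow lane, turn left, turn right, go straight

class SegBCPolicy(nn.Module):
    def __init__(self, num_classes=NUM_CLASSES, num_commands=NUM_COMMANDS):
        super().__init__()
        # Small CNN encoder over the one-hot segmentation "visual abstraction".
        self.encoder = nn.Sequential(
            nn.Conv2d(num_classes, 32, 5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # One output branch per navigation command (conditional imitation style);
        # each branch predicts (steer, throttle, brake).
        self.branches = nn.ModuleList(
            [nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 3))
             for _ in range(num_commands)]
        )

    def forward(self, seg_onehot, command):
        feat = self.encoder(seg_onehot)                               # (B, 128)
        out = torch.stack([b(feat) for b in self.branches], dim=1)   # (B, C, 3)
        idx = command.view(-1, 1, 1).expand(-1, 1, out.size(-1))
        return out.gather(1, idx).squeeze(1)                          # (B, 3)

# Behavior-cloning step: imitate expert controls with an L1 loss on fake data.
if __name__ == "__main__":
    policy = SegBCPolicy()
    seg = torch.zeros(8, NUM_CLASSES, 96, 192).scatter_(
        1, torch.randint(NUM_CLASSES, (8, 1, 96, 192)), 1.0)          # fake one-hot maps
    cmd = torch.randint(NUM_COMMANDS, (8,))
    expert = torch.rand(8, 3)                                         # fake expert controls
    loss = nn.functional.l1_loss(policy(seg, cmd), expert)
    loss.backward()
    print(float(loss))
```

The talk's central question is then how coarse and how cheaply annotated the segmentation input to such a policy can be while preserving driving performance.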
Syllabus
Introduction
Two dominant paradigms for self-driving
Direct perception
Conditional affordance learning
Intermediate representations
More Related Findings
What is a good visual abstraction
Input/Output
NoCrash Benchmark
Identifying Relevant Classes
Results
Qualitative Results
Summary
Dataset Overview
Illustrations
Taught by
Andreas Geiger
Related Courses
Introduction to Artificial Intelligence - Stanford University via Udacity
Natural Language Processing - Columbia University via Coursera
Probabilistic Graphical Models 1: Representation - Stanford University via Coursera
Computer Vision: The Fundamentals - University of California, Berkeley via Coursera
Learning from Data (Introductory Machine Learning course) - California Institute of Technology via Independent