Efficient 3D Perception for Autonomous Vehicles

Offered By: MIT HAN Lab via YouTube

Tags

Computer Vision Courses, Autonomous Vehicles Courses, Object Detection Courses, Object Tracking Courses, LiDAR Courses

Course Description

Overview

Explore cutting-edge advancements in efficient 3D perception for autonomous vehicles in this guest lecture by Zhijian Liu from MIT HAN Lab. Delve into the BEVFusion framework, which unifies camera, LiDAR, and radar features in a shared bird's-eye view space, achieving state-of-the-art performance on multiple 3D perception benchmarks. Learn about the 40-fold acceleration of the view transformation operator, addressing a critical efficiency bottleneck. Discover how BEVFusion excels in various tasks, including object detection, tracking, and map segmentation. Examine two recent innovations: FlatFormer, an efficient point cloud transformer that achieves real-time performance on edge GPUs, and SparseViT, which leverages spatial sparsity in 2D image transformers for improved efficiency. Gain insights into the latest research driving the development of more efficient and accurate perception systems for autonomous vehicles.
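To make the fusion idea above concrete, here is a minimal, illustrative Python sketch (not the actual BEVFusion code) of how camera and LiDAR features can each be converted into a shared bird's-eye-view grid and then fused channel-wise. All names, shapes, and the naive "splat" view transform are hypothetical simplifications for illustration only.

```python
# Hypothetical sketch of multi-sensor fusion in a shared BEV space.
# The real BEVFusion view-transformation operator is heavily optimized;
# the loops below are only for readability.
import numpy as np

BEV_H, BEV_W = 128, 128          # size of the shared BEV grid (hypothetical)
CAM_C, LIDAR_C = 64, 32          # per-sensor feature channels (hypothetical)

def camera_to_bev(cam_feats, depth_probs, bev_coords):
    """Naive camera-to-BEV 'splat': scatter each image feature, weighted by
    its predicted depth probability, into the BEV cell it projects to."""
    bev = np.zeros((CAM_C, BEV_H, BEV_W), dtype=np.float32)
    for i in range(cam_feats.shape[1]):   # the real operator vectorizes this
        y, x = bev_coords[i]
        bev[:, y, x] += depth_probs[i] * cam_feats[:, i]
    return bev

def lidar_to_bev(points, point_feats):
    """Pillar-style voxelization: average point features per BEV cell."""
    bev = np.zeros((LIDAR_C, BEV_H, BEV_W), dtype=np.float32)
    counts = np.zeros((BEV_H, BEV_W), dtype=np.float32)
    for p, f in zip(points, point_feats):
        y, x = int(p[1]), int(p[0])
        if 0 <= y < BEV_H and 0 <= x < BEV_W:
            bev[:, y, x] += f
            counts[y, x] += 1
    return bev / np.maximum(counts, 1)

# Random placeholder inputs just to run the sketch end to end.
n_pix, n_pts = 500, 2000
cam_bev = camera_to_bev(
    cam_feats=np.random.rand(CAM_C, n_pix).astype(np.float32),
    depth_probs=np.random.rand(n_pix).astype(np.float32),
    bev_coords=np.random.randint(0, BEV_H, size=(n_pix, 2)),
)
lidar_bev = lidar_to_bev(
    points=np.random.rand(n_pts, 2) * BEV_W,
    point_feats=np.random.rand(n_pts, LIDAR_C).astype(np.float32),
)

# The fused BEV tensor feeds task heads (detection, tracking, map segmentation).
fused_bev = np.concatenate([cam_bev, lidar_bev], axis=0)
print(fused_bev.shape)  # (CAM_C + LIDAR_C, BEV_H, BEV_W)
```

The key design point, as described in the lecture, is that once all sensors share the same BEV representation, a single set of task heads can operate on the fused tensor, and the expensive step becomes the camera-to-BEV view transformation rather than the fusion itself.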

Syllabus

Efficient 3D Perception for Autonomous Vehicles (Zhijian Liu)


Taught by

MIT HAN Lab

Related Courses

6.S094: Deep Learning for Self-Driving Cars
Massachusetts Institute of Technology via Independent
Multi-Object Tracking for Automotive Systems
Chalmers University of Technology via edX
Decision-Making for Autonomous Systems
Chalmers University of Technology via edX
Self-Driving Fundamentals: Featuring Apollo
Baidu via Udacity
Transport Systems: Global Issues and Future Innovations
University of Leeds via FutureLearn