Efficient 3D Perception for Autonomous Vehicles

Offered By: MIT HAN Lab via YouTube

Tags

Computer Vision Courses, Autonomous Vehicles Courses, Object Detection Courses, Object Tracking Courses, LiDAR Courses

Course Description

Overview

Explore cutting-edge advancements in efficient 3D perception for autonomous vehicles in this guest lecture by Zhijian Liu from MIT HAN Lab. Delve into the BEVFusion framework, which unifies camera, LiDAR, and radar features in a shared bird's-eye view space, achieving state-of-the-art performance on multiple 3D perception benchmarks. Learn about the 40-fold acceleration of the view transformation operator, addressing a critical efficiency bottleneck. Discover how BEVFusion excels in various tasks, including object detection, tracking, and map segmentation. Examine two recent innovations: FlatFormer, an efficient point cloud transformer that achieves real-time performance on edge GPUs, and SparseViT, which leverages spatial sparsity in 2D image transformers for improved efficiency. Gain insights into the latest research driving the development of more efficient and accurate perception systems for autonomous vehicles.
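To make the core BEVFusion idea concrete, the sketch below shows how features from different sensors can be fused once they live in a shared bird's-eye-view (BEV) grid. This is a minimal illustration, not the lecture's or the paper's implementation: the class name `SimpleBEVFuser`, the channel counts, and the 180x180 grid size are all assumptions chosen for clarity, and the (heavily optimized) camera-to-BEV view transformation discussed in the lecture is assumed to have already happened.

```python
# Minimal, illustrative sketch of BEV-space sensor fusion (assumption-based,
# not the BEVFusion reference implementation). Once camera and LiDAR features
# have each been mapped onto the same BEV grid, fusion reduces to channel-wise
# concatenation followed by a small convolutional block.
import torch
import torch.nn as nn

class SimpleBEVFuser(nn.Module):
    def __init__(self, cam_channels=80, lidar_channels=128, fused_channels=256):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Conv2d(cam_channels + lidar_channels, fused_channels, 3, padding=1),
            nn.BatchNorm2d(fused_channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, cam_bev, lidar_bev):
        # cam_bev:   (B, cam_channels,   H, W) camera features after view transformation
        # lidar_bev: (B, lidar_channels, H, W) LiDAR features after voxelization/flattening
        return self.fuse(torch.cat([cam_bev, lidar_bev], dim=1))

# Example usage with randomly generated features on an arbitrary 180x180 BEV grid.
fuser = SimpleBEVFuser()
cam_bev = torch.randn(1, 80, 180, 180)
lidar_bev = torch.randn(1, 128, 180, 180)
fused = fuser(cam_bev, lidar_bev)   # shape: (1, 256, 180, 180)
```

The appeal of this design, as highlighted in the lecture, is that once every modality is expressed in the same BEV space, downstream heads for detection, tracking, and map segmentation can all consume the single fused feature map.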

Syllabus

Efficient 3D Perception for Autonomous Vehicles (Zhijian Liu)


Taught by

MIT HAN Lab

Related Courses

State Estimation and Localization for Self-Driving Cars
University of Toronto via Coursera
Sensor Fusion
Mercedes-Benz via Udacity
Remote Sensing: Principles and Applications
Indian Institute of Technology Bombay via Swayam
Reality Capture Foundations for AEC
LinkedIn Learning
Learning FARO: Laser Scanning
LinkedIn Learning