TorchSparse++ - Efficient Training and Inference Framework for Sparse Convolution on GPUs

Offered By: MIT HAN Lab via YouTube

Tags

GPU Computing Courses, Deep Learning Courses, Computer Vision Courses, Neural Networks Courses, Parallel Computing Courses

Course Description

Overview

Explore a conference talk from MICRO 2023 presenting "TorchSparse++: Efficient Training and Inference Framework for Sparse Convolution on GPUs." Delve into research by Haotian Tang, Shang Yang, Zhijian Liu, and colleagues from the MIT HAN Lab on making sparse convolution, a core operation in 3D point-cloud and other spatially sparse deep learning workloads, efficient on GPUs for both training and inference. Discover the key features of the TorchSparse++ framework and how its design translates into faster deep learning and computer vision pipelines. Access additional resources, including the TorchSparse website, project details, and open-source code, to further explore this work on sparse convolution optimization.
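
To make the topic concrete, below is a minimal, purely illustrative sketch in plain PyTorch of the gather-matmul-scatter formulation commonly used to compute sparse 3D convolutions over active voxel coordinates. This is not TorchSparse++ code: the function name, argument layout, and submanifold-style behavior (outputs only at the input's active sites) are assumptions chosen for readability.

```python
# Illustrative sketch only (not TorchSparse++): sparse 3D convolution expressed
# as gather -> matmul -> scatter over the coordinates of active (non-empty) voxels.
import itertools

import torch


def sparse_conv3d(coords, feats, weight):
    """coords: (N, 3) integer voxel coordinates of active sites
    feats:  (N, C_in) features stored at those sites
    weight: (K**3, C_in, C_out), one weight matrix per kernel offset
    Returns (N, C_out) output features at the same sites (submanifold-style)."""
    n, _ = feats.shape
    k3, _, c_out = weight.shape
    half = round(k3 ** (1 / 3)) // 2
    # Hash the active coordinates so neighbor lookups are O(1).
    index = {tuple(c.tolist()): i for i, c in enumerate(coords)}
    out = torch.zeros(n, c_out)
    for off_id, offset in enumerate(
        itertools.product(range(-half, half + 1), repeat=3)
    ):
        off = torch.tensor(offset)
        # Kernel map for this offset: which input site feeds which output site.
        in_idx, out_idx = [], []
        for i, c in enumerate(coords):
            j = index.get(tuple((c + off).tolist()))
            if j is not None:
                in_idx.append(j)
                out_idx.append(i)
        if not in_idx:
            continue
        gathered = feats[in_idx]                 # gather matched input features
        partial = gathered @ weight[off_id]      # dense matmul for this offset
        out.index_add_(0, torch.tensor(out_idx), partial)  # scatter-accumulate
    return out


# Tiny usage example: 3 active voxels, C_in = 4, a 3x3x3 kernel, C_out = 8.
coords = torch.tensor([[0, 0, 0], [0, 0, 1], [2, 2, 2]])
feats = torch.randn(3, 4)
weight = torch.randn(27, 4, 8)
print(sparse_conv3d(coords, feats, weight).shape)  # torch.Size([3, 8])
```

Production frameworks replace these Python loops with precomputed kernel maps and fused GPU kernels; doing that efficiently for both the forward and backward passes is the kind of problem the talk addresses.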

Syllabus

MICRO'23 TorchSparse++: Efficient Training and Inference Framework for Sparse Convolution on GPUs


Taught by

MIT HAN Lab

Related Courses

Neural Networks for Machine Learning
University of Toronto via Coursera
機器學習技法 (Machine Learning Techniques)
National Taiwan University via Coursera
Machine Learning Capstone: An Intelligent Application with Deep Learning
University of Washington via Coursera
Прикладные задачи анализа данных (Applied Problems of Data Analysis)
Moscow Institute of Physics and Technology via Coursera
Leading Ambitious Teaching and Learning
Microsoft via edX