Offloading-Efficient Sparse AI Systems
Offered By: ACM SIGPLAN via YouTube
Course Description
Overview
Explore offloading-efficient sparse AI systems in this 19-minute conference talk presented at the SPARSE 2024 workshop. Speaker Luo Mai covers techniques for optimizing sparse artificial intelligence architectures, focusing on offloading strategies designed to improve the efficiency, performance, and resource utilization of sparse AI models, and discusses the implications of this research for building more efficient and scalable AI systems. The SPARSE workshop is part of the ACM SIGPLAN-sponsored PLDI 2024 conference and focuses on innovations in sparse computing for AI applications.
Syllabus
[SPARSE24] Offloading-Efficient Sparse AI Systems
Taught by
ACM SIGPLAN
Related Courses
Cloud Computing Concepts, Part 1
University of Illinois at Urbana-Champaign via Coursera
Cloud Computing Concepts: Part 2
University of Illinois at Urbana-Champaign via Coursera
Reliable Distributed Algorithms - Part 1
KTH Royal Institute of Technology via edX
Introduction to Apache Spark and AWS
University of London International Programmes via Coursera
Réalisez des calculs distribués sur des données massives
CentraleSupélec via OpenClassrooms