Offloading-Efficient Sparse AI Systems
Offered By: ACM SIGPLAN via YouTube
Course Description
Overview
Explore offloading-efficient sparse AI systems in this 19-minute conference talk from the SPARSE 2024 workshop, part of the ACM SIGPLAN-sponsored PLDI 2024 conference. Speaker Luo Mai examines techniques for optimizing sparse artificial intelligence architectures, with a focus on offloading strategies designed to improve the efficiency of sparse AI models. Learn how these approaches can improve performance and resource utilization in AI systems, and what the research implies for the development of more efficient and scalable AI technologies.
Syllabus
[SPARSE24] Offloading-Efficient Sparse AI Systems
Taught by
ACM SIGPLAN
Related Courses
Fog Networks and the Internet of Things (Princeton University via Coursera)
AWS IoT: Developing and Deploying an Internet of Things (Amazon Web Services via edX)
Business Considerations for 5G with Edge, IoT, and AI (Linux Foundation via edX)
5G Strategy for Business Leaders (Linux Foundation via edX)
Intel® Edge AI Fundamentals with OpenVINO™ (Intel via Udacity)