TinyEngine and Parallel Processing - EfficientML.ai Lecture 11
Offered By: MIT HAN Lab via YouTube
Course Description
Overview
Explore TinyEngine and parallel processing in this lecture from MIT's 6.5940 course on efficient machine learning. Prof. Song Han discusses the design and optimization of TinyEngine, an inference engine for resource-constrained devices, and the parallel processing techniques that speed up machine learning models on edge hardware. Gain insight into recent developments in efficient ML deployment and how TinyEngine contributes to embedded AI. Accompanying slides and additional resources are available to supplement the lecture.
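To make the kind of kernel-level optimization the lecture covers concrete, below is a minimal C sketch of loop unrolling applied to an int8 dot-product inner loop, the basic building block of convolution and fully connected layers on microcontrollers. This is not code from TinyEngine itself; the function names, data type, and unroll factor are illustrative assumptions only.

```c
#include <stdint.h>
#include <stdio.h>

/* Baseline: one multiply-accumulate per loop iteration. */
int32_t dot_naive(const int8_t *a, const int8_t *b, int n) {
    int32_t acc = 0;
    for (int i = 0; i < n; i++) {
        acc += (int32_t)a[i] * (int32_t)b[i];
    }
    return acc;
}

/* Unrolled by 4: fewer loop-counter and branch instructions per
 * multiply-accumulate, and four independent accumulators expose
 * more instruction-level parallelism for the compiler to schedule. */
int32_t dot_unrolled(const int8_t *a, const int8_t *b, int n) {
    int32_t acc0 = 0, acc1 = 0, acc2 = 0, acc3 = 0;
    int i = 0;
    for (; i + 4 <= n; i += 4) {
        acc0 += (int32_t)a[i]     * (int32_t)b[i];
        acc1 += (int32_t)a[i + 1] * (int32_t)b[i + 1];
        acc2 += (int32_t)a[i + 2] * (int32_t)b[i + 2];
        acc3 += (int32_t)a[i + 3] * (int32_t)b[i + 3];
    }
    for (; i < n; i++) {  /* handle leftover elements when n % 4 != 0 */
        acc0 += (int32_t)a[i] * (int32_t)b[i];
    }
    return acc0 + acc1 + acc2 + acc3;
}

int main(void) {
    int8_t a[8] = {1, 2, 3, 4, 5, 6, 7, 8};
    int8_t b[8] = {8, 7, 6, 5, 4, 3, 2, 1};
    printf("naive=%ld unrolled=%ld\n",
           (long)dot_naive(a, b, 8), (long)dot_unrolled(a, b, 8));
    return 0;
}
```

Both functions compute the same result; the unrolled version trades a little code size for fewer per-element loop overheads, the same trade-off an embedded inference engine makes when generating kernels for a specific layer shape.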
Syllabus
EfficientML.ai Lecture 11 - TinyEngine (MIT 6.5940, Fall 2024)
Taught by
MIT HAN Lab
Related Courses
TensorFlow Lite for Edge Devices - Tutorial (freeCodeCamp)
Few-Shot Learning in Production (HuggingFace via YouTube)
TinyML Talks Germany - Neural Network Framework Using Emerging Technologies for Screening Diabetic (tinyML via YouTube)
TinyML for All: Full-stack Optimization for Diverse Edge AI Platforms (tinyML via YouTube)
TinyML Talks - Software-Hardware Co-design for Tiny AI Systems (tinyML via YouTube)