Enabling Neural Network at the Low Power Edge - A Neural Network Compiler
Offered By: tinyML via YouTube
Course Description
Overview
Explore a tinyML Talks webcast on enabling neural networks for low-power edge devices. Discover Eta Compute's integrated approach to minimizing barriers in designing neural networks for ultra-low power operation, focusing on embedded vision applications. Learn about neural network optimization for embedded systems, hardware-software co-optimization for energy efficiency, and automatic inference code generation using a proprietary hardware-aware compiler tool. Gain insights into memory management, compute power optimization, and accuracy considerations for deploying neural networks in IoT and mobile devices. Understand the challenges and solutions in implementing neural networks on hardware-constrained embedded systems, with practical examples in people counting and AI vision applications.
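The talk's core theme, optimizing neural networks so they fit hardware-constrained embedded targets, commonly involves reducing weight precision. As a minimal illustration of the kind of transformation such compiler tools apply (not Eta Compute's proprietary implementation, which the talk describes but does not publish), here is a sketch of symmetric per-tensor int8 weight quantization:

```python
import numpy as np

def quantize_int8(weights):
    """Symmetric per-tensor int8 quantization: w ~ scale * q.

    Maps float32 weights to int8 in [-127, 127], storing one
    float scale factor per tensor. This cuts weight memory 4x,
    which matters on MCUs with tens of kilobytes of SRAM.
    """
    scale = np.max(np.abs(weights)) / 127.0
    if scale == 0.0:          # guard: all-zero tensor
        scale = 1.0
    q = np.round(weights / scale).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float32 weights for accuracy checks."""
    return q.astype(np.float32) * scale

# Toy example: quantize a small weight vector and measure error.
w = np.array([0.5, -1.27, 0.0, 1.0], dtype=np.float32)
q, s = quantize_int8(w)
w_hat = dequantize(q, s)
max_err = float(np.max(np.abs(w - w_hat)))
```

In practice the accuracy considerations mentioned in the talk come down to bounding this reconstruction error per layer, and retraining or calibrating when it degrades model accuracy too far.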
Syllabus
Introduction
Agenda
Challenges
Current status
TensorFlow
Current version
Pipelines
Applications
People Counting
AI Vision
Neural Network
Summary
Questions
TinyML Tech Sponsors
Taught by
tinyML
Related Courses
Neural Networks for Machine Learning (University of Toronto via Coursera)
Good Brain, Bad Brain: Basics (University of Birmingham via FutureLearn)
Statistical Learning with R (Stanford University via edX)
Machine Learning 1—Supervised Learning (Brown University via Udacity)
Fundamentals of Neuroscience, Part 2: Neurons and Networks (Harvard University via edX)