Enabling Neural Network at the Low Power Edge - A Neural Network Compiler
Offered By: tinyML via YouTube
Course Description
Overview
Explore a tinyML Talks webcast on enabling neural networks for low-power edge devices. Discover Eta Compute's integrated approach to minimizing barriers in designing neural networks for ultra-low power operation, focusing on embedded vision applications. Learn about neural network optimization for embedded systems, hardware-software co-optimization for energy efficiency, and automatic inference code generation using a proprietary hardware-aware compiler tool. Gain insights into memory management, compute power optimization, and accuracy considerations for deploying neural networks in IoT and mobile devices. Understand the challenges and solutions in implementing neural networks on hardware-constrained embedded systems, with practical examples in people counting and AI vision applications.
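The talk itself centers on Eta Compute's proprietary hardware-aware compiler, which is not publicly documented here. As a rough, hedged illustration of the general workflow it describes (taking a trained TensorFlow model and generating a quantized, integer-only artifact small enough for a memory-constrained microcontroller), the sketch below uses the public TensorFlow Lite converter instead; the model file name, input shape, and calibration data are placeholders, not details from the webcast.

```python
# Illustrative sketch only: this is NOT Eta Compute's tool, just the public
# TensorFlow Lite post-training int8 quantization flow, which addresses the
# same memory and compute constraints discussed in the talk.
import numpy as np
import tensorflow as tf

def representative_data():
    # Placeholder calibration inputs; real deployments would feed actual
    # samples (e.g. grayscale 96x96 frames for a people-counting model).
    for _ in range(100):
        yield [np.random.rand(1, 96, 96, 1).astype(np.float32)]

# Hypothetical trained Keras model for a vision task.
model = tf.keras.models.load_model("person_counting_model.h5")

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data
# Restrict to integer-only ops so the model can run on MCU kernels
# without floating-point support.
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8

tflite_model = converter.convert()
with open("person_counting_int8.tflite", "wb") as f:
    f.write(tflite_model)
```

The resulting .tflite file can then be compiled or embedded for on-device inference; a hardware-aware compiler like the one presented would additionally schedule operators and manage memory for the specific target silicon.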
Syllabus
Introduction
Agenda
Challenges
Current status
TensorFlow
Current version
Pipelines
Applications
People Counting
AI Vision
Neural Network
Summary
Questions
TinyML Tech Sponsors
Taught by
tinyML
Related Courses
Embedded Systems - Shape The World: Microcontroller Input/Output - The University of Texas at Austin via edX
Model Checking - Chennai Mathematical Institute via Swayam
Introduction to the Internet of Things and Embedded Systems - University of California, Irvine via Coursera
Sistemas embebidos: Aplicaciones con Arduino - Universidad Nacional Autónoma de México via Coursera
Quantitative Formal Modeling and Worst-Case Performance Analysis - EIT Digital via Coursera