Low-Cost Neural Network Inferencing on the Edge With Xcore.AI
Offered By: tinyML via YouTube
Course Description
Overview
Explore low-cost neural network inferencing on edge devices with xcore.ai in this tinyML Talks webcast. Discover XMOS's next-generation crossover processor featuring a novel vector unit designed for low-precision integer and binarized neural network inference. Learn about the core ideas behind this vector unit and how it differs from a traditional load-store architecture to enable high-throughput convolution calculations. Gain insights into the software tools and libraries that allow users to get the most out of the hardware. Watch a demonstration of optimization and deployment tools based on TensorFlow Lite for Microcontrollers, including the conversion and analysis of a MobileNet variant. The talk covers the vector unit, the surrounding infrastructure, demonstrations on an optimized device, and future developments in the field.
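As a rough illustration of the kind of conversion flow the talk demonstrates, the sketch below shows the generic first stage: full int8 post-training quantization of a MobileNet variant using the standard TensorFlow Lite converter. The specific MobileNet configuration, input resolution, and random calibration data are assumptions chosen purely for demonstration; the xcore.ai-specific optimization and analysis performed by XMOS's own tooling is not shown here.

```python
import numpy as np
import tensorflow as tf

# Illustrative stand-in for the MobileNet variant mentioned in the talk:
# MobileNetV1 at 128x128 input with a 0.25 width multiplier (assumed values).
model = tf.keras.applications.MobileNet(
    input_shape=(128, 128, 3), alpha=0.25, weights=None)

# A representative dataset calibrates activation ranges for quantization.
# Random data is used here only to keep the sketch self-contained.
def representative_data_gen():
    for _ in range(100):
        yield [np.random.rand(1, 128, 128, 3).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data_gen

# Force full int8 quantization, the form expected for low-precision
# integer inference on microcontroller-class targets.
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8

tflite_model = converter.convert()
with open("mobilenet_int8.tflite", "wb") as f:
    f.write(tflite_model)
```

The resulting int8 flatbuffer is the type of artifact a deployment tool built on TensorFlow Lite for Microcontrollers would then analyze and map onto the target's vector unit.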
Syllabus
Introduction
Vector unit
Infrastructure
Demo
Explore optimized
Device demo
Next talk
Taught by
tinyML
Related Courses
Fog Networks and the Internet of Things (Princeton University via Coursera)
AWS IoT: Developing and Deploying an Internet of Things (Amazon Web Services via edX)
Business Considerations for 5G with Edge, IoT, and AI (Linux Foundation via edX)
5G Strategy for Business Leaders (Linux Foundation via edX)
Intel® Edge AI Fundamentals with OpenVINO™ (Intel via Udacity)