Stanford Seminar - Neural Networks on Chip Design from the User Perspective
Offered By: Stanford University via YouTube
Course Description
Overview
To apply neural networks to different applications, various customized hardware architectures have been proposed in the past few years to boost the energy efficiency of deep learning inference. Meanwhile, the possibility of adopting emerging NVM (Non-Volatile Memory) technology for efficient learning systems, i.e., in-memory computing, is also attractive to both academia and industry. We will briefly review our past work on Deep Learning Processing Unit (DPU) design on FPGA at Tsinghua and DeePhi, and then discuss features, namely interrupts and virtualization, that we are trying to introduce into accelerators from the user's perspective. Furthermore, we will cover the reliability and security challenges of NN accelerators on both FPGA and NVM, along with some preliminary solutions.
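As background for the "NN Compression: Quantization" item in the syllabus below, here is a minimal sketch of symmetric uniform INT8 post-training quantization in Python/NumPy. It is a generic illustration of the technique, not the DPU's actual fixed-point scheme; the function names and the per-tensor scaling choice are assumptions made for this example.

    import numpy as np

    def quantize_int8(weights):
        # Per-tensor symmetric scale: the largest-magnitude weight
        # maps to the edge of the signed 8-bit range.
        scale = np.max(np.abs(weights)) / 127.0
        q = np.clip(np.round(weights / scale), -128, 127).astype(np.int8)
        return q, scale

    def dequantize(q, scale):
        # Recover a float approximation of the original weights.
        return q.astype(np.float32) * scale

    w = np.random.randn(4, 4).astype(np.float32)
    q, s = quantize_int8(w)
    print("max abs error:", np.max(np.abs(w - dequantize(q, s))))

Storing weights as INT8 rather than FP32 cuts weight memory traffic roughly 4x, which is a large part of where the energy savings in such inference accelerators come from.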
Syllabus
Introduction.
Deep Learning for Everything.
The New Era is Waiting for the Next Rising Star.
Why? Power Consumption and Latency Are Crucial.
Development of Energy-Efficient Computing Chips.
Our Previous Work: Software-Hardware Co-design for Energy-Efficient NN Inference Systems.
NN Compression: Quantization.
NN Compression: Pruning.
Hardware Architecture - Utilization.
Academic NN Accelerators (Performance vs Power).
Survey of FPGA-Based Inference Accelerators.
Application Scenarios: Cloud, Edge, Terminal.
Growth of Computation Power.
Brief Summary.
CNNs Greatly Benefit Basic Functions in Robotic Applications.
Accelerator Interrupt for Hardware Conflicts.
Interrupt Response Latency & Extra Cost.
How to Interrupt?
Virtual Instruction-Based Interrupt.
DNN Inference Tasks in the Cloud.
How to Support Multiple Tasks in the Cloud?
How to Support Dynamic Workloads in the Cloud?
Low-overhead Reconfiguration of ISA-based Accelerator.
Design Techniques.
Experiments.
Analysis for NN Fault Problems.
Fault Model in Neural Architecture Search (NAS).
Fault-Tolerant Training - NAS Framework.
Discovered Architecture.
Bottleneck of Energy Efficiency Improvement.
Conventional Encryption Incurs Massive Write Operations.
Orders-of-Magnitude Differences in Write Endurance and Write Latency.
SFGE: Sparse Fast Gradient Encryption.
Accuracy Drop vs. Encryption Number and Intensity.
Selecting Encryption Configurations for Different NNs.
Taught by
Stanford Online
Related Courses
TensorFlow Developer Certificate Exam Prep (A Cloud Guru)
Post Graduate Certificate in Advanced Machine Learning & AI (Indian Institute of Technology Roorkee via Coursera)
Advanced AI Techniques for the Supply Chain (LearnQuest via Coursera)
Advanced Learning Algorithms (DeepLearning.AI via Coursera)
IBM AI Engineering (IBM via Coursera)