What Can In-memory Computing Deliver and What Are the Barriers - tinyML Summit 2019
Offered By: tinyML via YouTube
Course Description
Overview
Explore in-memory computing's potential and challenges in this 36-minute conference talk from the tinyML Summit 2019. Delve into Prof. Naveen Verma's insights on the memory wall, data movement amortization, and IMC as a spatial architecture. Examine current IMC standings, including analog computation, algorithmic co-design, programmability, and efficient application mappings. Discover the path forward with charge-domain analog computing, featuring a 2.4Mb, 64-tile IMC system. Learn about programmable IMC, bit-scalable mixed-signal compute, development boards, and design flows. Witness demonstrations and gain valuable conclusions on the future of in-memory computing in tiny machine learning applications.
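The overview mentions bit-scalable mixed-signal compute, where multi-bit operands are decomposed into bit planes, each plane's 1-bit dot product is accumulated in the analog (charge) domain, and the partial results are recombined digitally with binary weights. A minimal Python sketch of that bit-plane idea (the function names, 4-bit widths, and structure are illustrative assumptions, not taken from the talk):

```python
import numpy as np

def bit_planes(x, bits):
    """Decompose non-negative integer vectors into bit planes, LSB first."""
    return [(x >> b) & 1 for b in range(bits)]

def imc_dot(weights, activations, w_bits=4, a_bits=4):
    """Emulate a bit-scalable IMC dot product.

    Each (weight plane, activation plane) pair models one in-memory
    1-bit dot product; the shift-and-add models the digital
    recombination of the analog partial results.
    """
    acc = 0
    for wb, w_plane in enumerate(bit_planes(weights, w_bits)):
        for ab, a_plane in enumerate(bit_planes(activations, a_bits)):
            # One "analog" accumulation: count of coinciding 1-bits
            partial = int(np.sum(w_plane * a_plane))
            acc += partial << (wb + ab)  # bit-weighted recombination
    return acc

w = np.array([3, 5, 7, 2])
a = np.array([1, 4, 2, 6])
print(imc_dot(w, a))  # → 49, same as int(np.dot(w, a))
```

The point of the decomposition is that the analog hardware only ever sums 1-bit products, so precision can be traded for energy and throughput by varying `w_bits` and `a_bits` without changing the memory array itself.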
Syllabus
Intro
The memory wall: separating memory from compute fundamentally raises a communication cost
So, we should amortize data movement
In-memory computing (IMC)
The basic tradeoffs
IMC as a spatial architecture
Where does IMC stand today?
Analog computation
Algorithmic co-design
Programmability
Efficient application mappings
Path forward: charge-domain analog computing
2.4Mb, 64-tile IMC
Programmable IMC
Bit-scalable mixed-signal compute
Development board
Design flow
Demonstrations
Conclusions
Taught by
tinyML
Related Courses
Computer Architecture - Princeton University via Coursera
Introduction to Computer Architecture - Carnegie Mellon University via Independent
Build a Modern Computer from First Principles: From Nand to Tetris (Project-Centered Course) - Hebrew University of Jerusalem via Coursera
Fundamentals of Computer Systems (I): Program Representation, Transformation, and Linking - Nanjing University via Coursera
Computer Architecture - Indian Institute of Technology Madras via Swayam