
Stanford Seminar - Enabling NLP, Machine Learning, and Few-Shot Learning Using Associative Processing

Offered By: Stanford University via YouTube

Tags

Machine Learning Courses
Natural Language Processing (NLP) Courses
Few-shot Learning Courses

Course Description

Overview

This presentation details a fully programmable, associative, content-based, compute-in-memory architecture that changes the model of computing from serial data processing, in which data is shuttled back and forth between the processor and memory, to massively parallel data processing, computing, and searching directly in place.
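
As a rough illustration of this shift, the following sketch simulates both models in NumPy (an illustration only, not GSI's hardware or API; the array sizes and function names are assumptions): a serial search moves each stored row to the "processor" and compares one at a time, while a content-based search compares the query against every row of memory in a single parallel operation.

    import numpy as np

    # Simulated "memory": each row is a stored 64-bit pattern.
    memory = np.random.randint(0, 2, size=(100_000, 64), dtype=np.uint8)
    query = memory[42].copy()  # the pattern we want to locate

    def serial_search(mem, q):
        """Von Neumann style: fetch each row to the processor and compare."""
        for i in range(mem.shape[0]):
            if np.array_equal(mem[i], q):
                return i
        return -1

    def associative_search(mem, q):
        """Associative style: one parallel compare across all rows in place."""
        matches = np.all(mem == q, axis=1)  # every row compared at once
        return np.flatnonzero(matches)

    print(serial_search(memory, query))       # row-by-row data movement
    print(associative_search(memory, query))  # single content-based lookup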

This associative processing unit (APU) can be used in many machine learning applications: one-shot/few-shot learning, convolutional neural networks, recommender systems, and data mining tasks such as prediction, classification, and clustering.
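
To see why a massively parallel compare primitive fits few-shot learning, here is a minimal sketch under assumed inputs (the embeddings, labels, and function names are illustrative, not GSI's API): classifying a query reduces to a nearest-neighbor distance computation over a handful of labeled support examples, exactly the kind of all-rows-at-once operation an associative memory evaluates in place.

    import numpy as np

    rng = np.random.default_rng(1)

    # Assumption: a pretrained encoder has already mapped each labeled
    # support example to a fixed-size embedding stored row-wise in memory.
    support_embeddings = rng.standard_normal((5, 64)).astype(np.float32)
    support_labels = np.array(["cat", "dog", "bird", "car", "tree"])

    def classify(query_embedding):
        """Label a query by its nearest labeled support example.

        Distances to every stored row are computed in one parallel
        pass -- the pattern an APU accelerates in place.
        """
        dists = np.linalg.norm(support_embeddings - query_embedding, axis=1)
        return support_labels[np.argmin(dists)]

    print(classify(rng.standard_normal(64).astype(np.float32)))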

Additionally, the architecture is well suited to processing large corpora and can be applied to question answering (QA) and other NLP tasks such as language translation. It can embed long documents, compute any type of memory network in place, and answer complex questions in O(1) time.
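
To make the O(1) claim concrete, consider the following sketch (again a NumPy simulation with an assumed pre-trained encoder, not the APU programming model): documents are embedded once and stored row-wise, and answering a question is a single parallel similarity pass whose number of sequential steps does not grow with the size of the corpus.

    import numpy as np

    rng = np.random.default_rng(0)

    # Assumption: an encoder has embedded each document into a unit-norm
    # vector; the vectors live row-wise in the associative memory.
    doc_embeddings = rng.standard_normal((100_000, 128)).astype(np.float32)
    doc_embeddings /= np.linalg.norm(doc_embeddings, axis=1, keepdims=True)

    def answer(question_embedding):
        """Return the index of the best-matching document.

        All dot products are evaluated in one parallel pass over memory,
        so the sequential step count is constant in the corpus size --
        the sense in which the talk claims O(1) question answering.
        """
        scores = doc_embeddings @ question_embedding  # parallel similarity
        return int(np.argmax(scores))

    q = rng.standard_normal(128).astype(np.float32)
    q /= np.linalg.norm(q)
    print(answer(q))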

About the Speaker: Dr. Avidan Akerib is VP of GSI Technology's Associative Computing Business Unit. He has over 30 years of experience in parallel computing and in-place associative computing, and holds more than 25 granted patents related to parallel and in-memory associative computing. Dr. Akerib received his PhD in Applied Mathematics and Computer Science from the Weizmann Institute of Science, Israel. His specialties are computational memory, associative processing, parallel algorithms, and machine learning.

For more information about this seminar and its speaker, you can visit http://ee380.stanford.edu/Abstracts/1...


Syllabus

Introduction
The Challenge in AI Computing (Matrix Multiplication Is Not Enough!)
Von Neumann Architecture
Changing the Rules of the Game!
APU: Associative Processing Unit
How Computers Work Today
Truth Table Example
CAM / Associative Search
TCAM Search by Standard Memory Cells
Neighborhood Computing
Search & Count
CPU vs. GPU vs. FPGA vs. APU
Communication Between Sections
Section Computing to Improve Performance
APU Chip Layout
APU Layout vs. GPU Layout
K-NN Use Case in an APU
K-MINS: The Algorithm
Dense (1×N) Vector by Sparse N×M Matrix
Two N×N Sparse Matrix Multiplication
Taylor Series
1M SoftMax Performance
Examples
Example of Associative Attention Computing
GSI Associative Solution for End to End
Low-Shot: Train the Network on Distance
Programming Model
PCIe Development Boards
Computing in Non-Volatile Cells
Solutions for Future Data Centers
Summary


Taught by

Stanford Online

Related Courses

Natural Language Processing
Columbia University via Coursera
Natural Language Processing
Stanford University via Coursera
Introduction to Natural Language Processing
University of Michigan via Coursera
moocTLH: Nuevos retos en las tecnologías del lenguaje humano (New Challenges in Human Language Technologies)
Universidad de Alicante via Miríadax
Natural Language Processing
Indian Institute of Technology, Kharagpur via Swayam