A Chiplet-Based Generative Inference Architecture with Block Floating Point Datatypes

Offered By: Scalable Parallel Computing Lab, SPCL @ ETH Zurich via YouTube

Tags

Transformer Models, PyTorch, Quantization, Deep Reinforcement Learning, Chiplets

Course Description

Overview

Explore a conference talk on a chiplet-based generative inference architecture and block floating point datatypes for AI acceleration. Delve into modular, spatial CGRA-like architectures optimized for generative inference, and learn about deep RL-based mappers in compilers for spatial and temporal architectures. Discover weight and activation quantization techniques using block floating point formats, building upon GPTQ and SmoothQuant, and their implementation in PyTorch. Examine an extension to EL-attention that reduces KV cache size and bandwidth. Gain insights from speaker Sudeep Bhoja in this SPCL_Bcast #38 recording from ETH Zurich's Scalable Parallel Computing Lab, featuring an in-depth presentation followed by announcements and a Q&A session.
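
For context on the quantization topic: in a block floating point format, a small block of values shares a single exponent, and each value keeps only a short signed mantissa, which cuts storage and multiplier cost relative to full floating point. Below is a minimal PyTorch sketch of such block-wise fake quantization; the function name, block size, and mantissa width are illustrative assumptions, not the speaker's implementation.

```python
import torch

def bfp_quantize(x: torch.Tensor, block_size: int = 16, mantissa_bits: int = 8) -> torch.Tensor:
    """Fake-quantize a tensor to block floating point: each block of
    block_size values shares one exponent, and each value keeps only
    a mantissa_bits-bit signed mantissa. Illustrative sketch only."""
    orig_shape = x.shape
    flat = x.flatten()
    # Pad so the tensor divides evenly into blocks.
    pad = (-flat.numel()) % block_size
    blocks = torch.nn.functional.pad(flat, (0, pad)).view(-1, block_size)
    # Shared exponent per block: exponent of the largest magnitude.
    max_abs = blocks.abs().amax(dim=1, keepdim=True).clamp(min=1e-30)
    shared_exp = torch.floor(torch.log2(max_abs))
    # One quantization step per block, derived from the shared
    # exponent and the mantissa width.
    step = 2.0 ** (shared_exp - (mantissa_bits - 2))
    qmax = 2 ** (mantissa_bits - 1) - 1
    q = torch.round(blocks / step).clamp(-qmax - 1, qmax)
    # Dequantize back to float to simulate the BFP round trip.
    return (q * step).flatten()[: flat.numel()].view(orig_shape)
```

In practice such a routine would be applied block-wise along rows of weight matrices (and, with calibrated scaling as in SmoothQuant, to activations), with the shared exponents stored once per block.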

Syllabus

Introduction
Talk
Announcements
Q&A Session


Taught by

Scalable Parallel Computing Lab, SPCL @ ETH Zurich

Related Courses

Digital Signal Processing
École Polytechnique Fédérale de Lausanne via Coursera
Principles of Communication Systems - I
Indian Institute of Technology Kanpur via Swayam
Digital Signal Processing 2: Filtering
École Polytechnique Fédérale de Lausanne via Coursera
Digital Signal Processing 3: Analog vs Digital
École Polytechnique Fédérale de Lausanne via Coursera
Digital Signal Processing 4: Applications
École Polytechnique Fédérale de Lausanne via Coursera