LLM Efficient Inference in CPUs and Intel GPUs - Intel Neural Speed

Offered By: The Machine Learning Engineer via YouTube

Tags

LLM (Large Language Model) Courses
Data Science Courses
Machine Learning Courses
Neural Networks Courses
Model Optimization Courses
Hardware Acceleration Courses

Course Description

Overview

Explore efficient inference techniques for Large Language Models (LLMs) on CPUs and Intel GPUs using Intel Neural Speed in this 30-minute video. Examine the performance capabilities of Intel Extension for Transformers and gain practical insights through the provided Jupyter notebooks. Learn how to optimize LLM inference for data science and machine learning applications by leveraging Intel's hardware-specific solutions. Accompanying resources, including a Medium article and GitHub repositories, are available to deepen your understanding and help you implement the techniques discussed.

Syllabus

LLM Efficient Inference In CPUs and Intel GPUs. Intel Neural Speed #datascience #machinelearning


Taught by

The Machine Learning Engineer

Related Courses

Neural Networks for Machine Learning
University of Toronto via Coursera
Good Brain, Bad Brain: Basics
University of Birmingham via FutureLearn
Statistical Learning with R
Stanford University via edX
Machine Learning 1—Supervised Learning
Brown University via Udacity
Fundamentals of Neuroscience, Part 2: Neurons and Networks
Harvard University via edX