LLMOps: LLMs Memory and Compute Optimizations
Offered By: The Machine Learning Engineer via YouTube
Course Description
Overview
Explore FlashAttention and Grouped-Query Attention (GQA), two techniques that improve the efficiency of self-attention layers, and learn how Fully Sharded Data Parallel (FSDP) and Distributed Data Parallel (DDP) are used to train and fine-tune Large Language Models (LLMs) in this 24-minute tutorial. Gain practical insight into memory and compute optimizations for LLMs, with access to a comprehensive PowerPoint presentation and a hands-on Jupyter notebook for implementation.
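To make the GQA idea mentioned above concrete, here is a minimal NumPy sketch of grouped-query attention: several query heads share one key/value head, which shrinks the KV cache relative to standard multi-head attention. The function name, shapes, and head counts below are illustrative assumptions, not code from the course materials.

```python
import numpy as np

def grouped_query_attention(q, k, v):
    """Grouped-query attention sketch (illustrative, not the course's code).

    q: (n_q_heads, seq, d)  -- one query projection per query head
    k, v: (n_kv_heads, seq, d) -- fewer KV heads; n_q_heads % n_kv_heads == 0
    Each group of (n_q_heads // n_kv_heads) query heads shares one KV head.
    """
    n_q_heads, _, d = q.shape
    n_kv_heads = k.shape[0]
    group = n_q_heads // n_kv_heads
    # Broadcast each KV head across its group of query heads.
    k_rep = np.repeat(k, group, axis=0)   # (n_q_heads, seq, d)
    v_rep = np.repeat(v, group, axis=0)
    # Scaled dot-product attention per head.
    scores = q @ k_rep.transpose(0, 2, 1) / np.sqrt(d)   # (n_q_heads, seq, seq)
    scores -= scores.max(axis=-1, keepdims=True)          # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)        # softmax over keys
    return weights @ v_rep                                # (n_q_heads, seq, d)

# Hypothetical sizes: 8 query heads share 2 KV heads (groups of 4),
# so the KV cache is 4x smaller than with 8 full KV heads.
rng = np.random.default_rng(0)
out = grouped_query_attention(
    rng.standard_normal((8, 4, 16)),
    rng.standard_normal((2, 4, 16)),
    rng.standard_normal((2, 4, 16)),
)
```

With `n_kv_heads == n_q_heads` this reduces to ordinary multi-head attention; with `n_kv_heads == 1` it becomes multi-query attention, so GQA interpolates between the two.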
Syllabus
LLMOps: LLMs Memory and Compute Optimizations
Taught by
The Machine Learning Engineer
Related Courses
Large Language Models: Application through Production (Databricks via edX)
LLMOps - LLM Bootcamp (The Full Stack via YouTube)
MLOps: Why DevOps Solutions Fall Short in the Machine Learning World (Linux Foundation via YouTube)
Quick Wins Across the Enterprise with Responsible AI (Microsoft via YouTube)
End-to-End AI App Development: Prompt Engineering to LLMOps (Microsoft via YouTube)