
Large Language Models: Foundation Models from the Ground Up

Offered By: Databricks via edX

Tags

LLM (Large Language Model) Courses, BERT Courses, LoRA (Low-Rank Adaptation) Courses, T5 Courses, PEFT Courses

Course Description

Overview

This course dives into the details of LLM foundation models. You will learn about the innovations that led to the proliferation of transformer-based architectures, from encoder models (BERT) to decoder models (GPT) to encoder-decoder models (T5), as well as the recent breakthroughs that led to applications like ChatGPT. You will also learn about the latest advances that continue to improve LLM functionality, including FlashAttention, LoRA, ALiBi, and PEFT methods. The course concludes with an overview of multi-modal LLM developments that address NLP problems involving a combination of text, audio, and visual components.
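
To give a flavor of one technique the course covers, the short PyTorch sketch below illustrates the core idea behind LoRA: the pretrained weight matrix is frozen, and only a small low-rank correction is trained. This is an illustrative sketch, not material from the course; the class name, rank, and scaling hyperparameters are assumptions chosen for the example.

import torch
import torch.nn as nn


class LoRALinear(nn.Module):
    """Wrap a frozen nn.Linear with a trainable low-rank update (illustrative)."""

    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # the pretrained weights stay frozen
        # Low-rank factors: A projects down to rank r, B projects back up.
        # B starts at zero, so the wrapped layer initially matches the base layer.
        self.lora_A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, r))
        self.scaling = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen path plus the scaled low-rank correction; only A and B are trained.
        return self.base(x) + self.scaling * (x @ self.lora_A.T @ self.lora_B.T)


# Usage: adapting a 768-dimensional projection trains roughly 12k parameters
# (the two rank-8 factors) instead of the ~590k in the full weight matrix.
layer = LoRALinear(nn.Linear(768, 768), r=8)
out = layer(torch.randn(4, 768))
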


Syllabus

  • Module 1 - Transformer Architecture: Attention & Transformer Fundamentals

  • Module 2 - Efficient Fine-Tuning

  • Module 3 - Deployment and Hardware Considerations

  • Module 4 - Beyond Text-Based LLMs: Multi-Modality


Taught by

Sam Raymond, Chengyin Eng, Joseph Bradley and Matei Zaharia

Related Courses

Fine-Tuning LLMs with PEFT and LoRA
Sam Witteveen via YouTube
Pre-training and Fine-tuning of Code Generation Models
CNCF [Cloud Native Computing Foundation] via YouTube
MLOps: Fine-tuning Mistral 7B with PEFT, QLora, and MLFlow
The Machine Learning Engineer via YouTube
MLOps MLflow: Fine-Tuning Mistral 7B with PEFT and QLora (in Spanish)
The Machine Learning Engineer via YouTube
MLOps: PEFT Dialog Summarization with Flan T5 Using LoRA
The Machine Learning Engineer via YouTube