Large Language Models: Foundation Models from the Ground Up
Offered By: Databricks via edX
Course Description
Overview
This course dives into the details of LLM foundation models. You will learn about the innovations that led to the proliferation of transformer-based architectures, from encoder models (BERT), to decoder models (GPT), to encoder-decoder models (T5). You will also learn about the recent breakthroughs that led to applications like ChatGPT, and you will gain an understanding of the latest advances that continue to improve LLM functionality, including FlashAttention, LoRA, ALiBi, and PEFT methods. The course concludes with an overview of multi-modal LLM developments that address NLP problems involving a combination of text, audio, and visual components.
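To give a flavor of the transformer fundamentals covered in Module 1, here is a minimal sketch of scaled dot-product attention, the core operation behind all of the architectures mentioned above. This is an illustrative NumPy implementation, not course material; the shapes and random inputs are assumptions for the example.

```python
import numpy as np

def softmax(x, axis=-1):
    # Subtract the row max for numerical stability before exponentiating.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V"""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)     # (num_queries, num_keys) similarity scores
    weights = softmax(scores, axis=-1)  # each row is a probability distribution
    return weights @ V                  # weighted average of the value vectors

# Toy example: 3 query tokens attending over 4 key/value tokens of dimension 8.
rng = np.random.default_rng(0)
Q = rng.normal(size=(3, 8))
K = rng.normal(size=(4, 8))
V = rng.normal(size=(4, 8))
out = scaled_dot_product_attention(Q, K, V)
print(out.shape)  # (3, 8): one output vector per query token
```

Techniques such as FlashAttention speed up exactly this computation by restructuring how the score matrix is materialized in memory, while ALiBi modifies the scores with position-dependent biases.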
Syllabus
- Module 1 - Transformer Architecture: Attention & Transformer Fundamentals
- Module 2 - Efficient Fine Tuning
- Module 3 - Deployment and Hardware Considerations
- Module 4 - Beyond Text-Based LLMs: Multi-Modality
Taught by
Sam Raymond, Chengyin Eng, Joseph Bradley and Matei Zaharia
Related Courses
- How to Do Stable Diffusion LORA Training by Using Web UI on Different Models (Software Engineering Courses - SE Courses via YouTube)
- MicroPython & WiFi (Kevin McAleer via YouTube)
- Building a Wireless Community Sensor Network with LoRa (Hackaday via YouTube)
- ComfyUI - Node Based Stable Diffusion UI (Olivio Sarikas via YouTube)
- AI Masterclass for Everyone - Stable Diffusion, ControlNet, Depth Map, LORA, and VR (Hugh Hou via YouTube)