Large Language Models: Foundation Models from the Ground Up
Offered By: Databricks via edX
Course Description
Overview
This course dives into the details of LLM foundation models. You will learn about the innovations that led to the proliferation of transformer-based architectures, from encoder models (BERT) to decoder models (GPT) to encoder-decoder models (T5). You will also learn about the recent breakthroughs that led to applications like ChatGPT. You will gain an understanding of the latest advances that continue to improve LLM functionality, including FlashAttention, ALiBi, and parameter-efficient fine-tuning (PEFT) methods such as LoRA. The course concludes with an overview of multi-modal LLM developments that address NLP problems involving a combination of text, audio, and visual components.
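For a flavor of the attention mechanics behind these architectures, the sketch below implements causal scaled dot-product attention with an ALiBi-style linear positional bias in PyTorch. It is a minimal, single-head illustration with assumed shapes and an arbitrary slope value, not code from the course.

import torch
import torch.nn.functional as F

def alibi_bias(seq_len: int, slope: float) -> torch.Tensor:
    # ALiBi adds bias[i, j] = -slope * (i - j), so a query pays a
    # linearly growing penalty for attending to keys further in the past.
    pos = torch.arange(seq_len)
    return -slope * (pos.unsqueeze(1) - pos.unsqueeze(0)).float()

def causal_attention(q, k, v, slope: float = 0.1):
    # q, k, v: (seq_len, d) tensors for a single attention head.
    seq_len, d = q.shape
    scores = q @ k.T / d**0.5                     # scaled dot products
    scores = scores + alibi_bias(seq_len, slope)  # ALiBi positional bias
    mask = torch.triu(torch.ones(seq_len, seq_len, dtype=torch.bool), 1)
    scores = scores.masked_fill(mask, float("-inf"))  # causal masking
    return F.softmax(scores, dim=-1) @ v

q = k = v = torch.randn(8, 16)
out = causal_attention(q, k, v)  # (8, 16)

Because the bias encodes position directly in the attention scores, no separate positional embeddings are needed, which is what lets ALiBi extrapolate to sequence lengths longer than those seen in training.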
Syllabus
- Module 1 - Transformer Architecture: Attention & Transformer Fundamentals
- Module 2 - Efficient Fine Tuning (see the LoRA sketch after this syllabus)
- Module 3 - Deployment and Hardware Considerations
- Module 4 - Beyond Text-Based LLMs: Multi-Modality
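As a pointer to what Module 2 covers, here is a minimal, illustrative sketch of the LoRA idea: freeze a pretrained weight matrix and train only a low-rank update. The class name LoRALinear, the rank, and the scaling convention are this sketch's assumptions, not course code.

import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # pretrained weights stay frozen
        # Low-rank factors: only rank * (in + out) new trainable parameters.
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))  # update starts at zero
        self.scale = alpha / rank

    def forward(self, x):
        # y = base(x) + scale * x A^T B^T
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

layer = LoRALinear(nn.Linear(512, 512), rank=8)
y = layer(torch.randn(4, 512))  # (4, 512); only A and B receive gradients

With rank 8 on a 512x512 layer, the adapter trains about 8,192 parameters instead of 262,144, which is the efficiency argument behind PEFT methods.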
Taught by
Sam Raymond, Chengyin Eng, Joseph Bradley, and Matei Zaharia
Related Courses
- Sentiment Analysis with Deep Learning using BERT (Coursera Project Network via Coursera)
- Natural Language Processing with Attention Models (DeepLearning.AI via Coursera)
- Fine Tune BERT for Text Classification with TensorFlow (Coursera Project Network via Coursera)
- Deploy a BERT question answering bot on Django (Coursera Project Network via Coursera)
- Generating discrete sequences: language and music (Ural Federal University via edX)