Large Language Models
Offered By: Databricks via edX
Course Description
Overview
As Large Language Model (LLM) applications disrupt countless industries, breakthroughs such as ChatGPT are becoming household terms. The demand for LLM-based applications is skyrocketing, and this program will provide you with the skills and knowledge needed to be at the forefront of this exciting field. You will learn how to build your own production-ready LLM-based applications, leveraging the latest and most popular natural language processing (NLP) frameworks.
Through dynamic lectures, demos, and hands-on labs taught by industry leaders and renowned researchers (such as Matei Zaharia, co-founder and chief technologist at Databricks and computer science professor at UC Berkeley), students will learn how to develop and productionize LLM applications. Brought to you by the big data company that created the popular open-source projects Apache Spark, MLflow, Delta, and Dolly, the instructors bring a unique perspective from working with Fortune 500 companies, startups, and academia.
The first course takes you through a practical tour of how to get started quickly with LLMs for common applications, including fine-tuning open-source LLMs to build your own custom chat model. You will also learn how to apply LLMOps best practices for deploying models at scale, as well as evaluate the efficacy and bias of LLMs.
The second course dives into the details of language foundation models. You will learn the innovations that led from LSTMs to Transformers, including BERT, GPT, and T5, and the key breakthroughs that led to the model powering ChatGPT.
By the end of the program, you will have built your own end-to-end LLM workflows that are ready for production. Upon completion, you will be well-equipped to pursue careers as LLM developers, data scientists, and engineers, and to build innovative solutions to complex natural language processing problems.
If you’d like to audit the program, you’ll need to navigate to the individual course page to audit each course.
Syllabus
Course 1: Large Language Models: Application through Production
This course is aimed at developers, data scientists, and engineers looking to build LLM-centric applications with the latest and most popular frameworks. By the end of this course, you will have built an end-to-end LLM workflow that is ready for production!
Course 2: Large Language Models: Foundation Models from the Ground Up
This course dives into the details of foundation models in large language models (LLMs). You will learn about the innovations that led to the proliferation of transformer-based models, including BERT, GPT, and T5, and the key breakthroughs that led to applications such as ChatGPT. Additionally, you will gain an understanding of the latest advances that continue to improve LLM functionality, including Flash Attention, LoRA, ALiBi, and other PEFT methods.
Courses
- This course is aimed at developers, data scientists, and engineers looking to build LLM-centric applications with the latest and most popular frameworks. You will use Hugging Face to solve natural language processing (NLP) problems, leverage LangChain to perform complex, multi-stage tasks, and dive deep into prompt engineering. You will use data embeddings and vector databases to augment LLM pipelines. Additionally, you will fine-tune LLMs with domain-specific data to improve performance and cost, and weigh the benefits and drawbacks of proprietary models. You will assess the societal, safety, and ethical considerations of using LLMs. Finally, you will learn how to deploy your models at scale, leveraging LLMOps best practices.
By the end of this course, you will have built an end-to-end LLM workflow that is ready for production!
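The retrieval pattern described above, using embeddings and a vector database to augment an LLM prompt, can be sketched in plain Python. The toy `embed` function, the three example documents, and the in-memory `index` below are hypothetical stand-ins for a real embedding model and vector store, not anything from the course itself:

```python
import math
import re
from collections import Counter

def embed(text):
    """Toy embedding: a bag-of-words count vector (stand-in for a real model)."""
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[k] * b[k] for k in a.keys() & b.keys())
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

# A miniature "vector database": documents stored alongside their embeddings.
docs = [
    "MLflow tracks machine learning experiments and models",
    "Apache Spark processes large datasets in parallel",
    "Delta Lake adds ACID transactions to data lakes",
]
index = [(doc, embed(doc)) for doc in docs]

def retrieve(query, k=1):
    """Return the k documents most similar to the query embedding."""
    q = embed(query)
    ranked = sorted(index, key=lambda pair: cosine(q, pair[1]), reverse=True)
    return [doc for doc, _ in ranked[:k]]

# Augment the LLM prompt with retrieved context before generation.
question = "how do I track ML experiments?"
context = retrieve(question)[0]
prompt = f"Answer using this context:\n{context}\n\nQuestion: {question}"
print(context)
```

In a production pipeline the count vectors would be replaced by dense embeddings from a model, and the sorted scan by an approximate nearest-neighbor index, but the prompt-augmentation step stays the same.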
- This course dives into the details of LLM foundation models. You will learn about the innovations that led to the proliferation of transformer-based architectures, from encoder models (BERT), to decoder models (GPT), to encoder-decoder models (T5). You will also learn about the recent breakthroughs that led to applications like ChatGPT, and gain an understanding of the latest advances that continue to improve LLM functionality, including Flash Attention, LoRA, ALiBi, and other PEFT methods. The course concludes with an overview of multi-modal LLM developments that address NLP problems involving a combination of text, audio, and visual components.
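As a taste of the transformer internals covered above, the core operation shared by BERT, GPT, and T5 is scaled dot-product attention, which fits in a few lines of plain Python. The tiny 2-dimensional query/key/value vectors below are purely illustrative, not taken from any real model:

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention(queries, keys, values):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d)) V."""
    d = len(keys[0])  # key dimension used for scaling
    outputs = []
    for q in queries:
        # Similarity of this query to every key, scaled by sqrt(d).
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        weights = softmax(scores)
        # Output is the weight-averaged mixture of the value vectors.
        outputs.append([sum(w * v[i] for w, v in zip(weights, values))
                        for i in range(len(values[0]))])
    return outputs

# One query attending over two key/value pairs (toy 2-d vectors).
Q = [[1.0, 0.0]]
K = [[1.0, 0.0], [0.0, 1.0]]
V = [[10.0, 0.0], [0.0, 10.0]]
print(attention(Q, K, V))
```

Because the query aligns with the first key, the output mixes the value vectors with more weight on the first one; advances such as Flash Attention change how this computation is scheduled on hardware, not the math itself.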
Taught by
Chengyin Eng, Joseph Bradley, Matei Zaharia and Sam Raymond
Related Courses
- Large Language Models: Foundation Models from the Ground Up (Databricks via edX)
- Improving Accuracy of LLM Applications (DeepLearning.AI via Coursera)
- Fine-Tuning LLM Models - Generative AI Course (freeCodeCamp)
- LLaMa for Developers (LinkedIn Learning)
- Stable Diffusion: Tips, Tricks, and Techniques (LinkedIn Learning)