Open Source LLMOps Solutions
Offered By: Duke University via Coursera
Course Description
Overview
Learn the fundamentals of large language models (LLMs) and put them into practice by deploying your own solutions based on open source models. By the end of this course, you will be able to leverage state-of-the-art open source LLMs to create AI applications using a code-first approach.
You will start by gaining an in-depth understanding of how LLMs work, including model architectures like transformers and advancements like sparse expert models. Hands-on labs will walk you through launching cloud GPU instances and running pre-trained models like Code Llama, Mistral, and Stable Diffusion.
The highlight of the course is a guided project where you will fine-tune a model like LLaMA or Mistral on a dataset of your choice. You will use SkyPilot to easily scale model training on low-cost spot instances across cloud providers. Finally, you will containerize your model for efficient deployment using model servers like LoRAX and vLLM.
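As a rough sketch of what a SkyPilot training launch looks like, here is a minimal task file; the script name, dataset path, model identifier, and GPU type are illustrative placeholders, not the course's actual lab files:

```yaml
# Minimal SkyPilot task sketch (placeholders throughout; adjust to your lab).
resources:
  accelerators: A100:1   # request one A100 GPU
  use_spot: true         # prefer low-cost spot instances

workdir: .               # sync the local project directory to the cluster

setup: |
  pip install -r requirements.txt

run: |
  python finetune.py --model mistralai/Mistral-7B-v0.1 --data data/train.jsonl
```

A task file like this is launched with `sky launch -c finetune task.yaml`, and SkyPilot picks a cloud and zone where the requested spot GPU is available.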
By the end of the course, you will have first-hand experience leveraging open source LLMs to build AI solutions. The skills you gain will enable you to further advance your career in AI.
Syllabus
- Getting Started with Open Source Ecosystem
- In this module, you will learn how to leverage pre-trained natural language processing models to build NLP applications. We will explore popular open source models like BERT. You will learn how to access these models using libraries like HuggingFace Transformers and use them for tasks like text classification, question answering, and text generation. A key skill will be using large language models to synthetically augment datasets. By feeding the model examples and extracting the text it generates, you can create more training data. Through hands-on exercises, you will build basic NLP pipelines in Python that use pre-trained models to perform tasks like sentiment analysis. By the end of the module, you'll have practical experience using state-of-the-art NLP techniques to create capable language applications.
- Using Local LLMs from llamafile to Whisper.cpp
- In this module, you will run language models locally, keeping your data private and avoiding network latency and per-token fees. You will work with the Mixtral model and llamafile.
- Applied Projects
- In this module, you will use models in the browser with Transformers.js and ONNX. You will gain experience porting models to the ONNX runtime and running them in the browser. You will also use the Cosmopolitan project to build a phrase generator that is easily portable across different systems.
- Recap and Final Challenges
- In this module, you will complete several external labs and hands-on examples that will make you comfortable running local LLMs, connecting to them from Python through their APIs, and building solutions with the Rust programming language.
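The synthetic data augmentation described in the first module can be sketched as follows. Here `generate` is a stand-in for a real text-generation call (e.g. a HuggingFace `pipeline("text-generation")`), stubbed with a fixed template so the sketch runs without downloading a model:

```python
# Sketch of synthetic dataset augmentation with an LLM.
# `generate` stands in for a real model call such as:
#   from transformers import pipeline
#   generator = pipeline("text-generation", model="gpt2")
# It is stubbed here so the example is self-contained.

def generate(prompt: str) -> str:
    # Placeholder for an LLM call; a real model would continue the prompt.
    return prompt + " The service was quick and the staff were friendly."

def augment(seed_examples, n_new=3):
    """Feed labeled seed examples to the model and harvest new ones."""
    augmented = []
    for text, label in seed_examples:
        prompt = f"Write a review similar to: {text!r}\nReview:"
        for _ in range(n_new):
            completion = generate(prompt)
            # Keep only the newly generated text, paired with the seed's label.
            new_text = completion[len(prompt):].strip()
            augmented.append((new_text, label))
    return augmented

seeds = [("Great food and fast delivery.", "positive")]
dataset = seeds + augment(seeds, n_new=2)
print(len(dataset))  # → 3 (one seed plus two synthetic examples)
```

With a real generation pipeline in place of the stub, the same loop yields varied continuations that inherit each seed example's label.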
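For the local-LLM modules, a running llamafile exposes an OpenAI-compatible HTTP server, by default at `http://localhost:8080`. A minimal Python client can be sketched with the standard library; the port and model name below are assumptions taken from llamafile's defaults, so check your server's startup output:

```python
import json
import urllib.request

# Default llamafile endpoint (assumption; verify against your running server).
API_URL = "http://localhost:8080/v1/chat/completions"

def build_request(prompt: str, model: str = "LLaMA_CPP") -> urllib.request.Request:
    """Build an OpenAI-style chat-completion request for a local llamafile."""
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode("utf-8")
    return urllib.request.Request(
        API_URL, data=body, headers={"Content-Type": "application/json"}
    )

req = build_request("Summarize what LLMOps means in one sentence.")
# With a llamafile running locally, send the request like this:
# with urllib.request.urlopen(req) as resp:
#     print(json.loads(resp.read())["choices"][0]["message"]["content"])
print(req.full_url)
```

Because the endpoint follows the OpenAI wire format, the same request shape works against other local servers covered in the course, such as vLLM.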
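The Applied Projects module's phrase generator is built with the Cosmopolitan toolchain to produce a single portable executable; the generator logic itself is simple and can be sketched in a few lines (word lists here are invented for illustration):

```python
import random

# Sketch of phrase-generator logic. The course lab packages the equivalent
# program with Cosmopolitan so one binary runs across operating systems;
# these word lists are illustrative, not the lab's.
ADJECTIVES = ["portable", "open", "robust", "minimal"]
NOUNS = ["model", "pipeline", "runtime", "binary"]

def make_phrase(rng: random.Random) -> str:
    """Combine one random adjective and one random noun into a phrase."""
    return f"{rng.choice(ADJECTIVES)} {rng.choice(NOUNS)}"

rng = random.Random(42)  # seeded for reproducible output
print(make_phrase(rng))
```

The interesting part of the lab is not this logic but the packaging: Cosmopolitan's "build once, run anywhere" output lets the same artifact run on Linux, macOS, and Windows.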
Taught by
Noah Gift and Alfredo Deza
Related Courses
- Biomolecular Modeling on GPU (Moscow Institute of Physics and Technology via Coursera)
- LLM Server (Pragmatic AI Labs via edX)
- AI Infrastructure and Operations Fundamentals (Nvidia via Coursera)
- Deep Learning - Computer Vision for Beginners Using PyTorch (Packt via Coursera)
- Empower Search with AI using Amazon OpenSearch Service (Amazon Web Services via AWS Skill Builder)