Open Source LLMOps
Offered By: Pragmatic AI Labs via edX
Course Description
Overview
Experience Open Source Large Language Models (LLMs)
- Master cutting-edge LLM architectures like Transformers through hands-on labs
- Fine-tune models on your own data, scaling training jobs across clouds with SkyPilot
- Deploy efficiently with model servers like LoRAX and vLLM
Explore the Open Source LLM Ecosystem:
- Gain in-depth understanding of how LLMs work under the hood
- Run pre-trained models like Code Llama, Mistral & Stable Diffusion
- Discover advanced architectures like Sparse Expert Models
- Launch cloud GPU instances for accelerated compute
Guided LLM Project:
- Fine-tune LLaMA, Mistral or other LLMs on your custom dataset
- Leverage SkyPilot to scale training across cloud providers
- Containerize your fine-tuned model for production deployment
- Serve models efficiently with LoRAX, vLLM, and other open model servers (a vLLM sketch appears after this overview)
Build powerful AI solutions leveraging state-of-the-art open source language models, and gain practical LLMOps skills through code-first learning.
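As a small preview of the serving workflow, the sketch below runs offline batch inference with vLLM. The model checkpoint and sampling settings are illustrative assumptions, not course requirements.

```python
# Minimal vLLM sketch: load an open model and generate completions locally.
# The model ID and sampling values below are illustrative assumptions.
from vllm import LLM, SamplingParams

llm = LLM(model="mistralai/Mistral-7B-Instruct-v0.2")  # assumed example checkpoint
params = SamplingParams(temperature=0.7, max_tokens=128)

prompts = ["Summarize what LLMOps means in one sentence."]
outputs = llm.generate(prompts, params)

for output in outputs:
    print(output.outputs[0].text)
```

For production-style serving, the same model can instead be exposed over vLLM's OpenAI-compatible HTTP server, which pairs naturally with the deployment topics above.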
Syllabus
Week 1: Getting Started with Open Source Ecosystem
Introduction to popular open source natural language processing models and their capabilities
Accessing pre-trained NLP models using libraries like Hugging Face Transformers (a short sketch follows this list)
Using large language models for synthetic data augmentation to enhance datasets
Building real-world NLP solutions using open source tools in Python and Rust
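Week 1 centers on this kind of model access; below is a minimal sketch using the Hugging Face Transformers pipeline API. The checkpoint name is an illustrative assumption.

```python
# Load a pre-trained sentiment model with the Hugging Face pipeline API
# and run it on a couple of example sentences.
from transformers import pipeline

# The checkpoint is an assumed example; any compatible model ID works.
classifier = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)

print(classifier([
    "Open source LLMs are moving fast.",
    "Dependency conflicts ruined my afternoon.",
]))
```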
Week 2: Using Local LLMs, from Llamafile to Whisper.cpp
Key components of Llamafile for packaging language models into single portable executables
Running local language models from Llamafile on your own devices
Automating speech recognition workflows with Whisper.cpp (sketched after this list)
Integrating Whisper.cpp into GenAI building blocks and applications
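To illustrate the Whisper.cpp automation step, here is a hedged sketch that shells out to a locally built whisper.cpp binary from Python. The binary, model, and audio paths are assumptions about a typical local build, not paths provided by the course.

```python
# Transcribe an audio file by shelling out to a local whisper.cpp build.
# The paths to the binary, model, and audio file are assumptions about
# your environment; adjust them to match your local setup.
import subprocess
from pathlib import Path

WHISPER_BIN = Path("./whisper.cpp/main")          # assumed build output
MODEL_PATH = Path("./models/ggml-base.en.bin")    # assumed model file
AUDIO_PATH = Path("./samples/meeting.wav")        # assumed 16 kHz WAV input

result = subprocess.run(
    [str(WHISPER_BIN), "-m", str(MODEL_PATH), "-f", str(AUDIO_PATH), "--no-timestamps"],
    capture_output=True,
    text=True,
    check=True,
)
print(result.stdout)  # the transcript is written to standard output
```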
Week 3: Applied Projects
Using language models in the browser with Transformers.js and ONNX
Exporting models to the ONNX format for enhanced portability (see the export sketch after this list)
Developing portable command-line interfaces with the Cosmopolitan project
Building a phrase generator application as a native binary using Cosmopolitan
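As an illustration of the export step, the sketch below converts a pre-trained Hugging Face classifier to ONNX with torch.onnx.export. The checkpoint and opset version are illustrative assumptions.

```python
# Export a pre-trained Hugging Face classifier to ONNX for portable inference.
# The checkpoint and opset version are illustrative assumptions.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "distilbert-base-uncased-finetuned-sst-2-english"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.config.return_dict = False  # export plain tuple outputs instead of a dict
model.eval()

dummy = tokenizer("ONNX export smoke test", return_tensors="pt")

torch.onnx.export(
    model,
    (dummy["input_ids"], dummy["attention_mask"]),
    "model.onnx",
    input_names=["input_ids", "attention_mask"],
    output_names=["logits"],
    dynamic_axes={
        "input_ids": {0: "batch", 1: "sequence"},
        "attention_mask": {0: "batch", 1: "sequence"},
        "logits": {0: "batch"},
    },
    opset_version=14,
)
```

The resulting model.onnx file can then be loaded with ONNX Runtime in Python, or with Transformers.js in the browser.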
Week 4: Recap and Final Challenges
Connecting to local language models with APIs using Python (a sketch follows this list)
Retrieval augmented generation using local LLMs
Hands-on labs for GPU-accelerated MLOps workflows
Final project to build an interactive Llamafile sandbox
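To illustrate the local API work, here is a minimal sketch that talks to a running Llamafile (or any llama.cpp-style server) through its OpenAI-compatible endpoint using the openai Python client. The port, base URL, and model name are assumptions about a default local setup.

```python
# Query a local Llamafile server through its OpenAI-compatible API.
# The base URL, port, and model name are assumptions about a default
# local setup (a llamafile typically listens on http://localhost:8080).
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed")

response = client.chat.completions.create(
    model="local-model",  # the server answers with whichever model it packages
    messages=[
        {"role": "system", "content": "You are a concise assistant."},
        {"role": "user", "content": "Explain retrieval augmented generation in two sentences."},
    ],
)
print(response.choices[0].message.content)
```

The same client is a natural starting point for retrieval augmented generation: retrieve relevant passages from your own documents, prepend them to the user message, and let the local model answer from that context.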
By the end of this course, learners will have gained practical experience leveraging state-of-the-art open source language models to build AI applications. They will be able to deploy solutions on their own devices as well as integrate models into efficient MLOps pipelines.
Taught by
Alfredo Deza and Noah Gift
Related Courses
- Models and Platforms for Generative AI (IBM via edX)
- Natural Language Processing with Attention Models (DeepLearning.AI via Coursera)
- Circuitos con SPICE: Sistemas trifásicos y análisis avanzado (Pontificia Universidad Católica de Chile via Coursera)
- Linear Circuits (Georgia Institute of Technology via Coursera)
- Intro to AI Transformers (Codecademy)