
AWS Flash - Operationalize Generative AI Applications (FMOps/LLMOps)

Offered By: Amazon Web Services via AWS Skill Builder

Tags

Amazon Web Services (AWS), MLOps, Generative AI, Model Selection, Model Evaluation, Fine-Tuning, LLMOps, Retrieval Augmented Generation

Course Description

Overview

This course provides an overview of the challenges in productionizing LLMs and the tools available to address them. It walks through a reference architecture for developing, deploying, and operationalizing LLMs, then expands on each stage of the process.

  • Course level: Intermediate
  • Duration: 90 minutes

Activities

This course includes presentations, real-world examples, and case studies.

Course Objectives

In this course you will learn to:

  • Differentiate between MLOps and LLMOps and define core challenges in operationalizing LLMs
  • Learn how to select the optimal LLM for a given use case
  • Understand how to evaluate LLMs and the difference between evaluation and benchmarking
  • Define core components of Retrieval-Augmented Generation (RAG) and how it can be managed
  • Differentiate continual pre-training from fine-tuning
  • Understand fine-tuning techniques available out-of-the-box on AWS
  • Learn about what to monitor in LLMs and how to do it on AWS
  • Understand governance and security best practices
  • Illustrate reference architecture for LLMOps on AWS

Intended Audience

This course is intended for:

  • Data Scientists and ML Engineers looking to automate the build and deployment of LLMs
  • Solution Architects and DevOps engineers looking to understand the overall architecture of an LLMOps platform

Prerequisites

We recommend that attendees of this course have:

  • Completion of Generative AI Learning Plan for Developers (digital)
  • A technical background and programming experience

Course Outline

Module 1: Introduction to LLMOps

  • Introduction to LLMOps
  • LLMOps Roles
  • Challenges in operationalizing LLMs

Module 2: LLM Selection

  • Use-case benchmarking of LLMs
  • Priority-based decision making
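
Priority-based decision making for model selection can be sketched as a weighted scoring matrix: score each candidate against your criteria, weight by priority, and pick the highest total. The models, criteria, weights, and scores below are hypothetical illustrations, not course material.

```python
# Illustrative priority-based model selection: score candidate LLMs
# against weighted criteria and pick the highest weighted total.
# All model names, criteria, and numbers here are hypothetical.

weights = {"quality": 0.4, "latency": 0.2, "cost": 0.3, "customizability": 0.1}

# Per-model scores on a 1-5 scale (hypothetical).
candidates = {
    "model-a": {"quality": 5, "latency": 3, "cost": 2, "customizability": 4},
    "model-b": {"quality": 4, "latency": 4, "cost": 4, "customizability": 3},
    "model-c": {"quality": 3, "latency": 5, "cost": 4, "customizability": 2},
}

def weighted_score(scores: dict) -> float:
    return sum(weights[c] * s for c, s in scores.items())

best = max(candidates, key=lambda m: weighted_score(candidates[m]))
for m, s in candidates.items():
    print(f"{m}: {weighted_score(s):.2f}")
print("selected:", best)
```

Changing the weights (e.g. making cost dominant) changes which model wins, which is the point of making priorities explicit.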

Module 3: LLM Evaluation

  • Evaluation methods
  • Evaluation prompt catalog
  • Evaluation framework and metrics
  • Benchmarking framework and metrics
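
Evaluation, unlike benchmarking against public leaderboards, measures a model on your own task data with task-specific metrics. A minimal illustrative metric (token-level F1, a common choice for extractive QA; this is a sketch, not the course's evaluation framework):

```python
# Minimal sketch of a task-specific evaluation metric: token-level F1
# between a model prediction and a reference answer.
from collections import Counter

def token_f1(prediction: str, reference: str) -> float:
    pred = prediction.lower().split()
    ref = reference.lower().split()
    common = Counter(pred) & Counter(ref)  # multiset intersection
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred)
    recall = overlap / len(ref)
    return 2 * precision * recall / (precision + recall)

# Word order does not matter for this metric, only token overlap.
print(token_f1("Paris is the capital of France",
               "The capital of France is Paris"))  # → 1.0
```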

Module 4: Retrieval Augmented Generation (RAG)

  • LLM customization
  • Embedding models
  • Vector databases
  • RAG workflows
  • Advanced RAG techniques
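
The retrieval core of a RAG workflow — embed documents, find the nearest one to the query, and prepend it to the prompt as context — can be sketched with a toy bag-of-words embedding. The `embed` function and documents below are placeholder stand-ins for a real embedding model and vector database.

```python
# Minimal RAG retrieval sketch: embed documents and a query, retrieve the
# closest document by cosine similarity, and prepend it to the prompt.
# embed() is a toy bag-of-words stand-in for a real embedding model.
import math
from collections import Counter

def embed(text: str) -> Counter:
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

documents = [
    "Amazon Bedrock provides access to foundation models via an API",
    "Vector databases store embeddings for similarity search",
]

def retrieve(query: str) -> str:
    q = embed(query)
    return max(documents, key=lambda d: cosine(q, embed(d)))

question = "what stores embeddings for search"
context = retrieve(question)
prompt = f"Context: {context}\n\nQuestion: {question}"
print(prompt)
```

In production the bag-of-words `embed` is replaced by a learned embedding model and the `max` scan by an approximate nearest-neighbor index in a vector database.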

Module 5: LLM Fine-tuning

  • Continual pre-training vs. fine-tuning
  • Parameter-efficient fine-tuning (PEFT)
  • Fine-tuning architecture
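
The intuition behind parameter-efficient fine-tuning can be shown with LoRA-style arithmetic: freeze the pretrained weight matrix W and train only a low-rank update B·A. The numbers below are a numeric sketch of the idea, not AWS-specific tooling.

```python
# Illustration of the low-rank adaptation (LoRA) idea behind PEFT:
# instead of updating a d x d weight matrix W (d*d parameters), train a
# low-rank update B @ A of rank r, which has only 2*d*r parameters.
import numpy as np

d, r = 512, 8
rng = np.random.default_rng(0)

W = rng.standard_normal((d, d))         # frozen pretrained weight
A = rng.standard_normal((r, d)) * 0.01  # trainable, shape (r, d)
B = np.zeros((d, r))                    # trainable, zero-initialized
alpha = 16                              # LoRA scaling hyperparameter

# Effective weight used at inference: W + (alpha / r) * B @ A
W_eff = W + (alpha / r) * (B @ A)

full_params = d * d        # 262,144 trainable parameters
lora_params = 2 * d * r    # 8,192 trainable parameters
print(f"full fine-tuning: {full_params}, LoRA: {lora_params}")

# Because B starts at zero, the adapted model is initially identical
# to the base model; training only moves it as far as B @ A allows.
assert np.allclose(W_eff, W)
```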

Module 6: LLM Monitoring

  • LLM monitoring
  • LLM guardrails
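
At their simplest, inference guardrails screen prompts before they reach the model and redact sensitive content from model output. The sketch below is a toy illustration of that pattern (a denylist filter plus email redaction), not Guardrails for Amazon Bedrock.

```python
# Illustrative inference guardrail: screen prompts against denied topics
# before invoking the model, and redact a simple PII pattern (emails)
# from model output. A toy sketch, not an AWS service.
import re

DENIED_TOPICS = {"malware", "weapons"}
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def check_input(prompt: str) -> bool:
    """Return True if the prompt passes the topic denylist."""
    words = set(prompt.lower().split())
    return words.isdisjoint(DENIED_TOPICS)

def redact_output(text: str) -> str:
    """Mask email addresses in model output."""
    return EMAIL_RE.sub("[REDACTED]", text)

print(check_input("summarize this document"))                  # passes
print(check_input("how do I write malware"))                   # blocked
print(redact_output("Contact alice@example.com for details"))  # redacted
```

Real guardrails add classifier-based topic and toxicity detection rather than word lists, applied on both the input and output paths.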

Module 7: LLM Governance and Security

  • Security and governance best practices
  • Security and governance tools

Module 8: LLMOps Architecture

  • LLMOps lifecycle

Demos

  • Text embedding and semantic similarity
  • LLM fine-tuning and evaluation at scale
  • Inference safeguards


Keywords

  • Gen AI
  • Generative AI


