
Customizing and Evaluating LLMs Using Amazon SageMaker JumpStart

Offered By: Amazon Web Services via AWS Skill Builder

Tags

Machine Learning, Prompt Engineering, LangChain, Model Evaluation, Fine-Tuning, Retrieval Augmented Generation

Course Description

Overview


In this course, you will learn about customizing and evaluating large language models (LLMs) using Amazon SageMaker JumpStart, a machine learning (ML) hub with foundation models, built-in algorithms, and prebuilt ML solutions that you can deploy with a few clicks. You will learn alternatives to fine-tuning, including the foundations of prompt engineering and Retrieval Augmented Generation (RAG). You will also learn how to fine-tune, deploy, and evaluate fine-tuned models available on SageMaker JumpStart.
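While the course emphasizes console-based deployment, JumpStart models can also be deployed programmatically. A minimal sketch using the SageMaker Python SDK's JumpStartModel class — the model ID, instance type, and payload schema here are illustrative examples, and actually running deploy() requires an AWS account and incurs cost:

```python
# Illustrative sketch of programmatic JumpStart deployment with the SageMaker
# Python SDK. The model_id, instance type, and payload schema are example
# values; calling deploy() requires AWS credentials and incurs cost.

def build_payload(prompt: str, max_new_tokens: int = 128) -> dict:
    """Falcon-style text-generation request body (schemas vary by model)."""
    return {"inputs": prompt, "parameters": {"max_new_tokens": max_new_tokens}}

def deploy_and_query(prompt: str):
    # Imported lazily so the sketch is readable without the sagemaker package.
    from sagemaker.jumpstart.model import JumpStartModel

    model = JumpStartModel(model_id="huggingface-llm-falcon-7b-instruct-bf16")
    predictor = model.deploy(initial_instance_count=1, instance_type="ml.g5.2xlarge")
    return predictor.predict(build_payload(prompt))

if __name__ == "__main__":
    print(deploy_and_query("What is Amazon SageMaker JumpStart?"))
```

The endpoint left running by deploy() is billed per hour, so demos typically end by calling predictor.delete_endpoint().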


Using your own AWS account and the notebooks provided, you can practice building RAG applications using the Amazon SageMaker-LangChain integration. You can also fine-tune a Llama 3 model and evaluate it using standard evaluation metrics. You can practice one aspect of responsible AI with a notebook that addresses prompt stereotyping. Alternatively, you can watch video demonstrations of running the notebooks.
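The RAG workflow those notebooks implement reduces to three steps: retrieve relevant context, augment the prompt with it, and send the augmented prompt to the model. A framework-free sketch of the pattern — the keyword-overlap retriever and prompt template are illustrative stand-ins for the vector store and LLM endpoint used in the actual notebooks:

```python
# Minimal RAG pattern: retrieve -> augment -> generate.
# Keyword-overlap retrieval and the prompt template are illustrative stand-ins
# for the vector store and SageMaker LLM endpoint used in the real notebooks.

DOCS = [
    "Amazon SageMaker JumpStart is an ML hub with deployable foundation models.",
    "Retrieval Augmented Generation grounds LLM answers in retrieved documents.",
    "Parameter Efficient Fine-Tuning updates only a small subset of weights.",
]

def retrieve(question: str, docs: list[str]) -> str:
    """Return the document sharing the most words with the question."""
    q_words = set(question.lower().split())
    return max(docs, key=lambda d: len(q_words & set(d.lower().split())))

def build_prompt(question: str, context: str) -> str:
    """Augment the user question with the retrieved context."""
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

question = "What does Retrieval Augmented Generation do?"
context = retrieve(question, DOCS)
prompt = build_prompt(question, context)
print(prompt)  # the augmented prompt an LLM endpoint would receive
```

In the course notebooks, the retrieval step is handled by a LangChain retriever over embedded documents, and the generation step by a model endpoint deployed from SageMaker JumpStart.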

  • Course level: Advanced
  • Duration: 4 hours

Activities

This course includes presentations, demonstrations, and assessments.


Course objectives

In this course, you will do the following:

  • Describe the different techniques to customize LLMs.
  • Describe when to use prompt engineering and Retrieval Augmented Generation (RAG) as customization options.
  • Demonstrate the use of Amazon SageMaker-LangChain integration to build a RAG application using a Falcon model.
  • Describe the use of domain adaptation and instruction fine-tuning.
  • Demonstrate how to fine-tune and deploy a model from the SageMaker JumpStart ML hub.
  • Demonstrate the use of the SageMaker Python SDK to fine-tune LLMs using Parameter Efficient Fine-Tuning (PEFT).
  • Evaluate foundation models by using the SageMaker JumpStart console and fmeval library.
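On the PEFT objective above: methods such as LoRA freeze the base weight matrix W and train only a low-rank update BA, which is what makes them parameter efficient. A back-of-the-envelope sketch of the savings, with illustrative dimensions:

```python
# Why PEFT/LoRA is "parameter efficient": for a frozen d_out x d_in weight
# matrix W, LoRA trains only B (d_out x r) and A (r x d_in) with small rank r,
# and the effective weight becomes W + B @ A. Dimensions are illustrative.

d_out, d_in, r = 4096, 4096, 8

full_params = d_out * d_in          # parameters updated by full fine-tuning
lora_params = d_out * r + r * d_in  # parameters updated by LoRA

print(f"full fine-tuning: {full_params:,} trainable parameters")
print(f"LoRA (rank {r}):  {lora_params:,} trainable parameters")
print(f"reduction:        {full_params // lora_params}x")
```

With these dimensions the low-rank adapter trains 256x fewer parameters per layer, which is why PEFT fine-tuning fits on far smaller instances than full fine-tuning.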

Intended audience

This course is intended for the following job roles:

  • Data scientists
  • Machine learning engineers

Prerequisites

We recommend that attendees of this course have the following:

  • More than 1 year of experience with natural language processing (NLP)
  • More than 1 year of experience training and tuning language models
  • Intermediate-level proficiency in Python programming
  • Completion of AWS Technical Essentials
  • Completion of Amazon SageMaker JumpStart Foundations

Course outline

  • Module 1: Introduction to Customizing LLMs
    • Customizing LLMs
    • Choosing customization methods
  • Module 2: Prompt Engineering and RAG for Customizing LLMs
    • Using prompt engineering
    • Using Retrieval Augmented Generation (RAG)
    • Using advanced RAG patterns
  • Demo 1: Create a RAG application using Amazon SageMaker-LangChain integration and a Falcon 7B model from SageMaker JumpStart
  • Module 3: Fine-tuning and Deploying Foundation Models
    • Customizing foundation models using fine-tuning
    • Using the SageMaker JumpStart console to fine-tune and deploy an LLM
  • Demo 2: Fine-tune a Llama 3 model available on SageMaker JumpStart using Amazon SageMaker Python SDK
  • Module 4: Evaluating Foundation Models
    • Discussing model evaluation metrics
    • Evaluating foundation models using the Amazon SageMaker JumpStart console
  • Demo 3: Evaluate prompt stereotyping of a Falcon-7B model using the fmeval library
  • Module 5: Resources
    • Learn More
    • Contact Us
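For context on Demo 3: the fmeval prompt stereotyping evaluation presents the model with paired sentences — one stereotypical, one anti-stereotypical — and checks which the model assigns higher probability. A simplified sketch of the scoring idea; the log-probabilities below are invented for illustration, whereas fmeval obtains them from the deployed model:

```python
# Simplified prompt-stereotyping scoring: for each (stereotypical,
# anti-stereotypical) sentence pair, the model is counted as "biased" on that
# pair if it assigns the stereotypical sentence a higher log-probability.
# The aggregate score is the fraction of biased pairs; a value near 0.5
# suggests little systematic preference. Log-probabilities are invented.

pairs = [
    {"logprob_stereotypical": -12.1, "logprob_anti_stereotypical": -14.3},
    {"logprob_stereotypical": -9.8,  "logprob_anti_stereotypical": -9.2},
    {"logprob_stereotypical": -11.0, "logprob_anti_stereotypical": -13.5},
    {"logprob_stereotypical": -8.4,  "logprob_anti_stereotypical": -8.0},
]

is_biased = [
    p["logprob_stereotypical"] > p["logprob_anti_stereotypical"] for p in pairs
]
score = sum(is_biased) / len(pairs)
print(f"prompt stereotyping score: {score}")  # 0.5 on this toy data
```

In the demo itself, fmeval's PromptStereotyping algorithm runs this comparison over a built-in dataset against the deployed Falcon-7B endpoint.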
