Fine-tuning LLMs to Reduce Hallucination - Leveraging Out-of-Domain Data
Offered By: Weights & Biases via YouTube
Course Description
Overview
Syllabus
Webinar agenda and overview of Mistral AI
Fine-Tuning Services: Introduction to Mistral's fine-tuning API and services
Conversational AI Interface: Introduction to Le Chat, Mistral's conversational AI tool
Latest Model Releases: Newest Mistral models and their features
Fine-Tuning Process: Steps and benefits of fine-tuning models
Hackathon Winning Projects: Examples of innovative uses of fine-tuning
Hands-On Demo Introduction: Introduction to the practical demo segment
Setting Up the Demo: Instructions for setting up and running the demo notebook
Creating Initial Prompt: Steps to create and test an initial prompt
Evaluation Pipeline: Setting up and running an evaluation pipeline for model performance
Improving Model Performance: Strategies and techniques to enhance model accuracy
Fine-Tuning and Results: Creating and evaluating a fine-tuned model
Two-Step Fine-Tuning: Explanation and demonstration of the two-step fine-tuning process
Conclusion and final thoughts
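The fine-tuning steps in the syllabus above can be sketched in code. A minimal sketch, assuming training data in the chat-style "messages" JSONL format that Mistral's fine-tuning service accepts; the file name and example conversations here are illustrative, not taken from the webinar:

```python
import json

# Each training example is one JSON object with a "messages" list,
# mirroring the chat format used for fine-tuning data.
examples = [
    {
        "messages": [
            {"role": "user", "content": "What is the capital of France?"},
            {"role": "assistant", "content": "Paris."},
        ]
    },
    {
        "messages": [
            {"role": "user", "content": "What is 2 + 2?"},
            {"role": "assistant", "content": "4"},
        ]
    },
]

def write_jsonl(path, rows):
    """Write one JSON object per line (JSONL)."""
    with open(path, "w", encoding="utf-8") as f:
        for row in rows:
            f.write(json.dumps(row) + "\n")

write_jsonl("train.jsonl", examples)
```

In practice this file would then be uploaded via Mistral's fine-tuning API and referenced when creating a fine-tuning job; the demo notebook covers those steps.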
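The evaluation-pipeline step can likewise be sketched as a small harness that scores a model's answers against references. This is a simplified exact-match version, not the webinar's actual pipeline (which uses Weights & Biases tooling); the stub model stands in for a call to a fine-tuned Mistral model:

```python
from typing import Callable

def evaluate(model: Callable[[str], str], dataset: list[tuple[str, str]]) -> float:
    """Return exact-match accuracy of `model` over (prompt, reference) pairs."""
    correct = sum(
        1 for prompt, ref in dataset if model(prompt).strip() == ref.strip()
    )
    return correct / len(dataset)

# Stub standing in for an API call to a (fine-tuned) model.
def stub_model(prompt: str) -> str:
    return {"2 + 2 = ?": "4"}.get(prompt, "unknown")

dataset = [("2 + 2 = ?", "4"), ("Capital of France?", "Paris")]
print(evaluate(stub_model, dataset))  # prints 0.5
```

Running the same harness before and after fine-tuning gives a like-for-like measure of whether the fine-tuned model hallucinates less on the held-out prompts.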
Taught by
Weights & Biases
Related Courses
Amazon SageMaker JumpStart Foundations (Japanese) - Amazon Web Services via AWS Skill Builder
AWS Flash - Generative AI with Diffusion Models - Amazon Web Services via AWS Skill Builder
AWS Flash - Operationalize Generative AI Applications (FMOps/LLMOps) - Amazon Web Services via AWS Skill Builder
AWS SimuLearn: Automate Fine-Tuning of an LLM - Amazon Web Services via AWS Skill Builder
AWS SimuLearn: Fine-Tune a Base Model with RLHF - Amazon Web Services via AWS Skill Builder