Fine-tuning LLMs to Reduce Hallucination - Leveraging Out-of-Domain Data
Offered By: Weights & Biases via YouTube
Course Description
Overview
This webinar, presented by Weights & Biases with Mistral AI, explores how fine-tuning large language models can reduce hallucination by leveraging out-of-domain data. It introduces Mistral's fine-tuning services and latest models, then walks through a hands-on demo that sets up an evaluation pipeline, fine-tunes a model, and demonstrates a two-step fine-tuning process.
Syllabus
Webinar agenda and overview of Mistral AI
Fine-Tuning Services: Introduction to Mistral's fine-tuning API and services
Conversational AI Interface: Introduction to Le Chat, Mistral's conversational AI tool
Latest Model Releases: Newest Mistral models and their features
Fine-Tuning Process: Steps and benefits of fine-tuning models
Hackathon Winning Projects: Examples of innovative uses of fine-tuning
Hands-On Demo Introduction: Overview of the practical demo segment
Setting Up the Demo: Instructions for setting up and running the demo notebook
Creating Initial Prompt: Steps to create and test an initial prompt
Evaluation Pipeline: Setting up and running an evaluation pipeline for model performance (see the evaluation sketch after this syllabus)
Improving Model Performance: Strategies and techniques to enhance model accuracy
Fine-Tuning and Results: Creating and evaluating a fine-tuned model (see the fine-tuning sketch after this syllabus)
Two-Step Fine-Tuning: Explanation and demonstration of the two-step fine-tuning process
Conclusion and final thoughts
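
For readers who want a concrete picture of the evaluation step covered in the demo, the following is a minimal sketch of such a pipeline, not the webinar's actual notebook. Here `call_model` is a hypothetical placeholder for whatever client queries the model, the metric and project name are assumptions, and only the wandb calls (init, log, finish) are standard Weights & Biases API.

```python
import wandb

def call_model(prompt: str) -> str:
    """Hypothetical placeholder: send the prompt to the model under evaluation."""
    raise NotImplementedError("Wire this up to your model client of choice.")

def exact_match(prediction: str, reference: str) -> bool:
    """Simple metric: case- and whitespace-insensitive exact match."""
    return prediction.strip().lower() == reference.strip().lower()

def evaluate(dataset: list[dict]) -> float:
    """Run the model over (prompt, answer) pairs and log accuracy to W&B."""
    run = wandb.init(project="mistral-finetune-eval")  # assumed project name
    correct = 0
    for example in dataset:
        prediction = call_model(example["prompt"])
        correct += exact_match(prediction, example["answer"])
    accuracy = correct / len(dataset)
    run.log({"accuracy": accuracy, "n_examples": len(dataset)})
    run.finish()
    return accuracy
```

Running the same evaluation before and after fine-tuning is what makes the improvement measurable, which is the point of the pipeline segment.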
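The fine-tuning step itself runs through Mistral's fine-tuning API. The sketch below is based on the publicly documented mistralai Python client (upload a JSONL file, then create a job); the exact method names depend on the client version, and the file name, model name, and hyperparameter values are illustrative assumptions rather than the webinar's settings.

```python
import os
from mistralai import Mistral

client = Mistral(api_key=os.environ["MISTRAL_API_KEY"])

# Upload a JSONL training file (file name and contents are placeholders).
training_file = client.files.upload(
    file={
        "file_name": "train.jsonl",
        "content": open("train.jsonl", "rb"),
    }
)

# Launch a fine-tuning job; base model and hyperparameters are illustrative only.
job = client.fine_tuning.jobs.create(
    model="open-mistral-7b",
    training_files=[{"file_id": training_file.id, "weight": 1}],
    hyperparameters={"training_steps": 10, "learning_rate": 1e-4},
)

# Check job status; once finished, the fine-tuned model id can be used in
# chat calls and re-scored with the evaluation pipeline sketched above.
retrieved = client.fine_tuning.jobs.get(job_id=job.id)
print(retrieved.status)
```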
Taught by
Weights & Biases
Related Courses
Advanced Deployment Scenarios with TensorFlow - DeepLearning.AI via Coursera
AI for Medical Diagnosis - DeepLearning.AI via Coursera
AI for Medical Prognosis - DeepLearning.AI via Coursera
AI in Healthcare Capstone - Stanford University via Coursera
Amazon SageMaker JumpStart Foundations - Amazon Web Services via AWS Skill Builder