Improving Accuracy of LLM Applications

Offered By: DeepLearning.AI via Coursera

Tags

Fine-Tuning Courses, SQL Courses, Prompt Engineering Courses, LoRA (Low-Rank Adaptation) Courses

Course Description

Overview

Join our new short course, Improving Accuracy of LLM Applications, built with Lamini and Meta. Learn from Sharon Zhou, Co-founder & CEO of Lamini, and Amit Sangani, Senior Director of Partner Engineering at Meta.

Many developers run into frustratingly inconsistent results when working with LLM applications. This course offers a systematic approach to improving the accuracy and reliability of your LLM applications. You will build an SQL agent, add evaluation metrics to measure performance, and use prompt engineering and self-reflection to make the model perform better. Finally, you will fine-tune the model with techniques like LoRA and memory tuning, which embeds facts in the model's weights to reduce hallucinations. Throughout the course, you'll use models from Meta's open-source Llama family.

What you'll do:

1. Build a text-to-SQL agent and simulate situations where it hallucinates, to begin the evaluation process (a minimal sketch of this setup follows the list).
2. Build an evaluation framework to systematically measure performance, covering criteria for good evaluations, best practices, and how to develop an evaluation score.
3. Learn how instruction fine-tuning teaches pre-trained LLMs to follow instructions, and how memory fine-tuning embeds facts to reduce hallucinations.
4. Debunk fine-tuning myths and see how Parameter-Efficient Fine-Tuning (PEFT) techniques such as Low-Rank Adaptation (LoRA) reduce training time by 100x, and how Mixture of Memory Experts (MoME) reduces it even further (see the LoRA sketch below).
5. Iterate on generating training data and fine-tuning, picking up practical tips such as adding examples, generating variations, and filtering generated data to increase model accuracy (see the filtering sketch below).

Start improving the accuracy of your LLM applications today!
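To make steps 1 and 2 concrete, here is a minimal sketch of a text-to-SQL loop with a simple pass/fail evaluation score. Everything here is an illustrative assumption rather than the course's actual code: the toy schema, the scoring rule, and the generate() stub, which stands in for whatever Llama inference call you use (Lamini's SDK, a local llama.cpp server, etc.).

```python
import sqlite3

SCHEMA = "CREATE TABLE players (name TEXT, team TEXT, points INTEGER);"

def generate(prompt: str) -> str:
    """Placeholder: swap in your actual Llama inference call."""
    raise NotImplementedError

def text_to_sql(question: str) -> str:
    # Give the model the schema so it has a chance of writing valid SQL.
    prompt = (
        f"Schema:\n{SCHEMA}\n"
        f"Write one SQLite query that answers: {question}\nSQL:"
    )
    return generate(prompt).strip()

def evaluation_score(cases: list[tuple[str, str]]) -> float:
    """Fraction of questions whose generated SQL runs and returns the same
    rows as a hand-written reference query."""
    conn = sqlite3.connect(":memory:")
    conn.executescript(SCHEMA)
    passed = 0
    for question, reference_sql in cases:
        try:
            got = conn.execute(text_to_sql(question)).fetchall()
            want = conn.execute(reference_sql).fetchall()
            passed += int(got == want)
        except sqlite3.Error:
            pass  # hallucinated tables or columns raise here and count as failures
    conn.close()
    return passed / len(cases)
```

Executing the generated SQL is what surfaces hallucinations: a query that references a nonexistent table fails loudly instead of merely looking plausible on the page.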
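For step 4, one common way to attach LoRA adapters to a Llama checkpoint is Hugging Face's peft library. The course itself uses Lamini's tooling, so treat this as a sketch of the general technique; the model name and hyperparameters are example values, not course settings.

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Load a base model to adapt (example checkpoint; use whichever Llama variant you have access to).
base = AutoModelForCausalLM.from_pretrained("meta-llama/Meta-Llama-3-8B")

config = LoraConfig(
    r=8,                                  # rank of the low-rank update matrices
    lora_alpha=16,                        # scaling applied to the adapter output
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, config)
model.print_trainable_parameters()  # typically well under 1% of weights are trainable
```

The speedup comes from training only the small adapter matrices while the base weights stay frozen, which shrinks both the gradient computation and the optimizer state.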
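And for step 5's filtering tip, a hedged sketch of one cheap heuristic: drop any generated (question, SQL) training pair whose SQL does not even execute against the schema. The function name and schema are hypothetical.

```python
import sqlite3

SCHEMA = "CREATE TABLE players (name TEXT, team TEXT, points INTEGER);"

def filter_generated_pairs(pairs: list[tuple[str, str]]) -> list[dict]:
    """Keep only generated (question, sql) pairs whose SQL parses and runs."""
    conn = sqlite3.connect(":memory:")
    conn.executescript(SCHEMA)
    kept = []
    for question, sql in pairs:
        try:
            conn.execute(sql)
            kept.append({"question": question, "sql": sql})
        except sqlite3.Error:
            continue  # invalid SQL would only teach the model bad habits
    conn.close()
    return kept
```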

Syllabus

  • Improving Accuracy of LLM Applications

Taught by

Sharon Zhou and Amit Sangani
