Langchain - Auto Evaluate LLMs and Agents - OpenAI GPT-3 Evaluation Using Langchain and Custom Prompts
Offered By: echohive via YouTube
Course Description
Overview
Explore a hands-on tutorial on evaluating the performance of language models such as OpenAI's GPT-3 using Langchain and custom prompts. Learn to create an environment, install the necessary packages, and review code for LLM question-answering tasks. Discover how to craft custom prompts for LLM evaluation and how to evaluate an agent that uses tools; both workflows are sketched below. Gain insight into using datasets from Hugging Face and into Langchain's built-in evaluation capabilities. Follow the detailed timeline, from the introduction and demo through the final code review, to deepen your understanding of LLM performance assessment.
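As a rough illustration of the question-answering evaluation the course walks through, here is a minimal sketch using Langchain's classic (pre-1.0) QAEvalChain with a custom grading prompt. The example questions, answers, and prompt wording are assumptions for illustration, not the course's actual code, and an OpenAI API key is assumed to be set in the environment.

from langchain.llms import OpenAI
from langchain.evaluation.qa import QAEvalChain
from langchain.prompts import PromptTemplate

# Hypothetical hand-written evaluation set; the course builds its own.
examples = [
    {"question": "Who wrote the novel 1984?", "answer": "George Orwell"},
]
# Predictions would normally come from running your QA chain over the questions.
predictions = [
    {"result": "The novel 1984 was written by George Orwell."},
]

# Custom grading prompt; the variable names mirror QAEvalChain's default template.
custom_prompt = PromptTemplate(
    input_variables=["query", "answer", "result"],
    template=(
        "You are a teacher grading an answer.\n"
        "Question: {query}\n"
        "Reference answer: {answer}\n"
        "Student answer: {result}\n"
        "Respond with GRADE: CORRECT or GRADE: INCORRECT."
    ),
)

llm = OpenAI(temperature=0)  # GPT-3-era completion model via Langchain
eval_chain = QAEvalChain.from_llm(llm, prompt=custom_prompt)
graded = eval_chain.evaluate(
    examples,
    predictions,
    question_key="question",
    answer_key="answer",
    prediction_key="result",
)
print(graded)  # each item holds the grader model's verdict for one example

Keeping temperature at 0 makes the grading model's verdicts as deterministic as possible, which matters when the evaluation is rerun across model or prompt changes.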
Syllabus
Intro and demo
Creating environment and pip installs
Code review for LLM QA
Custom prompt for LLM eval
Code review for agent with tool eval
Final Code review
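For the agent-with-tool evaluation mentioned in the overview, a comparably hedged sketch follows: it runs a classic Langchain zero-shot ReAct agent with an assumed calculator tool and grades its answers with QAEvalChain's default prompt. The tool choice and questions are illustrative assumptions, not taken from the course.

from langchain.llms import OpenAI
from langchain.agents import initialize_agent, load_tools, AgentType
from langchain.evaluation.qa import QAEvalChain

llm = OpenAI(temperature=0)
# "llm-math" is an assumed example tool; the course may load different tools.
tools = load_tools(["llm-math"], llm=llm)
agent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION)

# Hypothetical evaluation examples for the agent.
examples = [{"question": "What is 13 multiplied by 7?", "answer": "91"}]
predictions = [{"result": agent.run(ex["question"])} for ex in examples]

# Grade the agent's answers against the reference answers.
eval_chain = QAEvalChain.from_llm(llm)
graded = eval_chain.evaluate(
    examples,
    predictions,
    question_key="question",
    answer_key="answer",
    prediction_key="result",
)
print(graded)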
Taught by
echohive
Related Courses
How to Build Codex Solutions (Microsoft via YouTube)
Unlocking the Power of OpenAI for Startups - Microsoft for Startups (Microsoft via YouTube)
Building Intelligent Applications with World-Class AI (Microsoft via YouTube)
Stanford Seminar - Transformers in Language: The Development of GPT Models Including GPT-3 (Stanford University via YouTube)
ChatGPT: GPT-3, GPT-4 Turbo: Unleash the Power of LLM's (Udemy)