YoVDO

LangChain - Auto Evaluate LLMs and Agents - OpenAI GPT-3 Evaluation Using LangChain and Custom Prompts

Offered By: echohive via YouTube

Tags

LangChain Courses Programming Courses Quality Assurance Courses GPT-3 Courses

Course Description

Overview

Explore a comprehensive tutorial on evaluating the performance of language models such as OpenAI's GPT-3 using LangChain and custom prompts. Learn to create an environment, install the necessary packages, and review code for LLM question-answering tasks. Discover how to craft custom prompts for LLM evaluation and walk through code for evaluating agents equipped with tools. Gain insight into using datasets from Hugging Face and leveraging LangChain's evaluation capabilities. Follow the detailed timeline, from introduction and demo through the final code review, to deepen your understanding of LLM performance assessment.
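The evaluation pattern the course covers can be sketched as follows. This is a minimal stand-in, not the course's actual code: the `EVAL_PROMPT` template is modeled loosely on LangChain's QA evaluation prompt, and `stub_llm` replaces the OpenAI GPT-3 call so the pattern runs without an API key. All names here are hypothetical.

```python
# Sketch of custom-prompt LLM evaluation (stand-in for LangChain's
# QAEvalChain + OpenAI; `stub_llm` replaces the real model call).

# Hypothetical custom grading prompt, in the spirit of LangChain's QA eval template.
EVAL_PROMPT = """You are grading a student's answer against a reference.
Question: {query}
Reference answer: {answer}
Student answer: {result}
Grade strictly as CORRECT or INCORRECT."""

def stub_llm(prompt: str) -> str:
    """Stand-in for an LLM call: marks CORRECT when the student answer
    contains the reference answer (illustration only)."""
    fields = {line.split(": ", 1)[0]: line.split(": ", 1)[1]
              for line in prompt.splitlines() if ": " in line}
    ok = fields["Reference answer"].lower() in fields["Student answer"].lower()
    return "CORRECT" if ok else "INCORRECT"

def evaluate(examples, predictions, llm=stub_llm):
    """Grade each prediction against its reference example."""
    grades = []
    for ex, pred in zip(examples, predictions):
        prompt = EVAL_PROMPT.format(query=ex["query"],
                                    answer=ex["answer"],
                                    result=pred["result"])
        grades.append(llm(prompt))
    return grades

examples = [{"query": "Capital of France?", "answer": "Paris"}]
predictions = [{"result": "The capital is Paris."}]
print(evaluate(examples, predictions))  # ['CORRECT']
```

In the course, the grading LLM and the answering LLM are both real models; the key idea shown here is that the evaluation criteria live entirely in the custom prompt, so they can be tightened or relaxed without changing any code.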

Syllabus

Intro and demo
Creating the environment and pip installs
Code review for LLM QA
Custom prompt for LLM evaluation
Code review for agent-with-tool evaluation
Final code review
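The agent-with-tool evaluation step above follows the same grade-against-reference loop, but the answers come from an agent that can call tools. A minimal sketch, with a stubbed agent, a toy calculator tool, and a string-match grader standing in for the LangChain agent and LLM judge used in the video (all names hypothetical):

```python
# Sketch of agent-with-tools evaluation: run an agent over test questions,
# then grade its answers against references. Tool, agent, and grader are
# minimal stand-ins for the LangChain agent + LLM evaluator.

def calculator_tool(expression: str) -> str:
    """Toy tool the agent can call (the course uses LangChain tools)."""
    return str(eval(expression, {"__builtins__": {}}))  # arithmetic only

def stub_agent(question: str) -> str:
    """Stand-in agent: routes arithmetic questions to the tool."""
    if question.startswith("Compute "):
        return calculator_tool(question.removeprefix("Compute "))
    return "I don't know."

def grade(example: dict, result: str) -> str:
    """Stand-in grader: in the course this is an LLM judging the answer."""
    return "CORRECT" if example["answer"] in result else "INCORRECT"

dataset = [{"query": "Compute 2 + 3", "answer": "5"},
           {"query": "Compute 10 * 4", "answer": "40"}]
grades = [grade(ex, stub_agent(ex["query"])) for ex in dataset]
print(grades)  # ['CORRECT', 'CORRECT']
```

The design point: because the agent, the tools, and the grader are separate pieces, the same grading loop can score either a plain QA chain or a tool-using agent, which is how the course reuses its evaluation setup across both cases.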


Taught by

echohive

Related Courses

Prompt Templates for GPT-3.5 and Other LLMs - LangChain
James Briggs via YouTube
Getting Started with GPT-3 vs. Open Source LLMs - LangChain
James Briggs via YouTube
Chatbot Memory for Chat-GPT, Davinci + Other LLMs - LangChain
James Briggs via YouTube
Chat in LangChain
James Briggs via YouTube
LangChain Data Loaders, Tokenizers, Chunking, and Datasets - Data Prep
James Briggs via YouTube