AI Agent Evaluation with RAGAS Using LangChain, Claude 3, and Pinecone

Offered By: James Briggs via YouTube

Tags

LangChain Courses, Pinecone Courses, Cohere Courses, Vector Databases Courses, Ragas Courses, AI Agents Courses, Retrieval Augmented Generation Courses

Course Description

Overview

Explore the RAGAS (RAG ASsessment) evaluation framework for RAG pipelines in this 20-minute video tutorial. Learn how to assess an AI agent built with LangChain, utilizing Anthropic's Claude 3, Cohere's embedding models, and the Pinecone vector database. Dive into the process of evaluating RAG systems, understanding RAGAS metrics, and implementing metrics-driven development. Gain insights into retrieval metrics like context recall and precision, as well as generation metrics such as faithfulness and answer relevancy. Access the accompanying code, article, and additional resources to enhance your understanding of RAG evaluation techniques.
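The description mentions formatting agent output for RAGAS evaluation. As a hedged illustration (not the video's exact code), the sketch below shows the record layout the ragas library expects — `question`, `contexts`, `answer`, `ground_truth` columns — with illustrative values invented for this example; the actual `evaluate` call needs the library installed and LLM API credentials, so it is shown only in a comment.

```python
# Hedged sketch: the per-sample record format a RAGAS-style evaluation
# consumes. Field names mirror the ragas library's expected columns;
# the string values are illustrative, not from the video.
record = {
    "question": "Which vector database does the agent query?",
    "contexts": [  # chunks the retriever returned, in rank order
        "The agent retrieves chunks from a Pinecone index.",
        "Embeddings are produced with a Cohere embedding model.",
    ],
    "answer": "It queries a Pinecone index built on Cohere embeddings.",
    "ground_truth": "The agent queries a Pinecone vector index.",
}

# With ragas installed and an LLM key configured, evaluation would look
# roughly like this (not executed here):
#
#   from datasets import Dataset
#   from ragas import evaluate
#   from ragas.metrics import (context_precision, context_recall,
#                              faithfulness, answer_relevancy)
#   scores = evaluate(
#       Dataset.from_list([record]),
#       metrics=[context_precision, context_recall,
#                faithfulness, answer_relevancy],
#   )
print(sorted(record))
```

Collecting many such records (one per question the agent answered) yields the evaluation dataset the tutorial scores.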

Syllabus

RAG Evaluation
Overview of LangChain RAG Agent
RAGAS Code Prerequisites
Agent Output for RAGAS
RAGAS Evaluation Format
RAGAS Metrics
Understanding RAGAS Metrics
Retrieval Metrics
RAGAS Context Recall
RAGAS Context Precision
Generation Metrics
RAGAS Faithfulness
RAGAS Answer Relevancy
Metrics-Driven Development
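To make the syllabus's metrics concrete, here is a simplified sketch of how context precision, context recall, and faithfulness reduce to ratios once relevance and attribution judgments exist. This is an assumption-laden outline: RAGAS derives those judgments with an LLM judge, whereas here they are passed in directly as labels.

```python
def context_precision_at_k(relevant_flags):
    """Mean precision@k over the ranks that hold a relevant chunk.

    relevant_flags: 0/1 per retrieved chunk, in rank order (1 = the
    chunk supports the ground-truth answer). Simplified form of the
    RAGAS context precision metric; RAGAS labels relevance with an LLM.
    """
    if not any(relevant_flags):
        return 0.0
    score, hits = 0.0, 0
    for k, rel in enumerate(relevant_flags, start=1):
        if rel:
            hits += 1
            score += hits / k  # precision@k, counted at each relevant hit
    return score / sum(relevant_flags)


def context_recall(num_ground_truth_sentences, num_attributable):
    """Fraction of ground-truth sentences attributable to the contexts."""
    if num_ground_truth_sentences == 0:
        return 0.0
    return num_attributable / num_ground_truth_sentences


def faithfulness(num_supported_claims, num_total_claims):
    """Fraction of the answer's claims supported by the retrieved contexts."""
    if num_total_claims == 0:
        return 0.0
    return num_supported_claims / num_total_claims


# Relevant chunks at ranks 1 and 3 of three retrieved chunks:
print(context_precision_at_k([1, 0, 1]))  # (1/1 + 2/3) / 2 ≈ 0.833
```

Answer relevancy, the fourth metric in the video, compares embeddings of the question against questions regenerated from the answer, so it does not reduce to a label ratio and is omitted here.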


Taught by

James Briggs

Related Courses

Pinecone Vercel Starter Template and RAG - Live Code Review Part 2
Pinecone via YouTube
Will LLMs Kill Search? The Future of Information Retrieval
Aleksa Gordić - The AI Epiphany via YouTube
RAG But Better: Rerankers with Cohere AI - Improving Retrieval Pipelines
James Briggs via YouTube
Advanced RAG - Contextual Compressors and Filters - Lecture 4
Sam Witteveen via YouTube
LangChain Multi-Query Retriever for RAG - Advanced Technique for Broader Vector Space Search
James Briggs via YouTube