YoVDO

Fundamentals of AI Agents Using RAG and LangChain

Offered By: IBM via Coursera

Tags

LangChain Courses, PyTorch Courses, Prompt Engineering Courses, Hugging Face Courses, Faiss Courses, In-context Learning Courses, AI Agents Courses, Retrieval Augmented Generation Courses

Course Description

Overview

Business demand for technical generative AI skills is exploding, and AI engineers who can work with large language models (LLMs) are in high demand. This Fundamentals of AI Agents Using RAG and LangChain course builds job-ready skills that will fuel your AI career.

During this course, you’ll explore retrieval-augmented generation (RAG), prompt engineering, and LangChain concepts. You’ll look at RAG, its applications, and its process, along with encoders, their tokenizers, and the FAISS library. You’ll then apply in-context learning and prompt engineering to design and refine prompts for accurate responses, and you’ll explore LangChain tools, components, and chat models, working with LangChain to simplify application development with LLMs.

You’ll also get valuable hands-on practice in online labs, developing applications that integrate LLM, LangChain, and RAG technologies, and you’ll complete a real-world project you can discuss in interviews. If you’re keen to boost your resume and extend your generative AI skills to transformer-based LLMs, enroll today and build job-ready skills in just 8 hours.

Syllabus

  • RAG Framework
    • In this module, you will learn how RAG is used to generate responses for applications such as chatbots. You’ll then learn about the RAG process, the Dense Passage Retrieval (DPR) context and question encoders with their tokenizers, and the Faiss library developed by Facebook AI Research for searching high-dimensional vectors (a minimal retrieval sketch follows this syllabus). In hands-on labs, you will use RAG with PyTorch to evaluate content appropriateness and with Hugging Face to retrieve information from a dataset.
  • Prompt Engineering and LangChain
    • In this module, you will learn about in-context learning and advanced prompt engineering methods for designing and refining prompts that draw relevant and accurate responses from AI. You’ll then be introduced to the LangChain framework, an open-source interface that simplifies application development with LLMs, and you’ll learn about its tools, components, and chat models. The module also covers prompt templates, example selectors, and output parsers, and you’ll explore LangChain document loaders and retrievers, along with chains and agents for building applications (see the second sketch after this syllabus). In hands-on labs, you will enhance LLM applications and develop an agent that uses integrated LLM, LangChain, and RAG technologies for interactive and efficient document retrieval.
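
To make the retrieval step in the RAG module concrete, here is a minimal sketch (not part of the course materials) that encodes a few passages with Hugging Face's DPR context encoder, indexes them with Faiss, and retrieves the closest match for a question with the DPR question encoder. The specific checkpoints and toy passages are chosen purely for illustration.

# Minimal DPR + Faiss retrieval sketch (illustrative; not the course's lab code).
# Assumes: pip install torch transformers faiss-cpu
import faiss
import torch
from transformers import (
    DPRContextEncoder, DPRContextEncoderTokenizer,
    DPRQuestionEncoder, DPRQuestionEncoderTokenizer,
)

passages = [
    "LangChain is an open-source framework for building LLM applications.",
    "Faiss is a library for efficient similarity search over dense vectors.",
    "Retrieval-augmented generation grounds LLM answers in retrieved documents.",
]

# Encode the passages with the DPR context encoder.
ctx_name = "facebook/dpr-ctx_encoder-single-nq-base"
ctx_tok = DPRContextEncoderTokenizer.from_pretrained(ctx_name)
ctx_enc = DPRContextEncoder.from_pretrained(ctx_name)
with torch.no_grad():
    ctx_emb = ctx_enc(**ctx_tok(passages, padding=True, truncation=True,
                                return_tensors="pt")).pooler_output

# Build a Faiss inner-product index over the passage embeddings.
index = faiss.IndexFlatIP(ctx_emb.shape[1])
index.add(ctx_emb.numpy())

# Encode the question and retrieve the top-2 passages.
q_name = "facebook/dpr-question_encoder-single-nq-base"
q_tok = DPRQuestionEncoderTokenizer.from_pretrained(q_name)
q_enc = DPRQuestionEncoder.from_pretrained(q_name)
with torch.no_grad():
    q_emb = q_enc(**q_tok("What is Faiss used for?",
                          return_tensors="pt")).pooler_output

scores, ids = index.search(q_emb.numpy(), 2)
for score, i in zip(scores[0], ids[0]):
    print(f"{score:.2f}  {passages[i]}")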
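
Likewise, for the prompt engineering and LangChain module, the sketch below pipes a prompt template into a chat model and an output parser using LangChain's expression language. The use of an OpenAI chat model here is an assumption made for illustration; the course labs may use a different provider.

# Minimal LangChain prompt-template chain (illustrative; the model choice is an
# assumption, not the course's setup). Requires OPENAI_API_KEY in the environment.
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_openai import ChatOpenAI

# Prompt template with a system message and placeholders for context and question.
prompt = ChatPromptTemplate.from_messages([
    ("system", "Answer the question using only the provided context."),
    ("human", "Context:\n{context}\n\nQuestion: {question}"),
])
llm = ChatOpenAI(model="gpt-4o-mini")  # hypothetical model choice
chain = prompt | llm | StrOutputParser()

answer = chain.invoke({
    "context": "Faiss is a library for efficient similarity search over dense vectors.",
    "question": "What is Faiss used for?",
})
print(answer)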

Taught by

Joseph Santarcangelo, Kang Wang, Sina Nazeri, and Wojciech 'Victor' Fulmyk

Related Courses

Pinecone Vercel Starter Template and RAG - Live Code Review Part 2
Pinecone via YouTube
Will LLMs Kill Search? The Future of Information Retrieval
Aleksa Gordić - The AI Epiphany via YouTube
RAG But Better: Rerankers with Cohere AI - Improving Retrieval Pipelines
James Briggs via YouTube
Advanced RAG - Contextual Compressors and Filters - Lecture 4
Sam Witteveen via YouTube
LangChain Multi-Query Retriever for RAG - Advanced Technique for Broader Vector Space Search
James Briggs via YouTube