Introduction to AI Orchestration with LangChain and LlamaIndex
Offered By: LinkedIn Learning
Course Description
Overview
Learn how to rapidly build future-proof generative AI apps, locally or in the cloud, using AI orchestration frameworks like LangChain and LlamaIndex.
Syllabus
Introduction
- Building local AI apps with LangChain and LlamaIndex
- What you should know
- Setting up your environment for building AI apps
- AI orchestration concepts
- Building an app with the OpenAI API
- Running local LLMs
- Your first LangChain app
- Your first LlamaIndex app
- Debugging AI apps
- AI over local documents: Retrieval-augmented generation
- Choosing an embedding
- RAG with LlamaIndex
- RAG with LangChain
- Challenge: Document summarization
- Solution: Document summarization
- App concepts for chaining and more complex workflows
- Getting JSON out of the LLM
- LLM function calling
- Challenge: Local LLM task offloading
- Solution: Local LLM task offloading
- Introduction to the ReAct agent framework
- Implementing a ReAct agent
- Challenge: LangChain and LlamaIndex strengths and weaknesses
- Solution: LangChain and LlamaIndex strengths and weaknesses
- Next steps for AI app engineers
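The sketches below illustrate a few of the syllabus topics above. They are minimal, hedged examples, not course materials: exact APIs differ between LangChain and LlamaIndex versions, and names like gpt-4o-mini or the data folder are placeholders.

A minimal "first LangChain app" in the LCEL (prompt | model | parser) style, assuming the langchain-openai and langchain-core packages are installed and OPENAI_API_KEY is set in the environment:

```python
# Minimal LangChain chain: prompt -> chat model -> string output parser.
# Assumes langchain-openai and langchain-core are installed and
# OPENAI_API_KEY is set; the model name is only an example.
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a concise assistant."),
    ("human", "{question}"),
])
llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)  # example model name
chain = prompt | llm | StrOutputParser()

print(chain.invoke({"question": "What is AI orchestration?"}))
```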
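A sketch of retrieval-augmented generation over local documents with LlamaIndex, along the lines of the "RAG with LlamaIndex" lesson. It assumes llama-index 0.10 or later (where imports live under llama_index.core), a ./data folder of files, and an OpenAI key for the default embedding model and LLM:

```python
# Minimal RAG pipeline: load local files, embed and index them,
# then answer a question with retrieved context.
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex

documents = SimpleDirectoryReader("data").load_data()   # read local documents
index = VectorStoreIndex.from_documents(documents)      # embed and index them
query_engine = index.as_query_engine()                  # retrieval + LLM synthesis

response = query_engine.query("Summarize the key points of these documents.")
print(response)
```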
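One common way to approach "Getting JSON out of the LLM" is the OpenAI client's JSON mode, which constrains the response to valid JSON. This sketch assumes the openai Python package (v1+) and OPENAI_API_KEY; the requested fields are arbitrary examples:

```python
# Ask the model for structured JSON using the OpenAI client's JSON mode.
# JSON mode requires mentioning JSON in the prompt and returns parseable output.
import json
from openai import OpenAI

client = OpenAI()
completion = client.chat.completions.create(
    model="gpt-4o-mini",  # example model name
    response_format={"type": "json_object"},
    messages=[
        {"role": "system", "content": "Reply only with JSON containing 'title' and 'tags'."},
        {"role": "user", "content": "Describe a course on AI orchestration."},
    ],
)
data = json.loads(completion.choices[0].message.content)
print(data["title"], data["tags"])
```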
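Finally, a sketch of the ReAct agent pattern covered in "Implementing a ReAct agent", here using LlamaIndex's ReActAgent with a single Python function exposed as a tool. The multiply() helper is purely illustrative, and the example assumes llama-index 0.10+ with the OpenAI LLM integration installed:

```python
# ReAct agent that interleaves reasoning steps with calls to a function tool.
from llama_index.core.agent import ReActAgent
from llama_index.core.tools import FunctionTool
from llama_index.llms.openai import OpenAI

def multiply(a: float, b: float) -> float:
    """Multiply two numbers and return the product."""
    return a * b

tool = FunctionTool.from_defaults(fn=multiply)  # wrap the function as a tool
agent = ReActAgent.from_tools([tool], llm=OpenAI(model="gpt-4o-mini"), verbose=True)

print(agent.chat("What is 21.7 times 3.2? Use the tool."))
```

With verbose=True the agent prints its thought/action/observation loop, which is the same behavior the course's debugging and agent lessons rely on to inspect what the LLM is doing.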
Taught by
M. Joel Dubinko
Related Courses
- Pinecone Vercel Starter Template and RAG - Live Code Review Part 2 (Pinecone via YouTube)
- Will LLMs Kill Search? The Future of Information Retrieval (Aleksa Gordić - The AI Epiphany via YouTube)
- RAG But Better: Rerankers with Cohere AI - Improving Retrieval Pipelines (James Briggs via YouTube)
- Advanced RAG - Contextual Compressors and Filters - Lecture 4 (Sam Witteveen via YouTube)
- LangChain Multi-Query Retriever for RAG - Advanced Technique for Broader Vector Space Search (James Briggs via YouTube)