Private RAG with Open Source and Custom LLMs - BentoML and OpenLLM
Offered By: LLMOps Space via YouTube
Course Description
Overview
Explore practical considerations for building private Retrieval-Augmented Generation (RAG) applications with open-source and custom LLMs in this talk by Chaoyu Yang, Founder and CEO of BentoML. Discover the benefits of self-hosting open-source LLMs or embedding models for RAG, learn common best practices for optimizing inference performance, and see how BentoML can be used to build RAG as a service. Learn how to chain language models with other pipeline components, including text and multi-modal embedding models, OCR pipelines, semantic chunking, classification models, and reranking models, and get an introduction to OpenLLM and its role in LLM deployments. This 51-minute session, presented by LLMOps Space, is aimed at practitioners deploying LLMs into production.
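The sketch below illustrates the kind of private RAG setup the talk discusses: a self-hosted, OpenAI-compatible endpoint (such as one served by OpenLLM) answering questions over locally retrieved context. The endpoint URL, model name, and toy keyword retriever are illustrative assumptions, not details from the session; a real deployment would use an embedding model and a vector store for retrieval.

    # Minimal private-RAG sketch: retrieve context locally, then query a
    # self-hosted, OpenAI-compatible LLM endpoint (e.g. one served by OpenLLM).
    # Assumptions (not from the talk): the server runs at http://localhost:3000/v1
    # and registers a model named "llama-3.1-8b-instruct"; the "retrieval" step is
    # a toy keyword match standing in for an embedding model plus vector store.
    from openai import OpenAI

    documents = [
        "BentoML packages models and custom code into deployable services.",
        "OpenLLM serves open-source LLMs behind an OpenAI-compatible API.",
    ]

    def retrieve(query: str) -> str:
        # Placeholder retriever: return the document sharing the most words with the query.
        scored = [(len(set(query.lower().split()) & set(d.lower().split())), d) for d in documents]
        return max(scored)[1]

    client = OpenAI(base_url="http://localhost:3000/v1", api_key="not-needed-for-local")

    question = "How does OpenLLM expose models?"
    context = retrieve(question)
    response = client.chat.completions.create(
        model="llama-3.1-8b-instruct",  # whatever model name the local server registers
        messages=[
            {"role": "system", "content": f"Answer using only this context:\n{context}"},
            {"role": "user", "content": question},
        ],
    )
    print(response.choices[0].message.content)

Because the client only needs a base URL, the same code works against any OpenAI-compatible server, which is one reason self-hosted serving layers commonly expose that interface.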
Syllabus
Private RAG with Open Source and Custom LLMs | BentoML | OpenLLM
Taught by
LLMOps Space
Related Courses
Large Language Models: Application through Production (Databricks via edX)
LLMOps - LLM Bootcamp (The Full Stack via YouTube)
MLOps: Why DevOps Solutions Fall Short in the Machine Learning World (Linux Foundation via YouTube)
Quick Wins Across the Enterprise with Responsible AI (Microsoft via YouTube)
End-to-End AI App Development: Prompt Engineering to LLMOps (Microsoft via YouTube)