YoVDO

Building a Chat Assistant with Canopy and Anyscale Endpoints

Offered By: Anyscale via YouTube

Tags

Retrieval Augmented Generation (RAG) Courses, Pinecone Courses, Vector Databases Courses, Embeddings Courses

Course Description

Overview

Explore the challenges of building a chat assistant and see how Canopy and Anyscale Endpoints offer a fast, easy, and free way to build RAG-based applications in this 45-minute webinar. Dive into the architecture, examine a real-life example, and follow a guide to getting started with your own chat assistant. Learn about Canopy, a flexible framework built on top of the Pinecone vector database that provides libraries and a simple API for chunking, embedding, chat history management, query optimization, and context retrieval. Gain insights into Anyscale Endpoints, a fast, serverless LLM API for building AI applications that serves and fine-tunes open LLMs such as Llama-2 and Mistral. Discover how Anyscale Endpoints now provides an embedding endpoint and supports fine-tuning of the largest Llama-2 model (70B), giving you flexibility with open LLMs through a single API.
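
To make the Anyscale Endpoints side concrete, the sketch below calls the service through the OpenAI-compatible Python client for both a chat completion and an embedding request. The base URL, the model identifiers (meta-llama/Llama-2-70b-chat-hf, thenlper/gte-large), and the ANYSCALE_API_KEY environment variable are assumptions for illustration and may differ from Anyscale's current offering; treat this as a minimal sketch, not a definitive integration.

```python
# Sketch: Anyscale Endpoints via the OpenAI-compatible Python client (openai >= 1.0).
# Base URL, model names, and env var are assumptions for illustration.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://api.endpoints.anyscale.com/v1",  # assumed Anyscale Endpoints URL
    api_key=os.environ["ANYSCALE_API_KEY"],
)

# Chat completion against an open LLM served by Anyscale Endpoints.
chat = client.chat.completions.create(
    model="meta-llama/Llama-2-70b-chat-hf",  # assumed model id
    messages=[
        {"role": "system", "content": "Answer questions using the provided context."},
        {"role": "user", "content": "What is Canopy?"},
    ],
)
print(chat.choices[0].message.content)

# Embeddings from the same API, e.g. for indexing documents in Pinecone.
emb = client.embeddings.create(
    model="thenlper/gte-large",  # assumed embedding model id
    input="Canopy is a RAG framework built on the Pinecone vector database.",
)
print(len(emb.data[0].embedding))
```

Because the API is OpenAI-compatible, existing OpenAI-based RAG code can usually be pointed at Anyscale Endpoints by changing only the base URL, API key, and model name.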

Syllabus

Build a chat assistant fast using Canopy from Pinecone and Anyscale Endpoints
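
For orientation, here is a minimal sketch of the Canopy flow the webinar walks through: initialize the tokenizer, connect a KnowledgeBase to a Pinecone index, upsert documents (Canopy handles chunking and embedding), wrap the knowledge base in a ContextEngine, and chat through a ChatEngine. The index name, document contents, and response handling are assumptions based on Canopy's public quickstart; by default Canopy calls OpenAI for LLM and embedding requests, and routing those calls to Anyscale Endpoints instead is done through Canopy's configuration, which is not shown here.

```python
# Minimal Canopy flow, adapted from the library's quickstart (details assumed).
# Requires PINECONE_API_KEY (and, with the default engines, OPENAI_API_KEY) in the environment.
from canopy.tokenizer import Tokenizer
from canopy.knowledge_base import KnowledgeBase
from canopy.context_engine import ContextEngine
from canopy.chat_engine import ChatEngine
from canopy.models.data_models import Document, UserMessage

Tokenizer.initialize()  # global tokenizer used for chunking and token counting

# Connect to an existing Canopy-managed Pinecone index (name is hypothetical).
kb = KnowledgeBase(index_name="canopy-demo")
kb.connect()

# Upsert raw documents; Canopy chunks and embeds them before writing to Pinecone.
kb.upsert([
    Document(
        id="doc-1",
        text="Anyscale Endpoints is a serverless API for serving and fine-tuning open LLMs.",
        source="webinar-notes",
    )
])

# ContextEngine retrieves and formats context; ChatEngine manages chat history,
# query optimization, and the final LLM call.
context_engine = ContextEngine(kb)
chat_engine = ChatEngine(context_engine)

response = chat_engine.chat(
    messages=[UserMessage(content="What is Anyscale Endpoints?")],
    stream=False,
)
print(response.choices[0].message.content)
```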


Taught by

Anyscale

Related Courses

Metadata Filtering for Vector Search - Latest Filter Tech
James Briggs via YouTube
Cohere vs. OpenAI Embeddings - Multilingual Search
James Briggs via YouTube
Building the Future with LLMs, LangChain, & Pinecone
Pinecone via YouTube
Supercharging Semantic Search with Pinecone and Cohere
Pinecone via YouTube
Preventing Déjà Vu - Vector Similarity Search for Security Alerts, with Expel and Pinecone
Pinecone via YouTube