Vectoring Into The Future: AWS Empowered RAG Systems for LLMs
Offered By: Conf42 via YouTube
Course Description
Overview
Explore the future of AWS-empowered RAG systems for Large Language Models in this conference talk from Conf42 LLMs 2024. Dive into foundation models, generative AI use cases, and AWS's broad generative AI capabilities. Examine the limitations of LLMs, then learn about vector embeddings and vector databases. Gain insights into enabling vector search across AWS services, including Amazon Aurora, OpenSearch, DocumentDB, MemoryDB, and Neptune Analytics. Understand Amazon Bedrock, Knowledge Bases for Amazon Bedrock, and the vector databases they rely on. Finally, watch a live demonstration of the Retrieve and Generate API that shows these technologies working together in practice.
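As a rough illustration of what the Retrieve and Generate API demo covers, the Python sketch below calls Knowledge Bases for Amazon Bedrock through the boto3 bedrock-agent-runtime client. The region, knowledge base ID, and model ARN are placeholders, not values from the talk, and assume a knowledge base has already been created and synced.

# Minimal sketch: query a knowledge base with the Retrieve and Generate API.
# The region, knowledge base ID, and model ARN below are placeholders.
import boto3

client = boto3.client("bedrock-agent-runtime", region_name="us-east-1")

response = client.retrieve_and_generate(
    input={"text": "What are the limitations of LLMs?"},
    retrieveAndGenerateConfiguration={
        "type": "KNOWLEDGE_BASE",
        "knowledgeBaseConfiguration": {
            "knowledgeBaseId": "EXAMPLEKBID",  # placeholder knowledge base ID
            "modelArn": "arn:aws:bedrock:us-east-1::foundation-model/anthropic.claude-3-sonnet-20240229-v1:0",
        },
    },
)

# The generated answer, grounded in chunks retrieved from the knowledge base.
print(response["output"]["text"])

# Each citation points back to the retrieved source documents.
for citation in response.get("citations", []):
    for ref in citation.get("retrievedReferences", []):
        print(ref.get("location"))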
Syllabus
intro
preamble
agenda
why foundation models?
generative ai can be used for a wide range of use cases
aws offers a broad choice of generative ai capabilities
limitations of llms
vector embeddings
vector databases
enabling vector search across aws services
amazon aurora with postgresql compatibility
using pgvector in aws (see the sketch after this syllabus)
amazon opensearch service
using opensearch in aws
amazon documentdb
amazon memorydb
amazon neptune analytics
amazon bedrock
knowledge bases for amazon bedrock
vector databases for amazon bedrock
retrieve and generate api
demo time
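For the "amazon aurora with postgresql compatibility" and "using pgvector in aws" chapters, the following sketch shows one way vector similarity search can look with the pgvector extension on Aurora PostgreSQL. The connection details, table layout, and embedding dimension are illustrative assumptions, not taken from the talk.

# Minimal sketch of similarity search with pgvector on Aurora PostgreSQL.
# Connection details, table name, and embedding dimension are placeholders.
import psycopg2

conn = psycopg2.connect(
    host="my-aurora-cluster.cluster-xxxx.us-east-1.rds.amazonaws.com",  # placeholder endpoint
    dbname="postgres",
    user="postgres",
    password="...",  # prefer Secrets Manager or IAM database auth in practice
)

with conn, conn.cursor() as cur:
    # Enable the extension and create a table with a vector column; the
    # dimension (1536 here) must match the embedding model you use.
    cur.execute("CREATE EXTENSION IF NOT EXISTS vector;")
    cur.execute(
        """
        CREATE TABLE IF NOT EXISTS documents (
            id bigserial PRIMARY KEY,
            content text,
            embedding vector(1536)
        );
        """
    )

    # Nearest-neighbour search: '<=>' is pgvector's cosine-distance operator.
    query_embedding = [0.0] * 1536  # replace with a real embedding from your model
    vector_literal = "[" + ",".join(str(x) for x in query_embedding) + "]"
    cur.execute(
        "SELECT content FROM documents ORDER BY embedding <=> %s::vector LIMIT 5;",
        (vector_literal,),
    )
    for (content,) in cur.fetchall():
        print(content)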
Taught by
Conf42
Related Courses
Amazon Bedrock Getting Started (Simplified Chinese) - Amazon Web Services via AWS Skill Builder
Introducción a Amazon Bedrock (Español de España) | Amazon Bedrock Getting Started (Spanish from Spain) - Amazon Web Services via AWS Skill Builder
Amazon Bedrock Getting Started (Indonesian) - Amazon Web Services via AWS Skill Builder
Nozioni di base su Amazon Bedrock (Italiano) | Amazon Bedrock Getting Started (Italian) - Amazon Web Services via AWS Skill Builder
Amazon Bedrock : guide de démarrage (Français) | Amazon Bedrock Getting Started (French) - Amazon Web Services via AWS Skill Builder