Building Applications with Vector Databases
Offered By: DeepLearning.AI via Coursera
Course Description
Overview
Vector databases use embeddings to capture the meaning of data, gauge the similarity between different pairs of vectors, and navigate large datasets to identify the most similar vectors. In the context of large language models, the primary use of vector databases is retrieval augmented generation (RAG), where text embeddings are stored and retrieved for specific queries.
However, the versatility of vector databases extends beyond RAG and makes it possible to build a wide range of applications quickly with minimal coding.
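To make the core idea concrete, here is a minimal sketch of embedding-and-similarity search in Python. The sentence-transformers library, the all-MiniLM-L6-v2 model, and the toy corpus are illustrative assumptions, not the course's own stack or datasets.

```python
# Minimal sketch of the core vector-database idea: embed text, then rank by
# cosine similarity. Library, model, and data choices here are illustrative.
from sentence_transformers import SentenceTransformer
import numpy as np

model = SentenceTransformer("all-MiniLM-L6-v2")  # small general-purpose embedder

corpus = [
    "How do I reset my password?",
    "What is the refund policy?",
    "The server returns a 500 error on login.",
]
query = "I forgot my login credentials"

# Encode corpus and query into dense vectors.
corpus_vecs = model.encode(corpus, normalize_embeddings=True)
query_vec = model.encode([query], normalize_embeddings=True)[0]

# With normalized vectors, cosine similarity is just a dot product.
scores = corpus_vecs @ query_vec
best = int(np.argmax(scores))
print(f"Most similar: {corpus[best]!r} (score={scores[best]:.3f})")
```

A vector database performs this same nearest-neighbor step at scale, using an index rather than a brute-force dot product over every stored vector.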
In this course, you’ll explore the implementation of six applications using vector databases:
1. Semantic Search: Create a search tool that goes beyond keyword matching, focusing on the meaning of content for efficient text-based searches on a user Q/A dataset.
2. RAG: Enhance your LLM applications by incorporating content from sources the model wasn’t trained on, like answering questions using the Wikipedia dataset (see the sketch after this list).
3. Recommender System: Develop a system that combines semantic search and RAG to recommend topics, and demonstrate it with a news article dataset.
4. Hybrid Search: Build an application that finds items using both images and descriptive text, using an eCommerce dataset as an example.
5. Facial Similarity: Create an app that compares facial features, using a database of public figures to determine how closely two faces resemble each other.
6. Anomaly Detection: Learn how to build an anomaly detection app that identifies unusual patterns in network communication logs.
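As referenced in item 2 above, the sketch below outlines the RAG pattern: retrieve the passages most similar to a question, then pass them to an LLM as grounding context. The generate_answer helper is a hypothetical placeholder rather than any specific LLM client, and the documents are toy examples rather than the course's Wikipedia dataset.

```python
# Sketch of retrieval augmented generation (RAG): retrieve relevant context by
# vector similarity, then ground the LLM's answer in that context.
# `generate_answer` is a hypothetical placeholder for an LLM client call.
from sentence_transformers import SentenceTransformer
import numpy as np

model = SentenceTransformer("all-MiniLM-L6-v2")

documents = [
    "The Eiffel Tower was completed in 1889 for the Paris World's Fair.",
    "Mount Everest is the highest mountain above sea level.",
    "Python was created by Guido van Rossum and released in 1991.",
]
doc_vecs = model.encode(documents, normalize_embeddings=True)

def retrieve(query: str, top_k: int = 2) -> list[str]:
    """Return the top_k documents most similar to the query."""
    q = model.encode([query], normalize_embeddings=True)[0]
    order = np.argsort(doc_vecs @ q)[::-1][:top_k]
    return [documents[i] for i in order]

def generate_answer(prompt: str) -> str:
    # Placeholder: swap in the LLM client of your choice.
    # This stub only reports that it received the prompt.
    return f"[LLM would answer based on a prompt of {len(prompt)} characters]"

question = "When was the Eiffel Tower finished?"
context = "\n".join(retrieve(question))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
print(generate_answer(prompt))
```

Swapping the in-memory arrays for a vector database index and the stub for a real LLM call turns this sketch into the Wikipedia question-answering application described above.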
After taking this course, you’ll be equipped with new ideas for building applications with any vector database.
Syllabus
- Project Overview
Taught by
Tim Tully
Related Courses
- Vector Similarity Search (Data Science Dojo via YouTube)
- Supercharging Semantic Search with Pinecone and Cohere (Pinecone via YouTube)
- Search Like You Mean It - Semantic Search with NLP and a Vector Database (Pinecone via YouTube)
- The Rise of Vector Data (Pinecone via YouTube)
- NER Powered Semantic Search in Python (James Briggs via YouTube)