Emerging Architectures for Large Language Model Applications - Building a Custom LLM Application
Offered By: Data Science Dojo via YouTube
Course Description
Overview
Explore emerging architectures for large language model applications in this comprehensive tutorial. Dive into the world of generative AI and learn how to build custom LLM-powered apps. Discover prevalent approaches, the canonical architecture, and available tools for creating LLM applications. Gain insights into embeddings, vector databases, retrieval augmented generation (RAG), and orchestration frameworks. Understand the nuances of LLMs, including prompt engineering, foundation models, context windows, and token limits. No prior background in generative AI or LLMs is required. Engage in a Q&A session to further enhance your understanding of this rapidly evolving field.
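To make the canonical retrieval augmented generation pattern concrete, below is a minimal sketch in Python of the flow the session covers: embed documents, store the vectors, retrieve the most similar ones for a query, and assemble a prompt that respects a length budget. The embed() function is a toy stand-in for a real embedding model, the in-memory list stands in for a vector database, and the character cap is a crude proxy for the model's token limit; numpy is the only assumed dependency.

import hashlib
import numpy as np

def embed(text: str, dim: int = 64) -> np.ndarray:
    # Toy deterministic embedding; a real application would call an embedding model.
    seed = int(hashlib.sha256(text.encode("utf-8")).hexdigest(), 16) % (2**32)
    rng = np.random.default_rng(seed)
    vec = rng.standard_normal(dim)
    return vec / np.linalg.norm(vec)

# Stand-in for a vector database: an in-memory list of (text, vector) pairs.
documents = [
    "Vector databases index embeddings for fast similarity search.",
    "Prompt engineering shapes how a foundation model responds.",
    "Context windows limit how many tokens a model can attend to.",
]
index = [(doc, embed(doc)) for doc in documents]

def retrieve(query: str, k: int = 2) -> list[str]:
    # Rank stored documents by cosine similarity to the query embedding
    # (vectors are unit length, so the dot product is the cosine similarity).
    q = embed(query)
    ranked = sorted(index, key=lambda item: float(q @ item[1]), reverse=True)
    return [text for text, _ in ranked[:k]]

def build_prompt(query: str, max_context_chars: int = 1000) -> str:
    # Stuff retrieved passages into the prompt, truncating to a crude
    # character budget that stands in for the model's context window.
    context = "\n".join(retrieve(query))[:max_context_chars]
    return (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {query}"
    )

print(build_prompt("How do vector databases help LLM applications?"))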
Syllabus
– Introduction + Agenda
– Canonical Design Patterns
– Embeddings
– Vector Database, Storing and Indexing of Vectors, Vector Similarity
– Large Language Models
– Prompt Engineering
– Foundation Models
– Context Window and Token Limits
– Customizing Large Language Models
– Questions and Answers
Taught by
Data Science Dojo
Related Courses
Vector Similarity Search (Data Science Dojo via YouTube)
Supercharging Semantic Search with Pinecone and Cohere (Pinecone via YouTube)
Search Like You Mean It - Semantic Search with NLP and a Vector Database (Pinecone via YouTube)
The Rise of Vector Data (Pinecone via YouTube)
NER Powered Semantic Search in Python (James Briggs via YouTube)