End to End LLMs with Azure
Offered By: Duke University via Coursera
Course Description
Overview
This course equips you with the skills to build and deploy Large Language Model (LLM) applications on Azure. Learn to deploy LLMs with Azure OpenAI Service, call its inference APIs, and integrate them with Python. Explore architectural patterns such as Retrieval-Augmented Generation (RAG) and supporting services such as Azure Search, and learn to streamline deployments with GitHub Actions. Apply your knowledge by implementing RAG with Azure Search, creating GitHub Actions workflows, and deploying an end-to-end LLM application, gaining a working understanding of Azure's ecosystem from model deployment to architectural patterns and deployment pipelines.
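The RAG pattern named above can be sketched in plain Python. In the course, Azure Search would supply the retrieval step and an Azure OpenAI deployment the generation step; here both are replaced by simple in-memory stand-ins (the document list, the keyword-overlap scoring, and the `build_prompt` helper are illustrative assumptions, not course code).

```python
# Minimal sketch of the Retrieval-Augmented Generation (RAG) pattern.
# In a real Azure deployment, retrieval would query an Azure Search index
# and the assembled prompt would be sent to an Azure OpenAI chat
# deployment; both are stubbed with in-memory stand-ins here.

def retrieve(query: str, documents: list[str], top_k: int = 2) -> list[str]:
    """Rank documents by naive keyword overlap with the query."""
    terms = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda d: len(terms & set(d.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def build_prompt(query: str, context: list[str]) -> str:
    """Assemble the grounded prompt an LLM would receive."""
    joined = "\n".join(f"- {c}" for c in context)
    return f"Answer using only this context:\n{joined}\n\nQuestion: {query}"

docs = [
    "Azure OpenAI Service hosts LLM deployments behind inference APIs.",
    "GitHub Actions automates build and deployment pipelines.",
    "Azure Search indexes documents for retrieval.",
]
query = "Which Azure service indexes documents"
prompt = build_prompt(query, retrieve(query, docs))
print(prompt)
```

The key design point of RAG is visible even in this stub: retrieved context is injected into the prompt so the model answers from your data rather than from its training alone.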
Syllabus
- Get Started with LLMs in Azure
- This week, you will explore architectural patterns for large language model applications and how to deploy them. By studying RAG, Azure services, and GitHub Actions, you will learn how to build robust applications. You will apply your learning by implementing RAG with Azure Search, creating GitHub Actions workflows, and deploying an end-to-end application.
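A GitHub Actions workflow for deploying such an application typically follows a test-then-deploy shape. The sketch below is a generic illustration, not course material: the workflow name, the `deploy.sh` script, and the `AZURE_CREDENTIALS` secret are hypothetical placeholders.

```yaml
# Hypothetical workflow: run tests, then deploy on pushes to main.
name: deploy-llm-app
on:
  push:
    branches: [main]
jobs:
  build-and-deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.11"
      - name: Install dependencies
        run: pip install -r requirements.txt
      - name: Run tests
        run: python -m pytest
      - name: Deploy to Azure
        # AZURE_CREDENTIALS is an assumed repository secret,
        # and deploy.sh an assumed deployment script.
        run: ./deploy.sh
        env:
          AZURE_CREDENTIALS: ${{ secrets.AZURE_CREDENTIALS }}
```

Gating the deploy step behind passing tests on the `main` branch is the pipeline pattern the week's GitHub Actions material is about.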
Taught by
Noah Gift and Alfredo Deza
Related Courses
- Pinecone Vercel Starter Template and RAG - Live Code Review Part 2 (Pinecone via YouTube)
- Will LLMs Kill Search? The Future of Information Retrieval (Aleksa Gordić - The AI Epiphany via YouTube)
- RAG But Better: Rerankers with Cohere AI - Improving Retrieval Pipelines (James Briggs via YouTube)
- Advanced RAG - Contextual Compressors and Filters - Lecture 4 (Sam Witteveen via YouTube)
- LangChain Multi-Query Retriever for RAG - Advanced Technique for Broader Vector Space Search (James Briggs via YouTube)