Deploying GenAI Applications to Enterprises: Custom Evaluation Models and LLMOps Workflow - MLOps World
Offered By: MLOps World: Machine Learning in Production via YouTube
Course Description
Overview
Explore the challenges and solutions involved in deploying generative AI applications to enterprises in this 49-minute conference talk from MLOps World: Machine Learning in Production. Gain insights from Alexander Kvamme, CEO of Echo AI, and Arjun Bansal, CEO & Co-founder of Log10, as they share Echo AI's journey deploying a conversational intelligence platform to billion-dollar retail brands. Discover how they overcame LLM accuracy issues through iterative prompt engineering and collaborative workflows. Learn why an end-to-end LLMOps workflow was key to resolving accuracy problems and bringing enterprise customers into production. Understand the role of AI-powered assistance, such as Log10's Prompt Engineering Copilot, in systematically improving accuracy and handling increased customer demand. Delve into the infrastructure required to deploy conversational intelligence platforms at enterprise scale, including logging, tagging, debugging, prompt optimization, feedback, fine-tuning, and seamless integration with existing AI tech stacks and developer tooling.
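The workflow outlined above (logging calls, tagging failures, and iterating on prompts against a custom evaluation model) is only described at a high level in the talk listing. The sketch below is a minimal illustration of that general pattern, not code from Echo AI or Log10; the names (LLMCall, evaluate_and_tag, ACCURACY_THRESHOLD) and the pluggable judge callable standing in for a custom evaluation model are assumptions for illustration only.

```python
# Minimal sketch of an LLMOps-style evaluation loop: log every LLM call,
# score it with a custom evaluation model (here a pluggable callable),
# and tag low-scoring calls for prompt iteration.
# All names are illustrative, not APIs from Log10 or Echo AI.

from dataclasses import dataclass, field
from typing import Callable, List

ACCURACY_THRESHOLD = 0.8  # assumed cutoff for flagging a call for review


@dataclass
class LLMCall:
    """One logged prompt/completion pair plus evaluation metadata."""
    prompt: str
    completion: str
    score: float = 0.0
    tags: List[str] = field(default_factory=list)


def evaluate_and_tag(
    calls: List[LLMCall],
    judge: Callable[[str, str], float],
) -> List[LLMCall]:
    """Score each logged call with the judge and tag failures for prompt iteration."""
    flagged = []
    for call in calls:
        call.score = judge(call.prompt, call.completion)
        if call.score < ACCURACY_THRESHOLD:
            call.tags.append("needs-prompt-iteration")
            flagged.append(call)
    return flagged


if __name__ == "__main__":
    # Stand-in judge: a real system would call a fine-tuned evaluation model.
    keyword_judge = lambda prompt, completion: 1.0 if "refund" in completion else 0.5
    log = [
        LLMCall("Summarize the customer chat.", "Customer requested a refund."),
        LLMCall("Summarize the customer chat.", "Customer was unhappy."),
    ]
    for call in evaluate_and_tag(log, keyword_judge):
        print(f"flagged (score={call.score}): {call.completion}")
```

In a production setting the flagged calls would feed back into prompt optimization or fine-tuning, closing the loop the talk describes.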
Syllabus
What It Actually Takes to Deploy GenAI Applications to Enterprises: Custom Evaluation Models
Taught by
MLOps World: Machine Learning in Production
Related Courses
Google BARD and ChatGPT AI for Increased Productivity (Udemy)
Bringing LLM to the Enterprise - Training From Scratch or Just Fine-Tune With Cerebras-GPT (Prodramp via YouTube)
Generative AI and Long-Term Memory for LLMs (James Briggs via YouTube)
Extractive Q&A With Haystack and FastAPI in Python (James Briggs via YouTube)
OpenAssistant First Models Are Here! - Open-Source ChatGPT (Yannic Kilcher via YouTube)