Deploying GenAI Applications to Enterprises: Custom Evaluation Models and LLMOps Workflow - MLOps World
Offered By: MLOps World: Machine Learning in Production via YouTube
Course Description
Overview
Explore the challenges and solutions in deploying generative AI applications to enterprises through this 49-minute conference talk from MLOps World: Machine Learning in Production. Gain insights from Alexander Kvamme, CEO of Echo AI, and Arjun Bansal, CEO & Co-founder of Log10, as they share Echo AI's journey in deploying a conversational intelligence platform to billion-dollar retail brands. Discover how they overcame LLM accuracy issues through iterative prompt engineering and collaborative workflows. Learn about the importance of end-to-end LLMOps workflows in resolving accuracy problems and scaling enterprise customers to production. Understand the role of AI-powered assistance, such as Log10's Prompt Engineering Copilot, in systematically improving accuracy and handling increased customer demand. Delve into the infrastructure requirements for successfully deploying conversational intelligence platforms at enterprise scale, including logging, tagging, debugging, prompt optimization, feedback, fine-tuning, and seamless integration with existing AI tech stacks and developer tooling.
Syllabus
What It Actually Takes to Deploy GenAI Applications to Enterprises: Custom Evaluation Models
Taught by
MLOps World: Machine Learning in Production
Related Courses
Large Language Models: Application through Production - Databricks via edX
LLMOps - LLM Bootcamp - The Full Stack via YouTube
MLOps: Why DevOps Solutions Fall Short in the Machine Learning World - Linux Foundation via YouTube
Quick Wins Across the Enterprise with Responsible AI - Microsoft via YouTube
End-to-End AI App Development: Prompt Engineering to LLMOps - Microsoft via YouTube