Deploying GenAI Applications to Enterprises: Custom Evaluation Models and LLMOps Workflow
Offered By: MLOps World: Machine Learning in Production via YouTube
Course Description
Overview
Explore the challenges and solutions in deploying generative AI applications to enterprises through this 49-minute conference talk from MLOps World: Machine Learning in Production. Gain insights from Alexander Kvamme, CEO of Echo AI, and Arjun Bansal, CEO & Co-founder of Log10, as they share Echo AI's journey in deploying a conversational intelligence platform to billion-dollar retail brands. Discover how they overcame LLM accuracy issues through iterative prompt engineering and collaborative workflows. Learn about the importance of end-to-end LLMOps workflows in resolving accuracy problems and scaling enterprise customers to production. Understand the role of AI-powered assistance, such as Log10's Prompt Engineering Copilot, in systematically improving accuracy and handling increased customer demand. Delve into the infrastructure requirements for successfully deploying conversational intelligence platforms at enterprise scale, including logging, tagging, debugging, prompt optimization, feedback, fine-tuning, and seamless integration with existing AI tech stacks and developer tooling.
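To make the infrastructure requirements above more concrete, here is a minimal sketch of tagged LLM-call logging, the kind of record-keeping that prompt debugging, feedback capture, and later fine-tuning depend on. This is a hypothetical illustration, not Echo AI's or Log10's actual code: call_llm, logged_completion, the JSONL log path, and the record fields are all invented names and assumptions.

# Hypothetical sketch: tag and log every LLM call so accuracy issues can be
# traced, reviewed, and fed back into prompt iteration or fine-tuning.
import json
import time
import uuid
from typing import Callable

def call_llm(prompt: str) -> str:
    """Placeholder stand-in for whatever completion client the stack uses."""
    return "stubbed completion for: " + prompt

def logged_completion(prompt: str,
                      tags: list[str],
                      llm: Callable[[str], str] = call_llm,
                      log_path: str = "llm_calls.jsonl") -> str:
    """Run one completion and append a tagged record for later review."""
    start = time.time()
    completion = llm(prompt)
    record = {
        "id": str(uuid.uuid4()),
        "tags": tags,                      # e.g. customer, prompt version, task type
        "prompt": prompt,
        "completion": completion,
        "latency_s": round(time.time() - start, 3),
        "feedback": None,                  # filled in later by human or automated evals
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return completion

if __name__ == "__main__":
    logged_completion(
        "Summarize the customer conversation and flag churn risk.",
        tags=["retail-brand-demo", "prompt-v3", "conversation-summary"],
    )

In practice, the talk describes dedicated LLMOps tooling filling this role, layering prompt optimization, feedback, and fine-tuning on top of such logged and tagged calls rather than hand-rolled scripts.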
Syllabus
What It Actually Takes to Deploy GenAI Applications to Enterprises: Custom Evaluation Models
Taught by
MLOps World: Machine Learning in Production
Related Courses
Machine Learning Operations (MLOps): Getting Started (Google Cloud via Coursera)
Design and Implementation of Machine Learning Systems (Higher School of Economics via Coursera)
Demystifying Machine Learning Operations (MLOps) (Pluralsight)
Machine Learning Engineer with Microsoft Azure (Microsoft via Udacity)
Machine Learning Engineering for Production (MLOps) (DeepLearning.AI via Coursera)