All the Hard Stuff with LLMs in Product Development - MLOps Podcast #170
Offered By: MLOps.community via YouTube
Course Description
Overview
Explore the complexities of implementing Large Language Models (LLMs) in product development in this podcast episode featuring Phillip Carter. Gain insight into security challenges, collaborative defenses against prompt injection attacks, and the roles ML engineers and product managers play in successful LLM implementations. Learn how to identify leading indicators and measure ROI for impactful AI initiatives, and hear about Phillip's work on developer tooling, OpenTelemetry, and prompt engineering at Honeycomb. Topics include natural-language querying, function calls, error pattern analysis, prompt injection cycles, and the often undervalued role of user interface design in AI features. The episode also covers the cost and ROI of AI implementations, the balance between ML and product perspectives in model trade-offs, and how observability and iterative processes help improve LLMs in production.
Syllabus
Phillip's preferred coffee
Takeaways
Please like, share, and subscribe to our MLOps channels!
Phillip's background
Querying in natural language
Function calls
Pasting errors or traces
Error patterns
Honeycomb's improvement cycle
Prompt boxes rationale
Prompt injection cycles
Injection attempt
UI undervalued; charging for the AI feature
Cost and ROI
Bridging ML and product perspectives
AI model trade-offs
Query Assistant
Honeycomb is hiring!
Wrap up
Taught by
MLOps.community
Related Courses
Discover, Validate & Launch New Business Ideas with ChatGPTUdemy 150 Digital Marketing Growth Hacks for Businesses
Udemy AI: Executive Briefing
Pluralsight The Complete Digital Marketing Guide - 25 Courses in 1
Udemy Learn to build a voice assistant with Alexa
Udemy