
Incorporating LLMs in High-stakes Use Cases - Best Practices and Considerations

Offered By: MLOps.community via YouTube

Tags

Artificial Intelligence Courses, Machine Learning Courses, MLOps Courses, Prompt Engineering Courses, Model Evaluation Courses, Fine-Tuning Courses

Course Description

Overview

Explore best practices for incorporating Large Language Models (LLMs) in high-stakes applications through this insightful conference talk by Yada Pruksachatkun. Learn how to build applications with non-determinism in mind, consider human-in-the-loop systems, and address challenges in deploying LLMs for critical use cases. Discover techniques for breaking tasks into smaller components, utilizing in-context examples, and evaluating models rigorously. Gain valuable insights on when to use LLMs, the benefits of fine-tuning custom models, and open questions surrounding black-box LLMs in high-risk scenarios. Drawing from her experience as an NLP scientist and engineer, Pruksachatkun provides practical guidance for leveraging LLMs responsibly in production environments.
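The description's points about non-determinism and in-context examples can be made concrete with a small sketch. The Python snippet below is an illustration only, not code from the talk: it builds a few-shot prompt from labeled examples and wraps a hypothetical call_llm stand-in (replace it with your actual LLM client) in a sample-and-vote loop, so that a single non-deterministic completion never decides a high-stakes routing on its own.

from collections import Counter

def call_llm(prompt: str) -> str:
    # Hypothetical stand-in for a real LLM client call; swap in your
    # provider's SDK. Here it keys off the last message in the prompt so
    # the sketch runs end to end.
    last_message = prompt.split("Message:")[-1]
    return "escalate_to_human" if "hopeless" in last_message else "answer_directly"

# In-context (few-shot) examples: show the model the exact format and
# labels you expect before asking about the new input.
FEW_SHOT_EXAMPLES = [
    ("I can't sleep and I feel hopeless.", "escalate_to_human"),
    ("What are some breathing exercises?", "answer_directly"),
]

def build_prompt(user_message: str) -> str:
    lines = ["Classify each message as 'escalate_to_human' or 'answer_directly'.", ""]
    for text, label in FEW_SHOT_EXAMPLES:
        lines += [f"Message: {text}", f"Label: {label}", ""]
    lines += [f"Message: {user_message}", "Label:"]
    return "\n".join(lines)

def classify(user_message: str, n_samples: int = 3) -> str:
    # LLM outputs are non-deterministic: sample several times, take a
    # majority vote, and fall back to the safe action on disagreement.
    votes = Counter(call_llm(build_prompt(user_message)).strip() for _ in range(n_samples))
    label, count = votes.most_common(1)[0]
    return label if count > n_samples // 2 else "escalate_to_human"

if __name__ == "__main__":
    print(classify("I've been feeling hopeless lately."))  # escalate_to_human

The defaulting-to-escalation choice reflects the talk's human-in-the-loop theme: when the model's samples disagree, the safe path is to hand off rather than guess.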

Syllabus

Intro
Let's start with a therapy bot
Best practices from Dialogue Systems 1.0
Best Practice: Breaking up your tasks into smaller tasks (see the sketch after this syllabus)
Best Practice: In-context Examples
Best Practice: Don't use LLMs if you don't have to
Best Practice: Fine-tune your own LLM
Evaluate your model rigorously
Open Questions in using Black Box LLMs for high-stakes use cases
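As a rough companion to the "breaking up your tasks" and human-in-the-loop items above, the sketch below splits one chatbot turn into three narrow, separately testable steps. All function names, rules, and strings are assumptions made for illustration; they do not come from the talk.

def detect_risk(message: str) -> bool:
    # Step 1: a narrow risk classifier. In practice this could be a small
    # fine-tuned model rather than a general-purpose LLM.
    return any(phrase in message.lower() for phrase in ("hopeless", "hurt myself"))

def draft_reply(message: str) -> str:
    # Step 2: generate a candidate reply. In a real system this is the LLM
    # call; a canned reply keeps the sketch self-contained.
    return "Thanks for sharing. Could you tell me more about how that feels?"

def check_reply(reply: str) -> bool:
    # Step 3: a separate check on the draft, evaluated and monitored
    # independently of the generation step.
    return "diagnos" not in reply.lower()

def handle_turn(message: str) -> str:
    # Each step is small enough to evaluate rigorously on its own, and
    # high-risk inputs are routed to a human instead of the model.
    if detect_risk(message):
        return "HANDOFF_TO_HUMAN"
    reply = draft_reply(message)
    return reply if check_reply(reply) else "HANDOFF_TO_HUMAN"

if __name__ == "__main__":
    print(handle_turn("I feel hopeless today."))      # HANDOFF_TO_HUMAN
    print(handle_turn("Any tips for better sleep?"))  # drafted reply

Decomposing the turn this way also makes the "evaluate your model rigorously" step tractable: each small component can be tested and monitored against its own metric instead of judging one opaque end-to-end response.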


Taught by

MLOps.community

Related Courses

TensorFlow: Working with NLP
LinkedIn Learning
Introduction to Video Editing - Video Editing Tutorials
Great Learning via YouTube
HuggingFace Crash Course - Sentiment Analysis, Model Hub, Fine Tuning
Python Engineer via YouTube
GPT3 and Finetuning the Core Objective Functions - A Deep Dive
David Shapiro ~ AI via YouTube
How to Build a Q&A AI in Python - Open-Domain Question-Answering
James Briggs via YouTube