YoVDO

Evaluating LLMs for AI Risk - Techniques for Red Teaming Generative AI

Offered By: MLOps.community via YouTube

Tags

AI Governance Courses MLOps Courses Prompt Engineering Courses AI Ethics Courses Model Evaluation Courses Security Testing Courses

Course Description

Overview

Save Big on Coursera Plus. 7,000+ courses at $160 off. Limited Time Only!
Explore cutting-edge techniques for evaluating and stress-testing Large Language Models (LLMs) in this lightning talk from the LLMs in Production Conference III. Learn about a comprehensive framework for managing AI risk throughout the model lifecycle, from data collection to production deployment. Discover methods for red-teaming generative AI systems and building validation engines that algorithmically probe models for security, ethics, and safety issues. Gain insights into failure modes, automation strategies, and specific testing approaches such as prompt injection attacks, prompt extraction, data transformation, and model alignment tests. Equip yourself with the knowledge to effectively assess and mitigate potential risks associated with LLMs in production environments.

Syllabus

Introduction
Why Red teaming
What is Red teaming
What to test
Failure modes
Automation
Red teaming
Prompt injection attack
Prompt extraction
Data transformation
Model alignment test
Summary


Taught by

MLOps.community

Related Courses

Macroeconometric Forecasting
International Monetary Fund via edX
Machine Learning With Big Data
University of California, San Diego via Coursera
Data Science at Scale - Capstone Project
University of Washington via Coursera
Structural Equation Model and its Applications | 结构方程模型及其应用 (粤语)
The Chinese University of Hong Kong via Coursera
Data Science in Action - Building a Predictive Churn Model
SAP Learning