Evaluating LLMs for AI Risk - Techniques for Red Teaming Generative AI
Offered By: MLOps.community via YouTube
Course Description
Overview
Explore cutting-edge techniques for evaluating and stress-testing Large Language Models (LLMs) in this lightning talk from the LLMs in Production Conference III. Learn about a comprehensive framework for managing AI risk throughout the model lifecycle, from data collection to production deployment. Discover methods for red-teaming generative AI systems and building validation engines that algorithmically probe models for security, ethics, and safety issues. Gain insights into failure modes, automation strategies, and specific testing approaches such as prompt injection attacks, prompt extraction, data transformation, and model alignment tests. Equip yourself with the knowledge to effectively assess and mitigate potential risks associated with LLMs in production environments.
Syllabus
Introduction
Why Red teaming
What is Red teaming
What to test
Failure modes
Automation
Red teaming
Prompt injection attack
Prompt extraction
Data transformation
Model alignment test
Summary
Taught by
MLOps.community
Related Courses
Knowledge-Based AI: Cognitive SystemsGeorgia Institute of Technology via Udacity AI for Everyone: Master the Basics
IBM via edX Introducción a La Inteligencia Artificial (IA)
IBM via Coursera AI for Legal Professionals (I): Law and Policy
National Chiao Tung University via FutureLearn Artificial Intelligence Ethics in Action
LearnQuest via Coursera