Red Teaming of LLM Applications - From Prototype to Production
Offered By: Databricks via YouTube
Course Description
Overview
Explore techniques for detecting and identifying vulnerabilities in LLM applications in this 40-minute breakout session. Learn about the challenges of putting LLM applications into production, including hallucinations, discriminatory behavior, and prompt injection attacks. Discover the concepts of LLM app vulnerabilities and the red-teaming process. Dive deep into automated detection techniques and benchmarking methods for GenAI systems. Gain a better understanding of automated safety and security assessments tailored to LLM applications. Led by Corey Abshire, Senior AI Specialist Solutions Architect at Databricks, this talk aims to transform the journey of LLM deployment into a secure and confident stride towards innovation. Access additional resources like the LLM Compact Guide and Big Book of MLOps for further exploration.
Syllabus
Red Teaming of LLM Applications: Going from Prototype to Production
Taught by
Databricks
Related Courses
AI CTF Solutions - DEFCon31 Hackathon and Kaggle CompetitionRob Mulla via YouTube Indirect Prompt Injections in the Wild - Real World Exploits and Mitigations
Ekoparty Security Conference via YouTube Hacking Neural Networks - Introduction and Current Techniques
media.ccc.de via YouTube The Curious Case of the Rogue SOAR - Vulnerabilities and Exploits in Security Automation
nullcon via YouTube Mastering Large Language Model Evaluations - Techniques for Ensuring Generative AI Reliability
Data Science Dojo via YouTube