AI Safety, Security, and Play - Episode 137
Offered By: DevSecCon via YouTube
Course Description
Overview
Explore AI safety and security in this 52-minute DevSecCon episode featuring David Haber, co-founder of Lakera.ai and creator of Gandalf. Dive into prompt injections, AI behavior, risks, and vulnerabilities. Learn about data poisoning, protections, and the motivation behind Gandalf's creation. Examine two approaches to informing Large Language Models (LLMs) about sensitive data, discussing their pros and cons. Gain insights into LLM security and the importance of considering model-specific knowledge. Discover expert perspectives on AI safety, security, and play from David Haber in this informative discussion.
Syllabus
Ep. #137, AI Safety, Security, and Play
Taught by
DevSecCon
Related Courses
AI CTF Solutions - DEFCon31 Hackathon and Kaggle CompetitionRob Mulla via YouTube Indirect Prompt Injections in the Wild - Real World Exploits and Mitigations
Ekoparty Security Conference via YouTube Hacking Neural Networks - Introduction and Current Techniques
media.ccc.de via YouTube The Curious Case of the Rogue SOAR - Vulnerabilities and Exploits in Security Automation
nullcon via YouTube Mastering Large Language Model Evaluations - Techniques for Ensuring Generative AI Reliability
Data Science Dojo via YouTube