YoVDO

AI Safety, Security, and Play - Episode 137

Offered By: DevSecCon via YouTube

Tags

Artificial Intelligence Courses Data Poisoning Courses Prompt Injection Courses

Course Description

Overview

Save Big on Coursera Plus. 7,000+ courses at $160 off. Limited Time Only!
Explore AI safety and security in this 52-minute DevSecCon episode featuring David Haber, co-founder of Lakera.ai and creator of Gandalf. Dive into prompt injections, AI behavior, risks, and vulnerabilities. Learn about data poisoning, protections, and the motivation behind Gandalf's creation. Examine two approaches to informing Large Language Models (LLMs) about sensitive data, discussing their pros and cons. Gain insights into LLM security and the importance of considering model-specific knowledge. Discover expert perspectives on AI safety, security, and play from David Haber in this informative discussion.

Syllabus

Ep. #137, AI Safety, Security, and Play


Taught by

DevSecCon

Related Courses

AI CTF Solutions - DEFCon31 Hackathon and Kaggle Competition
Rob Mulla via YouTube
Indirect Prompt Injections in the Wild - Real World Exploits and Mitigations
Ekoparty Security Conference via YouTube
Hacking Neural Networks - Introduction and Current Techniques
media.ccc.de via YouTube
The Curious Case of the Rogue SOAR - Vulnerabilities and Exploits in Security Automation
nullcon via YouTube
Mastering Large Language Model Evaluations - Techniques for Ensuring Generative AI Reliability
Data Science Dojo via YouTube