Mitigating Prompt Injection and Prompt Hacking
Offered By: LinkedIn Learning
Course Description
Overview
Understand and learn to mitigate the issues and problems with prompt hacking, a technique used by malicious hackers to manipulate large language models.
Syllabus
Introduction
- What is prompt hacking?
- Prompt hacking techniques
- Mitigating prompt hacking attacks
Taught by
Ray Villalobos
Related Courses
AI CTF Solutions - DEFCon31 Hackathon and Kaggle CompetitionRob Mulla via YouTube Indirect Prompt Injections in the Wild - Real World Exploits and Mitigations
Ekoparty Security Conference via YouTube Hacking Neural Networks - Introduction and Current Techniques
media.ccc.de via YouTube The Curious Case of the Rogue SOAR - Vulnerabilities and Exploits in Security Automation
nullcon via YouTube Mastering Large Language Model Evaluations - Techniques for Ensuring Generative AI Reliability
Data Science Dojo via YouTube