Prompt Injection: When Hackers Befriend Your AI
Offered By: NDC Conferences via YouTube
Course Description
Overview
Explore a technical presentation from NDC Security in Oslo that delves into attacks on Large Language Model (LLM) implementations used in chatbots, sentiment analysis, and similar applications. Learn about the serious prompt injection vulnerabilities that can be exploited by adversaries to weaponize AI against users. Examine how prompt injection attacks occur, why they are effective, and the variations such as direct and indirect injections. Discover potential solutions for mitigating these risks and understand the process of "jailbreaking" LLMs to bypass their alignment and produce dangerous content. Gain insights into the importance of taking security seriously when considering the use of AI for sensitive operations, as LLM usage is expected to increase dramatically in the coming years.
Syllabus
Prompt Injection: When Hackers Befriend Your AI - Vetle Hjelle - NDC Security 2024
Taught by
NDC Conferences
Related Courses
AI CTF Solutions - DEFCon31 Hackathon and Kaggle CompetitionRob Mulla via YouTube Indirect Prompt Injections in the Wild - Real World Exploits and Mitigations
Ekoparty Security Conference via YouTube Hacking Neural Networks - Introduction and Current Techniques
media.ccc.de via YouTube The Curious Case of the Rogue SOAR - Vulnerabilities and Exploits in Security Automation
nullcon via YouTube Mastering Large Language Model Evaluations - Techniques for Ensuring Generative AI Reliability
Data Science Dojo via YouTube