Prompt Injection: When Hackers Befriend Your AI
Offered By: NDC Conferences via YouTube
Course Description
Overview
Explore a technical presentation from NDC Security in Oslo that delves into attacks on Large Language Model (LLM) implementations used in chatbots, sentiment analysis, and similar applications. Learn about the serious prompt injection vulnerabilities that can be exploited by adversaries to weaponize AI against users. Examine how prompt injection attacks occur, why they are effective, and the variations such as direct and indirect injections. Discover potential solutions for mitigating these risks and understand the process of "jailbreaking" LLMs to bypass their alignment and produce dangerous content. Gain insights into the importance of taking security seriously when considering the use of AI for sensitive operations, as LLM usage is expected to increase dramatically in the coming years.
Syllabus
Prompt Injection: When Hackers Befriend Your AI - Vetle Hjelle - NDC Security 2024
Taught by
NDC Conferences
Related Courses
Computer SecurityStanford University via Coursera Cryptography II
Stanford University via Coursera Malicious Software and its Underground Economy: Two Sides to Every Story
University of London International Programmes via Coursera Building an Information Risk Management Toolkit
University of Washington via Coursera Introduction to Cybersecurity
National Cybersecurity Institute at Excelsior College via Canvas Network