YoVDO

Prompt Injection: When Hackers Befriend Your AI

Offered By: NDC Conferences via YouTube

Tags

Cybersecurity Courses Chatbot Courses Sentiment Analysis Courses AI Ethics Courses Jailbreaking Courses Prompt Injection Courses

Course Description

Overview

Save Big on Coursera Plus. 7,000+ courses at $160 off. Limited Time Only!
Explore a technical presentation from NDC Security in Oslo that delves into attacks on Large Language Model (LLM) implementations used in chatbots, sentiment analysis, and similar applications. Learn about the serious prompt injection vulnerabilities that can be exploited by adversaries to weaponize AI against users. Examine how prompt injection attacks occur, why they are effective, and the variations such as direct and indirect injections. Discover potential solutions for mitigating these risks and understand the process of "jailbreaking" LLMs to bypass their alignment and produce dangerous content. Gain insights into the importance of taking security seriously when considering the use of AI for sensitive operations, as LLM usage is expected to increase dramatically in the coming years.

Syllabus

Prompt Injection: When Hackers Befriend Your AI - Vetle Hjelle - NDC Security 2024


Taught by

NDC Conferences

Related Courses

Ethical Hacking: Mobile Devices and Platforms
LinkedIn Learning
CNIT 128: Hacking Mobile Devices
CNIT - City College of San Francisco via Independent
Jailbreaking the AppleTV3 - Tales From A Full Stack Hack
nullcon via YouTube
How to Influence Security Technology in Kiwi Underpants
YouTube
Machswap - Stephen Parkinson
White Hat Cal Poly via YouTube