Red Teaming Large Language Models
Offered By: NDC Conferences via YouTube
Course Description
Overview
Explore the critical practice of red teaming large language models in this comprehensive conference talk from NDC Security 2024. Delve into the security and ethical challenges posed by the integration of machine learning models, particularly large language models (LLMs), into digital infrastructure. Gain valuable insights into the vulnerabilities of LLMs, including their potential to generate harmful content, leak confidential data, and cause security breaches. Differentiate between structured red team exercises and isolated adversarial attacks, such as model jailbreaks, through case studies and practical examples. Learn about the types of vulnerabilities that red teaming can uncover in LLMs and discover potential strategies for mitigating these risks. Equip yourself with the knowledge necessary to evaluate the security and ethical implications of deploying Large Language Models in organizational settings.
Syllabus
Red Teaming Large Language Models - Armin Buescher - NDC Security 2024
Taught by
NDC Conferences
Related Courses
Machine Learning and Artificial Intelligence Security Risk: Categorizing Attacks and Failure ModesLinkedIn Learning How Apple Scans Your Phone and How to Evade It - NeuralHash CSAM Detection Algorithm Explained
Yannic Kilcher via YouTube Deep Learning New Frontiers
Alexander Amini via YouTube Deep Learning New Frontiers
Alexander Amini via YouTube MIT 6.S191 - Deep Learning Limitations and New Frontiers
Alexander Amini via YouTube