YoVDO

Red Teaming Large Language Models

Offered By: NDC Conferences via YouTube

Tags

Cybersecurity Courses Risk Mitigation Courses Data Privacy Courses Ethical AI Courses Machine Learning Security Courses Adversarial Attacks Courses

Course Description

Overview

Save Big on Coursera Plus. 7,000+ courses at $160 off. Limited Time Only!
Explore the critical practice of red teaming large language models in this comprehensive conference talk from NDC Security 2024. Delve into the security and ethical challenges posed by the integration of machine learning models, particularly large language models (LLMs), into digital infrastructure. Gain valuable insights into the vulnerabilities of LLMs, including their potential to generate harmful content, leak confidential data, and cause security breaches. Differentiate between structured red team exercises and isolated adversarial attacks, such as model jailbreaks, through case studies and practical examples. Learn about the types of vulnerabilities that red teaming can uncover in LLMs and discover potential strategies for mitigating these risks. Equip yourself with the knowledge necessary to evaluate the security and ethical implications of deploying Large Language Models in organizational settings.

Syllabus

Red Teaming Large Language Models - Armin Buescher - NDC Security 2024


Taught by

NDC Conferences

Related Courses

Artificial Intelligence Algorithms Models and Limitations
LearnQuest via Coursera
Artificial Intelligence Data Fairness and Bias
LearnQuest via Coursera
Towards an Ethical Digital Society: From Theory to Practice
NPTEL via Swayam
Human Factors in AI
Duke University via Coursera
Identify principles and practices for responsible AI
Microsoft via Microsoft Learn