YoVDO

Privacy and Security of Large Language Models - Risks and Mitigation

Offered By: Toronto Machine Learning Series (TMLS) via YouTube

Tags

Privacy Courses Artificial Intelligence Courses Adversarial Attacks Courses

Course Description

Overview

Save Big on Coursera Plus. 7,000+ courses at $160 off. Limited Time Only!
Explore the critical security and privacy challenges associated with large language models (LLMs) in this 28-minute conference talk from the Toronto Machine Learning Series. Delve into the potential risks of LLMs, including sensitive information leaks, unsafe code generation, and vulnerability to adversarial attacks such as PromptInject and differentiable language model attacks. Gain insights into existing and proposed solutions for mitigating these threats in both code and natural language applications. Examine the ethical and legal implications of LLM usage and discover potential avenues for future research and development in this field. Presented by Dr. Ehsan Amjadian, Head of Data Science at RBC, this talk offers a comprehensive overview of the complex landscape surrounding LLM security and privacy.

Syllabus

Privacy & Security of Large Language Models, Risks and Mitigation


Taught by

Toronto Machine Learning Series (TMLS)

Related Courses

Machine Learning and Artificial Intelligence Security Risk: Categorizing Attacks and Failure Modes
LinkedIn Learning
How Apple Scans Your Phone and How to Evade It - NeuralHash CSAM Detection Algorithm Explained
Yannic Kilcher via YouTube
Deep Learning New Frontiers
Alexander Amini via YouTube
Deep Learning New Frontiers
Alexander Amini via YouTube
MIT 6.S191 - Deep Learning Limitations and New Frontiers
Alexander Amini via YouTube