Privacy and Security of Large Language Models - Risks and Mitigation
Offered By: Toronto Machine Learning Series (TMLS) via YouTube
Course Description
Overview
Explore the critical security and privacy challenges associated with large language models (LLMs) in this 28-minute conference talk from the Toronto Machine Learning Series. Delve into the potential risks of LLMs, including sensitive information leaks, unsafe code generation, and vulnerability to adversarial attacks such as PromptInject and differentiable language model attacks. Gain insights into existing and proposed solutions for mitigating these threats in both code and natural language applications. Examine the ethical and legal implications of LLM usage and discover potential avenues for future research and development in this field. Presented by Dr. Ehsan Amjadian, Head of Data Science at RBC, this talk offers a comprehensive overview of the complex landscape surrounding LLM security and privacy.
Syllabus
Privacy & Security of Large Language Models, Risks and Mitigation
Taught by
Toronto Machine Learning Series (TMLS)
Related Courses
Introduction to Artificial IntelligenceStanford University via Udacity Probabilistic Graphical Models 1: Representation
Stanford University via Coursera Artificial Intelligence for Robotics
Stanford University via Udacity Computer Vision: The Fundamentals
University of California, Berkeley via Coursera Learning from Data (Introductory Machine Learning course)
California Institute of Technology via Independent