The Hidden Dangers of Loading Open-Source AI Models
Offered By: Yannic Kilcher via YouTube
Course Description
Overview
Explore the hidden dangers of loading open-source AI models in this eye-opening video. Discover how the seemingly innocent act of loading a model can execute arbitrary code on your machine. Delve into how Hugging Face models are loaded, the connection between PyTorch and pickle, and the inner workings of pickle serialization. See how arbitrary code execution is achieved and examine the final exploit code. Gain valuable insights on protecting yourself from the security risks associated with open-source AI models. This presentation covers essential topics for AI practitioners and enthusiasts, including model loading, data serialization, and cybersecurity best practices in the context of artificial intelligence.
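The core risk the video describes comes from Python's pickle protocol: a pickled object can name any callable for pickle to run at load time via `__reduce__`, and PyTorch checkpoints are pickle-based. A minimal, harmless sketch of the idea (the `Payload` class name is illustrative, and a real attack would invoke something like `os.system` instead of `eval`):

```python
import pickle

class Payload:
    # __reduce__ tells pickle how to rebuild this object:
    # return a callable plus its arguments. The callable runs
    # at *load* time, on the victim's machine.
    def __reduce__(self):
        return (eval, ("6 * 7",))  # harmless stand-in for attacker code

blob = pickle.dumps(Payload())

# The "victim" merely loads the data, yet eval() executes here.
print(pickle.loads(blob))  # -> 42
```

Note that the malicious class never needs to exist on the victim's side; the pickle stream itself records which callable to invoke.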
Syllabus
- Introduction
- Sponsor: Weights & Biases
- How Hugging Face models are loaded
- From PyTorch to pickle
- Understanding how pickle saves data
- Executing arbitrary code
- The final code
- How can you protect yourself?
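On the last point, the production-grade answers are loading weights with `torch.load(..., weights_only=True)` or preferring the safetensors format, which contains no executable payloads. The same principle can be sketched with the standard library alone by restricting which globals an unpickler may resolve; the allow-list below is an illustrative sketch under that assumption, not a complete defense:

```python
import io
import pickle
import os

class RestrictedUnpickler(pickle.Unpickler):
    # Only globals on this allow-list may be resolved; everything
    # else is rejected before it can run.
    ALLOWED = {("collections", "OrderedDict")}

    def find_class(self, module, name):
        if (module, name) in self.ALLOWED:
            return super().find_class(module, name)
        raise pickle.UnpicklingError(f"blocked global {module}.{name}")

def safe_loads(data: bytes):
    return RestrictedUnpickler(io.BytesIO(data)).load()

# A plain dict of weight-like data loads fine...
ok = safe_loads(pickle.dumps({"layer.weight": [0.1, 0.2]}))

# ...but a pickle that smuggles in os.system is rejected.
class Evil:
    def __reduce__(self):
        return (os.system, ("echo pwned",))

try:
    safe_loads(pickle.dumps(Evil()))
except pickle.UnpicklingError as e:
    print("blocked:", e)
```

Even so, an allow-list is only as safe as its entries; avoiding pickle entirely for untrusted models remains the safer choice.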
Taught by
Yannic Kilcher
Related Courses
- Computer Security (Stanford University via Coursera)
- Cryptography II (Stanford University via Coursera)
- Malicious Software and its Underground Economy: Two Sides to Every Story (University of London International Programmes via Coursera)
- Building an Information Risk Management Toolkit (University of Washington via Coursera)
- Introduction to Cybersecurity (National Cybersecurity Institute at Excelsior College via Canvas Network)