Unveiling Hidden Backdoors in Manifold Distribution Gaps
Offered By: BIMSA via YouTube
Course Description
Overview
Explore the critical security concern of backdoor attacks on deep neural networks in this 55-minute conference talk from ICBS2024. Delve into an approach that separates a classification model into two components: a manifold embedding and a classifier. Discover how mode-mixture features within manifold distribution gaps can be exploited as backdoors that extend the decision boundary of a target class. Learn about a universal backdoor attack framework applicable across data modalities, offering high explainability and stealthiness. Examine the method's effectiveness on high-dimensional natural datasets and gain insight into the resulting vulnerabilities of classification models.
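To make the core idea concrete, here is a minimal toy sketch (not the speaker's actual method) of how a point in the low-density gap between two class modes in latent space can serve as a hidden trigger: clean inputs near either mode are classified normally, while a mode-mixture point activates the backdoor by locally extending the target class's decision region. All names, values, and the nearest-mode decision rule are illustrative assumptions.

```python
import numpy as np

# Toy latent "modes" for two classes under a 2-D manifold embedding.
# Illustrative values only, not from the talk.
mode_a = np.array([0.0, 0.0])   # class 0 mode
mode_b = np.array([4.0, 0.0])   # class 1 mode

def clean_classifier(z):
    """Ordinary decision rule: assign the nearest class mode."""
    return int(np.linalg.norm(z - mode_b) < np.linalg.norm(z - mode_a))

def backdoored_classifier(z, trigger, radius=0.5, target=1):
    """Extend the target class's decision region to cover a small
    ball around a mode-mixture point in the distribution gap."""
    if np.linalg.norm(z - trigger) < radius:
        return target
    return clean_classifier(z)

# A mode-mixture sample: a convex combination of the two modes,
# lying in the low-density gap between them.
trigger = 0.5 * mode_a + 0.5 * mode_b   # [2.0, 0.0]

# Clean inputs behave identically under both models...
assert clean_classifier(mode_a) == backdoored_classifier(mode_a, trigger) == 0
assert clean_classifier(mode_b) == backdoored_classifier(mode_b, trigger) == 1
# ...but the gap point flips only the backdoored model to the target class.
print(clean_classifier(trigger), backdoored_classifier(trigger, trigger))
```

Because the trigger lives in a region the clean data distribution rarely occupies, accuracy on normal inputs is unaffected, which is what makes such backdoors stealthy.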
Syllabus
Min Zhang: Unveiling Hidden Backdoors in Manifold Distribution Gaps #ICBS2024
Taught by
BIMSA
Related Courses
Classification Models (Udacity)
Predictive Modeling and Machine Learning with MATLAB (MathWorks via Coursera)
Predictive Analytics for Business (Tableau via Udacity)
Explainable Machine Learning with LIME and H2O in R (Coursera Project Network via Coursera)
Automated Machine Learning in Power BI: Classification (Coursera Project Network via Coursera)