LEMNA - Explaining Deep Learning Based Security Applications

Offered By: Association for Computing Machinery (ACM) via YouTube

Tags

Model Interpretability Courses
Deep Learning Courses
Regression Models Courses

Course Description

Overview

Explore a 21-minute conference talk that delves into LEMNA, a novel approach for explaining deep learning-based security applications. Learn about the challenges of opaque deep learning models in security-critical domains and the limitations of existing explanation techniques. Discover how LEMNA addresses these issues by supporting locally non-linear decision boundaries and modeling feature dependency. Gain insights into deriving explanations from deep neural networks, evaluating explanation accuracy, and practical applications such as identifying binary function starts. Understand how LEMNA contributes to building trust in target models and aids in troubleshooting and patching model errors, ultimately enhancing the transparency and reliability of deep learning in security contexts.
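To make the core idea concrete, below is a minimal, hypothetical sketch of a LEMNA-style local explanation in Python (illustrative only, not the authors' implementation). It samples a neighborhood around one input, queries a black-box model, fits a mixture of linear regressions by EM so the surrogate can follow a locally non-linear decision boundary, and reads feature importances off the component that best fits the input. The fused lasso penalty LEMNA uses to model feature dependency is replaced here by plain ridge regularization for brevity; `black_box` and `lemna_style_explain` are made-up names for this sketch.

```python
# Hypothetical, simplified sketch of a LEMNA-style local explanation
# (illustrative only, not the authors' implementation).
import numpy as np

rng = np.random.default_rng(0)

def black_box(X):
    """Stand-in for an opaque deep model: a locally non-linear scoring function."""
    return np.tanh(X[:, 0] * X[:, 1]) + 0.5 * X[:, 2]

def lemna_style_explain(x0, predict_fn, n_samples=2000, K=3, n_iter=50, ridge=1e-2):
    d = x0.shape[0]
    # 1. Sample perturbations in a local neighborhood of the input x0.
    X = x0 + 0.3 * rng.normal(size=(n_samples, d))
    y = predict_fn(X)

    # 2. Fit a mixture of K linear regressions with EM, so the surrogate can
    #    approximate a locally non-linear decision boundary.
    Xb = np.hstack([X, np.ones((n_samples, 1))])   # add a bias column
    W = rng.normal(scale=0.1, size=(K, d + 1))     # per-component weights
    pi = np.full(K, 1.0 / K)                       # mixing proportions
    sigma2 = np.ones(K)                            # per-component noise variance

    for _ in range(n_iter):
        # E-step: responsibility of each component for each perturbed sample.
        resid = y[:, None] - Xb @ W.T
        log_p = (np.log(pi) - 0.5 * np.log(2 * np.pi * sigma2)
                 - 0.5 * resid ** 2 / sigma2)
        log_p -= log_p.max(axis=1, keepdims=True)
        R = np.exp(log_p)
        R /= R.sum(axis=1, keepdims=True)

        # M-step: weighted ridge regression per component. (LEMNA itself uses a
        # fused lasso penalty here to model feature dependency; ridge is a
        # simplification for this sketch.)
        for k in range(K):
            r = R[:, k]
            A = Xb.T @ (r[:, None] * Xb) + ridge * np.eye(d + 1)
            b = Xb.T @ (r * y)
            W[k] = np.linalg.solve(A, b)
            sigma2[k] = max((r * (y - Xb @ W[k]) ** 2).sum() / (r.sum() + 1e-12), 1e-6)
        pi = R.mean(axis=0)

    # 3. Use the weights of the component most responsible for x0 as the
    #    local feature-importance explanation.
    y0 = predict_fn(x0[None, :])[0]
    resid0 = y0 - W @ np.append(x0, 1.0)
    score = np.log(pi) - 0.5 * np.log(2 * np.pi * sigma2) - 0.5 * resid0 ** 2 / sigma2
    return W[int(np.argmax(score)), :d]

x0 = np.array([0.5, -1.0, 2.0])
print("local feature importance:", lemna_style_explain(x0, black_box))
```

Reading the explanation from a single best-fitting component mirrors the idea of approximating only the local region around the input; swapping the ridge term for a fused lasso would additionally encourage adjacent features (for example, consecutive bytes or tokens in a sequence) to receive similar weights, which is how the talk describes modeling feature dependency.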

Syllabus

Intro
The Concerns of Opaque Deep Learning Models
Existing Explanation Techniques & Limitations
One Example of Model Explanation (LIME, KDD'16)
Limitation of Existing Explanation Techniques
LEMNA: Local Explanation Method using Nonlinear Approximation
Supporting Locally Non-linear Decision Boundaries
Modeling the Feature Dependency: Mixture regression model with fused lasso
Deriving an Explanation from DNN with LEMNA
Explanation Accuracy Evaluation
Demonstration of LEMNA in Identifying Binary Function Start
Building Trust in the Target Models
Troubleshooting and Patching Model Errors


Taught by

Association for Computing Machinery (ACM)

Related Courses

Aprendizaje automático con Python y Azure Notebooks
Coursera Project Network via Coursera
Big Data Solutions for Social and Economic Disparities
Harvard University via edX
Build a Regression Model using PyCaret
Coursera Project Network via Coursera
Build Regression, Classification, and Clustering Models
CertNexus via Coursera
Interpretable Machine Learning
Duke University via Coursera