The Curse of Class Imbalance and Conflicting Metrics with Machine Learning for Side-channel Evaluation
Offered By: TheIACR via YouTube
Course Description
Overview
Explore the challenges of class imbalance and conflicting metrics in machine learning for side-channel evaluation in this 21-minute conference talk presented at the Cryptographic Hardware and Embedded Systems Conference (CHES) 2019. Delve into why Hamming Weight (HW) labels are used and how the resulting imbalanced data affects machine learning models. Learn about data sampling techniques, including random undersampling and random oversampling with replacement. Examine experimental results from two datasets and understand the implications of different evaluation metrics, particularly the relationship between Success Rate/Guessing Entropy (SR/GE) and accuracy. Gain practical takeaways for improving machine learning approaches in side-channel analysis.
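The imbalance discussed in the talk arises because the Hamming weight of a uniform byte is binomially distributed: HW 4 occurs for 70 of 256 values (about 27%), while HW 0 and HW 8 each occur once. The two sampling remedies named above can be sketched in plain Python; the function names and structure here are illustrative, not taken from the talk.

```python
import random
from collections import Counter

def random_undersample(X, y, seed=0):
    """Drop random samples from larger classes until every class
    matches the minority-class count (sketch, not the talk's code)."""
    rng = random.Random(seed)
    by_class = {}
    for xi, yi in zip(X, y):
        by_class.setdefault(yi, []).append(xi)
    n_min = min(len(v) for v in by_class.values())
    Xb, yb = [], []
    for label, samples in by_class.items():
        for xi in rng.sample(samples, n_min):
            Xb.append(xi)
            yb.append(label)
    return Xb, yb

def random_oversample(X, y, seed=0):
    """Duplicate random samples (with replacement) from smaller classes
    until every class matches the majority-class count."""
    rng = random.Random(seed)
    by_class = {}
    for xi, yi in zip(X, y):
        by_class.setdefault(yi, []).append(xi)
    n_max = max(len(v) for v in by_class.values())
    Xb, yb = [], []
    for label, samples in by_class.items():
        for _ in range(n_max):
            Xb.append(rng.choice(samples))
            yb.append(label)
    return Xb, yb
```

Undersampling discards majority-class traces (losing information), while oversampling with replacement repeats minority-class traces (risking overfitting); the talk's experiments weigh these trade-offs.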
Syllabus
Intro
Big Picture
Labels
Why do we use HW?
Why do we care about imbalanced data?
What to do?
Random under sampling
Random oversampling with replacement
Experiments
Dataset 1
Dataset 2
Data sampling techniques
Further results
Evaluation metrics
SR/GE vs acc
Take away
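The SR/GE-vs-accuracy comparison in the syllabus rests on two standard side-channel metrics: success rate (fraction of attacks where the correct key is ranked first) and guessing entropy (average rank of the correct key). A minimal sketch of how they are computed from per-trace model scores, assuming log-probability outputs (the helper names are illustrative):

```python
import numpy as np

def key_rank(log_probs, correct_key):
    """Rank of the correct key after summing per-trace log-likelihoods.
    log_probs: (n_traces, n_keys) array of model log-probabilities.
    Rank 0 means the attack succeeded. Illustrative sketch only."""
    scores = log_probs.sum(axis=0)     # accumulate evidence over traces
    order = np.argsort(scores)[::-1]   # best-scoring key candidate first
    return int(np.where(order == correct_key)[0][0])

def success_rate(ranks):
    """Fraction of independent attacks where the correct key ranks first."""
    return float(np.mean([r == 0 for r in ranks]))

def guessing_entropy(ranks):
    """Average rank of the correct key over independent attacks."""
    return float(np.mean(ranks))
```

Because SR/GE aggregate evidence over many traces while accuracy scores each trace independently, a classifier with low per-trace accuracy can still yield a strong attack, which is the conflict the talk highlights.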
Taught by
TheIACR
Related Courses
AI Workflow: Feature Engineering and Bias Detection - IBM via Coursera
Language Classification with Naive Bayes in Python - Coursera Project Network via Coursera
Machine Learning in Production - DeepLearning.AI via Coursera
Machine Learning - YouTube
Simple Training with the Transformers Trainer - HuggingFace via YouTube