CMU Neural Nets for NLP: Model Interpretation

Offered By: Graham Neubig via YouTube

Tags

Neural Networks Courses
Natural Language Processing (NLP) Courses
Interpretability Courses
Sentence Embedding Courses

Course Description

Overview

Explore model interpretation in neural networks for natural language processing through this lecture from CMU's CS 11-747 course. Delve into why interpretability matters and how it is defined, examining two broad themes in the field. Investigate source syntax in neural machine translation and discover why neural translations come out at the right length. Analyze sentence embeddings in depth, including probing techniques and their limitations. Learn about Minimum Description Length (MDL) probes and how to evaluate them. Examine explanation techniques such as gradient-based importance scores and extractive rationale generation. Gain insight into the inner workings of neural models for NLP applications.
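To make the "gradient-based importance scores" topic concrete, here is a minimal sketch (not taken from the lecture) of token-level saliency in PyTorch: the gradient of the predicted class score with respect to each token's embedding is used as a rough importance measure. The toy vocabulary, embedding size, and mean-pooling classifier are placeholder assumptions for illustration only.

    # Minimal gradient-based saliency sketch (toy model; all names are placeholders).
    import torch
    import torch.nn as nn

    torch.manual_seed(0)

    vocab = {"<pad>": 0, "the": 1, "movie": 2, "was": 3, "great": 4, "boring": 5}
    embed = nn.Embedding(len(vocab), 16)      # toy embedding table
    classifier = nn.Linear(16, 2)             # toy 2-class classification head

    tokens = ["the", "movie", "was", "great"]
    ids = torch.tensor([[vocab[t] for t in tokens]])

    # Embed the tokens and keep the embedding activations in the autograd graph
    # so the class score can be differentiated with respect to them.
    emb = embed(ids)                          # shape: (1, seq_len, 16)
    emb.retain_grad()
    logits = classifier(emb.mean(dim=1))      # mean-pool, then classify
    pred = logits.argmax(dim=-1).item()
    logits[0, pred].backward()                # gradient of the predicted class score

    # Importance score per token: L2 norm of the gradient at that position.
    saliency = emb.grad.norm(dim=-1).squeeze(0)
    for tok, score in zip(tokens, saliency.tolist()):
        print(f"{tok:8s} {score:.4f}")

Extractive rationale generation, also covered in the lecture, takes a different route: rather than inspecting gradients after the fact, a model is trained to select a subset of the input text that on its own supports the prediction.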

Syllabus

Intro
Why interpretability?
What is interpretability?
Two broad themes
Source Syntax in NMT
Why are neural translations the right length?
Fine-grained analysis of sentence embeddings
What you can cram into a single vector: Probing sentence embeddings for linguistic properties
Issues with probing
Minimum Description Length (MDL) Probes
How to evaluate?
Explanation Techniques: Gradient-Based Importance Scores
Explanation Techniques: Extractive Rationale Generation


Taught by

Graham Neubig

Related Courses

Neural Networks for Machine Learning
University of Toronto via Coursera
Good Brain, Bad Brain: Basics
University of Birmingham via FutureLearn
Statistical Learning with R
Stanford University via edX
Machine Learning 1—Supervised Learning
Brown University via Udacity
Fundamentals of Neuroscience, Part 2: Neurons and Networks
Harvard University via edX