Neural Nets for NLP 2021 - Distributional Semantics and Word Vectors
Offered By: Graham Neubig via YouTube
Course Description
Overview
Syllabus
Intro
Remember: Neural Models
How to Train Embeddings?
What do we want to know about words?
Contextualization of Word Representations
A Manual Attempt: WordNet
An Answer (?): Word Embeddings!
Word Embeddings are Cool! (An Obligatory Slide)
Distributional vs. Distributed Representations
Distributional Representations (see Goldberg 10.4.1)
Count-based Methods
Prediction-based Methods (see Goldberg 10.4.2; a minimal skip-gram sketch follows the syllabus)
Word Embeddings from Language Models
Context Window Methods
GloVe (Pennington et al. 2014; its objective is reproduced after the syllabus)
What Contexts?
Types of Evaluation
Non-linear Projection: non-linear projections group things that are close in high-dimensional space
t-SNE Visualization can be Misleading! (Wattenberg et al. 2016)
Intrinsic Evaluation of Embeddings (categorization from Schnabel et al. 2015)
Extrinsic Evaluation
How Do I Choose Embeddings?
When are Pre-trained Embeddings Useful?
Limitations of Embeddings
Unsupervised Coordination of Embeddings
Retrofitting of Embeddings to Existing Lexicons: we have an existing lexicon like WordNet and would like our vectors to match it (Faruqui et al. 2015; the update rule is sketched after the syllabus)
Sparse Embeddings
De-biasing Word Embeddings
FastText Toolkit (a usage sketch follows the syllabus)
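To make the "Prediction-based Methods" and "Context Window Methods" entries concrete, here is a minimal skip-gram-with-negative-sampling sketch in plain NumPy. It is an illustrative toy, not the lecture's reference implementation: the corpus, hyperparameters, and uniform (rather than unigram-weighted) negative sampling are all simplifying assumptions.

```python
import numpy as np

# Toy corpus; a real setup would train on a large tokenized corpus.
corpus = "the cat sat on the mat the dog sat on the rug".split()
vocab = sorted(set(corpus))
w2i = {w: i for i, w in enumerate(vocab)}

V, D = len(vocab), 16          # vocabulary size, embedding dimension
window, neg_k = 2, 3           # context window size, negatives per positive pair
lr, epochs = 0.05, 200

rng = np.random.default_rng(0)
W_in = rng.normal(scale=0.1, size=(V, D))    # target-word embeddings
W_out = rng.normal(scale=0.1, size=(V, D))   # context-word embeddings

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

for _ in range(epochs):
    for pos, word in enumerate(corpus):
        t = w2i[word]
        for off in range(-window, window + 1):
            if off == 0 or not 0 <= pos + off < len(corpus):
                continue
            c = w2i[corpus[pos + off]]
            # One observed (positive) context word plus k uniform negatives;
            # word2vec proper samples negatives from a unigram^0.75 distribution.
            pairs = [(c, 1.0)] + [(int(rng.integers(V)), 0.0) for _ in range(neg_k)]
            for j, label in pairs:
                score = sigmoid(W_in[t] @ W_out[j])
                grad = score - label
                g_in, g_out = grad * W_out[j], grad * W_in[t]
                W_in[t] -= lr * g_in
                W_out[j] -= lr * g_out

# Words that share contexts (e.g. "cat" and "dog") should end up with closer vectors.
```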
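The "GloVe (Pennington et al. 2014)" entry fits word and context vectors to log co-occurrence counts with a weighted least-squares objective. In the paper's notation (X_ij is the co-occurrence count of words i and j, w_i and the tilded vectors are word and context embeddings, b terms are biases):

```latex
J = \sum_{i,j=1}^{V} f(X_{ij}) \left( w_i^\top \tilde{w}_j + b_i + \tilde{b}_j - \log X_{ij} \right)^2,
\qquad
f(x) =
\begin{cases}
  (x / x_{\max})^{\alpha} & \text{if } x < x_{\max} \\
  1 & \text{otherwise}
\end{cases}
```

The paper uses alpha = 3/4 and x_max = 100 in its experiments.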
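For the "Retrofitting of Embeddings to Existing Lexicons" entry, Faruqui et al. (2015) repeatedly pull each vector toward both its original value and its lexicon neighbors. Below is a small sketch of that iterative update, assuming the paper's default weights (alpha_i = 1, beta_ij = 1/degree(i)); the data structures are illustrative, not taken from the lecture.

```python
import numpy as np

def retrofit(embeddings, lexicon, iters=10, alpha=1.0):
    """Jacobi-style retrofitting updates (after Faruqui et al. 2015).

    embeddings: dict word -> np.ndarray, the original vectors (kept fixed as q_hat)
    lexicon:    dict word -> list of neighbor words (e.g. WordNet synonyms)
    """
    new = {w: v.copy() for w, v in embeddings.items()}
    for _ in range(iters):
        for w, nbrs in lexicon.items():
            nbrs = [n for n in nbrs if n in new]
            if w not in new or not nbrs:
                continue
            beta = 1.0 / len(nbrs)  # paper sets beta_ij = 1 / degree(i)
            num = alpha * embeddings[w] + beta * sum(new[n] for n in nbrs)
            new[w] = num / (alpha + beta * len(nbrs))
    return new
```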
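And for the "FastText Toolkit" entry, one common way to try subword embeddings from Python is gensim's FastText wrapper; this is a usage sketch assuming gensim 4.x (the official fastText command-line tool works similarly).

```python
from gensim.models import FastText

# Tiny toy corpus; real training needs far more text.
sentences = [["the", "cat", "sat", "on", "the", "mat"],
             ["the", "dog", "sat", "on", "the", "rug"]]

model = FastText(sentences=sentences, vector_size=50, window=3,
                 min_count=1, epochs=10)

# Character n-grams give a vector even for words never seen in training.
oov_vector = model.wv["catlike"]
print(model.wv.most_similar("cat", topn=3))
```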
Taught by
Graham Neubig
Related Courses
Natural Language Processing (Columbia University via Coursera)
Natural Language Processing (Stanford University via Coursera)
Introduction to Natural Language Processing (University of Michigan via Coursera)
moocTLH: Nuevos retos en las tecnologías del lenguaje humano (Universidad de Alicante via Miríadax)
Natural Language Processing (Indian Institute of Technology, Kharagpur via Swayam)