Neural Nets for NLP 2021 - Neural Nets + Knowledge Bases
Offered By: Graham Neubig via YouTube
Course Description
Overview
Explore neural networks and knowledge bases in this comprehensive lecture from CMU's Neural Networks for NLP course. Dive into methods for learning knowledge bases from neural embeddings and incorporating them into neural models. Discover techniques for probing language models for knowledge. Learn about WordNet, knowledge graph embeddings, relation extraction, distant supervision, retrofitting embeddings, and open information extraction. Examine the differences between modeling word embeddings and modeling relations. Investigate P-tuning for direct optimization of embeddings and compare nonparametric and parametric models in this informative 44-minute session led by Graham Neubig and Zhengbao Jiang.
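For a concrete feel of the knowledge graph embedding portion, here is a minimal sketch of the TransE idea from Bordes et al. (2013), one of the papers covered below: entities and relations live in the same vector space, and a true triple (head, relation, tail) should satisfy head + relation ≈ tail. The entity names, dimensionality, and margin are illustrative, not taken from the lecture; a real system would learn the embeddings by minimizing this loss over many true/corrupted triple pairs.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 50  # illustrative embedding dimensionality

# Hypothetical toy entities/relations; real systems learn these from a KB such as Freebase.
entities = {e: rng.normal(size=DIM) for e in ["Pittsburgh", "USA", "Tokyo", "Japan"]}
relations = {r: rng.normal(size=DIM) for r in ["located_in"]}

def transe_score(head, relation, tail):
    # TransE energy: low when head + relation lands near tail in embedding space.
    return np.linalg.norm(entities[head] + relations[relation] - entities[tail])

def margin_loss(true_triple, corrupted_triple, margin=1.0):
    # Ranking loss: push true triples to score lower than corrupted ones by a margin.
    return max(0.0, margin + transe_score(*true_triple) - transe_score(*corrupted_triple))

print(margin_loss(("Pittsburgh", "located_in", "USA"),
                  ("Pittsburgh", "located_in", "Japan")))
```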
Syllabus
Intro
Knowledge Bases
WordNet (Miller 1995)
Learning Knowledge Graph Embeddings (Bordes et al. 2013)
Remember: Consistency in Embeddings
Relation Extraction w/ Neural Tensor Networks (Socher et al. 2013)
Distant Supervision for Relation Extraction (Mintz et al. 2009)
Jointly Modeling KB Relations and Text (Toutanova et al. 2015)
Modeling Distant Supervision Noise in Neural Models (Luo et al. 2017)
Retrofitting of Embeddings to Existing Lexicons (Faruqui et al. 2015), sketched in code below
Open Information Extraction (Banko et al. 2007)
Neural Models for Open IE
Modeling Word Embeddings vs. Modeling Relations
P-tuning: Directly Optimize Embeddings (Liu et al. 2021), sketched in code below
Nonparametric Models Outperform Parametric Models
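The retrofitting slide (Faruqui et al. 2015) has a particularly compact algorithm, so a rough Python sketch may help: each word vector is repeatedly pulled toward the average of its lexicon neighbors (e.g., WordNet synonyms) while staying anchored to its original value. The function and variable names and the toy lexicon are hypothetical.

```python
import numpy as np

def retrofit(embeddings, lexicon, iters=10, alpha=1.0, beta=1.0):
    """Retrofitting in the spirit of Faruqui et al. (2015).

    embeddings: dict mapping word -> np.ndarray (pretrained vectors)
    lexicon:    dict mapping word -> list of related words (e.g. WordNet synonyms)
    """
    new = {w: v.copy() for w, v in embeddings.items()}
    for _ in range(iters):
        for word, neighbors in lexicon.items():
            nbrs = [n for n in neighbors if n in new]
            if word not in new or not nbrs:
                continue
            # Closed-form update: weighted mean of the neighbors' current
            # vectors and the word's original (pre-retrofit) vector.
            total = alpha * embeddings[word] + beta * sum(new[n] for n in nbrs)
            new[word] = total / (alpha + beta * len(nbrs))
    return new

# Toy example: "happy" and "glad" are pulled toward each other.
vecs = {"happy": np.array([1.0, 0.0]),
        "glad":  np.array([0.0, 1.0]),
        "sad":   np.array([-1.0, 0.0])}
synonyms = {"happy": ["glad"], "glad": ["happy"]}
print(retrofit(vecs, synonyms)["happy"])
```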
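Likewise, the core of the P-tuning slide (Liu et al. 2021) can be sketched in a few lines of PyTorch: rather than searching for discrete prompt words, one learns continuous prompt embeddings by gradient descent while the pretrained language model stays frozen. This sketch omits the paper's LSTM prompt encoder and any real language model; the class and parameter names are made up for illustration.

```python
import torch
import torch.nn as nn

class PromptedInput(nn.Module):
    """Learnable continuous prompt in the spirit of P-tuning (Liu et al. 2021).

    Only the prompt vectors are trained by backpropagation; the pretrained
    language model consuming the output would be kept frozen.
    """
    def __init__(self, n_prompt_tokens: int, hidden_size: int):
        super().__init__()
        self.prompt = nn.Parameter(0.02 * torch.randn(n_prompt_tokens, hidden_size))

    def forward(self, token_embeddings: torch.Tensor) -> torch.Tensor:
        # token_embeddings: (batch, seq_len, hidden_size)
        batch = token_embeddings.size(0)
        prompt = self.prompt.unsqueeze(0).expand(batch, -1, -1)
        # Prepend the learned prompt vectors to the input token embeddings.
        return torch.cat([prompt, token_embeddings], dim=1)

# Toy usage with random "token embeddings" standing in for a real model's input.
layer = PromptedInput(n_prompt_tokens=5, hidden_size=16)
x = torch.randn(2, 8, 16)
print(layer(x).shape)  # torch.Size([2, 13, 16])
```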
Taught by
Graham Neubig
Related Courses
Natural Language Processing - Columbia University via Coursera
Natural Language Processing - Stanford University via Coursera
Introduction to Natural Language Processing - University of Michigan via Coursera
moocTLH: Nuevos retos en las tecnologías del lenguaje humano (New Challenges in Human Language Technologies) - Universidad de Alicante via Míriadax
Natural Language Processing - Indian Institute of Technology, Kharagpur via Swayam