Deep Learning for Natural Language Processing
Offered By: Alfredo Canziani via YouTube
Course Description
Overview
Explore the foundations and advanced concepts of deep learning for Natural Language Processing (NLP) in this comprehensive lecture. Delve into the architectures used in NLP applications, including CNNs, RNNs, and the state-of-the-art transformer model. Understand the modules that make transformers advantageous for NLP tasks and learn effective training techniques. Discover beam search as a middle ground between greedy decoding and exhaustive search, and explore "top-k" sampling for text generation. Examine sequence-to-sequence models, back-translation, and unsupervised approaches to learning embeddings, including word2vec, GPT, and BERT. Gain insights into pre-training techniques for NLP and future directions in the field.
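The "top-k" sampling mentioned above can be sketched in a few lines: instead of sampling the next token from the model's full output distribution, only the k highest-scoring logits keep probability mass, which cuts off the long tail of unlikely tokens. The following is a minimal plain-Python sketch; the `top_k_sample` function and its toy logits are illustrative assumptions, not code from the lecture.

```python
import math
import random

def top_k_sample(logits, k, rng=random):
    """Sample a token index from the k highest-scoring logits.

    `logits` is a hypothetical list of unnormalised scores, one per
    vocabulary item; everything outside the top k is discarded.
    """
    # Indices of the k largest logits; the rest are dropped entirely.
    top = sorted(range(len(logits)), key=lambda i: logits[i], reverse=True)[:k]
    # Softmax over the surviving logits only (max-shifted for stability).
    m = max(logits[i] for i in top)
    exps = {i: math.exp(logits[i] - m) for i in top}
    z = sum(exps.values())
    probs = {i: e / z for i, e in exps.items()}
    # Draw one index proportionally to its renormalised probability.
    r = rng.random()
    acc = 0.0
    for i, p in probs.items():
        acc += p
        if r < acc:
            return i
    return top[-1]

print(top_k_sample([2.0, 1.0, 0.5, -1.0], k=2))  # always returns index 0 or 1
```

With k equal to the vocabulary size this reduces to ordinary sampling; with k = 1 it reduces to greedy decoding.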
Syllabus
– Week 12 – Lecture
– Introduction to deep learning in NLP and language models
– Transformer language model structure and intuition
– Tricks and practical details of Transformer language models, and decoding from language models
– Beam Search, Sampling and Text Generation
– Back-translation, word2vec, and BERT
– Pre-training for NLP and Next Steps
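The beam search covered in the syllabus sits between greedy decoding (keep one hypothesis) and exhaustive search (keep all of them): at each step it keeps only the `beam_width` highest-scoring partial sequences. Here is a minimal sketch under stated assumptions; `step_fn`, the toy bigram table, and the end-of-sequence convention (`None`) are all hypothetical, not the lecture's code.

```python
import math

def beam_search(step_fn, start, beam_width, max_len):
    """Keep the `beam_width` best partial sequences at each decoding step.

    `step_fn(seq)` is an assumed model interface returning (token, log_prob)
    pairs for the next token; a `None` token marks the end of a sequence.
    """
    beams = [([start], 0.0)]  # (sequence, cumulative log-probability)
    for _ in range(max_len):
        candidates = []
        for seq, score in beams:
            if seq[-1] is None:            # finished hypothesis: carry forward
                candidates.append((seq, score))
                continue
            for tok, logp in step_fn(seq):
                candidates.append((seq + [tok], score + logp))
        # Prune: keep only the best `beam_width` hypotheses.
        candidates.sort(key=lambda c: c[1], reverse=True)
        beams = candidates[:beam_width]
    return beams[0]

# Toy hand-written bigram table standing in for a real language model.
TABLE = {
    "<s>": [("a", math.log(0.6)), ("b", math.log(0.4))],
    "a":   [("b", math.log(0.9)), (None, math.log(0.1))],
    "b":   [(None, math.log(1.0))],
}

def toy_step(seq):
    return TABLE[seq[-1]]

best_seq, best_score = beam_search(toy_step, "<s>", beam_width=2, max_len=3)
print(best_seq)  # ['<s>', 'a', 'b', None]
```

Setting `beam_width=1` recovers greedy decoding, while an unbounded width degenerates into exhaustive search over all continuations.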
Taught by
Alfredo Canziani
Related Courses
– Neural Networks for Machine Learning (University of Toronto via Coursera)
– 機器學習技法 (Machine Learning Techniques) (National Taiwan University via Coursera)
– Machine Learning Capstone: An Intelligent Application with Deep Learning (University of Washington via Coursera)
– Applied Data Analysis Problems (Прикладные задачи анализа данных) (Moscow Institute of Physics and Technology via Coursera)
– Leading Ambitious Teaching and Learning (Microsoft via edX)