Generating discrete sequences: language and music
Offered By: Ural Federal University via edX
Course Description
Overview
This course covers modern approaches to the generation of sequential data. It includes the generation of natural language as a sequence of subword tokens and music as a sequence of notes. We mostly focus on modern deep learning methods and pay a lot of attention to challenges and open questions in the field. The main goal of the course is to expose students to novel techniques in sequence generation and help them develop skills to use these techniques in practice. The course aims to bring students to the point where they have a general understanding of sequence generation and are ready to do a deeper dive into any particular area they are interested in: language, music or bioinformatic sequences.
Syllabus
Word2Vec, BPE, Markov chain-based language models, RNN, LSTM, autoencoder, self-attention, transformer, BERT
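Of the topics above, a Markov chain-based language model is the simplest to sketch: it counts how often each token follows another and samples the next token from those counts. Below is a minimal bigram illustration (the function names and the toy corpus are invented for this example, not taken from the course):

```python
import random
from collections import defaultdict, Counter

def train_bigram_model(corpus):
    """Count bigram transitions: word -> Counter of the words that follow it."""
    model = defaultdict(Counter)
    for sentence in corpus:
        tokens = sentence.split()
        for prev, nxt in zip(tokens, tokens[1:]):
            model[prev][nxt] += 1
    return model

def generate(model, start, max_len=10, seed=None):
    """Sample a sequence by repeatedly drawing the next word
    in proportion to its observed bigram count."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(max_len - 1):
        followers = model.get(out[-1])
        if not followers:  # dead end: no observed continuation
            break
        words = list(followers.keys())
        weights = list(followers.values())
        out.append(rng.choices(words, weights=weights)[0])
    return " ".join(out)

# Toy corpus, purely illustrative.
corpus = [
    "the cat sat on the mat",
    "the dog sat on the rug",
]
model = train_bigram_model(corpus)
print(generate(model, "the", seed=0))
```

The same counting-and-sampling idea extends to music generation by treating notes, rather than words, as the states of the chain.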
Taught by
Ivan P. Yamshchikov
Related Courses
Natural Language Processing with Attention Models (DeepLearning.AI via Coursera)
Deploy a BERT question answering bot on Django (Coursera Project Network via Coursera)
Fine Tune BERT for Text Classification with TensorFlow (Coursera Project Network via Coursera)
Build, Train, and Deploy ML Pipelines using BERT (DeepLearning.AI via Coursera)
Sentiment Analysis with Deep Learning using BERT (Coursera Project Network via Coursera)