
Leveraging Pretrained Language Models for Natural Language Understanding

Offered By: Toronto Machine Learning Series (TMLS) via YouTube

Tags

Sentiment Analysis Courses, BERT Courses, Transformers Courses, Named Entity Recognition Courses, Document Classification Courses, Fine-Tuning Courses

Course Description

Overview

Explore the power of pretrained language models for natural language understanding in this comprehensive workshop presented by Pooja Bhojwani, Senior Data Scientist and Manager at Scotiabank, at the Toronto Machine Learning Series. Dive into deep learning-based language models built on transformer architectures such as BERT and GPT, and learn how to apply them to real-world NLP tasks. Discover techniques for fine-tuning these complex models on domain-specific datasets, and gain hands-on experience with sentiment analysis, document classification, question answering, and named entity recognition. Master the process of labeling data with tools such as Doccano and transforming datasets into standard formats such as CoNLL. Extract embeddings from text to score sentence similarity, and use Google Colab for all computations with Python in Jupyter notebooks. Access the workshop materials and code on SlideShare and GitHub to get the most out of this 1 hour 32 minute session.
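
As a taste of the sentence-similarity task the description mentions, the minimal sketch below extracts mean-pooled BERT embeddings with the Hugging Face transformers library and scores two sentences with cosine similarity. The model checkpoint, pooling strategy, and example sentences are assumptions for illustration, not the workshop's own code or data.

    import torch
    import torch.nn.functional as F
    from transformers import AutoTokenizer, AutoModel

    # Hypothetical checkpoint choice; the workshop may have used a different model.
    MODEL_NAME = "bert-base-uncased"

    tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
    model = AutoModel.from_pretrained(MODEL_NAME)
    model.eval()

    def embed(sentence: str) -> torch.Tensor:
        """Return one vector per sentence by mean-pooling BERT's last hidden layer."""
        inputs = tokenizer(sentence, return_tensors="pt", truncation=True)
        with torch.no_grad():
            outputs = model(**inputs)
        return outputs.last_hidden_state.mean(dim=1).squeeze(0)

    # Example sentences are illustrative only.
    a = embed("The customer was very happy with the service.")
    b = embed("The client was pleased with the support they received.")
    print("cosine similarity:", F.cosine_similarity(a, b, dim=0).item())

A sketch like this runs as-is in Google Colab; the workshop's fine-tuning and labeling exercises build on the same tokenizer-and-model workflow.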

Syllabus

Leveraging Pretrained Language Models for Natural Language Understanding


Taught by

Toronto Machine Learning Series (TMLS)

Related Courses

Sentiment Analysis with Deep Learning using BERT
Coursera Project Network via Coursera
Natural Language Processing with Attention Models
DeepLearning.AI via Coursera
Fine Tune BERT for Text Classification with TensorFlow
Coursera Project Network via Coursera
Deploy a BERT question answering bot on Django
Coursera Project Network via Coursera
Generating discrete sequences: language and music
Ural Federal University via edX