Generative AI Language Modeling with Transformers

Offered By: IBM via Coursera

Tags

Transformer Architecture, BERT, Text Classification, Attention Mechanisms, Self-Attention, Positional Encoding

Course Description

Overview

This course provides an overview of how to use transformer-based models for natural language processing (NLP). You will learn to apply transformer-based models for text classification, focusing on the encoder component, and study positional encoding, word embeddings, and attention mechanisms in language transformers, along with their role in capturing contextual information and dependencies.

You will also be introduced to multi-head attention and to decoder-based language modeling with generative pre-trained transformers (GPT) for language translation, including training the models and implementing them in PyTorch. You will then explore encoder-based models with bidirectional encoder representations from transformers (BERT) and train them using masked language modeling (MLM) and next sentence prediction (NSP). Finally, you will apply transformers to translation by studying the transformer architecture and implementing it in PyTorch.

The course offers practical, hands-on activities that enable you to apply your knowledge in real-world scenarios. It is part of a specialized program for individuals interested in generative AI engineering and requires a working knowledge of Python, PyTorch, and machine learning.

Syllabus

  • Fundamental Concepts of Transformer Architecture
    • In this module, you will learn techniques for positional encoding and how to implement positional encoding in PyTorch. You will learn how the attention mechanism works, how to apply it to word embeddings and sequences, and how self-attention supports simple language modeling for next-token prediction. You will also learn about the scaled dot-product attention mechanism with multiple heads, how the transformer architecture enhances the efficiency of attention mechanisms, and how to implement a series of encoder layer instances in PyTorch. Finally, you will learn how to use transformer-based models for text classification, including creating the text pipeline, building the model, and training it. (A minimal sketch of positional encoding and scaled dot-product attention appears after the syllabus.)
  • Advanced Concepts of Transformer Architecture
    • In this module, you will learn about decoders and GPT-like models for language translation, train the models, and implement them in PyTorch. You will also learn about encoder models with bidirectional encoder representations from transformers (BERT), pretrain them using masked language modeling (MLM) and next sentence prediction (NSP), and perform data preparation for BERT in PyTorch. (A sketch of MLM data preparation also appears after the syllabus.) Finally, you will learn about applying transformers to translation by understanding the transformer architecture and performing its PyTorch implementation. The hands-on labs in this module give you practice using the decoder model, encoder model, and transformers for real-world applications.
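
To make the first module's ideas concrete, here is a minimal PyTorch sketch of sinusoidal positional encoding and scaled dot-product self-attention, following the formulation from "Attention Is All You Need". The function names, shapes, and dimensions are illustrative assumptions, not the course's own lab code.

```python
# Minimal sketch: sinusoidal positional encoding plus scaled dot-product
# self-attention. Shapes and names are illustrative only.
import math
import torch

def positional_encoding(seq_len: int, d_model: int) -> torch.Tensor:
    """Return a (seq_len, d_model) tensor of sinusoidal position codes."""
    position = torch.arange(seq_len, dtype=torch.float32).unsqueeze(1)
    # One geometrically spaced frequency per pair of embedding dimensions.
    div_term = torch.exp(torch.arange(0, d_model, 2, dtype=torch.float32)
                         * (-math.log(10000.0) / d_model))
    pe = torch.zeros(seq_len, d_model)
    pe[:, 0::2] = torch.sin(position * div_term)  # even dimensions
    pe[:, 1::2] = torch.cos(position * div_term)  # odd dimensions
    return pe

def scaled_dot_product_attention(q, k, v, mask=None):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V."""
    scores = q @ k.transpose(-2, -1) / math.sqrt(q.size(-1))
    if mask is not None:
        scores = scores.masked_fill(mask == 0, float("-inf"))
    weights = torch.softmax(scores, dim=-1)
    return weights @ v, weights

# Add position information to word embeddings, then self-attend.
embeddings = torch.randn(2, 10, 64)            # (batch, seq_len, d_model)
x = embeddings + positional_encoding(10, 64)   # broadcast over the batch
out, attn = scaled_dot_product_attention(x, x, x)
print(out.shape, attn.shape)  # (2, 10, 64) and (2, 10, 10)
```

Adding the encoding to the embeddings (rather than concatenating) keeps the model dimension fixed, which is the choice the original transformer paper makes.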
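For the second module, here is a hedged sketch of BERT-style MLM data preparation. The 15% selection rate and the 80/10/10 mask/random/keep split follow the original BERT paper; the specific token IDs (the [MASK] id 103 and vocabulary size 30522 of bert-base-uncased) and the -100 ignore index of PyTorch's cross-entropy loss are assumptions about the setup, not the course's own code.

```python
# Sketch of MLM data preparation: pick ~15% of tokens to predict, then
# replace 80% of them with [MASK], 10% with a random token, and leave
# 10% unchanged. IDs below assume a bert-base-uncased-style vocabulary.
import torch

MASK_ID = 103        # [MASK] in bert-base-uncased (assumed tokenizer)
VOCAB_SIZE = 30522   # bert-base-uncased vocabulary size (assumed)
IGNORE = -100        # default ignore_index of torch.nn.CrossEntropyLoss

def mask_tokens(input_ids: torch.Tensor, mlm_prob: float = 0.15):
    """Return (masked input IDs, MLM labels) for a batch of token IDs."""
    labels = input_ids.clone()
    chosen = torch.bernoulli(torch.full(input_ids.shape, mlm_prob)).bool()
    labels[~chosen] = IGNORE  # loss is computed only on chosen positions

    masked_ids = input_ids.clone()
    # 80% of chosen positions become [MASK].
    to_mask = torch.bernoulli(torch.full(input_ids.shape, 0.8)).bool() & chosen
    masked_ids[to_mask] = MASK_ID
    # Half of the remainder (10% overall) become a random token;
    # the rest (10% overall) keep their original ID.
    to_random = (torch.bernoulli(torch.full(input_ids.shape, 0.5)).bool()
                 & chosen & ~to_mask)
    masked_ids[to_random] = torch.randint(VOCAB_SIZE, input_ids.shape)[to_random]
    return masked_ids, labels

ids = torch.randint(1000, 2000, (2, 12))   # toy batch of token IDs
masked_ids, labels = mask_tokens(ids)
print(masked_ids)
print(labels)   # IGNORE everywhere except the chosen positions
```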

Taught by

Joseph Santarcangelo, Fateme Akbari, and Kang Wang

Related Courses

  • Transformer Models and BERT Model - Deutsch (Google Cloud via Coursera)
  • Generative AI: Introduction to Large Language Models (LinkedIn Learning)
  • Generative AI: Working with Large Language Models (LinkedIn Learning)
  • TensorFlow: Working with NLP (LinkedIn Learning)
  • Transformers: Text Classification for NLP Using BERT (LinkedIn Learning)