YoVDO

Transformer Models and BERT Model - Locales

Offered By: Google via Google Cloud Skills Boost

Tags

  • Transformer Architecture Courses
  • Text Classification Courses
  • Self-Attention Mechanisms Courses
  • Natural Language Inference Courses

Course Description

Overview

This course, Transformer Models and BERT Model - Locales, is intended for non-English learners. If you want to take this course in English, please enroll in Transformer Models and BERT Model.

This course introduces you to the Transformer architecture and the Bidirectional Encoder Representations from Transformers (BERT) model. You learn about the main components of the Transformer architecture, such as the self-attention mechanism, and how it is used to build the BERT model. You also learn about the different tasks that BERT can be used for, such as text classification, question answering, and natural language inference.

This course is estimated to take approximately 45 minutes to complete.
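The self-attention mechanism mentioned above can be sketched as scaled dot-product attention. The following is a minimal NumPy illustration (not course material; the weight matrices and dimensions are arbitrary toy values), showing how each token's output becomes a weighted mix of all tokens' values:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    # Project the token embeddings into queries, keys, and values
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    d_k = Q.shape[-1]
    # Scaled dot-product: each token attends to every token
    scores = Q @ K.T / np.sqrt(d_k)
    weights = softmax(scores, axis=-1)  # rows sum to 1
    return weights @ V

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))            # 4 tokens, embedding dim 8 (toy values)
Wq = rng.normal(size=(8, 8))
Wk = rng.normal(size=(8, 8))
Wv = rng.normal(size=(8, 8))

out = self_attention(X, Wq, Wk, Wv)
print(out.shape)  # (4, 8): one contextualized vector per token
```

BERT stacks many such attention layers (with multiple heads per layer) in its encoder; the toy single-head version here only conveys the core idea.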

Syllabus

  • Transformer Models and BERT Model: Overview
    • Transformer Models and BERT Model: Overview
    • Transformer Models and BERT Model: Lab Walkthrough
    • Transformer Models and BERT Model: Quiz
    • Transformer Models and BERT Model: Lab Resources

Related Courses

Axial-DeepLab - Stand-Alone Axial-Attention for Panoptic Segmentation
Yannic Kilcher via YouTube
Linformer - Self-Attention with Linear Complexity
Yannic Kilcher via YouTube
Synthesizer - Rethinking Self-Attention in Transformer Models
Yannic Kilcher via YouTube
The Narrated Transformer Language Model
Jay Alammar via YouTube
Learning the Structure of EHR with Graph Convolutional Transformer - Edward Choi
Stanford University via YouTube