Generative AI and LLMs: Architecture and Data Preparation
Offered By: IBM via Coursera
Course Description
Overview
This IBM short course, part of the Generative AI Engineering Essentials with LLMs Professional Certificate, will teach you the basics of using generative AI and large language models (LLMs). This course is suitable for existing and aspiring data scientists, machine learning engineers, deep learning engineers, and AI engineers.
You will learn about the types of generative AI and their real-world applications. You will gain the knowledge to differentiate between the common generative AI architectures and models, such as Recurrent Neural Networks (RNNs), Transformers, Generative Adversarial Networks (GANs), Variational Autoencoders (VAEs), and Diffusion Models, and to describe the differences in the training approaches used for each. You will also be able to explain the use of LLMs such as the Generative Pre-trained Transformer (GPT) and Bidirectional Encoder Representations from Transformers (BERT).
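To make the GPT/BERT distinction concrete, here is a small sketch using the Hugging Face pipeline API; the model names (gpt2, bert-base-uncased) are illustrative choices, not necessarily those used in the course.

```python
# Contrasting the two LLM families mentioned above; model names are illustrative.
from transformers import pipeline

# GPT-style (autoregressive): continues text left to right.
generator = pipeline("text-generation", model="gpt2")
print(generator("Generative AI can", max_new_tokens=10)[0]["generated_text"])

# BERT-style (bidirectional): fills in a masked token using context on both sides.
fill_mask = pipeline("fill-mask", model="bert-base-uncased")
print(fill_mask("Generative AI can [MASK] text.")[0]["token_str"])
```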
You will also learn about the tokenization process, tokenization methods, and the use of tokenizers for word-based, character-based, and subword-based tokenization. You will be able to explain how data loaders are used for training generative AI models and list the PyTorch utilities for preparing and handling data within data loaders. This knowledge will help you use the generative AI libraries in Hugging Face and will prepare you to implement tokenization and create an NLP data loader.
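The sketch below contrasts the three tokenization styles named above; the naive word and character splits and the choice of bert-base-uncased as the subword tokenizer are assumptions for demonstration only.

```python
# A minimal sketch contrasting word-, character-, and subword-based tokenization.
# Assumes the `transformers` package is installed; the model name is illustrative.
from transformers import AutoTokenizer

text = "Tokenization underpins LLMs."

# Word-based: split on whitespace (naive; real word tokenizers also handle punctuation).
word_tokens = text.split()

# Character-based: every character becomes a token.
char_tokens = list(text)

# Subword-based: BERT's WordPiece splits rare words into frequent pieces.
bert_tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
subword_tokens = bert_tokenizer.tokenize(text)

print(word_tokens)      # ['Tokenization', 'underpins', 'LLMs.']
print(char_tokens[:5])  # ['T', 'o', 'k', 'e', 'n']
print(subword_tokens)   # e.g. ['token', '##ization', 'under', '##pin', '##s', 'll', '##ms', '.']
```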
For this course, a basic knowledge of Python and PyTorch and an awareness of machine learning and neural networks would be an advantage, though not strictly required.
Syllabus
- Generative AI Architecture
- In this module, you will learn about the significance of generative AI models and how they are used across a wide range of fields to generate various types of content. You will examine the architectures and models commonly used in generative AI, the differences in their training approaches, and how large language models (LLMs) are used to build NLP-based applications. You will also build a simple chatbot using the transformers library from Hugging Face (see the chatbot sketch after this syllabus).
- Data Preparation for LLMs
- In this module, you will learn to prepare data for training large language models (LLMs) by implementing tokenization. You will learn about tokenization methods and the use of tokenizers, as well as the purpose of data loaders and how to use the DataLoader class in PyTorch. You will implement tokenization using libraries and tokenizer classes such as nltk, spaCy, and Hugging Face's BertTokenizer and XLNetTokenizer, and you will create a data loader with a collate function that processes batches of text (see the data-loader sketch after this syllabus).
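As a taste of the first module's chatbot exercise, here is a minimal sketch built on the Hugging Face transformers library; the model choice (facebook/blenderbot-400M-distill) is an assumption for illustration, not necessarily the model used in the course.

```python
# A minimal command-line chatbot loop using Hugging Face `transformers`.
# The model name below is an illustrative assumption.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "facebook/blenderbot-400M-distill"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

while True:
    user_input = input("You: ")
    if user_input.lower() in {"quit", "exit"}:
        break
    inputs = tokenizer(user_input, return_tensors="pt")
    reply_ids = model.generate(**inputs, max_new_tokens=60)
    print("Bot:", tokenizer.decode(reply_ids[0], skip_special_tokens=True))
```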
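And here is a minimal sketch of the second module's data-loader idea: a PyTorch DataLoader whose collate function tokenizes and pads each batch of text. The dataset contents and the bert-base-uncased tokenizer are illustrative assumptions.

```python
# A minimal NLP data loader with a custom collate function.
import torch
from torch.utils.data import DataLoader, Dataset
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")

class TextDataset(Dataset):
    """Wraps a plain list of strings so DataLoader can index into it."""
    def __init__(self, texts):
        self.texts = texts
    def __len__(self):
        return len(self.texts)
    def __getitem__(self, idx):
        return self.texts[idx]

def collate_fn(batch):
    # Tokenize the whole batch at once, padding to the longest sequence in it.
    return tokenizer(batch, padding=True, truncation=True, return_tensors="pt")

dataset = TextDataset(["Hello world.", "Data loaders batch text for training."])
loader = DataLoader(dataset, batch_size=2, shuffle=True, collate_fn=collate_fn)

for batch in loader:
    print(batch["input_ids"].shape)  # e.g. torch.Size([2, 10])
```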
Taught by
Joseph Santarcangelo and Roodra Pratap Kanwar
Related Courses
- Models and Platforms for Generative AI (IBM via edX)
- AWS Flash - Generative AI with Diffusion Models (Amazon Web Services via AWS Skill Builder)
- AWS Flash - Generative AI with Diffusion Models (Japanese) (Amazon Web Services via AWS Skill Builder)
- AWS Flash - Generative AI with Diffusion Models (Simplified Chinese) (Amazon Web Services via AWS Skill Builder)
- AWS Flash - Generative AI with Diffusion Models (Traditional Chinese) (Amazon Web Services via AWS Skill Builder)