
Trying Out Flan 20B with UL2 - Working in Colab with 8-Bit Inference

Offered By: Sam Witteveen via YouTube

Tags

Language Models Courses
Chain of Thought Prompting Courses

Course Description

Overview

Explore the capabilities of Google's latest publicly released Flan model, Flan-UL2 20B, in this video tutorial. Learn how to run the model on a high-end Google Colab instance using the Hugging Face Transformers library with 8-bit inference. See how the model performs on a range of tasks, including chain-of-thought prompting, zero-shot logical reasoning, generation, story writing, common-sense reasoning, and speech writing. Gain insights into loading the model, comparing non-8-bit and 8-bit inference, testing large token spans, and using the Hugging Face Inference API. Follow along with the provided Colab notebook to experiment with this language model firsthand.
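The loading step described above can be sketched as follows. This is a minimal, hedged sketch rather than the video's exact notebook: it assumes the transformers, accelerate, and bitsandbytes packages are installed in the Colab runtime, and uses the `load_in_8bit` flag that Transformers exposed for bitsandbytes quantization at the time; the prompt text is an illustrative example, not one from the video.

```python
# Sketch: loading google/flan-ul2 with 8-bit weights (assumes a GPU Colab
# with transformers, accelerate, and bitsandbytes installed).
from transformers import AutoTokenizer, T5ForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("google/flan-ul2")
model = T5ForConditionalGeneration.from_pretrained(
    "google/flan-ul2",
    device_map="auto",   # place layers across available GPU/CPU memory
    load_in_8bit=True,   # quantize weights to 8-bit via bitsandbytes
)

# Chain-of-thought style prompt (illustrative example)
prompt = (
    "Answer the following question by reasoning step by step. "
    "If I have 3 apples and eat one, how many are left?"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Note that the 20B checkpoint is roughly 40 GB in half precision, which is why 8-bit quantization is needed to fit it on a single high-memory Colab GPU.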

Syllabus

Flan-20B-UL2 Launched
Loading the Model
Non-8-Bit Inference
8-Bit Inference with CoT
Chain of Thought Prompting
Zero-Shot Logical Reasoning
Zero-Shot Generation
Zero-Shot Story Writing
Zero-Shot Common Sense Reasoning
Zero-Shot Speech Writing
Testing a Large Token Span
Using the Hugging Face Inference API
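For the Inference API portion, a query can be sketched like this. This is an assumption-laden sketch, not the video's exact code: it uses the standard hosted Inference API endpoint pattern for the `google/flan-ul2` model and assumes a Hugging Face access token is available in the `HF_TOKEN` environment variable.

```python
# Sketch: querying Flan-UL2 via the hosted Hugging Face Inference API
# (assumes a valid token in the HF_TOKEN environment variable).
import os
import requests

API_URL = "https://api-inference.huggingface.co/models/google/flan-ul2"
HEADERS = {"Authorization": f"Bearer {os.environ['HF_TOKEN']}"}

def query(payload: dict) -> list:
    """POST a JSON payload to the model endpoint and return the parsed reply."""
    response = requests.post(API_URL, headers=HEADERS, json=payload)
    response.raise_for_status()
    return response.json()

result = query({"inputs": "Translate to German: Hello, how are you?"})
print(result)
```

The advantage of this route, as the video notes, is that no local GPU or model download is needed; the trade-off is rate limits and less control over generation parameters.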


Taught by

Sam Witteveen

Related Courses

Prompt Engineering with Llama 2 & 3
DeepLearning.AI via Coursera
Essentials of Prompt Engineering (Indonesian)
Amazon Web Services via AWS Skill Builder
Essentials of Prompt Engineering (Japanese)
Amazon Web Services via AWS Skill Builder
Essentials of Prompt Engineering (Japanese, Subtitled Edition)
Amazon Web Services via AWS Skill Builder
Essentials of Prompt Engineering (Korean)
Amazon Web Services via AWS Skill Builder