Long Sequence Modeling with up to 8K Tokens - Overview, Dataset & Google Colab Code
Offered By: Venelin Valkov via YouTube
Course Description
Overview
Explore the capabilities of XGen-7B, Salesforce's open-source large language model with an 8K-token context window, in this comprehensive video tutorial. Learn about the model's architecture, pre-training data, and training methods. Examine its performance on NLP benchmarks, long sequence modeling tasks, and code generation. Follow along as the instructor demonstrates how to load the instruction-tuned model in Google Colab and test its abilities through various prompts, including answering questions, generating code, and comprehending documents. Gain insights into the model's strengths and limitations in areas such as joke writing, investment advice, and question-answering over text.
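As a rough sketch of what the Colab walkthrough covers, the snippet below loads the instruction-tuned checkpoint with HuggingFace `transformers` and wraps a question in a dialogue-style prompt. The `Salesforce/xgen-7b-8k-inst` repo id and the `### Human / ### Assistant` template are assumptions based on the model's HuggingFace release; check the model card before relying on them.

```python
def build_prompt(question: str) -> str:
    # Dialogue-style instruction template (assumed format for the
    # instruct checkpoint -- verify against the model card).
    return f"### Human: {question}\n### Assistant:"


def load_xgen(model_id: str = "Salesforce/xgen-7b-8k-inst"):
    # Heavyweight: downloads ~14 GB of weights, so run this on a
    # Colab GPU runtime. Imports are deferred so the prompt helper
    # above stays usable without transformers installed.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    # XGen ships a custom tokenizer, hence trust_remote_code=True.
    tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
    model = AutoModelForCausalLM.from_pretrained(
        model_id, torch_dtype=torch.bfloat16, device_map="auto"
    )
    return tokenizer, model
```

Once loaded, you would tokenize `build_prompt("Write a joke about databases")`, call `model.generate` with the input ids, and decode the result, which is the pattern the video repeats for each prompting segment.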
Syllabus
- Introduction
- XGen Model
- Pre-training Data
- Training Methods
- Evaluation Results
- HuggingFace Repository
- Google Colab Setup
- Prompting XGen
- Writing Jokes
- Investing Advice
- Coding
- QA over Text
- Conclusion
Taught by
Venelin Valkov
Related Courses
- Compilers — Stanford University via Coursera
- Build a Modern Computer from First Principles: Nand to Tetris Part II (project-centered course) — Hebrew University of Jerusalem via Coursera
- Web Services Development in Go: Language Basics — Moscow Institute of Physics and Technology via Coursera
- Complete Guide to Protocol Buffers 3 [Java, Golang, Python] — Udemy
- Angular tooling: Generating code with schematics — Coursera Project Network via Coursera