The Era of 1-bit LLMs Explained - BitNet b1.58 and New Scaling Laws
Offered By: Unify via YouTube
Course Description
Overview
Explore the groundbreaking research presented in the paper "The Era of 1-bit LLMs: All Large Language Models are in 1.58 Bits" during this 58-minute session. Delve into the innovative BitNet b1.58 model, which uses ternary parameters {-1, 0, 1} to match full-precision Transformer LLMs in performance while offering significant cost-effectiveness in latency, memory, throughput, and energy consumption. Discover how this 1.58-bit LLM establishes a new scaling law and training recipe for high-performance, cost-effective large language models. Gain insights from the research led by Shuming Ma and Hongyu Wang at Microsoft, and understand its potential impact on the future of AI development. Learn about additional resources for staying updated on AI research, industry trends, and deployment strategies.
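The ternary weights described above can be illustrated with a small sketch. This is a minimal, self-contained example of absmean-style ternary quantization in the spirit of the BitNet b1.58 paper, not the authors' implementation; the function name and the plain-Python formulation are illustrative assumptions.

```python
def absmean_quantize(weights, eps=1e-8):
    """Map a list of float weights to the ternary set {-1, 0, 1}.

    Each weight is scaled by the mean absolute value of the vector,
    then rounded and clipped to [-1, 1]. The scale (gamma) is returned
    so later computation can rescale results back to the original range.
    Illustrative sketch only, not the paper's exact training recipe.
    """
    gamma = sum(abs(w) for w in weights) / len(weights)  # mean |W|
    ternary = [max(-1, min(1, round(w / (gamma + eps)))) for w in weights]
    return ternary, gamma

# Example: a small weight vector collapses to {-1, 0, 1}
w = [0.9, -0.05, -1.2, 0.4]
q, scale = absmean_quantize(w)
```

Because every quantized weight is -1, 0, or 1, matrix multiplication reduces to additions, subtractions, and skips, which is the source of the latency, memory, and energy savings the session discusses.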
Syllabus
The Era of 1-bit LLMs Explained
Taught by
Unify
Related Courses
Artificial Intelligence Foundations: Neural Networks - LinkedIn Learning
Transformers: Text Classification for NLP Using BERT - LinkedIn Learning
TensorFlow: Working with NLP - LinkedIn Learning
Learn Natural Language Processing with BERT! (NLP Techniques from Attention and Transformer to BERT) - Udemy
Complete Natural Language Processing Tutorial in Python - Keith Galli via YouTube