The Era of 1-bit LLMs Explained - BitNet b1.58 and New Scaling Laws
Offered By: Unify via YouTube
Course Description
Overview
Explore the groundbreaking research presented in the paper "The Era of 1-bit LLMs: All Large Language Models are in 1.58 Bits" during this 58-minute session. Delve into the innovative BitNet b1.58 model, which uses ternary parameters {-1, 0, 1} to match full-precision Transformer LLMs in performance while offering significant cost-effectiveness in latency, memory, throughput, and energy consumption. Discover how this 1.58-bit LLM establishes a new scaling law and training recipe for high-performance, cost-effective large language models. Gain insights from the research led by Shuming Ma and Hongyu Wang at Microsoft, and understand its potential impact on the future of AI development. Learn about additional resources for staying updated on AI research, industry trends, and deployment strategies.
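The ternary idea above can be sketched in a few lines. The snippet below is a minimal illustration of the absmean quantization scheme the BitNet b1.58 paper describes (scale each weight by the mean absolute value, then round and clip to {-1, 0, 1}); the function name, epsilon constant, and sample weights are illustrative, not taken from the paper's code.

```python
import math

def absmean_ternary_quantize(weights, eps=1e-8):
    """Quantize weights to {-1, 0, 1} via absmean scaling:
    divide by the mean absolute value, round to the nearest
    integer, and clip the result to the range [-1, 1]."""
    gamma = sum(abs(w) for w in weights) / len(weights) + eps
    quantized = [max(-1, min(1, round(w / gamma))) for w in weights]
    return quantized, gamma  # gamma is kept to rescale outputs later

weights = [0.9, -0.05, -1.2, 0.3, 0.0, -0.4]
quantized, gamma = absmean_ternary_quantize(weights)
print(quantized)  # [1, 0, -1, 1, 0, -1] — every value is ternary

# The "1.58-bit" name comes from the information content of a
# three-valued parameter: log2(3) ≈ 1.58 bits per weight.
print(round(math.log2(3), 2))  # 1.58
```

Because every weight becomes -1, 0, or 1, matrix multiplication reduces to additions and subtractions, which is where the latency, memory, and energy savings reported for BitNet b1.58 come from.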
Syllabus
The Era of 1-bit LLMs Explained
Taught by
Unify
Related Courses
TensorFlow Developer Certificate Exam Prep
A Cloud Guru
Post Graduate Certificate in Advanced Machine Learning & AI
Indian Institute of Technology Roorkee via Coursera
Advanced AI Techniques for the Supply Chain
LearnQuest via Coursera
Advanced Learning Algorithms
DeepLearning.AI via Coursera
IBM AI Engineering
IBM via Coursera