The Era of 1-bit LLMs Explained - BitNet b1.58 and New Scaling Laws
Offered By: Unify via YouTube
Course Description
Overview
Explore the groundbreaking research presented in the paper "The Era of 1-bit LLMs: All Large Language Models are in 1.58 Bits" during this 58-minute session. Delve into BitNet b1.58, a model whose weights are constrained to the ternary values {-1, 0, 1}, yet which matches full-precision Transformer LLMs in performance while being significantly more cost-effective in latency, memory, throughput, and energy consumption. Discover how this 1.58-bit LLM defines a new scaling law and training recipe for high-performance, cost-effective large language models. Gain insights from the research led by Shuming Ma and Hongyu Wang at Microsoft, and understand its potential impact on the future of AI development. Learn about additional resources for staying updated on AI research, industry trends, and deployment strategies.
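For a sense of the core idea before watching: each weight takes one of three values {-1, 0, 1}, which carries log2(3) ≈ 1.58 bits of information per parameter, hence the "1.58-bit" name. Below is a minimal sketch of absmean-style ternary weight quantization in the spirit the paper describes; the function name and NumPy phrasing are illustrative assumptions, not the authors' code.

```python
import numpy as np

def absmean_ternary_quantize(W: np.ndarray, eps: float = 1e-6):
    """Quantize a weight matrix to ternary values {-1, 0, 1} using
    an absmean scale, a sketch inspired by BitNet b1.58 (not the
    authors' implementation)."""
    # Per-tensor scale: mean absolute value of the weights.
    gamma = np.mean(np.abs(W)) + eps
    # Round to the nearest integer and clip into the ternary range.
    W_ternary = np.clip(np.rint(W / gamma), -1, 1)
    return W_ternary, gamma

# Usage example: quantize a random full-precision weight matrix.
W = np.random.randn(4, 4).astype(np.float32)
W_q, gamma = absmean_ternary_quantize(W)
print(W_q)     # entries are -1, 0, or 1
print(gamma)   # scale used to approximately dequantize: W_q * gamma ≈ W
```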
Syllabus
The Era of 1-bit LLMs Explained
Taught by
Unify
Related Courses
Neural Networks for Machine Learning - University of Toronto via Coursera
Good Brain, Bad Brain: Basics - University of Birmingham via FutureLearn
Statistical Learning with R - Stanford University via edX
Machine Learning 1—Supervised Learning - Brown University via Udacity
Fundamentals of Neuroscience, Part 2: Neurons and Networks - Harvard University via edX