Transformers that Transform Well Enough to Support Near-Shallow Architectures - Stanford CS25 Lecture

Offered By: Stanford University via YouTube

Tags

Transformers Courses, Machine Learning Courses, Model Optimization Courses

Course Description

Overview

Explore a lecture on transformers and precision language models (PLMs) delivered by Jake Williams of Drexel University. Delve into effectiveness-enhancing and cost-cutting augmentations for language model learning, including non-random parameter initializations for specialized self-attention architectures. Discover how PLMs can efficiently train both large and small language models with limited resources. Learn about an innovative application that localizes untrained PLMs on microprocessors for hardware-based control of small electronics. Examine the utility of PLMs in air-gapped environments, CPU-based training of progressively larger models, and a fully developed control system with its user interface. Gain insights from recent experiments on the Le Potato single-board computer, demonstrating effective inference of user directives after brief interactions with lay users. Understand the speaker's background in physics, math, and quantitative linguistics, and his contributions to data science education at Drexel University.
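The description mentions non-random parameter initialization for self-attention. The sketch below is only a rough illustration of that general idea, not the speaker's actual PLM construction: assuming PyTorch, it seeds token embeddings from hypothetical co-occurrence statistics and sets the attention projections deterministically rather than randomly. All names, shapes, and the co-occurrence data are assumptions made for illustration.

import torch
import torch.nn as nn

vocab_size, d_model = 1000, 64

# Hypothetical co-occurrence statistics; in practice these would come from a corpus.
cooccurrence = torch.rand(vocab_size, vocab_size)

# Low-rank factors of the co-occurrence matrix give deterministic, data-derived
# embedding vectors instead of random initial values.
U, S, _ = torch.linalg.svd(cooccurrence, full_matrices=False)
embedding_init = U[:, :d_model] * S[:d_model]  # shape: (vocab_size, d_model)

embedding = nn.Embedding(vocab_size, d_model)
attention = nn.MultiheadAttention(d_model, num_heads=4, batch_first=True)

with torch.no_grad():
    # Non-random embedding initialization from the co-occurrence factors.
    embedding.weight.copy_(embedding_init)
    # Non-random projection initialization: start the stacked q/k/v projections
    # and the output projection at identity, so early attention directly
    # reflects the seeded embedding geometry.
    eye = torch.eye(d_model)
    attention.in_proj_weight.copy_(torch.cat([eye, eye, eye], dim=0))
    attention.out_proj.weight.copy_(eye)

# A seeded layer like this could then be trained, or probed untrained, on
# modest CPU-only hardware.
tokens = torch.randint(0, vocab_size, (1, 16))
x = embedding(tokens)
out, _ = attention(x, x, x)
print(out.shape)  # torch.Size([1, 16, 64])

This is a sketch of one plausible deterministic-initialization scheme; the lecture's own parameterization of PLMs may differ substantially.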

Syllabus

Stanford CS25: V4 | Transformers that Transform Well Enough to Support Near-Shallow Architectures


Taught by

Stanford Online

Related Courses

Introduction to Artificial Intelligence (Stanford University via Udacity)
Natural Language Processing (Columbia University via Coursera)
Probabilistic Graphical Models 1: Representation (Stanford University via Coursera)
Computer Vision: The Fundamentals (University of California, Berkeley via Coursera)
Learning from Data (Introductory Machine Learning course) (California Institute of Technology via Independent)