Knowledge Circuits in Pretrained Transformers Explained
Offered By: Unify via YouTube
Course Description
Overview
Explore the intricacies of knowledge circuits in pretrained transformers in this one-hour conference talk by Yunzhi Yao of Zhejiang University, China. Delve into the computation graph of language models to uncover the knowledge circuits instrumental in articulating specific knowledge. Gain insights from the paper "Knowledge Circuits in Pretrained Transformers," co-authored by Yao, which examines the inner workings of these models. Learn about the research methodology and findings that shed light on how transformers store and use factual information. Discover the implications of this research for the fields of artificial intelligence and natural language processing. Enhance your understanding of AI research and industry trends through additional resources such as The Deep Dive newsletter and Unify's blog posts on AI deployment, and connect with the AI community on various platforms to stay updated on the latest developments in machine learning and deep learning.
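The core idea the talk covers, finding a sparse subgraph of the model's computation graph that preserves the model's ability to express a piece of knowledge, can be illustrated with a toy sketch. This is a hypothetical simplification, not the paper's actual method: real circuit discovery operates on attention heads and MLP blocks of a transformer, while here the "model" is just a weighted DAG and "performance" is the value reaching the output node.

```python
def forward(edges, n_nodes):
    """Propagate a value through a DAG given as {(src, dst): weight}.

    Nodes are numbered so that src < dst on every edge, so processing
    edges sorted by source node respects topological order.
    Node 0 is the input; node n_nodes - 1 is the output.
    """
    values = [0.0] * n_nodes
    values[0] = 1.0  # input activation
    for (src, dst), w in sorted(edges.items()):
        values[dst] += values[src] * w
    return values[n_nodes - 1]


def find_circuit(edges, n_nodes, threshold=0.9):
    """Greedy edge ablation: drop each edge whose removal keeps the
    output above a fraction of the full model's output. The edges
    that survive are the 'knowledge circuit' for this toy graph."""
    full = forward(edges, n_nodes)
    circuit = dict(edges)
    for edge in list(circuit):
        trial = {e: w for e, w in circuit.items() if e != edge}
        if forward(trial, n_nodes) >= threshold * full:
            circuit = trial  # edge is not instrumental; prune it
    return circuit


# A 4-node graph: a strong path 0 -> 1 -> 3 and a weak path 0 -> 2 -> 3.
edges = {(0, 1): 1.0, (1, 3): 1.0, (0, 2): 0.05, (2, 3): 0.1}
print(find_circuit(edges, 4))  # only the strong path survives ablation
```

In this sketch the weak path contributes so little that ablating it keeps the output above threshold, so it is pruned, mirroring how circuit analysis isolates the few components that actually carry a given fact.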
Syllabus
Knowledge Circuits in Pretrained Transformers Explained
Taught by
Unify
Related Courses
Introduction to Artificial Intelligence — Stanford University via Udacity
Natural Language Processing — Columbia University via Coursera
Probabilistic Graphical Models 1: Representation — Stanford University via Coursera
Computer Vision: The Fundamentals — University of California, Berkeley via Coursera
Learning from Data (Introductory Machine Learning course) — California Institute of Technology via Independent