Introduction to Language Models - LLMs, Prompt Engineering, and Architectures
Offered By: Neuro Symbolic via YouTube
Course Description
Overview
Dive into a comprehensive 58-minute video tutorial on language models, covering essential topics for beginners in the field of artificial intelligence. Explore generative AI, large language models (LLMs) like GPT, encoders such as BERT, prompt engineering techniques, fine-tuning, self-supervised learning, transformer architecture, attention mechanisms, and embeddings. Learn about the fundamentals of language models through a structured curriculum, starting with an introduction to embeddings and progressing through transformer architecture, encoder-decoder models, and practical applications of prompt engineering. Gain insights into the limitations and potential hallucinations of LLMs, equipping yourself with a well-rounded understanding of this cutting-edge technology. Originally part of an AI course from Arizona State University, this tutorial offers valuable knowledge at the intersection of symbolic methods and deep learning, paving the way for advancements in artificial general intelligence (AGI).
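As a taste of two of the topics named above (embeddings and the attention mechanism at the heart of the transformer architecture), here is a minimal NumPy sketch. It is illustrative only and not taken from the course material; the toy vocabulary, dimensions, and random weight matrices are assumptions made for the example.

```python
# Minimal sketch: token embeddings + single-head scaled dot-product attention.
# All shapes and values are illustrative, not from the course.
import numpy as np

rng = np.random.default_rng(0)

# Toy vocabulary and embedding table: each token id maps to a d_model vector.
vocab = ["the", "cat", "sat"]
d_model = 8
embedding_table = rng.normal(size=(len(vocab), d_model))

# "Embed" a short sequence by looking up each token's vector.
token_ids = np.array([0, 1, 2])          # "the cat sat"
x = embedding_table[token_ids]           # shape: (seq_len, d_model)

def scaled_dot_product_attention(q, k, v):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V."""
    d_k = q.shape[-1]
    scores = q @ k.T / np.sqrt(d_k)                   # pairwise similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)    # row-wise softmax
    return weights @ v                                # weighted sum of values

# Single-head self-attention: queries, keys, and values are linear maps of x.
W_q, W_k, W_v = (rng.normal(size=(d_model, d_model)) for _ in range(3))
output = scaled_dot_product_attention(x @ W_q, x @ W_k, x @ W_v)
print(output.shape)  # (3, 8): one contextualized vector per input token
```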
Syllabus
Introduction
Embeddings
Transformer architecture
Encoders (BERT)
Generative AI: Decoders, LLMs, and GPT
Basic Prompt Engineering
Advanced Prompt Engineering
LLM Limitations and Hallucinations
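As a small illustration of the kind of material the prompt engineering chapters above introduce, the following sketch contrasts a zero-shot prompt with a few-shot prompt. The task and example reviews are invented, and no specific LLM API is assumed, so the prompts are simply printed rather than sent to a model.

```python
# Zero-shot vs. few-shot prompting for a toy sentiment-classification task.
# The reviews are made up; send the strings to whatever LLM you use.
zero_shot = (
    "Classify the sentiment of this review as positive or negative.\n"
    "Review: The battery died after two days.\n"
    "Sentiment:"
)

few_shot = (
    "Classify the sentiment of each review as positive or negative.\n\n"
    "Review: Absolutely loved the screen quality.\n"
    "Sentiment: positive\n\n"
    "Review: Shipping took three weeks and the box was crushed.\n"
    "Sentiment: negative\n\n"
    "Review: The battery died after two days.\n"
    "Sentiment:"
)

print(zero_shot)
print("---")
print(few_shot)
```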
Taught by
Neuro Symbolic
Related Courses
Artificial Intelligence Foundations: Thinking Machines (LinkedIn Learning)
Deep Learning for Computer Vision (NPTEL via YouTube)
NYU Deep Learning (YouTube)
Stanford Seminar - Representation Learning for Autonomous Robots, Anima Anandkumar (Stanford University via YouTube)
A Path Towards Autonomous Machine Intelligence - Paper Explained (Yannic Kilcher via YouTube)