Introduction to Language Models - LLMs, Prompt Engineering, and Architectures
Offered By: Neuro Symbolic via YouTube
Course Description
Overview
Dive into a comprehensive 58-minute video tutorial on language models, covering essential topics for newcomers to artificial intelligence. Explore generative AI, large language models (LLMs) such as GPT, encoders such as BERT, prompt engineering techniques, fine-tuning, self-supervised learning, the transformer architecture, attention mechanisms, and embeddings. The structured curriculum begins with an introduction to embeddings and progresses through the transformer architecture, encoder and decoder models, and practical prompt engineering, before closing with the limitations and hallucinations of LLMs, leaving you with a well-rounded understanding of this rapidly evolving technology. Originally part of an AI course at Arizona State University, the tutorial sits at the intersection of symbolic methods and deep learning, an intersection framed as paving the way for advances in artificial general intelligence (AGI).
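As a concrete preview of the attention mechanism and embeddings mentioned above, here is a minimal sketch of scaled dot-product attention in NumPy. The toy matrices, dimensions, and values are illustrative assumptions added for this description and are not taken from the video.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Weight each value vector by how well its key matches each query."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                           # query-key similarity
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)   # softmax over keys
    return weights @ V                                        # blend of value vectors

# Toy example: 3 token embeddings of dimension 4 (illustrative values only)
rng = np.random.default_rng(0)
Q = K = V = rng.normal(size=(3, 4))
print(scaled_dot_product_attention(Q, K, V))
```

In a full transformer, Q, K, and V are learned linear projections of the token embeddings, and many such attention heads run in parallel.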
Syllabus
Introduction
Embeddings
Transformer architecture
Encoders (BERT)
Generative AI: Decoders, LLMs, and GPT
Basic Prompt Engineering (see the sketch after this syllabus)
Advanced Prompt Engineering
LLM Limitations and Hallucinations
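The prompt engineering items above are easiest to picture with a concrete prompt in hand. Below is a minimal sketch of basic few-shot prompting in Python; the sentiment-classification task, example reviews, and helper function are illustrative assumptions, not material from the video.

```python
def build_few_shot_prompt(examples, query):
    """Assemble a few-shot prompt: an instruction, labeled examples, then the new query."""
    lines = ["Classify the sentiment of each review as Positive or Negative.", ""]
    for text, label in examples:
        lines += [f"Review: {text}", f"Sentiment: {label}", ""]
    lines += [f"Review: {query}", "Sentiment:"]  # the model completes this final line
    return "\n".join(lines)

examples = [
    ("The tutorial was clear and well paced.", "Positive"),
    ("The audio kept cutting out.", "Negative"),
]
print(build_few_shot_prompt(examples, "Loved the section on transformers."))
```

The resulting string can be sent to any instruction-following LLM; advanced techniques such as chain-of-thought prompting extend the same idea by adding worked reasoning steps to the examples.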
Taught by
Neuro Symbolic
Related Courses
TensorFlow on Google Cloud (Google Cloud via Coursera)
Art and Science of Machine Learning 日本語版 (Google Cloud via Coursera)
Art and Science of Machine Learning auf Deutsch (Google Cloud via Coursera)
Art and Science of Machine Learning em Português Brasileiro (Google Cloud via Coursera)
Art and Science of Machine Learning en Español (Google Cloud via Coursera)