Intro to Dense Vectors for NLP and Vision
Offered By: James Briggs via YouTube
Course Description
Overview
Explore the evolution and importance of dense vector representations in Natural Language Processing (NLP) and computer vision. Learn about the groundbreaking word2vec model and its impact on the field, then dive into modern approaches like Sentence Transformers, Dense Passage Retrieval (DPR), and Vision Transformers. Discover practical applications through Python implementations, including question-answering systems and OpenAI's CLIP model for image-text understanding. Gain insights into why dense vectors are crucial for advancing NLP and vision technologies, and prepare for future developments in these rapidly evolving fields. Short illustrative code sketches for the hands-on lessons follow the syllabus below.
Syllabus
Intro
Why Dense Vectors?
Word2vec and Representing Meaning
Sentence Transformers
Sentence Transformers in Python
Question-Answering
DPR in Python
Vision Transformers
OpenAI's CLIP in Python
Review and What's Next
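
To give a flavour of the "Word2vec and Representing Meaning" lesson, here is a minimal sketch of learning dense word vectors with gensim. The toy corpus and hyperparameters are illustrative assumptions, not material taken from the course itself.

```python
# Minimal word2vec sketch with gensim: dense vectors learned from context,
# so words used in similar contexts end up close together.
# The tiny corpus below is invented purely for illustration.
from gensim.models import Word2Vec

corpus = [
    ["dense", "vectors", "capture", "meaning"],
    ["word2vec", "learns", "dense", "word", "vectors"],
    ["sparse", "vectors", "count", "words"],
]

model = Word2Vec(
    sentences=corpus,
    vector_size=50,   # dimensionality of the dense word vectors
    window=2,
    min_count=1,
    epochs=50,
)

# Each vocabulary word is now a 50-dimensional dense vector.
print(model.wv["vectors"].shape)            # (50,)
print(model.wv.most_similar("dense", topn=2))
```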
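
For the "Sentence Transformers in Python" lesson, a minimal sketch of encoding sentences into dense vectors and comparing them with cosine similarity. The sentence-transformers library and the all-MiniLM-L6-v2 checkpoint are assumptions made for illustration; the course may work with a different model.

```python
# Encode sentences into dense vectors and compare them with cosine similarity.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed checkpoint

sentences = [
    "Dense vectors capture semantic meaning.",
    "Sparse vectors count word occurrences.",
]
embeddings = model.encode(sentences, convert_to_tensor=True)

# Cosine similarity between the two sentence embeddings.
score = util.cos_sim(embeddings[0], embeddings[1])
print(score)
```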
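
For the "DPR in Python" lesson, a sketch of Dense Passage Retrieval with Hugging Face transformers, using the publicly released facebook/dpr-* checkpoints. Treat it as an illustration of the bi-encoder idea (separate question and passage encoders, dot-product scoring), not the course's exact code; the passages and question are invented.

```python
# Dense Passage Retrieval: encode passages and a question with two separate
# encoders, then rank passages by dot-product similarity.
import torch
from transformers import (
    DPRContextEncoder, DPRContextEncoderTokenizer,
    DPRQuestionEncoder, DPRQuestionEncoderTokenizer,
)

ctx_tokenizer = DPRContextEncoderTokenizer.from_pretrained(
    "facebook/dpr-ctx_encoder-single-nq-base")
ctx_encoder = DPRContextEncoder.from_pretrained(
    "facebook/dpr-ctx_encoder-single-nq-base")
q_tokenizer = DPRQuestionEncoderTokenizer.from_pretrained(
    "facebook/dpr-question_encoder-single-nq-base")
q_encoder = DPRQuestionEncoder.from_pretrained(
    "facebook/dpr-question_encoder-single-nq-base")

passages = [
    "Word2vec learns dense word vectors from co-occurrence statistics.",
    "The Eiffel Tower is located in Paris.",
]
question = "Which model learns dense word vectors?"

# Encode passages and the question into dense vectors.
ctx_inputs = ctx_tokenizer(passages, padding=True, truncation=True,
                           return_tensors="pt")
ctx_emb = ctx_encoder(**ctx_inputs).pooler_output    # shape (2, 768)
q_inputs = q_tokenizer(question, return_tensors="pt")
q_emb = q_encoder(**q_inputs).pooler_output          # shape (1, 768)

# Rank passages by dot-product similarity and return the best match.
scores = torch.matmul(q_emb, ctx_emb.T)
print(passages[int(scores.argmax())])
```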
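
For the "OpenAI's CLIP in Python" lesson, a sketch of scoring an image against text prompts with CLIP via Hugging Face transformers. The image path and prompts are placeholders, assumed here for illustration.

```python
# Compare one image against several text prompts with CLIP and turn the
# image-text similarity scores into probabilities.
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("example.jpg")  # placeholder image path
texts = ["a photo of a dog", "a photo of a city skyline"]

inputs = processor(text=texts, images=image,
                   return_tensors="pt", padding=True)
outputs = model(**inputs)

# logits_per_image holds image-text similarity scores; softmax over prompts.
probs = outputs.logits_per_image.softmax(dim=-1)
print(dict(zip(texts, probs[0].tolist())))
```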
Taught by
James Briggs
Related Courses
Vision Transformers Explained + Fine-Tuning in Python (James Briggs via YouTube)
ConvNeXt - A ConvNet for the 2020s - Paper Explained (Aleksa Gordić - The AI Epiphany via YouTube)
Do Vision Transformers See Like Convolutional Neural Networks - Paper Explained (Aleksa Gordić - The AI Epiphany via YouTube)
Stable Diffusion and Friends - High-Resolution Image Synthesis via Two-Stage Generative Models (HuggingFace via YouTube)
Geo-localization Framework for Real-world Scenarios - Defense Presentation (University of Central Florida via YouTube)