Author Interview - Transformer Memory as a Differentiable Search Index
Offered By: Yannic Kilcher via YouTube
Course Description
Overview
Explore an in-depth interview with authors Yi Tay and Don Metzler discussing their groundbreaking paper on Transformer Memory as a Differentiable Search Index. Delve into the innovative concept of using a single Transformer model to encode an entire corpus for information retrieval, eliminating the need for separate indexing structures. Learn about the Differentiable Search Index (DSI) paradigm, which maps queries directly to relevant document IDs using only the model's parameters. Discover insights on document representation, training procedures, and scalability challenges. Gain understanding of the model's inner workings, generalization capabilities, and potential applications. Examine comparisons with traditional search methods, explore future research directions, and get advice on how to get started in this exciting field of neural search technology.
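The core DSI idea discussed in the interview can be sketched in a few lines: one model is trained on both indexing pairs (document text → docid) and retrieval pairs (query → docid), so retrieval becomes generating a docid string from the model's parameters alone. The paper uses a seq2seq Transformer (T5); below, a trivial word-overlap stand-in plays the model's role purely so the example runs. All names (`corpus`, `train`, `predict`) are hypothetical, not from the paper.

```python
# Toy illustration of the Differentiable Search Index (DSI) paradigm:
# a single model memorizes the corpus by mapping both document text
# (indexing) and queries (retrieval) to document ID strings.
# NOTE: a real DSI trains a seq2seq Transformer; this word-overlap
# lookup is only a runnable stand-in for model.generate(query).

corpus = {
    "doc1": "transformers are neural networks using attention",
    "doc2": "inverted indexes store postings lists for terms",
}

# "Training" examples: indexing pairs (doc text -> docid) plus
# retrieval pairs (query -> docid), all seen by the same model.
train = [(text, docid) for docid, text in corpus.items()]
train += [("what is attention", "doc1"),
          ("how do postings lists work", "doc2")]

def predict(query: str) -> str:
    """Stand-in for the model: return the docid of the training
    input sharing the most words with the query."""
    def overlap(example):
        inp, _ = example
        return len(set(inp.split()) & set(query.split()))
    return max(train, key=overlap)[1]

print(predict("neural attention networks"))  # -> doc1
print(predict("postings lists"))             # -> doc2
```

The point of the sketch is the paradigm, not the matching function: there is no separate index structure, only a mapping learned (here, hard-coded) from inputs to document identifiers.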
Syllabus
- Intro
- Start of Interview
- How did this idea start?
- How does memorization play into this?
- Why did you not compare to cross-encoders?
- Instead of the ID, could one reproduce the document itself?
- Passages vs documents
- Where can this model be applied?
- Can we make this work on large collections?
- What's up with the NQ100K dataset?
- What is going on inside these models?
- What's the smallest scale to obtain meaningful results?
- Investigating the document identifiers
- What's the end goal?
- What are the hardest problems currently?
- Final comments & how to get started
Taught by
Yannic Kilcher
Related Courses
- How Google does Machine Learning en Español (Google Cloud via Coursera)
- Creating Custom Callbacks in Keras (Coursera Project Network via Coursera)
- Automatic Machine Learning with H2O AutoML and Python (Coursera Project Network via Coursera)
- AI in Healthcare Capstone (Stanford University via Coursera)
- AutoML con Pycaret y TPOT (Coursera Project Network via Coursera)