
LLM Explainability and Controllability Improvements with Tensor Networks

Offered By: ChemicalQDevice via YouTube

Tags

Tensor Networks Courses
Neural Networks Courses
Transformers Courses
Self-Attention Courses

Course Description

Overview

Explore the potential of Tensor Networks to enhance the explainability and controllability of Large Language Models (LLMs) in this one-hour seminar. Delve into the literature addressing these properties through modifications to LLM building blocks such as transformer self-attention and multi-layer perceptron layers. Gain additional insights from the survey "Tensor Networks Meet Neural Networks: A Survey and Future Perspectives" and other relevant papers. Examine case studies including Hypoformer, along with approaches from Multiverse and Terra Quantum, to understand how Tensor Networks can improve the processing of sensitive data in LLMs. Discover the intersection of Tensor Networks and Neural Networks, and consider future perspectives in this rapidly evolving field.
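To make the core idea concrete: a tensor-network layer replaces a dense weight matrix with a contraction of smaller cores, which compresses the layer and exposes low-rank structure that is easier to analyze. Below is a minimal, hypothetical PyTorch sketch of the simplest case, a two-core rank-r factorization standing in for a transformer MLP projection. It illustrates the general technique only; it is not the seminar's or Hypoformer's exact construction, which reshape the weight into a higher-order tensor and use more cores.

import torch
import torch.nn as nn

class FactorizedLinear(nn.Module):
    """Two-core tensor-network stand-in for a dense linear layer.

    A weight matrix of shape (out_features, in_features) is replaced
    by the product of two smaller cores, cutting the parameter count
    from out_features * in_features to rank * (out_features + in_features).
    Hypothetical illustration; not the seminar's exact method.
    """
    def __init__(self, in_features: int, out_features: int, rank: int = 8):
        super().__init__()
        self.core_a = nn.Linear(in_features, rank, bias=False)  # first core
        self.core_b = nn.Linear(rank, out_features)             # second core

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Contracting the two cores recovers an approximation of the
        # original dense projection.
        return self.core_b(self.core_a(x))

# Example: replace a 768 -> 3072 MLP projection with rank-32 cores.
layer = FactorizedLinear(768, 3072, rank=32)
x = torch.randn(4, 768)
print(layer(x).shape)  # torch.Size([4, 3072])

With these illustrative sizes, the dense projection's roughly 2.4M parameters (768 x 3072) shrink to about 123K (32 x (768 + 3072)), which is the kind of compression the tensor-network literature targets when tensorizing attention and MLP blocks.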

Syllabus

LLM Explainability or Controllability Improvements with Tensor Networks


Taught by

ChemicalQDevice

Related Courses

Transformers: Text Classification for NLP Using BERT
LinkedIn Learning
TensorFlow: Working with NLP
LinkedIn Learning
TransGAN - Two Transformers Can Make One Strong GAN - Machine Learning Research Paper Explained
Yannic Kilcher via YouTube
Nyströmformer - A Nyström-Based Algorithm for Approximating Self-Attention
Yannic Kilcher via YouTube
Recreate Google Translate - Model Training
Edan Meyer via YouTube