Interpretability via Symbolic Distillation

Offered By: Simons Institute via YouTube

Tags

Interpretability Courses, Transformers Courses

Course Description

Overview

Explore Miles Cranmer's lecture on interpretability via symbolic distillation, delivered as part of the Large Language Models and Transformers series at the Simons Institute. Symbolic distillation trains a neural network on a task and then fits compact symbolic expressions to the network's learned behavior, trading some of the flexibility of deep models for formulas a human can read and check. The lecture surveys these techniques and their application to large language models and transformers, aiming to bridge the gap between the expressive power of neural networks and the interpretability of symbolic systems.
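To make the recipe concrete, here is a minimal sketch of symbolic distillation in Python, using Cranmer's open-source PySR library for the symbolic regression step. The toy target function, network size, and operator set are illustrative assumptions, not details taken from the lecture.

```python
# A minimal sketch of the symbolic-distillation recipe: (1) fit an opaque
# neural network to data, then (2) fit symbolic regression to the network's
# input-output behavior to recover a compact, human-readable formula.
# The ground-truth function below is a hypothetical toy example.
import numpy as np
from sklearn.neural_network import MLPRegressor
from pysr import PySRRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(500, 2))
y = 2.5 * np.cos(X[:, 0]) + X[:, 1] ** 2  # hidden ground truth

# Step 1: train a neural network on the data (the "black box").
net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
net.fit(X, y)

# Step 2: distill the network by fitting symbolic regression to its
# predictions, searching a small operator space for a readable expression.
distiller = PySRRegressor(
    niterations=40,
    binary_operators=["+", "-", "*"],
    unary_operators=["cos"],
)
distiller.fit(X, net.predict(X))

# Inspect the best expression found; with luck it approximates the ground
# truth, e.g. something close to 2.5*cos(x0) + x1*x1.
print(distiller.sympy())
```

The distilled formula can then be checked against domain knowledge or used in place of the network where interpretability matters more than raw accuracy.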

Syllabus

Interpretability via Symbolic Distillation


Taught by

Simons Institute

Related Courses

Machine Learning Modeling Pipelines in Production
DeepLearning.AI via Coursera
Live Responsible AI Dashboard: One-Stop Shop for Operationalizing RAI in Practice - Episode 43
Microsoft via YouTube
Build Responsible AI Using Error Analysis Toolkit
Microsoft via YouTube
Neural Networks Are Decision Trees - With Alexander Mattick
Yannic Kilcher via YouTube
Interpretable Explanations of Black Boxes by Meaningful Perturbation - CAP6412 Spring 2021
University of Central Florida via YouTube