
End-to-End Differentiable Proving - Tim Rocktäschel, University of Oxford

Offered By: Alan Turing Institute via YouTube

Tags

Neural Networks, Artificial Intelligence, Machine Learning, Gradient Descent

Course Description

Overview

Explore an innovative approach to knowledge base reasoning in this 42-minute lecture by Tim Rocktäschel from the University of Oxford, presented at the Alan Turing Institute. Delve into the concept of end-to-end differentiable proving using neural networks that operate on dense vector representations of symbols. Discover how this method combines symbolic reasoning with learning subsymbolic vector representations by replacing symbolic unification with a differentiable computation using a radial basis function kernel. Learn how gradient descent enables the neural network to infer facts from incomplete knowledge bases, place similar symbols in close proximity within a vector space, prove queries using these similarities, induce logical rules, and perform multi-hop reasoning. Examine the performance of this architecture compared to ComplEx, a state-of-the-art neural link prediction model, across four benchmark knowledge bases, and understand its ability to induce interpretable function-free first-order logic rules.
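The core idea described above, replacing symbolic unification with a differentiable similarity computed by a radial basis function kernel over symbol embeddings, can be sketched in a few lines. This is a minimal illustration, not the lecture's actual implementation: the function and embedding names below are hypothetical, and the 2-D toy vectors are invented for demonstration.

```python
import numpy as np

def rbf_unify(v_s, v_t, mu=1.0):
    """Soft unification score in (0, 1]: an RBF kernel applied to the
    squared Euclidean distance between two symbol embeddings.
    Identical embeddings score 1.0; distant ones approach 0."""
    dist_sq = np.sum((np.asarray(v_s) - np.asarray(v_t)) ** 2)
    return float(np.exp(-dist_sq / (2.0 * mu ** 2)))

# Toy relation embeddings (hypothetical): training by gradient descent
# would place semantically similar symbols close together.
embeddings = {
    "grandfatherOf": np.array([1.0, 0.1]),
    "grandpaOf":     np.array([0.9, 0.2]),
    "bornIn":        np.array([-1.0, 0.8]),
}

# A symbol unifies perfectly with itself...
print(rbf_unify(embeddings["grandfatherOf"], embeddings["grandfatherOf"]))  # 1.0

# ...and more strongly with a nearby symbol than with a distant one,
# which is what lets a proof succeed via similarity rather than
# exact symbol matching.
print(rbf_unify(embeddings["grandfatherOf"], embeddings["grandpaOf"]) >
      rbf_unify(embeddings["grandfatherOf"], embeddings["bornIn"]))  # True
```

Because the score is a smooth function of the embeddings, gradients flow through every unification step in a proof, which is what allows the whole prover to be trained end to end.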

Syllabus

End-to-End Differentiable Proving: Tim Rocktäschel, University of Oxford


Taught by

Alan Turing Institute

Related Courses

Practical Predictive Analytics: Models and Methods
University of Washington via Coursera
Deep Learning Fundamentals with Keras
IBM via edX
Introduction to Machine Learning
Duke University via Coursera
Intro to Deep Learning with PyTorch
Facebook via Udacity
Introduction to Machine Learning for Coders!
fast.ai via Independent