YoVDO

End-to-End Differentiable Proving - Tim Rocktäschel, University of Oxford

Offered By: Alan Turing Institute via YouTube

Tags

Neural Networks Courses Artificial Intelligence Courses Machine Learning Courses Gradient Descent Courses

Course Description

Overview

Explore an innovative approach to knowledge base reasoning in this 42-minute lecture by Tim Rocktäschel from the University of Oxford, presented at the Alan Turing Institute. Delve into the concept of end-to-end differentiable proving using neural networks that operate on dense vector representations of symbols. Discover how this method combines symbolic reasoning with learning subsymbolic vector representations by replacing symbolic unification with a differentiable computation using a radial basis function kernel. Learn how gradient descent enables the neural network to infer facts from incomplete knowledge bases, place similar symbols in close proximity within a vector space, prove queries using these similarities, induce logical rules, and perform multi-hop reasoning. Examine the performance of this architecture compared to ComplEx, a state-of-the-art neural link prediction model, across four benchmark knowledge bases, and understand its ability to induce interpretable function-free first-order logic rules.
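The core idea described above — replacing hard symbolic unification with a soft, differentiable similarity between symbol embeddings — can be illustrated with a radial basis function kernel. The sketch below is only a minimal illustration: the embedding vectors, the `sigma` bandwidth, and the predicate names are invented for the example, not taken from the lecture or the underlying paper.

```python
import math

def rbf_similarity(u, v, sigma=1.0):
    """Soft 'unification' score between two symbol embeddings.

    Computes exp(-||u - v||^2 / (2 * sigma^2)): exactly 1.0 when the
    embeddings are identical, and smoothly approaching 0 as they diverge.
    Because the score is differentiable, gradient descent can pull the
    embeddings of symbols that behave alike closer together.
    """
    sq_dist = sum((a - b) ** 2 for a, b in zip(u, v))
    return math.exp(-sq_dist / (2 * sigma ** 2))

# Hypothetical predicate embeddings (illustrative values only):
grandfather_of = [0.90, 0.10, 0.40]
grandpa_of     = [0.85, 0.15, 0.42]   # near-synonym: should score high
capital_of     = [-0.70, 0.80, -0.30] # unrelated: should score low

high = rbf_similarity(grandfather_of, grandpa_of)
low = rbf_similarity(grandfather_of, capital_of)
```

Under this scheme, a proof that would fail under strict symbolic matching (e.g. a rule stated with `grandpaOf` applied to a query about `grandfatherOf`) instead succeeds with a confidence weighted by the kernel score, which is what lets the network prove queries via learned similarities.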

Syllabus

End-to-End Differentiable Proving: Tim Rocktäschel, University of Oxford


Taught by

Alan Turing Institute

Related Courses

Introduction to Artificial Intelligence
Stanford University via Udacity
Probabilistic Graphical Models 1: Representation
Stanford University via Coursera
Artificial Intelligence for Robotics
Stanford University via Udacity
Computer Vision: The Fundamentals
University of California, Berkeley via Coursera
Learning from Data (Introductory Machine Learning course)
California Institute of Technology via Independent