Differentiable Functional Programming
Offered By: Scala Days Conferences via YouTube
Course Description
Overview
Explore differentiable functional programming in this Scala Days Berlin 2018 conference talk. Dive into parameterised functions, supervised learning, and gradient descent techniques. Understand deep learning as supervised learning of parameterised functions by gradient descent. Examine tensor multiplication, non-linearity, and algorithms for calculating gradients. Compare the mathematician's and the programmer's approaches to differentiation, including symbolic differentiation and automatic differentiation. Learn about dual numbers, why forward mode scales with the input dimension, and how the chain rule composes derivatives. Discover why expressive type systems and compilation to the GPU matter for implementing these ideas with good performance.
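To make the framing concrete, here is a minimal sketch (not the speaker's code) of gradient descent fitting a one-parameter linear model y = w * x to toy data by minimising squared error. The data, step size, and iteration count are illustrative assumptions; the gradient is derived by hand.

```scala
// Toy data where target = 2 * input, so the ideal parameter is w = 2.0
val data = Seq((1.0, 2.0), (2.0, 4.0), (3.0, 6.0))

// Squared-error loss of the parameterised function x => w * x
def loss(w: Double): Double =
  data.map { case (x, y) => val e = w * x - y; e * e }.sum

// Hand-derived gradient: d/dw (w*x - y)^2 = 2 * (w*x - y) * x
def grad(w: Double): Double =
  data.map { case (x, y) => 2.0 * (w * x - y) * x }.sum

val step = 0.01        // illustrative learning rate
var w = 0.0            // start from an arbitrary parameter value
for (_ <- 1 to 200)
  w -= step * grad(w)  // follow the negative gradient downhill
// w converges towards 2.0
```

Deep learning repeats exactly this loop, but with millions of parameters and with the gradient computed automatically rather than by hand, which is what the rest of the talk is about.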
Syllabus
Intro
Parameterised functions
Supervised learning
Gradient descent
Calculate gradient for current parameters
Deep learning is supervised learning of parameterised functions by gradient descent
Tensor multiplication and non-linearity
Algorithms for calculating gradients
Composition of Derivatives
Mathematician's approach
Symbolic differentiation
Programmer's approach
Automatic differentiation approach
Calculate with dual numbers
Forward mode scales with the size of the input dimension
Chain rule doesn't care about order
Tensor dimensions must agree
Solution: expressive type systems
Need compilation (to GPU) for performance
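The dual-number idea from the syllabus can be sketched in a few lines of Scala. This is a minimal illustration (not the talk's actual code): a dual number carries a value together with its derivative, and each arithmetic operation propagates both, applying the sum and product rules so the chain rule is handled mechanically.

```scala
// Forward-mode automatic differentiation with dual numbers.
// Each Dual pairs a value with the derivative of that value.
case class Dual(value: Double, deriv: Double) {
  def +(that: Dual): Dual =
    Dual(value + that.value, deriv + that.deriv) // sum rule
  def *(that: Dual): Dual =
    // product rule: (f * g)' = f' * g + f * g'
    Dual(value * that.value, deriv * that.value + value * that.deriv)
}

object Dual {
  def const(x: Double): Dual = Dual(x, 0.0)     // constants have derivative 0
  def variable(x: Double): Dual = Dual(x, 1.0)  // seed the input with derivative 1
}

// Example: f(x) = x*x + 3x, so f'(x) = 2x + 3; evaluate at x = 2
val x  = Dual.variable(2.0)
val fx = x * x + Dual.const(3.0) * x
// fx.value == 10.0 (f(2)) and fx.deriv == 7.0 (f'(2))
```

One seeded pass yields the derivative with respect to one input, which is why forward mode scales with the input dimension: a function of n inputs needs n passes (or n-dimensional derivative components), whereas reverse mode, which deep learning frameworks favour, gets all input gradients from a single backward pass.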
Taught by
Scala Days Conferences
Related Courses
Introduction to Neural Networks and PyTorch (IBM via Coursera)
Regression with Automatic Differentiation in TensorFlow (Coursera Project Network via Coursera)
Neural Network from Scratch in TensorFlow (Coursera Project Network via Coursera)
Customising your models with TensorFlow 2 (Imperial College London via Coursera)
PyTorch Fundamentals (Microsoft via Microsoft Learn)