Differentiable Functional Programming
Offered By: Scala Days Conferences via YouTube
Course Description
Overview
Explore differentiable functional programming in this Scala Days Berlin 2018 conference talk. Dive into parameterised functions, supervised learning, and gradient descent, and see deep learning framed as supervised learning of parameterised functions by gradient descent. Examine tensor multiplication, non-linearity, and algorithms for calculating gradients. Compare the mathematician's and the programmer's approaches to differentiation, including symbolic differentiation and automatic differentiation. Learn about dual numbers, how forward-mode differentiation scales with the input dimension, and why the chain rule doesn't care about order. Discover why expressive type systems help keep tensor dimensions in agreement, and why compilation (to the GPU) is needed for performance.
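The dual-number technique mentioned in the overview can be sketched in a few lines of Scala. This is a minimal illustration of forward-mode automatic differentiation, not the speaker's actual code; the names `Dual`, `const`, and `derive` are my own:

```scala
// Forward-mode automatic differentiation with dual numbers:
// each value carries its derivative alongside it.
case class Dual(value: Double, deriv: Double) {
  def +(that: Dual): Dual =
    Dual(value + that.value, deriv + that.deriv)
  def *(that: Dual): Dual =
    // product rule: (f * g)' = f' * g + f * g'
    Dual(value * that.value, deriv * that.value + value * that.deriv)
}

object Dual {
  // Lift a constant: its derivative is 0.
  def const(x: Double): Dual = Dual(x, 0.0)
  // Differentiate f at x by seeding the derivative component with 1.
  def derive(f: Dual => Dual)(x: Double): Double =
    f(Dual(x, 1.0)).deriv
}

// Example: f(x) = x * x + 3x, so f'(x) = 2x + 3 and f'(2) = 7.
val fPrimeAt2 = Dual.derive(x => x * x + Dual.const(3.0) * x)(2.0)
```

Because every arithmetic operation propagates the derivative component, the chain rule is applied automatically as the function runs; but, as the syllabus notes, this forward mode requires one pass per input dimension, which is why deep learning uses reverse mode instead.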
Syllabus
Intro
Parameterised functions
Supervised learning
Gradient descent
Calculate gradient for current parameters
Deep learning is supervised learning of parameterised functions by gradient descent
Tensor multiplication and non-linearity
Algorithms for calculating gradients
Composition of derivatives
Mathematician's approach
Symbolic differentiation
Programmer's approach
Automatic differentiation approach
Calculate with dual numbers
Forward-mode scales in the size of the input dimension
Chain rule doesn't care about order
Tensor dimensions must agree
Solution: expressive type systems
Need compilation (to GPU) for performance
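The last two syllabus points ("Tensor dimensions must agree" and "Solution: expressive type systems") can be illustrated with a small sketch, assuming nothing about the speaker's implementation: the phantom type parameters `R` and `C` below tag each matrix with its dimensions, so a multiplication whose inner dimensions disagree fails to compile rather than at runtime. The names (`Matrix`, `In`, `Hidden`, `Out`) are purely illustrative:

```scala
// A matrix tagged with phantom dimension types R (rows) and C (cols),
// stored row-major in a flat Vector.
case class Matrix[R, C](rows: Int, cols: Int, data: Vector[Double]) {
  // Multiplication only type-checks when inner dimensions agree:
  // Matrix[R, K] * Matrix[K, C2] => Matrix[R, C2]
  def *[C2](that: Matrix[C, C2]): Matrix[R, C2] = {
    require(cols == that.rows, "inner dimensions must agree")
    val out = for {
      i <- 0 until rows
      j <- 0 until that.cols
    } yield (0 until cols)
      .map(k => data(i * cols + k) * that.data(k * that.cols + j))
      .sum
    Matrix[R, C2](rows, that.cols, out.toVector)
  }
}

// Dimension tags for a tiny network layer.
trait In; trait Hidden; trait Out

val w = Matrix[Hidden, In](2, 3, Vector.fill(6)(1.0))
val x = Matrix[In, Out](3, 1, Vector.fill(3)(1.0))

val h = w * x        // compiles: inner dimension In matches
// val bad = x * w   // rejected at compile time: Out != Hidden
```

This catches shape mismatches before the program runs, which is exactly the kind of guarantee the talk argues for; real libraries push the idea further by tracking sizes, not just dimension labels, in types.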
Taught by
Scala Days Conferences
Related Courses
Machine Learning (University of Washington via Coursera)
Machine Learning (Stanford University via Coursera)
Machine Learning (Georgia Institute of Technology via Udacity)
Statistical Learning with R (Stanford University via edX)
Machine Learning 1—Supervised Learning (Brown University via Udacity)