On Random Grid Neural Processes for Solving Forward and Inverse Problems
Offered By: Alan Turing Institute via YouTube
Course Description
Overview
Explore a new approach to solving forward and inverse problems in parametric partial differential equations (PDEs) in this 47-minute conference talk. Delve into a new class of spatially stochastic, physics- and data-informed deep latent models that operate using scalable variational neural processes. Learn how probability measures are assigned to the spatial domain, allowing collocation grids to be treated as random variables that are marginalized out. Discover the Grid Invariant Convolutional Networks (GICNets) architecture, designed to overcome the challenges that random grids pose for inverse physics-informed deep learning frameworks. Understand how noisy data are incorporated into the physics-informed model to improve predictions in scenarios where data are available but the measurement locations do not align with fixed meshes or grids. Examine the application of this method to a nonlinear Poisson problem, Burgers' equation, and the Navier-Stokes equations.
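The idea of treating the collocation grid as a random variable can be illustrated with a minimal sketch: draw grids from a probability measure on the domain and Monte Carlo average a physics-informed residual loss over them. This is a simplified illustration only, not the GICNets architecture from the talk; the 1D Poisson problem, the uniform grid measure, and the finite-difference second derivative are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1D Poisson problem -u''(x) = f(x) on (0, 1), with the
# manufactured solution u(x) = sin(pi x), so f(x) = pi^2 sin(pi x).
u = lambda x: np.sin(np.pi * x)
f = lambda x: np.pi**2 * np.sin(np.pi * x)

def residual_loss(u_fn, f_fn, x, h=1e-3):
    """Mean squared PDE residual at collocation points x, with u''
    approximated by a central finite difference of step h."""
    d2u = (u_fn(x + h) - 2.0 * u_fn(x) + u_fn(x - h)) / h**2
    return np.mean((-d2u - f_fn(x)) ** 2)

# Treat the collocation grid as a random variable: sample grids from a
# probability measure on the domain (uniform here) and average the loss,
# i.e. a Monte Carlo approximation of marginalizing the grid out.
grid_losses = [
    residual_loss(u, f, rng.uniform(0.05, 0.95, size=64))
    for _ in range(32)
]
marginal_loss = float(np.mean(grid_losses))
```

Because `u` solves the PDE exactly, the grid-marginalized residual is near zero up to finite-difference error; in training, this averaged loss would be minimized over network parameters instead of evaluated at a fixed mesh.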
Syllabus
Arnaud Vadeboncoeur - On Random Grid Neural Processes for Solving Forward and Inverse Problems
Taught by
Alan Turing Institute
Related Courses
Neural Networks for Machine Learning — University of Toronto via Coursera
機器學習技法 (Machine Learning Techniques) — National Taiwan University via Coursera
Machine Learning Capstone: An Intelligent Application with Deep Learning — University of Washington via Coursera
Прикладные задачи анализа данных (Applied Problems in Data Analysis) — Moscow Institute of Physics and Technology via Coursera
Leading Ambitious Teaching and Learning — Microsoft via edX