On Random Grid Neural Processes for Solving Forward and Inverse Problems
Offered By: Alan Turing Institute via YouTube
Course Description
Overview
Explore a groundbreaking approach to solving forward and inverse problems in parametric partial differential equations (PDEs) in this 47-minute conference talk. Delve into the introduction of a new class of spatially stochastic physics- and data-informed deep latent models that operate using scalable variational neural processes. Learn how probability measures are assigned to the spatial domain, allowing collocation grids to be treated as random variables that are marginalized out. Discover the Grid Invariant Convolutional Networks (GICNets) architecture, designed to overcome the challenges posed by using random grids in inverse physics-informed deep learning frameworks. Understand how noisy data is incorporated into the physics-informed model to improve predictions when data is available but measurement locations do not align with a fixed mesh or grid. Examine the application of this method to a nonlinear Poisson problem, the Burgers equation, and the Navier-Stokes equations.
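The sketch below is not the speaker's GICNet implementation; it is a minimal PyTorch illustration of one idea mentioned in the overview: collocation points are drawn from a probability measure on the spatial domain and redrawn at every training step, so the physics-informed loss acts as a Monte Carlo estimate that marginalizes over random grids. The 1D nonlinear Poisson toy problem u''(x) + u(x)^3 = f(x) on [0, 1] with zero boundary values, the network sizes, and the forcing term are all assumptions made for illustration.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Simple fully connected surrogate for the PDE solution u(x).
u_net = nn.Sequential(
    nn.Linear(1, 64), nn.Tanh(),
    nn.Linear(64, 64), nn.Tanh(),
    nn.Linear(64, 1),
)
opt = torch.optim.Adam(u_net.parameters(), lr=1e-3)

def f(x):
    # Assumed forcing term for the toy problem.
    return torch.sin(torch.pi * x)

for step in range(2000):
    # Random collocation grid: points sampled from a uniform measure on [0, 1],
    # redrawn each step rather than kept as a fixed mesh.
    x = torch.rand(256, 1, requires_grad=True)

    u = u_net(x)
    du = torch.autograd.grad(u, x, torch.ones_like(u), create_graph=True)[0]
    d2u = torch.autograd.grad(du, x, torch.ones_like(du), create_graph=True)[0]

    # Physics residual of the nonlinear Poisson equation at the random grid.
    residual = d2u + u**3 - f(x)

    # Boundary penalty enforcing u(0) = u(1) = 0.
    xb = torch.tensor([[0.0], [1.0]])
    loss = residual.pow(2).mean() + u_net(xb).pow(2).mean()

    opt.zero_grad()
    loss.backward()
    opt.step()
```

Averaging the residual over freshly sampled points at each step is what approximates marginalizing the grid; the talk's GICNet architecture and its treatment of noisy, off-grid measurements go beyond this simple sketch.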
Syllabus
Arnaud Vadeboncoeur - On Random Grid Neural Processes for Solving Forward and Inverse Problems
Taught by
Alan Turing Institute
Related Courses
Computational Fluid Dynamics
Indian Institute of Technology Madras via Swayam
Modeling Transport Phenomena of Microparticles
Indian Institute of Technology, Kharagpur via Swayam
Transport Phenomena in Materials
Indian Institute of Technology Madras via Swayam
Advanced Fluid Mechanics
Indian Institute of Technology, Kharagpur via Swayam
Fluid and Particle Mechanics
Indian Institute of Technology Madras via Swayam