On Sparse Linear Programming and Simple Neural Networks

Offered By: Institute for Pure & Applied Mathematics (IPAM) via YouTube

Tags

Neural Networks Courses
Deep Learning Courses
Linear Programming Courses
Combinatorial Optimization Courses

Course Description

Overview

Explore a thought-provoking lecture on the intersection of linear programming, sparse inference constraints, and neural networks. Delve into two key instances where these optimization and estimation problems naturally arise in neural network analysis. Discover how overparametrized learning with shallow ReLU networks can be reduced to a finite-dimensional linear program, bridging the gap between NP-hard worst-case scenarios and practical gradient-based learning solutions. Examine the computational perspective of sparse inference, comparing traditional L1-relaxed linear programming approaches with iterative shrinkage algorithms and their unrolled counterparts: deep, narrow networks. Investigate whether depth is truly necessary for sparse inference tasks through approximation lower bounds for shallow, wide networks. Gain insight into this collaborative research, presented by Joan Bruna of New York University as part of the Deep Learning and Combinatorial Optimization 2021 program at the Institute for Pure and Applied Mathematics (IPAM), UCLA.
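To make the "iterative shrinkage" idea mentioned above concrete, here is a minimal NumPy sketch of ISTA (iterative shrinkage-thresholding) for the L1-regularized least-squares problem min_x 0.5‖Ax − b‖² + λ‖x‖₁. The measurement matrix, sparsity pattern, and parameter values below are illustrative assumptions, not taken from the lecture:

```python
import numpy as np

def soft_threshold(v, tau):
    # Proximal operator of the L1 norm: shrinks each entry toward zero by tau.
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def ista(A, b, lam, n_iter=1000):
    """ISTA for min_x 0.5*||Ax - b||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the smooth part
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        # Gradient step on the quadratic term, then soft-thresholding.
        x = soft_threshold(x - (A.T @ (A @ x - b)) / L, lam / L)
    return x

# Illustrative example: recover a 3-sparse vector from 40 random measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100)) / np.sqrt(40)
x_true = np.zeros(100)
x_true[[5, 37, 80]] = [1.0, -2.0, 1.5]
b = A @ x_true
x_hat = ista(A, b, lam=0.01)
```

"Unrolling" this algorithm, in the sense the description alludes to, amounts to fixing a number of iterations and treating each iteration as one layer of a deep, narrow network whose weights are then learned rather than fixed to Aᵀ.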

Syllabus

Joan Bruna: "On Sparse Linear Programming and (simple) neural networks"


Taught by

Institute for Pure & Applied Mathematics (IPAM)

Related Courses

Linear and Discrete Optimization
École Polytechnique Fédérale de Lausanne via Coursera
Linear and Integer Programming
University of Colorado Boulder via Coursera
Graph Partitioning and Expanders
Stanford University via NovoEd
Discrete Inference and Learning in Artificial Vision
École Centrale Paris via Coursera
Convex Optimization
Stanford University via edX