Are Neural Networks Optimal Approximation Algorithms for Constraint Satisfaction Problems?
Offered By: USC Probability and Statistics Seminar via YouTube
Course Description
Overview
Explore the capabilities of neural networks in solving NP-hard optimization problems, particularly constraint satisfaction problems, in this 40-minute talk from the USC Probability and Statistics Seminar. Delve into the OptGNN graph neural network architecture and its ability to capture optimal approximation algorithms for constraint satisfaction problems. Discover how OptGNN functions as a convex program solver, producing bounds on the optimality of combinatorial solutions. Examine how OptGNN performs against state-of-the-art unsupervised neural baselines on neural combinatorial optimization benchmarks. Gain insights into the connections between neural networks and computation, and consider potential avenues for future research in this field.
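The overview describes OptGNN as a learned convex-program solver. The talk's architecture is not reproduced here, but a minimal sketch of the kind of computation involved, gradient steps on the low-rank (Burer-Monteiro) form of the Goemans-Williamson Max-Cut SDP followed by random-hyperplane rounding, looks like the following. The function names, toy graph, and hyperparameters (rank, steps, lr) are illustrative assumptions, not taken from the talk.

```python
import numpy as np

def maxcut_embeddings(adj, rank=8, steps=500, lr=0.05, seed=0):
    # Gradient ascent on the low-rank (Burer-Monteiro) form of the
    # Goemans-Williamson Max-Cut SDP: maximize
    #   sum_{i<j} A_ij * (1 - <v_i, v_j>) / 2   subject to ||v_i|| = 1.
    # Each step is a linear "message pass" (adj @ V) followed by row
    # normalization; hyperparameters here are illustrative assumptions.
    rng = np.random.default_rng(seed)
    n = adj.shape[0]
    V = rng.normal(size=(n, rank))
    V /= np.linalg.norm(V, axis=1, keepdims=True)
    for _ in range(steps):
        V -= lr * (adj @ V)  # ascent direction: push neighbors apart
        V /= np.linalg.norm(V, axis=1, keepdims=True)  # back to unit sphere
    return V

def hyperplane_round(adj, V, trials=50, seed=0):
    # Goemans-Williamson rounding: cut vertices by the sign of a random
    # projection of their embeddings; keep the best cut over many trials.
    rng = np.random.default_rng(seed)
    best_val, best_cut = -1.0, None
    for _ in range(trials):
        signs = np.sign(V @ rng.normal(size=V.shape[1]))
        # Symmetric adjacency counts each edge twice, hence the / 2.
        val = np.sum(adj * (signs[:, None] != signs[None, :])) / 2
        if val > best_val:
            best_val, best_cut = val, signs
    return best_cut, best_val

# Toy example (illustrative): a 5-cycle, whose maximum cut has value 4.
n = 5
adj = np.zeros((n, n))
for i in range(n):
    adj[i, (i + 1) % n] = adj[(i + 1) % n, i] = 1.0

V = maxcut_embeddings(adj)
cut, val = hyperplane_round(adj, V)
print("cut value found:", val)  # typically 4 on this toy graph
```

The design point of the sketch is that each solver iteration is a local message-passing step over the graph, which is the sense in which a graph neural network layer can represent such an approximation algorithm.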
Syllabus
Morris Yau: Are Neural Networks Optimal Approximation Algorithms? (MIT)
Taught by
USC Probability and Statistics Seminar
Related Courses
Deep Learning for Natural Language Processing
University of Oxford via Independent
Improving Deep Neural Networks: Hyperparameter Tuning, Regularization and Optimization
DeepLearning.AI via Coursera
Deep Learning Part 1 (IITM)
Indian Institute of Technology Madras via Swayam
Deep Learning - Part 1
Indian Institute of Technology, Ropar via Swayam
Logistic Regression with Python and Numpy
Coursera Project Network via Coursera