YoVDO

Nerve Theorems for Fixed Points of Neural Networks

Offered By: Applied Algebraic Topology Network via YouTube

Tags

Graph Theory Courses
Neural Networks Courses
Computational Neuroscience Courses
Network Engineering Courses

Course Description

Overview

Explore the relationship between network connectivity and neural activity in this 54-minute lecture on nerve theorems for fixed points of neural networks. Delve into the world of threshold linear networks (TLNs) and combinatorial threshold-linear networks (CTLNs), examining how graph structure influences network dynamics. Learn about a novel method of covering CTLN graphs with smaller directional graphs and discover how the nerve of the cover provides insights into fixed points. Understand the power of three "nerve theorems" in constraining network fixed points and effectively reducing the dimensionality of CTLN dynamical systems. Follow along as the speaker illustrates these concepts with examples, including DAG decompositions, cycle nerves, and grid graphs. Gain valuable insights into computational neuroscience and applied algebraic topology as you uncover the intricate connections between graph theory and neural network behavior.
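To make the model concrete: a CTLN is a threshold-linear network whose weight matrix is determined entirely by a directed graph, with dynamics dx/dt = -x + [Wx + θ]₊. The sketch below simulates a CTLN on a 3-cycle graph (a classic example with a limit-cycle attractor), using the standard parameter choices ε = 0.25, δ = 0.5, θ = 1 from the CTLN literature; the function names and Euler integration scheme are illustrative, not from the lecture itself.

```python
import numpy as np

def ctln_weights(adj, eps=0.25, delta=0.5):
    """Build the CTLN weight matrix from a binary adjacency matrix.

    adj[i][j] = 1 means there is an edge j -> i in the graph.
    Edges get weight -1 + eps, non-edges -1 - delta, diagonal 0.
    """
    W = np.where(np.array(adj) == 1, -1.0 + eps, -1.0 - delta)
    np.fill_diagonal(W, 0.0)
    return W

def simulate(W, theta=1.0, x0=None, dt=0.01, steps=5000):
    """Euler-integrate the TLN dynamics dx/dt = -x + [W x + theta]_+."""
    n = W.shape[0]
    x = np.full(n, 0.1) if x0 is None else np.array(x0, dtype=float)
    traj = np.empty((steps, n))
    for t in range(steps):
        # ReLU nonlinearity: firing rates stay nonnegative
        x = x + dt * (-x + np.maximum(W @ x + theta, 0.0))
        traj[t] = x
    return traj

# 3-cycle graph 1 -> 2 -> 3 -> 1 (adj[i][j] = 1 iff j -> i)
adj = [[0, 0, 1],
       [1, 0, 0],
       [0, 1, 0]]
traj = simulate(ctln_weights(adj))
```

Because all off-diagonal weights are negative, the rates remain bounded by θ; the interesting structure (which fixed points exist, and which attractors "live around" them) is exactly what the nerve theorems constrain from the graph alone.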

Syllabus

Intro
How does connectivity shape activity?
Combinatorial Threshold-Linear Networks (CTLNs)
A diversity of dynamical behaviour
Dynamic attractors "live around fixed points"
Graph structure and CTLN fixed points
Nerves: divide and conquer
Directional graphs: a graph G is directional if there is a partition of its nodes V
DAG decompositions
Directional graphs and feed-forward networks
Directional covers and their nerves
Basic examples
Theorem (DAG decomposition)
5-clique chain example
Theorem (cycle nerve)
Grid graph
Network engineering: Grid as a nerve
Dynamical prediction
Summary
Thank you for listening
Iterating the construction


Taught by

Applied Algebraic Topology Network

Related Courses

Neural Networks for Machine Learning
University of Toronto via Coursera
Good Brain, Bad Brain: Basics
University of Birmingham via FutureLearn
Statistical Learning with R
Stanford University via edX
Machine Learning 1—Supervised Learning
Brown University via Udacity
Fundamentals of Neuroscience, Part 2: Neurons and Networks
Harvard University via edX