Learning What We Know and Knowing What We Learn - Gaussian Process Priors for Neural Data Analysis
Offered By: MITCBMM via YouTube
Course Description
Overview
Explore Gaussian process priors for neural data analysis in this comprehensive lecture. Delve into the role of latent variable models, Bayesian inference, and covariance kernels in analyzing neural data. Learn about factor analysis, spectral mixture kernels, and marginal likelihood through practical examples and Colab notebooks. Discover the challenges and limitations of data-driven approaches, and examine results from Bayesian GPFA. Gain insights from additional resources, including papers on Gaussian process factor analysis with dynamical structure and extensions to non-Euclidean manifolds. Understand how these techniques apply to real-world scenarios, such as analyzing hippocampal encoding in evidence accumulation tasks.
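To make the core ideas concrete before the syllabus, here is a minimal NumPy sketch (not taken from the lecture or its notebooks) of Gaussian process regression with a squared-exponential covariance kernel, including the log marginal likelihood that the lecture discusses as a model-selection criterion. The toy sine-wave data and all hyperparameter values are illustrative assumptions.

```python
import numpy as np

def rbf_kernel(x1, x2, lengthscale=1.0, variance=1.0):
    """Squared-exponential (RBF) covariance kernel between two 1-D input sets."""
    sqdist = (x1[:, None] - x2[None, :]) ** 2
    return variance * np.exp(-0.5 * sqdist / lengthscale**2)

# Toy data: noisy observations of a sine wave (illustrative, not from the lecture)
rng = np.random.default_rng(0)
x_train = np.linspace(0, 5, 20)
y_train = np.sin(x_train) + 0.1 * rng.standard_normal(20)
x_test = np.linspace(0, 5, 50)

noise_var = 0.1**2
K = rbf_kernel(x_train, x_train) + noise_var * np.eye(len(x_train))
K_s = rbf_kernel(x_train, x_test)

# GP posterior mean at the test points
alpha = np.linalg.solve(K, y_train)
mean = K_s.T @ alpha

# Log marginal likelihood: data fit term + complexity penalty + constant
log_ml = (-0.5 * y_train @ alpha
          - 0.5 * np.linalg.slogdet(K)[1]
          - 0.5 * len(y_train) * np.log(2 * np.pi))
```

The same marginal likelihood, evaluated under different kernels (e.g. the spectral mixture kernels covered later in the syllabus), is what lets Bayesian inference compare covariance structures without a held-out validation set.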
Syllabus
Introduction
Why should we use latent variable models
Fitting latent variable models
Data hungry
Simple regression
Bayesian inference
Covariance
Covariance kernels
Spectral mixture kernels
Marginal likelihood
Correlation kernel
Factor analysis
Colab notebook
Challenges
Bayesian GPFA
Data limitations
Results
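The factor analysis and GPFA topics above can be sketched generatively: low-dimensional latent trajectories are drawn from independent GP priors over time, then mapped linearly to high-dimensional observations with Gaussian noise. The following NumPy sketch of that generative model is an illustrative assumption, not code from the lecture; the dimensions, lengthscale, and noise level are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
T, D_latent, D_obs = 100, 2, 10  # time bins, latent dims, observed dims

def rbf_kernel(t, lengthscale):
    """Squared-exponential covariance over a 1-D time grid."""
    sqdist = (t[:, None] - t[None, :]) ** 2
    return np.exp(-0.5 * sqdist / lengthscale**2)

t = np.linspace(0, 1, T)
# Each latent dimension is an independent GP over time (jitter for stability)
K = rbf_kernel(t, lengthscale=0.1) + 1e-6 * np.eye(T)
L = np.linalg.cholesky(K)
latents = L @ rng.standard_normal((T, D_latent))      # (T, D_latent)

# Linear-Gaussian observation model, as in factor analysis
C = rng.standard_normal((D_obs, D_latent))            # loading matrix
noise_std = 0.1
observations = latents @ C.T + noise_std * rng.standard_normal((T, D_obs))
```

Inference in GPFA inverts this model, recovering smooth latent trajectories from the observations; the Bayesian GPFA results discussed in the lecture additionally place priors over model parameters such as the latent dimensionality.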
Taught by
MITCBMM
Related Courses
Structural Equation Model and its Applications (Mandarin) - The Chinese University of Hong Kong via Coursera
Applied Multivariate Statistical Modeling - Indian Institute of Technology, Kharagpur via Swayam
Structural Equation Model and its Applications (Cantonese) - The Chinese University of Hong Kong via Coursera
Survey analysis to Gain Marketing Insights - Emory University via Coursera
Business Analytics and Digital Media - Indian School of Business via Coursera