How Brain Computations Can Inspire New Paths in AI - Part 2
Offered By: MITCBMM via YouTube
Course Description
Overview
Explore how brain computations can inspire new paths in artificial intelligence in this lecture by Gabriel Kreiman of Harvard University and Children's Hospital Boston. Delve into current computational models and their limitations, examining topics such as occluded objects, backward masking, and limiting presentation time. Analyze observations and interpretations at the neurophysiological level, including individual trials and computational models such as recurrent neural networks (RNNs). Investigate object recognition, minimal context, and contextual reasoning, while evaluating model performance. Examine computer graphics, adversarial images, and the challenges of understanding humor in images. Gain insights into the intersection of neuroscience and AI, uncovering potential avenues for advancing machine learning algorithms inspired by human brain function.
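A central theme of the lecture is that recurrent computations may help a visual system complete and recognize heavily occluded patterns that purely feed-forward models struggle with. As a loose illustration only, and not the model presented in the lecture, the sketch below runs a few recurrent update steps on top of a generic feed-forward feature vector before classification; the layer sizes, step count, and class count are arbitrary placeholders.

import torch
import torch.nn as nn

class RecurrentReadout(nn.Module):
    # Illustrative sketch: iterate a learned recurrent update on a
    # feed-forward feature vector so partially occluded inputs can
    # settle toward a completed representation before readout.
    def __init__(self, feat_dim=512, n_classes=10, n_steps=5):
        super().__init__()
        self.n_steps = n_steps
        self.recur = nn.Linear(feat_dim, feat_dim, bias=False)
        self.readout = nn.Linear(feat_dim, n_classes)

    def forward(self, feats):
        h = feats
        for _ in range(self.n_steps):
            # Each step mixes the evolving state with the bottom-up input.
            h = torch.tanh(self.recur(h) + feats)
        return self.readout(h)

model = RecurrentReadout()
occluded_feats = torch.randn(8, 512)   # stand-in for CNN features of occluded images
logits = model(occluded_feats)         # shape: (8, 10)

The design choice to feed the bottom-up input back in at every step mirrors the general intuition, discussed in the lecture, that recurrence lets partial evidence accumulate over time rather than forcing a single feed-forward pass to resolve the occlusion.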
Syllabus
Intro
What current computational models capture
Occluded objects
Bubbles
Backward masking
Limiting presentation time
Observations
Interpretation
Neurophysiological level
Individual trials
Computational model
RNNh
Unfolding and Folding
Object recognition
Minimal context
Contextual reasoning
Model performance
Computer graphics
Paper picks can fly
Adversarial images (see the sketch after this syllabus)
Understanding an image
Predicting humor
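The syllabus item on adversarial images refers to inputs carrying tiny, targeted perturbations that flip a network's prediction while looking unchanged to people. As a hedged illustration of the general idea, not material from the lecture itself, a one-step Fast Gradient Sign Method perturbation can be sketched as follows; the model, image, label, and epsilon value are placeholders.

import torch
import torch.nn as nn

def fgsm_example(model, image, label, eps=0.03):
    # One-step Fast Gradient Sign Method: nudge every pixel in the
    # direction that increases the classification loss.
    image = image.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(image), label)
    loss.backward()
    adv = image + eps * image.grad.sign()   # small, nearly invisible change
    return adv.clamp(0.0, 1.0).detach()     # keep pixels in the valid range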
Taught by
MITCBMM
Related Courses
2D image processing (Higher School of Economics via Coursera)
A Simple Picture Storing App with Java and Android Studio (Coursera Project Network via Coursera)
Using Python's Math, Science, and Engineering Libraries (A Cloud Guru)
Exam Prep AI-102: Microsoft Azure AI Engineer Associate (Whizlabs via Coursera)
AI Capstone Project with Deep Learning (IBM via Coursera)