Modeling Bottom-Up and Top-Down Visual Attention in Humans and Monkeys - 2009
Offered By: Center for Language & Speech Processing (CLSP), JHU via YouTube
Course Description
Overview
Explore the complex interplay between bottom-up and top-down visual processing in a lecture by Dr. Laurent Itti from the University of Southern California. Delve into the mathematical principles and neuro-computational architectures underlying visual attentional selection in humans and monkeys. Discover how these models can be applied to real-world vision challenges using stimuli from television and video games. Learn about Dr. Itti's research on developing flexible models of visual attention that can be modulated by specific tasks. Gain insights into the comparison of model predictions with behavioral recordings from primates. Understand the importance of combining sensory signals from the environment with behavioral goals in processing complex natural environments. Examine the speaker's background in electrical engineering, computation, and neural systems, as well as his extensive research and teaching experience in artificial intelligence, robotics, and biological vision.
Syllabus
Modeling Bottom-Up and Top-Down Visual Attention in Humans and Monkeys – Laurent Itti (USC) - 2009
Taught by
Center for Language & Speech Processing (CLSP), JHU
Related Courses
Introduction to Artificial Intelligence – Stanford University via Udacity
Probabilistic Graphical Models 1: Representation – Stanford University via Coursera
Artificial Intelligence for Robotics – Stanford University via Udacity
Computer Vision: The Fundamentals – University of California, Berkeley via Coursera
Learning from Data (Introductory Machine Learning course) – California Institute of Technology via Independent