YoVDO

Cognitive Maps in Large Language Models - Multiscale Predictive Representations

Offered By: Santa Fe Institute via YouTube

Tags

LLM (Large Language Model) Courses GPT-4 Courses

Course Description

Overview

Explore a conference talk examining cognitive maps in large language models, focusing on multiscale predictive representations in hippocampal and prefrontal hierarchies. Delve into the comparison between GPT-4 32K and GPT-3.5 Turbo under various temperature settings, analyzing their performance in graph navigation and shortest path problems. Investigate the potential of chain of thought prompts to enhance LLMs' cognitive map capabilities. Consider the implications of errors and response latencies in understanding AI systems, while acknowledging the fundamental differences between LLMs and human cognition.
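The talk evaluates LLM answers to shortest-path queries against exact solutions. As an illustration only (the graph and code below are hypothetical, not from the talk), ground truth for such an evaluation can be computed with breadth-first search:

```python
from collections import deque

def shortest_path(graph, start, goal):
    """Breadth-first search returning one shortest path as a list of nodes, or None."""
    queue = deque([[start]])
    visited = {start}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == goal:
            return path
        for nbr in graph[node]:
            if nbr not in visited:
                visited.add(nbr)
                queue.append(path + [nbr])
    return None

# Hypothetical graph with community structure: two dense clusters
# joined by a single bridge edge (C-D).
graph = {
    "A": ["B", "C"], "B": ["A", "C"], "C": ["A", "B", "D"],
    "D": ["C", "E", "F"], "E": ["D", "F"], "F": ["D", "E"],
}
print(shortest_path(graph, "A", "F"))  # ['A', 'C', 'D', 'F']
```

A model's proposed path can then be checked edge by edge against the adjacency list, which also catches the hallucinated edges mentioned in the syllabus.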

Syllabus

Intro
Multiscale predictive cognitive maps in hippocampal & prefrontal hierarchies
Cognitive maps: learned representations of relational structures for goal-directed multistep planning & inference
Conditions
GPT-4 32K vs. GPT-3.5 Turbo at temperatures 0, 0.5, and 1
GPT-4 32K is comfortable with deeper trees
GPT-4 fails shortest path in graphs with dense community structure & sometimes hallucinates edges
Can chain-of-thought (CoT) prompts improve LLMs' cognitive map performance?
In the cognitive & neuro-sciences, errors & response latencies are windows into minds & brains; and into AI/LLMs?
LLMs are not comparable to one person; specific latent states in response to a prompt may appear so, but they don't qualify for mental life
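The syllabus asks whether chain-of-thought prompting can improve cognitive-map performance. A minimal sketch of how such a prompt might be constructed for a shortest-path query (the function name and prompt wording are illustrative assumptions, not the speakers' actual prompts):

```python
def cot_shortest_path_prompt(edges, start, goal):
    """Build a hypothetical chain-of-thought prompt for a shortest-path query.

    edges: list of (u, v) pairs describing an undirected graph.
    """
    edge_list = "; ".join(f"{u}-{v}" for u, v in edges)
    return (
        f"You are navigating an undirected graph with edges: {edge_list}.\n"
        f"Find the shortest path from {start} to {goal}.\n"
        "Let's think step by step: first list the neighbors of the start node, "
        "then expand outward one level at a time, and only use edges that exist."
    )

# Example usage with a toy graph
edges = [("A", "B"), ("B", "C"), ("C", "D")]
print(cot_shortest_path_prompt(edges, "A", "D"))
```

The "think step by step" instruction is the standard CoT trigger phrase; the explicit warning about nonexistent edges targets the edge-hallucination failure mode noted above.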


Taught by

Santa Fe Institute

Related Courses

Artificial Intelligence: Reinforcement Learning in Python
Udemy
Advanced AI: Deep Reinforcement Learning in Python
Udemy
Cutting-Edge AI: Deep Reinforcement Learning in Python
Udemy
GPT-4 - What, Why, How?
Edan Meyer via YouTube
GPT 4 - Superpower Results With Search
James Briggs via YouTube