Cognitive Maps in Large Language Models - Multiscale Predictive Representations
Offered By: Santa Fe Institute via YouTube
Course Description
Overview
Explore a conference talk examining cognitive maps in large language models, focusing on multiscale predictive representations in hippocampal and prefrontal hierarchies. Compare GPT-4 32K and GPT-3.5 Turbo at temperature settings of 0, 0.5, and 1, analyzing their performance on graph navigation and shortest-path problems. Investigate whether chain-of-thought (CoT) prompting can improve LLMs' cognitive map capabilities. Consider what errors and response latencies, long treated as windows into minds and brains in the cognitive and neural sciences, might reveal about AI systems, while acknowledging the fundamental differences between LLMs and human cognition.
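The model-and-temperature comparison above can be made concrete with a small probe. The following is a minimal sketch assuming the OpenAI Python client (v1+); the model identifiers, toy graph, and prompt wording are illustrative assumptions, not the speaker's actual protocol.

```python
# Minimal sketch: ask two chat models for a shortest path at several
# temperatures. Assumes OPENAI_API_KEY is set in the environment; the model
# names are those available around the time of the talk and may no longer
# be served.
from openai import OpenAI

client = OpenAI()

# Toy graph: two triangles (0-1-2 and 3-4-5) joined by the bridge edge 1-4,
# a miniature of the dense community structure the talk says trips up GPT-4.
EDGES = "0-1, 0-2, 1-2, 1-4, 3-4, 3-5, 4-5"
PROMPT = (
    f"Consider an undirected graph with edges: {EDGES}. "
    "List the nodes on the shortest path from node 0 to node 5."
)

for model in ("gpt-4-32k", "gpt-3.5-turbo"):
    for temperature in (0.0, 0.5, 1.0):
        reply = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": PROMPT}],
            temperature=temperature,
        )
        print(model, temperature, reply.choices[0].message.content)
```

Running each condition repeatedly and scoring the answers against the true shortest path is one way to reproduce the kind of error analysis the talk describes.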
Syllabus
Intro
Multiscale Predictive Cognitive Maps in Hippocampal & Prefrontal Hierarchies
Cognitive Maps: learned representations of relational structures for goal-directed multistep planning & inference
Conditions
GPT-4 32K vs. GPT-3.5 Turbo at temperatures 0, 0.5, and 1
GPT-4 32K is comfortable with deeper trees
GPT-4 fails shortest-path problems in graphs with dense community structure & sometimes hallucinates edges
Can chain-of-thought (CoT) prompts improve LLMs' cognitive map performance? (see the sketch after this syllabus)
In the cognitive & neural sciences, errors & response latencies are windows into minds & brains. What about AI/LLMs?
LLMs are not comparable to a single person: specific latent states in response to a prompt may appear so, but they don't qualify for mental life
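As a companion to the sketch above, here is one hypothetical chain-of-thought variant of the same shortest-path probe, with a ground-truth check for hallucinated edges. The prompt wording is an illustrative assumption, and the reference path is computed with networkx rather than taken from the talk.

```python
# Hypothetical CoT prompt: ask the model to enumerate neighbors and expand
# paths before committing to an answer, then validate the answer offline.
import networkx as nx

COT_PROMPT = (
    "Consider an undirected graph with edges: 0-1, 0-2, 1-2, 1-4, 3-4, 3-5, 4-5. "
    "Think step by step: first list each node's neighbors, then expand paths "
    "outward from node 0, and only then state the shortest path from 0 to 5."
)

# Ground truth for scoring the model's answer.
G = nx.Graph([(0, 1), (0, 2), (1, 2), (1, 4), (3, 4), (3, 5), (4, 5)])
print(nx.shortest_path(G, source=0, target=5))  # -> [0, 1, 4, 5]

def path_is_valid(path, graph):
    """True if every consecutive pair in `path` is a real edge,
    i.e. the model has not hallucinated any edges."""
    return all(graph.has_edge(u, v) for u, v in zip(path, path[1:]))
```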
Taught by
Santa Fe Institute
Related Courses
Google BARD and ChatGPT AI for Increased Productivity (Udemy)
Bringing LLM to the Enterprise - Training From Scratch or Just Fine-Tune With Cerebras-GPT (Prodramp via YouTube)
Generative AI and Long-Term Memory for LLMs (James Briggs via YouTube)
Extractive Q&A With Haystack and FastAPI in Python (James Briggs via YouTube)
OpenAssistant First Models Are Here! - Open-Source ChatGPT (Yannic Kilcher via YouTube)