Accountable LLMs - Metacognitive Intervention Through Sparsity
Offered By: Neuro Symbolic via YouTube
Course Description
Overview
Explore a 31-minute talk by Tianlong Chen, a postdoc at MIT and incoming faculty member at UNC Chapel Hill, on Metacognitive Intervention for Accountable LLMs through Sparsity. Delve into cutting-edge research on making large language models more accountable through sparsity techniques. Gain insights into the intersection of symbolic methods and deep learning as part of the Neuro Symbolic channel's content, which originates from an AI course at Arizona State University. Learn about the latest algorithms, Python packages, and progress towards artificial general intelligence (AGI) in this presentation.
Syllabus
Accountable LLMs (Tianlong Chen, MIT - Metacog AI)
Taught by
Neuro Symbolic
Related Courses
Machine Learning Interpretable: interpretML y LIME
Coursera Project Network via Coursera
Machine Learning Interpretable: SHAP, PDP y permutacion
Coursera Project Network via Coursera
Evaluating Model Effectiveness in Microsoft Azure
Pluralsight
MIT Deep Learning in Life Sciences Spring 2020
Massachusetts Institute of Technology via YouTube
Applied Data Science Ethics
statistics.com via edX