On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?
Offered By: Association for Computing Machinery (ACM) via YouTube
Course Description
Overview
Explore the potential risks and ethical concerns surrounding large language models in this thought-provoking conference talk from FAccT 2021. Delve into the concept of "stochastic parrots" as researchers Emily M. Bender, Timnit Gebru, and Angelina McMillan-Major examine whether language models can become too big. Investigate the challenges of curating and documenting vast training datasets, the opportunity costs for research effort, and the environmental impact of training ever-larger models. Gain insights into proposed risk mitigation strategies and consider the broader implications of AI development for society and scientific progress.
Syllabus
Intro
Risks
Unmanageable Data
Research Time
Stochastic Parrots
Risk Mitigation Strategies
Taught by
ACM FAccT Conference
Related Courses
Writing II: Rhetorical Composing (Ohio State University via Coursera)
Practice Based Research in the Arts (Stanford University via NovoEd)
DNA - from structure to therapy (Jacobs University via iversity)
Public Speaking (University of Washington via edX)
Shakespeare and his World (The University of Warwick via FutureLearn)