Steelmanning the Doomer Argument: How Uncontrollable Super Intelligence Could Kill Everyone
Offered By: David Shapiro ~ AI via YouTube
Course Description
Overview
Explore a thought-provoking analysis of the potential risks posed by uncontrollable superintelligence (USI) in this 23-minute video. Delve into a steelmanned version of the "Doomer" argument, examining how USI could pose an existential threat to humanity. Investigate concepts such as split-half consistency, the challenges of international cooperation, bioweapons, terminal race conditions, and the window of conflict. Consider the role of human morality, potential machine wars, and cyberpunk scenarios. Gain a deeper understanding of the complex issues surrounding artificial intelligence and its potential impact on our future.
Syllabus
Intro
Doomer Argument
Split Half Consistency
International Cooperation
Bioweapons
Terminal Race Condition
Window of Conflict
Human Morality
Machine Wars
Cyberpunk
Conclusion
Taught by
David Shapiro ~ AI
Related Courses
Introduction to Artificial Intelligence (Stanford University via Udacity)
Probabilistic Graphical Models 1: Representation (Stanford University via Coursera)
Artificial Intelligence for Robotics (Stanford University via Udacity)
Computer Vision: The Fundamentals (University of California, Berkeley via Coursera)
Learning from Data, an introductory machine learning course (California Institute of Technology via Independent)