YoVDO

Steelmanning the Doomer Argument: How Uncontrollable Super Intelligence Could Kill Everyone

Offered By: David Shapiro ~ AI via YouTube

Tags

Artificial Intelligence Courses, Machine Learning Courses, Ethics Courses, Futurism Courses, Transhumanism Courses

Course Description

Overview

Explore a thought-provoking analysis of the potential risks posed by uncontrollable superintelligence (USI) in this 23-minute video. Delve into a steelmanned version of the "Doomer" argument, examining how USI could become an existential threat to humanity. Investigate concepts such as split-half consistency, the challenges of international cooperation, bioweapons, terminal race conditions, and the window of conflict. Consider the role of human morality, the possibility of machine wars, and cyberpunk scenarios. Gain a deeper understanding of the complex issues surrounding artificial intelligence and its potential impact on our future.

Syllabus

Intro
Doomer Argument
Split Half Consistency
International Cooperation
Bioweapons
Terminal Race Condition
Window of Conflict
Human Morality
Machine Wars
Cyberpunk
Conclusion


Taught by

David Shapiro ~ AI

Related Courses

Futurism
Kurzgesagt – In a Nutshell via YouTube
Futurism and Space Exploration
PBS via YouTube
Stay Competitive Using Design Thinking
LinkedIn Learning
2030: How Today's Trends Will Reshape the Future (Book Bite)
LinkedIn Learning
Reward Is Enough - Machine Learning Research Paper Explained
Yannic Kilcher via YouTube