
Alignment Research: GPT-3 vs GPT-NeoX - Which One Understands AGI Alignment Better?

Offered By: David Shapiro ~ AI via YouTube

Tags

GPT-3 Courses, ChatGPT Courses, Case Study Analysis Courses

Course Description

Overview

Explore the complex landscape of AI alignment and potential risks in this 48-minute video comparing how GPT-3 and GPT-NeoX understand AGI alignment. Delve into the unintended consequences of various AI objective functions, including minimizing human suffering, maximizing future freedom of action, and pursuing geopolitical power. Examine the dangers of superintelligence, the inconsistencies in language models, and the challenges of measuring human suffering. Analyze the risks of focusing on GDP growth, creating excessively altruistic AI systems, and implementing poorly defined objective functions. Gain insight into the critical importance of carefully designing AGI systems to avoid catastrophic outcomes and ensure beneficial artificial intelligence development.

Syllabus

- The potential consequences of an AI objective function
- Unintended consequences of an AGI system focused on minimizing human suffering
- The risks of implementing an AGI with the wrong objectives
- The inconsistency of GPT-3
- The dangers of a superintelligence's objectives
- The dangers of superintelligence
- The risks of an AI with the objective to maximize future freedom of action for humans
- The risks of an AI with the objective function of "maximizing future freedom of action"
- The risks of an AI maximizing for geopolitical power
- The quest for geopolitical power leading to increased cyberattacks and warfare
- The potential consequences of implementing the proposed objective function
- The dangers of maximizing global GDP
- The dangers of incentivizing economic growth
- The dangers of focusing on GDP growth
- The objective function of a superintelligence
- The risks of an AGI minimizing human suffering
- The objective function of AGI systems
- The risks of an AI system that prioritizes reducing human suffering
- The risks of creating a superintelligence focused on reducing suffering
- The problem with measuring human suffering
- The objective function of reducing suffering for all living things
- The dangers of an excessively altruistic superintelligence
- The risks of the proposed objective function
- The potential risks of an AI fixated on reducing suffering
- The risks of AGI with a bad objective function


Taught by

David Shapiro ~ AI
