Introduction to Artificial Intelligence
Offered By: Independent
Course Description
Overview
Artificial Intelligence began in the 1950s, when the first computers became available. At that time, combinatorial problems (such as playing chess) were considered interesting because they required mental effort that programmers did not know how to describe, encode, or simulate. The main tools of symbolic AI are combinatorics and logic processing. Due to the rapid increase in computing power (Moore's Law), some of those problems have become solvable, and computers now regularly defeat humans (for example, in chess or in the game of Go). In this course, we will look back to the early days of AI to understand the kinds of problems that were being solved.
The next important step in AI was the development of so-called "machine learning" approaches, in which we do not encode a problem using logic. Instead, the data and the solutions are presented to the computer, and the system learns directly from a large data set. The first systems of this kind were pattern recognition systems. Since the data is automatically encoded by the computer, no symbols are processed (as in logic), and we call this "subsymbolic learning".
In the 1990s and early 2000s, one of the most important problems in AI was to bring symbolic and subsymbolic systems together. This has been made possible in recent years by the development of Large Language Models (LLMs), which are based on deep neural networks, but which can handle language and even logic questions in a way that is fundamentally different from the past. In this course, we will explore this convergence and its possible applications.
We also look at so-called graphical models, where knowledge is stored in the nodes and edges of graphs, and how probabilistic reasoning can be implemented on top of them. We are living in an era where probabilistic reasoning and even formal mathematical methods could be incorporated into LLMs.
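To give a flavor of probabilistic reasoning on a graphical model, the following sketch performs Bayesian inference on a minimal two-node network (Rain → WetGrass). The network and its probabilities are illustrative assumptions, not material from the course itself.

```python
# Minimal sketch: probabilistic reasoning on a two-node graphical model,
# Rain -> WetGrass. All probabilities below are made-up illustrative values.

P_RAIN = 0.2                            # prior: P(Rain)
P_WET_GIVEN_RAIN = {True: 0.9,          # P(WetGrass | Rain)
                    False: 0.1}         # P(WetGrass | no Rain)

def posterior_rain_given_wet() -> float:
    """Compute P(Rain | WetGrass) by Bayes' rule (enumeration)."""
    joint_rain = P_RAIN * P_WET_GIVEN_RAIN[True]              # P(Rain, Wet)
    joint_no_rain = (1 - P_RAIN) * P_WET_GIVEN_RAIN[False]    # P(~Rain, Wet)
    return joint_rain / (joint_rain + joint_no_rain)

print(round(posterior_rain_given_wet(), 3))  # prints 0.692
```

Observing wet grass raises the probability of rain from the prior 0.2 to about 0.69; larger Bayesian networks generalize exactly this kind of update across many connected variables.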
Given the importance of the field and the profound questions it raises, even for our own identity as human beings, we look at the ethical issues being discussed today around AI and the future of work as we have known it until now.
The course provides students with the necessary background to decide whether to study AI and pursue a career in the field.
Syllabus
- What is AI? Examples of current applications.
- How do computers process logic?
- Symbolic AI basics
- What are neural networks?
- Understanding machine learning
- Automatic feature processing in deep networks
- Knowledge graphs
- Bayesian networks
- Ethical issues in AI
- Starting a career in AI
Taught by
Prof. Dr. Raúl Rojas
Related Courses
- AWS Certified Machine Learning - Specialty (LA) (A Cloud Guru)
- Google Cloud AI Services Deep Dive (A Cloud Guru)
- Introduction to Machine Learning (A Cloud Guru)
- Deep Learning and Python Programming for AI with Microsoft Azure (Cloudswyft via FutureLearn)
- Advanced Artificial Intelligence on Microsoft Azure: Deep Learning, Reinforcement Learning and Applied AI (Cloudswyft via FutureLearn)