Deep Learning - Artificial Neural Networks with TensorFlow

Offered By: Packt via Coursera

Tags

Machine Learning Courses, Deep Learning Courses, Neural Networks Courses, TensorFlow Courses, Keras Courses, Gradient Descent Courses, Loss Functions Courses

Course Description

Overview

This course delves into deep learning and artificial neural networks using TensorFlow.

  • It begins with foundational machine learning concepts, covering linear classification and regression, before exploring neurons, model learning, and predictions.
  • Core modules focus on forward propagation, activation functions, and multiclass classification, with practical examples such as the MNIST dataset for image classification and regression tasks.
  • It also covers model saving, Keras usage, and hyperparameter selection.
  • The final sections provide an in-depth look at loss functions and gradient descent optimization techniques, including Adam.
  • Key outcomes include understanding machine learning concepts, implementing ANN models, and optimizing deep learning models using TensorFlow.

This course suits those interested in deep learning, TensorFlow 2, and the foundations needed for advanced neural networks such as CNNs, RNNs, LSTMs, and transformers. Proficiency in Python and familiarity with NumPy and Matplotlib are required.
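
As a taste of the workflow the course builds toward, here is a minimal sketch of linear regression in TensorFlow 2 with Keras, including saving and reloading the trained model. The synthetic data, layer choice, and file name are illustrative assumptions, not material from the course.

```python
import numpy as np
import tensorflow as tf

# Synthetic data for y = 3x + 2 plus noise (made-up values for illustration).
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(200, 1)).astype("float32")
y = 3.0 * X + 2.0 + rng.normal(0.0, 0.1, size=(200, 1)).astype("float32")

# A single Dense neuron with no activation function is exactly linear regression.
model = tf.keras.Sequential([tf.keras.layers.Dense(1)])
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.1), loss="mse")
model.fit(X, y, epochs=50, verbose=0)

# Saving and reloading a trained model ("linear_model.keras" is a hypothetical name).
model.save("linear_model.keras")
restored = tf.keras.models.load_model("linear_model.keras")
print(restored.layers[0].get_weights())  # should be close to weight 3.0, bias 2.0
```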

Syllabus

  • Welcome
    • In this module, we will introduce the author and provide an overview of the course's learning objectives and structure. We will discuss the approach taken in the course and the prerequisites needed, and summarize the topics covered throughout.
  • Machine Learning and Neurons
    • In this module, we will delve into the foundational concepts of machine learning and neural networks. We will begin by understanding what machine learning is and exploring the theory of linear classification and regression with TensorFlow 2.0. Through practical examples, you will learn how to apply this theory to real-world datasets. We will also cover the structure and function of neurons, how models learn, and how to make predictions. Additionally, we will demonstrate how to save and load models, discuss the use of Keras, and gather feedback for continuous improvement.
  • Feedforward Artificial Neural Networks
    • In this module, we will delve into feedforward artificial neural networks (ANNs). Starting with an introduction to ANNs, we will explore forward propagation and the geometric interpretation of neural networks. We will cover various activation functions, multiclass classification, and the representation of image data. You will gain hands-on experience by writing ANN code for the MNIST dataset and applying ANN techniques to both image classification and regression tasks (a minimal MNIST sketch appears after the syllabus). Finally, we will discuss strategies for choosing the optimal hyperparameters for your neural networks.
  • In-Depth: Loss Functions
    • In this module, we will dive deep into the loss functions used in neural networks. We will start by understanding Mean Squared Error (MSE) from a probabilistic viewpoint, the standard loss for regression tasks. Next, we will explore binary cross entropy, the appropriate loss function for binary classification problems, and then categorical cross entropy, which is essential for multiclass classification. We will differentiate between these loss functions and their specific applications, analyze how each one affects model training and performance, and learn to apply the correct loss function based on the nature of the classification or regression problem (worked numerical examples of all three losses appear after the syllabus).
  • In-Depth: Gradient Descent
    • In this module, we will delve into the critical optimization technique of gradient descent and its variations. We will begin with the fundamental concept of gradient descent, followed by stochastic gradient descent and its advantages. You will learn about the role of momentum in accelerating convergence and the importance of variable and adaptive learning rates in optimization. We will then cover the basics of Adam, one of the most popular optimization algorithms, and conclude with a deeper exploration of its advanced aspects (a sketch of the plain gradient descent and Adam update rules follows the syllabus). This study will equip you with a thorough understanding of gradient descent and its variations, essential for training effective neural networks.
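
As a companion to the "Feedforward Artificial Neural Networks" module, here is a minimal sketch of an MNIST image classifier with one hidden layer. The layer sizes, epoch count, and optimizer are common defaults assumed for illustration, not necessarily the course's chosen values.

```python
import tensorflow as tf

# MNIST: 60,000 training images of handwritten digits, 28x28 grayscale, labels 0-9.
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0  # scale pixels to [0, 1]

# Feedforward ANN: flatten each image, one hidden ReLU layer, softmax over 10 classes.
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])

# Sparse categorical cross entropy matches integer (non-one-hot) class labels.
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=5, validation_data=(x_test, y_test))
```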
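
To make the "In-Depth: Loss Functions" module concrete, here is a sketch that computes MSE, binary cross entropy, and categorical cross entropy by hand and checks them against their tf.keras.losses counterparts. The toy targets and predictions are invented for illustration; the Keras values can differ very slightly because Keras clips probabilities for numerical stability.

```python
import numpy as np
import tensorflow as tf

y_true = np.array([1.0, 0.0, 1.0])  # toy targets
y_pred = np.array([0.9, 0.2, 0.7])  # toy model outputs

# Mean squared error: the average squared residual (regression).
mse = np.mean((y_true - y_pred) ** 2)
print(mse, float(tf.keras.losses.MeanSquaredError()(y_true, y_pred)))

# Binary cross entropy: negative log-likelihood under a Bernoulli model.
bce = -np.mean(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred))
print(bce, float(tf.keras.losses.BinaryCrossentropy()(y_true, y_pred)))

# Categorical cross entropy: one-hot labels against a softmax distribution.
t = np.array([[0.0, 1.0, 0.0]])  # one-hot label: class 1
p = np.array([[0.1, 0.8, 0.1]])  # predicted class probabilities
cce = -np.sum(t * np.log(p))     # equals -log(0.8)
print(cce, float(tf.keras.losses.CategoricalCrossentropy()(t, p)))
```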
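
Finally, for the "In-Depth: Gradient Descent" module, here is a minimal NumPy sketch of the vanilla gradient descent and Adam update rules applied to a one-parameter quadratic. The learning rate and the Adam constants (beta1, beta2, epsilon) are the usual textbook defaults, assumed rather than taken from the course.

```python
import numpy as np

# Minimize f(w) = (w - 3)^2; its gradient is 2 * (w - 3), with minimum at w = 3.
def grad(w):
    return 2.0 * (w - 3.0)

# Vanilla gradient descent: w <- w - lr * grad(w).
w = 0.0
for _ in range(100):
    w -= 0.1 * grad(w)
print("gradient descent:", w)  # converges toward 3.0

# Adam: momentum-like first moment m, adaptive second moment v, bias correction.
w, m, v = 0.0, 0.0, 0.0
lr, beta1, beta2, eps = 0.1, 0.9, 0.999, 1e-8
for t in range(1, 201):
    g = grad(w)
    m = beta1 * m + (1 - beta1) * g       # running mean of gradients
    v = beta2 * v + (1 - beta2) * g * g   # running mean of squared gradients
    m_hat = m / (1 - beta1 ** t)          # bias-corrected first moment
    v_hat = v / (1 - beta2 ** t)          # bias-corrected second moment
    w -= lr * m_hat / (np.sqrt(v_hat) + eps)
print("adam:", w)  # also converges toward 3.0
```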

Taught by

Packt - Course Instructors

Related Courses

AI skills for Engineers: Supervised Machine Learning
Delft University of Technology via edX
Audio Classification with TensorFlow
Coursera Project Network via Coursera
Deep Neural Network for Beginners Using Python
Packt via Coursera
Introduction to RNN and DNN
Packt via Coursera
Python Fundamentals and Data Science Essentials
Packt via Coursera