YoVDO

Self-Driving Car Engineer

Offered By: Mercedes-Benz via Udacity

Tags

Autonomous Vehicles, Machine Learning, Deep Learning, Computer Vision, Kalman Filters, Image Processing, Localization, PID Controllers, Path Planning, Sensor Fusion

Course Description

Overview

Work on the future of autonomous vehicles and help make the self-driving car revolution a reality!

Syllabus

  • Welcome to the Self-Driving Car Engineer Nanodegree
    • Welcome to the Self-Driving Car Engineer Nanodegree program! Learn about the Nanodegree experience and hear from Waymo, one of Udacity's partners for the program.
  • Computer Vision
    • In this course, you will develop critical Machine Learning skills that are commonly leveraged in autonomous vehicle engineering. You will learn about the life cycle of a Machine Learning project, from framing the problem and choosing metrics to training and improving models. This course will focus on the camera sensor, and you will learn how to process raw digital images before feeding them into different algorithms, such as neural networks. You will build convolutional neural networks using TensorFlow and learn how to classify and detect objects in images. By the end of this course, you will have seen the whole Machine Learning workflow and gained a good understanding of a Machine Learning Engineer's work and how it translates to the autonomous vehicle context.
  • Sensor Fusion
    • Besides cameras, self-driving cars rely on other sensors with complementary measurement principles; sensor fusion combines their data to improve robustness and reliability. You will learn about the lidar sensor, different lidar types, and relevant criteria for sensor selection. You will also learn how to detect objects in a 3D lidar point cloud using a deep-learning approach, and then evaluate detection performance using a set of metrics.

      In the second half of the course, you will learn how to fuse camera and lidar detections and track objects over time with an Extended Kalman Filter. You will get hands-on experience with multi-target tracking, where you will initialize, update and delete tracks, assign measurements to tracks with data association techniques, and manage several tracks simultaneously.
  • Localization
    • In this course, you will learn all about robotic localization, from one-dimensional motion models up to using three-dimensional point cloud maps obtained from lidar sensors. You'll begin with the bicycle motion model, an approach that uses a simple motion model to estimate the vehicle's location at the next time step before sensor data is gathered. Then, you'll move on to using Markov localization to perform 1D object tracking. From there, you will learn how to implement two scan matching algorithms, Iterative Closest Point (ICP) and Normal Distributions Transform (NDT), which work with 2D and 3D data. Finally, you will use these scan matching algorithms in the Point Cloud Library (PCL) to localize a simulated car with lidar sensing, using a 3D point cloud map obtained from the CARLA simulator.
  • Planning
    • Path planning routes a vehicle from one point to another, and it handles how to react when emergencies arise. The Mercedes-Benz Vehicle Intelligence team will take you through the three stages of path planning. First, you’ll apply model-driven and data-driven approaches to predict how other vehicles on the road will behave. Then you’ll construct a finite state machine to decide which of several maneuvers your own vehicle should undertake. Finally, you’ll generate a safe and comfortable trajectory to execute that maneuver.
  • Control
    • This course will teach you how to control a car once you have a desired trajectory; in other words, how to actuate the throttle and the steering wheel so the car follows a trajectory described by coordinates. The course covers the most basic but also the most common controller: the Proportional-Integral-Derivative, or PID, controller. You will understand the basic principles of feedback control and how they are used in autonomous driving.
  • Congratulations!
    • Congratulations on making it through the program!
  • Career Services
    • Find career services projects here.
  • Additional Content: Kalman Filters
    • Find additional content here on Unscented Kalman Filters for sensor fusion and tracking.
  • Additional Content: Prediction
    • Find additional content here on prediction, helping autonomous vehicles to predict how other vehicles and objects might move in the future.
  • Additional Content: Control
    • Find additional content here on Vehicle Models and Model Predictive Control, a more advanced form of control.
  • Additional Content: Deep Learning
    • Find additional content on deep learning here, including fully convolutional networks and semantic segmentation for scene understanding, as well as how to improve inference performance from a speed standpoint.
  • Additional Content: Functional Safety
    • Find additional content around the field of functional safety for autonomous vehicles and a high-level overview of ISO 26262.
  • Autonomous Systems Interview
    • Learn how to interview for autonomous systems roles, with plenty of practice questions tailored to the role you are looking for!
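The Extended Kalman Filter used for tracking in the Sensor Fusion course handles nonlinear measurement models, but the underlying predict/update cycle is easiest to see in the linear case. The sketch below is illustrative only: the 1D constant-velocity model, the noise covariances, and the measurements are hypothetical values, not material from the course.

```python
import numpy as np

# Minimal linear Kalman filter for a 1D constant-velocity target.
# All matrices and measurements are illustrative choices.

dt = 0.1                                  # time step [s]
F = np.array([[1.0, dt], [0.0, 1.0]])     # state transition (position, velocity)
H = np.array([[1.0, 0.0]])                # we measure position only
Q = np.eye(2) * 0.01                      # process noise covariance
R = np.array([[0.5]])                     # measurement noise covariance

x = np.array([[0.0], [1.0]])              # initial state: pos 0 m, vel 1 m/s
P = np.eye(2)                             # initial state covariance

def predict(x, P):
    x = F @ x
    P = F @ P @ F.T + Q
    return x, P

def update(x, P, z):
    y = z - H @ x                         # innovation
    S = H @ P @ H.T + R                   # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)        # Kalman gain
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P
    return x, P

for z in [0.11, 0.19, 0.32, 0.40]:        # noisy position measurements
    x, P = predict(x, P)
    x, P = update(x, P, np.array([[z]]))
```

In an EKF, `F` and `H` are replaced by Jacobians of nonlinear motion and measurement functions, but the two-step cycle stays the same.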
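The bicycle motion model mentioned in the Localization course collapses the four wheels into one front and one rear wheel. A minimal kinematic sketch, assuming a rear-axle reference point; the wheelbase, speed, and steering inputs are hypothetical values, not course material:

```python
import math

def bicycle_step(x, y, theta, v, steering, L=2.8, dt=0.1):
    """Advance a kinematic bicycle model one time step.

    x, y     : rear-axle position [m]
    theta    : heading [rad]
    v        : speed [m/s]
    steering : front-wheel steering angle [rad]
    L        : wheelbase [m] (illustrative value)
    """
    x += v * math.cos(theta) * dt
    y += v * math.sin(theta) * dt
    theta += (v / L) * math.tan(steering) * dt
    return x, y, theta

# Drive straight for 1 s at 10 m/s: heading and y stay 0, x advances.
state = (0.0, 0.0, 0.0)
for _ in range(10):
    state = bicycle_step(*state, v=10.0, steering=0.0)

# A constant left steer instead curves the path to the left.
curved = (0.0, 0.0, 0.0)
for _ in range(10):
    curved = bicycle_step(*curved, v=10.0, steering=0.1)
```

This is the "estimate location at the next time step before gathering sensor data" step: the predicted pose is then corrected by a measurement-based method such as Markov localization or scan matching.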
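The Planning course's second stage uses a finite state machine to choose a maneuver. A minimal sketch of that idea; the state names, transition table, and gap-checking logic below are hypothetical, not the course's actual implementation:

```python
# Hypothetical behavior-planning FSM. From each state, only the listed
# successor states are reachable on the next planning cycle.
TRANSITIONS = {
    "keep_lane": ["keep_lane", "prepare_lane_change_left",
                  "prepare_lane_change_right"],
    "prepare_lane_change_left": ["keep_lane", "prepare_lane_change_left",
                                 "lane_change_left"],
    "prepare_lane_change_right": ["keep_lane", "prepare_lane_change_right",
                                  "lane_change_right"],
    "lane_change_left": ["keep_lane"],
    "lane_change_right": ["keep_lane"],
}

def next_state(state, left_gap_open, right_gap_open, ahead_slow):
    """Pick the next maneuver from the states reachable from `state`."""
    candidates = TRANSITIONS[state]
    if state == "keep_lane" and ahead_slow:
        if "prepare_lane_change_left" in candidates and left_gap_open:
            return "prepare_lane_change_left"
        if "prepare_lane_change_right" in candidates and right_gap_open:
            return "prepare_lane_change_right"
    if state == "prepare_lane_change_left" and left_gap_open:
        return "lane_change_left"
    if state == "prepare_lane_change_right" and right_gap_open:
        return "lane_change_right"
    # Default: fall back to keeping (or returning to) the current lane.
    return "keep_lane" if "keep_lane" in candidates else state
```

In practice each candidate successor is scored with cost functions (speed, safety, lane utility) rather than hand-written if/else rules, but the reachable-state structure is the same.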
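The PID controller covered in the Control course combines three terms based on the tracking error: proportional, integral, and derivative. A minimal sketch; the gains and the toy one-dimensional plant below are illustrative choices, not tuned values from the course:

```python
class PID:
    """Minimal PID controller: u = -(Kp*e + Ki*integral(e) + Kd*de/dt)."""

    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.prev_error = 0.0
        self.integral = 0.0

    def control(self, error, dt):
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return -(self.kp * error
                 + self.ki * self.integral
                 + self.kd * derivative)

# Toy simulation: drive the cross-track error (lateral offset) toward 0.
pid = PID(kp=2.0, ki=0.05, kd=0.5)   # illustrative gains
cte = 1.0                            # initial cross-track error [m]
for _ in range(100):
    steer = pid.control(cte, dt=0.1)
    cte += steer * 0.1               # toy plant: error responds to steering
```

The proportional term reacts to the current error, the derivative term damps oscillation, and the integral term removes steady-state bias (such as a miscalibrated steering zero).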

Taught by

Sebastian Thrun, David Silver, Ryan Keenan, Cezanne Camacho, Mercedes-Benz, NVIDIA, Uber ATG, Farhan A., Krishna K., Tim H., Anu A., Shreyas R. and Vishal R.

Related Courses

Emerging Automotive Technologies
Chalmers University of Technology via edX
State Estimation and Localization for Self-Driving Cars
University of Toronto via Coursera
Flying Car and Autonomous Flight Engineer
Udacity
Sensor Fusion
Mercedes-Benz via Udacity