
Robotic Vision: Making Robots See

Offered By: Queensland University of Technology via FutureLearn

Tags

Computer Vision Courses

Course Description

Overview

Learn about the functions you need to program a robotic vision system

This three-week course will guide you through the essential skills needed to make a robot see.

You’ll develop your knowledge of image geometry before learning the programming skills used in robotic vision.

Guided by experts at Queensland University of Technology, you’ll gain an in-depth understanding of robotics and the practical skills needed to complete a robotic vision system.

Refine your skills using MATLAB

You’ll cement your understanding of robotic vision by completing MATLAB exercises to see these processes in action.

You’ll learn how to write basic MATLAB code for calibration, shape classification, and workspace coordination.

This will help you build the practical skills to use in robotic programming.
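To give a flavour of that coding, here is a minimal MATLAB sketch (not course material) that classifies blobs in an image using a simple circularity measure; the image file name and the 0.9 threshold are placeholders.

    % Minimal sketch (not course code): classify blobs in an image by a
    % simple circularity measure. 'shapes.png' and the 0.9 threshold are
    % placeholders.
    img   = imread('shapes.png');
    bw    = imbinarize(rgb2gray(img));                 % threshold to a binary image
    stats = regionprops(bw, 'Area', 'Perimeter', 'Centroid');

    for k = 1:numel(stats)
        % Circularity is 1 for a perfect circle and lower for other shapes
        c = 4*pi*stats(k).Area / stats(k).Perimeter^2;
        if c > 0.9
            label = 'circle';
        else
            label = 'other shape';
        end
        fprintf('Blob %d at (%.0f, %.0f): %s\n', k, ...
                stats(k).Centroid(1), stats(k).Centroid(2), label);
    end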

Grow your knowledge of computer vision to create a vision system

You’ll take part in a robotic vision programming project to hone your skills and learn important functions such as improving colour segmentation, detecting shape and size, improving your homography matrix, rectifying your image, and forming a complete vision system.
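As a rough illustration of the homography and rectification steps, the following MATLAB sketch (not course material) maps four image points to known workspace coordinates and then rectifies the image; the point values and file name are placeholders.

    % Minimal sketch (not course code): estimate a homography from four image
    % points to known workspace coordinates, then rectify the image. All
    % coordinates and the file name below are placeholders.
    imagePts     = [ 42 388; 601 372; 585  35;  63  51 ];   % corner pixels
    workspacePts = [  0   0; 297   0; 297 210;   0 210 ];   % mm (A4 worksheet)

    tform     = fitgeotrans(imagePts, workspacePts, 'projective');  % homography
    rectified = imwarp(imread('worksheet.png'), tform);             % rectified view

    % Convert a detected blob centroid from pixel to workspace coordinates
    xyWorkspace = transformPointsForward(tform, [320 240]);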

Along the way, you’ll reflect on your own robotic vision system as well as your peers’ projects to understand what makes a successful system.

As an optional project, if you have built or bought a robot, you’ll also learn what is needed to integrate your vision system with it.

This course is designed for those with some programming knowledge and concepts from advanced high-school mathematics or undergraduate engineering.

If you’re new to MATLAB, you can enrol in the MATLAB Onramp tutorial on the MathWorks website.

The course requires you to code your robot vision system in MATLAB. You will need to download the full MATLAB software to a computer. With support from MathWorks, free access to MATLAB will be provided for the duration of the course plus 30 days.

Optional robot arm project

The purpose of this course is to program a robotic vision system and, optionally, to integrate it with a robot to perform a simple visual task. If you completed the course Introducing Robotics: Build a Robot Arm, you may already have a working robot arm you can use; otherwise, you could purchase a LEGO MINDSTORMS NXT or EV3 development kit (or an equivalent), or borrow hobby robot components. This course does not cover how to assemble your robot arm; rather, it provides all of the task instructions, demonstrations and worksheets for programming the vision system.

There are many ways to integrate the vision system and some of the most common approaches are:

1. Computer vision and robotics control on your computer

An attached web camera is used to acquire images; you process them, display the results, and send motion commands to the robot. You will need a 64-bit computer and the full MATLAB software. There are many options for controlling the robot, depending on the technology you used to build it, for example:

a. LEGO MINDSTORMS NXT or EV3 kits require custom software toolboxes (for example, the MINDSTORMS NXT toolbox for NXT kits) to control your robot.

b. Arduino or Raspberry Pi robot controllers might require a serial, WiFi or Ethernet connection so that your MATLAB code can command them (a minimal serial example is sketched after this list).
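As an illustration of option 1b, here is a minimal MATLAB sketch for sending a command over a serial link; the port name "COM3", the baud rate and the "MOVE x y" text protocol are all assumptions standing in for whatever your own controller expects.

    % Minimal sketch, assuming an Arduino-style controller that accepts
    % plain-text commands over a serial link. "COM3", the baud rate and the
    % "MOVE x y" command format are placeholders for your robot's own protocol.
    s = serialport("COM3", 9600);        % open the serial connection
    configureTerminator(s, "LF");        % commands end with a newline

    writeline(s, "MOVE 120 80");         % send a target workspace coordinate (mm)

    reply = readline(s);                 % read an acknowledgement, if your
    disp(reply);                         % controller sends one
    clear s                              % clearing the object closes the port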

2. Computer vision on your computer

An attached web camera is used to acquire images, which you process and display the results for. You will need a 64-bit computer and the full MATLAB software.
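A minimal sketch of grabbing a frame in MATLAB, assuming the MATLAB Support Package for USB Webcams is installed:

    % Minimal sketch: capture and display one frame from an attached web camera.
    % Requires the MATLAB Support Package for USB Webcams.
    cam = webcam;            % connect to the first available camera
    img = snapshot(cam);     % grab a single frame
    imshow(img);
    clear cam                % release the camera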

3. Computer vision in the cloud

Your image processing works in an offline mode: you capture images of the worksheet using any camera and upload them to MATLAB Online using MATLAB Drive, where they are accessible to your program.
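For this cloud workflow, the vision code simply reads the uploaded photos as files; a minimal sketch (the file name is a placeholder):

    % Minimal sketch: in MATLAB Online, files synced through MATLAB Drive are
    % read like ordinary local files. 'worksheet_photo.jpg' is a placeholder.
    img = imread('worksheet_photo.jpg');
    imshow(img);             % the rest of the vision pipeline is unchanged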

You can discuss your design ideas and options with your peers and the course mentors.


Syllabus

  • Getting started
    • Welcome to the course
    • The mathematics of cameras
    • Starting your project
    • Summary and next steps
  • Programming functions
    • Welcome back
    • Improving colour segmentation
    • Classifying blobs
    • Summary and next steps
  • Completing the vision system
    • Welcome back
    • Using a homography matrix
    • Forming a vision system
    • Optional integration activity
    • Summary and next steps

Taught by

Peter Corke


Related Courses

2D image processing
Higher School of Economics via Coursera
3D Reconstruction - Multiple Viewpoints
Columbia University via Coursera
3D Reconstruction - Single Viewpoint
Columbia University via Coursera
Post Graduate Certificate in Advanced Machine Learning & AI
Indian Institute of Technology Roorkee via Coursera
Advanced Computer Vision with TensorFlow
DeepLearning.AI via Coursera