Capstone: Retrieving, Processing, and Visualizing Data with Python

Offered By: University of Michigan via Coursera

Tags

Programming Courses, Data Analysis Courses, Data Visualization Courses, Python Courses, Data Processing Courses

Course Description

Overview

In the capstone, students will build a series of applications to retrieve, process, and visualize data using Python. The projects will draw on all the elements of the specialization. In the first part of the capstone, students will do some visualizations to become familiar with the technologies in use, and will then pursue their own project to visualize other data that they have or can find. Chapter 15 from the book “Python for Informatics” will serve as the backbone for the capstone. This course covers Python 2.


Syllabus

Welcome to the Capstone
Congratulations to everyone for making it this far. Before you begin, please view the Introduction video and read the Capstone Overview. The Course Resources section contains additional course-wide material that you may want to refer to in future weeks.

Building a Search Engine
This week we will download and run a simple version of the Google PageRank algorithm and practice spidering some content. The assignment is peer-graded and is the first of three required assignments in the course. This is a continuation of the material covered in Course 4 of the specialization, and is based on Chapter 15 of the textbook.
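
As a rough illustration of the idea behind this assignment (a hedged sketch, not the scripts provided in the course), the core PageRank update redistributes each page's rank across its outgoing links and repeats until the values settle:

    # Hedged sketch of the PageRank idea, not the course's provided code.
    # 'links' is a hypothetical tiny link graph: page -> pages it links to.
    links = {
        'a': ['b', 'c'],
        'b': ['c'],
        'c': ['a'],
    }
    ranks = dict((page, 1.0) for page in links)   # start every page at rank 1.0
    damping = 0.85
    for _ in range(20):                           # a few iterations is enough here
        new_ranks = {}
        for page in links:
            # Sum the rank contributed by every page that links to this one.
            share = sum(ranks[src] / len(outs)
                        for src, outs in links.items() if page in outs)
            new_ranks[page] = (1 - damping) + damping * share
        ranks = new_ranks
    print(ranks)

In the assignment itself, the link graph comes from the pages you spider rather than from a hard-coded dictionary.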

Exploring Data Sources (Project)
The optional Capstone project is your opportunity to select, process, and visualize data of your choice and to receive feedback from your peers. The project is not graded, and can be as simple or complex as you like. This week's assignment is to identify a data source and make a short discussion forum post describing it and outlining some possible analyses that could be done with it. You are not required to use the data source you identify this week for your actual analysis.

Spidering and Modeling Email Data
In our second required assignment, we will retrieve and process email data from the Sakai open source project. Video lectures will walk you through the process of retrieving, cleaning up, and modeling the data.
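
As a rough sketch of the retrieve-and-model pattern this week follows (the URL, table layout, and parsing below are illustrative assumptions, not the course's own scripts), the idea is to pull raw messages over HTTP, extract a few fields, and store them in a small database:

    # Illustrative sketch only: hypothetical archive URL and schema, written
    # for Python 3 even though the course itself uses Python 2.
    import sqlite3
    import urllib.request

    conn = sqlite3.connect('emails.sqlite')
    cur = conn.cursor()
    cur.execute('''CREATE TABLE IF NOT EXISTS Messages
                   (id INTEGER PRIMARY KEY, sender TEXT, body TEXT)''')

    url = 'http://example.com/archive/message/1'   # hypothetical endpoint
    body = urllib.request.urlopen(url).read().decode()

    # "Cleaning up" is reduced here to pulling one header field out of the text.
    sender = None
    for line in body.split('\n'):
        if line.startswith('From: '):
            sender = line[len('From: '):].strip()
            break

    cur.execute('INSERT INTO Messages (sender, body) VALUES (?, ?)', (sender, body))
    conn.commit()
    conn.close()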

Accessing New Data Sources (Project)
The task for this week is to make a discussion thread post that reflects the progress you have made to date in retrieving and cleaning up your data source so that you can perform your analysis. Feedback from other students is encouraged to help you refine the process.

Visualizing Email Data
In the final required assignment, we will produce two visualizations of the email data you have retrieved and processed: a word cloud to visualize the frequency distribution of words and a timeline to show how the data changes over time.
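
As a minimal sketch of the frequency distribution behind the word cloud (assuming the message bodies have already been retrieved into a list of strings), the counts simply tally how often each word appears:

    # Minimal sketch: count word frequencies across message bodies; the most
    # frequent words become the largest words in the cloud.
    messages = ['hello sakai developers', 'sakai build fixed', 'hello again']

    counts = {}
    for body in messages:
        for word in body.split():
            counts[word] = counts.get(word, 0) + 1

    for word, n in sorted(counts.items(), key=lambda item: item[1], reverse=True):
        print('%s %d' % (word, n))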

Visualizing New Data Sources (Project)
This week you will present the analysis of your data to the class. While many of the projects will result in a visualization of the data, other results of analyzing the data are equally valued, so use whatever form of analysis and display is most appropriate to the data set you have selected.


Taught by

Charles Severance

Related Courses

Design Computing: 3D Modeling in Rhinoceros with Python/Rhinoscript
University of Michigan via Coursera
3D SARS-CoV-19 Protein Visualization With Biopython
Coursera Project Network via Coursera
Access Bioinformatics Databases with Biopython
Coursera Project Network via Coursera
Accounting Data Analytics
University of Illinois at Urbana-Champaign via Coursera
Lean Data Approaches to Measure Social Impact
Acumen Academy