Data Science Decisions in Time: Using Data Effectively
Offered By: Johns Hopkins University via Coursera
Course Description
Overview
Sequential Decisions builds from math and algorithms that can be understood and used by Coursera students. The course starts from the simplest type of data stream and gradually advances to more complex types of data and to more nuanced decisions made on that data. You will be able to: (a) program optimal decisions for data arriving from known distribution functions, (b) define error bars and nuanced hedges about ongoing data streams to reflect missing data and/or missing knowledge, (c) understand and use the connections from these models to Markov chains and Markov processes and see how these ideas connect to reinforcement learning, and (d) better understand the nuances between time-independent, time-dependent, one-dimensional, and multi-dimensional data.
The course is aimed at those working with data: both those charged with analyzing it and those in charge of making decisions based on it.
Syllabus
- Wald and Sequential Decisions
- This module introduces the class and the teaching approach used over the next five weeks. We begin with simple sequential data, similar to Wald’s model: data arrives from a distribution and is not time dependent. This data can be generative. We then explore increasingly complex data from distributions collected for health or business reasons, and finish the week with connections to code work and to AI (a Wald-style sequential test is sketched in code after this syllabus).
- Thompson Sampling
- This module is the bridge into Markov chains and Markov processes. Thompson sampling is an old algorithm that has been revived and is currently in use on many challenging problems (a minimal sampling sketch follows this syllabus). By understanding this material and its connections to the previous week and to the week ahead, students will be well positioned to master this first course in the specialization.
- Change Points
- Change points are locations where the previously stationary distributions of the last two modules shift to a new distribution. In a manufacturing line this could be due to a new batch of materials arriving with different characteristics, so that the failure rate changes (a simple change-point detector is sketched after this syllabus).
- Markov Chains
- Markov chains describe a sequence of state changes. They are often used to model complex transitions between states and are a primary tool for improving understanding of a complex system. We will use them as a model for how sequential data may be produced by a more complex system (see the simulation sketch after this syllabus).
- Markov Decision Processes
- The next step in modeling is Markov processes with decisions. This connects to modern research in reinforcement learning and enables optimization over sets of decisions for an optimal outcome. In this last week of the first course we cover the basics of how these Markov Decision Processes are parameterized and what they mean (a value-iteration sketch follows this syllabus).
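The module descriptions above each name a concrete technique, so the short Python sketches below show one minimal way each idea can be coded. They are illustrative only and are not taken from the course; all distributions, thresholds, and parameter values are made-up assumptions. First, the Wald-style setting of the opening module, sketched as a sequential probability ratio test on a Bernoulli stream:

```python
import numpy as np

def sprt_bernoulli(stream, p0=0.3, p1=0.6, alpha=0.05, beta=0.05):
    """Wald's sequential probability ratio test on a 0/1 data stream.

    Accumulates the log-likelihood ratio of H1 (p = p1) against H0 (p = p0)
    and stops as soon as it crosses one of Wald's approximate thresholds.
    The default parameter values here are illustrative assumptions.
    """
    upper = np.log((1 - beta) / alpha)   # cross above: accept H1
    lower = np.log(beta / (1 - alpha))   # cross below: accept H0
    llr, n = 0.0, 0
    for n, x in enumerate(stream, start=1):
        llr += x * np.log(p1 / p0) + (1 - x) * np.log((1 - p1) / (1 - p0))
        if llr >= upper:
            return "accept H1", n
        if llr <= lower:
            return "accept H0", n
    return "undecided", n

# Example: the stream is actually drawn with p = 0.6, so H1 should win quickly.
rng = np.random.default_rng(0)
print(sprt_bernoulli(rng.binomial(1, 0.6, size=1000)))
```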
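For the Thompson Sampling module, a minimal Beta-Bernoulli bandit: each arm keeps a Beta posterior over its success probability, and every round the arm with the largest posterior draw is pulled. The arm probabilities below are hypothetical.

```python
import numpy as np

def thompson_bernoulli(true_probs, n_rounds=2000, seed=1):
    """Beta-Bernoulli Thompson sampling; returns pull counts per arm."""
    rng = np.random.default_rng(seed)
    k = len(true_probs)
    successes = np.zeros(k)
    failures = np.zeros(k)
    for _ in range(n_rounds):
        draws = rng.beta(successes + 1, failures + 1)  # sample each posterior
        arm = int(np.argmax(draws))                    # pull the best draw
        reward = rng.binomial(1, true_probs[arm])
        successes[arm] += reward
        failures[arm] += 1 - reward
    return successes + failures

# Hypothetical arms; the 0.7 arm should attract most of the pulls over time.
print(thompson_bernoulli([0.3, 0.5, 0.7]))
```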
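For the Change Points module, one standard detector (not necessarily the one the course uses) is a one-sided CUSUM for an upward shift in the mean; the drift and threshold values below are illustrative.

```python
import numpy as np

def cusum_changepoint(x, target_mean, drift=0.5, threshold=5.0):
    """One-sided CUSUM: returns the index of the first alarm, or None."""
    s = 0.0
    for i, xi in enumerate(x):
        s = max(0.0, s + (xi - target_mean - drift))  # accumulate excess above drift
        if s > threshold:
            return i
    return None

# Synthetic stream whose mean jumps from 0 to 2 at index 200.
rng = np.random.default_rng(2)
stream = np.concatenate([rng.normal(0, 1, 200), rng.normal(2, 1, 200)])
print(cusum_changepoint(stream, target_mean=0.0))
```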
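For the Markov Chains module, a small simulation from a hypothetical three-state transition matrix (a machine that is ok, degraded, or failed):

```python
import numpy as np

states = ["ok", "degraded", "failed"]
P = np.array([
    [0.90, 0.08, 0.02],   # transitions out of "ok"
    [0.10, 0.80, 0.10],   # transitions out of "degraded"
    [0.00, 0.00, 1.00],   # "failed" is absorbing
])

def simulate_chain(P, start=0, n_steps=50, seed=3):
    """Simulate a trajectory of state indices from transition matrix P."""
    rng = np.random.default_rng(seed)
    path = [start]
    for _ in range(n_steps):
        path.append(int(rng.choice(len(P), p=P[path[-1]])))
    return path

print([states[i] for i in simulate_chain(P)])
```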
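For the Markov Decision Processes module, a compact value-iteration sketch over a finite MDP; the toy transition and reward arrays are invented for illustration.

```python
import numpy as np

def value_iteration(P, R, gamma=0.95, tol=1e-6):
    """Value iteration for a finite MDP.

    P has shape (n_actions, n_states, n_states); P[a, s, s2] is the
    probability of moving from s to s2 under action a.  R has shape
    (n_actions, n_states) and gives the expected immediate reward.
    Returns the optimal state values and a greedy policy.
    """
    V = np.zeros(P.shape[1])
    while True:
        Q = R + gamma * (P @ V)          # Q[a, s] for every action/state pair
        V_new = Q.max(axis=0)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, Q.argmax(axis=0)
        V = V_new

# A toy two-state, two-action MDP with made-up numbers.
P = np.array([[[0.8, 0.2], [0.1, 0.9]],
              [[0.5, 0.5], [0.6, 0.4]]])
R = np.array([[1.0, 0.0],
              [0.5, 2.0]])
print(value_iteration(P, R))
```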
Taught by
Thomas Woolf
Related Courses
- Adaptive Sampling via Sequential Decision Making - András György (Alan Turing Institute via YouTube)
- Adversarial Bandits: Theory and Algorithms (Simons Institute via YouTube)
- Better Learning from the Past - Counterfactual - Batch RL (Simons Institute via YouTube)
- Decision Diagrams for Efficient Inference and Optimization in Expressive Discrete-Continuous Domains (Simons Institute via YouTube)
- Deep Reinforcement Learning for Sequential Decision Making Tasks with Natural Language Interaction (Center for Language & Speech Processing (CLSP), JHU via YouTube)