Statistical Inference for Estimation in Data Science
Offered By: University of Colorado Boulder via Coursera
Course Description
Overview
This course introduces statistical inference, sampling distributions, and confidence intervals. Students will learn how to define and construct good estimators, and will study method of moments estimation, maximum likelihood estimation, and methods of constructing confidence intervals that extend to more general settings.
This course can be taken for academic credit as part of CU Boulder’s Master of Science in Data Science (MS-DS) degree offered on the Coursera platform. The MS-DS is an interdisciplinary degree that brings together faculty from CU Boulder’s departments of Applied Mathematics, Computer Science, Information Science, and others. With performance-based admissions and no application process, the MS-DS is ideal for individuals with a broad range of undergraduate education and/or professional experience in computer science, information science, mathematics, and statistics. Learn more about the MS-DS program at https://www.coursera.org/degrees/master-of-science-data-science-boulder.
Logo adapted from photo by Christopher Burns on Unsplash.
Syllabus
- Start Here!
- Welcome to the course! This module contains logistical information to get you started!
- Point Estimation
- In this module you will learn how to estimate parameters of a large population based only on the information in a small sample. You will learn about desirable properties that help you differentiate between good and bad estimators. We will review the concepts of expectation, variance, and covariance, and you will be introduced to a formal, yet intuitive, method of estimation known as the "method of moments" (a brief illustrative sketch appears after the syllabus).
- Maximum Likelihood Estimation
- In this module we will learn what a likelihood function is and explore the concept of maximum likelihood estimation. We will construct maximum likelihood estimators (MLEs) for one- and two-parameter examples, and for functions of parameters using the invariance property of MLEs (see the sketch after the syllabus).
- Large Sample Properties of Maximum Likelihood Estimators
- In this module we will explore large-sample properties of maximum likelihood estimators, including asymptotic unbiasedness and asymptotic normality. We will learn how to compute the Cramér–Rao lower bound, which gives us a benchmark for the smallest possible variance of an unbiased estimator (stated for reference after the syllabus).
- Confidence Intervals Involving the Normal Distribution
- In this module we learn about the theory of "interval estimation". We will learn the definition and correct interpretation of a confidence interval, and how to construct one for the mean of an unseen population based on both large and small samples. We will look at the cases where the variance is known and unknown (a sketch of both cases appears after the syllabus).
- Beyond Normality: Confidence Intervals Unleashed!
- In this module, we will generalize the lessons of Module 4 so that we can develop confidence intervals for quantities of interest beyond the distribution mean, and for other distributions entirely. The module covers two-sample confidence intervals in more depth, along with confidence intervals for population variances and proportions (a proportion example appears after the syllabus). We will also learn how to develop confidence intervals for parameters of interest in non-normal distributions.
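For readers who want a concrete picture of the method of moments mentioned in the Point Estimation module, here is a minimal sketch (not course material; the Gamma distribution and parameter values are illustrative assumptions). It matches the sample mean and variance of simulated data to the theoretical moments and solves for the two parameters.

```python
import numpy as np

# Hypothetical example (not from the course): method-of-moments estimates
# for a Gamma(shape=k, scale=theta) distribution, obtained by matching the
# sample mean and variance to the theoretical moments k*theta and k*theta**2.
rng = np.random.default_rng(0)
data = rng.gamma(shape=2.0, scale=3.0, size=1_000)

m = data.mean()          # first sample moment
v = data.var(ddof=0)     # second central sample moment

theta_hat = v / m        # solve k*theta = m and k*theta**2 = v
k_hat = m ** 2 / v

print(f"method-of-moments estimates: k = {k_hat:.2f}, theta = {theta_hat:.2f}")
```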
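A similarly minimal sketch of maximum likelihood estimation and the invariance property, using an exponential sample as an assumed illustration (not taken from the course):

```python
import numpy as np

# Hypothetical example (not from the course): the MLE for the rate of an
# exponential distribution, plus the invariance property. For an i.i.d.
# sample x_1..x_n from Exp(rate=lam), the log-likelihood is
#   l(lam) = n*log(lam) - lam*sum(x),
# which is maximized at lam_hat = n / sum(x) = 1 / mean(x).
rng = np.random.default_rng(1)
x = rng.exponential(scale=1 / 0.5, size=500)   # true rate 0.5

lam_hat = 1.0 / x.mean()                       # closed-form MLE

# Invariance: the MLE of any function of lam is that function of lam_hat,
# e.g. the survival probability P(X > 2) = exp(-lam * 2).
surv_hat = np.exp(-lam_hat * 2.0)

print(f"MLE of rate: {lam_hat:.3f}, MLE of P(X > 2): {surv_hat:.3f}")
```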
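For reference, the Cramér–Rao lower bound mentioned in Module 3 can be stated in its standard form (this statement is not copied from the course materials):

```latex
% Cramér–Rao lower bound for an unbiased estimator \hat{\theta}
% based on an i.i.d. sample X_1, \dots, X_n with density f(x; \theta):
\operatorname{Var}(\hat{\theta}) \;\ge\; \frac{1}{n\, I(\theta)},
\qquad
I(\theta) = \mathbb{E}\!\left[\left(\frac{\partial}{\partial \theta} \log f(X;\theta)\right)^{2}\right]
```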
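A minimal sketch of the two confidence-interval cases from Module 4 (variance known vs. unknown), using simulated data and an assumed confidence level of 95%; none of the numbers come from the course.

```python
import numpy as np
from scipy import stats

# Hypothetical example (not from the course): 95% confidence intervals for a
# population mean. With known variance we use a normal (z) critical value;
# with unknown variance and a small sample we use a t critical value with
# n - 1 degrees of freedom.
rng = np.random.default_rng(2)
x = rng.normal(loc=10.0, scale=2.0, size=25)
n, xbar = len(x), x.mean()
alpha = 0.05

# Known variance (sigma = 2 assumed known): xbar +/- z * sigma / sqrt(n)
sigma = 2.0
z = stats.norm.ppf(1 - alpha / 2)
ci_known = (xbar - z * sigma / np.sqrt(n), xbar + z * sigma / np.sqrt(n))

# Unknown variance: xbar +/- t * s / sqrt(n), with s the sample std (ddof=1)
s = x.std(ddof=1)
t = stats.t.ppf(1 - alpha / 2, df=n - 1)
ci_unknown = (xbar - t * s / np.sqrt(n), xbar + t * s / np.sqrt(n))

print("known-variance CI:  ", ci_known)
print("unknown-variance CI:", ci_unknown)
```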
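Finally, a short sketch of the large-sample (Wald) confidence interval for a proportion touched on in Module 5; the counts below are made up for illustration.

```python
import numpy as np
from scipy import stats

# Hypothetical example (not from the course): a large-sample (Wald) 95%
# confidence interval for a population proportion p, based on x successes
# out of n trials: p_hat +/- z * sqrt(p_hat * (1 - p_hat) / n).
x, n = 430, 1000
p_hat = x / n
z = stats.norm.ppf(0.975)
half_width = z * np.sqrt(p_hat * (1 - p_hat) / n)
print(f"95% CI for p: ({p_hat - half_width:.3f}, {p_hat + half_width:.3f})")
```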
Taught by
Jem Corcoran
Related Courses
- ANOVA and Experimental Design (University of Colorado Boulder via Coursera)
- Preparing for the AP* Statistics Exam (University of Houston System via Coursera)
- Basic Statistics (University of Amsterdam via Coursera)
- Battery State-of-Health (SOH) Estimation (University of Colorado System via Coursera)
- Mathematical Biostatistics Boot Camp 1 (Johns Hopkins University via Coursera)