
Statistical Learning

Illinois Institute of Technology via Coursera

Overview

This course offers a deep dive into statistical analysis, equipping learners with modern techniques to understand and interpret data effectively. We explore a range of methodologies, from regression and classification to advanced approaches such as kernel methods and support vector machines, all designed to strengthen your data analysis skills. Our journey is guided by the well-known textbook "The Elements of Statistical Learning" by T. Hastie, R. Tibshirani, and J. Friedman.

The course's examples are written in Python. Your system should have Python 3.8 or higher, along with essential libraries such as NumPy, pandas, matplotlib, seaborn, scikit-learn, SciPy, and PyTorch. These tools support the learning process and prepare you for real-world data analysis challenges.

Whether you're refining existing expertise or just starting out in data science, this course blends theory with practice, making it ideal for anyone looking to sharpen their skills in data interpretation and analysis.

Syllabus

  • Module 1: Statistical Learning - Terminology and Ideas
    • Welcome to Statistical Learning! In this course, we will cover the following topics: Statistical Learning: Terminology and Ideas; Linear Regression Methods; Linear Classification Methods; Basis Expansion Methods; Kernel Smoothing Methods; Model Assessment and Selection; Maximum Likelihood Inference; and Advanced Topics. Module 1 offers an in-depth exploration of statistical learning, beginning with the rationale for choosing a pre-defined family of functions and optimizing the expected prediction error (EPE). It covers the essentials of statistical learning, including the loss function, the bias-variance tradeoff in model selection, and the importance of model evaluation (a small bias-variance simulation appears in the Example Code section after this syllabus). The module also distinguishes between supervised and unsupervised learning, surveys types of statistical learning models and data representation, and examines the three core elements of a statistical learning problem, providing a comprehensive introduction to the field.
  • Module 2: Linear Regression Methods
    • Welcome to Module 2 of Math 569: Statistical Learning. Here, we explore what is arguably the foundational model of the field: linear regression. This simple yet highly useful model helps us better understand the statistical learning problem discussed in Module 1. In Lesson 1, we'll carefully review what linear regression aims to do, how we construct the model's parameters from a given dataset, and what statistical tests we can perform on the estimated coefficients. In Lesson 2, we'll cover Subset Selection, which aims to improve linear regression by removing independent variables with little predictive impact. In Lesson 3, we introduce bias into the linear regression model with two regularization methods: Ridge Regression and the LASSO. These methods use a hyperparameter, a key concept in this course, to limit the growth of the coefficients; this penalty is the source of the bias, and it helps explain why a biased estimator can outperform the unbiased estimator from Lesson 1 (see the ridge/LASSO sketch in the Example Code section after this syllabus). Finally, Lesson 4 introduces data transformations, which let us address complexities within a dataset and provide a simple way of converting a linear model into a nonlinear one.
  • Module 3: Linear Classification Methods
    • Welcome to Module 3 of Math 569: Statistical Learning, where we delve into linear classification. In Lesson 1, we explore how linear regression, typically used for predicting continuous outcomes, can be adapted for classification tasks, i.e., predicting discrete categories. We'll cover converting categorical data into a numerical format suitable for classification and introduce essential classification metrics such as accuracy, precision, and recall. In Lesson 2, we'll explore Linear Discriminant Analysis (LDA) as an alternative way to construct linear classifiers. LDA frames classification as maximizing the probability of a category given a data point, a framing we will revisit later in the course; under some simplifying assumptions, maximizing this likelihood leads to a linear model that can also reduce the dimensionality of the problem. Finally, in Lesson 3, we cover logistic regression, which is constructed by assuming the log-odds are linear in the inputs. Like LDA, it produces a linear decision boundary (both classifiers are compared in a short sketch in the Example Code section after this syllabus).
  • Module 4: Basis Expansion Methods
    • Welcome to Module 4 of Math 569: Statistical Learning, focusing on advanced methods in statistical modeling. This module starts with an introduction to Basis Expansion Methods, exploring how these techniques enhance linear models by incorporating non-linear relationships, and then delves into Piecewise Polynomials and their utility in capturing varying trends across different segments of data. In Lesson 2, we explore Smoothing Splines, emphasizing their role in balancing model fit and complexity. Lastly, Lesson 3 covers Regularization and Kernel Functions, elaborating on how these concepts help construct more complex models without significantly increasing computational cost (a spline basis-expansion sketch appears in the Example Code section after this syllabus).
  • Module 5: Kernel Smoothing Methods
    • Welcome to Module 5 of Math 569: Statistical Learning, dedicated to advanced techniques in non-linear data modeling. In Lesson 1, we delve into Kernel Smoothers, exploring how they make predictions from local data and how they compare to k-Nearest Neighbors (kNN) models (a minimal kernel smoother appears in the Example Code section after this syllabus). Lesson 2 focuses on Local Regression, particularly Local Linear Regression (LLR) and Local Polynomial Regression (LPR). We'll examine how LLR overcomes some limitations of kernel smoothing and how LPR adds flexibility in capturing local data structure. The module emphasizes the adaptiveness of these techniques for complex data relationships and addresses the challenges of hyperparameter selection and of computational demands, especially for large datasets.
  • Module 6: Model Assessment and Selection
    • Module 6 of Math 569: Statistical Learning delves into model evaluation and model selection via hyperparameter choice. It begins with the Bias-Variance Decomposition, highlighting the trade-off between model simplicity and accuracy, then explores model complexity and strategies for balancing it against predictive performance. Building on that balance, we cover the model selection metrics AIC, BIC, and MDL: information-theoretic criteria that weigh error against model complexity, such as the number of parameters. Finally, the module concludes with lessons on estimating test error without a testing set, using concepts like the VC Dimension, Cross-Validation, and Bootstrapping (a cross-validation sketch appears in the Example Code section after this syllabus). This module is pivotal for mastering model evaluation and selection in statistical learning.
  • Module 7: Maximum Likelihood Inference
    • Module 7 of Math 569: Statistical Learning introduces advanced inferential techniques. Lesson 1 focuses on Maximum Likelihood Inference, explaining how to find optimal model parameters by maximizing the likelihood function, i.e., finding the parameter values under which the observed dataset is most probable (a small maximum-likelihood sketch appears in the Example Code section after this syllabus). Lesson 2 dives into Bayesian Inference, contrasting it with frequentist approaches. It covers Bayes' Theorem, which combines prior beliefs with new evidence to produce updated, posterior beliefs. The module discusses the Bayesian modeling process in depth, including constructing and updating models with prior and posterior distributions. This module is crucial for understanding complex inference methods in statistical learning.
  • Module 8: Advanced Topics
    • Module 8 of Math 569: Statistical Learning covers a range of advanced machine learning techniques. It begins with Decision Trees, focusing on their structure and application in both classification and regression tasks. Next, it explores Support Vector Machines (SVMs), detailing how they construct optimal decision boundaries. The module then examines k-Means Clustering, an unsupervised learning method for grouping data. Finally, it concludes with Neural Networks, discussing their architecture and role in complex pattern recognition. Each lesson offers a deep dive into these techniques, showcasing their advantages and applications in statistical learning (an SVM and k-means sketch closes the Example Code section after this syllabus).
  • Summative Course Assessment
    • This module contains the summative course assessment, designed to evaluate your understanding of the course material and your ability to apply the knowledge you have acquired throughout the course. Be sure to review the course material thoroughly before taking the assessment.
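
Example Code

The module descriptions above reference the short Python sketches collected here. They are minimal, hedged illustrations of techniques the syllabus mentions, not course material: the synthetic datasets, parameter values, and random seeds are assumptions chosen purely for demonstration, and all examples use libraries the course lists (NumPy, SciPy, scikit-learn).

Module 1 frames learning as minimizing the expected prediction error and introduces the bias-variance tradeoff. As a minimal sketch, assuming a toy model y = sin(x) + noise, the simulation below refits polynomials of several degrees to repeated noisy samples and estimates the squared bias and variance of the prediction at one test point:

import numpy as np

rng = np.random.default_rng(0)
x0, f0 = 1.0, np.sin(1.0)                # fixed test point and its true value
degrees, n_reps, n_obs = (1, 3, 9), 200, 50

for d in degrees:
    preds = []
    for _ in range(n_reps):
        x = rng.uniform(0, np.pi, n_obs)
        y = np.sin(x) + rng.normal(0, 0.3, n_obs)   # assumed toy model
        preds.append(np.polyval(np.polyfit(x, y, d), x0))
    preds = np.array(preds)
    print(f"degree {d}: bias^2 = {(preds.mean() - f0)**2:.4f}, "
          f"variance = {preds.var():.4f}")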
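
Module 2's regularization lesson centers on how a penalty hyperparameter limits coefficient growth. A minimal sketch, assuming synthetic data with two irrelevant predictors and illustrative alpha values, compares ordinary least squares, ridge, and the LASSO in scikit-learn:

import numpy as np
from sklearn.linear_model import LinearRegression, Ridge, Lasso

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 5))
true_coefs = np.array([2.0, 0.0, -1.0, 0.0, 0.5])   # two irrelevant predictors
y = X @ true_coefs + rng.normal(0, 0.5, 100)

# alpha is the regularization hyperparameter that limits coefficient growth.
for model in (LinearRegression(), Ridge(alpha=1.0), Lasso(alpha=0.1)):
    model.fit(X, y)
    print(f"{type(model).__name__:>16}: {np.round(model.coef_, 2)}")

On data like this, the LASSO typically drives the irrelevant coefficients to exactly zero, while ridge only shrinks them; both trade a little bias for lower variance.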
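
Module 3 adapts linear models to classification and evaluates them with accuracy, precision, and recall. A minimal sketch on an assumed synthetic binary problem, fitting logistic regression and LDA side by side:

from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, precision_score, recall_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=300, n_features=4, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Both models produce linear decision boundaries; we report the three
# metrics named in the module description.
for model in (LogisticRegression(max_iter=1000), LinearDiscriminantAnalysis()):
    y_hat = model.fit(X_tr, y_tr).predict(X_te)
    print(f"{type(model).__name__}: "
          f"acc={accuracy_score(y_te, y_hat):.2f} "
          f"prec={precision_score(y_te, y_hat):.2f} "
          f"rec={recall_score(y_te, y_hat):.2f}")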
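
Module 4's basis expansions keep the model linear in a transformed feature space while capturing nonlinear structure in the input. A minimal sketch, assuming scikit-learn 1.0+ (for SplineTransformer) and a toy sinusoid, of a cubic spline expansion feeding an ordinary linear regression:

import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import SplineTransformer

rng = np.random.default_rng(2)
x = np.sort(rng.uniform(0, 2 * np.pi, 80))
y = np.sin(x) + rng.normal(0, 0.2, 80)

# The model is still linear in the expanded spline features,
# but nonlinear in the original input x.
model = make_pipeline(SplineTransformer(degree=3, n_knots=8), LinearRegression())
model.fit(x[:, None], y)
print("training R^2:", round(model.score(x[:, None], y), 3))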
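
Module 5's kernel smoothers predict with locally weighted averages of the training responses. A minimal NumPy sketch of a Nadaraya-Watson smoother with a Gaussian kernel (the bandwidth of 0.3 and the query grid are illustrative assumptions):

import numpy as np

def nadaraya_watson(x_train, y_train, x_query, bandwidth=0.3):
    # Gaussian-weighted local average: nearby training points dominate.
    w = np.exp(-0.5 * ((x_query[:, None] - x_train[None, :]) / bandwidth) ** 2)
    return (w * y_train).sum(axis=1) / w.sum(axis=1)

rng = np.random.default_rng(3)
x = np.sort(rng.uniform(0, 2 * np.pi, 100))
y = np.sin(x) + rng.normal(0, 0.2, 100)
x_grid = np.linspace(0.5, 2 * np.pi - 0.5, 5)
print(np.round(nadaraya_watson(x, y, x_grid), 2))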
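
Module 6 covers estimating test error without a held-out test set. A minimal sketch of hyperparameter selection by 5-fold cross-validation on an assumed synthetic regression problem:

from sklearn.datasets import make_regression
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

X, y = make_regression(n_samples=200, n_features=10, noise=10.0, random_state=0)

# Choose the penalty by 5-fold cross-validated error, not training error.
for alpha in (0.01, 1.0, 100.0):
    scores = cross_val_score(Ridge(alpha=alpha), X, y,
                             scoring="neg_mean_squared_error", cv=5)
    print(f"alpha={alpha}: CV mean squared error = {-scores.mean():.1f}")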
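
Module 7's maximum likelihood inference seeks the parameters under which the observed data are most probable. A minimal sketch, assuming normally distributed data, that recovers the mean and standard deviation by numerically minimizing the negative log-likelihood with SciPy:

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)
data = rng.normal(loc=5.0, scale=2.0, size=500)   # "unknown" parameters: 5, 2

def neg_log_likelihood(params, x):
    mu, log_sigma = params                # optimize log(sigma) so sigma > 0
    sigma = np.exp(log_sigma)
    # Normal negative log-likelihood, with the additive constant dropped.
    return x.size * np.log(sigma) + 0.5 * np.sum(((x - mu) / sigma) ** 2)

res = minimize(neg_log_likelihood, x0=np.array([0.0, 0.0]), args=(data,))
print("MLE mu =", round(res.x[0], 3), " MLE sigma =", round(np.exp(res.x[1]), 3))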
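
Module 8 surveys both supervised and unsupervised techniques. A minimal sketch contrasting one of each on the same assumed blob data: a linear-kernel SVM (supervised decision boundary) and k-means (unsupervised grouping):

from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.svm import SVC

X, y = make_blobs(n_samples=150, centers=2, random_state=0)

svm = SVC(kernel="linear").fit(X, y)                        # supervised
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X) # unsupervised

print("SVM training accuracy:", round(svm.score(X, y), 2))
print("k-means cluster sizes:",
      [int((km.labels_ == k).sum()) for k in range(2)])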

Taught by

Shahrzad Jamshidi
