Matrix Methods in Data Analysis, Signal Processing, and Machine Learning (Spring 2018)

Prof. Gilbert Strang via MIT OpenCourseWare

A YouTube playlist curated by Class Central.

Classroom Contents

  1. Course Introduction of 18.065 by Professor Strang
  2. An Interview with Gilbert Strang on Teaching Matrix Methods in Data Analysis, Signal Processing,...
  3. 1. The Column Space of A Contains All Vectors Ax
  4. 2. Multiplying and Factoring Matrices
  5. 3. Orthonormal Columns in Q Give Q'Q = I
  6. 4. Eigenvalues and Eigenvectors
  7. 5. Positive Definite and Semidefinite Matrices
  8. 6. Singular Value Decomposition (SVD)
  9. 7. Eckart-Young: The Closest Rank k Matrix to A
  10. 8. Norms of Vectors and Matrices
  11. 9. Four Ways to Solve Least Squares Problems
  12. 10. Survey of Difficulties with Ax = b
  13. 11. Minimizing ‖x‖ Subject to Ax = b
  14. 12. Computing Eigenvalues and Singular Values
  15. 13. Randomized Matrix Multiplication
  16. 14. Low Rank Changes in A and Its Inverse
  17. 15. Matrices A(t) Depending on t, Derivative = dA/dt
  18. 16. Derivatives of Inverse and Singular Values
  19. 17. Rapidly Decreasing Singular Values
  20. 18. Counting Parameters in SVD, LU, QR, Saddle Points
  21. 19. Saddle Points Continued, Maxmin Principle
  22. 20. Definitions and Inequalities
  23. 21. Minimizing a Function Step by Step
  24. 22. Gradient Descent: Downhill to a Minimum
  25. 23. Accelerating Gradient Descent (Use Momentum)
  26. 24. Linear Programming and Two-Person Games
  27. 25. Stochastic Gradient Descent
  28. 26. Structure of Neural Nets for Deep Learning
  29. 27. Backpropagation: Find Partial Derivatives
  30. 30. Completing a Rank-One Matrix, Circulants!
  31. 31. Eigenvectors of Circulant Matrices: Fourier Matrix
  32. 32. ImageNet is a Convolutional Neural Network (CNN), The Convolution Rule
  33. 33. Neural Nets and the Learning Function
  34. 34. Distance Matrices, Procrustes Problem
  35. 35. Finding Clusters in Graphs
  36. 36. Alan Edelman and Julia Language
