Course Content
This page contains all of the content for the course thus far. We will update it with lecture materials, readings, and homework as the class goes on.
Schedule and Main Content
This class has six main modules, two for each “pillar” of machine learning: linear algebra, calculus and optimization, and probability and statistics. All class files will be available here. For a more detailed outline of the course thus far, see the Course Skeleton.
- Lecture slides can be found by clicking on the lecture title for the appropriate day.
- All the materials and readings in the right column are optional, but reading (a subset of) them before each lecture may help you digest the content during lecture.
- Problem sets will be posted here, as well as their solutions.
This is a tentative schedule and is subject to change. Readings, slides, and assignments will be posted as the class goes on.
Optional readings. MML refers to Mathematics for Machine Learning by Marc Peter Deisenroth, A. Aldo Faisal, and Cheng Soon Ong. VMLS refers to Introduction to Applied Linear Algebra - Vectors, Matrices, and Least Squares by Stephen Boyd and Lieven Vandenberghe.
Story of the course. As the lectures go on, the goal will be to develop two main ideas from machine learning: least squares regression (LS) and gradient descent (GD). During each lecture, we will build up these ideas with the mathematical tools from that lecture; at the same time, we’ll gradually develop a “picture” of LS and GD. An evolving 3D rendering of each “picture” will be linked in each module below.
Problem sets. The problem sets will usually look relatively long, but much of that length is exposition – the problems in this course are mostly structured to guide you through the discovery or derivation of some result or perspective on a concept. As such, each problem set serves a double purpose: “required reading” interspersed with problems for you to fill in the gaps.
Lecture pace. It’s really easy, in my experience, to get lost in a math lecture when lots of derivations or proofs are involved. At the same time, it can often be intimidating to speak up for fear of asking a “dumb question” (no such thing!). To this end, every lecture will have a fully anonymous interactive poll so I can keep an eye on how people are feeling; I’ll check it intermittently, especially during proofs. When prompted to register, just click “Skip for now.” The poll link is here.
Linear Algebra I (matrices, vectors, bases, and orthogonality)
- Jun 26
- PS 0 released + Ed Announcement
- ps0_template.zip
- Jul 1
- Jul 2
- PS 1 released, due July 11, 11:59 PM ET
- ps1.pdf, ps1_template.zip, ps1.ipynb, ps1_tex.zip
- Paper reading project released. Evaluation due July 8, 11:59 PM ET
- Jul 3
- Jul 4
- DUE PS 0 due
- LS (Story thus far)
- Lecture 1.1: Least squares regression can be solved geometrically with the Pythagorean Theorem.
- Lecture 1.2: Least squares regression has a simpler solution with orthonormal bases (see the sketch below).
- GD (Story thus far)
- Lecture 1.1, 1.2: Gradient descent with a “bowl-shaped” function gets us to the minimum.
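Below is a rough numerical sketch of this module’s LS story. It is not part of the official course materials; it assumes NumPy and made-up random data, and just illustrates least squares as a projection onto an orthonormal basis together with the Pythagorean decomposition of the target.

```python
# Illustrative sketch only (hypothetical data): least squares via an orthonormal
# basis (Lecture 1.2) and the Pythagorean theorem check (Lecture 1.1).
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 3))   # made-up design matrix: 50 samples, 3 features
y = rng.standard_normal(50)        # made-up targets

# Orthonormal basis for col(X) via the (reduced) QR decomposition.
Q, _ = np.linalg.qr(X)

# With an orthonormal basis, the least squares fit is the projection Q Q^T y.
y_hat = Q @ (Q.T @ y)
residual = y - y_hat

# Pythagorean theorem: ||y||^2 = ||y_hat||^2 + ||y - y_hat||^2.
print(np.isclose(y @ y, y_hat @ y_hat + residual @ residual))

# The fitted values agree with the standard least squares solver.
w = np.linalg.lstsq(X, y, rcond=None)[0]
print(np.allclose(X @ w, y_hat))
```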
Linear Algebra II (singular value decomposition and eigendecomposition)
- Jul 8
- Lecture: Singular Value Decomposition
- 3D SVD (unprojected), 3D SVD (u1, u2), 3D SVD (u1), Orthogonal Complement, Class Photo Singular Values, MML 4.2, 4.4, 4.5, Daniel Hsu’s Computational Linear Algebra (CLA) course notes on SVD, Daniel Hsu’s CLA interactive example of “best-fitting 1d subspace”
- DUE Project first evaluation due
- Jul 9
- PS 2 released, due July 22, 11:59 PM ET (updated from July 18, 11:59 PM ET)
- ps2.pdf, ps2_template.zip, ps2.ipynb, ps2_tex.zip
- Jul 10
- Lecture: Eigendecomposition and PSD Matrices
- Positive Definite Quad. Form, Positive Semidefinite Quad. Form, Indefinite Quad. Form (bad initialization), Indefinite Quad. Form (good initialization), Quadratics are dominated by the degree-2 terms, MML 4.2, 4.4, 4.5, 3Blue1Brown on eigenvalues/eigenvectors
- Jul 11
- DUE PS 1 due
- LS (Story thus far)
- Lecture 2.1 & 2.2: The problem of least squares regression is unified under the pseudoinverse (see the sketch below).
- GD (Story thus far)
- Lecture 2.1 (nothing new): Gradient descent with a “bowl-shaped” function gets us to the minimum.
- Lecture 2.2: On quadratic forms, gradient descent seems to behave differently on three types of shapes: positive definite, positive semidefinite, and indefinite.
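As a rough sketch of how the pseudoinverse unifies least squares (a hypothetical example assuming NumPy and made-up data; not official course code), the snippet below builds the pseudoinverse from the SVD and recovers the minimum-norm least squares solution even when the design matrix is rank-deficient.

```python
# Illustrative sketch only (hypothetical data): the pseudoinverse from the SVD.
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((50, 3))
X = np.column_stack([X, X[:, 0] + X[:, 1]])   # redundant column -> rank 3, not 4
y = rng.standard_normal(50)

# Pseudoinverse X^+ = V diag(1/sigma_i) U^T, zeroing out (near-)zero singular values.
U, s, Vt = np.linalg.svd(X, full_matrices=False)
s_inv = np.where(s > 1e-10, 1.0 / s, 0.0)
X_pinv = Vt.T @ np.diag(s_inv) @ U.T

w = X_pinv @ y                                # minimum-norm least squares solution
print(np.allclose(X.T @ (X @ w - y), 0.0))    # w satisfies the normal equations
print(np.allclose(X_pinv, np.linalg.pinv(X, rcond=1e-10)))  # matches NumPy's pinv
```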
Calculus and Optimization I (differentiation and Taylor Series)
- Jul 15
- Jul 17
- Jul 18
- PS 3 released, due July 29, 11:59 PM ET
- ps3.pdf, ps3_template.zip, ps3.ipynb, ps3_tex.zip
- LS (Story thus far)
- Lecture 3.1, 3.2: We can derive the exact same OLS theorem from the linear algebra section using just the tools of optimization, viewing the least squares error as an “objective function.”
- GD (Story thus far)
- Lecture 3.1: We can now write down the algorithm for gradient descent. Intuitively, positive semidefinite or positive definite quadratic forms seem good for gradient descent.
- Lecture 3.2: Using Taylor approximations and Taylor’s theorem for the first-order approximation (linearization), we can give both intuition and a formal guarantee that gradient descent decreases the function value. The behavior of gradient descent depends on the learning rate eta: too large an eta leads to erratic behavior, while a small enough eta gives stable convergence (see the sketch below).
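To make the learning-rate story concrete, here is a toy sketch (hypothetical data and step sizes, assuming NumPy; not official course code) of gradient descent on the least squares objective f(w) = ||Xw - y||^2, comparing a small eta with one that is too large.

```python
# Illustrative sketch only (hypothetical data): gradient descent on the least
# squares objective with a small vs. a too-large learning rate eta.
import numpy as np

rng = np.random.default_rng(2)
X = rng.standard_normal((100, 5))                 # made-up data
y = X @ rng.standard_normal(5) + 0.5 * rng.standard_normal(100)

def objective(w):
    return np.sum((X @ w - y) ** 2)               # f(w) = ||Xw - y||^2

def gradient(w):
    return 2 * X.T @ (X @ w - y)                  # grad f(w) = 2 X^T (Xw - y)

def run_gd(eta, steps=50):
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        w = w - eta * gradient(w)                 # gradient descent update
    return objective(w)

print("objective at w = 0 :", objective(np.zeros(X.shape[1])))
print("eta = 0.001        :", run_gd(0.001))      # small eta: stable decrease
print("eta = 0.02         :", run_gd(0.02))       # eta too large: blows up
```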
Calculus and Optimization II (optimization and convexity) -- SAM OUT OF TOWN
- Jul 22
- Lecture: Optimization and the Lagrangian (recording in three parts in Video Library)
- Constrained least squares (ridge regression), MML 7.1 - 7.2
- DUE PS 2 due
- Jul 24
- Lecture: Convexity and convex optimization (recording in one part in Video Library)
- MML 7.3, Convexity Definition in 3D, Convexity First-order Definition in 3D, Boyd and Vandenberghe’s Convex Optimization Chapters 1 - 3
- PS 4 released, due Aug 6th, 11:59 PM ET
- ps4.pdf, ps4_template.zip, ps4.ipynb, ps4_tex.zip
- LS (Story thus far)
- Lecture 4.1: In some applications, it may be favorable to regularize the least squares objective by trading off minimizing the objective against the norm of the weights (see the sketch below).
- Lecture 4.2: The least squares objective is a convex function (also: first-order definition); applying gradient descent takes us to a global minimum.
- GD (Story thus far)
- Lecture 4.1: Nothing new here.
- Lecture 4.2: Applying gradient descent to beta-smooth, convex functions takes us to a global minimum. One such function is the least squares objective.
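Below is a small sketch of the regularization tradeoff from Lecture 4.1 (a hypothetical example assuming NumPy and made-up data; not official course code): the ridge objective ||Xw - y||^2 + lam * ||w||^2 has the closed-form minimizer (X^T X + lam I)^{-1} X^T y, and a larger lam shrinks the weights.

```python
# Illustrative sketch only (hypothetical data): ridge regression, minimizing
# ||Xw - y||^2 + lam * ||w||^2 via its closed-form solution.
import numpy as np

rng = np.random.default_rng(3)
X = rng.standard_normal((100, 5))
y = X @ rng.standard_normal(5) + 0.5 * rng.standard_normal(100)

def ridge(lam):
    d = X.shape[1]
    # Closed-form minimizer: (X^T X + lam * I)^{-1} X^T y.
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

# Larger lam trades a worse fit for a smaller weight norm.
for lam in [0.0, 1.0, 100.0]:
    w = ridge(lam)
    print(f"lam = {lam:6.1f}   ||w|| = {np.linalg.norm(w):.3f}   "
          f"||Xw - y||^2 = {np.sum((X @ w - y) ** 2):.3f}")
```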
Probability and Statistics I (basic probability theory and statistical estimation)
- Jul 29
- Lecture: Basic Probability Theory, Models, and Data
- Regression setup w/ randomness, MML 6.1-6.4, Blitzstein and Hwang’s Ch. 9 on Conditional Expectation
- DUE PS 3 due
- Jul 31
- Lecture: Bias, Variance, and Statistical Estimators
- Regression (d = 2) with test point, SGD with batch size 1, SGD with batch size 10
- Final paper reading evaluation released. Evaluation due August 12, 11:59 PM ET
- Aug 1
- PS 5 released, due Aug 13th, 11:59 PM ET (no programming portion)
- ps5.pdf, ps5_template.zip, ps5_tex.zip
- LS (Story thus far)
- Lecture 5.1: We modeled the regression problem as a linear model with random errors, and found that the conditional expectation of the OLS estimator is the true linear model, with a variance that scales with the variance of the random errors.
- Lecture 5.2: OLS is the lowest-variance unbiased linear estimator (Gauss-Markov Theorem). We derived an expression for the risk (generalization error) of OLS.
- GD (Story thus far)
- Lecture 5.1: Nothing new here.
- Lecture 5.2: We closed the story of gradient descent by defining stochastic gradient descent, which uses unbiased estimates of the gradient instead of the full gradient over all the data (see the sketch below).
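Below is a toy sketch of stochastic gradient descent (hypothetical data and hyperparameters, assuming NumPy; not official course code): each step uses a random minibatch as an unbiased estimate of the gradient of the average squared error.

```python
# Illustrative sketch only (hypothetical data): stochastic gradient descent on
# the mean squared error, using random minibatches as unbiased gradient estimates.
import numpy as np

rng = np.random.default_rng(4)
n, d = 200, 5
X = rng.standard_normal((n, d))
w_true = rng.standard_normal(d)
y = X @ w_true + 0.1 * rng.standard_normal(n)   # linear model with random errors

def sgd(batch_size, eta=0.01, steps=2000):
    w = np.zeros(d)
    for _ in range(steps):
        idx = rng.integers(0, n, size=batch_size)   # sample a minibatch (with replacement)
        # Unbiased estimate of the gradient of (1/n) * ||Xw - y||^2.
        grad = 2 * X[idx].T @ (X[idx] @ w - y[idx]) / batch_size
        w = w - eta * grad
    return w

# Smaller batches are noisier but still head toward the least squares solution.
for b in [1, 10, n]:
    print(f"batch size {b:3d}: ||w - w_true|| = {np.linalg.norm(sgd(b) - w_true):.4f}")
```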
Probability and Statistics II (Maximum likelihood and Gaussian distribution)
- Aug 5
- Lecture: The Central Limit Theorem, “Named” Distributions, and MLE
- MML 6.1-6.8, MML Ch. 8, 3Blue1Brown’s video on the Central Limit Theorem
- Please fill out SEAS course evaluations on Courseworks!
- Aug 6
- DUE PS 4 due
- Aug 7
- Lecture: Multivariate Gaussian and Course Overview
- 3Blue1Brown’s video on adding Gaussian distributions, 3Blue1Brown’s video on normalizing the Gaussian, MML Ch. 11 (Gaussian Mixture Models, not covered), OLS distribution with standard normal eps, true w = (1,1), MVN with mean (0, 0), Identity covariance, MVN with mean (0, 0), Diagonal covariance, MVN with mean (0, 0), Non-diagonal covariance, MVN with mean (1, 1), Non-diagonal covariance
- Please fill out SEAS course evaluations on Courseworks! and Post-Course Survey!
- Aug 12
- DUE Final Project Evaluation due
- Aug 13
- DUE PS 5 due
- LS (Story thus far)
- Lecture 6.1: Under another paradigm for machine learning (maximum likelihood estimation), the OLS estimator corresponds to the MLE under the Gaussian error model.
- Lecture 6.2: Under the Gaussian error model, the distribution of the OLS estimator is multivariate Gaussian (see the sketch below).
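Below is a simulation sketch of this module’s LS story (a hypothetical setup assuming NumPy; the true weights (1, 1) and standard normal errors mirror the linked demo, but this is not official course code): under the Gaussian error model, OLS is the maximum likelihood estimator, and its distribution is multivariate Gaussian with mean w* and covariance sigma^2 (X^T X)^{-1}.

```python
# Illustrative simulation only (hypothetical setup): under y = X w* + eps with
# eps ~ N(0, sigma^2 I), the OLS estimator (which is also the MLE) is distributed
# as N(w*, sigma^2 (X^T X)^{-1}).
import numpy as np

rng = np.random.default_rng(5)
n, d, sigma = 100, 2, 1.0
X = rng.standard_normal((n, d))
w_true = np.array([1.0, 1.0])            # true weights, as in the linked demo

trials = 10000
w_hats = np.empty((trials, d))
for t in range(trials):
    y = X @ w_true + sigma * rng.standard_normal(n)     # fresh Gaussian errors
    w_hats[t] = np.linalg.lstsq(X, y, rcond=None)[0]    # OLS fit for this draw

print(w_hats.mean(axis=0))                   # empirical mean ~ w* = (1, 1)
print(np.cov(w_hats, rowvar=False))          # empirical covariance ...
print(sigma ** 2 * np.linalg.inv(X.T @ X))   # ... ~ sigma^2 (X^T X)^{-1}
```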
Resources
I’ll update this page with additional resources as the class progresses. Feel free to use these or ignore them completely. If you know of any additional resources that you think would be helpful for the class, let me know and I’ll add them here!
LaTeX
- Overleaf, the Google Docs for LaTeX. Can be used for all the assignments in this class.
- Overleaf’s guide to learn LaTeX in 30 minutes
- David Xiao’s Beginner’s guide to LaTeX
- Eddie Kohler’s LaTeX usage notes. These might be worth a browse to rectify common stylistic problems with using LaTeX.
- Detexify, an applet to get the LaTeX command for any handwritten symbol.
In general, Googling an issue you’re having with LaTeX usually provides a plethora of solutions.
Python
- Whirlwind Tour of Python should have most everything you need to get up to speed with the programming required in this course.
Linear Algebra Prerequisites
If you need to refresh any linear algebra, these may be good resources.
- Linear Algebra and Its Applications by Gilbert Strang
- Gilbert Strang’s MIT Course on Linear Algebra
- Linear Algebra Done Wrong by Sergei Treil, available for free as a PDF here
- Daniel Hsu’s course notes for Computational Linear Algebra
- 3Blue1Brown’s Essence of Linear Algebra videos
Multivariable Calculus Prerequisites
If you need to refresh any multivariable calculus, these may be good resources.
- MIT OpenCourseware course on multivariable calculus
- Vector Calculus, Linear Algebra, and Differential Forms: A Unified Approach by Barbara Burke Hubbard and John H. Hubbard.
- Vector Calculus by Susan Jane Colley
Probability Theory and Statistics Prerequisites
If you need to refresh any probability and statistics, these may be good resources.
- Introduction to Probability for Data Science by Stanley H. Chan
- A First Course in Probability by Sheldon Ross.
- Introduction to Probability by Joseph K. Blitzstein and Jessica Hwang.
- Probability and Statistics for Engineers and Scientists by Ronald E. Walpole.