Course Content

This page includes all the content for the course thus far. We will update this page with all lecture materials, readings, and homework as the class goes on.

  1. Schedule and Main Content
  2. Resources
    1. LaTeX
    2. Python
    3. Linear Algebra Prerequisites
    4. Multivariable Calculus Prerequisites
    5. Probability Theory and Statistics Prerequisites

Schedule and Main Content

This class has six main modules, two for each “pillar” of machine learning: linear algebra, calculus and optimization, and probability and statistics. All class files will be available here. For a more detailed outline of the course thus far, see the Course Skeleton.

  • Lecture slides can be found by clicking on the lecture title for the appropriate day.
  • All the materials and readings in the right column are optional, but reading (a subset of) them before each lecture might help you digest the content during lecture.
  • Problem sets will be posted here, as well as their solutions.

This is a tentative schedule and is subject to change. Readings, slides, and assignments will be posted as the class goes on.

Optional readings. MML refers to Mathematics for Machine Learning by Marc Peter Deisenroth, A. Aldo Faisal, and Cheng Soon Ong. VMLS refers to Introduction to Applied Linear Algebra - Vectors, Matrices, and Least Squares by Stephen Boyd and Lieven Vandenberghe.

Story of the course. As the lectures go on, the goal will be to develop two main ideas from machine learning: least squares regression (LS) and gradient descent (GD). During each lecture, we will build these ideas with the mathematical tools from that lecture; at the same time, we’ll gradually develop a “picture” of LS and GD as the course goes on. An evolving 3D rendering of each “picture” will be linked in each module below.

Problem sets. The problem sets will usually look relatively long, but much of that length is exposition: the problems in this course are mostly structured to guide you through the discovery or derivation of a result or a new perspective on a concept. As such, the problem sets serve the double purpose of “required reading” interspersed with problems for you to fill in the gaps.

Lecture pace. It’s really easy, in my experience, to get lost in a math lecture when lots of derivations or proofs are involved. At the same time, though, it can often be intimidating to speak up for fear of asking a “dumb question” (no such thing!). To this end, during every lecture, I’ll have a fully anonymous interactive poll to keep an eye on how people are feeling during lecture and I’ll check it intermittently, especially during proofs. Access the poll on the Pacing page.

Unit reviews. At the end of each “pillar” of the course, we will hold an optional unit review session to make sure that everyone is on the same page before moving on to the next one. These will be informal recitations where we recap the Course Skeleton to get a big-picture view and, more importantly, address any questions and confusion you might have. The dates/times/locations will be posted here and on the Calendar.

Linear Algebra I (matrices, vectors, bases, and orthogonality)

Linear Algebra II (singular value decomposition and eigendecomposition)

Calculus and Optimization I (differentiation and Taylor Series)

Jun 10
Lecture: Differentiation and vector calculus
“Peaks” Function, Derivative Ex. 1, Derivative Ex. 2, Derivative Ex. 3, MML 5.1 - 5.5, The Matrix Cookbook, Annotated Slides
PS 3 released (due June 20 11:59 PM ET)
ps3.pdf, ps3_student.zip, ps3.ipynb
Jun 12
Lecture: Gradient Descent, Linearization, and Taylor Series
3Blue1Brown video on Taylor Series, MML 5.8, 3Blue1Brown video on Gradient Descent and Neural Networks
Jun 13
PS 2 due
LS (Story thus far)
Lecture 3.1: We can derive the exact same OLS theorem from the linear algebra section using just the tools of optimization, by viewing the least squares error as an “objective function” (a short sketch of this derivation is below).
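For concreteness, here is a brief sketch of that derivation; the notation (X for the data matrix, y for the targets, w for the weights) is mine and may differ slightly from the slides.

```latex
% Least squares as an objective function: f(w) = ||Xw - y||^2.
% Setting the gradient to zero recovers the normal equations from the
% linear algebra unit (assuming X^T X is invertible).
\begin{align*}
  f(w) &= \|Xw - y\|^2 \\
  \nabla f(w) &= 2X^\top X w - 2X^\top y \\
  \nabla f(w^\star) = 0
    &\iff X^\top X w^\star = X^\top y
    \iff w^\star = (X^\top X)^{-1} X^\top y.
\end{align*}
```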
GD (Story thus far)
Lecture 3.1: We can now write down the algorithm for gradient descent. Intuitively, positive semidefinite or positive definite quadratic forms seem good for gradient descent.
Lecture 3.2: Using Taylor’s theorem for the first-order approximation (linearization), we can give both intuition and a formal guarantee that gradient descent decreases the function value. The behavior of gradient descent depends on the learning rate eta: too large an eta results in erratic behavior, while a small enough eta results in stable convergence. This choice of eta depends intimately on second-order information, i.e., the “smoothness” of the function (see the sketch below).
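To make the role of eta concrete, here is a minimal sketch (not part of the course materials; the data is synthetic) of gradient descent on the least squares objective f(w) = ||Xw - y||^2, comparing a step size below the stability threshold with one above it.

```python
import numpy as np

# Minimal sketch: gradient descent on the least squares objective
# f(w) = ||Xw - y||^2, whose gradient is 2 X^T (Xw - y). Synthetic data.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
y = rng.normal(size=50)

def objective(w):
    return np.sum((X @ w - y) ** 2)

def gradient(w):
    return 2 * X.T @ (X @ w - y)

# The objective is beta-smooth with beta = 2 * lambda_max(X^T X).
# Step sizes at or below 1/beta guarantee the function value decreases;
# for this quadratic, step sizes above 2/beta make the iterates diverge.
beta = 2 * np.linalg.eigvalsh(X.T @ X).max()

for eta in (0.5 / beta, 2.2 / beta):
    w = np.zeros(3)
    for _ in range(100):
        w = w - eta * gradient(w)
    print(f"eta = {eta:.5f}: objective after 100 steps = {objective(w):.3f}")

# Compare with the exact least squares solution.
w_star, *_ = np.linalg.lstsq(X, y, rcond=None)
print(f"optimal objective = {objective(w_star):.3f}")
```

The small step size settles near the optimal objective, while the large one blows up: the threshold between the two regimes is set by the curvature (smoothness) of the objective.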

Calculus and Optimization II (optimization and convexity)

Jun 17
Lecture: Optimization and the Lagrangian
Constrained least squares (ridge regression), MML 7.1 - 7.2
PS 4 released (due June 27 11:59 PM ET)
ps4.pdf, ps4_student.zip, ps4.ipynb
Jun 19
Class rescheduled to Friday, June 20th due to Juneteenth
Jun 20
Lecture: Convexity and convex optimization (Changed time and location: 12:45pm - 4pm in CSB 451)
MML 7.3, Convexity Definition in 3D, Convexity First-order Definition in 3D, Boyd and Vandenberghe’s Convex Optimization Chapters 1 - 3
Jun 20
PS 3 due
LS (Story thus far)
Lecture 4.1: In some applications, it may be preferable to regularize the least squares objective by trading off minimizing the objective against the norm of the weights (a short sketch appears below).
Lecture 4.2: The least squares objective is a convex function (it also satisfies the first-order definition of convexity); applying gradient descent takes us to a global minimum.
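As a supplement to Lecture 4.1, here is a sketch of the regularized (“ridge”) objective and its closed form; the regularization weight lambda is my notation and may differ from the slides.

```latex
% Ridge regression: trade off the least squares objective against the
% squared norm of the weights, with regularization weight lambda > 0.
\begin{align*}
  f_\lambda(w) &= \|Xw - y\|^2 + \lambda \|w\|^2 \\
  \nabla f_\lambda(w) &= 2X^\top X w - 2X^\top y + 2\lambda w \\
  \nabla f_\lambda(w^\star) = 0
    &\iff (X^\top X + \lambda I)\, w^\star = X^\top y.
\end{align*}
% For lambda > 0, the matrix X^T X + lambda I is always invertible, so the
% ridge solution exists even when the plain normal equations do not.
```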
GD (Story thus far)
Lecture 4.1: Nothing new here.
Lecture 4.2: Applying gradient descent to beta-smooth, convex functions takes us to a global minimum. One such function is the least squares objective.
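One standard form of the guarantee referenced in Lecture 4.2 (the constants follow the usual textbook statement and may differ slightly from lecture): for a convex, beta-smooth function f with minimizer w*, gradient descent with step size eta = 1/beta satisfies

```latex
% Convergence of gradient descent on a convex, beta-smooth function:
% after k steps with step size 1/beta, the suboptimality shrinks like 1/k.
\[
  f(w_k) - f(w^\star) \;\le\; \frac{\beta \, \|w_0 - w^\star\|^2}{2k}.
\]
```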

Probability and Statistics I (basic probability theory and statistical estimation)

Probability and Statistics II (maximum likelihood and the Gaussian distribution)

Resources

I’ll update this with additional resources as the class progresses. Feel free to use these or ignore them completely. If you know of any additional resources that you think would be helpful for the class, let me know and I’ll add them here!

LaTeX

In general, Googling an issue you’re having with LaTeX usually provides a plethora of solutions.

Python

  • Whirlwind Tour of Python should have just about everything you need to get up to speed with the programming required in this course.
  • A condensed version of this Whirlwind Tour of Python can be found here: python_crashcourse.ipynb.
  • Here is a video going through this crash course in case you want to get up to speed in video format.

Linear Algebra Prerequisites

If you need to refresh any linear algebra, these may be good resources.

Multivariable Calculus Prerequisites

If you need to refresh any multivariable calculus, these may be good resources.

Probability Theory and Statistics Prerequisites

If you need to refresh any probability and statistics, these may be good resources.