How do you solve a system of five equations with five unknowns? What happens if you have more unknowns than equations? Is there any system that can give you an infinite number of solutions? How do you add column vectors? Can you find the point at which three two-dimensional planes meet in three dimensions? What are the three elementary matrix operations? How do you multiply matrices with each other? Does it matter in which order you do it? What is the connection between matrix multiplication and dot products?
What is an inverse of a matrix? When are left and right inverses the same? What is a transposed matrix? What are the subspaces of R2 and how do we know? What is the difference between the column space and the null space? What does it mean for vectors or subspaces to be orthogonal? How do you make vectors orthonormal with the Gram–Schmidt process? What is a determinant?
What are their properties? How are they calculated? Is there a formula for calculating determinants of arbitrary size? How do you apply Cramer’s rule? What is the connection between determinants and volume? What are eigenvalues and eigenvectors? What is a Markov matrix and what are its two core properties? To answer these questions about matrices and determinants, we need to learn something about linear algebra.
Gilbert Strang is a Professor of Mathematics at the Massachusetts Institute of Technology and an Honorary Fellow at Balliol College in Oxford. He taught a course in linear algebra at MIT in 1999, and the university uploaded all of the lectures to its YouTube channel a few years later. The entire playlist is available here. This treasure trove of linear algebra consists of a total of 35 video lectures (34 + lecture 24b quiz 2 review) with crystal clear explanations of complicated mathematical concepts, as well as the pedagogic brilliance of Gilbert Strang.
Topics include solving systems of linear equations with matrices, solving many different matrix equations, null and column spaces, definitions and calculations of determinants, eigenvalues and eigenvectors, and many more. These video lectures span over 25 hours and are filled to the brim with exciting linear algebra knowledge.
This first lecture starts off this series of video lectures on linear algebra by explaining how this field of mathematics efficiently handles systems of linear equations. How do you solve them using a matrix?
Strang continues with a formalized method of solving matrices called matrix elimination, knowing when it works and when it does not, as well as starting to think about how to multiply two matrices together.
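The elimination procedure can be sketched in a few lines of plain Python. The 3×3 system below is an illustrative example, not one taken from the lecture, and the sketch assumes no row exchanges are needed (no zero pivots appear):

```python
def solve(A, b):
    """Solve Ax = b by elimination, assuming no row exchanges are needed."""
    n = len(A)
    A = [row[:] for row in A]   # work on copies
    b = b[:]
    # Forward elimination: zero out the entries below each pivot.
    for col in range(n):
        for row in range(col + 1, n):
            factor = A[row][col] / A[col][col]
            for k in range(col, n):
                A[row][k] -= factor * A[col][k]
            b[row] -= factor * b[col]
    # Back substitution: solve from the last equation upward.
    x = [0.0] * n
    for row in range(n - 1, -1, -1):
        s = b[row] - sum(A[row][k] * x[k] for k in range(row + 1, n))
        x[row] = s / A[row][row]
    return x

A = [[2.0, 1.0, 1.0],
     [4.0, 3.0, 3.0],
     [8.0, 7.0, 9.0]]
b = [1.0, 2.0, 5.0]
print(solve(A, b))  # [0.5, -0.5, 0.5]
```

When a pivot turns out to be zero, a row exchange (covered later in the course) rescues the method; when no nonzero pivot can be found, elimination breaks down and the matrix is singular.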
Now it is time to crystallize the rules behind matrix multiplication: Strang shows different ways to carry it out, introduces the inverse of a matrix, and uses the Gauss-Jordan method to find it.
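One of the ways to view the product is entry by entry: entry (i, j) of AB is the dot product of row i of A with column j of B. A minimal sketch, with made-up 2×2 values, also shows that the order of multiplication matters:

```python
def matmul(A, B):
    """Multiply matrices given as lists of rows: (AB)[i][j] = row i . col j."""
    n, m, p = len(A), len(B), len(B[0])
    return [[sum(A[i][k] * B[k][j] for k in range(m)) for j in range(p)]
            for i in range(n)]

A = [[1, 2],
     [3, 4]]
B = [[0, 1],
     [1, 0]]
print(matmul(A, B))  # [[2, 1], [4, 3]]
print(matmul(B, A))  # [[3, 4], [1, 2]] -- AB != BA in general
```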
This lecture asks us what the inverse of a product of two matrices is and how we get it, as well as how to perform LU decomposition.
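LU decomposition falls straight out of elimination: the multipliers used to clear each column fill the lower-triangular L, and what remains is the upper-triangular U, with A = LU. A sketch assuming no row exchanges, on an illustrative 2×2 matrix:

```python
def lu(A):
    """Factor A = LU by elimination, assuming no row exchanges are needed."""
    n = len(A)
    U = [row[:] for row in A]
    L = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    for col in range(n):
        for row in range(col + 1, n):
            factor = U[row][col] / U[col][col]
            L[row][col] = factor          # remember the multiplier in L
            for k in range(col, n):
                U[row][k] -= factor * U[col][k]
    return L, U

L, U = lu([[2.0, 1.0],
           [8.0, 7.0]])
print(L)  # [[1.0, 0.0], [4.0, 1.0]]
print(U)  # [[2.0, 1.0], [0.0, 3.0]]
```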
What are permutations of a matrix? How do you transpose a matrix? What are vector spaces and subspaces? These and many other questions are tackled in this lecture.
This sixth video gets to the center of linear algebra by continuing the previous discussion of vector spaces and subspaces, introducing column space and null space and why they are important.
How do you solve matrix equations of the type Ax = 0? This lecture also defines pivot variables and what it means for a variable to be a free variable. Are there special solutions to row reduced echelon matrices? If so, what are they?
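A tiny illustration of a special solution: for the made-up matrix below, the pivots land in columns 1 and 3, column 2 is free, and setting the free variable to 1 and back-substituting by hand gives a vector that A sends to zero (a general row-reduction routine is beyond this sketch):

```python
def matvec(A, x):
    """Matrix-vector product Ax for a matrix given as a list of rows."""
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

A = [[1.0, 2.0, 2.0],
     [2.0, 4.0, 6.0]]
special = [-2.0, 1.0, 0.0]     # free variable x2 = 1, pivots back-substituted
print(matvec(A, special))      # [0.0, 0.0] -- it lies in the null space
```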
Strang shows us how to fully solve matrix equations of the type Ax = b, explicitly listing their solutions or finding out when no solutions exist. What is the rank of a matrix?
What does it mean for a matrix to have linearly independent columns? What is the basis for a subspace? What is the dimension of a subspace? This is the lecture where they are given proper definitions.
This video lecture reveals the four fundamental subspaces of a matrix A, namely the null space, the column space (both encountered previously), the row space, and the left null space, and how they are related.
Strang introduces new vector spaces (such as all 3×3 matrices) and their subspaces (such as all symmetric or upper triangular 3×3 matrices) as well as the basis and dimensions of these subspaces. What are rank 1 matrices and why are they special?
This part of the video lecture series forays into applications of linear algebra, such as representing graphs (e.g. circuits with nodes and edges) as matrices, and Kirchhoff’s laws.
This first review focuses on solving matrix equations of the form Ax = b and properties of rectangular matrices. The basic idea revolves around solving old exam questions on the blackboard. What possible subspace can be spanned by three non-zero vectors? What is the null space of a 5×3 matrix with three pivots? Does the fact that a square matrix squared is zero imply that the square matrix itself is 0? If A and B have the same four subspaces, does this mean that one of them must be a multiple of the other?
What are orthogonal and orthonormal vectors, subspaces and bases? This is the 90 degree lecture that draws the big picture of introductory linear algebra. What is the relationship between the row space and null space of a matrix with the column space and null space of the transposed matrix?
Strang explains how projections are used in linear algebra. Where does orthogonality come in when dealing with projections? How does linear algebra allow you to do calculations that would otherwise be more convoluted in trigonometry?
In this second lecture on projections, Strang revises the formula for projection matrices and what it does. What happens if a vector b is in the column space? Perpendicular to the column space? How do you fit a line to a set of data points so that the overall error is as small as possible? What is the connection to linear algebra?
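Least-squares line fitting reduces to solving the normal equations AᵀAx = Aᵀb, which for a line y = c + dt is just a 2×2 system. A minimal sketch; the three data points are illustrative:

```python
def fit_line(ts, ys):
    """Least-squares fit of y = c + d*t via the 2x2 normal equations."""
    n = len(ts)
    # With columns [1, t], the normal equations A^T A x = A^T b read:
    #   [ n       sum(t)   ] [c]   [ sum(y)   ]
    #   [ sum(t)  sum(t^2) ] [d] = [ sum(t*y) ]
    st, stt = sum(ts), sum(t * t for t in ts)
    sy, sty = sum(ys), sum(t * y for t, y in zip(ts, ys))
    det = n * stt - st * st
    c = (sy * stt - st * sty) / det
    d = (n * sty - st * sy) / det
    return c, d

c, d = fit_line([1, 2, 3], [1, 2, 2])
print(c, d)  # best line y = 2/3 + t/2
```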
This lecture finishes off orthogonality with orthogonal bases, orthogonal matrices, and the Gram-Schmidt method that can turn certain sets of independent vectors into orthonormal ones.
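The Gram-Schmidt process itself is short: subtract from each vector its projections onto the directions already accepted, then normalize what is left. A plain-Python sketch on two illustrative 3-dimensional vectors:

```python
from math import sqrt

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def gram_schmidt(vectors):
    """Turn independent vectors into an orthonormal set, one at a time."""
    basis = []
    for v in vectors:
        w = v[:]
        for q in basis:                      # remove the component along
            c = dot(q, v)                    # each already-accepted direction
            w = [wi - c * qi for wi, qi in zip(w, q)]
        norm = sqrt(dot(w, w))
        basis.append([wi / norm for wi in w])
    return basis

q1, q2 = gram_schmidt([[1.0, 1.0, 0.0], [1.0, 0.0, 1.0]])
print(dot(q1, q2))  # essentially 0: the vectors are orthogonal
print(dot(q1, q1))  # essentially 1: and unit length
```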
This video begins the second half of this linear algebra course and delves into determinants and their properties.
Strang goes through the special cases of a 2×2 determinant and a 3×3 determinant before showing an explicit formula for how to calculate a determinant of an arbitrary size.
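The cofactor (Laplace) expansion along the first row gives one explicit formula that works for any size, though at O(n!) cost it is only practical for small matrices (elimination is far cheaper in general). A recursive sketch with illustrative values:

```python
def det(A):
    """Determinant by cofactor expansion along the first row."""
    n = len(A)
    if n == 1:
        return A[0][0]
    total = 0
    for j in range(n):
        # Minor: delete row 0 and column j, with alternating signs.
        minor = [row[:j] + row[j + 1:] for row in A[1:]]
        total += (-1) ** j * A[0][j] * det(minor)
    return total

print(det([[1, 2], [3, 4]]))          # -2, i.e. ad - bc
print(det([[2, 0, 0],
           [0, 3, 0],
           [0, 0, 4]]))               # 24 -- the product of the diagonal
```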
This video lecture looks at applications for determinants, gives an explicit formula for the inverse of a matrix and how to use Cramer’s rule that can solve a system of linear equations using determinants.
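Cramer's rule in the 2×2 case is easy to write out: each unknown is a ratio of determinants, where the numerator replaces one column of A with b. The system below is an illustrative example:

```python
def det2(M):
    """Determinant of a 2x2 matrix: ad - bc."""
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

def cramer2(A, b):
    """Solve a 2x2 system Ax = b by Cramer's rule."""
    d = det2(A)
    x = det2([[b[0], A[0][1]], [b[1], A[1][1]]]) / d   # b replaces column 1
    y = det2([[A[0][0], b[0]], [A[1][0], b[1]]]) / d   # b replaces column 2
    return x, y

print(cramer2([[1.0, 2.0], [3.0, 4.0]], [5.0, 6.0]))  # (-4.0, 4.5)
```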
An eigenvector is a vector that, when multiplied by a certain matrix, is a multiple of itself. This multiple is called the eigenvalue. Why are these two concepts important in linear algebra?
What happens when you square a matrix? How do you diagonalize a matrix with an eigenvector matrix? A matrix will have n independent eigenvectors and be diagonalizable if all the eigenvalues are different. How do we know this?
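For a 2×2 matrix the eigenvalues come straight from the characteristic polynomial λ² − (trace)λ + det = 0 via the quadratic formula. A sketch assuming real eigenvalues, on a made-up symmetric matrix:

```python
from math import sqrt

def eig2(A):
    """Eigenvalues of a 2x2 matrix with real eigenvalues, smallest first."""
    tr = A[0][0] + A[1][1]
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    disc = sqrt(tr * tr - 4 * det)   # assumes a non-negative discriminant
    return (tr - disc) / 2, (tr + disc) / 2

lo, hi = eig2([[2.0, 1.0],
               [1.0, 2.0]])
print(lo, hi)  # 1.0 and 3.0
```

Squaring the matrix squares these eigenvalues while keeping the eigenvectors, which is exactly why diagonalization makes matrix powers easy.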
This video lecture powers on to solve a system of linear equations involving first-order differential equations. What happens when a matrix is in the exponent and the base is the natural exponential function?
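The matrix exponential that appears here is defined by the same power series as the scalar one: e^{At} = I + At + (At)²/2! + …. A truncated-series sketch on a diagonal example (chosen so the answer is simply e and e² on the diagonal, easy to check):

```python
from math import exp, factorial

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def expm(A, terms=20):
    """Matrix exponential e^A via a truncated power series sum A^k / k!."""
    n = len(A)
    result = [[0.0] * n for _ in range(n)]
    power = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]  # A^0
    for k in range(terms):
        for i in range(n):
            for j in range(n):
                result[i][j] += power[i][j] / factorial(k)
        power = matmul(power, A)
    return result

E = expm([[1.0, 0.0],
          [0.0, 2.0]])
print(E[0][0], E[1][1])  # close to e^1 and e^2
```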
Strang gives us some applications of eigenvalues and projections, such as Markov matrices and Fourier series. The key to a Markov matrix is that all entries are greater than or equal to zero and every column sums to 1. Why are these properties so important?
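Those two properties guarantee an eigenvalue of exactly 1, and applying the matrix repeatedly drives any starting probability vector toward the corresponding eigenvector, the steady state. A sketch with an illustrative 2×2 Markov matrix:

```python
def step(A, u):
    """One Markov step: the matrix-vector product A u."""
    return [sum(A[i][j] * u[j] for j in range(len(u))) for i in range(len(A))]

A = [[0.9, 0.2],
     [0.1, 0.8]]      # nonnegative entries, each column sums to 1
u = [1.0, 0.0]        # start with everything in state 1
for _ in range(100):
    u = step(A, u)
print(u)  # approaches the steady state [2/3, 1/3]
```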
Time for a second review. Like the first review, the lecturer starts by going over the main points of what has been done during the period since the last quiz review and then goes on to go through an old linear algebra exam to highlight the most significant questions and solutions.
This lecture is the key lecture for symmetric matrices. What is special about the eigenvalues and eigenvectors of a symmetric matrix? Why are the signs of the pivots of symmetric matrices the same as the signs of the eigenvalues?
What happens when a matrix has complex eigenvalues or eigenvectors? What about when the matrix itself has complex elements? This lecture is a detour from the real into the complex. What is the dot product of two complex vectors? This lecture also looks at applications of complex matrices and why the Fourier matrix is the most important such complex matrix.
What is a positive definite matrix and what tests can you use to find out? What is the connection to ellipses? How can linear algebra tools be used to find if you have a minimum?
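One of those tests is the determinant test: a symmetric matrix is positive definite exactly when every leading principal minor (the upper-left k×k determinant) is positive. A sketch, with illustrative example matrices:

```python
def det(A):
    """Determinant by cofactor expansion along the first row."""
    n = len(A)
    if n == 1:
        return A[0][0]
    return sum((-1) ** j * A[0][j] *
               det([row[:j] + row[j + 1:] for row in A[1:]])
               for j in range(n))

def is_positive_definite(A):
    """Determinant test: all leading principal minors must be positive."""
    n = len(A)
    return all(det([row[:k] for row in A[:k]]) > 0 for k in range(1, n + 1))

print(is_positive_definite([[2, 1], [1, 2]]))   # True: minors 2 and 3
print(is_positive_definite([[1, 3], [3, 1]]))   # False: 1 > 0 but det = -8
```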
Why is AᵀA positive definite? Where do positive definite matrices come from? What is matrix similarity and how can you tell if two matrices are similar? What is a Jordan matrix?
Strang explains how to factor matrices with a key linear algebra method called singular value decomposition. In what way does it bring together everything that has been covered in the course before?
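The factorization is A = UΣVᵀ, with orthogonal U and V and a diagonal Σ of singular values. A hand-worked sketch: the tiny matrix below was chosen so the factors can be written down directly, and multiplying them back recovers A:

```python
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

A = [[0.0, 2.0],
     [3.0, 0.0]]
U = [[0.0, 1.0],       # orthogonal: swaps the two axes
     [1.0, 0.0]]
S = [[3.0, 0.0],       # singular values 3 >= 2 on the diagonal
     [0.0, 2.0]]
Vt = [[1.0, 0.0],      # orthogonal (here simply the identity)
      [0.0, 1.0]]
print(matmul(matmul(U, S), Vt))  # reconstructs A
```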
Why does every linear transformation lead to a matrix? Examples include projection, shifting whole planes, rotations, and multiplication by a matrix.
How do you change from one basis to another? What about the applications of this, such as signal or image compression? What is the connection between a linear transformation and a matrix?
This third and final review covers all the material talked about since the last quiz. As before, Strang initially reviews the material and then goes through old exam questions on eigenvalues and eigenvectors, differential equations using matrices, symmetry, similarity, and positive definite matrices.
This penultimate lecture contrasts left and right inverses, their ranks and various subspaces. Null spaces sometimes screw up the ability to calculate inverses. How can the inverse concept be generalized to a pseudoinverse to handle that?
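The core of the pseudoinverse idea is easiest to see on a diagonal matrix: invert the nonzero entries and leave the zeros alone, since those null-space directions are exactly what an ordinary inverse cannot handle. Via the SVD, the same recipe extends to any matrix; this sketch shows only the diagonal case:

```python
def pinv_diag(d):
    """Pseudoinverse of a diagonal matrix, given as its diagonal entries."""
    return [1.0 / x if x != 0 else 0.0 for x in d]

d = [2.0, 0.5, 0.0]    # singular in the ordinary sense: a zero on the diagonal
print(pinv_diag(d))    # [0.5, 2.0, 0.0]
```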
This final lecture reviews all of the material covered in the course from start to finish. Strang spends the entire lecture going through old linear algebra exam questions.
If you want to start with linear algebra and like the video lecture format of education, this is the course you should watch. Strang is fantastic, fun and pedagogic.