These are the fundamental rules for working with the addition and scalar multiplication of matrices. We saw something very similar in the previous chapter (Theorem VSPCV). Together, these two theorems will motivate the key definition, Definition VS.

Theorem SLSLC connected linear combinations with systems of equations. Theorem SLEMM connects the matrix-vector product (Definition MVP) and column vector equality (Definition CVE) with systems of equations. We'll see this one regularly.
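For concreteness, here is a minimal pure-Python sketch (invented for illustration, not code from the text) of the connection Theorem SLEMM describes: a vector x solves the system LS(A, b) exactly when the matrix-vector product Ax equals b.

```python
# Theorem SLEMM, as a computation: x solves LS(A, b) exactly when Ax = b.

def matvec(A, x):
    """Matrix-vector product: entry i is the dot product of row i of A with x."""
    return [sum(a * xj for a, xj in zip(row, x)) for row in A]

A = [[1, 2], [3, 4]]   # coefficient matrix (toy example)
b = [5, 11]            # vector of constants
x = [1, 2]             # candidate solution

print(matvec(A, x) == b)  # True: x is a solution of LS(A, b)
```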

This theorem is a workhorse in Section MM and will continue to make regular appearances. If you want to get better at formulating proofs, applying this theorem can be a key step toward that broader understanding. While it might be hard to imagine Theorem EMP as a *definition* of matrix multiplication, we will see in Exercise MR.T80 that, in theory, it is actually a *better* definition of matrix multiplication in the long term.
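Theorem EMP gives the individual entries of a matrix product: entry (i, j) of AB is the sum over k of A[i][k] times B[k][j]. A short sketch of that formula as code (an illustrative example, not from the text):

```python
def matmul(A, B):
    """Theorem EMP as code: entry (i, j) of AB is sum over k of A[i][k] * B[k][j]."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))]
            for i in range(len(A))]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(matmul(A, B))  # [[19, 22], [43, 50]]
```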

The inverse of a matrix is key. Here's how you can get one if you know how to row-reduce.
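The procedure hinted at here is to row-reduce the augmented matrix [A | I]: if the left half reduces to the identity, the right half is the inverse. A small pure-Python sketch of that idea, using exact rational arithmetic (an invented example matrix, not code from the text):

```python
from fractions import Fraction

def rref(M):
    """Reduced row-echelon form via Gauss-Jordan elimination (exact arithmetic)."""
    A = [[Fraction(x) for x in row] for row in M]
    rows, cols = len(A), len(A[0])
    r = 0
    for c in range(cols):
        piv = next((i for i in range(r, rows) if A[i][c] != 0), None)
        if piv is None:
            continue                          # no pivot in this column
        A[r], A[piv] = A[piv], A[r]           # swap the pivot row into place
        A[r] = [x / A[r][c] for x in A[r]]    # scale the pivot entry to 1
        for i in range(rows):
            if i != r and A[i][c] != 0:       # clear the rest of the column
                A[i] = [a - A[i][c] * b for a, b in zip(A[i], A[r])]
        r += 1
        if r == rows:
            break
    return A

def inverse(A):
    """Row-reduce [A | I]; if the left half becomes I, the right half is the inverse."""
    n = len(A)
    aug = [list(row) + [int(i == j) for j in range(n)] for i, row in enumerate(A)]
    R = rref(aug)
    if any(R[i][j] != (i == j) for i in range(n) for j in range(n)):
        raise ValueError("matrix is singular")
    return [row[n:] for row in R]

print([[int(x) for x in row] for row in inverse([[2, 1], [5, 3]])])
# [[3, -1], [-5, 2]]
```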

This theorem is a fundamental tool for proving subsequent important theorems, such as Theorem NI. It may also be the best explanation for the term “nonsingular.”

“Nonsingularity” or “invertibility”? Pick your favorite, or show your versatility by using one or the other in the right context. They mean the same thing.

Given a coefficient matrix, which vectors of constants create consistent systems? This theorem tells us that the answer is exactly those column vectors in the column space. Conversely, we can also use this theorem to test for membership in the column space by checking the consistency of the appropriate system of equations.
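The membership test just described — b is in the column space exactly when LS(A, b) is consistent — can be sketched in a few lines of pure Python, checking consistency by comparing the rank of A with the rank of the augmented matrix (an illustrative sketch with an invented example, not code from the text):

```python
from fractions import Fraction

def rank(M):
    """Rank via Gaussian elimination, using exact rational arithmetic."""
    A = [[Fraction(x) for x in row] for row in M]
    r = 0
    for c in range(len(A[0])):
        piv = next((i for i in range(r, len(A)) if A[i][c] != 0), None)
        if piv is None:
            continue
        A[r], A[piv] = A[piv], A[r]
        for i in range(r + 1, len(A)):
            f = A[i][c] / A[r][c]
            A[i] = [a - f * b for a, b in zip(A[i], A[r])]
        r += 1
    return r

def in_column_space(A, b):
    """b is in the column space of A exactly when LS(A, b) is consistent."""
    aug = [list(row) + [bi] for row, bi in zip(A, b)]
    return rank(aug) == rank(A)

A = [[1, 2], [2, 4]]                 # second column is twice the first
print(in_column_space(A, [3, 6]))    # True:  [3, 6] = 3 * (first column)
print(in_column_space(A, [1, 0]))    # False: the system LS(A, [1, 0]) is inconsistent
```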

Another theorem that provides a linearly independent set of vectors whose span equals some set of interest (a column space this time).
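The linearly independent spanning set in question consists of the columns of the *original* matrix that become pivot columns after row reduction. A small sketch of that recipe in pure Python (an invented example matrix, not code from the text):

```python
from fractions import Fraction

def pivot_columns(M):
    """Indices of the pivot columns of the reduced row-echelon form of M."""
    A = [[Fraction(x) for x in row] for row in M]
    rows, cols = len(A), len(A[0])
    pivots, r = [], 0
    for c in range(cols):
        piv = next((i for i in range(r, rows) if A[i][c] != 0), None)
        if piv is None:
            continue
        A[r], A[piv] = A[piv], A[r]
        A[r] = [x / A[r][c] for x in A[r]]
        for i in range(rows):
            if i != r and A[i][c] != 0:
                A[i] = [a - A[i][c] * b for a, b in zip(A[i], A[r])]
        pivots.append(c)
        r += 1
        if r == rows:
            break
    return pivots

A = [[1, 2, 3],
     [2, 4, 7]]
piv = pivot_columns(A)
basis = [[row[c] for row in A] for c in piv]   # pivot columns of the ORIGINAL A
print(piv, basis)  # [0, 2] [[1, 2], [3, 7]]
```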

Yet another theorem that provides a linearly independent set of vectors whose span equals some set of interest (a row space).

Column spaces, row spaces, transposes, rows, columns. Many of the connections between these objects are based on the simple observation captured in this theorem. This is not a deep result. We state it as a theorem for convenience, so we can refer to it as needed.

This theorem is inherently interesting, if not computationally satisfying. Null space, row space, column space, left null space — here they all are, simply by row reducing the extended matrix and applying Theorem BNS and Theorem BCS twice (each). Nice.
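The "row reduce the extended matrix" step can be sketched concretely: row-reduce [A | I] and split the result as [C | L]. The rows of L sitting beside zero rows of C are vectors y with yA = 0, so they span the left null space. A pure-Python sketch with an invented example matrix (not code from the text):

```python
from fractions import Fraction

def rref(M):
    """Reduced row-echelon form via Gauss-Jordan elimination (exact arithmetic)."""
    A = [[Fraction(x) for x in row] for row in M]
    rows, cols = len(A), len(A[0])
    r = 0
    for c in range(cols):
        piv = next((i for i in range(r, rows) if A[i][c] != 0), None)
        if piv is None:
            continue
        A[r], A[piv] = A[piv], A[r]
        A[r] = [x / A[r][c] for x in A[r]]
        for i in range(rows):
            if i != r and A[i][c] != 0:
                A[i] = [a - A[i][c] * b for a, b in zip(A[i], A[r])]
        r += 1
        if r == rows:
            break
    return A

A = [[1, 2, 3],
     [2, 4, 6],    # twice the first row
     [1, 1, 1]]
m, n = len(A), len(A[0])
# extended matrix [A | I], then split its reduced form as [C | L]
ext = [list(row) + [int(i == j) for j in range(m)] for i, row in enumerate(A)]
N = rref(ext)
C = [row[:n] for row in N]
L = [row[n:] for row in N]
# rows of L beside zero rows of C span the left null space of A
left_null = [L[i] for i in range(m) if all(x == 0 for x in C[i])]
print(len(left_null))  # 1 -- the left null space here is one-dimensional
print(all(sum(y[i] * A[i][j] for i in range(m)) == 0
          for y in left_null for j in range(n)))  # True: each y satisfies yA = 0
```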