Section 1.1 Introduction
This book is about advanced topics in linear algebra. So we presume you have some experience with matrix algebra, vector spaces (possibly abstract ones), eigenvalues, linear transformations, and matrix representations of linear transformations. All of this material can be found in A First Course in Linear Algebra, which we will reference frequently.
Our approach is mathematical, which means we include proofs of our results. However, we are also practical, and will not always be as general as we could be. For example, we will stick to a single inner product throughout (the sesquilinear one that is most useful when employing complex numbers). We will sometimes be careful about our field of scalars, but will not dwell on the distinctions peculiar to the real numbers (versus the algebraically closed complex numbers). This is not a course in numerical linear algebra, but much of what we do provides the mathematical underpinnings of that topic, so this could be a very useful resource for study in that area. We will make mention of algorithmic performance, relying on Trefethen and Bau's excellent Numerical Linear Algebra for details.
Many topics we consider are motivated by trying to find simpler versions of matrices. Here “simpler” can be taken to mean many zero entries. Barring a zero entry, an entry equal to one is second-best. An overall form that is much like a diagonal matrix is also desirable, since diagonal matrices are simple to work with (forward reference to exercise). A familiar example may help to make these ideas more precise.
Example 1.1.1. Reduced Row-Echelon Form as a Factorization.
Given an \(m\times n\) matrix, \(A\text{,}\) we know that its reduced row-echelon form is unique (Theorem RREFU). We also know that we can accomplish row operations by multiplying \(A\) on the left by a (nonsingular) elementary matrix (Subsection DM.EM). Suppose we perform repeated row operations to transform \(A\) into a matrix in reduced row-echelon form, \(B\text{.}\) Then the product of the elementary matrices is a square nonsingular matrix, \(J\text{,}\) such that
\begin{equation*}
JA=B\text{,}
\end{equation*}
or equivalently
\begin{equation*}
A=\inverse{J}B\text{.}
\end{equation*}
We call the second version a factorization, or matrix decomposition, of \(A\) (though some might use the same terms for the first version, perhaps saying it is a factorization of \(B\)). The pieces of this decomposition have certain properties. The matrix \(\inverse{J}\) is a nonsingular matrix of size \(m\text{.}\) The matrix \(B\) has an abundance of zero entries, and some strategically placed “leading ones” which signify the pivot columns. The exact structure of \(B\) is described by Definition RREF, and Theorem REMEF tells us that we can accomplish this decomposition given any matrix \(A\text{.}\)
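For instance (a small illustration of our own choosing), a single row operation, \(R_2\leftarrow R_2-2R_1\text{,}\) row-reduces a rank one matrix:
\begin{equation*}
J=\begin{bmatrix}1&0\\-2&1\end{bmatrix},\qquad
A=\begin{bmatrix}1&2\\2&4\end{bmatrix},\qquad
B=JA=\begin{bmatrix}1&2\\0&0\end{bmatrix},
\end{equation*}
so that
\begin{equation*}
A=\inverse{J}B=\begin{bmatrix}1&0\\2&1\end{bmatrix}\begin{bmatrix}1&2\\0&0\end{bmatrix}\text{.}
\end{equation*}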
If \(A\) is not of full rank, then there are many possibilities for the matrix \(J\text{,}\) even though \(B\) is unique. However, results on extended echelon form (Subsection FS.PEEF) suggest a choice for \(J\) that is unambiguous. We say that choice is canonical. This example gives the following theorem, where we have changed the notation slightly.

Theorem 1.1.2. Reduced Row-Echelon Form Decomposition.
Suppose \(A\) is an \(m\times n\) matrix. Then there exists a nonsingular matrix \(J\) of size \(m\) and a matrix \(B\) in reduced row-echelon form such that \(A=JB\text{.}\)
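To see the canonical choice computationally, here is a minimal sketch in Python using SymPy (our own illustration, not part of the text's apparatus): row-reducing the augmented matrix \([A\mid I_m]\) yields the extended echelon form \([B\mid J]\text{,}\) where \(J\) plays the role of the product of elementary matrices above.

\begin{verbatim}
import sympy as sp

# A small rank-deficient matrix, so the row-reducing matrix is not unique
A = sp.Matrix([[1, 2], [2, 4]])
m = A.rows

# Row-reduce the augmented matrix [A | I]; the result [B | J] is the
# extended echelon form, which picks out the canonical choice of J
N = A.row_join(sp.eye(m)).rref()[0]
B, J = N[:, :A.cols], N[:, A.cols:]

assert B == A.rref()[0]    # B is the unique reduced row-echelon form of A
assert J * A == B          # J A = B, with J nonsingular
assert A == J.inv() * B    # the factorization A = J^{-1} B
\end{verbatim}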
Again, many of the topics in this book will have a flavor similar to the previous example and theorem. However, we will often need to limit the possibilities for the original matrix (it may need to be square, or its eigenvalues may need certain properties). We may get more specific information about the components of the factorization, or we may get less. We will also be interested in obtaining canonical forms of matrices. You can view orthonormal diagonalization (Section OD) as a good example of another matrix decomposition, and we will cover it again in some detail in Section (((section on orthonormal diagonalization/Schur))).
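To preview that material, here is a minimal numerical sketch of orthonormal diagonalization in Python with NumPy (again our own illustration): a Hermitian matrix is diagonalized as \(A=UDU^*\) by a unitary matrix \(U\) whose columns are orthonormal eigenvectors.

\begin{verbatim}
import numpy as np

# Build a Hermitian matrix (equal to its conjugate transpose)
rng = np.random.default_rng(1)
X = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
A = (X + X.conj().T) / 2

# eigh returns real eigenvalues and a unitary matrix of eigenvectors
evals, U = np.linalg.eigh(A)

# Orthonormal diagonalization: A = U diag(evals) U^*
assert np.allclose(A, U @ np.diag(evals) @ U.conj().T)
assert np.allclose(U.conj().T @ U, np.eye(3))  # orthonormal columns
\end{verbatim}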