The inner product convention is ``reversed'' relative to prior versions of this material; see the changelog for an explanation.
This chapter is about breaking up a matrix into simpler pieces.
We have already studied one matrix decomposition, so we review it here in this introduction, both as a way of previewing the topic in a familiar setting and because it does not deserve a section all of its own.
A diagonalizable matrix (Definition DZM) is defined to be a square matrix $A$ such that there is an invertible matrix $S$ and a diagonal matrix $D$ where $S^{-1}AS = D$. We can rewrite this as $A = SDS^{-1}$. Here we have a decomposition of $A$ into three matrices, $S$, $D$ and $S^{-1}$, which recombine through matrix multiplication to recreate $A$. We also know that the diagonal entries of $D$ are the eigenvalues of $A$. We cannot form this decomposition for just any matrix: $A$ must be square, and we know from Theorem DC that a square matrix of size $n$ is diagonalizable if and only if there is a basis of $\mathbb{C}^n$ composed entirely of eigenvectors of $A$. Equivalently, by Theorem DMFE, $A$ is diagonalizable if and only if the geometric multiplicity of each eigenvalue of $A$ equals its algebraic multiplicity. Some authors prefer to call this an eigendecomposition of $A$ rather than a matrix diagonalization.
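To make the recombination $A = SDS^{-1}$ concrete, here is a minimal numerical sketch (not part of the text) using NumPy's `numpy.linalg.eig`; the example matrix is an illustrative assumption. For a matrix failing the multiplicity condition of Theorem DMFE, the eigenvector matrix $S$ would be singular and the reconstruction would break down.

```python
import numpy as np

# A small diagonalizable matrix (an illustrative choice, not from the text);
# its eigenvalues are 5 and 2.
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

# numpy.linalg.eig returns the eigenvalues and a matrix whose
# columns are the corresponding eigenvectors.
eigenvalues, S = np.linalg.eig(A)

# D is the diagonal matrix with the eigenvalues of A on the diagonal.
D = np.diag(eigenvalues)

# Recombine the three pieces: A should equal S D S^{-1}.
A_rebuilt = S @ D @ np.linalg.inv(S)

print(np.allclose(A, A_rebuilt))  # True (up to floating-point error)
```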
Another decomposition, which is similar in flavor to matrix diagonalization, is orthonormal diagonalization (Theorem OD). Here we require the matrix $A$ to be normal and we get the decomposition $A = UDU^{\ast}$, where $D$ is a diagonal matrix with the eigenvalues of $A$ on the diagonal, and $U$ is unitary. The hypothesis that $A$ is normal guarantees the decomposition, and we get the extra information that $U$ is unitary.
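As a sketch of this decomposition (again an assumption-laden illustration, not part of the text), consider a Hermitian matrix, one familiar family of normal matrices. NumPy's `numpy.linalg.eigh` returns its real eigenvalues together with a unitary matrix of orthonormal eigenvectors, from which we can verify $A = UDU^{\ast}$.

```python
import numpy as np

# A Hermitian matrix, hence normal (an illustrative choice, not from the text).
A = np.array([[2.0,        1.0 - 1.0j],
              [1.0 + 1.0j, 3.0       ]])

# For Hermitian matrices, numpy.linalg.eigh returns real eigenvalues
# and a matrix U whose columns are orthonormal eigenvectors.
eigenvalues, U = np.linalg.eigh(A)
D = np.diag(eigenvalues)

# U is unitary: U* U = I.
print(np.allclose(U.conj().T @ U, np.eye(2)))  # True

# Recombine: A = U D U*.
print(np.allclose(A, U @ D @ U.conj().T))      # True
```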
Each section of this chapter features a different matrix decomposition, with the exception of Section PSM, which presents background information on positive semi-definite matrices required for singular value decompositions, square roots and polar decompositions.