Inner product is "reversed" from prior material; see changelog explanation.

This chapter is about breaking up a matrix A into pieces that somehow combine to recreate A. Usually the pieces are again matrices, and usually they are then combined via matrix multiplication (Definition MM). In some cases, the decomposition will be valid for any matrix, but often we might need extra conditions on A, such as being square (Definition SQM), nonsingular (Definition NM) or diagonalizable (Definition DZM) before we can guarantee the decomposition. If you are comfortable with topics like decomposing a solution vector into linear combinations (Subsection LC.VFSS) or decomposing vector spaces into direct sums (Subsection PD.DS), then we will be doing similar things in this chapter. If not, review these ideas and take another look at Technique DC on decompositions.

We have studied one matrix decomposition already, so we will review it here in this introduction, both as a way of previewing the topic in a familiar setting and because it does not warrant another section all of its own.

A diagonalizable matrix (Definition DZM) is defined to be a square matrix A such that there is an invertible matrix S and a diagonal matrix D where S^{-1}AS = D. We can rewrite this as A = SDS^{-1}. Here we have a decomposition of A into three matrices, S, D and S^{-1}, which recombine through matrix multiplication to recreate A. We also know that the diagonal entries of D are the eigenvalues of A. We cannot form this decomposition for just any matrix: A must be square, and from Theorem DC we know that a square matrix of size n is diagonalizable if and only if there is a basis for ℂ^n composed entirely of eigenvectors of A, while by Theorem DMFE we know that A is diagonalizable if and only if the geometric multiplicity of each eigenvalue of A equals its algebraic multiplicity. Some authors prefer to call this an eigen decomposition of A rather than a matrix diagonalization.
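As a quick numerical illustration of this decomposition (the matrix below is a made-up example, not one drawn from the text), NumPy can compute S and D and confirm that the three pieces recombine to recreate A:

```python
import numpy as np

# A small diagonalizable matrix (hypothetical example).
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

# Columns of S are eigenvectors of A; the entries of D are the
# eigenvalues of A, matching the order of the columns of S.
eigenvalues, S = np.linalg.eig(A)
D = np.diag(eigenvalues)

# Recombine the three pieces: A = S D S^{-1}.
A_rebuilt = S @ D @ np.linalg.inv(S)
assert np.allclose(A, A_rebuilt)
```

This sketch assumes the eigenvectors returned form an invertible S, which Theorem DC guarantees exactly when A is diagonalizable.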

Another decomposition, which is similar in flavor to matrix diagonalization, is orthonormal diagonalization (Theorem OD). Here we require the matrix A to be normal and we get the decomposition A = UDU^*, where D is a diagonal matrix with the eigenvalues of A on the diagonal, and U is unitary. The hypothesis that A is normal guarantees the decomposition and we get the extra information that U is unitary.
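A real symmetric matrix is one familiar case of a normal matrix, so Theorem OD applies to it. A minimal sketch of orthonormal diagonalization for such a matrix (again a hypothetical example, not from the text):

```python
import numpy as np

# A real symmetric matrix is normal, so Theorem OD applies.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# For Hermitian (here, real symmetric) input, eigh returns real
# eigenvalues and an orthonormal set of eigenvectors, so U is unitary.
eigenvalues, U = np.linalg.eigh(A)
D = np.diag(eigenvalues)

# U is unitary (U^* U = I), and A = U D U^*.
assert np.allclose(U.conj().T @ U, np.eye(2))
assert np.allclose(A, U @ D @ U.conj().T)
```

Note the extra information promised by the theorem: unlike the general diagonalization above, no matrix inverse is needed, since U^{-1} = U^*.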

Each section of this chapter features a different matrix decomposition, with the exception of Section PSM, which presents background information on positive semi-definite matrices required for singular value decompositions, square roots and polar decompositions.

Section ROD Rank One Decomposition

Section TD Triangular Decomposition

Subsection TD: Triangular Decomposition

Subsection TDSSE: Triangular Decomposition and Solving Systems of Equations

Subsection CTD: Computing Triangular Decompositions

Section SVD Singular Value Decomposition

Subsection MAP: Matrix-Adjoint Product

Subsection SVD: Singular Value Decomposition

Section SR Square Roots

Subsection SRM: Square Root of a Matrix

Section POD Polar Decomposition

Section CF Curve Fitting

Subsection DF: Data Fitting

Subsection EXC: Exercises

Section SAS Sharing A Secret
