Section 2.3 Singular Value Decomposition

The singular value decomposition is one of the more useful ways to represent any matrix, even rectangular ones. We can also view the singular values of a (rectangular) matrix as analogues of the eigenvalues of a square matrix.

Subsection 2.3.1 Matrix-Adjoint Products

Our definitions and theorems in this section rely heavily on the properties of the matrix-adjoint products (\(\adjoint{A}A\) and \(A\adjoint{A}\)). We start by examining some of the basic properties of these two positive semi-definite matrices. Now would be a good time to review the basic facts about positive semi-definite matrices in Section 1.8.

Suppose that \(\vect{x}\in\complex{n}\) is any eigenvector of \(\adjoint{A}A\) for a nonzero eigenvalue \(\lambda\text{.}\) We will show that \(A\vect{x}\) is an eigenvector of \(A\adjoint{A}\) for the same eigenvalue, \(\lambda\text{.}\) First, we ascertain that \(A\vect{x}\) is not the zero vector.

\begin{equation*} \innerproduct{A\vect{x}}{A\vect{x}}=\innerproduct{\vect{x}}{\adjoint{A}A\vect{x}}=\innerproduct{\vect{x}}{\lambda\vect{x}}=\lambda\innerproduct{\vect{x}}{\vect{x}} \end{equation*}

Since \(\vect{x}\) is an eigenvector, \(\vect{x}\neq\zerovector\text{,}\) and by Theorem PIP, \(\innerproduct{\vect{x}}{\vect{x}}\neq 0\text{.}\) As \(\lambda\) was assumed to be nonzero, we see that \(\innerproduct{A\vect{x}}{A\vect{x}}\neq 0\text{.}\) Again, Theorem PIP tells us that \(A\vect{x}\neq\zerovector\text{.}\)

Much of the sequel turns on the following simple computation. If you ever wonder what all the fuss is about adjoints, Hermitian matrices, square roots, and singular values, return to this brief computation, as it holds the key. There is much more to do in this proof, but after this it is mostly bookkeeping. Here we go. We check that \(A\vect{x}\) functions as an eigenvector of \(A\adjoint{A}\) for the eigenvalue \(\lambda\text{,}\)

\begin{equation*} \left(A\adjoint{A}\right)A\vect{x}=A\left(\adjoint{A}A\right)\vect{x}=A\lambda\vect{x}=\lambda\left(A\vect{x}\right) \end{equation*}

That's it. If \(\vect{x}\) is an eigenvector of \(\adjoint{A}A\text{,}\) for a nonzero eigenvalue, then \(A\vect{x}\) is an eigenvector for \(A\adjoint{A}\) for the same nonzero eigenvalue. Let's see what this buys us.
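Before moving on, here is a quick numerical check of this key fact: a minimal sketch using NumPy, with a small random complex matrix standing in as an assumed example.

```python
import numpy as np

# A small rectangular complex matrix, chosen arbitrarily for illustration.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 3)) + 1j * rng.standard_normal((4, 3))

AhA = A.conj().T @ A      # the n x n matrix-adjoint product
AAh = A @ A.conj().T      # the m x m matrix-adjoint product

# Orthonormal eigenvectors of the Hermitian matrix A^H A.
evals, evecs = np.linalg.eigh(AhA)

# Pick an eigenvector x for a nonzero eigenvalue lam (the largest one).
lam, x = evals[-1], evecs[:, -1]

# A x should be an eigenvector of A A^H for the same eigenvalue.
print(np.allclose(AAh @ (A @ x), lam * (A @ x)))   # True
```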

\(\adjoint{A}A\) and \(A\adjoint{A}\) are Hermitian matrices (Definition HM), and hence are normal (Definition 1.7.1). This provides the existence of orthonormal bases of eigenvectors for each matrix by Theorem OBNM. Also, since each matrix is diagonalizable (Definition DZM) by Theorem OD the algebraic and geometric multiplicities of each eigenvalue (zero or not) are equal by Theorem DMFE.

Our first step is to establish that a nonzero eigenvalue \(\lambda\) has the same geometric multiplicity for both \(\adjoint{A}A\) and \(A\adjoint{A}\text{.}\) Suppose \(\set{\vectorlist{x}{s}}\) is an orthonormal basis of eigenvectors of \(\adjoint{A}A\) for the eigenspace \(\eigenspace{\adjoint{A}A}{\lambda}\text{.}\) Then for \(1\leq i\lt j\leq s\text{,}\)

\begin{equation*} \innerproduct{A\vect{x}_i}{A\vect{x}_j}=\innerproduct{\vect{x}_i}{\adjoint{A}A\vect{x}_j}=\innerproduct{\vect{x}_i}{\lambda\vect{x}_j}=\lambda\innerproduct{\vect{x}_i}{\vect{x}_j}=\lambda(0) \end{equation*}

So the set \(E=\set{A\vect{x}_1,\,A\vect{x}_2,\,A\vect{x}_3,\,\dots,\,A\vect{x}_s}\) is an orthogonal set of nonzero eigenvectors of \(A\adjoint{A}\) for the eigenvalue \(\lambda\text{.}\) By Theorem OSLI, the set \(E\) is linearly independent and so the geometric multiplicity of \(\lambda\) as an eigenvalue of \(A\adjoint{A}\) is \(s\) or greater. We have

\begin{equation*} \algmult{\adjoint{A}A}{\lambda} = \geomult{\adjoint{A}A}{\lambda} \leq \geomult{A\adjoint{A}}{\lambda} = \algmult{A\adjoint{A}}{\lambda} \end{equation*}

This inequality applies to any matrix for any of its nonzero eigenvalues. We now apply it to the matrix \(\adjoint{A}\text{,}\)

\begin{equation*} \algmult{A\adjoint{A}}{\lambda} = \algmult{\adjoint{\left(\adjoint{A}\right)}\adjoint{A}}{\lambda} \leq \algmult{\adjoint{A}\adjoint{\left(\adjoint{A}\right)}}{\lambda} = \algmult{\adjoint{A}A}{\lambda} \end{equation*}

With the twin inequalities, we see that the four multiplicities of a common nonzero eigenvalue of \(\adjoint{A}A\) and \(A\adjoint{A}\) are all equal. This is enough to establish that \(p=q\text{,}\) since a nonzero eigenvalue of one matrix-adjoint product cannot have algebraic multiplicity zero as an eigenvalue of the other. Then the eigenvalues can be ordered such that \(\lambda_i=\rho_i\) for \(1\leq i\leq p\text{.}\)

For any matrix, the null space is identical to the eigenspace of the zero eigenvalue, and thus the nullity of the matrix is equal to the geometric multiplicity of the zero eigenvalue. As our matrix-adjoint products are diagonalizable, the nullity is also equal to the algebraic multiplicity of the zero eigenvalue. The algebraic multiplicities of all the eigenvalues sum to the size of the matrix (Theorem NEM), as do the rank and nullity (Theorem RPNC). So the rank of each matrix-adjoint product is equal to the sum of the algebraic multiplicities of its nonzero eigenvalues, and therefore the ranks of \(\adjoint{A}A\) and \(A\adjoint{A}\) are equal,

\begin{equation*} \rank{\adjoint{A}A}=\sum_{i=1}^{p}\algmult{\adjoint{A}A}{\lambda_i}=\sum_{i=1}^{q}\algmult{A\adjoint{A}}{\rho_i}=\rank{A\adjoint{A}} \end{equation*}

When \(A\) is rectangular, the square matrices \(\adjoint{A}A\) and \(A\adjoint{A}\) have different sizes. With equal algebraic and geometric multiplicities for their common nonzero eigenvalues, the difference in their sizes is manifest in different algebraic multiplicities for the zero eigenvalue and different nullities. Specifically, with \(r\) denoting the common rank of \(\adjoint{A}A\) and \(A\adjoint{A}\text{,}\)

\begin{align*} \nullity{\adjoint{A}A}&=n-r & \nullity{A\adjoint{A}}&=m-r \end{align*}
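These counts are easy to confirm numerically. A sketch, again assuming NumPy and a small random example, with `matrix_rank` standing in for the theoretical rank.

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 5, 3
A = rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))

AhA = A.conj().T @ A
AAh = A @ A.conj().T

r = np.linalg.matrix_rank(A)
print(np.linalg.matrix_rank(AhA) == r)   # True: A^H A has rank r
print(np.linalg.matrix_rank(AAh) == r)   # True: A A^H has the same rank r
print(n - r, m - r)                      # nullities of A^H A and A A^H
```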

Suppose that \(\vectorlist{x}{n}\) is an orthonormal basis of \(\complex{n}\) composed of eigenvectors of \(\adjoint{A}A\) and ordered so that \(\vect{x}_i\text{,}\) \(r+1\leq i\leq n\text{,}\) are eigenvectors of \(\adjoint{A}A\) for the zero eigenvalue. Denote the associated nonzero eigenvalues of \(\adjoint{A}A\) for the eigenvectors \(\vect{x}_i\text{,}\) \(1\leq i\leq r\text{,}\) by \(\delta_i\text{.}\) Then define

\begin{equation*} \vect{y}_i=\frac{1}{\sqrt{\delta_i}}A\vect{x}_i,\quad 1\leq i\leq r \end{equation*}

Let \(\vect{y}_{r+1},\,\vect{y}_{r+2},\,\vect{y}_{r+3},\,\dots,\,\vect{y}_{m}\) be an orthonormal basis for the eigenspace \(\eigenspace{A\adjoint{A}}{0}\text{,}\) whose existence is guaranteed by the Gram-Schmidt process (Theorem GSP). As scalar multiples of demonstrated eigenvectors of \(A\adjoint{A}\text{,}\) \(\vect{y}_i\text{,}\) \(1\leq i\leq r\text{,}\) are also eigenvectors of \(A\adjoint{A}\text{,}\) and \(\vect{y}_i\text{,}\) \(r+1\leq i\leq m\text{,}\) have been chosen as eigenvectors of \(A\adjoint{A}\text{.}\) These eigenvectors also have norm 1, as we now show.

For \(1\leq i\leq r\text{,}\)

\begin{align*} \norm{\vect{y}_i}^2&=\norm{\frac{1}{\sqrt{\delta_i}}A\vect{x}_i}^2=\innerproduct{\frac{1}{\sqrt{\delta_i}}A\vect{x}_i}{\frac{1}{\sqrt{\delta_i}}A\vect{x}_i}\\ &=\conjugate{\frac{1}{\sqrt{\delta_i}}}\frac{1}{\sqrt{\delta_i}}\innerproduct{A\vect{x}_i}{A\vect{x}_i}=\frac{1}{\delta_i}\innerproduct{A\vect{x}_i}{A\vect{x}_i}\\ &=\frac{1}{\delta_i}\innerproduct{\vect{x}_i}{\adjoint{A}A\vect{x}_i}=\frac{1}{\delta_i}\innerproduct{\vect{x}_i}{\delta_i\vect{x}_i}\\ &=\frac{1}{\delta_i}\delta_i\innerproduct{\vect{x}_i}{\vect{x}_i}=1 \end{align*}

For \(r+1\leq i\leq n\text{,}\) the \(\vect{y}_i\) have been chosen to have norm 1.

Finally we check orthogonality. Consider two eigenvectors \(\vect{y}_i\) and \(\vect{y}_j\) with \(1\leq i\lt j\leq m\text{.}\) If these two vectors have different eigenvalues, then Theorem HMOE establishes that the two eigenvectors are orthogonal. If the two eigenvectors have a zero eigenvalue, then they are orthogonal by the choice of the orthonormal basis of \(\eigenspace{A\adjoint{A}}{0}\text{.}\) If the two eigenvectors have identical eigenvalues, which are nonzero, then

\begin{align*} \innerproduct{\vect{y}_i}{\vect{y}_j}&= \innerproduct{\frac{1}{\sqrt{\delta_i}}A\vect{x}_i}{\frac{1}{\sqrt{\delta_j}}A\vect{x}_j}=\conjugate{\frac{1}{\sqrt{\delta_i}}}\frac{1}{\sqrt{\delta_j}} \innerproduct{A\vect{x}_i}{A\vect{x}_j}\\ &=\frac{1}{\sqrt{\delta_i\delta_j}}\innerproduct{A\vect{x}_i}{A\vect{x}_j}=\frac{1}{\sqrt{\delta_i\delta_j}} \innerproduct{\vect{x}_i}{\adjoint{A}A\vect{x}_j}\\ &=\frac{1}{\sqrt{\delta_i\delta_j}} \innerproduct{\vect{x}_i}{\delta_j\vect{x}_j}= \frac{\delta_j}{\sqrt{\delta_i\delta_j}} \innerproduct{\vect{x}_i}{\vect{x}_j}\\ &=\frac{\delta_j}{\sqrt{\delta_i\delta_j}}(0)=0 \end{align*}

So \(\set{\vectorlist{y}{m}}\) is an orthonormal set of eigenvectors for \(A\adjoint{A}\text{.}\) The critical relationship between these two orthonormal bases is present by design. For \(1\leq i\leq r\text{,}\)

\begin{equation*} A\vect{x}_i =\sqrt{\delta_i}\frac{1}{\sqrt{\delta_i}}A\vect{x}_i =\sqrt{\delta_i}\vect{y}_i \end{equation*}

For \(r+1\leq i\leq n\) we have

\begin{equation*} \innerproduct{A\vect{x}_i}{A\vect{x}_i}=\innerproduct{\vect{x}_i}{\adjoint{A}A\vect{x}_i}=\innerproduct{\vect{x}_i}{\zerovector}=0 \end{equation*}

So by Theorem PIP, \(A\vect{x}_i=\zerovector\text{.}\)
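The whole construction can be traced in code. A sketch assuming NumPy and a full column rank example, so that \(r=n\) and every \(\delta_i\) is nonzero; the variable names mirror the proof but are otherwise an assumption.

```python
import numpy as np

rng = np.random.default_rng(2)
m, n = 4, 3
A = rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))

# Orthonormal eigenvectors of A^H A, reordered so eigenvalues decrease.
delta, X = np.linalg.eigh(A.conj().T @ A)
delta, X = delta[::-1], X[:, ::-1]

# y_i = (1/sqrt(delta_i)) A x_i for the (here, all nonzero) eigenvalues.
Y = (A @ X) / np.sqrt(delta)

# The y_i are orthonormal ...
print(np.allclose(Y.conj().T @ Y, np.eye(n)))
# ... they are eigenvectors of A A^H for the same eigenvalues ...
print(np.allclose((A @ A.conj().T) @ Y, Y * delta))
# ... and A x_i = sqrt(delta_i) y_i, by design.
print(np.allclose(A @ X, Y * np.sqrt(delta)))
```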

Subsection 2.3.2 Singular Value Decomposition

The square roots of the eigenvalues of \(\adjoint{A}A\) (or almost equivalently, \(A\adjoint{A}\text{!}\)) are known as the singular values of \(A\text{.}\) Here is the definition.

Definition 2.3.2. Singular Values.

Suppose \(A\) is an \(m\times n\) matrix. If the eigenvalues of \(\adjoint{A}A\) are \(\scalarlist{\delta}{n}\text{,}\) then the singular values of \(A\) are

\begin{equation*} \sqrt{\delta_1},\,\sqrt{\delta_2},\,\sqrt{\delta_3},\,\ldots,\,\sqrt{\delta_n} \end{equation*}
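As a hedged illustration of this definition, the square roots of the eigenvalues of \(\adjoint{A}A\) can be compared against the singular values reported by NumPy's `svd` routine; the example matrix is an arbitrary assumption.

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((5, 3)) + 1j * rng.standard_normal((5, 3))

# Eigenvalues of A^H A, sorted from largest to smallest.
delta = np.sort(np.linalg.eigvalsh(A.conj().T @ A))[::-1]

# The singular values of A are their square roots.
print(np.allclose(np.sqrt(delta), np.linalg.svd(A, compute_uv=False)))  # True
```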

Theorem 2.3.1 is a total setup for the singular value decomposition. This remarkable theorem says that any matrix can be broken into a product of three matrices. Two are square and unitary. In light of Theorem UMPIP, we can view these matrices as transforming vectors or coordinates in a rotational fashion. The middle matrix of this decomposition is rectangular, but is as close to being diagonal as a rectangular matrix can be. Viewed as a transformation, this matrix effects reflections, contractions, or expansions along axes — it stretches vectors. So any matrix, viewed as a geometric transformation, is the product of a rotation, a stretch, and a rotation.

The singular value theorem can also be viewed as an application of our most general statement about matrix representations of linear transformations relative to different bases. Theorem MRCB concerns linear transformations \(\ltdefn{T}{U}{V}\) where \(U\) and \(V\) are possibly different vector spaces. When \(U\) and \(V\) have different dimensions, the resulting matrix representation will be rectangular. In Section CB we quickly specialized to the case where \(U=V\text{,}\) and the matrix representations are square, with one of our most central results, Theorem SCB. Theorem 2.3.3, next, is an application of the full generality of Theorem MRCB where the relevant bases are now orthonormal sets.

Let \(\vectorlist{x}{n}\) and \(\vectorlist{y}{m}\) be the orthonormal bases described by the conclusion of Theorem 2.3.1. Define \(U\) to be the \(m\times m\) matrix whose columns are \(\vect{y}_i\text{,}\) \(1\leq i\leq m\text{,}\) and define \(V\) to be the \(n\times n\) matrix whose columns are \(\vect{x}_i\text{,}\) \(1\leq i\leq n\text{.}\) With orthonormal sets of columns, both \(U\) and \(V\) are unitary matrices by Theorem CUMOS. Let \(S\) be the \(m\times n\) matrix whose entries are all zero, except for the diagonal entries \(\matrixentry{S}{ii}=s_i=\sqrt{\delta_i}\text{,}\) \(1\leq i\leq r\text{.}\)

Then for \(1\leq i\leq m\text{,}\) \(1\leq j\leq n\text{,}\)

\begin{align*} \matrixentry{AV}{ij}&=\vectorentry{A\vect{x}_j}{i}=\vectorentry{\sqrt{\delta_j}\vect{y}_j}{i}=s_j\vectorentry{\vect{y}_j}{i}\\ &=\matrixentry{U}{ij}\matrixentry{S}{jj}=\sum_{k=1}^{m}\matrixentry{U}{ik}\matrixentry{S}{kj}=\matrixentry{US}{ij} \end{align*}

So by Theorem ME, \(AV\) and \(US\) are equal matrices, \(AV=US\text{.}\) Since \(V\) is unitary, \(\adjoint{V}\) is its inverse, and multiplying on the right by \(\adjoint{V}\) yields the decomposition

\begin{equation*} A=US\adjoint{V} \end{equation*}

Typically, the singular values of a matrix are ordered from largest to smallest, so this convention is used for the diagonal elements of the matrix \(S\) in the decomposition, and then the columns of \(U\) and \(V\) will be ordered similarly.
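Here is a minimal numerical sketch of the decomposition \(A=US\adjoint{V}\text{,}\) assuming NumPy. The routine `numpy.linalg.svd` returns \(U\text{,}\) the singular values (already ordered from largest to smallest), and \(\adjoint{V}\) directly, so we only need to assemble the rectangular matrix \(S\text{.}\)

```python
import numpy as np

rng = np.random.default_rng(4)
m, n = 4, 3
A = rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))

# Full SVD: U is m x m, Vh is n x n, s holds the singular values.
U, s, Vh = np.linalg.svd(A, full_matrices=True)

# Assemble the rectangular "diagonal" matrix S.
S = np.zeros((m, n), dtype=complex)
S[:len(s), :len(s)] = np.diag(s)

print(np.allclose(A, U @ S @ Vh))                 # A = U S V^H
print(np.allclose(U.conj().T @ U, np.eye(m)))     # U is unitary
print(np.allclose(Vh @ Vh.conj().T, np.eye(n)))   # V is unitary
```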

Subsection 2.3.3 Visualizing the SVD

It is helpful to think of the orthonormal bases that are the columns of \(U\) and \(V\) as “coordinate systems,” since they are pairwise “perpendicular” unit vectors, like the \(\vec{i},\,\vec{j},\,\vec{k}\) often used in describing the geometry of space. We would then call each basis vector an “axis”.

Now think of an \(m\times n\) matrix as a function from \(\complex{n}\) to \(\complex{m}\text{.}\) For an input vector \(\vect{w}\in\complex{n}\text{,}\) we have the output vector \(\vect{y}=A\vect{w}\in\complex{m}\text{.}\) If we write the output vector using the SVD decomposition of \(A\) as \(A\vect{w}=US\adjoint{V}\vect{w}\) we can consider the output as a three-step process that is more formally a composition of three linear transformations. Recall that unitary matrices preserve inner products, and thus preserve norms (length) and relative positions of two vectors (the “angle” between vectors), which is why unitary matrices are sometimes called “isometries” (Theorem UMPIP).

  1. \(\adjoint{V}\) is the inverse of \(V\) so it will take any basis vector \(\vect{x}_i\) to a column of the identity matrix \(\vect{e}_i\text{,}\) an element of the standard basis of \(\complex{n}\text{,}\) or an axis of the “usual” coordinate system. Any other vector, such as our \(\vect{w}\text{,}\) will have its length preserved, but its position will change, though its position relative to the axes is unchanged.
  2. \(S\) converts the “rotated” \(\vect{w}\) from \(\complex{n}\) to \(\complex{m}\text{.}\) But it does so in a very simple way. It simply scales each entry of the vector by a nonnegative amount, using a different singular value for each entry. If \(m\gt n\text{,}\) then the extra entries are simply new zeros. If \(m\lt n\text{,}\) then some entries get discarded.
  3. \(U\) will convert the standard basis vectors (the usual axes) to the new orthonormal basis given by the \(\vect{y}_i\text{.}\) The twice-transformed version of \(\vect{w}\) will have its length preserved, but its position will change, though its position relative to the axes is unchanged.

So, every matrix is a rotation, a stretch, and a rotation. That is a simple, but accurate, understanding of what the SVD tells us.
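The three-step reading above can be followed literally in code. A sketch, assuming the same sort of random example and the \(S\) matrix assembled as in the previous example.

```python
import numpy as np

rng = np.random.default_rng(5)
m, n = 4, 3
A = rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))
U, s, Vh = np.linalg.svd(A, full_matrices=True)
S = np.zeros((m, n), dtype=complex)
S[:len(s), :len(s)] = np.diag(s)

w = rng.standard_normal(n) + 1j * rng.standard_normal(n)

step1 = Vh @ w      # rotate: express w relative to the columns of V
step2 = S @ step1   # stretch: scale entries, pad or discard to land in C^m
step3 = U @ step2   # rotate: send the standard axes to the columns of U

print(np.allclose(step3, A @ w))                             # same as A w
print(np.isclose(np.linalg.norm(step1), np.linalg.norm(w)))  # rotation keeps length
```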

Here is another look at the same idea. Consider the columns of \(U\) and \(V\) again as the axes of new coordinate systems. Then their adjoints are their inverses and each takes the new axes to the standard unit vectors (columns of the identity matrix), the axes of the usual coordinate system. Consider an input vector \(\vect{w}\text{,}\) and its output \(\vect{y}=A\vect{w}\text{,}\) relative to these new bases and convert each to the standard coordinate system, \(\vect{w}^\prime=\adjoint{V}\vect{w}\) and \(\vect{y}^\prime=\adjoint{U}\vect{y}\text{.}\) Then

\begin{equation*} \vect{y}^\prime=\adjoint{U}\vect{y}=\adjoint{U}A\vect{w}=\adjoint{U}US\adjoint{V}\vect{w}=S\vect{w}^\prime \end{equation*}

In the “primed” spaces, the action of \(A\) is very simple. Or in other words, every linear transformation has a matrix representation that is basically diagonal, if only we pick the right bases for the domain and codomain.
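In code, the computation in the “primed” coordinate systems looks like the following; again a sketch reusing the assumptions of the earlier examples.

```python
import numpy as np

rng = np.random.default_rng(6)
m, n = 4, 3
A = rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))
U, s, Vh = np.linalg.svd(A, full_matrices=True)
S = np.zeros((m, n), dtype=complex)
S[:len(s), :len(s)] = np.diag(s)

w = rng.standard_normal(n) + 1j * rng.standard_normal(n)
y = A @ w

w_prime = Vh @ w           # w relative to the columns of V
y_prime = U.conj().T @ y   # y relative to the columns of U

# In the primed coordinates the action of A is just the diagonal-like S.
print(np.allclose(y_prime, S @ w_prime))   # True
```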

Subsection 2.3.4 Properties of the SVD

The appeal of the singular value decomposition is two-fold. First it is applicable to any matrix. That should be obvious enough. Second, components of the SVD provide a wealth of information about a matrix, and in the case of numerical matrices, they are well-behaved. In this subsection we collect various theorems about the SVD, and explore the consequences in a section about applications, Section (((section-applications-of-SVD))).

The SVD gives a decomposition of a matrix as a sum of rank one matrices, and the magnitude of the singular values tells us which of these rank one matrices is the most “important”.

As usual, let \(\vect{e}_i\) be column \(i\) of the identity matrix \(I_m\text{.}\) Define \(S_i\text{,}\) for \(1\leq i\leq r\) to be the \(m\times n\) matrix where every entry is zero, except column \(i\) is \(s_i\vect{e}_i\text{.}\) Then, by design, \(S=\sum_{i=1}^r\,S_i\text{.}\) We have,

\begin{align*} A &=US\adjoint{V}=U\,\sum_{i=1}^r\,S_i\,\adjoint{V} =\sum_{i=1}^rUS_i\adjoint{V}\\ &=\sum_{i=1}^r\,U\left\lbrack\zerovector|\dots|s_i\vect{e}_i|\dots|\zerovector\right\rbrack\adjoint{V} =\sum_{i=1}^r\,s_i\left\lbrack\zerovector|\dots|U\vect{e}_i|\dots|\zerovector\right\rbrack\adjoint{V}\\ &=\sum_{i=1}^r\,s_i\left\lbrack\zerovector|\dots|\vect{y}_i|\dots|\zerovector\right\rbrack\adjoint{V} =\sum_{i=1}^r\,s_i\vect{y}_i\adjoint{\vect{x}_i} \end{align*}

Be sure to recognize \(\vect{y}_i\adjoint{\vect{x}_i}\) as the outer product, an \(m\times n\) matrix of rank one (every row is a multiple of every other row, and similarly for columns). See Subsection (((subsection-SVD-image-compression))) for a good example of the utility of this result.
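Here is a sketch of this rank-one expansion, assuming NumPy; the loop shows the approximation error shrinking as terms are added, and the full sum of all \(r\) terms rebuilds \(A\text{.}\)

```python
import numpy as np

rng = np.random.default_rng(7)
m, n = 5, 3
A = rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))

U, s, Vh = np.linalg.svd(A, full_matrices=True)
r = np.sum(s > 1e-12)   # numerical rank

# Accumulate the rank-one outer products s_i * y_i * x_i^H.
approx = np.zeros((m, n), dtype=complex)
for i in range(r):
    approx += s[i] * np.outer(U[:, i], Vh[i, :])
    print(i + 1, np.linalg.norm(A - approx))   # error shrinks with each term

print(np.allclose(A, approx))   # True once all r terms are included
```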