Section POD Polar Decomposition

From A First Course in Linear Algebra
Version 2.10
© 2004.
Licensed under the GNU Free Documentation License.
http://linear.ups.edu/

This Section is a Draft, Subject to Changes
Needs Numerical Examples

The polar decomposition of a matrix writes any matrix as the product of a unitary matrix (Definition UM) and a positive semi-definite matrix (Definition PSM). It takes its name from a special way to write complex numbers. If you’ve had a basic course in complex analysis, the next paragraph will help explain the name. If the next paragraph makes no sense to you, there’s no harm in skipping it.

Any complex number $z\in\mathbb{C}$ can be written as $z = re^{i\theta}$, where $r$ is a positive number (computed as a square root of a function of the real and imaginary parts of $z$) and $\theta$ is an angle of rotation that converts $1$ to the complex number $e^{i\theta} = \cos(\theta) + i\sin(\theta)$. The polar form of a square matrix is a product of a positive semi-definite matrix that is a square root of a function of the matrix, together with a unitary matrix, which can be viewed as achieving a rotation (Theorem UMPIP).
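
As a quick numerical aside (a sketch, not part of the original text), Python’s standard cmath module computes exactly this polar form; cmath.polar returns the pair $(r,\theta)$, and the complex number $3+4i$ below is an arbitrary illustrative choice.

    import cmath

    z = 3 + 4j
    r, theta = cmath.polar(z)   # r = |z| = sqrt(3**2 + 4**2) = 5.0
    print(r, theta)             # 5.0 0.9272952180016122 (theta in radians)

    # Rebuild z from its polar form z = r * e^(i*theta)
    print(cmath.isclose(r * cmath.exp(1j * theta), z))   # True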

OK, enough preliminaries. We have all the tools in place to jump straight to our main theorem.

Theorem PDM
Polar Decomposition of a Matrix
Suppose that $A$ is a square matrix. Then there is a unitary matrix $U$ such that $A = \left(AA^{\ast}\right)^{1/2}U$.
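
Before the proof, here is a minimal numerical check of the claim (a sketch, assuming NumPy and SciPy are available; the matrix $A$ below is an arbitrary illustrative choice). SciPy’s scipy.linalg.polar with side='left' returns a unitary factor and a positive semi-definite factor arranged as in the theorem, and scipy.linalg.sqrtm computes the matrix square root.

    import numpy as np
    from scipy.linalg import polar, sqrtm

    A = np.array([[1.0, 2.0],
                  [3.0, 4.0]])

    # Left polar decomposition: A = P U with P positive semi-definite, U unitary
    U, P = polar(A, side='left')

    print(np.allclose(A, P @ U))                   # True
    print(np.allclose(U.conj().T @ U, np.eye(2)))  # True: U is unitary
    print(np.allclose(P, sqrtm(A @ A.conj().T)))   # True: P = (A A*)^(1/2)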

Proof   This theorem only claims the existence of a unitary matrix $U$ that does a certain job. We will manufacture $U$ and check that it meets the requirements.

Suppose $A$ has size $n$ and rank $r$. We begin by applying Theorem EEMAP to $A$. Let $B = \left\{x_1,\,x_2,\,x_3,\,\dots,\,x_n\right\}$ be the orthonormal basis of $\mathbb{C}^{n}$ composed of eigenvectors for $A^{\ast}A$, and let $C = \left\{y_1,\,y_2,\,y_3,\,\dots,\,y_n\right\}$ be the orthonormal basis of $\mathbb{C}^{n}$ composed of eigenvectors for $AA^{\ast}$. We have $Ax_i = \sqrt{\delta_i}\,y_i$, $1\leq i\leq r$, and $Ax_i = 0$, $r+1\leq i\leq n$, where $\delta_i$, $1\leq i\leq r$, are the nonzero eigenvalues of $A^{\ast}A$.
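
A hedged NumPy sketch of this setup (the matrix $A$ and all variable names are illustrative, not from the text): numpy.linalg.eigh returns orthonormal eigenvectors of the Hermitian matrix $A^{\ast}A$, and one way to realize the second basis promised by Theorem EEMAP is to set $y_i = Ax_i/\sqrt{\delta_i}$ for the nonzero eigenvalues.

    import numpy as np

    A = np.array([[1.0, 2.0],
                  [3.0, 4.0]])
    n = A.shape[0]

    # Orthonormal eigenvectors of A*A; eigh lists eigenvalues in ascending
    # order, so reverse to put the nonzero eigenvalues first
    delta, X = np.linalg.eigh(A.conj().T @ A)
    delta, X = delta[::-1], X[:, ::-1]

    r = int(np.sum(delta > 1e-12))   # numerical rank (here r = n = 2)

    # Realize the basis C: y_i = A x_i / sqrt(delta_i), 1 <= i <= r
    Y = A @ X[:, :r] / np.sqrt(delta[:r])

    print(np.allclose(Y.conj().T @ Y, np.eye(r)))          # y_i are orthonormal
    print(np.allclose(A @ A.conj().T @ Y, Y * delta[:r]))  # y_i are eigenvectors of AA*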

Define $T\colon\mathbb{C}^{n}\mapsto\mathbb{C}^{n}$ to be the unique linear transformation such that $T\left(x_i\right) = y_i$, $1\leq i\leq n$, as guaranteed by Theorem LTDB. Let $E$ be the basis of standard unit vectors for $\mathbb{C}^{n}$ (Definition SUV), and define $U$ to be the matrix representation (Definition MR) of $T$ with respect to $E$, more carefully $U = M_{E,E}^{T}$. This is the matrix we are after. Notice that

\begin{align*}
Ux_i &= M_{E,E}^{T}\,\rho_E\left(x_i\right) && \text{Definition VR}\\
&= \rho_E\left(T\left(x_i\right)\right) && \text{Theorem FTMR}\\
&= \rho_E\left(y_i\right) && \text{Theorem LTDB}\\
&= y_i && \text{Definition VR}
\end{align*}

Since $B$ and $C$ are orthonormal bases, and $C$ is the result of multiplying the vectors of $B$ by $U$, we conclude that $U$ is unitary by Theorem UMCOB. So once again, Theorem EEMAP is a big part of the setup for a decomposition.
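
Continuing the NumPy sketch from above (same session and illustrative names; since this $A$ is invertible, $r = n$ and $Y$ is square), the matrix carrying each $x_i$ to $y_i$ can be built directly as $U = YX^{\ast}$:

    # U sends each x_i (column of X) to y_i (column of Y)
    U = Y @ X.conj().T

    print(np.allclose(U @ X, Y))                   # True: C is B multiplied by U
    print(np.allclose(U.conj().T @ U, np.eye(n)))  # True: U is unitary (Theorem UMCOB)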

Let $x\in\mathbb{C}^{n}$ be any vector. Since $B$ is a basis of $\mathbb{C}^{n}$, there are scalars $a_1,\,a_2,\,a_3,\,\dots,\,a_n$ expressing $x$ as a linear combination of the vectors in $B$. Then

\begin{align*}
\left(AA^{\ast}\right)^{1/2}Ux
&= \left(AA^{\ast}\right)^{1/2}U\sum_{i=1}^{n}a_ix_i && \text{Definition B}\\
&= \sum_{i=1}^{n}\left(AA^{\ast}\right)^{1/2}Ua_ix_i && \text{Theorem MMDAA}\\
&= \sum_{i=1}^{n}a_i\left(AA^{\ast}\right)^{1/2}Ux_i && \text{Theorem MMSMM}\\
&= \sum_{i=1}^{n}a_i\left(AA^{\ast}\right)^{1/2}y_i\\
&= \sum_{i=1}^{r}a_i\left(AA^{\ast}\right)^{1/2}y_i + \sum_{i=r+1}^{n}a_i\left(AA^{\ast}\right)^{1/2}y_i && \text{Property AAC}\\
&= \sum_{i=1}^{r}a_i\sqrt{\delta_i}\,y_i + \sum_{i=r+1}^{n}a_i(0)y_i && \text{Theorem EESR}\\
&= \sum_{i=1}^{r}a_i\sqrt{\delta_i}\,y_i + \sum_{i=r+1}^{n}a_i\mathbf{0} && \text{Theorem ZSSM}\\
&= \sum_{i=1}^{r}a_iAx_i + \sum_{i=r+1}^{n}a_iAx_i && \text{Theorem EEMAP}\\
&= \sum_{i=1}^{n}a_iAx_i && \text{Property AAC}\\
&= \sum_{i=1}^{n}Aa_ix_i && \text{Theorem MMSMM}\\
&= A\sum_{i=1}^{n}a_ix_i && \text{Theorem MMDAA}\\
&= Ax
\end{align*}

So by Theorem EMMVP we have the matrix equality $\left(AA^{\ast}\right)^{1/2}U = A$.
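
Closing the loop on the running sketch (same session as above; scipy.linalg.sqrtm supplies the positive semi-definite square root), the equality of the theorem holds for the matrix $U$ constructed by the proof’s recipe:

    from scipy.linalg import sqrtm

    # The theorem's equality: (A A*)^(1/2) U = A
    print(np.allclose(sqrtm(A @ A.conj().T) @ U, A))   # True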