Section SR  Square Roots

From A First Course in Linear Algebra
Version 1.08
© 2004.
Licensed under the GNU Free Documentation License.
http://linear.ups.edu/

This Section is a Draft, Subject to Changes
Needs Numerical Examples

With all our results about Hermitian matrices, their eigenvalues and their diagonalizations, it will be a nearly trivial matter to now construct a “square root” of a positive semi-definite matrix. We will describe a square root of a matrix A as a matrix S such that A = S^2. In general, a matrix A might have many such square roots. But with a few results in hand we will be able to impose an extra condition on S that makes S unique. At that point we can define the square root of A formally.

Subsection SRM: Square Root of a Matrix

Theorem PSMSR
Positive Semi-Definite Matrices and Square Roots
Suppose A is a square matrix. There is a positive semi-definite matrix S such that A = S^2 if and only if A is positive semi-definite.

Proof   Let n denote the size of A.

(⇐) Suppose that A is positive semi-definite. Since A is Hermitian (Definition PSM) we know A is normal (Definition NRML) and so by Theorem OD there is a unitary matrix U and a diagonal matrix D, whose diagonal entries are the eigenvalues of A, such that D = U^*AU. The eigenvalues of A are all non-negative (Theorem EPSM), which allows us to define a diagonal matrix E whose diagonal entries are the non-negative square roots of the eigenvalues of A, in the same order as they appear in D. More precisely, define E to be the diagonal matrix with non-negative diagonal entries such that E^2 = D. Set S = UEU^*, and compute

S^2 = (UEU^*)(UEU^*)
    = U E I_n E U^*          Definition UM
    = U E E U^*              Theorem MMIM
    = U D U^*
    = U (U^*AU) U^*          Theorem OD
    = I_n A I_n              Definition UM
    = A                      Theorem MMIM

To see that S is positive semi-definite, we first verify that S is Hermitian.

S^* = (UEU^*)^*
    = (U^*)^* E^* U^*        Theorem MMAD
    = U E^* U^*              Theorem AA
    = U (Ē)^t U^*            Definition A
    = U E^t U^*              Theorem HMRE
    = U E U^*                Diagonal matrix
    = S

And finally, we want to check the use of S in an inner product. Notice that E is Hermitian since it is a diagonal matrix with real entries. Furthermore, as a diagonal matrix, the eigenvalues of E are precisely its diagonal entries, and since these were chosen to be non-negative, an application of Theorem EPSM tells us that E is positive semi-definite. Now, for any x ∈ ℂ^n,

⟨Sx, x⟩ = ⟨UEU^*x, x⟩
        = ⟨EU^*x, U^*x⟩      Theorem AIP
        ≥ 0                  Definition PSM

So, according to Definition PSM, S is positive semi-definite.

(⇒) Assume that A = S^2, with S positive semi-definite. Then S is Hermitian, and we check that A is Hermitian.

A^* = (SS)^*
    = S^* S^*                Theorem MMAD
    = SS                     Definition HM
    = A

Now for the use of A in an inner product. For any x ∈ ℂ^n,

⟨Ax, x⟩ = ⟨S^2x, x⟩
        = ⟨Sx, S^*x⟩         Theorem AIP
        = ⟨Sx, Sx⟩           Definition HM
        ≥ 0                  Theorem PIP

So by Definition PSM, A is positive semi-definite.
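The construction in the first half of the proof can be checked numerically. The following is a hedged sketch in Python with NumPy (a tool the text does not assume): it builds a positive semi-definite A, diagonalizes it, takes entrywise square roots of the eigenvalues to form E, and assembles S = UEU^*.

```python
import numpy as np

# Build a positive semi-definite matrix A = M*M from a random complex M.
rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
A = M.conj().T @ M

# Diagonalize the Hermitian matrix A: U is unitary, evals are real.
evals, U = np.linalg.eigh(A)
evals = np.clip(evals, 0, None)          # guard against tiny negative round-off

E = np.diag(np.sqrt(evals))              # E^2 = D, diagonal entries >= 0
S = U @ E @ U.conj().T                   # the candidate square root S = U E U^*

print(np.allclose(S @ S, A))             # S^2 = A
print(np.allclose(S, S.conj().T))        # S is Hermitian
print(np.all(np.linalg.eigvalsh(S) >= -1e-10))  # eigenvalues of S non-negative
```

All three checks report success, mirroring the three parts of the proof: S squares to A, S is Hermitian, and S is positive semi-definite.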

There is a very close relationship between the eigenvalues and eigenspaces of a positive semi-definite matrix and its positive semi-definite square root. The next theorem is interesting in its own right, but is also an important technical step in some other important results, such as the upcoming uniqueness of the square root (Theorem USR).

Theorem EESR
Eigenvalues and Eigenspaces of a Square Root
Suppose that A is a positive semi-definite matrix and S is a positive semi-definite matrix such that A = S^2. If λ_1, λ_2, λ_3, …, λ_p are the distinct eigenvalues of A, then the distinct eigenvalues of S are √λ_1, √λ_2, √λ_3, …, √λ_p, and ℰ_S(√λ_i) = ℰ_A(λ_i) for 1 ≤ i ≤ p.

Proof   Let x be an eigenvector of S for an eigenvalue ρ. Then, in the style of Theorem EPM,

Ax = S^2x = S(Sx) = S(ρx) = ρSx = ρ^2x

so ρ^2 is an eigenvalue of A and must equal some λ_i. Furthermore, because S is positive semi-definite, Theorem EPSM tells us that ρ ≥ 0. The impact for us here is that we cannot have two different eigenvalues of S whose squares equal the same eigenvalue of A, so we can pair each eigenvalue of S with a different eigenvalue of A, equal to its square. (A good exercise is to track through the rest of this proof in the situation where S is not assumed to be positive semi-definite and we do not have this condition on the eigenvalues. Where does the proof then break down?) Let ρ_i, 1 ≤ i ≤ q, denote the q distinct eigenvalues of S. The discussion above implies that we can order the eigenvalues of A and S so that λ_i = ρ_i^2 for 1 ≤ i ≤ q. Notice that at this point we know that q ≤ p, though we will be showing that q = p.

Additionally, the equation above tells us that every eigenvector of S for ρ_i is again an eigenvector of A for ρ_i^2. So for 1 ≤ i ≤ q, the relevant eigenspaces are related by

ℰ_S(√λ_i) = ℰ_S(ρ_i) ⊆ ℰ_A(ρ_i^2) = ℰ_A(λ_i)

So the eigenspaces of S are subsets of the eigenspaces of A, for the related eigenvalues. However, we will be showing that these sets are indeed equal to each other.

Both A and S are positive semi-definite, hence Hermitian and therefore normal. Theorem OD then tells us that each is diagonalizable (Definition DZM). Then Theorem DMFE says that the algebraic multiplicity and geometric multiplicity of each eigenvalue are equal. Then, if we let n denote the size of A,

n = Σ_{i=1}^{q} α_S(√λ_i)          Theorem NEM
  = Σ_{i=1}^{q} γ_S(√λ_i)          Theorem DMFE
  = Σ_{i=1}^{q} dim(ℰ_S(√λ_i))     Definition GME
  ≤ Σ_{i=1}^{q} dim(ℰ_A(λ_i))      Theorem PSSD
  ≤ Σ_{i=1}^{p} dim(ℰ_A(λ_i))      Definition D
  = Σ_{i=1}^{p} γ_A(λ_i)           Definition GME
  = Σ_{i=1}^{p} α_A(λ_i)           Theorem DMFE
  = n                              Theorem NEM

With equal values at the two ends of this chain of equalities and inequalities, we know that the two inequalities are forced to actually be equalities. In particular, the second inequality implies that p = q and the first, in conjunction with Theorem EDYES, implies that ℰ_S(√λ_i) = ℰ_A(λ_i) for 1 ≤ i ≤ p.
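Theorem EESR can also be illustrated numerically. The following Python/NumPy sketch (an addition, not part of the text) constructs the positive semi-definite square root of a real positive semi-definite A and confirms that its eigenvalues are the square roots of the eigenvalues of A.

```python
import numpy as np

# A real positive semi-definite matrix A = M^t M.
rng = np.random.default_rng(1)
M = rng.standard_normal((5, 5))
A = M.T @ M

# Its positive semi-definite square root, built as in Theorem PSMSR.
evals, U = np.linalg.eigh(A)
evals = np.clip(evals, 0, None)
S = U @ np.diag(np.sqrt(evals)) @ U.T

# Eigenvalues of S are the square roots of the eigenvalues of A
# (eigvalsh returns both lists in ascending order, and square root
# preserves that order, so the comparison lines up entrywise).
print(np.allclose(np.linalg.eigvalsh(S), np.sqrt(evals)))
```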

Notice that we defined the singular values of a matrix A as the square roots of the eigenvalues of A^*A (Definition SV). With Theorem EESR in hand we recognize the singular values of A as simply the eigenvalues of (A^*A)^{1/2}. Indeed, many authors take this as the definition of singular values, since it is equivalent to our definition. We have chosen not to wait for a discussion of square roots before making a definition of singular values, allowing us to present the singular value decomposition (Theorem SVD) all the sooner.
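This equivalence is easy to test numerically. The hedged Python/NumPy sketch below compares the singular values of a matrix A (from NumPy's SVD routine) against the eigenvalues of the positive semi-definite square root of A^*A, built by the construction of Theorem PSMSR.

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((4, 4))

# Singular values of A, as returned by the SVD (descending order).
sing = np.linalg.svd(A, compute_uv=False)

# Positive semi-definite square root of A^*A via its eigendecomposition.
evals, U = np.linalg.eigh(A.T @ A)
root = U @ np.diag(np.sqrt(np.clip(evals, 0, None))) @ U.T

# Eigenvalues of (A^*A)^{1/2} equal the singular values of A
# (sorted ascending so both lists line up).
print(np.allclose(np.sort(sing), np.linalg.eigvalsh(root)))
```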

In the first half of the proof of Theorem PSMSR we could have chosen the matrix E (which was the essential component of the desired matrix S) in a variety of ways. Any collection of diagonal entries of E could be replaced by their negatives and we would maintain the property that E^2 = D. However, if we decide to enforce the entries of E as non-negative quantities then E is positive semi-definite, and then S follows along as a positive semi-definite matrix. We now show that of all the possible square roots of a positive semi-definite matrix, only one is itself again positive semi-definite. In other words, the S of Theorem PSMSR is unique.
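The sign choices described above can be seen concretely. In this Python/NumPy sketch (an illustrative addition), negating one diagonal entry of E still yields a square root of A, but the resulting matrix acquires a negative eigenvalue and so fails to be positive semi-definite.

```python
import numpy as np

# A positive semi-definite (generically positive definite) matrix.
rng = np.random.default_rng(3)
M = rng.standard_normal((3, 3))
A = M.T @ M

evals, U = np.linalg.eigh(A)
e = np.sqrt(np.clip(evals, 0, None))

# Flip the sign of one diagonal entry of E.
e_flipped = e.copy()
e_flipped[0] = -e_flipped[0]

T = U @ np.diag(e_flipped) @ U.T

print(np.allclose(T @ T, A))                     # T is still a square root of A
print(np.all(np.linalg.eigvalsh(T) >= -1e-10))   # but T is no longer PSD
```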

Theorem USR
Unique Square Root
Suppose A is a positive semi-definite matrix. Then there is a unique positive semi-definite matrix S such that A = S^2.

Proof   Theorem PSMSR gives us the existence of at least one positive semi-definite matrix S such that A = S^2. As usual, we will assume that S_1 and S_2 are positive semi-definite matrices such that A = S_1^2 = S_2^2 (Technique U).

As A is diagonalizable, there is a basis of ℂ^n composed entirely of eigenvectors of A (Theorem DC), say B = {x_1, x_2, x_3, …, x_n}. Let δ_1, δ_2, δ_3, …, δ_n denote the associated eigenvalues. Theorem EESR allows us to conclude that ℰ_A(δ_i) = ℰ_{S_1}(√δ_i) = ℰ_{S_2}(√δ_i). So S_1x_i = √δ_i x_i = S_2x_i for 1 ≤ i ≤ n.

Choose any x ∈ ℂ^n. The spanning property of B allows us to conclude the existence of a set of scalars, a_1, a_2, a_3, …, a_n, yielding x as a linear combination of the vectors in B. So,

S_1x = S_1( Σ_{i=1}^{n} a_i x_i )
     = Σ_{i=1}^{n} a_i S_1 x_i
     = Σ_{i=1}^{n} a_i √δ_i x_i
     = Σ_{i=1}^{n} a_i S_2 x_i
     = S_2( Σ_{i=1}^{n} a_i x_i )
     = S_2 x

Since S1 and S2 have the same action on every vector, Theorem EMMVP yields the conclusion that S1 = S2.

With a criterion that distinguishes one square root from all the rest (positive semi-definiteness), we can now define the square root of a positive semi-definite matrix.

Definition SRM
Square Root of a Matrix
Suppose A is a positive semi-definite matrix and S is the positive semi-definite matrix such that S^2 = SS = A. Then S is the square root of A and we write S = A^{1/2}.

(This definition contains Notation SRM.)
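As a final numerical illustration, Definition SRM can be packaged as a small function. The helper psd_sqrt below is hypothetical (it is not part of the text), and is only a minimal Python/NumPy sketch of computing A^{1/2} by the construction of Theorem PSMSR; Theorem USR guarantees it returns the unique positive semi-definite square root.

```python
import numpy as np

def psd_sqrt(A):
    """Return the unique positive semi-definite square root A^{1/2}.

    Assumes A is Hermitian positive semi-definite; negative round-off
    in the computed eigenvalues is clipped to zero.
    """
    evals, U = np.linalg.eigh(A)
    E = np.diag(np.sqrt(np.clip(evals, 0, None)))
    return U @ E @ U.conj().T

# A small positive semi-definite example: eigenvalues 1 and 3.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
S = psd_sqrt(A)

print(np.allclose(S @ S, A))                     # S^2 = A
print(np.all(np.linalg.eigvalsh(S) >= -1e-10))   # S is positive semi-definite
```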