
Section PEE Properties of Eigenvalues and Eigenvectors

The previous section introduced eigenvalues and eigenvectors, and concentrated on their existence and determination. This section will be more about theorems, and the various properties eigenvalues and eigenvectors enjoy. Like a good 4×100 meter relay, we will lead off with one of our better theorems and save the very best for the anchor leg.

Subsection BPE Basic Properties of Eigenvalues

We will establish by induction (Proof Technique I) that any set of \(k\) eigenvectors of \(A\) with distinct eigenvalues \(\scalarlist{\lambda}{k}\) is a linearly independent set. Suppose \(A\) has size \(n\text{.}\)

Base Case.

When \(k=1\text{,}\) \(\set{\vect{x}_1}\) is a set with a single nonzero vector and thus is linearly independent.

Induction Step.

Begin with a relation of linear dependence on the set \(\set{\vectorlist{x}{k}}\)

\begin{equation*} \lincombo{a}{x}{k} = \zerovector\text{.} \end{equation*}

Then

\begin{align*} \zerovector &= \left(A-\lambda_k I_n\right)\zerovector&&\knowl{./knowl/theorem-MMZM.html}{\text{Theorem MMZM}}\\ &= \left(A-\lambda_k I_n\right)\left(\lincombo{a}{x}{k}\right)&&\knowl{./knowl/definition-RLD.html}{\text{Definition RLD}}\\ &=\left(A-\lambda_k I_n\right)a_1\vect{x}_1+ \cdots+ \left(A-\lambda_k I_n\right)a_k\vect{x}_k&& \knowl{./knowl/theorem-MMDAA.html}{\text{Theorem MMDAA}}\\ &=a_1\left(A-\lambda_k I_n\right)\vect{x}_1+ \cdots+ a_k\left(A-\lambda_k I_n\right)\vect{x}_k&& \knowl{./knowl/theorem-MMSMM.html}{\text{Theorem MMSMM}}\\ &=a_1\left(A\vect{x}_1-\lambda_k I_n\vect{x}_1\right)+ \cdots+ a_k\left(A\vect{x}_k-\lambda_k I_n\vect{x}_k\right)&& \knowl{./knowl/theorem-MMDAA.html}{\text{Theorem MMDAA}}\\ &=a_1\left(A\vect{x}_1-\lambda_k\vect{x}_1\right)+ \cdots+ a_k\left(A\vect{x}_k-\lambda_k\vect{x}_k\right)&& \knowl{./knowl/theorem-MMIM.html}{\text{Theorem MMIM}}\\ &=a_1\left(\lambda_1\vect{x}_1-\lambda_k\vect{x}_1\right)+ \cdots+ a_k\left(\lambda_k\vect{x}_k-\lambda_k\vect{x}_k\right)&& \knowl{./knowl/definition-EEM.html}{\text{Definition EEM}}\\ &=a_1\left(\lambda_1-\lambda_k\right)\vect{x}_1+ \cdots+ a_k\left(\lambda_k-\lambda_k\right)\vect{x}_k&& \knowl{./knowl/property-DSA.html}{\text{Property DSA}}\\ &=a_1\left(\lambda_1-\lambda_k\right)\vect{x}_1+ \cdots+ a_{k-1}\left(\lambda_{k-1}-\lambda_k\right)\vect{x}_{k-1}+ a_k\left(0\right)\vect{x}_k&& \knowl{./knowl/property-AICN.html}{\text{Property AICN}}\\ &=a_1\left(\lambda_1-\lambda_k\right)\vect{x}_1+ \cdots+ a_{k-1}\left(\lambda_{k-1}-\lambda_k\right)\vect{x}_{k-1}+ \zerovector&& \knowl{./knowl/theorem-ZSSM.html}{\text{Theorem ZSSM}}\\ &=a_1\left(\lambda_1-\lambda_k\right)\vect{x}_1+ \cdots+ a_{k-1}\left(\lambda_{k-1}-\lambda_k\right)\vect{x}_{k-1}&& \knowl{./knowl/property-Z.html}{\text{Property Z}} \end{align*}

This equation is a relation of linear dependence on the set \(\set{\vectorlist{x}{k-1}}\text{,}\) which is a linearly independent set by the induction hypothesis. So the scalars must all be zero by Definition LI. That is, \(a_i\left(\lambda_i-\lambda_k\right)=0\) for \(1\leq i\leq k-1\text{.}\) However, we have the hypothesis that the eigenvalues are distinct, so \(\lambda_i-\lambda_k\neq 0\) for \(1\leq i\leq k-1\text{.}\) So Theorem ZPZF implies \(a_i=0\) for \(1\leq i\leq k-1\text{.}\)

This reduces the original relation of linear dependence on \(\set{\vectorlist{x}{k}}\) to the simpler equation \(a_k\vect{x}_k=\zerovector\text{.}\) By Theorem SMEZV we conclude that \(a_k=0\) or \(\vect{x}_k=\zerovector\text{.}\) Eigenvectors are never the zero vector (Definition EEM), so \(a_k=0\text{.}\) Now all of the scalars \(a_i\text{,}\) \(1\leq i\leq k\) are zero, and so the only relation of linear dependence on the set \(\set{\vectorlist{x}{k}}\) is trivial. So by Definition LI, the set \(\set{\vectorlist{x}{k}}\) is linearly independent.
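As a rough numerical check of this theorem, the sketch below (Python with NumPy, using a small hypothetical matrix chosen to have distinct eigenvalues) confirms that the collected eigenvectors form a linearly independent set.

import numpy as np

# A hypothetical 3x3 matrix with three distinct eigenvalues: 2, 3, 5.
A = np.array([[2.0, 1.0, 0.0],
              [0.0, 3.0, 1.0],
              [0.0, 0.0, 5.0]])

eigenvalues, eigenvectors = np.linalg.eig(A)   # columns of eigenvectors are eigenvectors of A
print(eigenvalues)                             # 2, 3, 5 -- all distinct

# Distinct eigenvalues mean the eigenvectors are linearly independent,
# so the matrix with those eigenvectors as columns has full rank.
print(np.linalg.matrix_rank(eigenvectors))     # 3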

The next theorem gives us a convenient upper limit on the number of eigenvalues.

We argue by contradiction (Proof Technique CD). Assume that \(A\) has \(n+1\) or more distinct eigenvalues. Then there is a set of \(n+1\) or more eigenvectors of \(A\text{,}\) with distinct eigenvalues. This is a set of \(n+1\) or more vectors from \(\complex{n}\) that will be linearly independent by Theorem EDELI. But this contradicts Theorem MVSLD, so our assumption is false, and there are no more than \(n\) distinct eigenvalues.

Notice that once we have found \(n\) distinct eigenvalues for a matrix of size \(n\text{,}\) then we know there are no more eigenvalues. Example ESM4 illustrates this situation, and the upcoming Theorem DED also considers it.

There is a simple connection between the eigenvalues of a matrix and whether or not the matrix is nonsingular.

We have the following equivalences:

\begin{align*} A\text{ is singular} &\iff A-0I_n\text{ is singular}&&\knowl{./knowl/property-ZM.html}{\text{Property ZM}}\\ &\iff 0\text{ is an eigenvalue of }A&&\knowl{./knowl/theorem-ESM.html}{\text{Theorem ESM}}\text{.} \end{align*}

With an equivalence about singular matrices we can update our list of equivalences about nonsingular matrices.

The equivalence of the first and last statements is Theorem SMZE, reformulated by negating each statement in the equivalence. So we are able to improve on Theorem NME6 with this addition.

Zero eigenvalues are another marker of singular matrices. We illustrate with two matrices, the first nonsingular, the second not.
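A rough numerical sketch of this connection (Python with NumPy, two small hypothetical matrices) follows.

import numpy as np

# Hypothetical illustration: a matrix is singular exactly when 0 is an eigenvalue.
nonsingular = np.array([[1.0, 2.0],
                        [3.0, 4.0]])      # determinant -2, so nonsingular
singular    = np.array([[1.0, 2.0],
                        [2.0, 4.0]])      # second row is twice the first, so singular

print(np.linalg.eigvals(nonsingular))     # no zero eigenvalue
print(np.linalg.eigvals(singular))        # one eigenvalue is (numerically) zero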

Certain changes to a matrix change its eigenvalues in a predictable way.

Let \(\vect{x}\neq\zerovector\) be one eigenvector of \(A\) for \(\lambda\text{.}\) Then

\begin{align*} \left(\alpha A\right)\vect{x}&=\alpha\left(A\vect{x}\right)&& \knowl{./knowl/theorem-MMSMM.html}{\text{Theorem MMSMM}}\\ &=\alpha\left(\lambda\vect{x}\right)&& \knowl{./knowl/definition-EEM.html}{\text{Definition EEM}}\\ &=\left(\alpha\lambda\right)\vect{x}&& \knowl{./knowl/property-SMAC.html}{\text{Property SMAC}}\text{.} \end{align*}

So \(\vect{x}\neq\zerovector\) is an eigenvector of \(\alpha A\) for the eigenvalue \(\alpha\lambda\text{.}\)
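A quick numerical spot-check of this result, sketched in Python with NumPy using a hypothetical matrix and scalar:

import numpy as np

# If A x = lambda x, then (alpha A) x = (alpha lambda) x.
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])                 # hypothetical matrix with eigenvalues 5 and 2
lam, X = np.linalg.eig(A)
x = X[:, 0]                                # an eigenvector of A for lam[0]
alpha = 7.0

print(np.allclose((alpha * A) @ x, (alpha * lam[0]) * x))   # True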

Unfortunately, there are not parallel theorems about the sum or product of arbitrary matrices. But we can prove a similar result for powers of a matrix.

Let \(\vect{x}\neq\zerovector\) be one eigenvector of \(A\) for \(\lambda\text{.}\) Then we proceed by induction on \(s\) (Proof Technique I). First, for \(s=0\text{,}\) employing Theorem MMIM and Property OC to establish the base case,

\begin{gather*} A^s\vect{x}=A^0\vect{x}=I_n\vect{x}=\vect{x}=1\vect{x}=\lambda^0\vect{x}=\lambda^s\vect{x}\text{.} \end{gather*}

So \(\lambda^s\) is an eigenvalue of \(A^s\) in this special case. For the induction step, we assume the theorem is true for \(s\text{,}\) and find

\begin{align*} A^{s+1}\vect{x}&=A^sA\vect{x}\\ &=A^s\left(\lambda\vect{x}\right)&& \knowl{./knowl/definition-EEM.html}{\text{Definition EEM}}\\ &=\lambda\left(A^s\vect{x}\right)&& \knowl{./knowl/theorem-MMSMM.html}{\text{Theorem MMSMM}}\\ &=\lambda\left(\lambda^s\vect{x}\right)&&\text{Induction Hypothesis}\\ &=\left(\lambda\lambda^s\right)\vect{x}&& \knowl{./knowl/property-SMAC.html}{\text{Property SMAC}}\\ &=\lambda^{s+1}\vect{x}\text{.} \end{align*}

So \(\vect{x}\neq\zerovector\) is an eigenvector of \(A^{s+1}\) for \(\lambda^{s+1}\text{,}\) and induction tells us the theorem is true for all \(s\geq 0\text{.}\)
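A similar spot-check for powers, sketched with another hypothetical matrix:

import numpy as np

# If A x = lambda x, then A^s x = lambda^s x for s >= 0.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])                 # hypothetical matrix with eigenvalues 3 and 1
lam, X = np.linalg.eig(A)
x = X[:, 0]
s = 5

print(np.allclose(np.linalg.matrix_power(A, s) @ x, (lam[0] ** s) * x))   # True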

While we cannot prove that the sum of two arbitrary matrices behaves in any reasonable way with regard to eigenvalues, we can work with the sum of dissimilar powers of the same matrix. We have already seen that eigenvalues arise as roots of polynomials (Theorem EMHE). A theme of this chapter is relationships between eigenvalues and polynomials, and our next theorem is the next component of this theme.

Let \(\vect{x}\neq\zerovector\) be one eigenvector of \(A\) for \(\lambda\text{,}\) and write \(q(x)=a_0+a_1x+a_2x^2+\cdots+a_mx^m\text{.}\) Then

\begin{align*} q(A)\vect{x}&=\left(a_0A^0+a_1A^1+a_2A^2+\cdots+a_mA^m\right)\vect{x}\\ &=(a_0A^0)\vect{x}+(a_1A^1)\vect{x}+(a_2A^2)\vect{x}+\cdots+(a_mA^m)\vect{x}&& \knowl{./knowl/theorem-MMDAA.html}{\text{Theorem MMDAA}}\\ &=a_0(A^0\vect{x})+a_1(A^1\vect{x})+a_2(A^2\vect{x})+\cdots+a_m(A^m\vect{x})&& \knowl{./knowl/theorem-MMSMM.html}{\text{Theorem MMSMM}}\\ &=a_0(\lambda^0\vect{x})+a_1(\lambda^1\vect{x})+a_2(\lambda^2\vect{x})+\cdots+a_m(\lambda^m\vect{x})&& \knowl{./knowl/theorem-EOMP.html}{\text{Theorem EOMP}}\\ &=(a_0\lambda^0)\vect{x}+(a_1\lambda^1)\vect{x}+(a_2\lambda^2)\vect{x}+\cdots+(a_m\lambda^m)\vect{x}&& \knowl{./knowl/property-SMAC.html}{\text{Property SMAC}}\\ &=\left(a_0\lambda^0+a_1\lambda^1+a_2\lambda^2+\cdots+a_m\lambda^m\right)\vect{x}&& \knowl{./knowl/property-DSAC.html}{\text{Property DSAC}}\\ &=q(\lambda)\vect{x}\text{.} \end{align*}

So \(\vect{x}\neq\zerovector\) is an eigenvector of \(q(A)\) for the eigenvalue \(q(\lambda)\text{.}\)
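A numerical spot-check of this result, sketched with a hypothetical matrix and the hypothetical polynomial \(q(x)=3-2x+x^2\text{:}\)

import numpy as np

# If A x = lambda x, then q(A) x = q(lambda) x for q(x) = 3 - 2x + x^2.
A = np.array([[1.0, 2.0],
              [0.0, 3.0]])                 # hypothetical matrix with eigenvalues 1 and 3
lam, X = np.linalg.eig(A)
x = X[:, 1]

qA = 3 * np.eye(2) - 2 * A + np.linalg.matrix_power(A, 2)
q_lam = 3 - 2 * lam[1] + lam[1] ** 2

print(np.allclose(qA @ x, q_lam * x))      # True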

Inverses and transposes also behave predictably with regard to their eigenvalues.

Notice that since \(A\) is assumed nonsingular, \(\inverse{A}\) exists by Theorem NI, but more importantly, \(\lambda^{-1}=1/\lambda\) does not involve division by zero since Theorem SMZE prohibits this possibility.

Let \(\vect{x}\neq\zerovector\) be one eigenvector of \(A\) for \(\lambda\text{.}\) Suppose \(A\) has size \(n\text{.}\) Then

\begin{align*} \inverse{A}\vect{x}&=\inverse{A}(1\vect{x})&& \knowl{./knowl/property-OC.html}{\text{Property OC}}\\ &=\inverse{A}(\frac{1}{\lambda}\lambda\vect{x})&& \knowl{./knowl/property-MICN.html}{\text{Property MICN}}\\ &=\frac{1}{\lambda}\inverse{A}(\lambda\vect{x})&& \knowl{./knowl/theorem-MMSMM.html}{\text{Theorem MMSMM}}\\ &=\frac{1}{\lambda}\inverse{A}(A\vect{x})&& \knowl{./knowl/definition-EEM.html}{\text{Definition EEM}}\\ &=\frac{1}{\lambda}(\inverse{A}A)\vect{x}&& \knowl{./knowl/theorem-MMA.html}{\text{Theorem MMA}}\\ &=\frac{1}{\lambda}I_n\vect{x}&& \knowl{./knowl/definition-MI.html}{\text{Definition MI}}\\ &=\frac{1}{\lambda}\vect{x}&& \knowl{./knowl/theorem-MMIM.html}{\text{Theorem MMIM}}\text{.} \end{align*}

So \(\vect{x}\neq\zerovector\) is an eigenvector of \(\inverse{A}\) for the eigenvalue \(\frac{1}{\lambda}\text{.}\)
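A numerical spot-check of this result, sketched with a hypothetical nonsingular matrix:

import numpy as np

# If A is nonsingular and A x = lambda x, then A^{-1} x = (1/lambda) x.
A = np.array([[5.0, 2.0],
              [1.0, 4.0]])                 # hypothetical matrix with eigenvalues 6 and 3
lam, X = np.linalg.eig(A)
x = X[:, 0]

print(np.allclose(np.linalg.inv(A) @ x, (1.0 / lam[0]) * x))   # True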

Eigenvalues of the transpose of a matrix are interesting, but we will not be able to investigate them until the end of the course. See Exercise PEE.M10 for a preview and more discussion.

If a matrix has only real entries, then eigenvalues will arise as the roots of a polynomial with real coefficients. Complex numbers could result as roots of this polynomial, but they are roots of quadratic factors with real coefficients, and as such, come in conjugate pairs. The next theorem proves this, and a bit more, without ever mentioning a polynomial.

We have

\begin{align*} A\conjugate{\vect{x}}&=\conjugate{A}\conjugate{\vect{x}}&& A\text{ has real entries}\\ &=\conjugate{A\vect{x}}&& \knowl{./knowl/theorem-MMCC.html}{\text{Theorem MMCC}}\\ &=\conjugate{\lambda\vect{x}}&& \knowl{./knowl/definition-EEM.html}{\text{Definition EEM}}\\ &=\conjugate{\lambda}\conjugate{\vect{x}}&& \knowl{./knowl/theorem-CRSM.html}{\text{Theorem CRSM}}\text{.} \end{align*}

So \(\conjugate{\vect{x}}\) is an eigenvector of \(A\) for the eigenvalue \(\conjugate{\lambda}\text{.}\)

This phenomenon is amply illustrated in Example ESM4, where the four complex eigenvalues come in two pairs, and the two basis vectors of the eigenspaces are complex conjugates of each other. Theorem ERMCP can be a time-saver for computing eigenvalues and eigenvectors of real matrices with complex eigenvalues, since the conjugate eigenvalue and eigenspace can be inferred from the theorem rather than computed.
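A small numerical sketch of the conjugate-pair phenomenon, using a hypothetical real matrix with non-real eigenvalues:

import numpy as np

# A real matrix (rotation by 90 degrees) whose eigenvalues are not real.
A = np.array([[0.0, -1.0],
              [1.0,  0.0]])
lam, X = np.linalg.eig(A)
print(lam)                                 # i and -i, a conjugate pair

# The conjugate of an eigenvector is an eigenvector for the conjugate eigenvalue.
x = X[:, 0]
print(np.allclose(A @ np.conj(x), np.conj(lam[0]) * np.conj(x)))   # True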

Subsection EHM Eigenvalues of Hermitian Matrices

Recall that a matrix is Hermitian (or self-adjoint) if \(A=\adjoint{A}\) (Definition HM). In the case where \(A\) is a matrix whose entries are all real numbers, being Hermitian is identical to being symmetric (Definition SYM). Keep this in mind as you read the next two theorems. Their hypotheses could be changed to “suppose \(A\) is a real symmetric matrix.”

Let \(\vect{x}\neq\zerovector\) be one eigenvector of \(A\) for the eigenvalue \(\lambda\text{.}\) Then

\begin{align*} \left(\lambda - \conjugate{\lambda}\right)\innerproduct{\vect{x}}{\vect{x}} &=\lambda\innerproduct{\vect{x}}{\vect{x}} - \conjugate{\lambda}\innerproduct{\vect{x}}{\vect{x}}&& \knowl{./knowl/property-DCN.html}{\text{Property DCN}}\\ &=\innerproduct{\vect{x}}{\lambda\vect{x}} - \innerproduct{\lambda\vect{x}}{\vect{x}}&& \knowl{./knowl/theorem-IPSM.html}{\text{Theorem IPSM}}\\ &=\innerproduct{\vect{x}}{A\vect{x}} - \innerproduct{A\vect{x}}{\vect{x}}&& \knowl{./knowl/definition-EEM.html}{\text{Definition EEM}}\\ &=\innerproduct{A\vect{x}}{\vect{x}} - \innerproduct{A\vect{x}}{\vect{x}}&& \knowl{./knowl/theorem-HMIP.html}{\text{Theorem HMIP}}\\ &=0\text{.} \end{align*}

Since \(\vect{x}\neq\zerovector\text{,}\) by Theorem PIP we know \(\innerproduct{\vect{x}}{\vect{x}}\neq 0\text{.}\) Then by Theorem ZPZF, \(\lambda - \conjugate{\lambda}=0\text{,}\) and so \(\lambda = \conjugate{\lambda}\text{.}\) If a complex number is equal to its conjugate, then it has an imaginary part equal to zero, and therefore is a real number.

Notice the key step of this proof is the ability to pitch a Hermitian matrix from one side of the inner product to the other.

In many physical problems, a matrix of interest will be real and symmetric, or Hermitian. Then if the eigenvalues are to represent physical quantities of interest, Theorem HMRE guarantees that these values will not be complex numbers.
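As a numerical sketch of Theorem HMRE, consider a small hypothetical Hermitian matrix with complex entries:

import numpy as np

A = np.array([[2.0,      1.0 + 1j],
              [1.0 - 1j, 3.0     ]])       # hypothetical Hermitian matrix
print(np.allclose(A, A.conj().T))          # True: A equals its adjoint

lam = np.linalg.eigvals(A)                 # general (complex) eigenvalue routine
print(lam)                                 # eigenvalues 4 and 1
print(np.allclose(lam.imag, 0))            # True: imaginary parts are (numerically) zero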

The eigenvectors of a Hermitian matrix also enjoy a pleasing property that we will exploit later.

Let \(\vect{x}\) be an eigenvector of \(A\) for \(\lambda\) and let \(\vect{y}\) be an eigenvector of \(A\) for a different eigenvalue \(\rho\text{.}\) So we have \(\lambda-\rho\neq 0\text{.}\) Then

\begin{align*} \left(\lambda-\rho\right)\innerproduct{\vect{x}}{\vect{y}} &=\lambda\innerproduct{\vect{x}}{\vect{y}}-\rho\innerproduct{\vect{x}}{\vect{y}}&& \knowl{./knowl/property-DCN.html}{\text{Property DCN}}\\ &=\innerproduct{\conjugate{\lambda}\vect{x}}{\vect{y}}-\innerproduct{\vect{x}}{\rho\vect{y}}&& \knowl{./knowl/theorem-IPSM.html}{\text{Theorem IPSM}}\\ &=\innerproduct{\lambda\vect{x}}{\vect{y}}-\innerproduct{\vect{x}}{\rho\vect{y}}&& \knowl{./knowl/theorem-HMRE.html}{\text{Theorem HMRE}}\\ &=\innerproduct{A\vect{x}}{\vect{y}}-\innerproduct{\vect{x}}{A\vect{y}}&& \knowl{./knowl/definition-EEM.html}{\text{Definition EEM}}\\ &=\innerproduct{A\vect{x}}{\vect{y}}-\innerproduct{A\vect{x}}{\vect{y}}&& \knowl{./knowl/theorem-HMIP.html}{\text{Theorem HMIP}}\\ &=0\text{.} \end{align*}

Because \(\lambda\) and \(\rho\) are presumed to be different, \(\lambda-\rho\neq 0\text{,}\) and Theorem ZPZF implies that \(\innerproduct{\vect{x}}{\vect{y}}=0\text{.}\) In other words, \(\vect{x}\) and \(\vect{y}\) are orthogonal vectors according to Definition OV.

Notice again how the key step in this proof is the fundamental property of a Hermitian matrix (Theorem HMIP): the ability to swap \(A\) across the two arguments of the inner product. Notice too that we can always apply the Gram-Schmidt procedure (Theorem GSP) to any basis of any eigenspace, scaling the resulting orthogonal vectors by their norms to arrive at an orthonormal basis of the eigenspace. For a Hermitian matrix, pairs of eigenvectors from different eigenspaces are also orthogonal. If we dumped all these basis vectors into one big set it would be an orthonormal set. We will build on these results and continue to see some more interesting properties in Section OD.
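A numerical sketch of this orthogonality, using the same style of hypothetical Hermitian matrix:

import numpy as np

A = np.array([[2.0,      1.0 + 1j],
              [1.0 - 1j, 3.0     ]])       # hypothetical Hermitian matrix with eigenvalues 4 and 1
lam, X = np.linalg.eigh(A)                 # eigh is tailored to Hermitian matrices
x, y = X[:, 0], X[:, 1]                    # eigenvectors for the two distinct eigenvalues

print(np.isclose(np.vdot(x, y), 0))        # True: the inner product <x, y> is zero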

Reading Questions PEE Reading Questions

1.

How can you identify a nonsingular matrix just by looking at its eigenvalues?

2.

How many different eigenvalues may a square matrix of size \(n\) have?

3.

What is amazing about the eigenvalues of a Hermitian matrix and why is it amazing?

Exercises PEE Exercises

M10.

Grab several square matrices whose eigenvalues you know (previous examples or exercises are fine), form the transpose \(\transpose{A}\text{,}\) and compute its eigenvalues. After a few examples, formulate a conjecture.

Solution

You should very quickly observe that \(A\) and \(\transpose{A}\) have identical eigenvalues. However, their eigenvectors are unpredictably different. So our techniques in this section will not lead to a viable proof. Similarity, a topic in this chapter (Section SD), and specifically Theorem PSMS, provide an avenue for a proof but require some advanced topics that we cannot address. Finally, in our last chapter, the determinant (Chapter D) provides a proof (Theorem ETM).

M20.

This exercise will show we can use a polynomial to convert one matrix into another, with predictable changes in its eigenvalues. In Example ESMS4 the \(4\times 4\) symmetric matrix

\begin{equation*} C= \begin{bmatrix} 1 & 0 & 1 & 1\\ 0 & 1 & 1 & 1\\ 1 & 1 & 1 & 0\\ 1 & 1 & 0 & 1 \end{bmatrix} \end{equation*}

is shown to have the three eigenvalues \(\lambda=3,\,1,\,-1\text{.}\) Suppose we wanted a \(4\times 4\) matrix that has the three eigenvalues \(\lambda=4,\,0,\,-2\text{.}\) We can employ Theorem EPM by finding a polynomial that converts \(3\) to \(4\text{,}\) \(1\) to \(0\text{,}\) and \(-1\) to \(-2\text{.}\) Such a polynomial is called an interpolating polynomial, and in this example we can use

\begin{equation*} r(x)=\frac{1}{4}\left(x^2+4x-5\right)=\frac{1}{4}x^2+x-\frac{5}{4}\text{.} \end{equation*}
(a)

Verify that the polynomial \(r(x)\) converts the eigenvalues as advertised.

Solution

We will not discuss how to concoct the interpolating polynomial, \(r(x)\text{,}\) but a text on numerical analysis should provide the details. For now, it should be routine to verify that \(r(3)=4\text{,}\) \(r(1)=0\) and \(r(-1)=-2\text{.}\)

(b)

In the style of Example PM, compute the matrix \(r(C)\text{.}\)

Solution

Now compute

\begin{align*} r(C)&=\frac{1}{4}C^2+C-\frac{5}{4}I_4\\ &= \frac{1}{4} \begin{bmatrix} 3 & 2 & 2 & 2\\ 2 & 3 & 2 & 2\\ 2 & 2 & 3 & 2\\ 2 & 2 & 2 & 3 \end{bmatrix} + \begin{bmatrix} 1 & 0 & 1 & 1\\ 0 & 1 & 1 & 1\\ 1 & 1 & 1 & 0\\ 1 & 1 & 0 & 1 \end{bmatrix} -\frac{5}{4} \begin{bmatrix} 1 & 0 & 0 & 0\\ 0 & 1 & 0 & 0\\ 0 & 0 & 1 & 0\\ 0 & 0 & 0 & 1 \end{bmatrix} = \frac{1}{2} \begin{bmatrix} 1 & 1 & 3 & 3\\ 1 & 1 & 3 & 3\\ 3 & 3 & 1 & 1\\ 3 & 3 & 1 & 1 \end{bmatrix}\text{.} \end{align*}
(c)

Compute the eigenvalues of \(r(C)\) directly and verify that they are as expected.

Solution

The eigenvalues of \(r(C)\) are \(\lambda=4,\,0,\,-2\text{,}\) as desired. Notice that the multiplicities are the same, and the eigenspaces of \(C\) and \(r(C)\) are identical.
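A numerical check of this computation, sketched in Python with NumPy using the matrix \(C\) from the exercise:

import numpy as np

C = np.array([[1.0, 0.0, 1.0, 1.0],
              [0.0, 1.0, 1.0, 1.0],
              [1.0, 1.0, 1.0, 0.0],
              [1.0, 1.0, 0.0, 1.0]])
rC = 0.25 * C @ C + C - 1.25 * np.eye(4)    # r(C) = (1/4)C^2 + C - (5/4)I_4

print(np.round(np.linalg.eigvalsh(C), 6))   # -1, 1, 1, 3
print(np.round(np.linalg.eigvalsh(rC), 6))  # -2, 0, 0, 4, matching r(-1), r(1), r(1), r(3)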

T20.

Suppose that \(A\) is a square matrix. Prove that a single vector may not be an eigenvector of \(A\) for two different eigenvalues.

Solution

Suppose that the vector \(\vect{x}\neq\zerovector\) is an eigenvector of \(A\) for the two eigenvalues \(\lambda\) and \(\rho\text{,}\) where \(\lambda\neq\rho\text{.}\) Then \(\lambda-\rho\neq 0\text{,}\) and we also have

\begin{align*} \zerovector &=A\vect{x}-A\vect{x}&& \knowl{./knowl/property-AIC.html}{\text{Property AIC}}\\ &=\lambda\vect{x}-\rho\vect{x}&& \knowl{./knowl/definition-EEM.html}{\text{Definition EEM}}\\ &=(\lambda-\rho)\vect{x}&& \knowl{./knowl/property-DSAC.html}{\text{Property DSAC}}\text{.} \end{align*}

By Theorem SMEZV, either \(\lambda-\rho=0\) or \(\vect{x}=\zerovector\text{,}\) which are both contradictions.

T22.

Suppose that \(U\) is a unitary matrix with eigenvalue \(\lambda\text{.}\) Prove that \(\lambda\) has modulus 1, i.e. \(\modulus{\lambda}=1\text{.}\) This says that all of the eigenvalues of a unitary matrix lie on the unit circle of the complex plane.