Section EE  Eigenvalues and Eigenvectors

From A First Course in Linear Algebra
Version 2.23
© 2004.
Licensed under the GNU Free Documentation License.
http://linear.ups.edu/

We start with the principal definition for this chapter.

Subsection EEM: Eigenvalues and Eigenvectors of a Matrix

Definition EEM
Eigenvalues and Eigenvectors of a Matrix
Suppose that A is a square matrix of size n, x\mathrel{≠}0 is a vector in {ℂ}^{n}, and λ is a scalar in ℂ. Then we say x is an eigenvector of A with eigenvalue λ if

Ax = λx

Before going any further, perhaps we should convince you that such things ever happen at all. Understand the next example, but do not concern yourself with where the pieces come from. We will have methods soon enough to be able to discover these eigenvectors ourselves.

Example SEE
Some eigenvalues and eigenvectors
Consider the matrix

A = \left [\array{ 204 & 98 &−26&−10\cr −280 &−134 & 36 & 14 \cr 716 & 348 &−90&−36\cr −472 &−232 & 60 & 28 } \right ]

and the vectors

\eqalignno{ x = \left [\array{ 1\cr −1 \cr 2\cr 5 } \right ] & &y = \left [\array{ −3\cr 4 \cr −10\cr 4 } \right ] & &z = \left [\array{ −3\cr 7 \cr 0\cr 8 } \right ] & &w = \left [\array{ 1\cr −1 \cr 4\cr 0 } \right ] & & & & & & & & }

Then

Ax = \left [\array{ 204 & 98 &−26&−10\cr −280 &−134 & 36 & 14 \cr 716 & 348 &−90&−36\cr −472 &−232 & 60 & 28 } \right ]\left [\array{ 1\cr −1 \cr 2\cr 5 } \right ] = \left [\array{ 4\cr −4 \cr 8\cr 20 } \right ] = 4\left [\array{ 1\cr −1 \cr 2\cr 5 } \right ] = 4x

so x is an eigenvector of A with eigenvalue λ = 4. Also,

Ay = \left [\array{ 204 & 98 &−26&−10\cr −280 &−134 & 36 & 14 \cr 716 & 348 &−90&−36\cr −472 &−232 & 60 & 28 } \right ]\left [\array{ −3\cr 4 \cr −10\cr 4 } \right ] = \left [\array{ 0\cr 0 \cr 0\cr 0 } \right ] = 0\left [\array{ −3\cr 4 \cr −10\cr 4 } \right ] = 0y

so y is an eigenvector of A with eigenvalue λ = 0. Also,

Az = \left [\array{ 204 & 98 &−26&−10\cr −280 &−134 & 36 & 14 \cr 716 & 348 &−90&−36\cr −472 &−232 & 60 & 28 } \right ]\left [\array{ −3\cr 7 \cr 0\cr 8 } \right ] = \left [\array{ −6\cr 14 \cr 0\cr 16 } \right ] = 2\left [\array{ −3\cr 7 \cr 0\cr 8 } \right ] = 2z

so z is an eigenvector of A with eigenvalue λ = 2. Also,

Aw = \left [\array{ 204 & 98 &−26&−10\cr −280 &−134 & 36 & 14 \cr 716 & 348 &−90&−36\cr −472 &−232 & 60 & 28 } \right ]\left [\array{ 1\cr −1 \cr 4\cr 0 } \right ] = \left [\array{ 2\cr −2 \cr 8\cr 0 } \right ] = 2\left [\array{ 1\cr −1 \cr 4\cr 0 } \right ] = 2w

so w is an eigenvector of A with eigenvalue λ = 2.

So we have demonstrated four eigenvectors of A. Are there more? Yes, any nonzero scalar multiple of an eigenvector is again an eigenvector. In this example, set u = 30x. Then

\eqalignno{ Au & = A(30x) & & & & \cr & = 30Ax & &\text{Theorem MMSMM} & & & & \cr & = 30(4x) & &\text{$x$ an eigenvector of $A$} & & & & \cr & = 4(30x) & &\text{Property SMAM} & & & & \cr & = 4u & & & & }

so that u is also an eigenvector of A for the same eigenvalue, λ = 4.

The vectors z and w are both eigenvectors of A for the same eigenvalue λ = 2, yet this is not as simple as the two vectors just being scalar multiples of each other (they aren’t). Look what happens when we add them together, to form v = z + w, and multiply by A,

\eqalignno{ Av & = A(z + w) & & & & \cr & = Az + Aw & &\text{Theorem MMDAA} & & & & \cr & = 2z + 2w & &\text{$z$, $w$ eigenvectors of $A$} & & & & \cr & = 2(z + w) & &\text{Property DVAC} & & & & \cr & = 2v & & & & }

so that v is also an eigenvector of A for the eigenvalue λ = 2. So it would appear that the set of eigenvectors that are associated with a fixed eigenvalue is closed under the vector space operations of {ℂ}^{n}. Hmmm.

The vector y is an eigenvector of A for the eigenvalue λ = 0, so we can use Theorem ZSSM to write Ay = 0y = 0. But this also means that y ∈N\kern -1.95872pt \left (A\right ). There would appear to be a connection here also.
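
All of the computations in this example are easy to confirm with software. Here is a minimal check in Python with NumPy (one tool among many); it is a companion to the example, not part of the development:

    # Minimal check of Example SEE: confirm Av = (lambda)v for each pair.
    import numpy as np

    A = np.array([[ 204,   98, -26, -10],
                  [-280, -134,  36,  14],
                  [ 716,  348, -90, -36],
                  [-472, -232,  60,  28]])

    x = np.array([1, -1, 2, 5])
    y = np.array([-3, 4, -10, 4])
    z = np.array([-3, 7, 0, 8])
    w = np.array([1, -1, 4, 0])

    for v, lam in [(x, 4), (y, 0), (z, 2), (w, 2)]:
        assert np.array_equal(A @ v, lam * v)   # Av = lambda*v, exactly

    # y has eigenvalue 0, so Ay = 0 and y lies in the null space of A
    assert np.array_equal(A @ y, np.zeros(4, dtype=int))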

Example SEE hints at a number of intriguing properties, and there are many more. We will explore the general properties of eigenvalues and eigenvectors in Section PEE, but in this section we will concern ourselves with the question of actually computing eigenvalues and eigenvectors. First we need a bit of background material on polynomials and matrices.

Subsection PM: Polynomials and Matrices

A polynomial is a combination of powers, multiplication by scalar coefficients, and addition (with subtraction just being the inverse of addition). We never have occasion to divide when computing the value of a polynomial. So it is with matrices. We can add and subtract matrices, we can multiply matrices by scalars, and we can form powers of square matrices by repeated applications of matrix multiplication. We do not normally divide matrices (though sometimes we can multiply by an inverse). If a matrix is square, all the operations constituting a polynomial will preserve the size of the matrix. So it is natural to consider evaluating a polynomial with a matrix, effectively replacing the variable of the polynomial by a matrix. We’ll demonstrate with an example,

Example PM
Polynomial of a matrix
Let

\eqalignno{ p(x) = 14 + 19x − 3{x}^{2} − 7{x}^{3} + {x}^{4} & &D = \left [\array{ −1&3& 2\cr 1 &0 &−2 \cr −3&1& 1 } \right ] & & & & }

and we will compute p(D). First, the necessary powers of D. Notice that {D}^{0} is defined to be the multiplicative identity, {I}_{3}, as will be the case in general.

\eqalignno{ {D}^{0} & = {I}_{ 3} = \left [\array{ 1&0&0\cr 0&1 &0 \cr 0&0&1} \right ] & & \cr {D}^{1} & = D = \left [\array{ −1&3& 2\cr 1 &0 &−2 \cr −3&1& 1 } \right ] & & \cr {D}^{2} & = D{D}^{1} = \left [\array{ −1&3& 2\cr 1 &0 &−2 \cr −3&1& 1 } \right ]\left [\array{ −1&3& 2\cr 1 &0 &−2 \cr −3&1& 1 } \right ] = \left [\array{ −2&−1&−6\cr 5 & 1 & 0 \cr 1 &−8&−7 } \right ] & & \cr {D}^{3} & = D{D}^{2} = \left [\array{ −1&3& 2\cr 1 &0 &−2 \cr −3&1& 1 } \right ]\left [\array{ −2&−1&−6\cr 5 & 1 & 0 \cr 1 &−8&−7 } \right ] = \left [\array{ 19&−12&−8\cr −4 & 15 & 8 \cr 12& −4 &11 } \right ] & & \cr {D}^{4} & = D{D}^{3} = \left [\array{ −1&3& 2\cr 1 &0 &−2 \cr −3&1& 1 } \right ]\left [\array{ 19&−12&−8\cr −4 & 15 & 8 \cr 12& −4 &11 } \right ] = \left [\array{ −7 &49& 54\cr −5 &−4 &−30 \cr −49&47& 43 } \right ] & & \cr & & }

Then

\eqalignno{ p(D) & = 14 + 19D − 3{D}^{2} − 7{D}^{3} + {D}^{4} & & \cr & = 14\left [\array{ 1&0&0\cr 0&1 &0 \cr 0&0&1} \right ] + 19\left [\array{ −1&3& 2\cr 1 &0 &−2 \cr −3&1& 1 } \right ] − 3\left [\array{ −2&−1&−6\cr 5 & 1 & 0 \cr 1 &−8&−7 } \right ] & & \cr &\quad \quad − 7\left [\array{ 19&−12&−8\cr −4 & 15 & 8 \cr 12& −4 &11 } \right ] + \left [\array{ −7 &49& 54\cr −5 &−4 &−30 \cr −49&47& 43 } \right ] & & \cr & = \left [\array{ −139&193& 166\cr 27 &−98 &−124 \cr −193&118& 20 } \right ] & & }

Notice that p(x) factors as

p(x) = 14 + 19x − 3{x}^{2} − 7{x}^{3} + {x}^{4} = (x − 2)(x − 7){(x + 1)}^{2}

Because D commutes with itself (DD = DD), we can use distributivity of matrix multiplication across matrix addition (Theorem MMDAA) without being careful with any of the matrix products, and just as easily evaluate p(D) using the factored form of p(x),

\eqalignno{ p(D) & = 14 + 19D − 3{D}^{2} − 7{D}^{3} + {D}^{4} = (D − 2{I}_{ 3})(D − 7{I}_{3}){(D + {I}_{3})}^{2} & & \cr & = \left [\array{ −3& 3 & 2\cr 1 &−2 &−2 \cr −3& 1 &−1 } \right ]\kern 1.95872pt \left [\array{ −8& 3 & 2\cr 1 &−7 &−2 \cr −3& 1 &−6 } \right ]\kern 1.95872pt {\left [\array{ 0 &3& 2\cr 1 &1 &−2 \cr −3&1& 2 } \right ]}^{2} & & \cr & = \left [\array{ −139&193& 166\cr 27 &−98 &−124 \cr −193&118& 20 } \right ] & & }

This example is not meant to be too profound. It is meant to show you that it is natural to evaluate a polynomial with a matrix, and that the factored form of the polynomial is as good as (or maybe better than) the expanded form. And do not forget that constant terms in polynomials are really multiples of the identity matrix when we are evaluating the polynomial with a matrix.
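
As a quick check on this example, the sketch below (Python with NumPy, using nothing beyond the example's own data) evaluates p(D) in both expanded and factored form and confirms that the two results agree:

    # Check Example PM: evaluate p(D) in expanded and factored form.
    import numpy as np

    D = np.array([[-1, 3,  2],
                  [ 1, 0, -2],
                  [-3, 1,  1]])
    I = np.eye(3, dtype=int)

    D2 = D @ D
    D3 = D @ D2
    D4 = D @ D3

    # Expanded form: the constant term 14 becomes 14*I at a matrix argument
    expanded = 14*I + 19*D - 3*D2 - 7*D3 + D4

    # Factored form: p(x) = (x - 2)(x - 7)(x + 1)^2
    factored = (D - 2*I) @ (D - 7*I) @ np.linalg.matrix_power(D + I, 2)

    assert np.array_equal(expanded, factored)
    print(expanded)   # [[-139 193 166], [27 -98 -124], [-193 118 20]]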

Subsection EEE: Existence of Eigenvalues and Eigenvectors

Before we embark on computing eigenvalues and eigenvectors, we will prove that every matrix has at least one eigenvalue (and an eigenvector to go with it). Later, in Theorem MNEM, we will determine the maximum number of eigenvalues a matrix may have.

The determinant (Definition D) will be a powerful tool in Subsection EE.CEE when it comes time to compute eigenvalues. However, it is possible, with some more advanced machinery, to compute eigenvalues without ever making use of the determinant. Sheldon Axler does just that in his book, Linear Algebra Done Right. Here and now, we give Axler’s “determinant-free” proof that every matrix has an eigenvalue. The result is not too startling, but the proof is most enjoyable.

Theorem EMHE
Every Matrix Has an Eigenvalue
Suppose A is a square matrix. Then A has at least one eigenvalue.

Proof   Suppose that A has size n, and choose x as any nonzero vector from {ℂ}^{n}. (Notice how much latitude we have in our choice of x. Only the zero vector is off-limits.) Consider the set

S = \left \{x,\kern 1.95872pt Ax,\kern 1.95872pt {A}^{2}x,\kern 1.95872pt {A}^{3}x,\kern 1.95872pt \mathop{\mathop{…}},\kern 1.95872pt {A}^{n}x\right \}

This is a set of n + 1 vectors from {ℂ}^{n}, so by Theorem MVSLD, S is linearly dependent. Let {a}_{0},\kern 1.95872pt {a}_{1},\kern 1.95872pt {a}_{2},\kern 1.95872pt \mathop{\mathop{…}},\kern 1.95872pt {a}_{n} be a collection of n + 1 scalars from ℂ, not all zero, that provide a relation of linear dependence on S. In other words,

{a}_{0}x + {a}_{1}Ax + {a}_{2}{A}^{2}x + {a}_{ 3}{A}^{3}x + \mathrel{⋯} + {a}_{ n}{A}^{n}x = 0

Some of the {a}_{i} are nonzero. Suppose that just {a}_{0}\mathrel{≠}0, and {a}_{1} = {a}_{2} = {a}_{3} = \mathrel{⋯} = {a}_{n} = 0. Then {a}_{0}x = 0 and by Theorem SMEZV, either {a}_{0} = 0 or x = 0, which are both contradictions. So {a}_{i}\mathrel{≠}0 for some i ≥ 1. Let m be the largest integer such that {a}_{m}\mathrel{≠}0. From this discussion we know that m ≥ 1. We can also assume that {a}_{m} = 1, for if not, replace each {a}_{i} by {a}_{i}∕{a}_{m} to obtain scalars that serve equally well in providing a relation of linear dependence on S.

Define the polynomial

p(x) = {a}_{0} + {a}_{1}x + {a}_{2}{x}^{2} + {a}_{ 3}{x}^{3} + \mathrel{⋯} + {a}_{ m}{x}^{m}

Because we have consistently used ℂ as our set of scalars (rather than ℝ), we know that we can factor p(x) into linear factors of the form (x − {b}_{i}), where {b}_{i} ∈ ℂ. So there are scalars, {b}_{1},\kern 1.95872pt {b}_{2},\kern 1.95872pt {b}_{3},\kern 1.95872pt \mathop{\mathop{…}},\kern 1.95872pt {b}_{m}, from ℂ so that,

p(x) = (x − {b}_{m})(x − {b}_{m−1})\mathrel{⋯}(x − {b}_{3})(x − {b}_{2})(x − {b}_{1})

Put it all together and

\eqalignno{ 0& = {a}_{0}x + {a}_{1}Ax + {a}_{2}{A}^{2}x + {a}_{ 3}{A}^{3}x + \mathrel{⋯} + {a}_{ n}{A}^{n}x && && \cr & = {a}_{0}x + {a}_{1}Ax + {a}_{2}{A}^{2}x + {a}_{ 3}{A}^{3}x + \mathrel{⋯} + {a}_{ m}{A}^{m}x &&\text{${a}_{ i} = 0$ for $i > m$} &&&& \cr & = \left ({a}_{0}{I}_{n} + {a}_{1}A + {a}_{2}{A}^{2} + {a}_{ 3}{A}^{3} + \mathrel{⋯} + {a}_{ m}{A}^{m}\right )x &&\text{Theorem MMDAA}&&&& \cr & = p(A)x &&\text{Definition of $p(x)$} &&&& \cr & = (A − {b}_{m}{I}_{n})(A − {b}_{m−1}{I}_{n})\mathrel{⋯}(A − {b}_{3}{I}_{n})(A − {b}_{2}{I}_{n})(A − {b}_{1}{I}_{n})x&& && }

Let k be the smallest integer such that

(A − {b}_{k}{I}_{n})(A − {b}_{k−1}{I}_{n})\mathrel{⋯}(A − {b}_{3}{I}_{n})(A − {b}_{2}{I}_{n})(A − {b}_{1}{I}_{n})x = 0.

From the preceding equation, we know that k ≤ m. Define the vector z by

z = (A − {b}_{k−1}{I}_{n})\mathrel{⋯}(A − {b}_{3}{I}_{n})(A − {b}_{2}{I}_{n})(A − {b}_{1}{I}_{n})x

Notice that by the definition of k, the vector z must be nonzero. In the case where k = 1, we understand that z is defined by z = x, and z is still nonzero. Now

(A − {b}_{k}{I}_{n})z = (A − {b}_{k}{I}_{n})(A − {b}_{k−1}{I}_{n})\mathrel{⋯}(A − {b}_{3}{I}_{n})(A − {b}_{2}{I}_{n})(A − {b}_{1}{I}_{n})x = 0

which allows us to write

\eqalignno{ Az & = (A + O)z & &\text{Property ZM} & & & & \cr & = (A − {b}_{k}{I}_{n} + {b}_{k}{I}_{n})z & &\text{Property AIM} & & & & \cr & = (A − {b}_{k}{I}_{n})z + {b}_{k}{I}_{n}z & &\text{Theorem MMDAA} & & & & \cr & = 0 + {b}_{k}{I}_{n}z & &\text{Defining property of $z$} & & & & \cr & = {b}_{k}{I}_{n}z & &\text{Property ZM} & & & & \cr & = {b}_{k}z & &\text{Theorem MMIM} & & & & }

Since z\mathrel{≠}0, this equation says that z is an eigenvector of A for the eigenvalue λ = {b}_{k} (Definition EEM), so we have shown that any square matrix A does have at least one eigenvalue.

The proof of Theorem EMHE is constructive (it contains an unambiguous procedure that leads to an eigenvalue), but it is not meant to be practical. We will illustrate the theorem with an example, the purpose being to provide a companion for studying the proof and not to suggest this is the best procedure for computing an eigenvalue.

Example CAEHW
Computing an eigenvalue the hard way
This example illustrates the proof of Theorem EMHE, so will employ the same notation as the proof — look there for full explanations. It is not meant to be an example of a reasonable computational approach to finding eigenvalues and eigenvectors. OK, warnings in place, here we go.

Let

A = \left [\array{ −7 &−1& 11 & 0 &−4\cr 4 & 1 & 0 & 2 & 0 \cr −10&−1& 14 & 0 &−4\cr 8 & 2 &−15 &−1 & 5 \cr −10&−1& 16 & 0 &−6 } \right ]

and choose

x = \left [\array{ 3\cr 0 \cr 3\cr −5 \cr 4 } \right ]

It is important to notice that the choice of x could be anything, so long as it is not the zero vector. We have not chosen x totally at random, but so as to make our illustration of the theorem as general as possible. You could replicate this example with your own choice and the computations are guaranteed to be reasonable, provided you have a computational tool that will factor a fifth degree polynomial for you.

The set

\eqalignno{ S & = \left \{x,\kern 1.95872pt Ax,\kern 1.95872pt {A}^{2}x,\kern 1.95872pt {A}^{3}x,\kern 1.95872pt {A}^{4}x,\kern 1.95872pt {A}^{5}x\right \} & & \cr & = \left \{\left [\array{ 3\cr 0 \cr 3\cr −5 \cr 4 } \right ],\kern 1.95872pt \left [\array{ −4\cr 2 \cr −4\cr 4 \cr −6 } \right ],\kern 1.95872pt \left [\array{ 6\cr −6 \cr 6\cr −2 \cr 10 } \right ],\kern 1.95872pt \left [\array{ −10\cr 14 \cr −10\cr −2 \cr −18 } \right ],\kern 1.95872pt \left [\array{ 18\cr −30 \cr 18\cr 10 \cr 34} \right ],\kern 1.95872pt \left [\array{ −34\cr 62 \cr −34\cr −26 \cr −66 } \right ]\right \} & & }

is guaranteed to be linearly dependent, as it has six vectors from {ℂ}^{5} (Theorem MVSLD). We will search for a non-trivial relation of linear dependence by solving, via row operations, a homogeneous system of equations whose coefficient matrix has the vectors of S as columns,

\left [\array{ 3 &−4& 6 &−10& 18 &−34\cr 0 & 2 &−6 & 14 &−30 & 62 \cr 3 &−4& 6 &−10& 18 &−34\cr −5 & 4 &−2 & −2 & 10 &−26 \cr 4 &−6&10&−18& 34 &−66 } \right ]\mathop{\longrightarrow}\limits_{}^{\text{RREF}}\left [\array{ \text{1}&0&−2&6&−14&30\cr 0&\text{1 } &−3 &7 &−15 &31 \cr 0&0& 0 &0& 0 & 0\cr 0&0 & 0 &0 & 0 & 0 \cr 0&0& 0 &0& 0 & 0 } \right ]

There are four free variables for describing solutions to this homogeneous system, so we have our pick of solutions. The most expedient choice would be to set {x}_{3} = 1 and {x}_{4} = {x}_{5} = {x}_{6} = 0. However, we will again opt to maximize the generality of our illustration of Theorem EMHE and choose {x}_{3} = −8, {x}_{4} = −3, {x}_{5} = 1 and {x}_{6} = 0. This leads to a solution with {x}_{1} = 16 and {x}_{2} = 12.

This relation of linear dependence then says that

\eqalignno{ 0 & = 16x + 12Ax − 8{A}^{2}x − 3{A}^{3}x + {A}^{4}x + 0{A}^{5}x & & \cr 0 & = \left (16 + 12A − 8{A}^{2} − 3{A}^{3} + {A}^{4}\right )x & & }

So we define p(x) = 16 + 12x − 8{x}^{2} − 3{x}^{3} + {x}^{4}, and as advertised in the proof of Theorem EMHE, we have a polynomial of degree m = 4 > 1 such that p(A)x = 0. Now we need to factor p(x) over ℂ. If you made your own choice of x at the start, this is where you might have a fifth degree polynomial, and where you might need to use a computational tool to find roots and factors. We have

p(x) = 16 + 12x − 8{x}^{2} − 3{x}^{3} + {x}^{4} = (x − 4)(x + 2)(x − 2)(x + 1)

So we know that

0 = p(A)x = (A − 4{I}_{5})(A + 2{I}_{5})(A − 2{I}_{5})(A + 1{I}_{5})x

We apply one factor at a time, until we get the zero vector, so as to determine the value of k described in the proof of Theorem EMHE,

\eqalignno{ (A + 1{I}_{5})x & = \left [\array{ −6 &−1& 11 &0&−4\cr 4 & 2 & 0 &2 & 0 \cr −10&−1& 15 &0&−4\cr 8 & 2 &−15 &0 & 5 \cr −10&−1& 16 &0&−5 } \right ]\left [\array{ 3\cr 0 \cr 3\cr −5 \cr 4 } \right ] = \left [\array{ −1\cr 2 \cr −1\cr −1 \cr −2 } \right ] & & \cr (A − 2{I}_{5})(A + 1{I}_{5})x & = \left [\array{ −9 &−1& 11 & 0 &−4\cr 4 &−1 & 0 & 2 & 0 \cr −10&−1& 12 & 0 &−4\cr 8 & 2 &−15 &−3 & 5 \cr −10&−1& 16 & 0 &−8 } \right ]\left [\array{ −1\cr 2 \cr −1\cr −1 \cr −2 } \right ] = \left [\array{ 4\cr −8 \cr 4\cr 4 \cr 8 } \right ] & & \cr (A + 2{I}_{5})(A − 2{I}_{5})(A + 1{I}_{5})x & = \left [\array{ −5 &−1& 11 &0&−4\cr 4 & 3 & 0 &2 & 0 \cr −10&−1& 16 &0&−4\cr 8 & 2 &−15 &1 & 5 \cr −10&−1& 16 &0&−4 } \right ]\left [\array{ 4\cr −8 \cr 4\cr 4 \cr 8 } \right ] = \left [\array{ 0\cr 0 \cr 0\cr 0 \cr 0 } \right ] & & \cr & & }

So k = 3 and

z = (A−2{I}_{5})(A+1{I}_{5})x = \left [\array{ 4\cr −8 \cr 4\cr 4 \cr 8 } \right ]

is an eigenvector of A for the eigenvalue λ = −2, as you can check by doing the computation Az. If you work through this example with your own choice of the vector x (strongly recommended) then the eigenvalue you will find may be different, but will be in the set \left \{3,\kern 1.95872pt 0,\kern 1.95872pt 1,\kern 1.95872pt − 1,\kern 1.95872pt − 2\right \}. See Exercise EE.M60 for a suggested starting vector.
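
If you would like to replay this example with software, the following sketch (Python with NumPy, one possible tool) traces the same steps: it verifies that p(A)x = 0 and then applies the linear factors of p(x) one at a time, in the order used above, stopping when a product first vanishes. It is a study aid for the proof, not a practical algorithm:

    # Replay Example CAEHW: verify p(A)x = 0, then peel off linear factors.
    import numpy as np

    A = np.array([[ -7, -1,  11,  0, -4],
                  [  4,  1,   0,  2,  0],
                  [-10, -1,  14,  0, -4],
                  [  8,  2, -15, -1,  5],
                  [-10, -1,  16,  0, -6]])
    x = np.array([3, 0, 3, -5, 4])
    I = np.eye(5, dtype=int)

    # p(x) = 16 + 12x - 8x^2 - 3x^3 + x^4 annihilates x when evaluated at A
    coeffs = [16, 12, -8, -3, 1]
    pAx = sum(c * (np.linalg.matrix_power(A, j) @ x) for j, c in enumerate(coeffs))
    assert not pAx.any()   # p(A)x = 0

    # Apply factors (A - b*I) in the order used above: b = -1, 2, -2, 4
    z = x
    for b in [-1, 2, -2, 4]:
        nxt = (A - b*I) @ z
        if not nxt.any():   # the partial product first vanishes here, so k = 3
            print("eigenvalue", b, "eigenvector", z)   # b = -2, z = (4,-8,4,4,8)
            break
        z = nxt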

Subsection CEE: Computing Eigenvalues and Eigenvectors

Fortunately, we need not rely on the procedure of Theorem EMHE each time we need an eigenvalue. It is the determinant, and specifically Theorem SMZD, that provides the main tool for computing eigenvalues. Here is an informal sequence of equivalences that is the key to determining the eigenvalues and eigenvectors of a matrix,

Ax = λx\kern 3.26288pt \mathrel{⇔}\kern 3.26288pt Ax − λ{I}_{n}x = 0\kern 3.26288pt \mathrel{⇔}\kern 3.26288pt \left (A − λ{I}_{n}\right )x = 0

So, for an eigenvalue λ and associated eigenvector x\mathrel{≠}0, the vector x will be a nonzero element of the null space of A − λ{I}_{n}, while the matrix A − λ{I}_{n} will be singular and therefore have zero determinant. These ideas are made precise in Theorem EMRCP and Theorem EMNS, but for now this brief discussion should suffice as motivation for the following definition and example.

Definition CP
Characteristic Polynomial
Suppose that A is a square matrix of size n. Then the characteristic polynomial of A is the polynomial {p}_{A}\left (x\right ) defined by

{p}_{A}\left (x\right ) =\mathop{ det} \left (A − x{I}_{n}\right )

Example CPMS3
Characteristic polynomial of a matrix, size 3
Consider

F = \left [\array{ −13&−8&−4\cr 12 & 7 & 4 \cr 24 &16& 7 } \right ]

Then

\eqalignno{ {p}_{F }\left (x\right ) & =\mathop{ det} \left (F − x{I}_{3}\right ) & & & & \cr & = \left \vert \array{ −13 − x& −8 & −4\cr 12 &7 − x & 4 \cr 24 & 16 &7 − x } \right \vert & &\text{Definition CP} & & & & \cr & = (−13 − x)\left \vert \array{ 7 − x& 4\cr 16 &7 − x } \right \vert + (−8)(−1)\left \vert \array{ 12& 4\cr 24 &7 − x } \right \vert & &\text{Definition DM} & & & & \cr &\quad \quad + (−4)\left \vert \array{ 12&7 − x\cr 24 & 16 } \right \vert & & & & \cr & = (−13 − x)((7 − x)(7 − x) − 4(16)) & &\text{Theorem DMST} & & & & \cr &\quad \quad + (−8)(−1)(12(7 − x) − 4(24)) & & & & \cr &\quad \quad + (−4)(12(16) − (7 − x)(24)) & & & & \cr & = 3 + 5x + {x}^{2} − {x}^{3} & & & & \cr & = −(x − 3){(x + 1)}^{2} & & & & }
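
Software can confirm this computation. Note that NumPy's np.poly follows the alternate convention det(x{I}_{n} − A) (compare Exercise T15), which differs from Definition CP by a factor of {(−1)}^{n}; the sketch below accounts for this:

    # Characteristic polynomial of F via NumPy; np.poly uses det(xI - F).
    import numpy as np

    F = np.array([[-13, -8, -4],
                  [ 12,  7,  4],
                  [ 24, 16,  7]])

    r = np.poly(F)        # coefficients of x^3 - x^2 - 5x - 3, degree high to low
    print(np.round(r))    # [ 1. -1. -5. -3.], so p_F(x) = -(x^3 - x^2 - 5x - 3)
    print(np.roots(r))    # 3, -1, -1 (up to round-off)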

The characteristic polynomial is our main computational tool for finding eigenvalues, and will sometimes be used to aid us in determining the properties of eigenvalues.

Theorem EMRCP
Eigenvalues of a Matrix are Roots of Characteristic Polynomials
Suppose A is a square matrix. Then λ is an eigenvalue of A if and only if {p}_{A}\left (λ\right ) = 0.

Proof   Suppose A has size n.

\eqalignno{ &\text{$λ$ is an eigenvalue of $A$} & & & & \cr &\kern 3.26288pt \mathrel{⇔}\kern 3.26288pt \text{ there exists $x\mathrel{≠}0$ so that $Ax = λx$} & &\text{Definition EEM} & & & & \cr &\kern 3.26288pt \mathrel{⇔}\kern 3.26288pt \text{ there exists $x\mathrel{≠}0$ so that $Ax − λx = 0$} & & & & \cr &\kern 3.26288pt \mathrel{⇔}\kern 3.26288pt \text{ there exists $x\mathrel{≠}0$ so that $Ax − λ{I}_{n}x = 0$} & &\text{Theorem MMIM} & & & & \cr &\kern 3.26288pt \mathrel{⇔}\kern 3.26288pt \text{ there exists $x\mathrel{≠}0$ so that $(A − λ{I}_{n})x = 0$} & &\text{Theorem MMDAA} & & & & \cr &\kern 3.26288pt \mathrel{⇔}\kern 3.26288pt A − λ{I}_{n}\text{ is singular} & &\text{Definition NM} & & & & \cr &\kern 3.26288pt \mathrel{⇔}\kern 3.26288pt \mathop{ det} \left (A − λ{I}_{n}\right ) = 0 & &\text{Theorem SMZD} & & & & \cr &\kern 3.26288pt \mathrel{⇔}\kern 3.26288pt {p}_{A}\left (λ\right ) = 0 & &\text{Definition CP} & & & & }

Example EMS3
Eigenvalues of a matrix, size 3
In Example CPMS3 we found the characteristic polynomial of

F = \left [\array{ −13&−8&−4\cr 12 & 7 & 4 \cr 24 &16& 7 } \right ]

to be {p}_{F }\left (x\right ) = −(x − 3){(x + 1)}^{2}. Since the polynomial is factored, we can find all of its roots easily: they are x = 3 and x = −1. By Theorem EMRCP, λ = 3 and λ = −1 are both eigenvalues of F, and these are the only eigenvalues of F. We’ve found them all.

Let us now turn our attention to the computation of eigenvectors.

Definition EM
Eigenspace of a Matrix
Suppose that A is a square matrix and λ is an eigenvalue of A. Then the eigenspace of A for λ, {ℰ}_{A}\left (λ\right ), is the set of all the eigenvectors of A for λ, together with the inclusion of the zero vector.

Example SEE hinted that the set of eigenvectors for a single eigenvalue might have some closure properties, and with the addition of the non-eigenvector, 0, we indeed get a whole subspace.

Theorem EMS
Eigenspace for a Matrix is a Subspace
Suppose A is a square matrix of size n and λ is an eigenvalue of A. Then the eigenspace {ℰ}_{A}\left (λ\right ) is a subspace of the vector space {ℂ}^{n}.

Proof   We will check the three conditions of Theorem TSS. First, Definition EM explicitly includes the zero vector in {ℰ}_{A}\left (λ\right ), so the set is non-empty.

Suppose that x,\kern 1.95872pt y ∈{ℰ}_{A}\left (λ\right ), that is, x and y are two eigenvectors of A for λ. Then

\eqalignno{ A\left (x + y\right ) & = Ax + Ay & &\text{Theorem MMDAA} & & & & \cr & = λx + λy & &\text{$x,\kern 1.95872pt y$ eigenvectors of $A$} & & & & \cr & = λ\left (x + y\right ) & &\text{Property DVAC} & & & & }

So either x + y = 0, or x + y is an eigenvector of A for λ (Definition EEM). So, in either event, x + y ∈{ℰ}_{A}\left (λ\right ), and we have additive closure.

Suppose that α ∈ ℂ, and that x ∈{ℰ}_{A}\left (λ\right ), that is, x is an eigenvector of A for λ. Then

\eqalignno{ A\left (αx\right ) & = α\left (Ax\right ) & &\text{Theorem MMSMM} & & & & \cr & = αλx & &\text{$x$ an eigenvector of $A$} & & & & \cr & = λ\left (αx\right ) & &\text{Property SMAC} & & & & }

So either αx = 0, or αx is an eigenvector of A for λ (Definition EEM). So, in either event, αx ∈{ℰ}_{A}\left (λ\right ), and we have scalar closure.

With the three conditions of Theorem TSS met, we know {ℰ}_{A}\left (λ\right ) is a subspace.

Theorem EMS tells us that an eigenspace is a subspace (and hence a vector space in its own right). Our next theorem tells us how to quickly construct this subspace.

Theorem EMNS
Eigenspace of a Matrix is a Null Space
Suppose A is a square matrix of size n and λ is an eigenvalue of A. Then

{ℰ}_{A}\left (λ\right ) = N\kern -1.95872pt \left (A − λ{I}_{n}\right )

Proof   The conclusion of this theorem is an equality of sets, so normally we would follow the advice of Definition SE. However, in this case we can construct a sequence of equivalences which will together provide the two subset inclusions we need. First, notice that 0 ∈{ℰ}_{A}\left (λ\right ) by Definition EM and 0 ∈N\kern -1.95872pt \left (A − λ{I}_{n}\right ) by Theorem HSC. Now consider any nonzero vector x ∈ {ℂ}^{n},

\eqalignno{ x ∈{ℰ}_{A}\left (λ\right ) &\kern 3.26288pt \mathrel{⇔}\kern 3.26288pt Ax = λx & &\text{Definition EM} & & & & \cr &\kern 3.26288pt \mathrel{⇔}\kern 3.26288pt Ax − λx = 0 & & & & \cr &\kern 3.26288pt \mathrel{⇔}\kern 3.26288pt Ax − λ{I}_{n}x = 0 & &\text{Theorem MMIM} & & & & \cr &\kern 3.26288pt \mathrel{⇔}\kern 3.26288pt \left (A − λ{I}_{n}\right )x = 0 & &\text{Theorem MMDAA} & & & & \cr &\kern 3.26288pt \mathrel{⇔}\kern 3.26288pt x ∈N\kern -1.95872pt \left (A − λ{I}_{n}\right ) & &\text{Definition NSM} & & & & }

You might notice the close parallels (and differences) between the proofs of Theorem EMRCP and Theorem EMNS. Since Theorem EMNS describes the set of all the eigenvectors of A as a null space we can use techniques such as Theorem BNS to provide concise descriptions of eigenspaces. Theorem EMNS also provides a trivial proof for Theorem EMS.

Example ESMS3
Eigenspaces of a matrix, size 3
Example CPMS3 and Example EMS3 describe the characteristic polynomial and eigenvalues of the 3 × 3 matrix

F = \left [\array{ −13&−8&−4\cr 12 & 7 & 4 \cr 24 &16& 7 } \right ]

We will now take each eigenvalue in turn and compute its eigenspace. To do this, we row-reduce the matrix F − λ{I}_{3} in order to determine solutions to the homogeneous system ℒS\kern -1.95872pt \left (F − λ{I}_{3},\kern 1.95872pt 0\right ) and then express the eigenspace as the null space of F − λ{I}_{3} (Theorem EMNS). Theorem BNS then tells us how to write the null space as the span of a basis.

\eqalignno{ λ & = 3 &F − 3{I}_{3} & = \left [\array{ −16&−8&−4\cr 12 & 4 & 4 \cr 24 &16& 4 } \right ]\mathop{\longrightarrow}\limits_{}^{\text{RREF}}\left [\array{ \text{1}&0& {1\over 2} \cr 0&\text{1}&−{1\over 2} \cr 0&0& 0 } \right ] & & & & \cr & &{ℰ}_{F }\left (3\right ) & = N\kern -1.95872pt \left (F − 3{I}_{3}\right ) = \left \langle \left \{\left [\array{ −{1\over 2} \cr {1\over 2} \cr 1 } \right ]\right \}\right \rangle = \left \langle \left \{\left [\array{ −1\cr 1 \cr 2 } \right ]\right \}\right \rangle & & & & \cr λ & = −1 &F + 1{I}_{3} & = \left [\array{ −12&−8&−4\cr 12 & 8 & 4 \cr 24 &16& 8 } \right ]\mathop{\longrightarrow}\limits_{}^{\text{RREF}}\left [\array{ \text{1}&{2\over 3}&{1\over 3} \cr 0&0&0\cr 0&0 &0 } \right ] & & & & \cr & &{ℰ}_{F }\left (−1\right ) & = N\kern -1.95872pt \left (F + 1{I}_{3}\right ) = \left \langle \left \{\left [\array{ −{2\over 3} \cr 1\cr 0 } \right ],\kern 1.95872pt \left [\array{ −{1\over 3} \cr 0\cr 1 } \right ]\right \}\right \rangle = \left \langle \left \{\left [\array{ −2\cr 3 \cr 0 } \right ],\kern 1.95872pt \left [\array{ −1\cr 0 \cr 3 } \right ]\right \}\right \rangle & & & & }

Eigenspaces in hand, we can easily compute eigenvectors by forming nontrivial linear combinations of the basis vectors describing each eigenspace. In particular, notice that we can “pretty up” our basis vectors by using scalar multiples to clear out fractions. More powerful scientific calculators, and most every mathematical software package, will compute eigenvalues of a matrix along with basis vectors of the eigenspaces. Be sure to understand how your device outputs complex numbers, since they are likely to occur. Also, the basis vectors will not necessarily look like the results of an application of Theorem BNS. Duplicating the results of the next section (Subsection EE.ECEE) with your device would be very good practice.  See: Computation E.SAGE
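
A computer algebra system with exact arithmetic reproduces these eigenspace bases directly as null spaces (Theorem EMNS). Here is one possible sketch with SymPy; its nullspace() routine is built on reduced row-echelon form, so the basis vectors it returns match the construction of Theorem BNS:

    # Eigenspaces of Example ESMS3 as exact null spaces, via SymPy.
    import sympy as sp

    F = sp.Matrix([[-13, -8, -4],
                   [ 12,  7,  4],
                   [ 24, 16,  7]])

    for lam in [3, -1]:
        basis = (F - lam * sp.eye(3)).nullspace()   # basis of E_F(lam)
        print(lam, [list(v) for v in basis])
    # lam = 3:  span{(-1/2, 1/2, 1)}
    # lam = -1: span{(-2/3, 1, 0), (-1/3, 0, 1)}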

Subsection ECEE: Examples of Computing Eigenvalues and Eigenvectors

No theorems in this section, just a selection of examples meant to illustrate the range of possibilities for the eigenvalues and eigenvectors of a matrix. These examples can all be done by hand, though the computation of the characteristic polynomial would be very time-consuming and error-prone. It can also be difficult to factor an arbitrary polynomial, though if we were to suggest that most of our eigenvalues are going to be integers, then it can be easier to hunt for roots. These examples are meant to look similar to a concatenation of Example CPMS3, Example EMS3 and Example ESMS3. First, we will sneak in a pair of definitions so we can illustrate them throughout this sequence of examples.

Definition AME
Algebraic Multiplicity of an Eigenvalue
Suppose that A is a square matrix and λ is an eigenvalue of A. Then the algebraic multiplicity of λ, {α}_{A}\left (λ\right ), is the highest power of (x − λ) that divides the characteristic polynomial, {p}_{A}\left (x\right ).

(This definition contains Notation AME.)

Since an eigenvalue λ is a root of the characteristic polynomial, there is always a factor of (x − λ), and the algebraic multiplicity is just the power of this factor in a factorization of {p}_{A}\left (x\right ). So in particular, {α}_{A}\left (λ\right ) ≥ 1. Compare the definition of algebraic multiplicity with the next definition.

Definition GME
Geometric Multiplicity of an Eigenvalue
Suppose that A is a square matrix and λ is an eigenvalue of A. Then the geometric multiplicity of λ, {γ}_{A}\left (λ\right ), is the dimension of the eigenspace {ℰ}_{A}\left (λ\right ).

(This definition contains Notation GME.)

Since every eigenvalue must have at least one eigenvector, the associated eigenspace cannot be trivial, and so {γ}_{A}\left (λ\right ) ≥ 1.

Example EMMS4
Eigenvalue multiplicities, matrix of size 4
Consider the matrix

B = \left [\array{ −2& 1 &−2&−4\cr 12 & 1 & 4 & 9 \cr 6 & 5 &−2&−4\cr 3 &−4 & 5 & 10 } \right ]

then

{p}_{B}\left (x\right ) = 8 − 20x + 18{x}^{2} − 7{x}^{3} + {x}^{4} = (x − 1){(x − 2)}^{3}

So the eigenvalues are λ = 1,\kern 1.95872pt 2 with algebraic multiplicities {α}_{B}\left (1\right ) = 1 and {α}_{B}\left (2\right ) = 3.

Computing eigenvectors,

\eqalignno{ λ & = 1 &B − 1{I}_{4} & = \left [\array{ −3& 1 &−2&−4\cr 12 & 0 & 4 & 9 \cr 6 & 5 &−3&−4\cr 3 &−4 & 5 & 9 } \right ]\mathop{\longrightarrow}\limits_{}^{\text{RREF}}\left [\array{ \text{1}&0& {1\over 3} &0\cr 0&\text{1 } &−1 &0 \cr 0&0& 0 &\text{1}\cr 0&0 & 0 &0 } \right ] & & & & \cr & &{ℰ}_{B}\left (1\right ) & = N\kern -1.95872pt \left (B − 1{I}_{4}\right ) = \left \langle \left \{\left [\array{ −{1\over 3} \cr 1\cr 1 \cr 0 } \right ]\right \}\right \rangle = \left \langle \left \{\left [\array{ −1\cr 3 \cr 3\cr 0 } \right ]\right \}\right \rangle & & & & \cr λ & = 2 &B − 2{I}_{4} & = \left [\array{ −4& 1 &−2&−4\cr 12 &−1 & 4 & 9 \cr 6 & 5 &−4&−4\cr 3 &−4 & 5 & 8 } \right ]\mathop{\longrightarrow}\limits_{}^{\text{RREF}}\left [\array{ \text{1}&0&0&1∕2 \cr 0&\text{1}&0&−1 \cr 0&0&\text{1}&1∕2 \cr 0&0&0& 0 } \right ] & & & & \cr & &{ℰ}_{B}\left (2\right ) & = N\kern -1.95872pt \left (B − 2{I}_{4}\right ) = \left \langle \left \{\left [\array{ −{1\over 2} \cr 1 \cr −{1\over 2} \cr 1 } \right ]\right \}\right \rangle = \left \langle \left \{\left [\array{ −1\cr 2 \cr −1\cr 2 } \right ]\right \}\right \rangle & & & & \cr & & & & }

So each eigenspace has dimension 1 and so {γ}_{B}\left (1\right ) = 1 and {γ}_{B}\left (2\right ) = 1. This example is of interest because of the discrepancy between the two multiplicities for λ = 2. In many of our examples the algebraic and geometric multiplicities will be equal for all of the eigenvalues (as it was for λ = 1 in this example), so keep this example in mind. We will have some explanations for this phenomenon later (see Example NDMS4).
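
Both kinds of multiplicity can be read off from a short computation. The sketch below (SymPy, exact arithmetic, one choice of tool) factors the characteristic polynomial for the algebraic multiplicities and measures the null space of B − λ{I}_{4} for the geometric multiplicities:

    # Algebraic and geometric multiplicities for Example EMMS4, via SymPy.
    import sympy as sp

    B = sp.Matrix([[-2,  1, -2, -4],
                   [12,  1,  4,  9],
                   [ 6,  5, -2, -4],
                   [ 3, -4,  5, 10]])
    x = sp.symbols('x')

    p = (B - x * sp.eye(4)).det()
    print(sp.factor(p))   # (x - 1)*(x - 2)**3: alpha_B(1) = 1, alpha_B(2) = 3

    for lam in [1, 2]:
        gamma = len((B - lam * sp.eye(4)).nullspace())   # dim of eigenspace
        print(lam, gamma)  # gamma_B(1) = 1, gamma_B(2) = 1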

Example ESMS4
Eigenvalues, symmetric matrix of size 4
Consider the matrix

C = \left [\array{ 1&0&1&1\cr 0&1 &1 &1 \cr 1&1&1&0\cr 1&1 &0 &1 } \right ]

then

{p}_{C}\left (x\right ) = −3 + 4x + 2{x}^{2} − 4{x}^{3} + {x}^{4} = (x − 3){(x − 1)}^{2}(x + 1)

So the eigenvalues are λ = 3,\kern 1.95872pt 1,\kern 1.95872pt − 1 with algebraic multiplicities {α}_{C}\left (3\right ) = 1, {α}_{C}\left (1\right ) = 2 and {α}_{C}\left (−1\right ) = 1.

Computing eigenvectors,

\eqalignno{ λ & = 3 &C − 3{I}_{4} & = \left [\array{ −2& 0 & 1 & 1\cr 0 &−2 & 1 & 1 \cr 1 & 1 &−2& 0\cr 1 & 1 & 0 &−2 } \right ]\mathop{\longrightarrow}\limits_{}^{\text{RREF}}\left [\array{ \text{1}&0&0&−1\cr 0&\text{1 } &0 &−1 \cr 0&0&\text{1}&−1\cr 0&0 &0 & 0 } \right ] & & & & \cr & &{ℰ}_{C}\left (3\right ) & = N\kern -1.95872pt \left (C − 3{I}_{4}\right ) = \left \langle \left \{\left [\array{ 1\cr 1 \cr 1\cr 1 } \right ]\right \}\right \rangle & & & & \cr λ & = 1 &C − 1{I}_{4} & = \left [\array{ 0&0&1&1\cr 0&0 &1 &1 \cr 1&1&0&0\cr 1&1 &0 &0 } \right ]\mathop{\longrightarrow}\limits_{}^{\text{RREF}}\left [\array{ \text{1}&1&0&0\cr 0&0 &\text{1 } &1 \cr 0&0&0&0\cr 0&0 &0 &0 } \right ] & & & & \cr & &{ℰ}_{C}\left (1\right ) & = N\kern -1.95872pt \left (C − 1{I}_{4}\right ) = \left \langle \left \{\left [\array{ −1\cr 1 \cr 0\cr 0 } \right ],\kern 1.95872pt \left [\array{ 0\cr 0 \cr −1\cr 1 } \right ]\right \}\right \rangle & & & & \cr λ & = −1 &C + 1{I}_{4} & = \left [\array{ 2&0&1&1\cr 0&2 &1 &1 \cr 1&1&2&0\cr 1&1 &0 &2 } \right ]\mathop{\longrightarrow}\limits_{}^{\text{RREF}}\left [\array{ \text{1}&0&0& 1\cr 0&\text{1 } &0 & 1 \cr 0&0&\text{1}&−1\cr 0&0 &0 & 0 } \right ] & & & & \cr & &{ℰ}_{C}\left (−1\right ) & = N\kern -1.95872pt \left (C + 1{I}_{4}\right ) = \left \langle \left \{\left [\array{ −1\cr −1 \cr 1\cr 1 } \right ]\right \}\right \rangle & & & & \cr & & & & }

So the eigenspace dimensions yield geometric multiplicities {γ}_{C}\left (3\right ) = 1, {γ}_{C}\left (1\right ) = 2 and {γ}_{C}\left (−1\right ) = 1, the same as for the algebraic multiplicities. This example is of interest because C is a symmetric matrix, and symmetric matrices will be the subject of Theorem HMRE.

Example HMEM5
High multiplicity eigenvalues, matrix of size 5
Consider the matrix

E = \left [\array{ 29 & 14 & 2 & 6 &−9\cr −47 &−22 &−1 &−11 & 13 \cr 19 & 10 & 5 & 4 &−8\cr −19 &−10 &−3 & −2 & 8 \cr 7 & 4 & 3 & 1 &−3 } \right ]

then

{p}_{E}\left (x\right ) = −16 + 16x + 8{x}^{2} − 16{x}^{3} + 7{x}^{4} − {x}^{5} = −{(x − 2)}^{4}(x + 1)

So the eigenvalues are λ = 2,\kern 1.95872pt − 1 with algebraic multiplicities {α}_{E}\left (2\right ) = 4 and {α}_{E}\left (−1\right ) = 1.

Computing eigenvectors,

\eqalignno{ λ& = 2 &E − 2{I}_{5}& = \left [\array{ 27 & 14 & 2 & 6 &−9\cr −47 &−24 &−1 &−11 & 13 \cr 19 & 10 & 3 & 4 &−8\cr −19 &−10 &−3 & −4 & 8 \cr 7 & 4 & 3 & 1 &−5 } \right ]\mathop{\longrightarrow}\limits_{}^{\text{RREF}}\left [\array{ \text{1}&0&0& 1 & 0 \cr 0&\text{1}&0&−{3\over 2}&−{1\over 2} \cr 0&0&\text{1}& 0 &−1\cr 0&0 &0 & 0 & 0 \cr 0&0&0& 0 & 0 } \right ] &&&& \cr & &{ℰ}_{E}\left (2\right ) & = N\kern -1.95872pt \left (E − 2{I}_{5}\right ) = \left \langle \left \{\left [\array{ −1 \cr {3\over 2} \cr 0\cr 1 \cr 0 } \right ],\kern 1.95872pt \left [\array{ 0 \cr {1\over 2} \cr 1\cr 0 \cr 1 } \right ]\right \}\right \rangle = \left \langle \left \{\left [\array{ −2\cr 3 \cr 0\cr 2 \cr 0 } \right ],\kern 1.95872pt \left [\array{ 0\cr 1 \cr 2\cr 0 \cr 2 } \right ]\right \}\right \rangle &&&& \cr λ& = −1&E + 1{I}_{5}& = \left [\array{ 30 & 14 & 2 & 6 &−9\cr −47 &−21 &−1 &−11 & 13 \cr 19 & 10 & 6 & 4 &−8\cr −19 &−10 &−3 & −1 & 8 \cr 7 & 4 & 3 & 1 &−2 } \right ]\mathop{\longrightarrow}\limits_{}^{\text{RREF}}\left [\array{ \text{1}&0&0& 2 &0\cr 0&\text{1 } &0 &−4 &0 \cr 0&0&\text{1}& 1 &0\cr 0&0 &0 & 0 &\text{1} \cr 0&0&0& 0 &0 } \right ] &&&& \cr & &{ℰ}_{E}\left (−1\right )& = N\kern -1.95872pt \left (E + 1{I}_{5}\right ) = \left \langle \left \{\left [\array{ −2\cr 4 \cr −1\cr 1 \cr 0 } \right ]\right \}\right \rangle &&&& \cr & & & & }

So the eigenspace dimensions yield geometric multiplicities {γ}_{E}\left (2\right ) = 2 and {γ}_{E}\left (−1\right ) = 1. This example is of interest because λ = 2 has such a large algebraic multiplicity, which is also not equal to its geometric multiplicity.

Example CEMS6
Complex eigenvalues, matrix of size 6
Consider the matrix

F = \left [\array{ −59 & −34 & 41 & 12 & 25 & 30\cr 1 & 7 &−46 &−36 &−11 &−29 \cr −233&−119& 58 &−35& 75 & 54\cr 157 & 81 &−43 & 21 &−51 &−39 \cr −91 & −48 & 32 & −5 & 32 & 26\cr 209 & 107 &−55 & 28 &−69 &−50 } \right ]

then

\eqalignno{ {p}_{F }\left (x\right ) & = −50 + 55x + 13{x}^{2} − 50{x}^{3} + 32{x}^{4} − 9{x}^{5} + {x}^{6} & & \cr & = (x − 2)(x + 1){({x}^{2} − 4x + 5)}^{2} & & \cr & = (x − 2)(x + 1){((x − (2 + i))(x − (2 − i)))}^{2} & & \cr & = (x − 2)(x + 1){(x − (2 + i))}^{2}{(x − (2 − i))}^{2} & & \cr & & }

So the eigenvalues are λ = 2,\kern 1.95872pt − 1, 2 + i,\kern 1.95872pt 2 − i with algebraic multiplicities {α}_{F }\left (2\right ) = 1, {α}_{F }\left (−1\right ) = 1, {α}_{F }\left (2 + i\right ) = 2 and {α}_{F }\left (2 − i\right ) = 2.

Computing eigenvectors,

\eqalignno{ λ & = 2 && \cr F − 2{I}_{6}& = \left [\array{ −61 & −34 & 41 & 12 & 25 & 30\cr 1 & 5 &−46 &−36 &−11 &−29 \cr −233&−119& 56 &−35& 75 & 54\cr 157 & 81 &−43 & 19 &−51 &−39 \cr −91 & −48 & 32 & −5 & 30 & 26\cr 209 & 107 &−55 & 28 &−69 &−52 } \right ]\mathop{\longrightarrow}\limits_{}^{\text{RREF}}\left [\array{ \text{1}&0&0&0&0& {1\over 5} \cr 0&\text{1}&0&0&0& 0 \cr 0&0&\text{1}&0&0& {3\over 5} \cr 0&0&0&\text{1}&0&−{1\over 5} \cr 0&0&0&0&\text{1}& {4\over 5} \cr 0&0&0&0&0& 0 } \right ] && \cr {ℰ}_{F }\left (2\right ) & = N\kern -1.95872pt \left (F − 2{I}_{6}\right ) = \left \langle \left \{\left [\array{ −{1\over 5} \cr 0 \cr −{3\over 5} \cr {1\over 5} \cr −{4\over 5} \cr 1 } \right ]\right \}\right \rangle = \left \langle \left \{\left [\array{ −1\cr 0 \cr −3\cr 1 \cr −4\cr 5 } \right ]\right \}\right \rangle && \cr & & }

\eqalignno{ λ & = −1 && \cr F + 1{I}_{6}& = \left [\array{ −58 & −34 & 41 & 12 & 25 & 30\cr 1 & 8 &−46 &−36 &−11 &−29 \cr −233&−119& 59 &−35& 75 & 54\cr 157 & 81 &−43 & 22 &−51 &−39 \cr −91 & −48 & 32 & −5 & 33 & 26\cr 209 & 107 &−55 & 28 &−69 &−49 } \right ]\mathop{\longrightarrow}\limits_{}^{\text{RREF}}\left [\array{ \text{1}&0&0&0&0& {1\over 2} \cr 0&\text{1}&0&0&0&−{3\over 2} \cr 0&0&\text{1}&0&0& {1\over 2} \cr 0&0&0&\text{1}&0& 0 \cr 0&0&0&0&\text{1}&−{1\over 2} \cr 0&0&0&0&0& 0 } \right ] && \cr {ℰ}_{F }\left (−1\right )& = N\kern -1.95872pt \left (F + {I}_{6}\right ) = \left \langle \left \{\left [\array{ −{1\over 2} \cr {3\over 2} \cr −{1\over 2} \cr 0 \cr {1\over 2} \cr 1 } \right ]\right \}\right \rangle = \left \langle \left \{\left [\array{ −1\cr 3 \cr −1\cr 0 \cr 1\cr 2 } \right ]\right \}\right \rangle && \cr & & }
\eqalignno{ λ & = 2 + i & & \cr F − (2 + i){I}_{6} & = \left [\array{ −61 − i& −34 & 41 & 12 & 25 & 30 \cr 1 &5 − i& −46 & −36 & −11 & −29 \cr −233 &−119&56 − i& −35 & 75 & 54 \cr 157 & 81 & −43 &19 − i& −51 & −39 \cr −91 & −48 & 32 & −5 &30 − i& 26 \cr 209 & 107 & −55 & 28 & −69 &−52 − i } \right ] & & \cr &\mathop{\longrightarrow}\limits_{}^{\text{RREF}}\left [\array{ \text{1}&0&0&0&0& {1\over 5}(7 + i) \cr 0&\text{1}&0&0&0&{1\over 5}(−9 − 2i)\cr 0&0 &\text{1 } &0 &0 & 1 \cr 0&0&0&\text{1}&0& −1\cr 0&0 &0 &0 &\text{1 } & 1 \cr 0&0&0&0&0& 0 } \right ] & & \cr {ℰ}_{F }\left (2 + i\right ) & = N\kern -1.95872pt \left (F − (2 + i){I}_{6}\right ) = \left \langle \left \{\left [\array{ −{1\over 5}(7 + i) \cr {1\over 5}(9 + 2i)\cr −1 \cr 1\cr −1 \cr 1 } \right ]\right \}\right \rangle = \left \langle \left \{\left [\array{ −7 − i \cr 9 + 2i\cr −5 \cr 5\cr −5 \cr 5 } \right ]\right \}\right \rangle & & \cr & & }

\eqalignno{ λ & = 2 − i & & \cr F − (2 − i){I}_{6} & = \left [\array{ −61 + i& −34 & 41 & 12 & 25 & 30 \cr 1 &5 + i& −46 & −36 & −11 & −29 \cr −233 &−119&56 + i& −35 & 75 & 54 \cr 157 & 81 & −43 &19 + i& −51 & −39 \cr −91 & −48 & 32 & −5 &30 + i& 26 \cr 209 & 107 & −55 & 28 & −69 &−52 + i } \right ] & & \cr &\mathop{\longrightarrow}\limits_{}^{\text{RREF}}\left [\array{ \text{1}&0&0&0&0& {1\over 5}(7 − i) \cr 0&\text{1}&0&0&0&{1\over 5}(−9 + 2i)\cr 0&0 &\text{1 } &0 &0 & 1 \cr 0&0&0&\text{1}&0& −1\cr 0&0 &0 &0 &\text{1 } & 1 \cr 0&0&0&0&0& 0 } \right ] & & \cr {ℰ}_{F }\left (2 − i\right ) & = N\kern -1.95872pt \left (F − (2 − i){I}_{6}\right ) = \left \langle \left \{\left [\array{ {1\over 5}(−7 + i) \cr {1\over 5}(9 − 2i)\cr −1 \cr 1\cr −1 \cr 1 } \right ]\right \}\right \rangle = \left \langle \left \{\left [\array{ −7 + i \cr 9 − 2i\cr −5 \cr 5\cr −5 \cr 5 } \right ]\right \}\right \rangle & & \cr & & }
So the eigenspace dimensions yield geometric multiplicities {γ}_{F }\left (2\right ) = 1, {γ}_{F }\left (−1\right ) = 1, {γ}_{F }\left (2 + i\right ) = 1 and {γ}_{F }\left (2 − i\right ) = 1. This example demonstrates some of the possibilities for the appearance of complex eigenvalues, even when all the entries of the matrix are real. Notice how all the numbers in the analysis of λ = 2 − i are conjugates of the corresponding number in the analysis of λ = 2 + i. This is the content of the upcoming Theorem ERMCP.
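
A quick numerical computation shows the conjugate pairing directly. The sketch below (Python with NumPy) asks for all six eigenvalues of F at once; the complex ones arrive in conjugate pairs, as Theorem ERMCP will predict:

    # Eigenvalues of the real matrix F arrive in conjugate pairs.
    import numpy as np

    F = np.array([[ -59,  -34,  41,  12,  25,  30],
                  [   1,    7, -46, -36, -11, -29],
                  [-233, -119,  58, -35,  75,  54],
                  [ 157,   81, -43,  21, -51, -39],
                  [ -91,  -48,  32,  -5,  32,  26],
                  [ 209,  107, -55,  28, -69, -50]])

    print(np.round(np.linalg.eigvals(F), 6))
    # approximately: 2, -1, 2+1j, 2+1j, 2-1j, 2-1j (in some order)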

Example DEMS5
Distinct eigenvalues, matrix of size 5
Consider the matrix

H = \left [\array{ 15 & 18 & −8 & 6 & −5\cr 5 & 3 & 1 & −1 & −3 \cr 0 & −4 & 5 & −4 & −2\cr −43 &−46 & 17 &−14 & 15 \cr 26 & 30 &−12& 8 &−10 } \right ]

then

{p}_{H}\left (x\right ) = −6x + {x}^{2} + 7{x}^{3} − {x}^{4} − {x}^{5} = −x(x − 2)(x − 1)(x + 1)(x + 3)

So the eigenvalues are λ = 2,\kern 1.95872pt 1,\kern 1.95872pt 0,\kern 1.95872pt − 1,\kern 1.95872pt − 3 with algebraic multiplicities {α}_{H}\left (2\right ) = 1, {α}_{H}\left (1\right ) = 1, {α}_{H}\left (0\right ) = 1, {α}_{H}\left (−1\right ) = 1 and {α}_{H}\left (−3\right ) = 1.

Computing eigenvectors,

\eqalignno{ λ& = 2&H − 2{I}_{5}& = \left [\array{ 13 & 18 & −8 & 6 & −5\cr 5 & 1 & 1 & −1 & −3 \cr 0 & −4 & 3 & −4 & −2\cr −43 &−46 & 17 &−16 & 15 \cr 26 & 30 &−12& 8 &−12 } \right ]\mathop{\longrightarrow}\limits_{}^{\text{RREF}}\left [\array{ \text{1}&0&0&0&−1\cr 0&\text{1 } &0 &0 & 1 \cr 0&0&\text{1}&0& 2\cr 0&0 &0 &\text{1 } & 1 \cr 0&0&0&0& 0 } \right ]&&&& \cr & &{ℰ}_{H}\left (2\right ) & = N\kern -1.95872pt \left (H − 2{I}_{5}\right ) = \left \langle \left \{\left [\array{ 1\cr −1 \cr −2\cr −1 \cr 1 } \right ]\right \}\right \rangle &&&& }

\eqalignno{ λ& = 1&H − 1{I}_{5}& = \left [\array{ 14 & 18 & −8 & 6 & −5\cr 5 & 2 & 1 & −1 & −3 \cr 0 & −4 & 4 & −4 & −2\cr −43 &−46 & 17 &−15 & 15 \cr 26 & 30 &−12& 8 &−11 } \right ]\mathop{\longrightarrow}\limits_{}^{\text{RREF}}\left [\array{ \text{1}&0&0&0&−{1\over 2} \cr 0&\text{1}&0&0& 0 \cr 0&0&\text{1}&0& {1\over 2} \cr 0&0&0&\text{1}& 1\cr 0&0 &0 &0 & 0 } \right ] &&&& \cr & &{ℰ}_{H}\left (1\right ) & = N\kern -1.95872pt \left (H − 1{I}_{5}\right ) = \left \langle \left \{\left [\array{ {1\over 2} \cr 0 \cr −{1\over 2} \cr −1\cr 1 } \right ]\right \}\right \rangle = \left \langle \left \{\left [\array{ 1\cr 0 \cr −1\cr −2 \cr 2 } \right ]\right \}\right \rangle &&&& }
\eqalignno{ λ& = 0&H − 0{I}_{5}& = \left [\array{ 15 & 18 & −8 & 6 & −5\cr 5 & 3 & 1 & −1 & −3 \cr 0 & −4 & 5 & −4 & −2\cr −43 &−46 & 17 &−14 & 15 \cr 26 & 30 &−12& 8 &−10 } \right ]\mathop{\longrightarrow}\limits_{}^{\text{RREF}}\left [\array{ \text{1}&0&0&0& 1\cr 0&\text{1 } &0 &0 &−2 \cr 0&0&\text{1}&0&−2\cr 0&0 &0 &\text{1 } & 0 \cr 0&0&0&0& 0 } \right ]&&&& \cr & &{ℰ}_{H}\left (0\right ) & = N\kern -1.95872pt \left (H − 0{I}_{5}\right ) = \left \langle \left \{\left [\array{ −1\cr 2 \cr 2\cr 0 \cr 1 } \right ]\right \}\right \rangle &&&& }

\eqalignno{ λ& = −1&H + 1{I}_{5}& = \left [\array{ 16 & 18 & −8 & 6 &−5\cr 5 & 4 & 1 & −1 &−3 \cr 0 & −4 & 6 & −4 &−2\cr −43 &−46 & 17 &−13 & 15 \cr 26 & 30 &−12& 8 &−9 } \right ]\mathop{\longrightarrow}\limits_{}^{\text{RREF}}\left [\array{ \text{1}&0&0&0&−1∕2 \cr 0&\text{1}&0&0& 0\cr 0&0 &\text{1 } &0 & 0 \cr 0&0&0&\text{1}& 1∕2 \cr 0&0&0&0& 0 } \right ] &&&& \cr & &{ℰ}_{H}\left (−1\right ) & = N\kern -1.95872pt \left (H + 1{I}_{5}\right ) = \left \langle \left \{\left [\array{ {1\over 2} \cr 0\cr 0 \cr −{1\over 2} \cr 1 } \right ]\right \}\right \rangle = \left \langle \left \{\left [\array{ 1\cr 0 \cr 0\cr −1 \cr 2 } \right ]\right \}\right \rangle &&&& }
\eqalignno{ λ& = −3&H + 3{I}_{5}& = \left [\array{ 18 & 18 & −8 & 6 &−5\cr 5 & 6 & 1 & −1 &−3 \cr 0 & −4 & 8 & −4 &−2\cr −43 &−46 & 17 &−11 & 15 \cr 26 & 30 &−12& 8 &−7 } \right ]\mathop{\longrightarrow}\limits_{}^{\text{RREF}}\left [\array{ \text{1}&0&0&0&−1 \cr 0&\text{1}&0&0& {1\over 2} \cr 0&0&\text{1}&0& 1\cr 0&0 &0 &\text{1 } & 2 \cr 0&0&0&0& 0 } \right ] &&&& \cr & &{ℰ}_{H}\left (−3\right ) & = N\kern -1.95872pt \left (H + 3{I}_{5}\right ) = \left \langle \left \{\left [\array{ 1 \cr −{1\over 2} \cr −1\cr −2 \cr 1 } \right ]\right \}\right \rangle = \left \langle \left \{\left [\array{ −2\cr 1 \cr 2\cr 4 \cr −2 } \right ]\right \}\right \rangle &&&& }

So the eigenspace dimensions yield geometric multiplicities {γ}_{H}\left (2\right ) = 1, {γ}_{H}\left (1\right ) = 1, {γ}_{H}\left (0\right ) = 1, {γ}_{H}\left (−1\right ) = 1 and {γ}_{H}\left (−3\right ) = 1, identical to the algebraic multiplicities. This example is of interest for two reasons. First, λ = 0 is an eigenvalue, illustrating the upcoming Theorem SMZE. Second, all the eigenvalues are distinct, yielding algebraic and geometric multiplicities of 1 for each eigenvalue, illustrating Theorem DED.
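
Since λ = 0 is an eigenvalue, H must be singular, and a short computation confirms both facts at once (Python with NumPy; the determinant and eigenvalues are computed in floating point, so expect round-off):

    # lambda = 0 is an eigenvalue of H, so H is singular (det H = 0).
    import numpy as np

    H = np.array([[ 15,  18,  -8,   6,  -5],
                  [  5,   3,   1,  -1,  -3],
                  [  0,  -4,   5,  -4,  -2],
                  [-43, -46,  17, -14,  15],
                  [ 26,  30, -12,   8, -10]])

    print(np.round(np.linalg.det(H)))           # 0.0
    print(np.round(np.linalg.eigvals(H), 6))    # approximately 2, 1, 0, -1, -3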

Subsection READ: Reading Questions

Suppose A is the 2 × 2 matrix

A = \left [\array{ −5&8\cr −4 &7 } \right ]

  1. Find the eigenvalues of A.
  2. Find the eigenspaces of A.
  3. For the polynomial p(x) = 3{x}^{2} − x + 2, compute p(A).

Subsection EXC: Exercises

C10 Find the characteristic polynomial of the matrix A = \left [\array{ 1&2\cr 3&4 } \right ].  
Contributed by Chris Black Solution [1277]

C11 Find the characteristic polynomial of the matrix A = \left [\array{ 3&2&1\cr 0&1 &1 \cr 1&2&0} \right ].  
Contributed by Chris Black Solution [1277]

C12 Find the characteristic polynomial of the matrix A = \left [\array{ 1&2&1&0\cr 1&0 &1 &0 \cr 2&1&1&0\cr 3&1 &0 &1 } \right ].  
Contributed by Chris Black Solution [1277]

C19 Find the eigenvalues, eigenspaces, algebraic multiplicities and geometric multiplicities for the matrix below. It is possible to do all these computations by hand, and it would be instructive to do so.

C = \left [\array{ −1&2\cr −6 &6 } \right ]

 
Contributed by Robert Beezer Solution [1277]

C20 Find the eigenvalues, eigenspaces, algebraic multiplicities and geometric multiplicities for the matrix below. It is possible to do all these computations by hand, and it would be instructive to do so.

B = \left [\array{ −12&30\cr −5 &13 } \right ]

 
Contributed by Robert Beezer Solution [1278]

C21 The matrix A below has λ = 2 as an eigenvalue. Find the geometric multiplicity of λ = 2 using your calculator only for row-reducing matrices.

A = \left [\array{ 18&−15& 33 &−15\cr −4 & 8 & −6 & 6 \cr −9& 9 &−16& 9\cr 5 & −6 & 9 & −4 } \right ]

 
Contributed by Robert Beezer Solution [1280]

C22 Without using a calculator, find the eigenvalues of the matrix B.

B = \left [\array{ 2&−1\cr 1& 1 } \right ]

 
Contributed by Robert Beezer Solution [1281]

C23 Find the eigenvalues, eigenspaces, algebraic and geometric multiplicities for A = \left [\array{ 1&1\cr 1&1 } \right ].  
Contributed by Chris Black Solution [1281]

C24 Find the eigenvalues, eigenspaces, algebraic and geometric multiplicities for A = \left [\array{ 1 &−1& 1\cr −1 & 1 &−1 \cr 1 &−1& 1 } \right ].  
Contributed by Chris Black Solution [1282]

C25 Find the eigenvalues, eigenspaces, algebraic and geometric multiplicities for the 3 × 3 identity matrix {I}_{3}. Do your results make sense?  
Contributed by Chris Black Solution [1282]

C26 For matrix A = \left [\array{ 2&1&1\cr 1&2 &1 \cr 1&1&2} \right ], the characteristic polynomial of A is {p}_{A}\left (x\right ) = (4 − x){(1 − x)}^{2}. Find the eigenvalues and corresponding eigenspaces of A.  
Contributed by Chris Black Solution [1282]

C27 For matrix A = \left [\array{ 0 &4&−1& 1\cr −2 &6 &−1 & 1 \cr −2&8&−1&−1\cr −2 &8 &−3 & 1 } \right ], the characteristic polynomial of A is

{p}_{A}\left (x\right ) = (x + 2){(x − 2)}^{2}(x − 4).

Find the eigenvalues and corresponding eigenspaces of A.  
Contributed by Chris Black Solution [1283]

M60 Repeat Example CAEHW by choosing x = \left [\array{ 0\cr 8 \cr 2\cr 1 \cr 2 } \right ] and then arrive at an eigenvalue and eigenvector of the matrix A. The hard way.  
Contributed by Robert Beezer Solution [1284]

T10 A matrix A is idempotent if {A}^{2} = A. Show that the only possible eigenvalues of an idempotent matrix are λ = 0 and λ = 1. Then give an example of a matrix that is idempotent and has both of these two values as eigenvalues.  
Contributed by Robert Beezer Solution [1285]

T15 The characteristic polynomial of the square matrix A is usually defined as {r}_{A}(x) =\mathop{ det} \left (x{I}_{n} − A\right ). Find a specific relationship between our characteristic polynomial, {p}_{A}\left (x\right ), and {r}_{A}(x), give a proof of your relationship, and use this to explain why Theorem EMRCP can remain essentially unchanged with either definition. Explain the advantages of each definition over the other. (Computing with both definitions, for a 2 × 2 and a 3 × 3 matrix, might be a good way to start.)  
Contributed by Robert Beezer Solution [1287]

T20 Suppose that λ and ρ are two different eigenvalues of the square matrix A. Prove that the intersection of the eigenspaces for these two eigenvalues is trivial. That is, {ℰ}_{A}\left (λ\right ) ∩{ℰ}_{A}\left (ρ\right ) = \left \{0\right \}.  
Contributed by Robert Beezer Solution [1288]

Subsection SOL: Solutions

C10 Contributed by Chris Black Statement [1272]
Answer: {p}_{A}\left (x\right ) = −2 − 5x + {x}^{2}

C11 Contributed by Chris Black Statement [1272]
Answer: {p}_{A}\left (x\right ) = −5 + 4{x}^{2} − {x}^{3}.

C12 Contributed by Chris Black Statement [1272]
Answer: {p}_{A}\left (x\right ) = 2 + 2x − 2{x}^{2} − 3{x}^{3} + {x}^{4}.

C19 Contributed by Robert Beezer Statement [1272]
First compute the characteristic polynomial,

\eqalignno{ {p}_{C}\left (x\right ) & =\mathop{ det} \left (C − x{I}_{2}\right ) & &\text{Definition CP} & & & & \cr & = \left \vert \array{ −1 − x& 2\cr −6 &6 − x } \right \vert & & & & \cr & = (−1 − x)(6 − x) − (2)(−6) & & & & \cr & = {x}^{2} − 5x + 6 & & & & \cr & = (x − 3)(x − 2) & & & & }

So the eigenvalues of C are the solutions to {p}_{C}\left (x\right ) = 0, namely, λ = 2 and λ = 3.

To obtain the eigenspaces, construct the appropriate singular matrices and find expressions for the null spaces of these matrices.

\eqalignno{ λ & = 2 & & \cr C − (2){I}_{2} & = \left [\array{ −3&2\cr −6 &4 } \right ]\mathop{\longrightarrow}\limits_{}^{\text{RREF}}\left [\array{ \text{1}&−{2\over 3} \cr 0& 0} \right ] & & \cr {ℰ}_{C}\left (2\right ) & = N\kern -1.95872pt \left (C − (2){I}_{2}\right ) = \left \langle \left \{\left [\array{ {2\over 3} \cr 1 } \right ]\right \}\right \rangle = \left \langle \left \{\left [\array{ 2\cr 3 } \right ]\right \}\right \rangle & & }

\eqalignno{ λ & = 3 & & \cr C − (3){I}_{2} & = \left [\array{ −4&2\cr −6 &3 } \right ]\mathop{\longrightarrow}\limits_{}^{\text{RREF}}\left [\array{ \text{1}&−{1\over 2} \cr 0& 0} \right ] & & \cr {ℰ}_{C}\left (3\right ) & = N\kern -1.95872pt \left (C − (3){I}_{2}\right ) = \left \langle \left \{\left [\array{ {1\over 2} \cr 1 } \right ]\right \}\right \rangle = \left \langle \left \{\left [\array{ 1\cr 2 } \right ]\right \}\right \rangle & & }

C20 Contributed by Robert Beezer Statement [1272]
The characteristic polynomial of B is

\eqalignno{ {p}_{B}\left (x\right ) & =\mathop{ det} \left (B − x{I}_{2}\right ) & &\text{Definition CP} & & & & \cr & = \left \vert \array{ −12 − x& 30\cr −5 &13 − x } \right \vert & & & & \cr & = (−12 − x)(13 − x) − (30)(−5) & &\text{Theorem DMST} & & & & \cr & = {x}^{2} − x − 6 & & & & \cr & = (x − 3)(x + 2) & & & & }

From this we find eigenvalues λ = 3,\kern 1.95872pt − 2 with algebraic multiplicities {α}_{B}\left (3\right ) = 1 and {α}_{B}\left (−2\right ) = 1.

For eigenvectors and geometric multiplicities, we study the null spaces of B − λ{I}_{2} (Theorem EMNS).

\eqalignno{ λ & = 3 &B − 3{I}_{2} & = \left [\array{ −15&30\cr −5 &10 } \right ]\mathop{\longrightarrow}\limits_{}^{\text{RREF}}\left [\array{ \text{1}&−2\cr 0& 0 } \right ] & & & & \cr & &{ℰ}_{B}\left (3\right ) & = N\kern -1.95872pt \left (B − 3{I}_{2}\right ) = \left \langle \left \{\left [\array{ 2\cr 1 } \right ]\right \}\right \rangle & & & & }

\eqalignno{ λ & = −2 &B + 2{I}_{2} & = \left [\array{ −10&30\cr −5 &15 } \right ]\mathop{\longrightarrow}\limits_{}^{\text{RREF}}\left [\array{ \text{1}&−3\cr 0& 0 } \right ] & & & & \cr & &{ℰ}_{B}\left (−2\right ) & = N\kern -1.95872pt \left (B + 2{I}_{2}\right ) = \left \langle \left \{\left [\array{ 3\cr 1 } \right ]\right \}\right \rangle & & & & }
Each eigenspace has dimension one, so we have geometric multiplicities {γ}_{B}\left (3\right ) = 1 and {γ}_{B}\left (−2\right ) = 1.

C21 Contributed by Robert Beezer Statement [1273]
If λ = 2 is an eigenvalue of A, the matrix A − 2{I}_{4} will be singular, and its null space will be the eigenspace of A. So we form this matrix and row-reduce,

A−2{I}_{4} = \left [\array{ 16&−15& 33 &−15\cr −4 & 6 & −6 & 6 \cr −9& 9 &−18& 9\cr 5 & −6 & 9 & −6 } \right ]\mathop{\longrightarrow}\limits_{}^{\text{RREF}}\left [\array{ \text{1}&0&3&0\cr 0&\text{1 } &1 &1 \cr 0&0&0&0\cr 0&0 &0 &0 } \right ]

With two free variables, we know a basis of the null space (Theorem BNS) will contain two vectors. Thus the null space of A − 2{I}_{4} has dimension two, and so the eigenspace of λ = 2 has dimension two also (Theorem EMNS), {γ}_{A}\left (2\right ) = 2.

C22 Contributed by Robert Beezer Statement [1273]
The characteristic polynomial (Definition CP) is

\eqalignno{ {p}_{B}\left (x\right ) & =\mathop{ det} \left (B − x{I}_{2}\right ) & & & & \cr & = \left \vert \array{ 2 − x& −1\cr 1 &1 − x } \right \vert & & & & \cr & = (2 − x)(1 − x) − (1)(−1) & &\text{Theorem DMST} & & & & \cr & = {x}^{2} − 3x + 3 & & & & \cr & = \left (x −{3 + \sqrt{3}i\over 2} \right )\left (x −{3 −\sqrt{3}i\over 2} \right ) & & & & }

where the factorization can be obtained by finding the roots of {p}_{B}\left (x\right ) = 0 with the quadratic equation. By Theorem EMRCP the eigenvalues of B are the complex numbers {λ}_{1} = {3+\sqrt{3}i\over 2} and {λ}_{2} = {3−\sqrt{3}i\over 2} .

C23 Contributed by Chris Black Statement [1274]

Eigenvalue: λ = 0.  Eigenspace: {ℰ}_{A}\left (0\right ) = \left \langle \left [\array{ −1\cr 1 } \right ]\right \rangle .  Algebraic multiplicity: {α}_{A}\left (0\right ) = 1.  Geometric multiplicity: {γ}_{A}\left (0\right ) = 1.

Eigenvalue: λ = 2.  Eigenspace: {ℰ}_{A}\left (2\right ) = \left \langle \left [\array{ 1\cr 1 } \right ]\right \rangle .  Algebraic multiplicity: {α}_{A}\left (2\right ) = 1.  Geometric multiplicity: {γ}_{A}\left (2\right ) = 1.

C24 Contributed by Chris Black Statement [1274]

Eigenvalue: λ = 0.  Eigenspace: {ℰ}_{A}\left (0\right ) = \left \langle \left [\array{ 1\cr 1 \cr 0 } \right ],\left [\array{ −1\cr 0 \cr 1 } \right ]\right \rangle .  Algebraic multiplicity: {α}_{A}\left (0\right ) = 2.  Geometric multiplicity: {γ}_{A}\left (0\right ) = 2.

Eigenvalue: λ = 3.  Eigenspace: {ℰ}_{A}\left (3\right ) = \left \langle \left [\array{ 1\cr −1 \cr 1 } \right ]\right \rangle .  Algebraic multiplicity: {α}_{A}\left (3\right ) = 1.  Geometric multiplicity: {γ}_{A}\left (3\right ) = 1.

C25 Contributed by Chris Black Statement [1274]
The characteristic polynomial for A = {I}_{3} is {p}_{{I}_{3}}\left (x\right ) = {(1 − x)}^{3}, which has eigenvalue λ = 1 with algebraic multiplicity {α}_{A}\left (1\right ) = 3. Looking for eigenvectors, we find that A − λ{I}_{3} = \left [\array{ 0&0&0\cr 0&0 &0 \cr 0&0&0} \right ]. The null space of this matrix is all of {ℂ}^{3}, so the eigenspace is {ℰ}_{{I}_{3}}\left (1\right ) = \left \langle \left [\array{ 1\cr 0 \cr 0 } \right ],\left [\array{ 0\cr 1 \cr 0 } \right ],\left [\array{ 0\cr 0 \cr 1 } \right ]\right \rangle , and the geometric multiplicity is {γ}_{A}\left (1\right ) = 3.
Does this make sense? Yes! Every vector x is a solution to {I}_{3}x = 1x, so every nonzero vector is an eigenvector with eigenvalue 1. Since every vector is unchanged when multiplied by {I}_{3}, it makes sense that λ = 1 is the only eigenvalue.
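As a quick check with sympy (note that sympy's charpoly uses the monic convention \mathop{det}\left (x{I}_{n} − A\right ), the polynomial {r}_{A}(x) of exercise T15 below, so it reports {(x − 1)}^{3} rather than {(1 − x)}^{3}):

    import sympy as sp

    x = sp.symbols('x')
    I3 = sp.eye(3)

    print(sp.factor(I3.charpoly(x).as_expr()))   # (x - 1)**3
    print(I3.eigenvects())   # [(1, 3, [e1, e2, e3])]: alpha = gamma = 3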

C26 Contributed by Chris Black Statement [1274]
Since we are given that the characteristic polynomial of A is {p}_{A}\left (x\right ) = (4 − x){(1 − x)}^{2}, we see that the eigenvalues are λ = 4 with algebraic multiplicity {α}_{A}\left (4\right ) = 1 and λ = 1 with algebraic multiplicity {α}_{A}\left (1\right ) = 2. The corresponding eigenspaces are

\eqalignno{ {ℰ}_{A}\left (4\right ) & = \left \langle \left [\array{ 1\cr 1 \cr 1 } \right ]\right \rangle &{ℰ}_{A}\left (1\right ) & = \left \langle \left [\array{ 1\cr −1 \cr 0 } \right ],\left [\array{ 1\cr 0 \cr −1 } \right ]\right \rangle & & & & }

C27 Contributed by Chris Black Statement [1274]
Since we are given that the characteristic polynomial of A is {p}_{A}\left (x\right ) = (x + 2){(x − 2)}^{2}(x − 4), we see that the eigenvalues are λ = −2, λ = 2 and λ = 4. The eigenspaces are

\eqalignno{ {ℰ}_{A}\left (−2\right ) & = \left \langle \left [\array{ 0\cr 0 \cr 1\cr 1 } \right ]\right \rangle & & \cr {ℰ}_{A}\left (2\right ) & = \left \langle \left [\array{ 1\cr 1 \cr 2\cr 0 } \right ],\left [\array{ 3\cr 1 \cr 0\cr 2 } \right ]\right \rangle & & \cr {ℰ}_{A}\left (4\right ) & = \left \langle \left [\array{ 1\cr 1 \cr 1\cr 1 } \right ]\right \rangle & & }

M60 Contributed by Robert Beezer Statement [1275]
Form the matrix C whose columns are x,\kern 1.95872pt Ax,\kern 1.95872pt {A}^{2}x,\kern 1.95872pt {A}^{3}x,\kern 1.95872pt {A}^{4}x,\kern 1.95872pt {A}^{5}x and row-reduce the matrix,

\left [\array{ 0& 6 & 32 & 102 & 320 & 966\cr 8& 10 & 24 & 58 & 168 & 490 \cr 2&12& 50 & 156 & 482 & 1452\cr 1&−5 &−47 &−149 &−479 &−1445 \cr 2&12& 50 & 156 & 482 & 1452 } \right ]\mathop{\longrightarrow}\limits_{}^{\text{RREF}}\left [\array{ \text{1}&0&0&−3&−9&−30\cr 0&\text{1 } &0 & 1 & 0 & 1 \cr 0&0&\text{1}& 3 &10& 30\cr 0&0 &0 & 0 & 0 & 0 \cr 0&0&0& 0 & 0 & 0 } \right ]

The simplest possible relation of linear dependence on the columns of C comes from using scalars {α}_{4} = 1 and {α}_{5} = {α}_{6} = 0 for the free variables in a solution to ℒS\kern -1.95872pt \left (C,\kern 1.95872pt 0\right ). The remainder of this solution is {α}_{1} = 3, {α}_{2} = −1, {α}_{3} = −3. This solution gives rise to the polynomial

p(x) = 3 − x − 3{x}^{2} + {x}^{3} = (x − 3)(x − 1)(x + 1)

which then has the property that p(A)x = 0.
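Both the relation of linear dependence and the factorization can be verified directly from the columns displayed above; a minimal sympy sketch:

    import sympy as sp

    # The matrix C = [x | Ax | A^2 x | A^3 x | A^4 x | A^5 x] as displayed above
    C = sp.Matrix([[0,  6,  32,  102,  320,   966],
                   [8, 10,  24,   58,  168,   490],
                   [2, 12,  50,  156,  482,  1452],
                   [1, -5, -47, -149, -479, -1445],
                   [2, 12,  50,  156,  482,  1452]])

    alpha = sp.Matrix([3, -1, -3, 1, 0, 0])   # the solution chosen above
    print((C * alpha).T)                      # the zero vector, as promised

    t = sp.symbols('x')
    print(sp.factor(3 - t - 3*t**2 + t**3))   # (x - 3)*(x - 1)*(x + 1)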

No matter how you choose to order the factors of p(x), the value of k (in the language of Theorem EMHE and Example CAEHW) is k = 2. For each of the three possibilities, we list the resulting eigenvector and the associated eigenvalue:

\eqalignno{ (A − 3{I}_{5})(A − {I}_{5})x & = \left [\array{ 8\cr 8 \cr 8\cr −24 \cr 8 } \right ] &λ & = −1 & & & & \cr (A − 3{I}_{5})(A + {I}_{5})x & = \left [\array{ 20\cr −20 \cr 20\cr −40 \cr 20} \right ] &λ & = 1 & & & & \cr (A + {I}_{5})(A − {I}_{5})x & = \left [\array{ 32\cr 16 \cr 48\cr −48 \cr 48} \right ] &λ & = 3 & & & & }

Note that each of these eigenvectors can be simplified by an appropriate scalar multiple, but we have shown here the actual vector obtained by the product specified in the theorem.

T10 Contributed by Robert Beezer Statement [1275]
Suppose that λ is an eigenvalue of A. Then there is an eigenvector x such that Ax = λx. We have,

\eqalignno{ λx & = Ax & &\text{$x$ eigenvector of $A$} & & & & \cr & = {A}^{2}x & &\text{$A$ is idempotent} & & & & \cr & = A(Ax) & & & & \cr & = A(λx) & &\text{$x$ eigenvector of $A$} & & & & \cr & = λ(Ax) & &\text{Theorem MMSMM} & & & & \cr & = λ(λx) & &\text{$x$ eigenvector of $A$} & & & & \cr & = {λ}^{2}x & & & & }

From this we get

\eqalignno{ 0 & = {λ}^{2}x − λx & & & & \cr & = \left ({λ}^{2} − λ\right )x & &\text{Property DSAC} & & & & }

Since x is an eigenvector, it is nonzero, and Theorem SMEZV leaves us with the conclusion that {λ}^{2} − λ = 0, and the solutions to this quadratic polynomial equation in λ are λ = 0 and λ = 1.

The matrix

\left [\array{ 1&0\cr 0&0 } \right ]

is idempotent (check this!), and since it is a diagonal matrix, its eigenvalues are the diagonal entries, λ = 0 and λ = 1. So each of the possible values for an eigenvalue of an idempotent matrix actually occurs as an eigenvalue of some idempotent matrix. Thus we cannot state any stronger conclusion about the eigenvalues of an idempotent matrix, and this theorem is the “best possible.”
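The claims about this matrix are easy to verify mechanically; a minimal sympy sketch:

    import sympy as sp

    E = sp.Matrix([[1, 0], [0, 0]])
    print(E * E == E)       # True, so E is idempotent
    print(E.eigenvals())    # {0: 1, 1: 1}: both permitted eigenvalues occur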

T15 Contributed by Robert Beezer Statement [1275]
Note in the following that multiplying a matrix by a scalar multiplies each of its rows by that scalar, so we actually apply Theorem DRCM n times below (and pass up an opportunity for a proof by induction in the process, which perhaps you would like to supply yourself?).

\eqalignno{ {p}_{A}\left (x\right ) & =\mathop{ det} \left (A − x{I}_{n}\right ) & &\text{Definition CP} & & & & \cr & =\mathop{ det} \left ((−1)(x{I}_{n} − A)\right ) & &\text{Definition MSM} & & & & \cr & = {(−1)}^{n}\mathop{ det} \left (x{I}_{ n} − A\right ) & &\text{Theorem DRCM} & & & & \cr & = {(−1)}^{n}{r}_{ A}(x) & & & & }

Since the polynomials are scalar multiples of each other, their roots will be identical, so either polynomial could be used in Theorem EMRCP.

When computing by hand, our definition of the characteristic polynomial is easier to use, as you only need to subtract x down the diagonal of the matrix before computing the determinant. The price to be paid is that for odd values of n the coefficient of {x}^{n} in {p}_{A}\left (x\right ) is −1, while {r}_{A}(x) always has coefficient 1 on {x}^{n} (we say {r}_{A}(x) is “monic”).
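The relation {p}_{A}\left (x\right ) = {(−1)}^{n}{r}_{A}(x) can be spot-checked with sympy, whose charpoly computes the monic polynomial {r}_{A}(x) =\mathop{det}\left (x{I}_{n} − A\right ); the 3 × 3 matrix below is an arbitrary sample, not one from the text.

    import sympy as sp

    x = sp.symbols('x')
    n = 3
    A = sp.Matrix([[2, -1, 0], [1, 1, 3], [0, 4, 5]])   # arbitrary sample

    p = sp.det(A - x * sp.eye(n))       # our definition of p_A(x)
    r = A.charpoly(x).as_expr()         # sympy's monic r_A(x)
    print(sp.expand(p - (-1)**n * r))   # 0, confirming the relation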

T20 Contributed by Robert Beezer Statement [1276]
This problem asks you to prove that two sets are equal, so use Definition SE.

First show that \left \{0\right \} ⊆{ℰ}_{A}\left (λ\right ) ∩{ℰ}_{A}\left (ρ\right ). Choose x ∈\left \{0\right \}. Then x = 0. Eigenspaces are subspaces (Theorem EMS), so both {ℰ}_{A}\left (λ\right ) and {ℰ}_{A}\left (ρ\right ) contain the zero vector, and therefore x ∈{ℰ}_{A}\left (λ\right ) ∩{ℰ}_{A}\left (ρ\right ) (Definition SI).

To show that {ℰ}_{A}\left (λ\right ) ∩{ℰ}_{A}\left (ρ\right ) ⊆\left \{0\right \}, suppose that x ∈{ℰ}_{A}\left (λ\right ) ∩{ℰ}_{A}\left (ρ\right ). Then x is an eigenvector of A for both λ and ρ (Definition SI) and so

\eqalignno{ x & = 1x & &\text{Property O} & & & & \cr & = {1\over λ − ρ}\left (λ − ρ\right )x & &λ\mathrel{≠}ρ,\ λ − ρ\mathrel{≠}0 & & & & \cr & = {1\over λ − ρ}\left (λx − ρx\right ) & &\text{Property DSAC} & & & & \cr & = {1\over λ − ρ}\left (Ax − Ax\right ) & &\text{$x$ eigenvector of $A$ for $λ$, $ρ$} & & & & \cr & = {1\over λ − ρ}\left (0\right ) & & & & \cr & = 0 & &\text{Theorem ZVSM} & & & & }

So x = 0, and trivially, x ∈\left \{0\right \}. With both set inclusions established, Definition SE gives {ℰ}_{A}\left (λ\right ) ∩{ℰ}_{A}\left (ρ\right ) = \left \{0\right \}.
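For a concrete illustration, a vector in both eigenspaces must solve both homogeneous systems simultaneously, so stacking the two shifted matrices and computing a null space finds the intersection; the sample matrix below is an assumption chosen only for the demonstration.

    import sympy as sp

    A = sp.Matrix([[1, 1], [1, 1]])   # sample matrix with eigenvalues 0 and 2
    lam, rho = 0, 2

    # Null space of the stacked system is E_A(lam) intersect E_A(rho)
    stacked = sp.Matrix.vstack(A - lam * sp.eye(2), A - rho * sp.eye(2))
    print(stacked.nullspace())        # []: only the zero vector is common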