### Archetype L

From A First Course in Linear Algebra
Version 2.99
http://linear.ups.edu/

Summary  Square matrix of size 5. Singular, nullity 2. 2 distinct eigenvalues, each of “high” multiplicity.

A matrix:

\[
\begin{bmatrix}
-2 & -1 & -2 & -4 & 4 \\
-6 & -5 & -4 & -4 & 6 \\
10 & 7 & 7 & 10 & -13 \\
-7 & -5 & -6 & -9 & 10 \\
-4 & -3 & -4 & -6 & 6
\end{bmatrix}
\]

Matrix brought to reduced row-echelon form:

\[
\begin{bmatrix}
1 & 0 & 0 & 1 & -2 \\
0 & 1 & 0 & -2 & 2 \\
0 & 0 & 1 & 2 & -1 \\
0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0
\end{bmatrix}
\]
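As a computational check, the row reduction above can be reproduced with exact rational arithmetic. The sketch below is a standalone Python implementation of Gauss-Jordan elimination using only the standard library's `fractions` module; the function name `rref` and variable names are ours, not notation from the text.

```python
from fractions import Fraction

def rref(M):
    """Return the reduced row-echelon form of M (a list of rows),
    computed by Gauss-Jordan elimination over the rationals."""
    M = [[Fraction(x) for x in row] for row in M]
    nrows, ncols = len(M), len(M[0])
    lead = 0                      # column of the next pivot
    for r in range(nrows):
        if lead >= ncols:
            break
        i = r
        while M[i][lead] == 0:    # search downward for a nonzero pivot
            i += 1
            if i == nrows:
                i = r
                lead += 1         # column has no pivot; move right
                if lead == ncols:
                    return M
        M[i], M[r] = M[r], M[i]   # swap the pivot row into place
        pivot = M[r][lead]
        M[r] = [x / pivot for x in M[r]]          # scale pivot to 1
        for j in range(nrows):                    # clear the pivot column
            if j != r and M[j][lead] != 0:
                f = M[j][lead]
                M[j] = [a - f * b for a, b in zip(M[j], M[r])]
        lead += 1
    return M

# The matrix of this archetype.
A = [[-2, -1, -2, -4,   4],
     [-6, -5, -4, -4,   6],
     [10,  7,  7, 10, -13],
     [-7, -5, -6, -9,  10],
     [-4, -3, -4, -6,   6]]
```

Running `rref(A)` reproduces the matrix displayed above, with leading 1s in columns 1, 2, and 3.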

Analysis of the row-reduced matrix (Notation RREFA):

\[
r = 3 \qquad D = \left\{1,\, 2,\, 3\right\} \qquad F = \left\{4,\, 5\right\}
\]

Is the matrix (as a coefficient matrix) nonsingular or singular? (Theorem NMRRI) At the same time, examine the size of the set F above. Notice that this property does not apply to matrices that are not square.
Singular.

This is the null space of the matrix. The set of vectors used in the span construction is a linearly independent set of column vectors that spans the null space of the matrix (Theorem SSNS, Theorem BNS). Solve the homogeneous system with this matrix as the coefficient matrix and write the solutions in vector form (Theorem VFSLS) to see these vectors arise.
\[
\left\langle \left\{
\begin{bmatrix} -1 \\ 2 \\ -2 \\ 1 \\ 0 \end{bmatrix},\,
\begin{bmatrix} 2 \\ -2 \\ 1 \\ 0 \\ 1 \end{bmatrix}
\right\} \right\rangle
\]
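One can verify directly that each spanning vector above solves the homogeneous system, as can any linear combination of them. A minimal stdlib-only Python check (variable names `A`, `z1`, `z2` are ours):

```python
# The matrix of this archetype and the two null space basis vectors above.
A = [[-2, -1, -2, -4,   4],
     [-6, -5, -4, -4,   6],
     [10,  7,  7, 10, -13],
     [-7, -5, -6, -9,  10],
     [-4, -3, -4, -6,   6]]

def matvec(M, v):
    """Matrix-vector product, computed row by row."""
    return [sum(m * x for m, x in zip(row, v)) for row in M]

z1 = [-1, 2, -2, 1, 0]
z2 = [2, -2, 1, 0, 1]

# Any linear combination of solutions to A x = 0 is again a solution,
# e.g. 3*z1 - 2*z2.
combo = [3 * a - 2 * b for a, b in zip(z1, z2)]
```

Each of `matvec(A, z1)`, `matvec(A, z2)`, and `matvec(A, combo)` is the zero vector.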

Column space of the matrix, expressed as the span of a set of linearly independent vectors that are also columns of the matrix. These columns have indices that form the set D above. (Theorem BCS)
\[
\left\langle \left\{
\begin{bmatrix} -2 \\ -6 \\ 10 \\ -7 \\ -4 \end{bmatrix},\,
\begin{bmatrix} -1 \\ -5 \\ 7 \\ -5 \\ -3 \end{bmatrix},\,
\begin{bmatrix} -2 \\ -4 \\ 7 \\ -6 \\ -4 \end{bmatrix}
\right\} \right\rangle
\]

The column space of the matrix, as it arises from the extended echelon form of the matrix. The matrix L is computed as described in Definition EEF. This is followed by the column space, described by a set of linearly independent vectors that spans the null space of L, computed according to Theorem FS and Theorem BNS. When r = m, the matrix L has no rows and the column space is all of ℂ^m.
\[
L = \begin{bmatrix}
1 & 0 & -2 & -6 & 5 \\
0 & 1 & 4 & 10 & -9
\end{bmatrix}
\]
\[
\left\langle \left\{
\begin{bmatrix} -5 \\ 9 \\ 0 \\ 0 \\ 1 \end{bmatrix},\,
\begin{bmatrix} 6 \\ -10 \\ 0 \\ 1 \\ 0 \end{bmatrix},\,
\begin{bmatrix} 2 \\ -4 \\ 1 \\ 0 \\ 0 \end{bmatrix}
\right\} \right\rangle
\]
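Theorem FS says the column space of the matrix equals the null space of L, so two things can be checked mechanically: L annihilates each of the three spanning vectors above, and L annihilates every column of the original matrix. A stdlib-only Python sketch (names are ours):

```python
# L from the extended echelon form, and the claimed spanning set of N(L).
L = [[1, 0, -2, -6,  5],
     [0, 1,  4, 10, -9]]

A = [[-2, -1, -2, -4,   4],
     [-6, -5, -4, -4,   6],
     [10,  7,  7, 10, -13],
     [-7, -5, -6, -9,  10],
     [-4, -3, -4, -6,   6]]

def matvec(M, v):
    return [sum(m * x for m, x in zip(row, v)) for row in M]

basis = [[-5, 9, 0, 0, 1], [6, -10, 0, 1, 0], [2, -4, 1, 0, 0]]
cols = [list(c) for c in zip(*A)]   # the five columns of A
```

Applying `matvec(L, ...)` to each vector in `basis` and each vector in `cols` yields the zero vector in ℂ^2 every time.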

Column space of the matrix, expressed as the span of a set of linearly independent vectors. These vectors are computed by row-reducing the transpose of the matrix into reduced row-echelon form, tossing out the zero rows, and writing the remaining nonzero rows as column vectors. By Theorem CSRST and Theorem BRS, and in the style of Example CSROI, this yields a linearly independent set of vectors that span the column space.
\[
\left\langle \left\{
\begin{bmatrix} 1 \\ 0 \\ 0 \\ \frac{9}{4} \\ \frac{5}{2} \end{bmatrix},\,
\begin{bmatrix} 0 \\ 1 \\ 0 \\ \frac{5}{4} \\ \frac{3}{2} \end{bmatrix},\,
\begin{bmatrix} 0 \\ 0 \\ 1 \\ \frac{1}{2} \\ 1 \end{bmatrix}
\right\} \right\rangle
\]
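Since Theorem FS identifies the column space with the null space of L, membership of these three fractional vectors in the column space can be double-checked by applying L to each. A stdlib-only Python sketch with exact fractions (names are ours):

```python
from fractions import Fraction as F

# L from the extended echelon form section above.
L = [[1, 0, -2, -6,  5],
     [0, 1,  4, 10, -9]]

def matvec(M, v):
    return [sum(m * x for m, x in zip(row, v)) for row in M]

# The three column space vectors above, with exact fractional entries.
u1 = [F(1), F(0), F(0), F(9, 4), F(5, 2)]
u2 = [F(0), F(1), F(0), F(5, 4), F(3, 2)]
u3 = [F(0), F(0), F(1), F(1, 2), F(1)]
```

`matvec(L, u)` is the zero vector for each of the three, confirming they lie in the column space.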

Row space of the matrix, expressed as a span of a set of linearly independent vectors, obtained from the nonzero rows of the equivalent matrix in reduced row-echelon form. (Theorem BRS)
\[
\left\langle \left\{
\begin{bmatrix} 1 \\ 0 \\ 0 \\ 1 \\ -2 \end{bmatrix},\,
\begin{bmatrix} 0 \\ 1 \\ 0 \\ -2 \\ 2 \end{bmatrix},\,
\begin{bmatrix} 0 \\ 0 \\ 1 \\ 2 \\ -1 \end{bmatrix}
\right\} \right\rangle
\]

Inverse matrix, if it exists. The inverse is not defined for matrices that are not square, and if the matrix is square, then the matrix must be nonsingular. (Definition MI, Theorem NI) This matrix is singular, so it has no inverse.

Subspace dimensions associated with the matrix. (Definition NOM, Definition ROM) Verify Theorem RPNC: the rank and the nullity must sum to the number of columns.

\[
\text{Matrix columns: } 5 \qquad \text{Rank: } 3 \qquad \text{Nullity: } 2
\]

Determinant of the matrix, which is only defined for square matrices. The matrix is nonsingular if and only if the determinant is nonzero (Theorem SMZD). (Product of all eigenvalues?)
\[
\text{Determinant} = 0
\]
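The determinant can be recomputed from first principles with integer arithmetic. The sketch below uses Laplace (cofactor) expansion along the first row, which is fine for a single 5 × 5 matrix though exponential in general; the function name `det` is ours.

```python
def det(M):
    """Determinant by Laplace (cofactor) expansion along the first row."""
    n = len(M)
    if n == 1:
        return M[0][0]
    total = 0
    for j in range(n):
        # Minor: delete row 0 and column j.
        minor = [row[:j] + row[j + 1:] for row in M[1:]]
        total += (-1) ** j * M[0][j] * det(minor)
    return total

A = [[-2, -1, -2, -4,   4],
     [-6, -5, -4, -4,   6],
     [10,  7,  7, 10, -13],
     [-7, -5, -6, -9,  10],
     [-4, -3, -4, -6,   6]]
```

`det(A)` returns 0, consistent with the matrix being singular (Theorem SMZD) and with the product of the eigenvalues below.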

Eigenvalues, and bases for eigenspaces. (Definition EEM, Definition EM)

\[
\begin{aligned}
\lambda &= -1 & \mathcal{E}_{L}(-1) &= \left\langle \left\{
\begin{bmatrix} -5 \\ 9 \\ 0 \\ 0 \\ 1 \end{bmatrix},\,
\begin{bmatrix} 6 \\ -10 \\ 0 \\ 1 \\ 0 \end{bmatrix},\,
\begin{bmatrix} 2 \\ -4 \\ 1 \\ 0 \\ 0 \end{bmatrix}
\right\} \right\rangle \\
\lambda &= 0 & \mathcal{E}_{L}(0) &= \left\langle \left\{
\begin{bmatrix} 2 \\ -2 \\ 1 \\ 0 \\ 1 \end{bmatrix},\,
\begin{bmatrix} -1 \\ 2 \\ -2 \\ 1 \\ 0 \end{bmatrix}
\right\} \right\rangle
\end{aligned}
\]
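Each basis vector above can be checked against Definition EEM directly: multiplying by the matrix must scale the vector by its eigenvalue. A stdlib-only Python check (names are ours):

```python
A = [[-2, -1, -2, -4,   4],
     [-6, -5, -4, -4,   6],
     [10,  7,  7, 10, -13],
     [-7, -5, -6, -9,  10],
     [-4, -3, -4, -6,   6]]

def matvec(M, v):
    return [sum(m * x for m, x in zip(row, v)) for row in M]

# (eigenvalue, eigenvector) pairs from the eigenspace bases above.
eigenpairs = [
    (-1, [-5, 9, 0, 0, 1]),
    (-1, [6, -10, 0, 1, 0]),
    (-1, [2, -4, 1, 0, 0]),
    (0,  [2, -2, 1, 0, 1]),
    (0,  [-1, 2, -2, 1, 0]),
]
```

For every pair, `matvec(A, v)` equals `[lam * x for x in v]`, i.e. A v = λ v.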

Geometric and algebraic multiplicities. (Definition GME, Definition AME)

\[
\begin{aligned}
\gamma_{L}(-1) &= 3 & \alpha_{L}(-1) &= 3 \\
\gamma_{L}(0) &= 2 & \alpha_{L}(0) &= 2
\end{aligned}
\]

Diagonalizable? (Definition DZM)
Yes, full eigenspaces, Theorem DMFE.

The diagonalization. (Theorem DC)

\[
\begin{bmatrix}
4 & 3 & 4 & 6 & -6 \\
7 & 5 & 6 & 9 & -10 \\
-10 & -7 & -7 & -10 & 13 \\
-4 & -3 & -4 & -6 & 7 \\
-7 & -5 & -6 & -8 & 10
\end{bmatrix}
\begin{bmatrix}
-2 & -1 & -2 & -4 & 4 \\
-6 & -5 & -4 & -4 & 6 \\
10 & 7 & 7 & 10 & -13 \\
-7 & -5 & -6 & -9 & 10 \\
-4 & -3 & -4 & -6 & 6
\end{bmatrix}
\begin{bmatrix}
-5 & 6 & 2 & 2 & -1 \\
9 & -10 & -4 & -2 & 2 \\
0 & 0 & 1 & 1 & -2 \\
0 & 1 & 0 & 0 & 1 \\
1 & 0 & 0 & 1 & 0
\end{bmatrix}
= \begin{bmatrix}
-1 & 0 & 0 & 0 & 0 \\
0 & -1 & 0 & 0 & 0 \\
0 & 0 & -1 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0
\end{bmatrix}
\]
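The diagonalization can be confirmed by brute-force multiplication. In the stdlib-only Python sketch below, `Sinv`, `S`, and `D` are our labels for the leftmost matrix, the rightmost factor (whose columns are the eigenvectors), and the diagonal result; the check also confirms that `Sinv` really is the inverse of `S`.

```python
def matmul(X, Y):
    """Product of two matrices given as lists of rows."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*Y)]
            for row in X]

Sinv = [[  4,  3,  4,   6,  -6],
        [  7,  5,  6,   9, -10],
        [-10, -7, -7, -10,  13],
        [ -4, -3, -4,  -6,   7],
        [ -7, -5, -6,  -8,  10]]

A = [[-2, -1, -2, -4,   4],
     [-6, -5, -4, -4,   6],
     [10,  7,  7, 10, -13],
     [-7, -5, -6, -9,  10],
     [-4, -3, -4, -6,   6]]

S = [[-5,   6,  2,  2, -1],
     [ 9, -10, -4, -2,  2],
     [ 0,   0,  1,  1, -2],
     [ 0,   1,  0,  0,  1],
     [ 1,   0,  0,  1,  0]]

# Expected diagonal matrix of eigenvalues, ordered to match S's columns.
D = [[-1, 0, 0, 0, 0],
     [0, -1, 0, 0, 0],
     [0, 0, -1, 0, 0],
     [0, 0, 0, 0, 0],
     [0, 0, 0, 0, 0]]

I = [[1 if i == j else 0 for j in range(5)] for i in range(5)]
```

`matmul(Sinv, S)` returns the identity, and `matmul(matmul(Sinv, A), S)` returns `D`.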