\(\newcommand{\orderof}[1]{\sim #1} \newcommand{\Z}{\mathbb{Z}} \newcommand{\reals}{\mathbb{R}} \newcommand{\real}[1]{\mathbb{R}^{#1}} \newcommand{\complexes}{\mathbb{C}} \newcommand{\complex}[1]{\mathbb{C}^{#1}} \newcommand{\conjugate}[1]{\overline{#1}} \newcommand{\modulus}[1]{\left\lvert#1\right\rvert} \newcommand{\zerovector}{\vect{0}} \newcommand{\zeromatrix}{\mathcal{O}} \newcommand{\innerproduct}[2]{\left\langle#1,\,#2\right\rangle} \newcommand{\norm}[1]{\left\lVert#1\right\rVert} \newcommand{\dimension}[1]{\dim\left(#1\right)} \newcommand{\nullity}[1]{n\left(#1\right)} \newcommand{\rank}[1]{r\left(#1\right)} \newcommand{\ds}{\oplus} \newcommand{\detname}[1]{\det\left(#1\right)} \newcommand{\detbars}[1]{\left\lvert#1\right\rvert} \newcommand{\trace}[1]{t\left(#1\right)} \newcommand{\sr}[1]{#1^{1/2}} \newcommand{\spn}[1]{\left\langle#1\right\rangle} \newcommand{\nsp}[1]{\mathcal{N}\!\left(#1\right)} \newcommand{\csp}[1]{\mathcal{C}\!\left(#1\right)} \newcommand{\rsp}[1]{\mathcal{R}\!\left(#1\right)} \newcommand{\lns}[1]{\mathcal{L}\!\left(#1\right)} \newcommand{\per}[1]{#1^\perp} \newcommand{\augmented}[2]{\left\lbrack\left.#1\,\right\rvert\,#2\right\rbrack} \newcommand{\linearsystem}[2]{\mathcal{LS}\!\left(#1,\,#2\right)} \newcommand{\homosystem}[1]{\linearsystem{#1}{\zerovector}} \newcommand{\rowopswap}[2]{R_{#1}\leftrightarrow R_{#2}} \newcommand{\rowopmult}[2]{#1R_{#2}} \newcommand{\rowopadd}[3]{#1R_{#2}+R_{#3}} \newcommand{\leading}[1]{\boxed{#1}} \newcommand{\rref}{\xrightarrow{\text{RREF}}} \newcommand{\elemswap}[2]{E_{#1,#2}} \newcommand{\elemmult}[2]{E_{#2}\left(#1\right)} \newcommand{\elemadd}[3]{E_{#2,#3}\left(#1\right)} \newcommand{\scalarlist}[2]{{#1}_{1},\,{#1}_{2},\,{#1}_{3},\,\ldots,\,{#1}_{#2}} \newcommand{\vect}[1]{\mathbf{#1}} \newcommand{\colvector}[1]{\begin{bmatrix}#1\end{bmatrix}} \newcommand{\vectorcomponents}[2]{\colvector{#1_{1}\\#1_{2}\\#1_{3}\\\vdots\\#1_{#2}}} \newcommand{\vectorlist}[2]{\vect{#1}_{1},\,\vect{#1}_{2},\,\vect{#1}_{3},\,\ldots,\,\vect{#1}_{#2}} \newcommand{\vectorentry}[2]{\left\lbrack#1\right\rbrack_{#2}} \newcommand{\matrixentry}[2]{\left\lbrack#1\right\rbrack_{#2}} \newcommand{\lincombo}[3]{#1_{1}\vect{#2}_{1}+#1_{2}\vect{#2}_{2}+#1_{3}\vect{#2}_{3}+\cdots +#1_{#3}\vect{#2}_{#3}} \newcommand{\matrixcolumns}[2]{\left\lbrack\vect{#1}_{1}|\vect{#1}_{2}|\vect{#1}_{3}|\ldots|\vect{#1}_{#2}\right\rbrack} \newcommand{\transpose}[1]{#1^{t}} \newcommand{\inverse}[1]{#1^{-1}} \newcommand{\submatrix}[3]{#1\left(#2|#3\right)} \newcommand{\adj}[1]{\transpose{\left(\conjugate{#1}\right)}} \newcommand{\adjoint}[1]{#1^\ast} \newcommand{\set}[1]{\left\{#1\right\}} \newcommand{\setparts}[2]{\left\lbrace#1\,\middle|\,#2\right\rbrace} \newcommand{\card}[1]{\left\lvert#1\right\rvert} \newcommand{\setcomplement}[1]{\overline{#1}} \newcommand{\charpoly}[2]{p_{#1}\left(#2\right)} \newcommand{\eigenspace}[2]{\mathcal{E}_{#1}\left(#2\right)} \newcommand{\eigensystem}[3]{\lambda&=#2&\eigenspace{#1}{#2}&=\spn{\set{#3}}} \newcommand{\geneigenspace}[2]{\mathcal{G}_{#1}\left(#2\right)} \newcommand{\algmult}[2]{\alpha_{#1}\left(#2\right)} \newcommand{\geomult}[2]{\gamma_{#1}\left(#2\right)} \newcommand{\indx}[2]{\iota_{#1}\left(#2\right)} \newcommand{\ltdefn}[3]{#1\colon #2\rightarrow#3} \newcommand{\lteval}[2]{#1\left(#2\right)} \newcommand{\ltinverse}[1]{#1^{-1}} \newcommand{\restrict}[2]{{#1}|_{#2}} \newcommand{\preimage}[2]{#1^{-1}\left(#2\right)} \newcommand{\rng}[1]{\mathcal{R}\!\left(#1\right)} \newcommand{\krn}[1]{\mathcal{K}\!\left(#1\right)} 
\newcommand{\compose}[2]{{#1}\circ{#2}} \newcommand{\vslt}[2]{\mathcal{LT}\left(#1,\,#2\right)} \newcommand{\isomorphic}{\cong} \newcommand{\similar}[2]{\inverse{#2}#1#2} \newcommand{\vectrepname}[1]{\rho_{#1}} \newcommand{\vectrep}[2]{\lteval{\vectrepname{#1}}{#2}} \newcommand{\vectrepinvname}[1]{\ltinverse{\vectrepname{#1}}} \newcommand{\vectrepinv}[2]{\lteval{\ltinverse{\vectrepname{#1}}}{#2}} \newcommand{\matrixrep}[3]{M^{#1}_{#2,#3}} \newcommand{\matrixrepcolumns}[4]{\left\lbrack \left.\vectrep{#2}{\lteval{#1}{\vect{#3}_{1}}}\right|\left.\vectrep{#2}{\lteval{#1}{\vect{#3}_{2}}}\right|\left.\vectrep{#2}{\lteval{#1}{\vect{#3}_{3}}}\right|\ldots\left|\vectrep{#2}{\lteval{#1}{\vect{#3}_{#4}}}\right.\right\rbrack} \newcommand{\cbm}[2]{C_{#1,#2}} \newcommand{\jordan}[2]{J_{#1}\left(#2\right)} \newcommand{\hadamard}[2]{#1\circ #2} \newcommand{\hadamardidentity}[1]{J_{#1}} \newcommand{\hadamardinverse}[1]{\widehat{#1}} \newcommand{\lt}{<} \newcommand{\gt}{>} \newcommand{\amp}{&} \)

Section D: Dimension

Almost every vector space we have encountered has been infinite in size (an exception is Example VSS). But some are bigger and richer than others. Dimension, once suitably defined, will be a measure of the size of a vector space, and a useful tool for studying its properties. You probably already have a rough notion of what a mathematical definition of dimension might be — try to forget these imprecise ideas and go with the new ones given here.

Subsection D: Dimension

Definition D: Dimension

Suppose that \(V\) is a vector space and \(\set{\vectorlist{v}{t}}\) is a basis of \(V\text{.}\) Then the dimension of \(V\) is defined by \(\dimension{V}=t\text{.}\) If \(V\) has no finite bases, we say \(V\) has infinite dimension.

This is a very simple definition, which belies its power. Grab a basis, any basis, and count up the number of vectors it contains. That is the dimension. However, this simplicity causes a problem. Given a vector space, you and I could each construct different bases — remember that a vector space might have many bases. And what if your basis and my basis had different sizes? Applying Definition D we would arrive at different numbers! With our current knowledge about vector spaces, we would have to say that dimension is not “well-defined.” Fortunately, there is a theorem that will correct this problem.

In a strictly logical progression, the next two theorems would precede the definition of dimension. Many subsequent theorems will trace their lineage back to the following fundamental result.

Theorem SSLD: Spanning Sets and Linear Dependence

Suppose that \(S=\set{\vectorlist{v}{t}}\) is a finite set of vectors which spans the vector space \(V\text{.}\) Then any set of \(t+1\) or more vectors from \(V\) is linearly dependent.

Proof

Notice how the swap of the two summations is so much easier in the third step above, as opposed to all the rearranging and regrouping that takes place in the previous proof. And using only about half the space. And there are no ellipses (…).

Theorem SSLD can be viewed as a generalization of Theorem MVSLD. We know that \(\complex{m}\) has a basis with \(m\) vectors in it (Theorem SUVB), so it is a set of \(m\) vectors that spans \(\complex{m}\text{.}\) By Theorem SSLD, any set of more than \(m\) vectors from \(\complex{m}\) will be linearly dependent. But this is exactly the conclusion we have in Theorem MVSLD. Maybe this is not a total shock, as the proofs of both theorems rely heavily on Theorem HMVEI. The beauty of Theorem SSLD is that it applies in any vector space. We illustrate the generality of this theorem, and hint at its power, in the next example.
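As a concrete check on this conclusion, here is a minimal sketch assuming Python's sympy library (an illustrative tool choice, not one made by the text): any four vectors in \(\complex{3}\) must satisfy a nontrivial relation of linear dependence, which sympy exposes as a nonempty basis of a null space.

from sympy import randMatrix

# Four vectors in C^3, taken as the columns of a random 3 x 4 integer matrix.
A = randMatrix(3, 4, min=-5, max=5)

# By Theorem MVSLD (equivalently, Theorem SSLD), the columns are linearly
# dependent: the null space of A contains more than the zero vector.
relation = A.nullspace()
assert len(relation) >= 1   # each basis vector encodes a dependence relation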

Theorem SSLD is indeed powerful, but our main purpose in proving it right now was to make sure that our definition of dimension (Definition D) is well-defined. Here is the theorem.

Theorem BIS: Bases have Identical Sizes

Suppose that \(V\) is a vector space with a finite basis \(B\) and a second basis \(C\text{.}\) Then \(B\) and \(C\) have the same size.

Proof

Theorem BIS tells us that if we find one finite basis in a vector space, then they all have the same size. This (finally) makes Definition D unambiguous.
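To see Theorem BIS and Definition D in a computation, here is a minimal sketch, again assuming Python's sympy library: two visibly different bases of the same subspace of \(\complex{3}\) necessarily contain the same number of vectors.

from sympy import Matrix

B1 = Matrix([[1, 0], [0, 1], [1, 1]])    # columns form one basis
B2 = Matrix([[1, 1], [1, -1], [2, 0]])   # columns form a second basis

# Each set of columns is linearly independent: rank equals column count.
assert B1.rank() == B1.cols and B2.rank() == B2.cols

# The two sets span the same subspace: adjoining one basis to the other
# produces no new pivot columns.
assert B1.row_join(B2).rank() == B1.rank() == B2.rank()

# Theorem BIS guarantees the sizes agree; that common size is the dimension.
print(B1.cols, B2.cols)   # 2 2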

Subsection DVS: Dimension of Vector Spaces

We can now collect the dimension of some common, and not so common, vector spaces.

Theorem DCM: Dimension of \(\complex{m}\)

The dimension of \(\complex{m}\) is \(m\text{.}\)

Proof

Theorem DP: Dimension of \(P_n\)

The dimension of \(P_{n}\text{,}\) the vector space of polynomials of degree at most \(n\text{,}\) is \(n+1\text{.}\)

Proof

Theorem DM: Dimension of \(M_{mn}\)

The dimension of \(M_{mn}\text{,}\) the vector space of \(m\times n\) matrices, is \(mn\text{.}\)

Proof

It is possible for a vector space to have no finite bases, in which case we say it has infinite dimension. Many of the best examples of this are vector spaces of functions, which lead to constructions like Hilbert spaces. We will focus exclusively on finite-dimensional vector spaces. OK, one infinite-dimensional example, and then we will focus exclusively on finite-dimensional vector spaces.

Subsection RNM: Rank and Nullity of a Matrix

For any matrix, we have seen that we can associate several subspaces: the null space (Theorem NSMS), the column space (Theorem CSMS), the row space (Theorem RSMS), and the left null space (Theorem LNSMS). As vector spaces, each of these has a dimension, and the dimensions of the null space and the column space are important enough to warrant their own names.

Definition NOM: Nullity Of a Matrix

Suppose that \(A\) is an \(m\times n\) matrix. Then the nullity of \(A\) is the dimension of the null space of \(A\text{,}\) \(\nullity{A}=\dimension{\nsp{A}}\text{.}\)

Definition ROM: Rank Of a Matrix

Suppose that \(A\) is an \(m\times n\) matrix. Then the rank of \(A\) is the dimension of the column space of \(A\text{,}\) \(\rank{A}=\dimension{\csp{A}}\text{.}\)
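Both definitions translate directly into computation. Here is a minimal sketch, assuming Python's sympy library: produce a basis of each subspace and count its vectors.

from sympy import Matrix

A = Matrix([[1, 2, 1],
            [2, 4, 2]])          # a 2 x 3 matrix with dependent columns

rank    = len(A.columnspace())   # dimension of the column space C(A)
nullity = len(A.nullspace())     # dimension of the null space N(A)

print(rank, nullity)             # 1 2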

There were no accidents or coincidences in the previous example — with the row-reduced version of a matrix in hand, the rank and nullity are easy to compute.

Theorem CRN: Computing Rank and Nullity

Suppose that \(A\) is an \(m\times n\) matrix and \(B\) is a row-equivalent matrix in reduced row-echelon form with \(r\) nonzero rows. Then \(\rank{A}=r\) and \(\nullity{A}=n-r\text{.}\)

Proof
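Here is Theorem CRN as a short sketch, assuming sympy: row-reduce, count the pivot columns to get \(r\text{,}\) and the rank and nullity follow immediately.

from sympy import Matrix

A = Matrix([[1, 2, 3],
            [4, 5, 6],
            [7, 8, 9]])

B, pivots = A.rref()    # B is the RREF of A; pivots lists the pivot columns
r = len(pivots)         # r is also the number of nonzero rows of B

assert A.rank() == r                      # rank:    r(A) = r
assert len(A.nullspace()) == A.cols - r   # nullity: n(A) = n - r
print(r, A.cols - r)                      # 2 1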

Every archetype (Appendix A) that involves a matrix lists its rank and nullity. You may have noticed as you studied the archetypes that the larger the column space is, the smaller the null space is. A simple corollary states this trade-off succinctly. (See Proof Technique LC.)

Theorem RPNC: Rank Plus Nullity is Columns

Suppose that \(A\) is an \(m\times n\) matrix. Then \(\rank{A}+\nullity{A}=n\text{.}\)

Proof
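The trade-off is easy to test empirically. A quick sketch, again assuming sympy: no matter how the entries of a matrix are chosen, rank plus nullity accounts for every column.

from sympy import randMatrix

for _ in range(5):
    A = randMatrix(3, 4, min=-2, max=2)   # a random 3 x 4 integer matrix
    # Theorem RPNC: the rank and the nullity always sum to n = 4 columns.
    assert A.rank() + len(A.nullspace()) == A.cols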

When we first introduced \(r\) as our standard notation for the number of nonzero rows in a matrix in reduced row-echelon form you might have thought \(r\) stood for “rows.” Not really — it stands for “rank”!

Subsection RNNM: Rank and Nullity of a Nonsingular Matrix

Let us take a look at the rank and nullity of a square matrix.

The value of either the nullity or the rank is enough to characterize a nonsingular matrix.

Theorem RNNM: Rank and Nullity of a Nonsingular Matrix

Suppose that \(A\) is a square matrix of size \(n\text{.}\) The following are equivalent.

  1. \(A\) is nonsingular.
  2. The rank of \(A\) is \(n\text{,}\) \(\rank{A}=n\text{.}\)
  3. The nullity of \(A\) is zero, \(\nullity{A}=0\text{.}\)

Proof
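A final sketch, assuming sympy, contrasts the two cases of Theorem RNNM: a nonsingular matrix has full rank and zero nullity, while a singular matrix falls short on both counts.

from sympy import Matrix

A = Matrix([[2, 1], [1, 1]])   # nonsingular: row-reduces to the identity
S = Matrix([[1, 2], [2, 4]])   # singular: the second row is twice the first

assert A.rank() == A.cols and len(A.nullspace()) == 0   # full rank, nullity 0
assert S.rank() < S.cols and len(S.nullspace()) > 0     # rank deficient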

With a new equivalence for a nonsingular matrix, we can update our list of equivalences (Theorem NME5), which now becomes a list requiring double digits to number.

Theorem NME6: Nonsingular Matrix Equivalences, Round 6

Suppose that \(A\) is a square matrix of size \(n\text{.}\) The following are equivalent.

  1. \(A\) is nonsingular.
  2. \(A\) row-reduces to the identity matrix.
  3. The null space of \(A\) contains only the zero vector, \(\nsp{A}=\set{\zerovector}\text{.}\)
  4. The linear system \(\linearsystem{A}{\vect{b}}\) has a unique solution for every possible choice of \(\vect{b}\text{.}\)
  5. The columns of \(A\) are a linearly independent set.
  6. \(A\) is invertible.
  7. The column space of \(A\) is \(\complex{n}\text{,}\) \(\csp{A}=\complex{n}\text{.}\)
  8. The columns of \(A\) are a basis for \(\complex{n}\text{.}\)
  9. The rank of \(A\) is \(n\text{,}\) \(\rank{A}=n\text{.}\)
  10. The nullity of \(A\) is zero, \(\nullity{A}=0\text{.}\)

Proof

Subsection: Reading Questions

1

What is the dimension of the vector space \(P_6\text{,}\) the set of all polynomials of degree 6 or less?

2

How are the rank and nullity of a matrix related?

3

Explain why we might say that a nonsingular matrix has “full rank.”

Subsection: Exercises

C20

The archetypes listed below are matrices, or systems of equations with coefficient matrices. For each, compute the nullity and rank of the matrix. This information is listed for each archetype (along with the number of columns in the matrix, so as to illustrate Theorem RPNC), and notice how it could have been computed immediately after the determination of the sets \(D\) and \(F\) associated with the reduced row-echelon form of the matrix. Archetype A, Archetype B, Archetype C, Archetype D/Archetype E, Archetype F, Archetype G/Archetype H, Archetype I, Archetype J, Archetype K, Archetype L

C21

Find the dimension of the subspace \(W = \setparts{\colvector{a + b\\ a + c\\a + d \\ d}}{a, b, c, d \in\complexes}\) of \(\complex{4}\text{.}\)

Solution
C22

Find the dimension of the subspace \(W = \setparts{a + bx + cx^2 + dx^3}{a + b + c + d = 0}\) of \(P_3\text{.}\)

Solution
C23

Find the dimension of the subspace \(W = \setparts{\begin{bmatrix} a & b\\c & d \end{bmatrix}}{a + b = c, b + c = d, c + d = a}\) of \(M_{22}\text{.}\)

Solution
C30

For the matrix \(A\) below, compute the dimension of the null space of \(A\text{,}\) \(\dimension{\nsp{A}}\text{.}\) \begin{equation*} A= \begin{bmatrix} 2 & -1 & -3 & 11 & 9 \\ 1 & 2 & 1 & -7 & -3 \\ 3 & 1 & -3 & 6 & 8 \\ 2 & 1 & 2 & -5 & -3 \end{bmatrix} \end{equation*}

Solution
C31

The set \(W\) below is a subspace of \(\complex{4}\text{.}\) Find the dimension of \(W\text{.}\) \begin{equation*} W=\spn{\set{ \colvector{2\\-3\\4\\1},\, \colvector{3\\0\\1\\-2},\, \colvector{-4\\-3\\2\\5} }} \end{equation*}

Solution
C35

Find the rank and nullity of the matrix \(A\text{.}\) \begin{equation*} A = \begin{bmatrix} 1 & 0 & 1\\ 1 & 2 & 2\\ 2 & 1 & 1\\ -1 & 0 & 1\\ 1 & 1 & 2 \end{bmatrix} \end{equation*}

Solution
C36

Find the rank and nullity of the matrix \begin{equation*} A = \begin{bmatrix} 1 & 2 & 1 & 1 & 1\\ 1 & 3 & 2 & 0 & 4\\ 1 & 2 & 1 & 1 & 1 \end{bmatrix}\text{.} \end{equation*}

Solution
C37

Find the rank and nullity of the matrix \begin{equation*} A = \begin{bmatrix} 3 & 2 & 1 & 1 & 1\\ 2 & 3 & 0 & 1 & 1\\ -1 & 1 & 2 & 1 & 0\\ 1 & 1 & 0 & 1 & 1\\ 0 & 1 & 1 & 2 & -1 \end{bmatrix}\text{.} \end{equation*}

Solution
C40

In Example LDP4 we determined that the set of five polynomials, \(T\text{,}\) is linearly dependent by a simple invocation of Theorem SSLD. Prove that \(T\) is linearly dependent from scratch, beginning with Definition LI.

M20

\(M_{22}\) is the vector space of \(2\times 2\) matrices. Let \(S_{22}\) denote the set of all \(2\times 2\) symmetric matrices. That is \begin{equation*} S_{22}=\setparts{A\in M_{22}}{\transpose{A}=A} \end{equation*}

  1. Show that \(S_{22}\) is a subspace of \(M_{22}\text{.}\)
  2. Exhibit a basis for \(S_{22}\) and prove that it has the required properties.
  3. What is the dimension of \(S_{22}\text{?}\)
Solution
M21

A \(2\times 2\) matrix \(B\) is upper triangular if \(\matrixentry{B}{21}=0\text{.}\) Let \(UT_2\) be the set of all \(2\times 2\) upper triangular matrices. Then \(UT_2\) is a subspace of the vector space of all \(2\times 2\) matrices, \(M_{22}\) (you may assume this). Determine the dimension of \(UT_2\text{,}\) providing all of the necessary justifications for your answer.

Solution