\(\newcommand{\orderof}[1]{\sim #1} \newcommand{\Z}{\mathbb{Z}} \newcommand{\reals}{\mathbb{R}} \newcommand{\real}[1]{\mathbb{R}^{#1}} \newcommand{\complexes}{\mathbb{C}} \newcommand{\complex}[1]{\mathbb{C}^{#1}} \newcommand{\conjugate}[1]{\overline{#1}} \newcommand{\modulus}[1]{\left\lvert#1\right\rvert} \newcommand{\zerovector}{\vect{0}} \newcommand{\zeromatrix}{\mathcal{O}} \newcommand{\innerproduct}[2]{\left\langle#1,\,#2\right\rangle} \newcommand{\norm}[1]{\left\lVert#1\right\rVert} \newcommand{\dimension}[1]{\dim\left(#1\right)} \newcommand{\nullity}[1]{n\left(#1\right)} \newcommand{\rank}[1]{r\left(#1\right)} \newcommand{\ds}{\oplus} \newcommand{\detname}[1]{\det\left(#1\right)} \newcommand{\detbars}[1]{\left\lvert#1\right\rvert} \newcommand{\trace}[1]{t\left(#1\right)} \newcommand{\sr}[1]{#1^{1/2}} \newcommand{\spn}[1]{\left\langle#1\right\rangle} \newcommand{\nsp}[1]{\mathcal{N}\!\left(#1\right)} \newcommand{\csp}[1]{\mathcal{C}\!\left(#1\right)} \newcommand{\rsp}[1]{\mathcal{R}\!\left(#1\right)} \newcommand{\lns}[1]{\mathcal{L}\!\left(#1\right)} \newcommand{\per}[1]{#1^\perp} \newcommand{\augmented}[2]{\left\lbrack\left.#1\,\right\rvert\,#2\right\rbrack} \newcommand{\linearsystem}[2]{\mathcal{LS}\!\left(#1,\,#2\right)} \newcommand{\homosystem}[1]{\linearsystem{#1}{\zerovector}} \newcommand{\rowopswap}[2]{R_{#1}\leftrightarrow R_{#2}} \newcommand{\rowopmult}[2]{#1R_{#2}} \newcommand{\rowopadd}[3]{#1R_{#2}+R_{#3}} \newcommand{\leading}[1]{\boxed{#1}} \newcommand{\rref}{\xrightarrow{\text{RREF}}} \newcommand{\elemswap}[2]{E_{#1,#2}} \newcommand{\elemmult}[2]{E_{#2}\left(#1\right)} \newcommand{\elemadd}[3]{E_{#2,#3}\left(#1\right)} \newcommand{\scalarlist}[2]{{#1}_{1},\,{#1}_{2},\,{#1}_{3},\,\ldots,\,{#1}_{#2}} \newcommand{\vect}[1]{\mathbf{#1}} \newcommand{\colvector}[1]{\begin{bmatrix}#1\end{bmatrix}} \newcommand{\vectorcomponents}[2]{\colvector{#1_{1}\\#1_{2}\\#1_{3}\\\vdots\\#1_{#2}}} 
\newcommand{\vectorlist}[2]{\vect{#1}_{1},\,\vect{#1}_{2},\,\vect{#1}_{3},\,\ldots,\,\vect{#1}_{#2}} \newcommand{\vectorentry}[2]{\left\lbrack#1\right\rbrack_{#2}} \newcommand{\matrixentry}[2]{\left\lbrack#1\right\rbrack_{#2}} \newcommand{\lincombo}[3]{#1_{1}\vect{#2}_{1}+#1_{2}\vect{#2}_{2}+#1_{3}\vect{#2}_{3}+\cdots +#1_{#3}\vect{#2}_{#3}} \newcommand{\matrixcolumns}[2]{\left\lbrack\vect{#1}_{1}|\vect{#1}_{2}|\vect{#1}_{3}|\ldots|\vect{#1}_{#2}\right\rbrack} \newcommand{\transpose}[1]{#1^{t}} \newcommand{\inverse}[1]{#1^{-1}} \newcommand{\submatrix}[3]{#1\left(#2|#3\right)} \newcommand{\adj}[1]{\transpose{\left(\conjugate{#1}\right)}} \newcommand{\adjoint}[1]{#1^\ast} \newcommand{\set}[1]{\left\{#1\right\}} \newcommand{\setparts}[2]{\left\lbrace#1\,\middle|\,#2\right\rbrace} \newcommand{\card}[1]{\left\lvert#1\right\rvert} \newcommand{\setcomplement}[1]{\overline{#1}} \newcommand{\charpoly}[2]{p_{#1}\left(#2\right)} \newcommand{\eigenspace}[2]{\mathcal{E}_{#1}\left(#2\right)} \newcommand{\eigensystem}[3]{\lambda&=#2&\eigenspace{#1}{#2}&=\spn{\set{#3}}} \newcommand{\geneigenspace}[2]{\mathcal{G}_{#1}\left(#2\right)} \newcommand{\algmult}[2]{\alpha_{#1}\left(#2\right)} \newcommand{\geomult}[2]{\gamma_{#1}\left(#2\right)} \newcommand{\indx}[2]{\iota_{#1}\left(#2\right)} \newcommand{\ltdefn}[3]{#1\colon #2\rightarrow#3} \newcommand{\lteval}[2]{#1\left(#2\right)} \newcommand{\ltinverse}[1]{#1^{-1}} \newcommand{\restrict}[2]{{#1}|_{#2}} \newcommand{\preimage}[2]{#1^{-1}\left(#2\right)} \newcommand{\rng}[1]{\mathcal{R}\!\left(#1\right)} \newcommand{\krn}[1]{\mathcal{K}\!\left(#1\right)} \newcommand{\compose}[2]{{#1}\circ{#2}} \newcommand{\vslt}[2]{\mathcal{LT}\left(#1,\,#2\right)} \newcommand{\isomorphic}{\cong} \newcommand{\similar}[2]{\inverse{#2}#1#2} \newcommand{\vectrepname}[1]{\rho_{#1}} \newcommand{\vectrep}[2]{\lteval{\vectrepname{#1}}{#2}} \newcommand{\vectrepinvname}[1]{\ltinverse{\vectrepname{#1}}} 
\newcommand{\vectrepinv}[2]{\lteval{\ltinverse{\vectrepname{#1}}}{#2}} \newcommand{\matrixrep}[3]{M^{#1}_{#2,#3}} \newcommand{\matrixrepcolumns}[4]{\left\lbrack \left.\vectrep{#2}{\lteval{#1}{\vect{#3}_{1}}}\right|\left.\vectrep{#2}{\lteval{#1}{\vect{#3}_{2}}}\right|\left.\vectrep{#2}{\lteval{#1}{\vect{#3}_{3}}}\right|\ldots\left|\vectrep{#2}{\lteval{#1}{\vect{#3}_{#4}}}\right.\right\rbrack} \newcommand{\cbm}[2]{C_{#1,#2}} \newcommand{\jordan}[2]{J_{#1}\left(#2\right)} \newcommand{\hadamard}[2]{#1\circ #2} \newcommand{\hadamardidentity}[1]{J_{#1}} \newcommand{\hadamardinverse}[1]{\widehat{#1}} \newcommand{\lt}{<} \newcommand{\gt}{>} \newcommand{\amp}{&} \)

Section CRS Column and Row Spaces

A matrix-vector product (Definition MVP) is a linear combination of the columns of the matrix and this allows us to connect matrix multiplication with systems of equations via Theorem SLSLC. Row operations are linear combinations of the rows of a matrix, and of course, reduced row-echelon form (Definition RREF) is also intimately related to solving systems of equations. In this section we will formalize these ideas with two key definitions of sets of vectors derived from a matrix.

Subsection CSSE Column Spaces and Systems of Equations

Theorem SLSLC showed us that there is a natural correspondence between solutions to linear systems and linear combinations of the columns of the coefficient matrix. This idea motivates the following important definition.

Definition CSM Column Space of a Matrix

Suppose that \(A\) is an \(m\times n\) matrix with columns \(\vectorlist{A}{n}\text{.}\) Then the column space of \(A\text{,}\) written \(\csp{A}\text{,}\) is the subset of \(\complex{m}\) containing all linear combinations of the columns of \(A\text{,}\) \begin{equation*} \csp{A}=\spn{\set{\vectorlist{A}{n}}} \end{equation*}

Some authors refer to the column space of a matrix as the range, but we will reserve this term for use with linear transformations (Definition RLT).

Upon encountering any new set, the first question we ask is what objects are in the set, and which objects are not? Here is an example of one way to answer this question, and it will motivate a theorem that will then answer the question precisely.

So if we fix the coefficient matrix, and vary the vector of constants, we can sometimes find consistent systems, and sometimes inconsistent systems. The vectors of constants that lead to consistent systems are exactly the elements of the column space. This is the content of the next theorem, and since it is an equivalence, it provides an alternate view of the column space.

Theorem CSCS Column Spaces and Consistent Systems

Suppose \(A\) is an \(m\times n\) matrix and \(\vect{b}\) is a vector of size \(m\text{.}\) Then \(\vect{b}\in\csp{A}\) if and only if \(\linearsystem{A}{\vect{b}}\) is consistent.

Proof

This theorem tells us that asking if the system \(\linearsystem{A}{\vect{b}}\) is consistent is exactly the same question as asking if \(\vect{b}\) is in the column space of \(A\text{.}\) Or equivalently, it tells us that the column space of the matrix \(A\) is precisely those vectors of constants, \(\vect{b}\text{,}\) that can be paired with \(A\) to create a system of linear equations \(\linearsystem{A}{\vect{b}}\) that is consistent.

Employing Theorem SLEMM we can form the chain of equivalences. \begin{gather*} \vect{b}\in\csp{A} \iff \linearsystem{A}{\vect{b}}\text{ is consistent} \iff A\vect{x}=\vect{b}\text{ for some }\vect{x} \end{gather*}

Thus, an alternative (and popular) definition of the column space of an \(m\times n\) matrix \(A\) is \begin{align*} \csp{A}&= \setparts{ \vect{y}\in\complex{m} }{ \vect{y}=A\vect{x}\text{ for some }\vect{x}\in\complex{n} } = \setparts{A\vect{x}}{\vect{x}\in\complex{n}} \subseteq\complex{m}\text{.} \end{align*}

We recognize this as saying “create all the matrix-vector products possible with the matrix \(A\) by letting \(\vect{x}\) range over all of the possibilities.” By Definition MVP we see that this means take all possible linear combinations of the columns of \(A\) — precisely the definition of the column space (Definition CSM) we have chosen.
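As a quick illustration of this view, here is a sketch in Python; the matrix and vector are made up for the example. A matrix-vector product computed entry-by-entry agrees with the corresponding linear combination of the columns, so every choice of \(\vect{x}\) produces an element of the column space.

```python
# A x is the linear combination of the columns of A with the entries of x
# as scalars (Definition MVP); both computations below give the same vector.
A = [[1, 3], [2, 0], [-1, 2]]   # hypothetical 3 x 2 matrix
x = [2, -1]
Ax = [sum(a * xi for a, xi in zip(row, x)) for row in A]
combo = [2 * row[0] + (-1) * row[1] for row in A]   # 2*(column 1) - 1*(column 2)
print(Ax, combo)   # both equal [-1, 4, -4]
```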

Notice how this formulation of the column space looks very much like the definition of the null space of a matrix (Definition NSM), but for a rectangular matrix the column vectors of \(\csp{A}\) and \(\nsp{A}\) have different sizes, so the sets are very different.

Given a vector \(\vect{b}\) and a matrix \(A\) it is now very mechanical to test if \(\vect{b}\in\csp{A}\text{.}\) Form the linear system \(\linearsystem{A}{\vect{b}}\text{,}\) row-reduce the augmented matrix, \(\augmented{A}{\vect{b}}\text{,}\) and test for consistency with Theorem RCLS. Here is an example of this procedure.
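The mechanical test just described can be sketched in a few lines of Python. The matrix and vector below are invented for illustration, and `rref` is a bare-bones exact row reducer written here for the example, not a library routine.

```python
from fractions import Fraction

def rref(M):
    """Row-reduce M (a list of rows) with exact rational arithmetic.
    Returns the reduced row-echelon form and the pivot column indices."""
    R = [[Fraction(x) for x in row] for row in M]
    pivots, r = [], 0
    for c in range(len(R[0])):
        p = next((i for i in range(r, len(R)) if R[i][c] != 0), None)
        if p is None:
            continue                      # no pivot in this column
        R[r], R[p] = R[p], R[r]           # swap a usable row into position
        R[r] = [x / R[r][c] for x in R[r]]          # scale the pivot to 1
        for i in range(len(R)):
            if i != r and R[i][c] != 0:   # clear the rest of the column
                R[i] = [a - R[i][c] * b for a, b in zip(R[i], R[r])]
        pivots.append(c)
        r += 1
    return R, pivots

# Test b in C(A): row-reduce [A|b]; the system is consistent (Theorem RCLS)
# exactly when the final column is NOT a pivot column.
A = [[1, 3, 1, 3], [2, 0, 1, 1], [-1, 2, 1, 0]]
b = [4, 2, 1]                             # chosen as (column 1) + (column 2)
aug = [row + [entry] for row, entry in zip(A, b)]
_, pivots = rref(aug)
print(len(A[0]) not in pivots)            # True: b is in C(A)
```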

Theorem CSCS completes a collection of three theorems, and one definition, that deserve comment. Many questions about spans, linear independence, null space, column spaces and similar objects can be converted to questions about systems of equations (homogeneous or not), which we understand well from our previous results, especially those in Chapter SLE. These previous results include theorems like Theorem RCLS which allows us to quickly decide consistency of a system, and Theorem BNS which allows us to describe solution sets for homogeneous systems compactly as the span of a linearly independent set of column vectors.

The table below lists these four definitions and theorems along with a brief reminder of the statement and an example of how the statement is used.

Definition NSM
Synopsis: Null space is solution set of homogeneous system
Example: General solution sets described by Theorem PSPHS

Theorem SLSLC
Synopsis: Solutions for linear combinations with unknown scalars
Example: Deciding membership in spans

Theorem SLEMM
Synopsis: System of equations represented by matrix-vector product
Example: Solution to \(\linearsystem{A}{\vect{b}}\) is \(\inverse{A}\vect{b}\) when \(A\) is nonsingular

Theorem CSCS
Synopsis: Column space vectors create consistent systems
Example: Deciding membership in column spaces

Subsection CSSOC Column Space Spanned by Original Columns

So we have a foolproof, automated procedure for determining membership in \(\csp{A}\text{.}\) While this works just fine a vector at a time, we would like to have a more useful description of the set \(\csp{A}\) as a whole. The next example will preview the first of two fundamental results about the column space of a matrix.

We will now formalize the previous example, which will make it trivial to determine a linearly independent set of vectors that spans the column space of a matrix and consists of just columns of \(A\text{.}\)

Theorem BCS Basis of the Column Space

Suppose that \(A\) is an \(m\times n\) matrix with columns \(\vectorlist{A}{n}\text{,}\) and \(B\) is a row-equivalent matrix in reduced row-echelon form with \(r\) pivot columns, having indices \(D=\set{\scalarlist{d}{r}}\text{.}\) Let \(T=\set{\vect{A}_{d_{1}},\,\vect{A}_{d_{2}},\,\vect{A}_{d_{3}},\,\ldots,\,\vect{A}_{d_{r}}}\text{.}\) Then \(T\) is a linearly independent set and \(\csp{A}=\spn{T}\text{.}\)

Proof

This is a nice result since it gives us a handful of vectors that describe the entire column space (through the span), and we believe this set is as small as possible because we cannot create any more relations of linear dependence to trim it down further. Furthermore, we defined the column space (Definition CSM) as all linear combinations of the columns of the matrix, and the elements of the set \(T\) are still columns of the matrix (we will not be so lucky in the next two constructions of the column space).

Procedurally this theorem is extremely easy to apply. Row-reduce the original matrix, identify the \(r\) pivot columns of the row-reduced matrix, and grab the columns of the original matrix with the same indices as the pivot columns. But it is still important to study the proof of Theorem BS and its motivation in Example COV which lie at the root of this theorem. We will trot through an example all the same.
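The procedure above can be sketched directly in Python. The matrix is made up for illustration (its second column is twice its first, so a dependency gets trimmed), and `rref` is a minimal exact row reducer written inline for the example, not a library routine.

```python
from fractions import Fraction

def rref(M):
    """Row-reduce M with exact arithmetic; return (RREF, pivot column indices)."""
    R = [[Fraction(x) for x in row] for row in M]
    pivots, r = [], 0
    for c in range(len(R[0])):
        p = next((i for i in range(r, len(R)) if R[i][c] != 0), None)
        if p is None:
            continue
        R[r], R[p] = R[p], R[r]
        R[r] = [x / R[r][c] for x in R[r]]
        for i in range(len(R)):
            if i != r and R[i][c] != 0:
                R[i] = [a - R[i][c] * b for a, b in zip(R[i], R[r])]
        pivots.append(c)
        r += 1
    return R, pivots

# Row-reduce A, note the pivot columns, then take the columns of the
# ORIGINAL matrix A with those same indices (the set T of Theorem BCS).
A = [[1, 2, 3], [2, 4, 7], [1, 2, 4]]     # column 2 is twice column 1
_, pivots = rref(A)
T = [[row[c] for row in A] for c in pivots]   # original columns, as vectors
print(pivots, T)   # pivots [0, 2]; T spans C(A)
```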

Subsection CSNM Column Space of a Nonsingular Matrix

Let us specialize to square matrices and contrast the column spaces of the coefficient matrices in Archetype A and Archetype B.

Example CSAA and Example CSAB together motivate the following equivalence, which says that nonsingular matrices have column spaces that are as big as possible.

Theorem CSNM Column Space of a Nonsingular Matrix

Suppose \(A\) is a square matrix of size \(n\text{.}\) Then \(A\) is nonsingular if and only if \(\csp{A}=\complex{n}\text{.}\)

Proof

With this equivalence for nonsingular matrices we can update our list, Theorem NME3.

Proof

Subsection RSM Row Space of a Matrix

The rows of a matrix can be viewed as vectors, since they are just lists of numbers, arranged horizontally. So we will transpose a matrix, turning rows into columns, so we can then manipulate rows as column vectors. As a result we will be able to make some new connections between row operations and solutions to systems of equations. OK, here is the second primary definition of this section.

Definition RSM Row Space of a Matrix

Suppose \(A\) is an \(m\times n\) matrix. Then the row space of \(A\text{,}\) \(\rsp{A}\text{,}\) is the column space of \(\transpose{A}\text{,}\) i.e. \(\rsp{A}=\csp{\transpose{A}}\text{.}\)

Informally, the row space is the set of all linear combinations of the rows of \(A\text{.}\) However, we write the rows as column vectors, thus the necessity of using the transpose to make the rows into columns. Additionally, with the row space defined in terms of the column space, all of the previous results of this section can be applied to row spaces.

Notice that if \(A\) is a rectangular \(m\times n\) matrix, then \(\csp{A}\subseteq\complex{m}\text{,}\) while \(\rsp{A}\subseteq\complex{n}\) and the two sets are not comparable since they do not even hold objects of the same type. However, when \(A\) is square of size \(n\text{,}\) both \(\csp{A}\) and \(\rsp{A}\) are subsets of \(\complex{n}\text{,}\) though usually the sets will not be equal (but see Exercise CRS.M20).

The row space would not be too interesting if it were simply the column space of the transpose. However, row operations on a matrix have no effect on the collection of linear combinations that can be formed with the rows of the matrix. This is stated more carefully in the following theorem.

Theorem REMRS Row-Equivalent Matrices have equal Row Spaces

Suppose \(A\) and \(B\) are row-equivalent matrices. Then \(\rsp{A}=\rsp{B}\text{.}\)

Proof

Theorem REMRS is at its best when one of the row-equivalent matrices is in reduced row-echelon form. The vectors that are zero rows can be ignored. (Who needs the zero vector when building a span? See Exercise LI.T10.) The echelon pattern ensures that the nonzero rows yield vectors that are linearly independent. Here is the theorem.

Theorem BRS Basis for the Row Space

Suppose that \(A\) is a matrix and \(B\) is a row-equivalent matrix in reduced row-echelon form. Let \(S\) be the set of nonzero rows of \(B\text{,}\) written as column vectors. Then \(\rsp{A}=\spn{S}\) and \(S\) is a linearly independent set.

Proof

Notice that in Example IAS all we had to do was row-reduce the right matrix and toss out a zero row. Next to row operations themselves, Theorem BRS is probably the most powerful computational technique at your disposal as it quickly provides a much improved description of a span, any span (row space, column space, …).
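This technique is easy to sketch in Python. The matrix below is invented for illustration, and `rref` is a minimal exact row reducer written inline for the example, not a library routine; the nonzero rows of the reduced matrix give the improved description of the row space.

```python
from fractions import Fraction

def rref(M):
    """Row-reduce M with exact arithmetic; return (RREF, pivot column indices)."""
    R = [[Fraction(x) for x in row] for row in M]
    pivots, r = [], 0
    for c in range(len(R[0])):
        p = next((i for i in range(r, len(R)) if R[i][c] != 0), None)
        if p is None:
            continue
        R[r], R[p] = R[p], R[r]
        R[r] = [x / R[r][c] for x in R[r]]
        for i in range(len(R)):
            if i != r and R[i][c] != 0:
                R[i] = [a - R[i][c] * b for a, b in zip(R[i], R[r])]
        pivots.append(c)
        r += 1
    return R, pivots

# Theorem BRS sketch: the nonzero rows of the RREF of A form a linearly
# independent set whose span is the row space R(A).
A = [[1, 1, 0], [2, 3, 1], [3, 4, 1]]
R, _ = rref(A)
S = [row for row in R if any(x != 0 for x in row)]   # toss the zero rows
print(S)   # [[1, 0, -1], [0, 1, 1]]
```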

Theorem BRS and the techniques of Example IAS will provide yet another description of the column space of a matrix. First we state a triviality as a theorem, so we can reference it later.

Theorem CSRST Column Space, Row Space, Transpose

Suppose \(A\) is a matrix. Then \(\csp{A}=\rsp{\transpose{A}}\text{.}\)

Proof

So to find another expression for the column space of a matrix, build its transpose, row-reduce it, toss out the zero rows, and convert the nonzero rows to column vectors to yield an improved set for the span construction. We will do Archetype I, then you do Archetype J.
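The recipe in the paragraph above can be sketched in Python on a small made-up matrix; `rref` is a minimal exact row reducer written inline for the example, not a library routine.

```python
from fractions import Fraction

def rref(M):
    """Row-reduce M with exact arithmetic; return (RREF, pivot column indices)."""
    R = [[Fraction(x) for x in row] for row in M]
    pivots, r = [], 0
    for c in range(len(R[0])):
        p = next((i for i in range(r, len(R)) if R[i][c] != 0), None)
        if p is None:
            continue
        R[r], R[p] = R[p], R[r]
        R[r] = [x / R[r][c] for x in R[r]]
        for i in range(len(R)):
            if i != r and R[i][c] != 0:
                R[i] = [a - R[i][c] * b for a, b in zip(R[i], R[r])]
        pivots.append(c)
        r += 1
    return R, pivots

# Column space via the transpose: row-reduce A^t, keep the nonzero rows,
# and read them as column vectors spanning C(A).
A = [[1, 1, 0], [2, 3, 1], [3, 4, 1]]
At = [list(col) for col in zip(*A)]       # transpose: rows become columns
R, _ = rref(At)
basis = [row for row in R if any(x != 0 for x in row)]
print(basis)   # the "nice pattern of zeros and ones": [[1, 0, 1], [0, 1, 1]]
```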

Subsection Reading Questions

1

Write the column space of the matrix below as the span of a set of three vectors and explain your choice of method. \begin{equation*} \begin{bmatrix} 1 & 3 & 1 & 3\\ 2 & 0 & 1 & 1\\ -1 & 2 & 1 & 0 \end{bmatrix} \end{equation*}

2

Suppose that \(A\) is an \(n\times n\) nonsingular matrix. What can you say about its column space?

3

Is the vector \(\colvector{0\\5\\2\\3}\) in the row space of the following matrix? Why or why not? \begin{equation*} \begin{bmatrix} 1 & 3 & 1 & 3\\ 2 & 0 & 1 & 1\\ -1 & 2 & 1 & 0 \end{bmatrix} \end{equation*}

Subsection Exercises

C20

For each matrix below, find a set of linearly independent vectors \(X\) so that \(\spn{X}\) equals the column space of the matrix, and a set of linearly independent vectors \(Y\) so that \(\spn{Y}\) equals the row space of the matrix. \begin{align*} A&= \begin{bmatrix} 1 & 2 & 3 & 1 \\ 0 & 1 & 1 & 2\\ 1 & -1 & 2 & 3 \\ 1 & 1 & 2 & -1 \end{bmatrix} & B&= \begin{bmatrix} 1 & 2 & 1 & 1 & 1 \\ 3 & 2 & -1 & 4 & 5\\ 0 & 1 & 1 & 1 & 2 \end{bmatrix} & C& =\begin{bmatrix} 2 & 1 & 0 \\ 3 & 0 & 3\\ 1 & 2 & -3 \\ 1 & 1 & -1 \\ 1 & 1 & -1\end{bmatrix} \end{align*} From your results for these three matrices, can you formulate a conjecture about the sets \(X\) and \(Y\text{?}\)

C30

Example CSOCD expresses the column space of the coefficient matrix from Archetype D (call the matrix \(A\) here) as the span of the first two columns of \(A\text{.}\) In Example CSMCS we determined that the vector \begin{equation*} \vect{c}=\colvector{2\\3\\2} \end{equation*} was not in the column space of \(A\) and that the vector \begin{equation*} \vect{b}=\colvector{8\\-12\\4} \end{equation*} was in the column space of \(A\text{.}\) Attempt to write \(\vect{c}\) and \(\vect{b}\) as linear combinations of the two vectors in the span construction for the column space in Example CSOCD and record your observations.

Solution
C31

For the matrix \(A\) below find a set of vectors \(T\) meeting the following requirements: (1) the span of \(T\) is the column space of \(A\text{,}\) that is, \(\spn{T}=\csp{A}\text{,}\) (2) \(T\) is linearly independent, and (3) the elements of \(T\) are columns of \(A\text{.}\) \begin{equation*} A= \begin{bmatrix} 2 & 1 & 4 & -1 & 2 \\ 1 & -1 & 5 & 1 & 1 \\ -1 & 2 & -7 & 0 & 1 \\ 2 & -1 & 8 & -1 & 2 \end{bmatrix} \end{equation*}

Solution
C32

In Example CSAA, verify that the vector \(\vect{b}\) is not in the column space of the coefficient matrix.

C33

Find a linearly independent set \(S\) so that the span of \(S\text{,}\) \(\spn{S}\text{,}\) is the row space of the matrix \(B\text{.}\) \begin{equation*} B= \begin{bmatrix} 2 & 3 & 1 & 1\\ 1 & 1 & 0 & 1\\ -1 & 2 & 3 & -4 \end{bmatrix} \end{equation*}

Solution
C34

For the \(3\times 4\) matrix \(A\) and the column vector \(\vect{y}\in\complex{4}\) given below, determine if \(\vect{y}\) is in the row space of \(A\text{.}\) In other words, answer the question: \(\vect{y}\in\rsp{A}\text{?}\) \begin{align*} A&= \begin{bmatrix} -2 & 6 & 7 & -1 \\ 7 & -3 & 0 & -3 \\ 8 & 0 & 7 & 6 \end{bmatrix} & \vect{y}&= \colvector{2\\1\\3\\-2} \end{align*}

Solution
C35

For the matrix \(A\) below, find two different linearly independent sets whose spans equal the column space of \(A\text{,}\) \(\csp{A}\text{,}\) such that

  1. the elements are each columns of \(A\text{.}\)
  2. the set is obtained by a procedure that is substantially different from the procedure you use in part (1).

\begin{align*} A&= \begin{bmatrix} 3 & 5 & 1 & -2 \\ 1 & 2 & 3 & 3 \\ -3 & -4 & 7 & 13 \end{bmatrix} \end{align*}

Solution
C40

The following archetypes are systems of equations. For each system, write the vector of constants as a linear combination of the vectors in the span construction for the column space provided by Theorem BCS (these vectors are listed for each of these archetypes).

Archetype A, Archetype B, Archetype C, Archetype D, Archetype E, Archetype F, Archetype G, Archetype H, Archetype I, Archetype J

C42

The following archetypes are either matrices or systems of equations with coefficient matrices. For each matrix, compute a set of column vectors such that (1) the vectors are columns of the matrix, (2) the set is linearly independent, and (3) the span of the set is the column space of the matrix. See Theorem BCS.

Archetype A, Archetype B, Archetype C, Archetype D/Archetype E, Archetype F, Archetype G/Archetype H, Archetype I, Archetype J, Archetype K, Archetype L

C50

The following archetypes are either matrices or systems of equations with coefficient matrices. For each matrix, compute a set of column vectors such that (1) the set is linearly independent, and (2) the span of the set is the row space of the matrix. See Theorem BRS.

Archetype A, Archetype B, Archetype C, Archetype D/Archetype E, Archetype F, Archetype G/Archetype H, Archetype I, Archetype J, Archetype K, Archetype L

C51

The following archetypes are either matrices or systems of equations with coefficient matrices. For each matrix, compute the column space as the span of a linearly independent set as follows: transpose the matrix, row-reduce, toss out zero rows, convert rows into column vectors. See Example CSROI.

Archetype A, Archetype B, Archetype C, Archetype D/Archetype E, Archetype F, Archetype G/Archetype H, Archetype I, Archetype J, Archetype K, Archetype L

C52

The following archetypes are systems of equations. For each different coefficient matrix build two new vectors of constants. The first should lead to a consistent system and the second should lead to an inconsistent system. Descriptions of the column space as spans of linearly independent sets of vectors with “nice patterns” of zeros and ones might be most useful and instructive in connection with this exercise. (See the end of Example CSROI.)

Archetype A, Archetype B, Archetype C, Archetype D/Archetype E, Archetype F, Archetype G/Archetype H, Archetype I, Archetype J

M10

For the matrix \(E\) below, find vectors \(\vect{b}\) and \(\vect{c}\) so that the system \(\linearsystem{E}{\vect{b}}\) is consistent and \(\linearsystem{E}{\vect{c}}\) is inconsistent. \begin{equation*} E= \begin{bmatrix} -2 & 1 & 1 & 0\\ 3 & -1 & 0 & 2\\ 4 & 1 & 1 & 6 \end{bmatrix} \end{equation*}

Solution
M20

Usually the column space and null space of a matrix contain vectors of different sizes. For a square matrix, though, the vectors in these two sets are the same size. Usually the two sets will be different. Construct an example of a square matrix where the column space and null space are equal.

Solution
M21

We have a variety of theorems about how to create column spaces and row spaces and they frequently involve row-reducing a matrix. Here is a procedure that some try to use to get a column space. Begin with an \(m\times n\) matrix \(A\) and row-reduce to a matrix \(B\) with columns \(\vectorlist{B}{n}\text{.}\) Then form the column space of \(A\) as \begin{equation*} \csp{A}= \spn{\set{\vectorlist{B}{n}}} =\csp{B}\text{.} \end{equation*} This is not a legitimate procedure, and therefore is not a theorem. Construct an example to show that the procedure will not in general create the column space of \(A\text{.}\)

Solution
T40

Suppose that \(A\) is an \(m\times n\) matrix and \(B\) is an \(n\times p\) matrix. Prove that the column space of \(AB\) is a subset of the column space of \(A\text{,}\) that is \(\csp{AB}\subseteq\csp{A}\text{.}\) Provide an example showing that the reverse inclusion can fail; in other words, give an example where \(\csp{A}\not\subseteq\csp{AB}\text{.}\) (Compare with Exercise MM.T40.)

Solution
T41

Suppose that \(A\) is an \(m\times n\) matrix and \(B\) is an \(n\times n\) nonsingular matrix. Prove that the column space of \(A\) is equal to the column space of \(AB\text{,}\) that is \(\csp{A}=\csp{AB}\text{.}\) (Compare with Exercise MM.T41 and Exercise CRS.T40.)

Solution
T45

Suppose that \(A\) is an \(m\times n\) matrix and \(B\) is an \(n\times m\) matrix where \(AB\) is a nonsingular matrix. Prove that

  1. \(\nsp{B}=\set{\zerovector}\)
  2. \(\csp{B}\cap\nsp{A}=\set{\zerovector}\)

Discuss the case when \(m=n\) in connection with Theorem NPNT.

Solution