From A First Course in Linear Algebra

Version 2.00

© 2004.

Licensed under the GNU Free Documentation License.

http://linear.ups.edu/

A subspace is a vector space that is contained within another vector space. So
every subspace is a vector space in its own right, but it is also defined relative to
some other (larger) vector space. We will discover shortly that we are already
familiar with a wide variety of subspaces from previous sections. Here’s the
definition.

Definition S

Subspace

Suppose that $V$
and $W$ are two
vector spaces that have identical definitions of vector addition and scalar multiplication,
and that $W$ is
a subset of $V$,
$W\subseteq V$. Then
$W$ is a
subspace of $V$.
$\triangle$

Let's look at an example of a vector space inside another vector space.

Example SC3

A subspace of ${\mathbb{C}}^{3}$

We know that ${\mathbb{C}}^{3}$
is a vector space (Example VSCV). Consider the subset,

$$W=\left\{\left.\left[\begin{array}{c}\hfill {x}_{1}\hfill \\ \hfill {x}_{2}\hfill \\ \hfill {x}_{3}\hfill \end{array}\right]\phantom{\rule{0em}{0ex}}\right|\phantom{\rule{0em}{0ex}}2{x}_{1}-5{x}_{2}+7{x}_{3}=0\right\}$$

It is clear that $W\subseteq {\mathbb{C}}^{3}$, since the objects in $W$ are column vectors of size 3. But is $W$ a vector space? Does it satisfy the ten properties of Definition VS when we use the same operations? That is the main question. Suppose $x=\left[\begin{array}{c}\hfill {x}_{1}\hfill \\ \hfill {x}_{2}\hfill \\ \hfill {x}_{3}\hfill \end{array}\right]$ and $y=\left[\begin{array}{c}\hfill {y}_{1}\hfill \\ \hfill {y}_{2}\hfill \\ \hfill {y}_{3}\hfill \end{array}\right]$ are vectors from $W$. Then we know that these vectors cannot be totally arbitrary; they must have gained membership in $W$ by virtue of meeting the membership test. For example, we know that $x$ must satisfy $2{x}_{1}-5{x}_{2}+7{x}_{3}=0$ while $y$ must satisfy $2{y}_{1}-5{y}_{2}+7{y}_{3}=0$. Our first property (Property AC) asks the question, is $x+y\in W$? When our set of vectors was ${\mathbb{C}}^{3}$, this was an easy question to answer. Now it is not so obvious. Notice first that

$$x+y=\left[\begin{array}{c}\hfill {x}_{1}\hfill \\ \hfill {x}_{2}\hfill \\ \hfill {x}_{3}\hfill \end{array}\right]+\left[\begin{array}{c}\hfill {y}_{1}\hfill \\ \hfill {y}_{2}\hfill \\ \hfill {y}_{3}\hfill \end{array}\right]=\left[\begin{array}{c}\hfill {x}_{1}+{y}_{1}\hfill \\ \hfill {x}_{2}+{y}_{2}\hfill \\ \hfill {x}_{3}+{y}_{3}\hfill \end{array}\right]$$

and we can test this vector for membership in $W$ as follows,

$$\begin{array}{llllll}\hfill 2\left({x}_{1}+{y}_{1}\right)-5\left({x}_{2}+{y}_{2}\right)+7\left({x}_{3}+{y}_{3}\right)& =2{x}_{1}+2{y}_{1}-5{x}_{2}-5{y}_{2}+7{x}_{3}+7{y}_{3}\phantom{\rule{2em}{0ex}}& \hfill & \phantom{\rule{2em}{0ex}}& \hfill & \phantom{\rule{2em}{0ex}}\\ \hfill & =\left(2{x}_{1}-5{x}_{2}+7{x}_{3}\right)+\left(2{y}_{1}-5{y}_{2}+7{y}_{3}\right)\phantom{\rule{2em}{0ex}}& \hfill & \phantom{\rule{2em}{0ex}}& \hfill & \phantom{\rule{2em}{0ex}}\\ \hfill & =0+0\phantom{\rule{2em}{0ex}}& \hfill & x\in W,\phantom{\rule{0ex}{0ex}}y\in W\phantom{\rule{2em}{0ex}}& \hfill & \phantom{\rule{2em}{0ex}}& \hfill & \phantom{\rule{2em}{0ex}}\\ \hfill & =0\phantom{\rule{2em}{0ex}}& \hfill & \phantom{\rule{2em}{0ex}}& \hfill & \phantom{\rule{2em}{0ex}}\end{array}$$and by this computation we see that $x+y\in W$. One property down, nine to go.

If $\alpha $ is a scalar and $x\in W$, is it always true that $\alpha x\in W$? This is what we need to establish Property SC. Again, the answer is not as obvious as it was when our set of vectors was all of ${\mathbb{C}}^{3}$. Let’s see.

$$\alpha x=\alpha \left[\begin{array}{c}\hfill {x}_{1}\hfill \\ \hfill {x}_{2}\hfill \\ \hfill {x}_{3}\hfill \end{array}\right]=\left[\begin{array}{c}\hfill \alpha {x}_{1}\hfill \\ \hfill \alpha {x}_{2}\hfill \\ \hfill \alpha {x}_{3}\hfill \end{array}\right]$$

and we can test this vector for membership in $W$ with

$$\begin{array}{llllll}\hfill 2\left(\alpha {x}_{1}\right)-5\left(\alpha {x}_{2}\right)+7\left(\alpha {x}_{3}\right)& =\alpha \left(2{x}_{1}-5{x}_{2}+7{x}_{3}\right)\phantom{\rule{2em}{0ex}}& \hfill & \phantom{\rule{2em}{0ex}}& \hfill & \phantom{\rule{2em}{0ex}}\\ \hfill & =\alpha 0\phantom{\rule{2em}{0ex}}& \hfill & x\in W\phantom{\rule{2em}{0ex}}& \hfill & \phantom{\rule{2em}{0ex}}& \hfill & \phantom{\rule{2em}{0ex}}\\ \hfill & =0\phantom{\rule{2em}{0ex}}& \hfill & \phantom{\rule{2em}{0ex}}& \hfill & \phantom{\rule{2em}{0ex}}\end{array}$$and we see that indeed $\alpha x\in W$. Always.

If $W$ has a zero vector, it will be unique (Theorem ZVU). The zero vector for ${\mathbb{C}}^{3}$ should also perform the required duties when added to elements of $W$. So the likely candidate for a zero vector in $W$ is the same zero vector that we know ${\mathbb{C}}^{3}$ has. You can check that $0=\left[\begin{array}{c}\hfill 0\hfill \\ \hfill 0\hfill \\ \hfill 0\hfill \end{array}\right]$ is a zero vector in $W$ too (Property Z).

With a zero vector, we can now ask about additive inverses (Property AI). As you might suspect, the natural candidate for an additive inverse in $W$ is the same as the additive inverse from ${\mathbb{C}}^{3}$. However, we must ensure that these additive inverses actually are elements of $W$. Given $x\in W$, is $-x\in W$?

$$-x=\left[\begin{array}{c}\hfill -{x}_{1}\hfill \\ \hfill -{x}_{2}\hfill \\ \hfill -{x}_{3}\hfill \end{array}\right]$$

and we can test this vector for membership in $W$ with

$$\begin{array}{llllll}\hfill 2\left(-{x}_{1}\right)-5\left(-{x}_{2}\right)+7\left(-{x}_{3}\right)& =-\left(2{x}_{1}-5{x}_{2}+7{x}_{3}\right)\phantom{\rule{2em}{0ex}}& \hfill & \phantom{\rule{2em}{0ex}}& \hfill & \phantom{\rule{2em}{0ex}}\\ \hfill & =-0\phantom{\rule{2em}{0ex}}& \hfill & x\in W\phantom{\rule{2em}{0ex}}& \hfill & \phantom{\rule{2em}{0ex}}& \hfill & \phantom{\rule{2em}{0ex}}\\ \hfill & =0\phantom{\rule{2em}{0ex}}& \hfill & \phantom{\rule{2em}{0ex}}& \hfill & \phantom{\rule{2em}{0ex}}\end{array}$$and we now believe that $-x\in W$.

Is the vector addition in $W$ commutative (Property C)? Is $x+y=y+x$? Of course! Nothing about restricting the scope of our set of vectors will prevent the operation from still being commutative. Indeed, the remaining five properties are unaffected by the transition to a smaller set of vectors, and so remain true. That was convenient.

So $W$ satisfies all ten properties, is therefore a vector space, and thus earns the title of being a subspace of ${\mathbb{C}}^{3}$. $\boxtimes$
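The membership computations of Example SC3 can be replayed numerically. The sketch below is our own (the helper names `in_W`, `add`, and `scale` are not from the text); it checks the two closure properties for a couple of sample vectors in $W$.

```python
# Membership test for W = { x in C^3 : 2*x1 - 5*x2 + 7*x3 = 0 },
# with a small tolerance since floating-point arithmetic is approximate.

def in_W(v, tol=1e-12):
    x1, x2, x3 = v
    return abs(2 * x1 - 5 * x2 + 7 * x3) < tol

def add(u, v):
    return tuple(a + b for a, b in zip(u, v))

def scale(alpha, v):
    return tuple(alpha * a for a in v)

x = (5, 2, 0)    # 2*5 - 5*2 + 7*0 = 0, so x is in W
y = (7, 0, -2)   # 2*7 - 5*0 + 7*(-2) = 0, so y is in W

print(in_W(add(x, y)))       # True: additive closure holds for this pair
print(in_W(scale(3.5, x)))   # True: scalar closure holds for this choice
```

Of course a finite check like this is not a proof; the algebraic argument above is what shows closure for every choice of vectors and scalars.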

In Example SC3 we proceeded through all ten of the vector space properties before believing that a subset was a subspace. But six of the properties were easy to prove, and we can lean on some of the properties of the vector space (the superset) to make the other four easier. Here is a theorem that will make it easier to test if a subset is a vector space. A shortcut if there ever was one.

Theorem TSS

Testing Subsets for Subspaces

Suppose that $V$ is a
vector space and $W$
is a subset of $V$,
$W\subseteq V$. Endow
$W$ with the same
operations as $V$.
Then $W$
is a subspace if and only if three conditions are met:

- $W$ is non-empty, $W\ne \varnothing $.
- If $x\in W$ and $y\in W$, then $x+y\in W$.
- If $\alpha \in {\mathbb{C}}$ and $x\in W$, then $\alpha x\in W$.

Proof ($\Rightarrow $) We have the hypothesis that $W$ is a subspace, so by Definition VS we know that $W$ contains a zero vector. This is enough to show that $W\ne \varnothing $. Also, since $W$ is a vector space it satisfies the additive and scalar multiplication closure properties, and so exactly meets the second and third conditions. If that direction was easy, the other direction might require a bit more work.

($\Leftarrow $) We have three properties for our hypothesis, and from this we should conclude that $W$ has the ten defining properties of a vector space. The second and third conditions of our hypothesis are exactly Property AC and Property SC. Our hypothesis that $V$ is a vector space implies that Property C, Property AA, Property SMA, Property DVA, Property DSA and Property O all hold. They continue to be true for vectors from $W$ since passing to a subset, and keeping the operation the same, leaves their statements unchanged. Eight down, two to go.

Suppose $x\in W$. Then by the third part of our hypothesis (scalar closure), we know that $\left(-1\right)x\in W$. By Theorem AISM $\left(-1\right)x=-x$, so together these statements show us that $-x\in W$. $-x$ is the additive inverse of $x$ in $V$, but will continue in this role when viewed as an element of the subset $W$. So every element of $W$ has an additive inverse that is an element of $W$ and Property AI is established. Just one property left.

While we have implicitly discussed the zero vector in the previous paragraph, we need to be certain that the zero vector (of $V$) really lives in $W$. Since $W$ is non-empty, we can choose some vector $z\in W$. Then by the argument in the previous paragraph, we know $-z\in W$. Now by Property AI for $V$ and then by the second part of our hypothesis (additive closure) we see that

$$0=z+\left(-z\right)\in W$$

So $W$ contains the zero vector from $V$. Since this vector performs the required duties of a zero vector in $V$, it will continue in that role as an element of $W$. This gives us Property Z, the final property of the ten required. (Sarah Fellez contributed to this proof.) $\blacksquare$

So just three conditions, plus being a subset of a known vector space, gets us all ten properties. Fabulous! This theorem can be paraphrased by saying that a subspace is “a non-empty subset (of a vector space) that is closed under vector addition and scalar multiplication.”
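The paraphrase above suggests a mechanical spot-check. The function below is our own construction, not from the text: it samples finitely many vectors and scalars, so a pass is only evidence a subset might be a subspace, while any failure is a genuine counterexample to one of the three conditions of Theorem TSS.

```python
import itertools

def spot_check_subspace(member, samples, scalars):
    """Spot-check the three conditions of Theorem TSS on a finite sample."""
    vectors = [v for v in samples if member(v)]
    if not vectors:
        return False          # could not exhibit non-emptiness
    for u, v in itertools.product(vectors, repeat=2):
        if not member(tuple(a + b for a, b in zip(u, v))):
            return False      # additive closure fails for this pair
    for a, v in itertools.product(scalars, vectors):
        if not member(tuple(a * c for c in v)):
            return False      # scalar closure fails for this pair
    return True

plane = lambda v: abs(2*v[0] - 5*v[1] + 7*v[2]) < 1e-9   # Example SC3
line  = lambda v: abs(3*v[0] - 5*v[1] - 12) < 1e-9       # not a subspace

print(spot_check_subspace(plane, [(5, 2, 0), (7, 0, -2)], [0, -1, 0.5]))  # True
print(spot_check_subspace(line,  [(4, 0), (-1, -3)],      [0, -1, 0.5]))  # False
```

The second call fails because the sum of two vectors satisfying $3{x}_{1}-5{x}_{2}=12$ satisfies $3{x}_{1}-5{x}_{2}=24$ instead.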

You might want to go back and rework Example SC3 in light of this result, perhaps seeing where we can now economize or where the work done in the example mirrored the proof and where it did not. We will press on and apply this theorem in a slightly more abstract setting.

Example SP4

A subspace of ${P}_{4}$

${P}_{4}$
is the vector space of polynomials with degree at most
$4$ (Example VSP).
Define a subset $W$
as

$$W=\left\{\left.p\left(x\right)\phantom{\rule{0em}{0ex}}\right|\phantom{\rule{0em}{0ex}}p\in {P}_{4},\phantom{\rule{0ex}{0ex}}p\left(2\right)=0\right\}$$

so $W$ is the collection of those polynomials (with degree 4 or less) whose graphs cross the $x$-axis at $x=2$. Whenever we encounter a new set it is a good idea to gain a better understanding of the set by finding a few elements in the set, and a few outside it. For example, ${x}^{2}-x-2\in W$, while ${x}^{4}+{x}^{3}-7\notin W$.

Is $W$ nonempty? Yes, $x-2\in W$.

Additive closure? Suppose $p\in W$ and $q\in W$. Is $p+q\in W$? $p$ and $q$ are not totally arbitrary; we know that $p\left(2\right)=0$ and $q\left(2\right)=0$. Then we can check $p+q$ for membership in $W$,

$$\begin{array}{llllllll}\hfill \left(p+q\right)\left(2\right)& =p\left(2\right)+q\left(2\right)\phantom{\rule{2em}{0ex}}& \hfill & \text{Addition in }{P}_{4}\phantom{\rule{2em}{0ex}}& \hfill & \phantom{\rule{2em}{0ex}}& \hfill & \phantom{\rule{2em}{0ex}}\\ \hfill & =0+0\phantom{\rule{2em}{0ex}}& \hfill & p\in W,\phantom{\rule{0em}{0ex}}q\in W\phantom{\rule{2em}{0ex}}& \hfill & \phantom{\rule{2em}{0ex}}& \hfill & \phantom{\rule{2em}{0ex}}\\ \hfill & =0\phantom{\rule{2em}{0ex}}& \hfill & \phantom{\rule{2em}{0ex}}& \hfill & \phantom{\rule{2em}{0ex}}\end{array}$$so we see that $p+q$ qualifies for membership in $W$.

Scalar multiplication closure? Suppose that $\alpha \in {\mathbb{C}}$ and $p\in W$. Then we know that $p\left(2\right)=0$. Testing $\alpha p$ for membership,

$$\begin{array}{llllllll}\hfill \left(\alpha p\right)\left(2\right)& =\alpha p\left(2\right)\phantom{\rule{2em}{0ex}}& \hfill & \text{Scalar multiplication in }{P}_{4}\phantom{\rule{2em}{0ex}}& \hfill & \phantom{\rule{2em}{0ex}}& \hfill & \phantom{\rule{2em}{0ex}}\\ \hfill & =\alpha 0\phantom{\rule{2em}{0ex}}& \hfill & p\in W\phantom{\rule{2em}{0ex}}& \hfill & \phantom{\rule{2em}{0ex}}& \hfill & \phantom{\rule{2em}{0ex}}\\ \hfill & =0\phantom{\rule{2em}{0ex}}& \hfill & \phantom{\rule{2em}{0ex}}& \hfill & \phantom{\rule{2em}{0ex}}\end{array}$$so $\alpha p\in W$.

We have shown that $W$ meets the three conditions of Theorem TSS and so qualifies as a subspace of ${P}_{4}$. Notice that by Definition S we now know that $W$ is also a vector space. So all the properties of a vector space (Definition VS) and the theorems of Section VS apply in full. $\boxtimes$
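The two membership checks of Example SP4 can be mirrored in code. This sketch uses our own encoding, not one from the text: an element of ${P}_{4}$ is a list of coefficients, constant term first.

```python
# Polynomials in P4 as coefficient lists [a0, a1, a2, a3, a4];
# membership in W of Example SP4 means p(2) = 0.

def evaluate(p, x):
    return sum(c * x**k for k, c in enumerate(p))

def in_W(p):
    return evaluate(p, 2) == 0

p = [-2, -1, 1, 0, 0]   # x^2 - x - 2, which has x = 2 as a root
q = [-2, 1, 0, 0, 0]    # x - 2

print(in_W([a + b for a, b in zip(p, q)]))  # True: additive closure
print(in_W([6 * c for c in p]))             # True: scalar closure
print(in_W([-7, 0, 0, 1, 1]))               # False: x^4 + x^3 - 7 is not in W
```

Coordinate-wise addition of coefficient lists is exactly the polynomial addition of Example VSP, which is why this encoding is faithful.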

Much of the power of Theorem TSS is that we can easily establish new vector spaces if we can locate them as subsets of other vector spaces, such as the ones presented in Subsection VS.EVS.

It can be as instructive to consider some subsets that are not subspaces. Since Theorem TSS is an equivalence (see Technique E) we can be assured that a subset is not a subspace if it violates one of the three conditions, and in any example of interest this will not be the “non-empty” condition. However, since a subspace has to be a vector space in its own right, we can also search for a violation of any one of the ten defining properties in Definition VS or any inherent property of a vector space, such as those given by the basic theorems of Subsection VS.VSP. Notice also that a violation need only be for a specific vector or pair of vectors.

Example NSC2Z

A non-subspace in ${\mathbb{C}}^{2}$,
zero vector

Consider the subset $W$
below as a candidate for being a subspace of
${\mathbb{C}}^{2}$

$$W=\left\{\left.\left[\begin{array}{c}\hfill {x}_{1}\hfill \\ \hfill {x}_{2}\hfill \end{array}\right]\phantom{\rule{0em}{0ex}}\right|\phantom{\rule{0em}{0ex}}3{x}_{1}-5{x}_{2}=12\right\}$$

The zero vector of ${\mathbb{C}}^{2}$, $0=\left[\begin{array}{c}\hfill 0\hfill \\ \hfill 0\hfill \end{array}\right]$ will need to be the zero vector in $W$ also. However, $0\notin W$ since $3\left(0\right)-5\left(0\right)=0\ne 12$. So $W$ has no zero vector and fails Property Z of Definition VS. This subset also fails to be closed under addition and scalar multiplication. Can you find examples of this? $\boxtimes$
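The example asks for closure counterexamples; here is one possible choice, coded as a quick check (the particular vectors are our own).

```python
# W = { x in C^2 : 3*x1 - 5*x2 = 12 } from Example NSC2Z.

def in_W(v):
    return 3 * v[0] - 5 * v[1] == 12

x, y = (4, 0), (-1, -3)                  # both satisfy 3*x1 - 5*x2 = 12
print(in_W(x) and in_W(y))               # True
print(in_W((x[0] + y[0], x[1] + y[1])))  # False: additive closure fails
print(in_W((2 * x[0], 2 * x[1])))        # False: scalar closure fails
print(in_W((0, 0)))                      # False: no zero vector
```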

Example NSC2A

A non-subspace in ${\mathbb{C}}^{2}$,
additive closure

Consider the subset $X$
below as a candidate for being a subspace of
${\mathbb{C}}^{2}$

$$X=\left\{\left.\left[\begin{array}{c}\hfill {x}_{1}\hfill \\ \hfill {x}_{2}\hfill \end{array}\right]\phantom{\rule{0em}{0ex}}\right|\phantom{\rule{0em}{0ex}}{x}_{1}{x}_{2}=0\right\}$$

You can check that $0\in X$, so the approach of the last example will not get us anywhere. However, notice that $x=\left[\begin{array}{c}\hfill 1\hfill \\ \hfill 0\hfill \end{array}\right]\in X$ and $y=\left[\begin{array}{c}\hfill 0\hfill \\ \hfill 1\hfill \end{array}\right]\in X$. Yet

$$x+y=\left[\begin{array}{c}\hfill 1\hfill \\ \hfill 0\hfill \end{array}\right]+\left[\begin{array}{c}\hfill 0\hfill \\ \hfill 1\hfill \end{array}\right]=\left[\begin{array}{c}\hfill 1\hfill \\ \hfill 1\hfill \end{array}\right]\notin X$$

So $X$ fails the additive closure requirement of either Property AC or Theorem TSS, and is therefore not a subspace. $\boxtimes$
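The counterexample of Example NSC2A, as a quick check (our own code): each coordinate axis lies inside $X$, but the sum of an $x$-axis vector and a $y$-axis vector escapes.

```python
# X = { x in C^2 : x1 * x2 = 0 } from Example NSC2A.

def in_X(v):
    return v[0] * v[1] == 0

x, y = (1, 0), (0, 1)
s = (x[0] + y[0], x[1] + y[1])    # (1, 1)
print(in_X(x), in_X(y), in_X(s))  # True True False
```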

Example NSC2S

A non-subspace in ${\mathbb{C}}^{2}$,
scalar multiplication closure

Consider the subset $Y$
below as a candidate for being a subspace of
${\mathbb{C}}^{2}$

$$Y=\left\{\left.\left[\begin{array}{c}\hfill {x}_{1}\hfill \\ \hfill {x}_{2}\hfill \end{array}\right]\phantom{\rule{0em}{0ex}}\right|\phantom{\rule{0em}{0ex}}{x}_{1}\in \mathbb{Z},\phantom{\rule{0em}{0ex}}{x}_{2}\in \mathbb{Z}\right\}$$

$\mathbb{Z}$ is the set of integers, so we are only allowing “whole numbers” as the constituents of our vectors. Now, $0\in Y$, and additive closure also holds (can you prove these claims?). So we will have to try something different. Note that $\alpha =\frac{1}{2}\in {\mathbb{C}}$ and $x=\left[\begin{array}{c}\hfill 2\hfill \\ \hfill 3\hfill \end{array}\right]\in Y$, but

$$\alpha x=\frac{1}{2}\left[\begin{array}{c}\hfill 2\hfill \\ \hfill 3\hfill \end{array}\right]=\left[\begin{array}{c}\hfill 1\hfill \\ \hfill \frac{3}{2}\hfill \end{array}\right]\notin Y$$

So $Y$ fails the scalar multiplication closure requirement of either Property SC or Theorem TSS, and is therefore not a subspace. $\boxtimes$
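Again the counterexample can be verified mechanically. This is our own code; the membership test simply accepts any vector whose coordinates have zero fractional part.

```python
# Y = integer-coordinate vectors from Example NSC2S: closed under
# addition, but not under scalar multiplication.

def in_Y(v):
    return all(float(c).is_integer() for c in v)

x = (2, 3)
half_x = (0.5 * x[0], 0.5 * x[1])   # (1.0, 1.5)
print(in_Y(x))       # True
print(in_Y(half_x))  # False: 3/2 is not an integer
```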

There are two examples of subspaces that are trivial. Suppose that $V$ is any vector space. Then $V$ is a subset of itself and is a vector space. By Definition S, $V$ qualifies as a subspace of itself. The set containing just the zero vector $Z=\left\{0\right\}$ is also a subspace as can be seen by applying Theorem TSS or by simple modifications of the techniques hinted at in Example VSS. Since these subspaces are so obvious (and therefore not too interesting) we will refer to them as being trivial.

Definition TS

Trivial Subspaces

Given the vector space $V$,
the subspaces $V$ and
$\left\{0\right\}$ are each called a
trivial subspace. $\triangle$

We can also use Theorem TSS to prove more general statements about subspaces, as illustrated in the next theorem.

Theorem NSMS

Null Space of a Matrix is a Subspace

Suppose that $A$ is
an $m\times n$ matrix. Then
the null space of $A$,
$\mathcal{N}\phantom{\rule{0em}{0ex}}\left(A\right)$, is a
subspace of ${\mathbb{C}}^{n}$.
$\square $

Proof We will examine the three requirements of Theorem TSS. Recall that $\mathcal{N}\phantom{\rule{0em}{0ex}}\left(A\right)=\left\{\left.x\in {\mathbb{C}}^{n}\phantom{\rule{0em}{0ex}}\right|\phantom{\rule{0em}{0ex}}Ax=0\right\}$.

First, $0\in \mathcal{N}\phantom{\rule{0em}{0ex}}\left(A\right)$, which can be inferred as a consequence of Theorem HSC. So $\mathcal{N}\phantom{\rule{0em}{0ex}}\left(A\right)\ne \varnothing $.

Second, check additive closure by supposing that $x\in \mathcal{N}\phantom{\rule{0em}{0ex}}\left(A\right)$ and $y\in \mathcal{N}\phantom{\rule{0em}{0ex}}\left(A\right)$. So we know a little something about $x$ and $y$: $Ax=0$ and $Ay=0$, and that is all we know. Question: Is $x+y\in \mathcal{N}\phantom{\rule{0em}{0ex}}\left(A\right)$? Let’s check.

$$\begin{array}{llllllll}\hfill A\left(x+y\right)& =Ax+Ay\phantom{\rule{2em}{0ex}}& \hfill & \text{}\text{Theorem MMDAA}\text{}\phantom{\rule{2em}{0ex}}& \hfill & \phantom{\rule{2em}{0ex}}& \hfill & \phantom{\rule{2em}{0ex}}\\ \hfill & =0+0\phantom{\rule{2em}{0ex}}& \hfill & x\in \mathcal{N}\phantom{\rule{0em}{0ex}}\left(A\right),\phantom{\rule{0ex}{0ex}}y\in \mathcal{N}\phantom{\rule{0em}{0ex}}\left(A\right)\phantom{\rule{2em}{0ex}}& \hfill & \phantom{\rule{2em}{0ex}}& \hfill & \phantom{\rule{2em}{0ex}}\\ \hfill & =0\phantom{\rule{2em}{0ex}}& \hfill & \text{}\text{Theorem VSPCV}\text{}\phantom{\rule{2em}{0ex}}& \hfill & \phantom{\rule{2em}{0ex}}& \hfill & \phantom{\rule{2em}{0ex}}\end{array}$$So, yes, $x+y$ qualifies for membership in $\mathcal{N}\phantom{\rule{0em}{0ex}}\left(A\right)$.

Third, check scalar multiplication closure by supposing that $\alpha \in {\mathbb{C}}$ and $x\in \mathcal{N}\phantom{\rule{0em}{0ex}}\left(A\right)$. So we know a little something about $x$: $Ax=0$, and that is all we know. Question: Is $\alpha x\in \mathcal{N}\phantom{\rule{0em}{0ex}}\left(A\right)$? Let’s check.

$$\begin{array}{llllllll}\hfill A\left(\alpha x\right)& =\alpha \left(Ax\right)\phantom{\rule{2em}{0ex}}& \hfill & \text{}\text{Theorem MMSMM}\text{}\phantom{\rule{2em}{0ex}}& \hfill & \phantom{\rule{2em}{0ex}}& \hfill & \phantom{\rule{2em}{0ex}}\\ \hfill & =\alpha 0\phantom{\rule{2em}{0ex}}& \hfill & x\in \mathcal{N}\phantom{\rule{0em}{0ex}}\left(A\right)\phantom{\rule{2em}{0ex}}& \hfill & \phantom{\rule{2em}{0ex}}& \hfill & \phantom{\rule{2em}{0ex}}\\ \hfill & =0\phantom{\rule{2em}{0ex}}& \hfill & \text{}\text{Theorem ZVSM}\text{}\phantom{\rule{2em}{0ex}}& \hfill & \phantom{\rule{2em}{0ex}}& \hfill & \phantom{\rule{2em}{0ex}}\end{array}$$So, yes, $\alpha x$ qualifies for membership in $\mathcal{N}\phantom{\rule{0em}{0ex}}\left(A\right)$.

Having met the three conditions in Theorem TSS we can now say that the null space of a matrix is a subspace (and hence a vector space in its own right!). $\blacksquare$
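The two closure computations in this proof can be illustrated numerically. The matrix and null space vectors below are our own example, not from the text.

```python
# Theorem NSMS in miniature: vectors sent to zero by A stay at zero
# under vector addition and scalar multiplication.

def matvec(A, v):
    return [sum(a * b for a, b in zip(row, v)) for row in A]

A = [[1, 2, 3],
     [2, 4, 6]]
x = [1, 1, -1]   # matvec(A, x) == [0, 0], so x is in N(A)
y = [3, 0, -1]   # matvec(A, y) == [0, 0], so y is in N(A)

print(matvec(A, [a + b for a, b in zip(x, y)]))  # [0, 0]: additive closure
print(matvec(A, [2 * c for c in x]))             # [0, 0]: scalar closure
```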

Here is an example where we can exercise Theorem NSMS.

Example RSNS

Recasting a subspace as a null space

Consider the subset of ${\mathbb{C}}^{5}$
defined as

$$W=\left\{\left.\left[\begin{array}{c}\hfill {x}_{1}\hfill \\ \hfill {x}_{2}\hfill \\ \hfill {x}_{3}\hfill \\ \hfill {x}_{4}\hfill \\ \hfill {x}_{5}\hfill \end{array}\right]\phantom{\rule{0em}{0ex}}\right|\phantom{\rule{0em}{0ex}}\begin{array}{c}3{x}_{1}+{x}_{2}-5{x}_{3}+7{x}_{4}+{x}_{5}=0,\hfill \\ 4{x}_{1}+6{x}_{2}+3{x}_{3}-6{x}_{4}-5{x}_{5}=0,\hfill \\ -2{x}_{1}+4{x}_{2}+7{x}_{4}+{x}_{5}=0\hfill \end{array}\right\}$$

It is possible to show that $W$ is a subspace of ${\mathbb{C}}^{5}$ by checking the three conditions of Theorem TSS directly, but it will get tedious rather quickly. Instead, give $W$ a fresh look and notice that it is a set of solutions to a homogeneous system of equations. Define the matrix

$$A=\left[\begin{array}{ccccc}\hfill 3\hfill & \hfill 1\hfill & \hfill -5\hfill & \hfill 7\hfill & \hfill 1\hfill \\ \hfill 4\hfill & \hfill 6\hfill & \hfill 3\hfill & \hfill -6\hfill & \hfill -5\hfill \\ \hfill -2\hfill & \hfill 4\hfill & \hfill 0\hfill & \hfill 7\hfill & \hfill 1\hfill \end{array}\right]$$

and then recognize that $W=\mathcal{N}\phantom{\rule{0em}{0ex}}\left(A\right)$. By Theorem NSMS we can immediately see that $W$ is a subspace. Boom! $\boxtimes$
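As a sketch of the recasting in Example RSNS (our own code), checking the three defining equations of $W$ one at a time and checking the single matrix condition $Ax=0$ are the same test, since the rows of $A$ are exactly the coefficient lists of the equations.

```python
# W recast as N(A): the per-equation membership test agrees with Ax = 0.

A = [[ 3, 1, -5,  7,  1],
     [ 4, 6,  3, -6, -5],
     [-2, 4,  0,  7,  1]]

def matvec(A, v):
    return [sum(a * b for a, b in zip(row, v)) for row in A]

def in_W(v):
    x1, x2, x3, x4, x5 = v
    return (3*x1 + x2 - 5*x3 + 7*x4 + x5 == 0
            and 4*x1 + 6*x2 + 3*x3 - 6*x4 - 5*x5 == 0
            and -2*x1 + 4*x2 + 7*x4 + x5 == 0)

# The two tests agree on every sample, member or not.
for v in [(0, 0, 0, 0, 0), (1, 2, 3, 4, 5), (1, -1, 0, 0, 2)]:
    print(in_W(v) == (matvec(A, v) == [0, 0, 0]))  # True each time
```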

The span of a set of column vectors got a heavy workout in Chapter V and Chapter M. The definition of the span depended only on being able to formulate linear combinations. In any of our more general vector spaces we always have a definition of vector addition and of scalar multiplication. So we can build linear combinations and manufacture spans. This subsection contains two definitions that are just mild variants of definitions we have seen earlier for column vectors. If you haven’t already, compare them with Definition LCCV and Definition SSCV.

Definition LC

Linear Combination

Suppose that $V$ is a
vector space. Given $n$
vectors ${u}_{1},\phantom{\rule{0em}{0ex}}{u}_{2},\phantom{\rule{0em}{0ex}}{u}_{3},\phantom{\rule{0em}{0ex}}\dots ,\phantom{\rule{0em}{0ex}}{u}_{n}$
and $n$
scalars ${\alpha}_{1},\phantom{\rule{0em}{0ex}}{\alpha}_{2},\phantom{\rule{0em}{0ex}}{\alpha}_{3},\phantom{\rule{0em}{0ex}}\dots ,\phantom{\rule{0em}{0ex}}{\alpha}_{n}$,
their linear combination is the vector

$${\alpha}_{1}{u}_{1}+{\alpha}_{2}{u}_{2}+{\alpha}_{3}{u}_{3}+\cdots +{\alpha}_{n}{u}_{n}.$$

Example LCM

A linear combination of matrices

In the vector space ${M}_{23}$
of $2\times 3$
matrices, we have the vectors

$$x=\left[\begin{array}{ccc}\hfill 1\hfill & \hfill 3\hfill & \hfill -2\hfill \\ \hfill 2\hfill & \hfill 0\hfill & \hfill 7\hfill \end{array}\right]\phantom{\rule{2em}{0ex}}y=\left[\begin{array}{ccc}\hfill 3\hfill & \hfill -1\hfill & \hfill 2\hfill \\ \hfill 5\hfill & \hfill 5\hfill & \hfill 1\hfill \end{array}\right]\phantom{\rule{2em}{0ex}}z=\left[\begin{array}{ccc}\hfill 4\hfill & \hfill 2\hfill & \hfill -4\hfill \\ \hfill 1\hfill & \hfill 1\hfill & \hfill 1\hfill \end{array}\right]$$

and we can form linear combinations such as

$$\begin{array}{llll}\hfill 2x+4y+\left(-1\right)z& =2\left[\begin{array}{ccc}\hfill 1\hfill & \hfill 3\hfill & \hfill -2\hfill \\ \hfill 2\hfill & \hfill 0\hfill & \hfill 7\hfill \end{array}\right]+4\left[\begin{array}{ccc}\hfill 3\hfill & \hfill -1\hfill & \hfill 2\hfill \\ \hfill 5\hfill & \hfill 5\hfill & \hfill 1\hfill \end{array}\right]+\left(-1\right)\left[\begin{array}{ccc}\hfill 4\hfill & \hfill 2\hfill & \hfill -4\hfill \\ \hfill 1\hfill & \hfill 1\hfill & \hfill 1\hfill \end{array}\right]\phantom{\rule{2em}{0ex}}& \hfill & \phantom{\rule{2em}{0ex}}\\ \hfill & =\left[\begin{array}{ccc}\hfill 2\hfill & \hfill 6\hfill & \hfill -4\hfill \\ \hfill 4\hfill & \hfill 0\hfill & \hfill 14\hfill \end{array}\right]+\left[\begin{array}{ccc}\hfill 12\hfill & \hfill -4\hfill & \hfill 8\hfill \\ \hfill 20\hfill & \hfill 20\hfill & \hfill 4\hfill \end{array}\right]+\left[\begin{array}{ccc}\hfill -4\hfill & \hfill -2\hfill & \hfill 4\hfill \\ \hfill -1\hfill & \hfill -1\hfill & \hfill -1\hfill \end{array}\right]\phantom{\rule{2em}{0ex}}& \hfill & \phantom{\rule{2em}{0ex}}\\ \hfill & =\left[\begin{array}{ccc}\hfill 10\hfill & \hfill 0\hfill & \hfill 8\hfill \\ \hfill 23\hfill & \hfill 19\hfill & \hfill 17\hfill \end{array}\right]\phantom{\rule{2em}{0ex}}& \hfill & \\ \multicolumn{4}{c}{\text{or,}}\\ \phantom{\rule{2em}{0ex}}\\ \hfill 4x-2y+3z& =4\left[\begin{array}{ccc}\hfill 1\hfill & \hfill 3\hfill & \hfill -2\hfill \\ \hfill 2\hfill & \hfill 0\hfill & \hfill 7\hfill \end{array}\right]-2\left[\begin{array}{ccc}\hfill 3\hfill & \hfill -1\hfill & \hfill 2\hfill \\ \hfill 5\hfill & \hfill 5\hfill & \hfill 1\hfill \end{array}\right]+3\left[\begin{array}{ccc}\hfill 4\hfill & \hfill 2\hfill & \hfill -4\hfill \\ \hfill 1\hfill & \hfill 1\hfill & \hfill 1\hfill \end{array}\right]\phantom{\rule{2em}{0ex}}& \hfill & \phantom{\rule{2em}{0ex}}\\ \hfill & =\left[\begin{array}{ccc}\hfill 4\hfill & \hfill 12\hfill & \hfill -8\hfill \\ \hfill 8\hfill & \hfill 0\hfill & \hfill 28\hfill 
\end{array}\right]+\left[\begin{array}{ccc}\hfill -6\hfill & \hfill 2\hfill & \hfill -4\hfill \\ \hfill -10\hfill & \hfill -10\hfill & \hfill -2\hfill \end{array}\right]+\left[\begin{array}{ccc}\hfill 12\hfill & \hfill 6\hfill & \hfill -12\hfill \\ \hfill 3\hfill & \hfill 3\hfill & \hfill 3\hfill \end{array}\right]\phantom{\rule{2em}{0ex}}& \hfill & \phantom{\rule{2em}{0ex}}\\ \hfill & =\left[\begin{array}{ccc}\hfill 10\hfill & \hfill 20\hfill & \hfill -24\hfill \\ \hfill 1\hfill & \hfill -7\hfill & \hfill 29\hfill \end{array}\right]\phantom{\rule{2em}{0ex}}& \hfill & \phantom{\rule{2em}{0ex}}\end{array}$$ $\boxtimes$

When we realize that we can form linear combinations in any vector space, then it is natural to revisit our definition of the span of a set, since it is the set of all possible linear combinations of a set of vectors.
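The two matrix linear combinations of Example LCM can be reproduced with a short computation; this sketch (our own) stores each $2\times 3$ matrix as a list of rows.

```python
# The three matrices of Example LCM, as lists of rows.
x = [[1, 3, -2], [2, 0, 7]]
y = [[3, -1, 2], [5, 5, 1]]
z = [[4, 2, -4], [1, 1, 1]]

def lin_comb(coeffs, mats):
    """Entry-wise linear combination of equal-sized matrices."""
    rows, cols = len(mats[0]), len(mats[0][0])
    return [[sum(c * M[i][j] for c, M in zip(coeffs, mats))
             for j in range(cols)] for i in range(rows)]

print(lin_comb([2, 4, -1], [x, y, z]))  # [[10, 0, 8], [23, 19, 17]]
print(lin_comb([4, -2, 3], [x, y, z]))  # [[10, 20, -24], [1, -7, 29]]
```

Both printed results match the hand computations in the example.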

Definition SS

Span of a Set

Suppose that $V$ is a vector
space. Given a set of vectors $S=\left\{{u}_{1},\phantom{\rule{0em}{0ex}}{u}_{2},\phantom{\rule{0em}{0ex}}{u}_{3},\phantom{\rule{0em}{0ex}}\dots ,\phantom{\rule{0em}{0ex}}{u}_{t}\right\}$,
their span, $\langle S\rangle $,
is the set of all possible linear combinations of
${u}_{1},\phantom{\rule{0em}{0ex}}{u}_{2},\phantom{\rule{0em}{0ex}}{u}_{3},\phantom{\rule{0em}{0ex}}\dots ,\phantom{\rule{0em}{0ex}}{u}_{t}$.
Symbolically,
$$\langle S\rangle =\left\{\left.{\alpha}_{1}{u}_{1}+{\alpha}_{2}{u}_{2}+{\alpha}_{3}{u}_{3}+\cdots +{\alpha}_{t}{u}_{t}\phantom{\rule{0em}{0ex}}\right|\phantom{\rule{0em}{0ex}}{\alpha}_{i}\in {\mathbb{C}},\phantom{\rule{0em}{0ex}}1\le i\le t\right\}$$ $\triangle$

Theorem SSS

Span of a Set is a Subspace

Suppose $V$ is a vector space.
Given a set of vectors $S=\left\{{u}_{1},\phantom{\rule{0em}{0ex}}{u}_{2},\phantom{\rule{0em}{0ex}}{u}_{3},\phantom{\rule{0em}{0ex}}\dots ,\phantom{\rule{0em}{0ex}}{u}_{t}\right\}\subseteq V$,
their span, $\langle S\rangle $, is
a subspace. $\square $

Proof We will verify the three conditions of Theorem TSS. First,

$$\begin{array}{llllllll}\hfill 0& =0+0+0+\dots +0\phantom{\rule{2em}{0ex}}& \hfill & \text{Property Z for }V\phantom{\rule{2em}{0ex}}& \hfill & \phantom{\rule{2em}{0ex}}& \hfill & \phantom{\rule{2em}{0ex}}\\ \hfill & =0{u}_{1}+0{u}_{2}+0{u}_{3}+\cdots +0{u}_{t}\phantom{\rule{2em}{0ex}}& \hfill & \text{Theorem ZSSM}\phantom{\rule{2em}{0ex}}& \hfill & \phantom{\rule{2em}{0ex}}& \hfill & \phantom{\rule{2em}{0ex}}\end{array}$$So we have written $0$ as a linear combination of the vectors in $S$ and by Definition SS, $0\in \langle S\rangle $ and therefore $\langle S\rangle \ne \varnothing $.

Second, suppose $x\in \langle S\rangle $ and $y\in \langle S\rangle $. Can we conclude that $x+y\in \langle S\rangle $? What do we know about $x$ and $y$ by virtue of their membership in $\langle S\rangle $? There must be scalars from ${\mathbb{C}}$, ${\alpha}_{1},\phantom{\rule{0em}{0ex}}{\alpha}_{2},\phantom{\rule{0em}{0ex}}{\alpha}_{3},\phantom{\rule{0em}{0ex}}\dots ,\phantom{\rule{0em}{0ex}}{\alpha}_{t}$ and ${\beta}_{1},\phantom{\rule{0em}{0ex}}{\beta}_{2},\phantom{\rule{0em}{0ex}}{\beta}_{3},\phantom{\rule{0em}{0ex}}\dots ,\phantom{\rule{0em}{0ex}}{\beta}_{t}$ so that

$$\begin{array}{llll}\hfill x& ={\alpha}_{1}{u}_{1}+{\alpha}_{2}{u}_{2}+{\alpha}_{3}{u}_{3}+\cdots +{\alpha}_{t}{u}_{t}\phantom{\rule{2em}{0ex}}& \hfill & \phantom{\rule{2em}{0ex}}\\ \hfill y& ={\beta}_{1}{u}_{1}+{\beta}_{2}{u}_{2}+{\beta}_{3}{u}_{3}+\cdots +{\beta}_{t}{u}_{t}\phantom{\rule{2em}{0ex}}& \hfill & \phantom{\rule{2em}{0ex}}\end{array}$$Then

$$\begin{array}{llllll}\hfill x+y& ={\alpha}_{1}{u}_{1}+{\alpha}_{2}{u}_{2}+{\alpha}_{3}{u}_{3}+\cdots +{\alpha}_{t}{u}_{t}\phantom{\rule{2em}{0ex}}& \hfill & \phantom{\rule{2em}{0ex}}& \hfill & \phantom{\rule{2em}{0ex}}\\ \hfill & \phantom{\rule{1em}{0ex}}\phantom{\rule{1em}{0ex}}+{\beta}_{1}{u}_{1}+{\beta}_{2}{u}_{2}+{\beta}_{3}{u}_{3}+\cdots +{\beta}_{t}{u}_{t}\phantom{\rule{2em}{0ex}}& \hfill & \phantom{\rule{2em}{0ex}}& \hfill & \phantom{\rule{2em}{0ex}}\\ \hfill & ={\alpha}_{1}{u}_{1}+{\beta}_{1}{u}_{1}+{\alpha}_{2}{u}_{2}+{\beta}_{2}{u}_{2}\phantom{\rule{2em}{0ex}}& \hfill & \phantom{\rule{2em}{0ex}}& \hfill & \phantom{\rule{2em}{0ex}}\\ \hfill & \phantom{\rule{1em}{0ex}}\phantom{\rule{1em}{0ex}}+{\alpha}_{3}{u}_{3}+{\beta}_{3}{u}_{3}+\cdots +{\alpha}_{t}{u}_{t}+{\beta}_{t}{u}_{t}\phantom{\rule{2em}{0ex}}& \hfill & \text{Property AA, Property C}\phantom{\rule{2em}{0ex}}& \hfill & \phantom{\rule{2em}{0ex}}& \hfill & \phantom{\rule{2em}{0ex}}\\ \hfill & =\left({\alpha}_{1}+{\beta}_{1}\right){u}_{1}+\left({\alpha}_{2}+{\beta}_{2}\right){u}_{2}\phantom{\rule{2em}{0ex}}& \hfill & \phantom{\rule{2em}{0ex}}& \hfill & \phantom{\rule{2em}{0ex}}\\ \hfill & \phantom{\rule{1em}{0ex}}\phantom{\rule{1em}{0ex}}+\left({\alpha}_{3}+{\beta}_{3}\right){u}_{3}+\cdots +\left({\alpha}_{t}+{\beta}_{t}\right){u}_{t}\phantom{\rule{2em}{0ex}}& \hfill & \text{Property DSA}\phantom{\rule{2em}{0ex}}& \hfill & \phantom{\rule{2em}{0ex}}& \hfill & \phantom{\rule{2em}{0ex}}\end{array}$$Since each ${\alpha}_{i}+{\beta}_{i}$ is again a scalar from ${\mathbb{C}}$ we have expressed the vector sum $x+y$ as a linear combination of the vectors from $S$, and therefore by Definition SS we can say that $x+y\in \langle S\rangle $.

Third, suppose $\alpha \in {\mathbb{C}}$ and $x\in \langle S\rangle $. Can we conclude that $\alpha x\in \langle S\rangle $? What do we know about $x$ by virtue of its membership in $\langle S\rangle $? There must be scalars from ${\mathbb{C}}$, ${\alpha}_{1},\phantom{\rule{0em}{0ex}}{\alpha}_{2},\phantom{\rule{0em}{0ex}}{\alpha}_{3},\phantom{\rule{0em}{0ex}}\dots ,\phantom{\rule{0em}{0ex}}{\alpha}_{t}$ so that

$$\begin{array}{llll}\hfill x& ={\alpha}_{1}{u}_{1}+{\alpha}_{2}{u}_{2}+{\alpha}_{3}{u}_{3}+\cdots +{\alpha}_{t}{u}_{t}\phantom{\rule{2em}{0ex}}& \hfill & \phantom{\rule{2em}{0ex}}\end{array}$$Then

$$\begin{array}{llllll}\hfill \alpha x& =\alpha \left({\alpha}_{1}{u}_{1}+{\alpha}_{2}{u}_{2}+{\alpha}_{3}{u}_{3}+\cdots +{\alpha}_{t}{u}_{t}\right)\phantom{\rule{2em}{0ex}}& \hfill & \phantom{\rule{2em}{0ex}}& \hfill & \phantom{\rule{2em}{0ex}}\\ \hfill & =\alpha \left({\alpha}_{1}{u}_{1}\right)+\alpha \left({\alpha}_{2}{u}_{2}\right)+\alpha \left({\alpha}_{3}{u}_{3}\right)+\cdots +\alpha \left({\alpha}_{t}{u}_{t}\right)\phantom{\rule{2em}{0ex}}& \hfill & \text{Property DVA}\phantom{\rule{2em}{0ex}}& \hfill & \phantom{\rule{2em}{0ex}}& \hfill & \phantom{\rule{2em}{0ex}}\\ \hfill & =\left(\alpha {\alpha}_{1}\right){u}_{1}+\left(\alpha {\alpha}_{2}\right){u}_{2}+\left(\alpha {\alpha}_{3}\right){u}_{3}+\cdots +\left(\alpha {\alpha}_{t}\right){u}_{t}\phantom{\rule{2em}{0ex}}& \hfill & \text{Property SMA}\phantom{\rule{2em}{0ex}}& \hfill & \phantom{\rule{2em}{0ex}}& \hfill & \phantom{\rule{2em}{0ex}}\end{array}$$Since each $\alpha {\alpha}_{i}$ is again a scalar from ${\mathbb{C}}$ we have expressed the scalar multiple $\alpha x$ as a linear combination of the vectors from $S$, and therefore by Definition SS we can say that $\alpha x\in \langle S\rangle $.

With the three conditions of Theorem TSS met, we can say that $\langle S\rangle $ is a subspace (and so is also a vector space, Definition VS). (See Exercise SS.T20, Exercise SS.T21, Exercise SS.T22.) $\blacksquare$

Example SSP

Span of a set of polynomials

In Example SP4 we proved that

$$W=\left\{\left.p\left(x\right)\phantom{\rule{0em}{0ex}}\right|\phantom{\rule{0em}{0ex}}p\in {P}_{4},\phantom{\rule{0ex}{0ex}}p\left(2\right)=0\right\}$$

is a subspace of ${P}_{4}$, the vector space of polynomials of degree at most 4. Since $W$ is a vector space itself, let’s construct a span within $W$. First let

$$S=\left\{{x}^{4}-4{x}^{3}+5{x}^{2}-x-2,\phantom{\rule{0em}{0ex}}2{x}^{4}-3{x}^{3}-6{x}^{2}+6x+4\right\}$$

and verify that $S$ is a subset of $W$ by checking that each of these two polynomials has $x=2$ as a root. Now, if we define $U=\u2329S\u232a$, then Theorem SSS tells us that $U$ is a subspace of $W$. So quite quickly we have built a chain of subspaces, $U$ inside $W$, and $W$ inside ${P}_{4}$.
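The root check that puts $S$ inside $W$ is easy to automate. Here is a small Python sketch (not part of the text) that evaluates both polynomials of $S$ at $x=2$:

```python
# Evaluate both polynomials of S at x = 2; a zero value certifies
# membership in W, the polynomials of P4 with 2 as a root.
def p1(x):
    return x**4 - 4*x**3 + 5*x**2 - x - 2

def p2(x):
    return 2*x**4 - 3*x**3 - 6*x**2 + 6*x + 4

print(p1(2), p2(2))  # 0 0, so S is a subset of W
```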

Rather than dwell on how quickly we can build subspaces, let’s try to gain a better understanding of just how the span construction creates subspaces, in the context of this example. We can quickly build representative elements of $U$,

$$3\left({x}^{4}-4{x}^{3}+5{x}^{2}-x-2\right)+5\left(2{x}^{4}-3{x}^{3}-6{x}^{2}+6x+4\right)=13{x}^{4}-27{x}^{3}-15{x}^{2}+27x+14$$

and

$$\left(-2\right)\left({x}^{4}-4{x}^{3}+5{x}^{2}-x-2\right)+8\left(2{x}^{4}-3{x}^{3}-6{x}^{2}+6x+4\right)=14{x}^{4}-16{x}^{3}-58{x}^{2}+50x+36$$

and each of these polynomials must be in $W$ since $W$ is closed under addition and scalar multiplication. You might check for yourself that both of these polynomials have $x=2$ as a root.
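Both computations above can be replayed coefficient-by-coefficient. A Python sketch follows, representing a polynomial by its coefficient list from the degree-4 term down to the constant (an assumed convention, not the text's notation):

```python
# Coefficient lists run from the degree-4 term down to the constant.
p1 = [1, -4, 5, -1, -2]
p2 = [2, -3, -6, 6, 4]

def combo(a, b):
    """The linear combination a*p1 + b*p2, coefficient by coefficient."""
    return [a*c + b*d for c, d in zip(p1, p2)]

def eval_at(p, x):
    """Evaluate a coefficient list at x using Horner's method."""
    total = 0
    for c in p:
        total = total * x + c
    return total

first = combo(3, 5)      # 3 times the first polynomial plus 5 times the second
second = combo(-2, 8)    # the second combination computed in the text
print(first)             # [13, -27, -15, 27, 14]
print(second)            # [14, -16, -58, 50, 36]
print(eval_at(first, 2), eval_at(second, 2))  # 0 0: both lie in W
```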

I can tell you that $y=3{x}^{4}-7{x}^{3}-{x}^{2}+7x-2$ is not in $U$, but would you believe me? A first check shows that $y$ does have $x=2$ as a root, but that only shows that $y\in W$. What does $y$ have to do to gain membership in $U=\u2329S\u232a$? It must be a linear combination of the vectors in $S$, ${x}^{4}-4{x}^{3}+5{x}^{2}-x-2$ and $2{x}^{4}-3{x}^{3}-6{x}^{2}+6x+4$. So let’s suppose that $y$ is such a linear combination,

$$\begin{array}{llll}\hfill y& =3{x}^{4}-7{x}^{3}-{x}^{2}+7x-2\phantom{\rule{2em}{0ex}}& \hfill & \phantom{\rule{2em}{0ex}}\\ \hfill & ={\alpha}_{1}\left({x}^{4}-4{x}^{3}+5{x}^{2}-x-2\right)+{\alpha}_{2}\left(2{x}^{4}-3{x}^{3}-6{x}^{2}+6x+4\right)\phantom{\rule{2em}{0ex}}& \hfill & \phantom{\rule{2em}{0ex}}\\ \hfill & =\left({\alpha}_{1}+2{\alpha}_{2}\right){x}^{4}+\left(-4{\alpha}_{1}-3{\alpha}_{2}\right){x}^{3}+\left(5{\alpha}_{1}-6{\alpha}_{2}\right){x}^{2}+\left(-{\alpha}_{1}+6{\alpha}_{2}\right)x+\left(-2{\alpha}_{1}+4{\alpha}_{2}\right)\phantom{\rule{2em}{0ex}}& \hfill & \phantom{\rule{2em}{0ex}}\end{array}$$Notice that the operations above are done in accordance with the definition of the vector space of polynomials (Example VSP). Now, if we equate coefficients, which is the definition of equality for polynomials, then we obtain the system of five linear equations in two variables

$$\begin{array}{llll}\hfill {\alpha}_{1}+2{\alpha}_{2}& =3\phantom{\rule{2em}{0ex}}& \hfill & \phantom{\rule{2em}{0ex}}\\ \hfill -4{\alpha}_{1}-3{\alpha}_{2}& =-7\phantom{\rule{2em}{0ex}}& \hfill & \phantom{\rule{2em}{0ex}}\\ \hfill 5{\alpha}_{1}-6{\alpha}_{2}& =-1\phantom{\rule{2em}{0ex}}& \hfill & \phantom{\rule{2em}{0ex}}\\ \hfill -{\alpha}_{1}+6{\alpha}_{2}& =7\phantom{\rule{2em}{0ex}}& \hfill & \phantom{\rule{2em}{0ex}}\\ \hfill -2{\alpha}_{1}+4{\alpha}_{2}& =-2\phantom{\rule{2em}{0ex}}& \hfill & \phantom{\rule{2em}{0ex}}\end{array}$$Build an augmented matrix from the system and row-reduce,

$$\left[\begin{array}{ccc}\hfill 1\hfill & \hfill 2\hfill & \hfill 3\hfill \\ \hfill -4\hfill & \hfill -3\hfill & \hfill -7\hfill \\ \hfill 5\hfill & \hfill -6\hfill & \hfill -1\hfill \\ \hfill -1\hfill & \hfill 6\hfill & \hfill 7\hfill \\ \hfill -2\hfill & \hfill 4\hfill & \hfill -2\hfill \end{array}\right]\underset{}{\overset{\text{RREF}}{\to}}\left[\begin{array}{ccc}\hfill \text{1}\hfill & \hfill 0\hfill & \hfill 0\hfill \\ \hfill 0\hfill & \hfill \text{1}\hfill & \hfill 0\hfill \\ \hfill 0\hfill & \hfill 0\hfill & \hfill \text{1}\hfill \\ \hfill 0\hfill & \hfill 0\hfill & \hfill 0\hfill \\ \hfill 0\hfill & \hfill 0\hfill & \hfill 0\hfill \end{array}\right]$$

With a leading 1 in the final column of the row-reduced augmented matrix, Theorem RCLS tells us the system of equations is inconsistent. Therefore, there are no scalars, ${\alpha}_{1}$ and ${\alpha}_{2}$, that express $y$ as a linear combination of the elements of $S$. So $y\notin U$. $\u22a0$
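The inconsistency can also be seen without row-reducing: the first two equations of the system determine ${\alpha}_{1}$ and ${\alpha}_{2}$ completely, and those values fail some of the remaining equations. A Python sketch of this shortcut:

```python
# From a1 + 2*a2 = 3 and -4*a1 - 3*a2 = -7: substituting a1 = 3 - 2*a2
# into the second equation gives 5*a2 = 5, so a2 = 1 and a1 = 1.
a2 = 1
a1 = 3 - 2*a2

# Any expression of y as a linear combination must use these scalars,
# so the remaining three equations decide the matter.
remaining = [
    5*a1 - 6*a2 == -1,   # x^2 coefficients
    -a1 + 6*a2 == 7,     # x coefficients
    -2*a1 + 4*a2 == -2,  # constant terms
]
print(remaining)  # [True, False, False]: no such scalars exist, y not in U
```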

Let’s again examine membership in a span.

Example SM32

A subspace of ${M}_{32}$

The set of all $3\times 2$
matrices forms a vector space when we use the operations of matrix addition
(Definition MA) and scalar matrix multiplication (Definition MSM), as was shown
in Example VSM. Consider the subset

$$S=\left\{\left[\begin{array}{cc}\hfill 3\hfill & \hfill 1\hfill \\ \hfill 4\hfill & \hfill 2\hfill \\ \hfill 5\hfill & \hfill -5\hfill \end{array}\right],\phantom{\rule{0em}{0ex}}\left[\begin{array}{cc}\hfill 1\hfill & \hfill 1\hfill \\ \hfill 2\hfill & \hfill -1\hfill \\ \hfill 14\hfill & \hfill -1\hfill \end{array}\right],\phantom{\rule{0em}{0ex}}\left[\begin{array}{cc}\hfill 3\hfill & \hfill -1\hfill \\ \hfill -1\hfill & \hfill 2\hfill \\ \hfill -19\hfill & \hfill -11\hfill \end{array}\right],\phantom{\rule{0em}{0ex}}\left[\begin{array}{cc}\hfill 4\hfill & \hfill 2\hfill \\ \hfill 1\hfill & \hfill -2\hfill \\ \hfill 14\hfill & \hfill -2\hfill \end{array}\right],\phantom{\rule{0em}{0ex}}\left[\begin{array}{cc}\hfill 3\hfill & \hfill 1\hfill \\ \hfill -4\hfill & \hfill 0\hfill \\ \hfill -17\hfill & \hfill 7\hfill \end{array}\right]\right\}$$

and define a new subset of vectors $W$ in ${M}_{32}$ using the span (Definition SS), $W=\u2329S\u232a$. So by Theorem SSS we know that $W$ is a subspace of ${M}_{32}$. While $W$ is an infinite set, the span gives a precise description of it, and it is still worthwhile to investigate whether or not $W$ contains certain elements.

First, is

$$y=\left[\begin{array}{cc}\hfill 9\hfill & \hfill 3\hfill \\ \hfill 7\hfill & \hfill 3\hfill \\ \hfill 10\hfill & \hfill -11\hfill \end{array}\right]$$

in $W$? To answer this, we want to determine if $y$ can be written as a linear combination of the five matrices in $S$. Can we find scalars, ${\alpha}_{1},\phantom{\rule{0em}{0ex}}{\alpha}_{2},\phantom{\rule{0em}{0ex}}{\alpha}_{3},\phantom{\rule{0em}{0ex}}{\alpha}_{4},\phantom{\rule{0em}{0ex}}{\alpha}_{5}$ so that

$$\begin{array}{llll}\hfill \left[\begin{array}{cc}\hfill 9\hfill & \hfill 3\hfill \\ \hfill 7\hfill & \hfill 3\hfill \\ \hfill 10\hfill & \hfill -11\hfill \end{array}\right]& ={\alpha}_{1}\left[\begin{array}{cc}\hfill 3\hfill & \hfill 1\hfill \\ \hfill 4\hfill & \hfill 2\hfill \\ \hfill 5\hfill & \hfill -5\hfill \end{array}\right]+{\alpha}_{2}\left[\begin{array}{cc}\hfill 1\hfill & \hfill 1\hfill \\ \hfill 2\hfill & \hfill -1\hfill \\ \hfill 14\hfill & \hfill -1\hfill \end{array}\right]+{\alpha}_{3}\left[\begin{array}{cc}\hfill 3\hfill & \hfill -1\hfill \\ \hfill -1\hfill & \hfill 2\hfill \\ \hfill -19\hfill & \hfill -11\hfill \end{array}\right]+{\alpha}_{4}\left[\begin{array}{cc}\hfill 4\hfill & \hfill 2\hfill \\ \hfill 1\hfill & \hfill -2\hfill \\ \hfill 14\hfill & \hfill -2\hfill \end{array}\right]+{\alpha}_{5}\left[\begin{array}{cc}\hfill 3\hfill & \hfill 1\hfill \\ \hfill -4\hfill & \hfill 0\hfill \\ \hfill -17\hfill & \hfill 7\hfill \end{array}\right]\phantom{\rule{2em}{0ex}}& \hfill & \phantom{\rule{2em}{0ex}}\\ \hfill & =\left[\begin{array}{cc}\hfill 3{\alpha}_{1}+{\alpha}_{2}+3{\alpha}_{3}+4{\alpha}_{4}+3{\alpha}_{5}\hfill & \hfill {\alpha}_{1}+{\alpha}_{2}-{\alpha}_{3}+2{\alpha}_{4}+{\alpha}_{5}\hfill \\ \hfill 4{\alpha}_{1}+2{\alpha}_{2}-{\alpha}_{3}+{\alpha}_{4}-4{\alpha}_{5}\hfill & \hfill 2{\alpha}_{1}-{\alpha}_{2}+2{\alpha}_{3}-2{\alpha}_{4}\hfill \\ \hfill 5{\alpha}_{1}+14{\alpha}_{2}-19{\alpha}_{3}+14{\alpha}_{4}-17{\alpha}_{5}\hfill & \hfill -5{\alpha}_{1}-{\alpha}_{2}-11{\alpha}_{3}-2{\alpha}_{4}+7{\alpha}_{5}\hfill \end{array}\right]\phantom{\rule{2em}{0ex}}& \hfill & \phantom{\rule{2em}{0ex}}\end{array}$$Using our definition of matrix equality (Definition ME) we can translate this statement into six equations in the five unknowns,

$$\begin{array}{llll}\hfill 3{\alpha}_{1}+{\alpha}_{2}+3{\alpha}_{3}+4{\alpha}_{4}+3{\alpha}_{5}& =9\phantom{\rule{2em}{0ex}}& \hfill & \phantom{\rule{2em}{0ex}}\\ \hfill {\alpha}_{1}+{\alpha}_{2}-{\alpha}_{3}+2{\alpha}_{4}+{\alpha}_{5}& =3\phantom{\rule{2em}{0ex}}& \hfill & \phantom{\rule{2em}{0ex}}\\ \hfill 4{\alpha}_{1}+2{\alpha}_{2}-{\alpha}_{3}+{\alpha}_{4}-4{\alpha}_{5}& =7\phantom{\rule{2em}{0ex}}& \hfill & \phantom{\rule{2em}{0ex}}\\ \hfill 2{\alpha}_{1}-{\alpha}_{2}+2{\alpha}_{3}-2{\alpha}_{4}& =3\phantom{\rule{2em}{0ex}}& \hfill & \phantom{\rule{2em}{0ex}}\\ \hfill 5{\alpha}_{1}+14{\alpha}_{2}-19{\alpha}_{3}+14{\alpha}_{4}-17{\alpha}_{5}& =10\phantom{\rule{2em}{0ex}}& \hfill & \phantom{\rule{2em}{0ex}}\\ \hfill -5{\alpha}_{1}-{\alpha}_{2}-11{\alpha}_{3}-2{\alpha}_{4}+7{\alpha}_{5}& =-11\phantom{\rule{2em}{0ex}}& \hfill & \phantom{\rule{2em}{0ex}}\end{array}$$This is a linear system of equations, which we can represent with an augmented matrix and row-reduce in search of solutions. The matrix that is row-equivalent to the augmented matrix is

$$\left[\begin{array}{cccccc}\hfill \text{1}\hfill & \hfill 0\hfill & \hfill 0\hfill & \hfill 0\hfill & \hfill \frac{5}{8}\hfill & \hfill 2\hfill \\ \hfill 0\hfill & \hfill \text{1}\hfill & \hfill 0\hfill & \hfill 0\hfill & \hfill \frac{-19}{4}\hfill & \hfill -1\hfill \\ \hfill 0\hfill & \hfill 0\hfill & \hfill \text{1}\hfill & \hfill 0\hfill & \hfill \frac{-7}{8}\hfill & \hfill 0\hfill \\ \hfill 0\hfill & \hfill 0\hfill & \hfill 0\hfill & \hfill \text{1}\hfill & \hfill \frac{17}{8}\hfill & \hfill 1\hfill \\ \hfill 0\hfill & \hfill 0\hfill & \hfill 0\hfill & \hfill 0\hfill & \hfill 0\hfill & \hfill 0\hfill \\ \hfill 0\hfill & \hfill 0\hfill & \hfill 0\hfill & \hfill 0\hfill & \hfill 0\hfill & \hfill 0\hfill \end{array}\right]$$

So we recognize that the system is consistent since there is no leading 1 in the final column (Theorem RCLS), and compute $n-r=5-4=1$ free variables (Theorem FVCS). While there are infinitely many solutions, we are only in pursuit of a single solution, so let’s choose the free variable ${\alpha}_{5}=0$ for simplicity’s sake. Then we easily see that ${\alpha}_{1}=2$, ${\alpha}_{2}=-1$, ${\alpha}_{3}=0$, ${\alpha}_{4}=1$. So the scalars ${\alpha}_{1}=2$, ${\alpha}_{2}=-1$, ${\alpha}_{3}=0$, ${\alpha}_{4}=1$, ${\alpha}_{5}=0$ will provide a linear combination of the elements of $S$ that equals $y$, as we can verify by checking,

$$\begin{array}{lll}\hfill \left[\begin{array}{cc}\hfill 9\hfill & \hfill 3\hfill \\ \hfill 7\hfill & \hfill 3\hfill \\ \hfill 10\hfill & \hfill -11\hfill \end{array}\right]=2\left[\begin{array}{cc}\hfill 3\hfill & \hfill 1\hfill \\ \hfill 4\hfill & \hfill 2\hfill \\ \hfill 5\hfill & \hfill -5\hfill \end{array}\right]+\left(-1\right)\left[\begin{array}{cc}\hfill 1\hfill & \hfill 1\hfill \\ \hfill 2\hfill & \hfill -1\hfill \\ \hfill 14\hfill & \hfill -1\hfill \end{array}\right]+\left(1\right)\left[\begin{array}{cc}\hfill 4\hfill & \hfill 2\hfill \\ \hfill 1\hfill & \hfill -2\hfill \\ \hfill 14\hfill & \hfill -2\hfill \end{array}\right]& \phantom{\rule{2em}{0ex}}& \hfill \end{array}$$So with one particular linear combination in hand, we are convinced that $y$ deserves to be a member of $W=\u2329S\u232a$. Second, is

$$x=\left[\begin{array}{cc}\hfill 2\hfill & \hfill 1\hfill \\ \hfill 3\hfill & \hfill 1\hfill \\ \hfill 4\hfill & \hfill -2\hfill \end{array}\right]$$

in $W$? To answer this, we want to determine if $x$ can be written as a linear combination of the five matrices in $S$. Can we find scalars, ${\alpha}_{1},\phantom{\rule{0em}{0ex}}{\alpha}_{2},\phantom{\rule{0em}{0ex}}{\alpha}_{3},\phantom{\rule{0em}{0ex}}{\alpha}_{4},\phantom{\rule{0em}{0ex}}{\alpha}_{5}$ so that

$$\begin{array}{llll}\hfill \left[\begin{array}{cc}\hfill 2\hfill & \hfill 1\hfill \\ \hfill 3\hfill & \hfill 1\hfill \\ \hfill 4\hfill & \hfill -2\hfill \end{array}\right]& ={\alpha}_{1}\left[\begin{array}{cc}\hfill 3\hfill & \hfill 1\hfill \\ \hfill 4\hfill & \hfill 2\hfill \\ \hfill 5\hfill & \hfill -5\hfill \end{array}\right]+{\alpha}_{2}\left[\begin{array}{cc}\hfill 1\hfill & \hfill 1\hfill \\ \hfill 2\hfill & \hfill -1\hfill \\ \hfill 14\hfill & \hfill -1\hfill \end{array}\right]+{\alpha}_{3}\left[\begin{array}{cc}\hfill 3\hfill & \hfill -1\hfill \\ \hfill -1\hfill & \hfill 2\hfill \\ \hfill -19\hfill & \hfill -11\hfill \end{array}\right]+{\alpha}_{4}\left[\begin{array}{cc}\hfill 4\hfill & \hfill 2\hfill \\ \hfill 1\hfill & \hfill -2\hfill \\ \hfill 14\hfill & \hfill -2\hfill \end{array}\right]+{\alpha}_{5}\left[\begin{array}{cc}\hfill 3\hfill & \hfill 1\hfill \\ \hfill -4\hfill & \hfill 0\hfill \\ \hfill -17\hfill & \hfill 7\hfill \end{array}\right]\phantom{\rule{2em}{0ex}}& \hfill & \phantom{\rule{2em}{0ex}}\\ \hfill & =\left[\begin{array}{cc}\hfill 3{\alpha}_{1}+{\alpha}_{2}+3{\alpha}_{3}+4{\alpha}_{4}+3{\alpha}_{5}\hfill & \hfill {\alpha}_{1}+{\alpha}_{2}-{\alpha}_{3}+2{\alpha}_{4}+{\alpha}_{5}\hfill \\ \hfill 4{\alpha}_{1}+2{\alpha}_{2}-{\alpha}_{3}+{\alpha}_{4}-4{\alpha}_{5}\hfill & \hfill 2{\alpha}_{1}-{\alpha}_{2}+2{\alpha}_{3}-2{\alpha}_{4}\hfill \\ \hfill 5{\alpha}_{1}+14{\alpha}_{2}-19{\alpha}_{3}+14{\alpha}_{4}-17{\alpha}_{5}\hfill & \hfill -5{\alpha}_{1}-{\alpha}_{2}-11{\alpha}_{3}-2{\alpha}_{4}+7{\alpha}_{5}\hfill \end{array}\right]\phantom{\rule{2em}{0ex}}& \hfill & \phantom{\rule{2em}{0ex}}\end{array}$$Using our definition of matrix equality (Definition ME) we can translate this statement into six equations in the five unknowns,

$$\begin{array}{llll}\hfill 3{\alpha}_{1}+{\alpha}_{2}+3{\alpha}_{3}+4{\alpha}_{4}+3{\alpha}_{5}& =2\phantom{\rule{2em}{0ex}}& \hfill & \phantom{\rule{2em}{0ex}}\\ \hfill {\alpha}_{1}+{\alpha}_{2}-{\alpha}_{3}+2{\alpha}_{4}+{\alpha}_{5}& =1\phantom{\rule{2em}{0ex}}& \hfill & \phantom{\rule{2em}{0ex}}\\ \hfill 4{\alpha}_{1}+2{\alpha}_{2}-{\alpha}_{3}+{\alpha}_{4}-4{\alpha}_{5}& =3\phantom{\rule{2em}{0ex}}& \hfill & \phantom{\rule{2em}{0ex}}\\ \hfill 2{\alpha}_{1}-{\alpha}_{2}+2{\alpha}_{3}-2{\alpha}_{4}& =1\phantom{\rule{2em}{0ex}}& \hfill & \phantom{\rule{2em}{0ex}}\\ \hfill 5{\alpha}_{1}+14{\alpha}_{2}-19{\alpha}_{3}+14{\alpha}_{4}-17{\alpha}_{5}& =4\phantom{\rule{2em}{0ex}}& \hfill & \phantom{\rule{2em}{0ex}}\\ \hfill -5{\alpha}_{1}-{\alpha}_{2}-11{\alpha}_{3}-2{\alpha}_{4}+7{\alpha}_{5}& =-2\phantom{\rule{2em}{0ex}}& \hfill & \phantom{\rule{2em}{0ex}}\end{array}$$This is a linear system of equations, which we can represent with an augmented matrix and row-reduce in search of solutions. The matrix that is row-equivalent to the augmented matrix is

$$\left[\begin{array}{cccccc}\hfill \text{1}\hfill & \hfill 0\hfill & \hfill 0\hfill & \hfill 0\hfill & \hfill \frac{5}{8}\hfill & \hfill 0\hfill \\ \hfill 0\hfill & \hfill \text{1}\hfill & \hfill 0\hfill & \hfill 0\hfill & \hfill \frac{-19}{4}\hfill & \hfill 0\hfill \\ \hfill 0\hfill & \hfill 0\hfill & \hfill \text{1}\hfill & \hfill 0\hfill & \hfill \frac{-7}{8}\hfill & \hfill 0\hfill \\ \hfill 0\hfill & \hfill 0\hfill & \hfill 0\hfill & \hfill \text{1}\hfill & \hfill \frac{17}{8}\hfill & \hfill 0\hfill \\ \hfill 0\hfill & \hfill 0\hfill & \hfill 0\hfill & \hfill 0\hfill & \hfill 0\hfill & \hfill \text{1}\hfill \\ \hfill 0\hfill & \hfill 0\hfill & \hfill 0\hfill & \hfill 0\hfill & \hfill 0\hfill & \hfill 0\hfill \end{array}\right]$$

With a leading 1 in the last column Theorem RCLS tells us that the system is inconsistent. Therefore, there are no values for the scalars that will place $x$ in $W$, and so we conclude that $x\notin W$. $\u22a0$
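Both membership questions in this example come down to the consistency of a linear system, so one exact elimination routine settles them both. A Python sketch using rational arithmetic (the helper name `is_consistent` is ours, not the text's):

```python
from fractions import Fraction

def is_consistent(aug):
    """Row-reduce an augmented matrix over the rationals and report
    whether the corresponding linear system has a solution."""
    m = [[Fraction(x) for x in row] for row in aug]
    rows, cols = len(m), len(m[0])
    r = 0
    for c in range(cols - 1):               # never pivot on the final column
        pivot = next((i for i in range(r, rows) if m[i][c] != 0), None)
        if pivot is None:
            continue
        m[r], m[pivot] = m[pivot], m[r]
        pv = m[r][c]
        m[r] = [entry / pv for entry in m[r]]
        for i in range(rows):
            if i != r and m[i][c] != 0:
                factor = m[i][c]
                m[i] = [a - factor * b for a, b in zip(m[i], m[r])]
        r += 1
    # Theorem RCLS in computational form: inconsistent exactly when
    # some row reads [0 ... 0 | nonzero].
    return not any(all(v == 0 for v in row[:-1]) and row[-1] != 0 for row in m)

# Coefficients of the six equations shared by both systems.
coeffs = [[3, 1, 3, 4, 3], [1, 1, -1, 2, 1], [4, 2, -1, 1, -4],
          [2, -1, 2, -2, 0], [5, 14, -19, 14, -17], [-5, -1, -11, -2, 7]]
y = [9, 3, 7, 3, 10, -11]
x = [2, 1, 3, 1, 4, -2]
print(is_consistent([row + [b] for row, b in zip(coeffs, y)]))  # True:  y in W
print(is_consistent([row + [b] for row, b in zip(coeffs, x)]))  # False: x not in W
```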

Notice how Example SSP and Example SM32 contained questions about membership in a span, but these questions quickly became questions about solutions to a system of linear equations. This will be a common theme going forward.

Several of the subsets of vector spaces that we worked with in Chapter M are also subspaces — they are closed under vector addition and scalar multiplication in ${\u2102}^{m}$.

Theorem CSMS

Column Space of a Matrix is a Subspace

Suppose that $A$
is an $m\times n$ matrix.
Then $\mathcal{C}\phantom{\rule{0em}{0ex}}\left(A\right)$ is a
subspace of ${\u2102}^{m}$.
$\square $

Proof Definition CSM shows us that $\mathcal{C}\phantom{\rule{0em}{0ex}}\left(A\right)$ is a subset of ${\u2102}^{m}$, and that it is defined as the span of a set of vectors from ${\u2102}^{m}$ (the columns of the matrix). Since $\mathcal{C}\phantom{\rule{0em}{0ex}}\left(A\right)$ is a span, Theorem SSS says it is a subspace. $\u25a0$

That was easy! Notice that we could have used this same approach to prove that the null space is a subspace, since Theorem SSNS provided a description of the null space of a matrix as the span of a set of vectors. However, I much prefer the current proof of Theorem NSMS. Speaking of easy, here is a very easy theorem that exposes another of our constructions as creating subspaces.

Theorem RSMS

Row Space of a Matrix is a Subspace

Suppose that $A$
is an $m\times n$ matrix.
Then $\mathcal{\mathcal{R}}\phantom{\rule{0em}{0ex}}\left(A\right)$ is a
subspace of ${\u2102}^{n}$.
$\square $

Proof Definition RSM says $\mathcal{\mathcal{R}}\phantom{\rule{0em}{0ex}}\left(A\right)=\mathcal{C}\phantom{\rule{0em}{0ex}}\left({A}^{t}\right)$, so the row space of a matrix is a column space, and every column space is a subspace by Theorem CSMS. That’s enough. $\u25a0$

One more.

Theorem LNSMS

Left Null Space of a Matrix is a Subspace

Suppose that $A$
is an $m\times n$ matrix.
Then $\mathcal{\mathcal{L}}\phantom{\rule{0em}{0ex}}\left(A\right)$ is a
subspace of ${\u2102}^{m}$.
$\square $

Proof Definition LNS says $\mathcal{\mathcal{L}}\phantom{\rule{0em}{0ex}}\left(A\right)=\mathcal{N}\phantom{\rule{0em}{0ex}}\left({A}^{t}\right)$, so the left null space is a null space, and every null space is a subspace by Theorem NSMS. Done. $\u25a0$

So the span of a set of vectors, and the null space, column space, row space and left null space of a matrix are all subspaces, and hence are all vector spaces, meaning they have all the properties detailed in Definition VS and in the basic theorems presented in Section VS. We have worked with these objects as just sets in Chapter V and Chapter M, but now we understand that they have much more structure. In particular, being closed under vector addition and scalar multiplication means a subspace is also closed under linear combinations.
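The closing remark deserves a one-line justification. If $W$ is a subspace, ${u}_{1},\phantom{\rule{0em}{0ex}}{u}_{2},\phantom{\rule{0em}{0ex}}\dots,\phantom{\rule{0em}{0ex}}{u}_{n}\in W$ and ${a}_{1},\phantom{\rule{0em}{0ex}}{a}_{2},\phantom{\rule{0em}{0ex}}\dots,\phantom{\rule{0em}{0ex}}{a}_{n}$ are scalars, then repeated use of the two closure properties keeps every partial sum inside $W$:

```latex
\begin{aligned}
a_iu_i &\in W & &\text{scalar closure, } 1\le i\le n\\
a_1u_1+a_2u_2 &\in W & &\text{additive closure}\\
\left(a_1u_1+a_2u_2\right)+a_3u_3 &\in W & &\text{additive closure}\\
&\;\;\vdots\\
a_1u_1+a_2u_2+\cdots+a_nu_n &\in W & &\text{additive closure, applied } n-1 \text{ times}
\end{aligned}
```

so the linear combination itself lies in $W$.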

- Summarize the three conditions that allow us to quickly test if a set is a subspace.
- Consider the set of vectors
$$\begin{array}{llll}\hfill W& =\left\{\left.\left[\begin{array}{c}\hfill a\hfill \\ \hfill b\hfill \\ \hfill c\hfill \end{array}\right]\phantom{\rule{0em}{0ex}}\right|\phantom{\rule{0em}{0ex}}3a-2b+c=5\right\}\phantom{\rule{2em}{0ex}}& \hfill & \phantom{\rule{2em}{0ex}}\end{array}$$
Is the set $W$ a subspace of ${\u2102}^{3}$? Explain your answer.

- Name five general constructions of sets of column vectors (subsets of ${\u2102}^{m}$) that we now know as subspaces.

C20 Working within the vector space ${P}_{3}$ of polynomials of degree 3 or less, determine if $p\left(x\right)={x}^{3}+6x+4$ is in the subspace $W$ below.

$$W=\u2329\left\{{x}^{3}+{x}^{2}+x,\phantom{\rule{0em}{0ex}}{x}^{3}+2x-6,\phantom{\rule{0em}{0ex}}{x}^{2}-5\right\}\u232a$$

Contributed by Robert Beezer Solution [881]

C21 Consider the subspace

$$W=\u2329\left\{\left[\begin{array}{cc}\hfill 2\hfill & \hfill 1\hfill \\ \hfill 3\hfill & \hfill -1\hfill \end{array}\right],\phantom{\rule{0em}{0ex}}\left[\begin{array}{cc}\hfill 4\hfill & \hfill 0\hfill \\ \hfill 2\hfill & \hfill 3\hfill \end{array}\right],\phantom{\rule{0em}{0ex}}\left[\begin{array}{cc}\hfill -3\hfill & \hfill 1\hfill \\ \hfill 2\hfill & \hfill 1\hfill \end{array}\right]\right\}\u232a$$

of the vector space of $2\times 2$
matrices, ${M}_{22}$.
Is $C=\left[\begin{array}{cc}\hfill -3\hfill & \hfill 3\hfill \\ \hfill 6\hfill & \hfill -4\hfill \end{array}\right]$ an
element of $W$?

Contributed by Robert Beezer Solution [882]

C25 Show that the set $W=\left\{\left.\left[\begin{array}{c}\hfill {x}_{1}\hfill \\ \hfill {x}_{2}\hfill \end{array}\right]\phantom{\rule{0em}{0ex}}\right|\phantom{\rule{0em}{0ex}}3{x}_{1}-5{x}_{2}=12\right\}$
from Example NSC2Z fails Property AC and Property SC.

Contributed by Robert Beezer

C26 Show that the set $Y=\left\{\left.\left[\begin{array}{c}\hfill {x}_{1}\hfill \\ \hfill {x}_{2}\hfill \end{array}\right]\phantom{\rule{0em}{0ex}}\right|\phantom{\rule{0em}{0ex}}{x}_{1}\in \mathbb{Z},\phantom{\rule{0em}{0ex}}{x}_{2}\in \mathbb{Z}\right\}$
from Example NSC2S has Property AC.

Contributed by Robert Beezer

M20 In ${\u2102}^{3}$, the vector space of column vectors of size 3, prove that the set $Z$ is a subspace.

$$Z=\left\{\left.\left[\begin{array}{c}\hfill {x}_{1}\hfill \\ \hfill {x}_{2}\hfill \\ \hfill {x}_{3}\hfill \end{array}\right]\phantom{\rule{0em}{0ex}}\right|\phantom{\rule{0em}{0ex}}4{x}_{1}-{x}_{2}+5{x}_{3}=0\right\}$$

Contributed by Robert Beezer Solution [884]

T20 A square matrix $A$
of size $n$ is upper
triangular if ${\left[A\right]}_{ij}=0$
whenever $i>j$.
Let $U{T}_{n}$
be the set of all upper triangular matrices of size
$n$. Prove
that $U{T}_{n}$
is a subspace of the vector space of all square matrices of size
$n$,
${M}_{nn}$.

Contributed by Robert Beezer Solution [887]

C20 Contributed by Robert Beezer Statement [878]

The question is whether $p$ can be written as a linear combination of the polynomials in the spanning set of $W$. To check this, we set $p$ equal to such a linear combination and massage it with the definitions of vector addition and scalar multiplication that we use in ${P}_{3}$ (Example VSP),

$$p\left(x\right)={x}^{3}+6x+4={a}_{1}\left({x}^{3}+{x}^{2}+x\right)+{a}_{2}\left({x}^{3}+2x-6\right)+{a}_{3}\left({x}^{2}-5\right)=\left({a}_{1}+{a}_{2}\right){x}^{3}+\left({a}_{1}+{a}_{3}\right){x}^{2}+\left({a}_{1}+2{a}_{2}\right)x+\left(-6{a}_{2}-5{a}_{3}\right)$$

Equating coefficients of equal powers of $x$, we get the system of equations,

$$\begin{array}{llll}\hfill {a}_{1}+{a}_{2}& =1\phantom{\rule{2em}{0ex}}& \hfill & \phantom{\rule{2em}{0ex}}\\ \hfill {a}_{1}+{a}_{3}& =0\phantom{\rule{2em}{0ex}}& \hfill & \phantom{\rule{2em}{0ex}}\\ \hfill {a}_{1}+2{a}_{2}& =6\phantom{\rule{2em}{0ex}}& \hfill & \phantom{\rule{2em}{0ex}}\\ \hfill -6{a}_{2}-5{a}_{3}& =4\phantom{\rule{2em}{0ex}}& \hfill & \phantom{\rule{2em}{0ex}}\end{array}$$The augmented matrix of this system of equations row-reduces to

$$\left[\begin{array}{cccc}\hfill \text{1}\hfill & \hfill 0\hfill & \hfill 0\hfill & \hfill 0\hfill \\ \hfill 0\hfill & \hfill \text{1}\hfill & \hfill 0\hfill & \hfill 0\hfill \\ \hfill 0\hfill & \hfill 0\hfill & \hfill \text{1}\hfill & \hfill 0\hfill \\ \hfill 0\hfill & \hfill 0\hfill & \hfill 0\hfill & \hfill \text{1}\hfill \end{array}\right]$$

There is a leading 1 in the last column, so Theorem RCLS implies that the system is inconsistent. So there is no way for $p$ to gain membership in $W$, so $p\notin W$.
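The same conclusion follows from three of the equations by direct substitution, without row-reducing; a Python sketch:

```python
# The first and third equations give a1 + a2 = 1 and a1 + 2*a2 = 6;
# subtracting yields a2 = 5, then a1 = -4, and a1 + a3 = 0 forces a3 = 4.
a2 = 6 - 1
a1 = 1 - a2
a3 = -a1
# The fourth equation must then hold, but it evaluates to -50, not 4,
# which matches the inconsistency found by row-reducing.
print(a1, a2, a3, -6*a2 - 5*a3)  # -4 5 4 -50
```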

C21 Contributed by Robert Beezer Statement [878]

In order to belong to $W$, we
must be able to express $C$
as a linear combination of the elements in the spanning set of
$W$.
So we begin with such an expression, using the unknowns
$a,\phantom{\rule{0em}{0ex}}b,\phantom{\rule{0em}{0ex}}c$ for
the scalars in the linear combination.

$$C=\left[\begin{array}{cc}\hfill -3\hfill & \hfill 3\hfill \\ \hfill 6\hfill & \hfill -4\hfill \end{array}\right]=a\left[\begin{array}{cc}\hfill 2\hfill & \hfill 1\hfill \\ \hfill 3\hfill & \hfill -1\hfill \end{array}\right]+b\left[\begin{array}{cc}\hfill 4\hfill & \hfill 0\hfill \\ \hfill 2\hfill & \hfill 3\hfill \end{array}\right]+c\left[\begin{array}{cc}\hfill -3\hfill & \hfill 1\hfill \\ \hfill 2\hfill & \hfill 1\hfill \end{array}\right]$$

Massaging the right-hand side, according to the definition of the vector space operations in ${M}_{22}$ (Example VSM), we find the matrix equality,

$$\left[\begin{array}{cc}\hfill -3\hfill & \hfill 3\hfill \\ \hfill 6\hfill & \hfill -4\hfill \end{array}\right]=\left[\begin{array}{cc}\hfill 2a+4b-3c\hfill & \hfill a+c\hfill \\ \hfill 3a+2b+2c\hfill & \hfill -a+3b+c\hfill \end{array}\right]$$

Matrix equality allows us to form a system of four equations in three variables, whose augmented matrix row-reduces as follows,

$$\left[\begin{array}{cccc}\hfill 2\hfill & \hfill 4\hfill & \hfill -3\hfill & \hfill -3\hfill \\ \hfill 1\hfill & \hfill 0\hfill & \hfill 1\hfill & \hfill 3\hfill \\ \hfill 3\hfill & \hfill 2\hfill & \hfill 2\hfill & \hfill 6\hfill \\ \hfill -1\hfill & \hfill 3\hfill & \hfill 1\hfill & \hfill -4\hfill \end{array}\right]\underset{}{\overset{\text{RREF}}{\to}}\left[\begin{array}{cccc}\hfill \text{1}\hfill & \hfill 0\hfill & \hfill 0\hfill & \hfill 2\hfill \\ \hfill 0\hfill & \hfill \text{1}\hfill & \hfill 0\hfill & \hfill -1\hfill \\ \hfill 0\hfill & \hfill 0\hfill & \hfill \text{1}\hfill & \hfill 1\hfill \\ \hfill 0\hfill & \hfill 0\hfill & \hfill 0\hfill & \hfill 0\hfill \end{array}\right]$$

Since this system of equations is consistent (Theorem RCLS), a solution will provide values for $a,\phantom{\rule{0em}{0ex}}b$ and $c$ that allow us to recognize $C$ as an element of $W$. In fact, the final column of the row-reduced matrix gives the solution $a=2$, $b=-1$, $c=1$, so $C\in W$.
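Reading the solution $a=2$, $b=-1$, $c=1$ from the final column of the row-reduced matrix, we can confirm the membership with a short Python check (matrices as nested lists, an assumed representation):

```python
# Scalars read from the final column of the row-reduced matrix.
a, b, c = 2, -1, 1
M1 = [[2, 1], [3, -1]]
M2 = [[4, 0], [2, 3]]
M3 = [[-3, 1], [2, 1]]
# Form the linear combination a*M1 + b*M2 + c*M3 entry by entry.
combo = [[a*M1[i][j] + b*M2[i][j] + c*M3[i][j] for j in range(2)]
         for i in range(2)]
print(combo)  # [[-3, 3], [6, -4]], which is exactly C
```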

M20 Contributed by Robert Beezer Statement [879]

The membership criterion for $Z$ is a single linear equation, which we can view as a homogeneous system of one equation in three variables. As such, we can recognize $Z$ as the solution set of this system, and therefore $Z$ is a null space. Specifically, $Z=\mathcal{N}\phantom{\rule{0em}{0ex}}\left(\left[\begin{array}{ccc}\hfill 4\hfill & \hfill -1\hfill & \hfill 5\hfill \end{array}\right]\right)$. Every null space is a subspace by Theorem NSMS.

A less direct solution appeals to Theorem TSS.

First, we want to be certain $Z$ is non-empty. The zero vector of ${\u2102}^{3}$, $0=\left[\begin{array}{c}\hfill 0\hfill \\ \hfill 0\hfill \\ \hfill 0\hfill \end{array}\right]$, is a good candidate, since if it fails to be in $Z$, we will know that $Z$ is not a vector space. Check that

$$4\left(0\right)-\left(0\right)+5\left(0\right)=0$$

so that $0\in Z$.

Suppose $x=\left[\begin{array}{c}\hfill {x}_{1}\hfill \\ \hfill {x}_{2}\hfill \\ \hfill {x}_{3}\hfill \end{array}\right]$ and $y=\left[\begin{array}{c}\hfill {y}_{1}\hfill \\ \hfill {y}_{2}\hfill \\ \hfill {y}_{3}\hfill \end{array}\right]$ are vectors from $Z$. Then we know that these vectors cannot be totally arbitrary; they must have gained membership in $Z$ by virtue of meeting the membership test. For example, we know that $x$ must satisfy $4{x}_{1}-{x}_{2}+5{x}_{3}=0$ while $y$ must satisfy $4{y}_{1}-{y}_{2}+5{y}_{3}=0$. Our second criterion asks the question, is $x+y\in Z$? Notice first that

$$x+y=\left[\begin{array}{c}\hfill {x}_{1}\hfill \\ \hfill {x}_{2}\hfill \\ \hfill {x}_{3}\hfill \end{array}\right]+\left[\begin{array}{c}\hfill {y}_{1}\hfill \\ \hfill {y}_{2}\hfill \\ \hfill {y}_{3}\hfill \end{array}\right]=\left[\begin{array}{c}\hfill {x}_{1}+{y}_{1}\hfill \\ \hfill {x}_{2}+{y}_{2}\hfill \\ \hfill {x}_{3}+{y}_{3}\hfill \end{array}\right]$$

and we can test this vector for membership in $Z$ as follows,

$$\begin{array}{llllll}\hfill & \phantom{\rule{0ex}{0ex}}4\left({x}_{1}+{y}_{1}\right)-1\left({x}_{2}+{y}_{2}\right)+5\left({x}_{3}+{y}_{3}\right)\phantom{\rule{2em}{0ex}}& \hfill & \phantom{\rule{2em}{0ex}}& \hfill & \phantom{\rule{2em}{0ex}}\\ \hfill & =4{x}_{1}+4{y}_{1}-{x}_{2}-{y}_{2}+5{x}_{3}+5{y}_{3}\phantom{\rule{2em}{0ex}}& \hfill & \phantom{\rule{2em}{0ex}}& \hfill & \phantom{\rule{2em}{0ex}}\\ \hfill & =\left(4{x}_{1}-{x}_{2}+5{x}_{3}\right)+\left(4{y}_{1}-{y}_{2}+5{y}_{3}\right)\phantom{\rule{2em}{0ex}}& \hfill & \phantom{\rule{2em}{0ex}}& \hfill & \phantom{\rule{2em}{0ex}}\\ \hfill & =0+0\phantom{\rule{2em}{0ex}}& \hfill & x\in Z,\phantom{\rule{0ex}{0ex}}y\in Z\phantom{\rule{2em}{0ex}}& \hfill & \phantom{\rule{2em}{0ex}}& \hfill & \phantom{\rule{2em}{0ex}}\\ \hfill & =0\phantom{\rule{2em}{0ex}}& \hfill & \phantom{\rule{2em}{0ex}}& \hfill & \phantom{\rule{2em}{0ex}}\end{array}$$and by this computation we see that $x+y\in Z$.

If $\alpha $ is a scalar and $x\in Z$, is it always true that $\alpha x\in Z$? To check our third criteria, we examine

$$\alpha x=\alpha \left[\begin{array}{c}\hfill {x}_{1}\hfill \\ \hfill {x}_{2}\hfill \\ \hfill {x}_{3}\hfill \end{array}\right]=\left[\begin{array}{c}\hfill \alpha {x}_{1}\hfill \\ \hfill \alpha {x}_{2}\hfill \\ \hfill \alpha {x}_{3}\hfill \end{array}\right]$$

and we can test this vector for membership in $Z$ with

$$\begin{array}{llllll}\hfill & 4\left(\alpha {x}_{1}\right)-\left(\alpha {x}_{2}\right)+5\left(\alpha {x}_{3}\right)\phantom{\rule{2em}{0ex}}& \hfill & \phantom{\rule{2em}{0ex}}& \hfill & \phantom{\rule{2em}{0ex}}\\ \hfill & \phantom{\rule{1em}{0ex}}\phantom{\rule{1em}{0ex}}=\alpha \left(4{x}_{1}-{x}_{2}+5{x}_{3}\right)\phantom{\rule{2em}{0ex}}& \hfill & \phantom{\rule{2em}{0ex}}& \hfill & \phantom{\rule{2em}{0ex}}\\ \hfill & \phantom{\rule{1em}{0ex}}\phantom{\rule{1em}{0ex}}=\alpha 0\phantom{\rule{2em}{0ex}}& \hfill & x\in Z\phantom{\rule{2em}{0ex}}& \hfill & \phantom{\rule{2em}{0ex}}& \hfill & \phantom{\rule{2em}{0ex}}\\ \hfill & \phantom{\rule{1em}{0ex}}\phantom{\rule{1em}{0ex}}=0\phantom{\rule{2em}{0ex}}& \hfill & \phantom{\rule{2em}{0ex}}& \hfill & \phantom{\rule{2em}{0ex}}\end{array}$$and we see that indeed $\alpha x\in Z$. With the three conditions of Theorem TSS fulfilled, we can conclude that $Z$ is a subspace of ${\u2102}^{3}$.
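The two closure computations can be illustrated numerically. A Python sketch, with sample vectors of our own choosing that satisfy the defining equation of $Z$:

```python
def in_Z(v):
    """Membership test for Z: the single defining equation."""
    return 4*v[0] - v[1] + 5*v[2] == 0

x = (1, 4, 0)   # 4 - 4 + 0 = 0, a sample element of Z
y = (2, 3, -1)  # 8 - 3 - 5 = 0, another sample element
s = tuple(a + b for a, b in zip(x, y))   # vector addition
m = tuple(7 * a for a in x)              # a scalar multiple
print(in_Z(s), in_Z(m))  # True True, as the proof guarantees
```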

T20 Contributed by Robert Beezer Statement [879]

Apply Theorem TSS.

First, the zero vector of ${M}_{nn}$ is the zero matrix, $\mathcal{O}$, whose entries are all zero (Definition ZM). This matrix then meets the condition that ${\left[\mathcal{O}\right]}_{ij}=0$ for $i>j$ and so is an element of $U{T}_{n}$.

Suppose $A,B\in U{T}_{n}$. Is $A+B\in U{T}_{n}$? We examine the entries of $A+B$ “below” the diagonal. That is, in the following, assume that $i>j$.

$$\begin{array}{llllllll}\hfill {\left[A+B\right]}_{ij}& ={\left[A\right]}_{ij}+{\left[B\right]}_{ij}\phantom{\rule{2em}{0ex}}& \hfill & \text{}\text{Definition MA}\text{}\phantom{\rule{2em}{0ex}}& \hfill & \phantom{\rule{2em}{0ex}}& \hfill & \phantom{\rule{2em}{0ex}}\\ \hfill & =0+0\phantom{\rule{2em}{0ex}}& \hfill & \text{}A,B\in U{T}_{n}\text{}\phantom{\rule{2em}{0ex}}& \hfill & \phantom{\rule{2em}{0ex}}& \hfill & \phantom{\rule{2em}{0ex}}\\ \hfill & =0\phantom{\rule{2em}{0ex}}& \hfill & \phantom{\rule{2em}{0ex}}& \hfill & \phantom{\rule{2em}{0ex}}\end{array}$$which qualifies $A+B$ for membership in $U{T}_{n}$.

Suppose $\alpha \in {\u2102}^{}$ and $A\in U{T}_{n}$. Is $\alpha A\in U{T}_{n}$? We examine the entries of $\alpha A$ “below” the diagonal. That is, in the following, assume that $i>j$.

$$\begin{array}{llllllll}\hfill {\left[\alpha A\right]}_{ij}& =\alpha {\left[A\right]}_{ij}\phantom{\rule{2em}{0ex}}& \hfill & \text{}\text{Definition MSM}\text{}\phantom{\rule{2em}{0ex}}& \hfill & \phantom{\rule{2em}{0ex}}& \hfill & \phantom{\rule{2em}{0ex}}\\ \hfill & =\alpha 0\phantom{\rule{2em}{0ex}}& \hfill & \text{}A\in U{T}_{n}\text{}\phantom{\rule{2em}{0ex}}& \hfill & \phantom{\rule{2em}{0ex}}& \hfill & \phantom{\rule{2em}{0ex}}\\ \hfill & =0\phantom{\rule{2em}{0ex}}& \hfill & \phantom{\rule{2em}{0ex}}& \hfill & \phantom{\rule{2em}{0ex}}\end{array}$$which qualifies $\alpha A$ for membership in $U{T}_{n}$.

Having fulfilled the three conditions of Theorem TSS we see that $U{T}_{n}$ is a subspace of ${M}_{nn}$.
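The two closure checks in this proof are easy to watch in action on concrete matrices. A Python sketch with a pair of $3\times 3$ upper triangular matrices of our own invention:

```python
def upper_triangular(M):
    """True when every entry strictly below the diagonal is zero,
    that is, M[i][j] == 0 whenever i > j."""
    n = len(M)
    return all(M[i][j] == 0 for i in range(n) for j in range(n) if i > j)

A = [[1, 2, 3], [0, 4, 5], [0, 0, 6]]
B = [[7, 0, 1], [0, 2, 0], [0, 0, 3]]
total = [[A[i][j] + B[i][j] for j in range(3)] for i in range(3)]   # A + B
scaled = [[5 * A[i][j] for j in range(3)] for i in range(3)]        # 5A
print(upper_triangular(total), upper_triangular(scaled))  # True True
```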