Section SS  Spanning Sets

From A First Course in Linear Algebra
Version 2.10
© 2004.
Licensed under the GNU Free Documentation License.
http://linear.ups.edu/

In this section we will describe a compact way to indicate the elements of an infinite set of vectors, making use of linear combinations. This will give us a convenient way to describe the elements of a set of solutions to a linear system, or the elements of the null space of a matrix, or many other sets of vectors.

Subsection SSV: Span of a Set of Vectors

In Example VFSAL we saw the solution set of a homogeneous system described as all possible linear combinations of two particular vectors. This happens to be a useful way to construct or describe infinite sets of vectors, so we encapsulate this idea in a definition.

Definition SSCV
Span of a Set of Column Vectors
Given a set of vectors S = \{u_{1}, u_{2}, u_{3}, \dots, u_{p}\}, their span, \left \langle S\right \rangle , is the set of all possible linear combinations of u_{1}, u_{2}, u_{3}, \dots, u_{p}. Symbolically,

\eqalignno{ \left \langle S\right \rangle & = \left \{ \alpha_{1}u_{1} + \alpha_{2}u_{2} + \alpha_{3}u_{3} + \cdots + \alpha_{p}u_{p} \mid \alpha_{i} \in \mathbb{C},\ 1 \le i \le p \right \} & & \cr & = \left \{ \sum_{i=1}^{p} \alpha_{i}u_{i} \mid \alpha_{i} \in \mathbb{C},\ 1 \le i \le p \right \} & & }

(This definition contains Notation SSV.)

The span is just a set of vectors, though in all but one situation it is an infinite set. (Just when is it not infinite?) So we start with a finite collection of vectors S (p of them to be precise), and use this finite set to describe an infinite set of vectors, \left \langle S\right \rangle . Confusing the finite set S with the infinite set \left \langle S\right \rangle is one of the most pervasive problems in understanding introductory linear algebra. We will see this construction repeatedly, so let’s work through some examples to get comfortable with it. The most obvious question about a set is if a particular item of the correct type is in the set, or not.

Example ABS
A basic span
Consider the set of 5 vectors, S, from {ℂ}^{4}

S = \left \{\left [\array{ 1 \cr 1 \cr 3 \cr 1 } \right ],\kern 1.95872pt \left [\array{ 2 \cr 1 \cr 2 \cr −1 } \right ],\kern 1.95872pt \left [\array{ 7 \cr 3 \cr 5 \cr −5 } \right ],\kern 1.95872pt \left [\array{ 1 \cr 1 \cr −1 \cr 2 } \right ],\kern 1.95872pt \left [\array{ −1 \cr 0 \cr 9 \cr 0 } \right ]\right \}

and consider the infinite set of vectors \left \langle S\right \rangle formed from all possible linear combinations of the elements of S. Here are four vectors we definitely know are elements of \left \langle S\right \rangle , since we will construct them in accordance with Definition SSCV,

w = (2)\left [\array{ 1 \cr 1 \cr 3 \cr 1 } \right ]+(1)\left [\array{ 2 \cr 1 \cr 2 \cr −1 } \right ]+(−1)\left [\array{ 7 \cr 3 \cr 5 \cr −5 } \right ]+(2)\left [\array{ 1 \cr 1 \cr −1 \cr 2 } \right ]+(3)\left [\array{ −1 \cr 0 \cr 9 \cr 0 } \right ] = \left [\array{ −4 \cr 2 \cr 28 \cr 10 } \right ]
x = (5)\left [\array{ 1 \cr 1 \cr 3 \cr 1 } \right ]+(−6)\left [\array{ 2 \cr 1 \cr 2 \cr −1 } \right ]+(−3)\left [\array{ 7 \cr 3 \cr 5 \cr −5 } \right ]+(4)\left [\array{ 1 \cr 1 \cr −1 \cr 2 } \right ]+(2)\left [\array{ −1 \cr 0 \cr 9 \cr 0 } \right ] = \left [\array{ −26 \cr −6 \cr 2 \cr 34} \right ]
y = (1)\left [\array{ 1 \cr 1 \cr 3 \cr 1 } \right ]+(0)\left [\array{ 2 \cr 1 \cr 2 \cr −1 } \right ]+(1)\left [\array{ 7 \cr 3 \cr 5 \cr −5 } \right ]+(0)\left [\array{ 1 \cr 1 \cr −1 \cr 2 } \right ]+(1)\left [\array{ −1 \cr 0 \cr 9 \cr 0 } \right ] = \left [\array{ 7 \cr 4 \cr 17 \cr −4 } \right ]
z = (0)\left [\array{ 1 \cr 1 \cr 3 \cr 1 } \right ]+(0)\left [\array{ 2 \cr 1 \cr 2 \cr −1 } \right ]+(0)\left [\array{ 7 \cr 3 \cr 5 \cr −5 } \right ]+(0)\left [\array{ 1 \cr 1 \cr −1 \cr 2 } \right ]+(0)\left [\array{ −1 \cr 0 \cr 9 \cr 0 } \right ] = \left [\array{ 0 \cr 0 \cr 0 \cr 0 } \right ]

The purpose of a set is to collect objects with some common property, and to exclude objects without that property. So the most fundamental question about a set is if a given object is an element of the set or not. Let’s learn more about \left \langle S\right \rangle by investigating which vectors are elements of the set, and which are not.

First, is u = \left [\array{ −15 \cr −6 \cr 19 \cr 5 } \right ] an element of \left \langle S\right \rangle ? We are asking if there are scalars {α}_{1},\kern 1.95872pt {α}_{2},\kern 1.95872pt {α}_{3},\kern 1.95872pt {α}_{4},\kern 1.95872pt {α}_{5} such that

{ α}_{1}\left [\array{ 1 \cr 1 \cr 3 \cr 1 } \right ]+{α}_{2}\left [\array{ 2 \cr 1 \cr 2 \cr −1 } \right ]+{α}_{3}\left [\array{ 7 \cr 3 \cr 5 \cr −5 } \right ]+{α}_{4}\left [\array{ 1 \cr 1 \cr −1 \cr 2 } \right ]+{α}_{5}\left [\array{ −1 \cr 0 \cr 9 \cr 0 } \right ] = u = \left [\array{ −15 \cr −6 \cr 19 \cr 5 } \right ]

Applying Theorem SLSLC we recognize the search for these scalars as a solution to a linear system of equations with augmented matrix

\left [\array{ 1& 2 & 7 & 1 &−1&−15 \cr 1& 1 & 3 & 1 & 0 & −6 \cr 3& 2 & 5 &−1& 9 & 19 \cr 1&−1&−5& 2 & 0 & 5 } \right ]

which row-reduces to

\left [\array{ \text{1}&0&−1&0& 3 &10 \cr 0&\text{1}& 4 &0&−1&−9 \cr 0&0& 0 &\text{1}&−2&−7 \cr 0&0& 0 &0& 0 & 0 } \right ]

At this point, we see that the system is consistent (Theorem RCLS), so we know there is a solution for the five scalars {α}_{1},\kern 1.95872pt {α}_{2},\kern 1.95872pt {α}_{3},\kern 1.95872pt {α}_{4},\kern 1.95872pt {α}_{5}. This is enough evidence for us to say that u ∈\left \langle S\right \rangle . If we wished further evidence, we could compute an actual solution, say

\eqalignno{ {α}_{1} & = 2 &{α}_{2} & = 1 &{α}_{3} & = −2 &{α}_{4} & = −3 &{α}_{5} & = 2 & & & & & & & & & & }

This particular solution allows us to write

(2)\left [\array{ 1 \cr 1 \cr 3 \cr 1 } \right ]+(1)\left [\array{ 2 \cr 1 \cr 2 \cr −1 } \right ]+(−2)\left [\array{ 7 \cr 3 \cr 5 \cr −5 } \right ]+(−3)\left [\array{ 1 \cr 1 \cr −1 \cr 2 } \right ]+(2)\left [\array{ −1 \cr 0 \cr 9 \cr 0 } \right ] = u = \left [\array{ −15 \cr −6 \cr 19 \cr 5 } \right ]

making it even more obvious that u ∈\left \langle S\right \rangle .

Let’s do it again. Is v = \left [\array{ 3 \cr 1 \cr 2 \cr −1 } \right ] an element of \left \langle S\right \rangle ? We are asking if there are scalars {α}_{1},\kern 1.95872pt {α}_{2},\kern 1.95872pt {α}_{3},\kern 1.95872pt {α}_{4},\kern 1.95872pt {α}_{5} such that

{ α}_{1}\left [\array{ 1 \cr 1 \cr 3 \cr 1 } \right ]+{α}_{2}\left [\array{ 2 \cr 1 \cr 2 \cr −1 } \right ]+{α}_{3}\left [\array{ 7 \cr 3 \cr 5 \cr −5 } \right ]+{α}_{4}\left [\array{ 1 \cr 1 \cr −1 \cr 2 } \right ]+{α}_{5}\left [\array{ −1 \cr 0 \cr 9 \cr 0 } \right ] = v = \left [\array{ 3 \cr 1 \cr 2 \cr −1 } \right ]

Applying Theorem SLSLC we recognize the search for these scalars as a solution to a linear system of equations with augmented matrix

\left [\array{ 1& 2 & 7 & 1 &−1& 3 \cr 1& 1 & 3 & 1 & 0 & 1 \cr 3& 2 & 5 &−1& 9 & 2 \cr 1&−1&−5& 2 & 0 &−1 } \right ]

which row-reduces to

\left [\array{ \text{1}&0&−1&0& 3 &0 \cr 0&\text{1}& 4 &0&−1&0 \cr 0&0& 0 &\text{1}&−2&0 \cr 0&0& 0 &0& 0 &\text{1} } \right ]

At this point, we see that the system is inconsistent by Theorem RCLS, so we know there is not a solution for the five scalars {α}_{1},\kern 1.95872pt {α}_{2},\kern 1.95872pt {α}_{3},\kern 1.95872pt {α}_{4},\kern 1.95872pt {α}_{5}. This is enough evidence for us to say that v∉\left \langle S\right \rangle . End of story.

Example SCAA
Span of the columns of Archetype A
Begin with the finite set of three vectors of size 3

S = \{{u}_{1},\kern 1.95872pt {u}_{2},\kern 1.95872pt {u}_{3}\} = \left \{\left [\array{ 1 \cr 2 \cr 1 } \right ],\kern 1.95872pt \left [\array{ −1 \cr 1 \cr 1 } \right ],\kern 1.95872pt \left [\array{ 2 \cr 1 \cr 0 } \right ]\right \}

and consider the infinite set \left \langle S\right \rangle . The vectors of S could have been chosen to be anything, but for reasons that will become clear later, we have chosen the three columns of the coefficient matrix in Archetype A. First, as an example, note that

v = (5)\left [\array{ 1 \cr 2 \cr 1 } \right ]+(−3)\left [\array{ −1 \cr 1 \cr 1 } \right ]+(7)\left [\array{ 2 \cr 1 \cr 0 } \right ] = \left [\array{ 22 \cr 14 \cr 2 } \right ]

is in \left \langle S\right \rangle , since it is a linear combination of {u}_{1},\kern 1.95872pt {u}_{2},\kern 1.95872pt {u}_{3}. We write this succinctly as v ∈\left \langle S\right \rangle . There is nothing magical about the scalars {α}_{1} = 5,\kern 1.95872pt {α}_{2} = −3,\kern 1.95872pt {α}_{3} = 7, they could have been chosen to be anything. So repeat this part of the example yourself, using different values of {α}_{1},\kern 1.95872pt {α}_{2},\kern 1.95872pt {α}_{3}. What happens if you choose all three scalars to be zero?

So we know how to quickly construct sample elements of the set \left \langle S\right \rangle . A slightly different question arises when you are handed a vector of the correct size and asked if it is an element of \left \langle S\right \rangle . For example, is w = \left [\array{ 1 \cr 8 \cr 5 } \right ] in \left \langle S\right \rangle ? More succinctly, w ∈\left \langle S\right \rangle ?

To answer this question, we will look for scalars {α}_{1},\kern 1.95872pt {α}_{2},\kern 1.95872pt {α}_{3} so that

\eqalignno{ {α}_{1}{u}_{1} + {α}_{2}{u}_{2} + {α}_{3}{u}_{3} & = w & & }

By Theorem SLSLC, solutions to this vector equation are solutions to the system of equations

\eqalignno{ {α}_{1} − {α}_{2} + 2{α}_{3} & = 1 & & \cr 2{α}_{1} + {α}_{2} + {α}_{3} & = 8 & & \cr {α}_{1} + {α}_{2} & = 5 & & }

Building the augmented matrix for this linear system, and row-reducing, gives

\left [\array{ \text{1}&0& 1 &3 \cr 0&\text{1}&−1&2 \cr 0&0& 0 &0 } \right ]

This system has infinitely many solutions (there’s a free variable in {x}_{3}), but all we need is one solution vector. The solution,

\eqalignno{ {α}_{1} & = 2 &{α}_{2} & = 3 &{α}_{3} & = 1 & & & & & & }

tells us that

(2){u}_{1} + (3){u}_{2} + (1){u}_{3} = w

so we are convinced that w really is in \left \langle S\right \rangle . Notice that there are an infinite number of ways to answer this question affirmatively. We could choose a different solution, this time choosing the free variable to be zero,

\eqalignno{ {α}_{1} & = 3 &{α}_{2} & = 2 &{α}_{3} & = 0 & & & & & & }

shows us that

(3){u}_{1} + (2){u}_{2} + (0){u}_{3} = w

Verifying the arithmetic in this second solution should make it even more obvious that w is in this span. And of course, we now realize that there are infinitely many ways to realize w as an element of \left \langle S\right \rangle . Let’s ask the same type of question again, but this time with y = \left [\array{ 2 \cr 4 \cr 3 } \right ], i.e. is y ∈\left \langle S\right \rangle ?

So we’ll look for scalars {α}_{1},\kern 1.95872pt {α}_{2},\kern 1.95872pt {α}_{3} so that

\eqalignno{ {α}_{1}{u}_{1} + {α}_{2}{u}_{2} + {α}_{3}{u}_{3} & = y & & }

By Theorem SLSLC, solutions to this vector equation are the solutions to the system of equations

\eqalignno{ {α}_{1} − {α}_{2} + 2{α}_{3} & = 2 & & \cr 2{α}_{1} + {α}_{2} + {α}_{3} & = 4 & & \cr {α}_{1} + {α}_{2} & = 3 & & }

Building the augmented matrix for this linear system, and row-reducing, gives

\left [\array{ \text{1}&0& 1 &0 \cr 0&\text{1}&−1&0 \cr 0&0& 0 &\text{1} } \right ]

This system is inconsistent (there’s a leading 1 in the last column, Theorem RCLS), so there are no scalars {α}_{1},\kern 1.95872pt {α}_{2},\kern 1.95872pt {α}_{3} that will create a linear combination of {u}_{1},\kern 1.95872pt {u}_{2},\kern 1.95872pt {u}_{3} that equals y. More precisely, y∉\left \langle S\right \rangle .

There are three things to observe in this example. (1) It is easy to construct vectors in \left \langle S\right \rangle . (2) It is possible that some vectors are in \left \langle S\right \rangle (e.g. w), while others are not (e.g. y). (3) Deciding if a given vector is in \left \langle S\right \rangle leads to solving a linear system of equations and asking if the system is consistent.

With a computer program in hand to solve systems of linear equations, could you create a program to decide if a vector was, or wasn’t, in the span of a given set of vectors? Is this art or science?
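
One possible answer is sketched below in Python, using SymPy for exact row reduction. This is our own illustration, not part of the text, and the function name in_span is our own choice; it simply automates the procedure of the examples above: build the augmented matrix, row-reduce, and apply Theorem RCLS to decide consistency.

    from sympy import Matrix

    def in_span(S, w):
        # Decide whether w is a linear combination of the column vectors in the list S.
        A = Matrix.hstack(*S)               # the vectors of S become the columns of A
        aug = Matrix.hstack(A, Matrix(w))   # augmented matrix [A | w]
        _, pivots = aug.rref()              # indices of the pivot columns of the RREF
        return A.shape[1] not in pivots     # consistent exactly when the last column is not a pivot column

    # The five vectors of Example ABS, and the two membership questions answered there
    S = [Matrix([1, 1, 3, 1]), Matrix([2, 1, 2, -1]), Matrix([7, 3, 5, -5]),
         Matrix([1, 1, -1, 2]), Matrix([-1, 0, 9, 0])]
    print(in_span(S, [-15, -6, 19, 5]))   # True:  u is in <S>
    print(in_span(S, [3, 1, 2, -1]))      # False: v is not in <S>

With a routine like this in hand, deciding membership in a span is entirely mechanical, which is one answer to the art-or-science question above.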

This example was built on vectors from the columns of the coefficient matrix of Archetype A. Study the determination that v ∈\left \langle S\right \rangle and see if you can connect it with some of the other properties of Archetype A.

Having analyzed Archetype A in Example SCAA, we will of course subject Archetype B to a similar investigation.

Example SCAB
Span of the columns of Archetype B
Begin with the finite set of three vectors of size 3 that are the columns of the coefficient matrix in Archetype B,

R = \{{v}_{1},\kern 1.95872pt {v}_{2},\kern 1.95872pt {v}_{3}\} = \left \{\left [\array{ −7 \cr 5 \cr 1 } \right ],\kern 1.95872pt \left [\array{ −6 \cr 5 \cr 0 } \right ],\kern 1.95872pt \left [\array{ −12 \cr 7 \cr 4 } \right ]\right \}

and consider the infinite set V = \left \langle R\right \rangle . First, as an example, note that

x = (2)\left [\array{ −7 \cr 5 \cr 1 } \right ]+(4)\left [\array{ −6 \cr 5 \cr 0 } \right ]+(−3)\left [\array{ −12 \cr 7 \cr 4 } \right ] = \left [\array{ −2 \cr 9 \cr −10 } \right ]

is in \left \langle R\right \rangle , since it is a linear combination of {v}_{1},\kern 1.95872pt {v}_{2},\kern 1.95872pt {v}_{3}. In other words, x ∈\left \langle R\right \rangle . Try some different values of {α}_{1},\kern 1.95872pt {α}_{2},\kern 1.95872pt {α}_{3} yourself, and see what vectors you can create as elements of \left \langle R\right \rangle .

Now ask if a given vector is an element of \left \langle R\right \rangle . For example, is z = \left [\array{ −33 \cr 24 \cr 5 } \right ] in \left \langle R\right \rangle ? Is z ∈\left \langle R\right \rangle ?

To answer this question, we will look for scalars {α}_{1},\kern 1.95872pt {α}_{2},\kern 1.95872pt {α}_{3} so that

\eqalignno{ {α}_{1}{v}_{1} + {α}_{2}{v}_{2} + {α}_{3}{v}_{3} & = z & & }

By Theorem SLSLC, solutions to this vector equation are the solutions to the system of equations

\eqalignno{ −7{α}_{1} − 6{α}_{2} − 12{α}_{3} & = −33 & & \cr 5{α}_{1} + 5{α}_{2} + 7{α}_{3} & = 24 & & \cr {α}_{1} + 4{α}_{3} & = 5 & & }

Building the augmented matrix for this linear system, and row-reducing, gives

\left [\array{ \text{1}&0&0&−3 \cr 0&\text{1}&0& 5 \cr 0&0&\text{1}& 2 } \right ]

This system has a unique solution,

\eqalignno{ {α}_{1} = −3 & &{α}_{2} = 5 & &{α}_{3} = 2 & & & & & & }

telling us that

(−3){v}_{1} + (5){v}_{2} + (2){v}_{3} = z

so we are convinced that z really is in \left \langle R\right \rangle . Notice that in this case we have only one way to answer the question affirmatively since the solution is unique.

Let’s ask about another vector, say is x = \left [\array{ −7 \cr 8 \cr −3 } \right ]in \left \langle R\right \rangle ? Is x ∈\left \langle R\right \rangle ?

We desire scalars {α}_{1},\kern 1.95872pt {α}_{2},\kern 1.95872pt {α}_{3} so that

\eqalignno{ {α}_{1}{v}_{1} + {α}_{2}{v}_{2} + {α}_{3}{v}_{3} & = x & & }

By Theorem SLSLC, solutions to this vector equation are the solutions to the system of equations

\eqalignno{ −7{α}_{1} − 6{α}_{2} − 12{α}_{3} & = −7 & & \cr 5{α}_{1} + 5{α}_{2} + 7{α}_{3} & = 8 & & \cr {α}_{1} + 4{α}_{3} & = −3 & & }

Building the augmented matrix for this linear system, and row-reducing, gives

\left [\array{ \text{1}&0&0& 1 \cr 0&\text{1}&0& 2 \cr 0&0&\text{1}&−1 } \right ]

This system has a unique solution,

\eqalignno{ {α}_{1} = 1 & &{α}_{2} = 2 & &{α}_{3} = −1 & & & & & & }

telling us that

(1){v}_{1} + (2){v}_{2} + (−1){v}_{3} = x

so we are convinced that x really is in \left \langle R\right \rangle . Notice that in this case we again have only one way to answer the question affirmatively since the solution is again unique.

We could continue to test other vectors for membership in \left \langle R\right \rangle , but there is no point. A question about membership in \left \langle R\right \rangle inevitably leads to a system of three equations in the three variables {α}_{1},\kern 1.95872pt {α}_{2},\kern 1.95872pt {α}_{3} with a coefficient matrix whose columns are the vectors {v}_{1},\kern 1.95872pt {v}_{2},\kern 1.95872pt {v}_{3}. This particular coefficient matrix is nonsingular, so by Theorem NMUS, the system is guaranteed to have a solution. (This solution is unique, but that’s not critical here.) So no matter which vector we might have chosen for z, we would have been certain to discover that it was an element of \left \langle R\right \rangle . Stated differently, every vector of size 3 is in \left \langle R\right \rangle , or \left \langle R\right \rangle = {ℂ}^{3}.

Compare this example with Example SCAA, and see if you can connect z with some aspects of the write-up for Archetype B.

Subsection SSNS: Spanning Sets of Null Spaces

We saw in Example VFSAL that when a system of equations is homogeneous the solution set can be expressed in the form described by Theorem VFSLS where the vector c is the zero vector. We can essentially ignore this vector, so that the remainder of the typical expression for a solution looks like an arbitrary linear combination, where the scalars are the free variables and the vectors are {u}_{1},\kern 1.95872pt {u}_{2},\kern 1.95872pt {u}_{3},\kern 1.95872pt \mathop{\mathop{…}},\kern 1.95872pt {u}_{n−r}, which sounds a lot like a span. This observation is the substance of the next theorem.

Theorem SSNS
Spanning Sets for Null Spaces
Suppose that A is an m × n matrix, and B is a row-equivalent matrix in reduced row-echelon form with r nonzero rows. Let D = \{{d}_{1},\kern 1.95872pt {d}_{2},\kern 1.95872pt {d}_{3},\kern 1.95872pt \mathop{\mathop{…}},\kern 1.95872pt {d}_{r}\} be the column indices where B has leading 1’s (pivot columns) and F = \{{f}_{1},\kern 1.95872pt {f}_{2},\kern 1.95872pt {f}_{3},\kern 1.95872pt \mathop{\mathop{…}},\kern 1.95872pt {f}_{n−r}\} be the set of column indices where B does not have leading 1’s. Construct the n − r vectors {z}_{j}, 1 ≤ j ≤ n − r of size n as

{ \left [{z}_{j}\right ]}_{i} = \left \{\array{ 1 \quad &\text{if $i ∈ F$, $i = {f}_{j}$} \cr 0 \quad &\text{if $i ∈ F$, $i\mathrel{≠}{f}_{j}$} \cr −{\left [B\right ]}_{k,{f}_{j}}\quad &\text{if $i ∈ D$, $i = {d}_{k}$} } \right .

Then the null space of A is given by

N\kern -1.95872pt \left (A\right ) = \left \langle \left \{{z}_{1},\kern 1.95872pt {z}_{2},\kern 1.95872pt {z}_{3},\kern 1.95872pt \mathop{\mathop{…}},\kern 1.95872pt {z}_{n−r}\right \}\right \rangle .

Proof   Consider the homogeneous system with A as a coefficient matrix, ℒS\kern -1.95872pt \left (A,\kern 1.95872pt 0\right ). Its set of solutions, S, is by Definition NSM, the null space of A, N\kern -1.95872pt \left (A\right ). Let {B}^{′} denote the result of row-reducing the augmented matrix of this homogeneous system. Since the system is homogeneous, the final column of the augmented matrix will be all zeros, and after any number of row operations (Definition RO), the column will still be all zeros. So {B}^{′} has a final column that is totally zeros.

Now apply Theorem VFSLS to {B}^{′}, after noting that our homogeneous system must be consistent (Theorem HSC). The vector c has zeros for each entry that corresponds to an index in F. For entries that correspond to an index in D, the value is −{\left [{B}^{′}\right ]}_{ k,n+1}, but for {B}^{′} any entry in the final column (index n + 1) is zero. So c = 0. The vectors {z}_{j}, 1 ≤ j ≤ n − r are identical to the vectors {u}_{j}, 1 ≤ j ≤ n − r described in Theorem VFSLS. Putting it all together and applying Definition SSCV in the final step,

\eqalignno{ N\kern -1.95872pt \left (A\right ) & = S & & \cr & = \left \{c + {α}_{1}{u}_{1} + {α}_{2}{u}_{2} + {α}_{3}{u}_{3} + \mathrel{⋯} + {α}_{n−r}{u}_{n−r}\mathrel{∣}{α}_{1},\kern 1.95872pt {α}_{2},\kern 1.95872pt {α}_{3},\kern 1.95872pt \mathop{\mathop{…}},\kern 1.95872pt {α}_{n−r} ∈ {ℂ}^{}\right \} & & \cr & = \left \{{α}_{1}{u}_{1} + {α}_{2}{u}_{2} + {α}_{3}{u}_{3} + \mathrel{⋯} + {α}_{n−r}{u}_{n−r}\mathrel{∣}{α}_{1},\kern 1.95872pt {α}_{2},\kern 1.95872pt {α}_{3},\kern 1.95872pt \mathop{\mathop{…}},\kern 1.95872pt {α}_{n−r} ∈ {ℂ}^{}\right \} & & \cr & = \left \langle \left \{{z}_{1},\kern 1.95872pt {z}_{2},\kern 1.95872pt {z}_{3},\kern 1.95872pt \mathop{\mathop{…}},\kern 1.95872pt {z}_{n−r}\right \}\right \rangle & & }

Example SSNS
Spanning set of a null space
Find a set of vectors, S, so that the null space of the matrix A below is the span of S, that is, \left \langle S\right \rangle = N\kern -1.95872pt \left (A\right ).

A = \left [\array{ 1 & 3 & 3 &−1&−5 \cr 2 & 5 & 7 & 1 & 1 \cr 1 & 1 & 5 & 1 & 5 \cr −1&−4&−2& 0 & 4 } \right ]

The null space of A is the set of all solutions to the homogeneous system ℒS\kern -1.95872pt \left (A,\kern 1.95872pt 0\right ). If we find the vector form of the solutions to this homogeneous system (Theorem VFSLS) then the vectors {u}_{j}, 1 ≤ j ≤ n − r in the linear combination are exactly the vectors {z}_{j}, 1 ≤ j ≤ n − r described in Theorem SSNS. So we can mimic Example VFSAL to arrive at these vectors (rather than being a slave to the formulas in the statement of the theorem).

Begin by row-reducing A. The result is

\left [\array{ \text{1}&0& 6 &0& 4 \cr 0&\text{1}&−1&0&−2 \cr 0&0& 0 &\text{1}& 3 \cr 0&0& 0 &0& 0 } \right ]

With D = \left \{1,\kern 1.95872pt 2,\kern 1.95872pt 4\right \} and F = \left \{3,\kern 1.95872pt 5\right \} we recognize that {x}_{3} and {x}_{5} are free variables and we can express each nonzero row as an expression for the dependent variables {x}_{1}, {x}_{2}, {x}_{4} (respectively) in the free variables {x}_{3} and {x}_{5}. With this we can write the vector form of a solution vector as

\left [\array{ {x}_{1} \cr {x}_{2} \cr {x}_{3} \cr {x}_{4} \cr {x}_{5} } \right ] = \left [\array{ −6{x}_{3} − 4{x}_{5} \cr {x}_{3} + 2{x}_{5} \cr {x}_{3} \cr −3{x}_{5} \cr {x}_{5} } \right ] = {x}_{3}\left [\array{ −6 \cr 1 \cr 1 \cr 0 \cr 0 } \right ]+{x}_{5}\left [\array{ −4 \cr 2 \cr 0 \cr −3 \cr 1 } \right ]

Then in the notation of Theorem SSNS,

\eqalignno{ {z}_{1} & = \left [\array{ −6 \cr 1 \cr 1 \cr 0 \cr 0 } \right ] &{z}_{2} & = \left [\array{ −4 \cr 2 \cr 0 \cr −3 \cr 1 } \right ] & & & & }

and

N\kern -1.95872pt \left (A\right ) = \left \langle \left \{{z}_{1},\kern 1.95872pt {z}_{2}\right \}\right \rangle = \left \langle \left \{\left [\array{ −6 \cr 1 \cr 1 \cr 0 \cr 0 } \right ],\kern 1.95872pt \left [\array{ −4 \cr 2 \cr 0 \cr −3 \cr 1 } \right ]\right \}\right \rangle

Example NSDS
Null space directly as a span
Let’s express the null space of A as the span of a set of vectors, applying Theorem SSNS as economically as possible, without reference to the underlying homogeneous system of equations (in contrast to Example SSNS).

A = \left [\array{ 2 & 1 & 5 & 1 & 5 & 1 \cr 1 & 1 & 3 & 1 & 6 &−1 \cr −1& 1 &−1& 0 & 4 &−3 \cr −3& 2 &−4&−4&−7& 0 \cr 3 &−1& 5 & 2 & 2 & 3 } \right ]

Theorem SSNS creates vectors for the span by first row-reducing the matrix in question. The row-reduced version of A is

B = \left [\array{ \text{1}&0&2&0&−1& 2 \cr 0&\text{1}&1&0& 3 &−1 \cr 0&0&0&\text{1}& 4 &−2 \cr 0&0&0&0& 0 & 0 \cr 0&0&0&0& 0 & 0 } \right ]

We will mechanically follow the prescription of Theorem SSNS. Here we go, in two big steps.

First, the non-pivot columns have indices F = \left \{3,\kern 1.95872pt 5,\kern 1.95872pt 6\right \}, so we will construct the n − r = 6 − 3 = 3 vectors with a pattern of zeros and ones corresponding to the indices in F. This is the realization of the first two lines of the three-case definition of the vectors {z}_{j}, 1 ≤ j ≤ n − r.

\eqalignno{ {z}_{1} & = \left [\array{ \cr \cr 1 \cr \cr 0 \cr 0 } \right ] &{z}_{2} & = \left [\array{ \cr \cr 0 \cr \cr 1 \cr 0 } \right ] &{z}_{3} & = \left [\array{ \cr \cr 0 \cr \cr 0 \cr 1 } \right ] & & & & & & }

Each of these vectors arises due to the presence of a column that is not a pivot column. The remaining entries of each vector are the entries of the corresponding non-pivot column, negated, and distributed into the empty slots in order (these slots have indices in the set D and correspond to pivot columns). This is the realization of the third line of the three-case definition of the vectors {z}_{j}, 1 ≤ j ≤ n − r.

\eqalignno{ {z}_{1} & = \left [\array{ −2 \cr −1 \cr 1 \cr 0 \cr 0 \cr 0 } \right ] &{z}_{2} & = \left [\array{ 1 \cr −3 \cr 0 \cr −4 \cr 1 \cr 0 } \right ] &{z}_{3} & = \left [\array{ −2 \cr 1 \cr 0 \cr 2 \cr 0 \cr 1 } \right ] & & & & & & }

So, by Theorem SSNS, we have

N\kern -1.95872pt \left (A\right ) = \left \langle \left \{{z}_{1},\kern 1.95872pt {z}_{2},\kern 1.95872pt {z}_{3}\right \}\right \rangle = \left \langle \left \{\left [\array{ −2 \cr −1 \cr 1 \cr 0 \cr 0 \cr 0 } \right ],\kern 1.95872pt \left [\array{ 1 \cr −3 \cr 0 \cr −4 \cr 1 \cr 0 } \right ],\kern 1.95872pt \left [\array{ −2 \cr 1 \cr 0 \cr 2 \cr 0 \cr 1 } \right ]\right \}\right \rangle

We know that the null space of A is the solution set of the homogeneous system ℒS\kern -1.95872pt \left (A,\kern 1.95872pt 0\right ), but nowhere in this application of Theorem SSNS have we found occasion to reference the variables or equations of this system. These details are all buried in the proof of Theorem SSNS.
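
The mechanical nature of this construction makes it easy to automate. The sketch below is our own Python/SymPy code, not part of the text (the function name ssns_vectors is ours); it follows the three-case definition of the vectors {z}_{j} from Theorem SSNS, using 0-based column indices, and reproduces the vectors {z}_{1}, {z}_{2}, {z}_{3} of Example NSDS.

    from sympy import Matrix, zeros

    def ssns_vectors(A):
        B, D = A.rref()                          # B is the RREF of A, D the pivot column indices
        n = A.shape[1]
        F = [j for j in range(n) if j not in D]  # indices of the non-pivot (free) columns
        Z = []
        for f in F:
            z = zeros(n, 1)
            z[f] = 1                             # a 1 in this free position, 0 in the other free positions
            for k, d in enumerate(D):
                z[d] = -B[k, f]                  # the negated RREF entry in each pivot position
            Z.append(z)
        return Z                                 # the span of these vectors is N(A)

    # The matrix of Example NSDS; SymPy's built-in A.nullspace() performs an equivalent computation.
    A = Matrix([[2, 1, 5, 1, 5, 1], [1, 1, 3, 1, 6, -1], [-1, 1, -1, 0, 4, -3],
                [-3, 2, -4, -4, -7, 0], [3, -1, 5, 2, 2, 3]])
    for z in ssns_vectors(A):
        print(list(z))    # [-2, -1, 1, 0, 0, 0], [1, -3, 0, -4, 1, 0], [-2, 1, 0, 2, 0, 1]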

More advanced computational devices will compute the null space of a matrix. See: Computation NS.MMA. Here’s an example that will simultaneously exercise the span construction and Theorem SSNS, while also pointing the way to the next section.

Example SCAD
Span of the columns of Archetype D
Begin with the set of four vectors of size 3

T = \left \{{w}_{1},\kern 1.95872pt {w}_{2},\kern 1.95872pt {w}_{3},\kern 1.95872pt {w}_{4}\right \} = \left \{\left [\array{ 2 \cr −3 \cr 1 } \right ],\kern 1.95872pt \left [\array{ 1 \cr 4 \cr 1 } \right ],\kern 1.95872pt \left [\array{ 7 \cr −5 \cr 4 } \right ],\kern 1.95872pt \left [\array{ −7 \cr −6 \cr −5 } \right ]\right \}

and consider the infinite set W = \left \langle T\right \rangle . The vectors of T have been chosen as the four columns of the coefficient matrix in Archetype D. Check that the vector

{ z}_{2} = \left [\array{ 2 \cr 3 \cr 0 \cr 1 } \right ]

is a solution to the homogeneous system ℒS\kern -1.95872pt \left (D,\kern 1.95872pt 0\right ) (it is the vector {z}_{2} provided by the description of the null space of the coefficient matrix D from Theorem SSNS). Applying Theorem SLSLC, we can write the linear combination,

2{w}_{1} + 3{w}_{2} + 0{w}_{3} + 1{w}_{4} = 0

which we can solve for {w}_{4},

{w}_{4} = (−2){w}_{1} + (−3){w}_{2}.

This equation says that whenever we encounter the vector {w}_{4}, we can replace it with a specific linear combination of the vectors {w}_{1} and {w}_{2}. So using {w}_{4} in the set T, along with {w}_{1} and {w}_{2}, is excessive. An example of what we mean here can be illustrated by the computation,

\eqalignno{ 5{w}_{1} + (−4){w}_{2} + 6{w}_{3} + (−3){w}_{4}& = 5{w}_{1} + (−4){w}_{2} + 6{w}_{3} + (−3)\left ((−2){w}_{1} + (−3){w}_{2}\right )&& \cr & = 5{w}_{1} + (−4){w}_{2} + 6{w}_{3} + \left (6{w}_{1} + 9{w}_{2}\right ) && \cr & = 11{w}_{1} + 5{w}_{2} + 6{w}_{3}. && }

So what began as a linear combination of the vectors {w}_{1},\kern 1.95872pt {w}_{2},\kern 1.95872pt {w}_{3},\kern 1.95872pt {w}_{4} has been reduced to a linear combination of the vectors {w}_{1},\kern 1.95872pt {w}_{2},\kern 1.95872pt {w}_{3}. A careful proof using our definition of set equality (Definition SE) would now allow us to conclude that this reduction is possible for any vector in W, so

W = \left \langle \left \{{w}_{1},\kern 1.95872pt {w}_{2},\kern 1.95872pt {w}_{3}\right \}\right \rangle .

So the span of our set of vectors, W, has not changed, but we have described it by the span of a set of three vectors, rather than four. Furthermore, we can achieve yet another, similar, reduction.

Check that the vector

{ z}_{1} = \left [\array{ −3 \cr −1 \cr 1 \cr 0 } \right ]

is a solution to the homogeneous system ℒS\kern -1.95872pt \left (D,\kern 1.95872pt 0\right ) (it is the vector {z}_{1} provided by the description of the null space of the coefficient matrix D from Theorem SSNS). Applying Theorem SLSLC, we can write the linear combination,

(−3){w}_{1} + (−1){w}_{2} + 1{w}_{3} = 0

which we can solve for {w}_{3},

{w}_{3} = 3{w}_{1} + 1{w}_{2}.

This equation says that whenever we encounter the vector {w}_{3}, we can replace it with a specific linear combination of the vectors {w}_{1} and {w}_{2}. So, as before, the vector {w}_{3} is not needed in the description of W, provided we have {w}_{1} and {w}_{2} available. In particular, a careful proof (such as is done in Example RSC5) would show that

W = \left \langle \left \{{w}_{1},\kern 1.95872pt {w}_{2}\right \}\right \rangle .

So W began life as the span of a set of four vectors, and we have now shown (utilizing solutions to a homogeneous system) that W can also be described as the span of a set of just two vectors. Convince yourself that we cannot go any further. In other words, it is not possible to dismiss either {w}_{1} or {w}_{2} in a similar fashion and winnow the set down to just one vector.
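
As a quick check (again our own code, not part of the text), the two replacement equations and the impossibility of a further reduction can be confirmed with the membership test sketched earlier in this section; the helper in_span below is the same hypothetical routine.

    from sympy import Matrix

    def in_span(S, w):
        A = Matrix.hstack(*S)
        _, pivots = Matrix.hstack(A, Matrix(w)).rref()
        return A.shape[1] not in pivots

    # The four columns of the coefficient matrix of Archetype D
    w1, w2, w3, w4 = (Matrix([2, -3, 1]), Matrix([1, 4, 1]),
                      Matrix([7, -5, 4]), Matrix([-7, -6, -5]))
    print(w4 == -2*w1 - 3*w2, w3 == 3*w1 + w2)           # True True: the two replacement equations
    print(in_span([w1, w2], w3), in_span([w1, w2], w4))  # True True: w3 and w4 are redundant
    print(in_span([w1], w2))                             # False: the set cannot be winnowed down to one vector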

What was it about the original set of four vectors that allowed us to declare certain vectors as surplus? And just which vectors were we able to dismiss? And why did we have to stop once we had two vectors remaining? The answers to these questions motivate “linear independence,” our next section and next definition, and so are worth considering carefully now.

It is possible to have your computational device crank out the vector form of the solution set to a linear system of equations. See: Computation VFSS.MMA.

Subsection READ: Reading Questions

  1. Let S be the set of three vectors below.
    S = \left \{\left [\array{ 1 \cr 2 \cr −1 } \right ],\kern 1.95872pt \left [\array{ 3 \cr −4 \cr 2 } \right ],\kern 1.95872pt \left [\array{ 4 \cr −2 \cr 1 } \right ]\right \}

    Let W = \left \langle S\right \rangle be the span of S. Is the vector \left [\array{ −1 \cr 8 \cr −4 } \right ] in W? Give an explanation of the reason for your answer.

  2. Use S and W from the previous question. Is the vector \left [\array{ 6 \cr 5 \cr −1 } \right ] in W? Give an explanation of the reason for your answer.
  3. For the matrix A below, find a set S so that \left \langle S\right \rangle = N\kern -1.95872pt \left (A\right ), where N\kern -1.95872pt \left (A\right ) is the null space of A. (See Theorem SSNS.)
    A = \left [\array{ 1&3& 1 &9 \cr 2&1&−3&8 \cr 1&1&−1&5 } \right ]

Subsection EXC: Exercises

C22 For each archetype that is a system of equations, consider the corresponding homogeneous system of equations. Write elements of the solution set to these homogeneous systems in vector form, as guaranteed by Theorem VFSLS. Then write the null space of the coefficient matrix of each system as the span of a set of vectors, as described in Theorem SSNS.
Archetype A
Archetype B
Archetype C
Archetype D / Archetype E
Archetype F
Archetype G / Archetype H
Archetype I
Archetype J

 
Contributed by Robert Beezer Solution [397]

C23 Archetype K and Archetype L are defined as matrices. Use Theorem SSNS directly to find a set S so that \left \langle S\right \rangle is the null space of the matrix. Do not make any reference to the associated homogeneous system of equations in your solution.  
Contributed by Robert Beezer Solution [397]

C40 Suppose that S = \left \{\left [\array{ 2 \cr −1 \cr 3 \cr 4 } \right ],\kern 1.95872pt \left [\array{ 3 \cr 2 \cr −2 \cr 1 } \right ]\right \}. Let W = \left \langle S\right \rangle and let x = \left [\array{ 5 \cr 8 \cr −12 \cr −5 } \right ]. Is x ∈ W? If so, provide an explicit linear combination that demonstrates this.  
Contributed by Robert Beezer Solution [397]

C41 Suppose that S = \left \{\left [\array{ 2 \cr −1 \cr 3 \cr 4 } \right ],\kern 1.95872pt \left [\array{ 3 \cr 2 \cr −2 \cr 1 } \right ]\right \}. Let W = \left \langle S\right \rangle and let y = \left [\array{ 5 \cr 1 \cr 3 \cr 5 } \right ]. Is y ∈ W? If so, provide an explicit linear combination that demonstrates this.  
Contributed by Robert Beezer Solution [399]

C42 Suppose R = \left \{\left [\array{ 2 \cr −1 \cr 3 \cr 4 \cr 0 } \right ],\kern 1.95872pt \left [\array{ 1 \cr 1 \cr 2 \cr 2 \cr −1 } \right ],\kern 1.95872pt \left [\array{ 3 \cr −1 \cr 0 \cr 3 \cr −2 } \right ]\right \}. Is y = \left [\array{ 1 \cr −1 \cr −8 \cr −4 \cr −3 } \right ] in \left \langle R\right \rangle ?  
Contributed by Robert Beezer Solution [400]

C43 Suppose R = \left \{\left [\array{ 2 \cr −1 \cr 3 \cr 4 \cr 0 } \right ],\kern 1.95872pt \left [\array{ 1 \cr 1 \cr 2 \cr 2 \cr −1 } \right ],\kern 1.95872pt \left [\array{ 3 \cr −1 \cr 0 \cr 3 \cr −2 } \right ]\right \}. Is z = \left [\array{ 1 \cr 1 \cr 5 \cr 3 \cr 1 } \right ] in \left \langle R\right \rangle ?  
Contributed by Robert Beezer Solution [402]

C44 Suppose that S = \left \{\left [\array{ −1 \cr 2 \cr 1 } \right ],\kern 1.95872pt \left [\array{ 3 \cr 1 \cr 2 } \right ],\kern 1.95872pt \left [\array{ 1 \cr 5 \cr 4 } \right ],\kern 1.95872pt \left [\array{ −6 \cr 5 \cr 1 } \right ]\right \}. Let W = \left \langle S\right \rangle and let y = \left [\array{ −5 \cr 3 \cr 0 } \right ]. Is y ∈ W? If so, provide an explicit linear combination that demonstrates this.  
Contributed by Robert Beezer Solution [404]

C45 Suppose that S = \left \{\left [\array{ −1 \cr 2 \cr 1 } \right ],\kern 1.95872pt \left [\array{ 3 \cr 1 \cr 2 } \right ],\kern 1.95872pt \left [\array{ 1 \cr 5 \cr 4 } \right ],\kern 1.95872pt \left [\array{ −6 \cr 5 \cr 1 } \right ]\right \}. Let W = \left \langle S\right \rangle and let w = \left [\array{ 2 \cr 1 \cr 3 } \right ]. Is w ∈ W? If so, provide an explicit linear combination that demonstrates this.  
Contributed by Robert Beezer Solution [405]

C50 Let A be the matrix below.
(a) Find a set S so that N\kern -1.95872pt \left (A\right ) = \left \langle S\right \rangle .
(b) If z = \left [\array{ 3 \cr −5 \cr 1 \cr 2 } \right ], then show directly that z ∈N\kern -1.95872pt \left (A\right ).
(c) Write z as a linear combination of the vectors in S.

\eqalignno{ A = \left [\array{ 2 &3&1&4 \cr 1 &2&1&3 \cr −1&0&1&1 } \right ] & & }

 
Contributed by Robert Beezer Solution [407]

C60 For the matrix A below, find a set of vectors S so that the span of S equals the null space of A, \left \langle S\right \rangle = N\kern -1.95872pt \left (A\right ).

A = \left [\array{ 1 & 1 & 6 &−8 \cr 1 &−2& 0 & 1 \cr −2& 1 &−6& 7 } \right ]

 
Contributed by Robert Beezer Solution [410]

M10 Consider the set of all size 2 vectors in the Cartesian plane {ℝ}^{2}.

  1. Give a geometric description of the span of a single vector.
  2. How can you tell if two vectors span the entire plane, without doing any row reduction or calculation?

 
Contributed by Chris Black Solution [412]

M11 Consider the set of all size 3 vectors in Cartesian 3-space {ℝ}^{3}.

  1. Give a geometric description of the span of a single vector.
  2. Describe the possibilities for the span of two vectors.
  3. Describe the possibilities for the span of three vectors.

 
Contributed by Chris Black Solution [412]

M12 Let u = \left [\array{ 1 \cr 3 \cr −2 } \right ] and v = \left [\array{ 2 \cr −2 \cr 1 } \right ].

  1. Find a vector {w}_{1}, different from u and v, so that \left \langle u,v,{w}_{1}\right \rangle = \left \langle u,v\right \rangle .
  2. Find a vector {w}_{2} so that \left \langle u,v,{w}_{2}\right \rangle \mathrel{≠}\left \langle u,v\right \rangle .

 
Contributed by Chris Black Solution [413]

M20 In Example SCAD we began with the four columns of the coefficient matrix of Archetype D, and used these columns in a span construction. Then we methodically argued that we could remove the last column, then the third column, and create the same set by just doing a span construction with the first two columns. We claimed we could not go any further, and had removed as many vectors as possible. Provide a convincing argument for why a third vector cannot be removed.  
Contributed by Robert Beezer

M21 In the spirit of Example SCAD, begin with the four columns of the coefficient matrix of Archetype C, and use these columns in a span construction to build the set S. Argue that S can be expressed as the span of just three of the columns of the coefficient matrix (saying exactly which three) and in the spirit of Exercise SS.M20 argue that no one of these three vectors can be removed and still have a span construction create S.  
Contributed by Robert Beezer Solution [414]

T10 Suppose that {v}_{1},\kern 1.95872pt {v}_{2} ∈ {ℂ}^{m}. Prove that

\left \langle \left \{{v}_{1},\kern 1.95872pt {v}_{2}\right \}\right \rangle = \left \langle \left \{{v}_{1},\kern 1.95872pt {v}_{2},\kern 1.95872pt 5{v}_{1} + 3{v}_{2}\right \}\right \rangle

 
Contributed by Robert Beezer Solution [416]

T20 Suppose that S is a set of vectors from {ℂ}^{m}. Prove that the zero vector, 0, is an element of \left \langle S\right \rangle .  
Contributed by Robert Beezer Solution [417]

T21 Suppose that S is a set of vectors from {ℂ}^{m} and x,\kern 1.95872pt y ∈\left \langle S\right \rangle . Prove that x + y ∈\left \langle S\right \rangle .  
Contributed by Robert Beezer

T22 Suppose that S is a set of vectors from {ℂ}^{m}, α ∈ {ℂ}^{}, and x ∈\left \langle S\right \rangle . Prove that αx ∈\left \langle S\right \rangle .  
Contributed by Robert Beezer

Subsection SOL: Solutions

C22 Contributed by Robert Beezer Statement [390]
The vector form of the solutions obtained in this manner will involve precisely the vectors described in Theorem SSNS as providing the null space of the coefficient matrix of the system as a span. These vectors occur in each archetype in a description of the null space. Studying Example VFSAL may be of some help.

C23 Contributed by Robert Beezer Statement [390]
Study Example NSDS to understand the correct approach to this question. The solution for each is listed in the Archetypes (Appendix A) themselves.

C40 Contributed by Robert Beezer Statement [390]
Rephrasing the question, we want to know if there are scalars {α}_{1} and {α}_{2} such that

{ α}_{1}\left [\array{ 2 \cr −1 \cr 3 \cr 4 } \right ]+{α}_{2}\left [\array{ 3 \cr 2 \cr −2 \cr 1 } \right ] = \left [\array{ 5 \cr 8 \cr −12 \cr −5 } \right ]

Theorem SLSLC allows us to rephrase the question again as a quest for solutions to the system of four equations in two unknowns with an augmented matrix given by

\left [\array{ 2 & 3 & 5 \cr −1& 2 & 8 \cr 3 &−2&−12 \cr 4 & 1 & −5 } \right ]

This matrix row-reduces to

\left [\array{ \text{1}&0&−2 \cr 0&\text{1}& 3 \cr 0&0& 0 \cr 0&0& 0 } \right ]

From the form of this matrix, we can see that {α}_{1} = −2 and {α}_{2} = 3 is an affirmative answer to our question. More convincingly,

(−2)\left [\array{ 2 \cr −1 \cr 3 \cr 4 } \right ]+(3)\left [\array{ 3 \cr 2 \cr −2 \cr 1 } \right ] = \left [\array{ 5 \cr 8 \cr −12 \cr −5 } \right ]

C41 Contributed by Robert Beezer Statement [391]
Rephrasing the question, we want to know if there are scalars {α}_{1} and {α}_{2} such that

{ α}_{1}\left [\array{ 2 \cr −1 \cr 3 \cr 4 } \right ]+{α}_{2}\left [\array{ 3 \cr 2 \cr −2 \cr 1 } \right ] = \left [\array{ 5 \cr 1 \cr 3 \cr 5 } \right ]

Theorem SLSLC allows us to rephrase the question again as a quest for solutions to the system of four equations in two unknowns with an augmented matrix given by

\left [\array{ 2 & 3 &5 \cr −1& 2 &1 \cr 3 &−2&3 \cr 4 & 1 &5 } \right ]

This matrix row-reduces to

\left [\array{ \text{1}&0&0 \cr 0&\text{1}&0 \cr 0&0&\text{1} \cr 0&0&0} \right ]

With a leading 1 in the last column of this matrix (Theorem RCLS) we can see that the system of equations has no solution, so there are no values for {α}_{1} and {α}_{2} that will allow us to conclude that y is in W. So y∉W.

C42 Contributed by Robert Beezer Statement [391]
Form a linear combination, with unknown scalars, of R that equals y,

{ a}_{1}\left [\array{ 2 \cr −1 \cr 3 \cr 4 \cr 0 } \right ]+{a}_{2}\left [\array{ 1 \cr 1 \cr 2 \cr 2 \cr −1 } \right ]+{a}_{3}\left [\array{ 3 \cr −1 \cr 0 \cr 3 \cr −2 } \right ] = \left [\array{ 1 \cr −1 \cr −8 \cr −4 \cr −3 } \right ]

We want to know if there are values for the scalars that make the vector equation true since that is the definition of membership in \left \langle R\right \rangle . By Theorem SLSLC any such values will also be solutions to the linear system represented by the augmented matrix,

\left [\array{ 2 & 1 & 3 & 1 \cr −1& 1 &−1&−1 \cr 3 & 2 & 0 &−8 \cr 4 & 2 & 3 &−4 \cr 0 &−1&−2&−3 } \right ]

Row-reducing the matrix yields,

\left [\array{ \text{1}&0&0&−2 \cr 0&\text{1}&0&−1 \cr 0&0&\text{1}& 2 \cr 0&0&0& 0 \cr 0&0&0& 0 } \right ]

From this we see that the system of equations is consistent (Theorem RCLS), and has a unique solution. This solution will provide a linear combination of the vectors in R that equals y. So y ∈\left \langle R\right \rangle .

C43 Contributed by Robert Beezer Statement [391]
Form a linear combination, with unknown scalars, of R that equals z,

{ a}_{1}\left [\array{ 2 \cr −1 \cr 3 \cr 4 \cr 0 } \right ]+{a}_{2}\left [\array{ 1 \cr 1 \cr 2 \cr 2 \cr −1 } \right ]+{a}_{3}\left [\array{ 3 \cr −1 \cr 0 \cr 3 \cr −2 } \right ] = \left [\array{ 1 \cr 1 \cr 5 \cr 3 \cr 1 } \right ]

We want to know if there are values for the scalars that make the vector equation true since that is the definition of membership in \left \langle R\right \rangle . By Theorem SLSLC any such values will also be solutions to the linear system represented by the augmented matrix,

\left [\array{ 2 & 1 & 3 &1 \cr −1& 1 &−1&1 \cr 3 & 2 & 0 &5 \cr 4 & 2 & 3 &3 \cr 0 &−1&−2&1 } \right ]

Row-reducing the matrix yields,

\left [\array{ \text{1}&0&0&0 \cr 0&\text{1}&0&0 \cr 0&0&\text{1}&0 \cr 0&0&0&\text{1} \cr 0&0&0&0 } \right ]

With a leading 1 in the last column, the system is inconsistent (Theorem RCLS), so there are no scalars {a}_{1},\kern 1.95872pt {a}_{2},\kern 1.95872pt {a}_{3} that will create a linear combination of the vectors in R that equals z. So z∉\left \langle R\right \rangle .

C44 Contributed by Robert Beezer Statement [392]
Form a linear combination, with unknown scalars, of S that equals y,

{ a}_{1}\left [\array{ −1 \cr 2 \cr 1 } \right ]+{a}_{2}\left [\array{ 3 \cr 1 \cr 2 } \right ]+{a}_{3}\left [\array{ 1 \cr 5 \cr 4 } \right ]+{a}_{4}\left [\array{ −6 \cr 5 \cr 1 } \right ] = \left [\array{ −5 \cr 3 \cr 0 } \right ]

We want to know if there are values for the scalars that make the vector equation true since that is the definition of membership in \left \langle S\right \rangle . By Theorem SLSLC any such values will also be solutions to the linear system represented by the augmented matrix,

\left [\array{ −1&3&1&−6&−5 \cr 2 &1&5& 5 & 3 \cr 1 &2&4& 1 & 0 } \right ]

Row-reducing the matrix yields,

\left [\array{ \text{1}&0&2& 3 & 2 \cr 0&\text{1}&1&−1&−1 \cr 0&0&0& 0 & 0 } \right ]

From this we see that the system of equations is consistent (Theorem RCLS), and has infinitely many solutions. Any solution will provide a linear combination of the vectors in S that equals y. So y ∈\left \langle S\right \rangle . For example,

(−10)\left [\array{ −1 \cr 2 \cr 1 } \right ]+(−2)\left [\array{ 3 \cr 1 \cr 2 } \right ]+(3)\left [\array{ 1 \cr 5 \cr 4 } \right ]+(2)\left [\array{ −6 \cr 5 \cr 1 } \right ] = \left [\array{ −5 \cr 3 \cr 0 } \right ]

C45 Contributed by Robert Beezer Statement [392]
Form a linear combination, with unknown scalars, of S that equals w,

{ a}_{1}\left [\array{ −1 \cr 2 \cr 1 } \right ]+{a}_{2}\left [\array{ 3 \cr 1 \cr 2 } \right ]+{a}_{3}\left [\array{ 1 \cr 5 \cr 4 } \right ]+{a}_{4}\left [\array{ −6 \cr 5 \cr 1 } \right ] = \left [\array{ 2 \cr 1 \cr 3 } \right ]

We want to know if there are values for the scalars that make the vector equation true since that is the definition of membership in \left \langle S\right \rangle . By Theorem SLSLC any such values will also be solutions to the linear system represented by the augmented matrix,

\left [\array{ −1&3&1&−6&2 \cr 2 &1&5& 5 &1 \cr 1 &2&4& 1 &3 } \right ]

Row-reducing the matrix yields,

\left [\array{ \text{1}&0&2& 3 &0 \cr 0&\text{1}&1&−1&0 \cr 0&0&0& 0 &\text{1} } \right ]

With a leading 1 in the last column, the system is inconsistent (Theorem RCLS), so there are no scalars {a}_{1},\kern 1.95872pt {a}_{2},\kern 1.95872pt {a}_{3},\kern 1.95872pt {a}_{4} that will create a linear combination of the vectors in S that equals w. So w∉\left \langle S\right \rangle .

C50 Contributed by Robert Beezer Statement [392]
(a) Theorem SSNS provides formulas for a set S with this property, but first we must row-reduce A

A\mathop{\longrightarrow}\limits_{}^{\text{RREF}}\left [\array{ \text{1}&0&−1&−1 \cr 0&\text{1}& 1 & 2 \cr 0&0& 0 & 0 } \right ]

{x}_{3} and {x}_{4} would be the free variables in the homogeneous system ℒS\kern -1.95872pt \left (A,\kern 1.95872pt 0\right ) and Theorem SSNS provides the set S = \left \{{z}_{1},\kern 1.95872pt {z}_{2}\right \} where

\eqalignno{ {z}_{1} & = \left [\array{ 1 \cr −1 \cr 1 \cr 0 } \right ] &{z}_{2} & = \left [\array{ 1 \cr −2 \cr 0 \cr 1 } \right ] & & & & }

(b) Simply employ the components of the vector z as the variables in the homogeneous system ℒS\kern -1.95872pt \left (A,\kern 1.95872pt 0\right ). The three equations of this system evaluate as follows,

\eqalignno{ 2(3) + 3(−5) + 1(1) + 4(2) & = 0 & & \cr 1(3) + 2(−5) + 1(1) + 3(2) & = 0 & & \cr − 1(3) + 0(−5) + 1(1) + 1(2) & = 0 & & }

Since each result is zero, z qualifies for membership in N\kern -1.95872pt \left (A\right ).

(c) By Theorem SSNS we know this must be possible (that is the moral of this exercise). Find scalars {α}_{1} and {α}_{2} so that

{ α}_{1}{z}_{1}+{α}_{2}{z}_{2} = {α}_{1}\left [\array{ 1 \cr −1 \cr 1 \cr 0 } \right ]+{α}_{2}\left [\array{ 1 \cr −2 \cr 0 \cr 1 } \right ] = \left [\array{ 3 \cr −5 \cr 1 \cr 2 } \right ] = z

Theorem SLSLC allows us to convert this question into a question about a system of four equations in two variables. The augmented matrix of this system row-reduces to

\left [\array{ \text{1}&0&1 \cr 0&\text{1}&2 \cr 0&0&0 \cr 0&0&0} \right ]

A solution is {α}_{1} = 1 and {α}_{2} = 2. (Notice too that this solution is unique!)

C60 Contributed by Robert Beezer Statement [393]
Theorem SSNS says that if we find the vector form of the solutions to the homogeneous system ℒS\kern -1.95872pt \left (A,\kern 1.95872pt 0\right ), then the fixed vectors (one per free variable) will have the desired property. Row-reduce A, viewing it as the augmented matrix of a homogeneous system with an invisible column of zeros as the last column,

\left [\array{ \text{1}&0&4&−5 \cr 0&\text{1}&2&−3 \cr 0&0&0& 0 } \right ]

Moving to the vector form of the solutions (Theorem VFSLS), with free variables {x}_{3} and {x}_{4}, solutions to the consistent system (it is homogeneous, Theorem HSC) can be expressed as

\left [\array{ {x}_{1} \cr {x}_{2} \cr {x}_{3} \cr {x}_{4} } \right ] = {x}_{3}\left [\array{ −4 \cr −2 \cr 1 \cr 0 } \right ]+{x}_{4}\left [\array{ 5 \cr 3 \cr 0 \cr 1 } \right ]

Then with S given by

S = \left \{\left [\array{ −4 \cr −2 \cr 1 \cr 0 } \right ],\kern 1.95872pt \left [\array{ 5 \cr 3 \cr 0 \cr 1 } \right ]\right \}

Theorem SSNS guarantees that

N\kern -1.95872pt \left (A\right ) = \left \langle S\right \rangle = \left \langle \left \{\left [\array{ −4 \cr −2 \cr 1 \cr 0 } \right ],\kern 1.95872pt \left [\array{ 5 \cr 3 \cr 0 \cr 1 } \right ]\right \}\right \rangle

M10 Contributed by Chris Black Statement [393]

  1. The span of a single vector v is the set of all linear combinations of that vector. Thus, \left \langle v\right \rangle = \left \{αv\mathrel{∣}α ∈ {ℝ}^{}\right \}. This is the line through the origin and containing the (geometric) vector v. Thus, if v = \left [\array{ {v}_{1} \cr {v}_{2} } \right ], then the span of v is the line through (0, 0) and ({v}_{1},{v}_{2}).
  2. Two vectors will span the entire plane if they point in different directions, meaning that u does not lie on the line through v and vice-versa. That is, for vectors u and v in {ℝ}^{2}, \left \langle u,v\right \rangle = {ℝ}^{2} if u is not a multiple of v.

M11 Contributed by Chris Black Statement [394]

  1. The span of a single vector v is the set of all linear combinations of that vector. Thus, \left \langle v\right \rangle = \left \{αv\mathrel{∣}α ∈ {ℝ}^{}\right \}. This is the line through the origin and containing the (geometric) vector v. Thus, if v = \left [\array{ {v}_{1} \cr {v}_{2} \cr {v}_{3} } \right ], then the span of v is the line through (0, 0, 0) and ({v}_{1},{v}_{2},{v}_{3}).
  2. If the two vectors point in the same direction, then their span is the line through them. Recall that while two points determine a line, three points determine a plane. Two vectors will span a plane if they point in different directions, meaning that u does not lie on the line through v and vice-versa. The plane spanned by u = \left [\array{ {u}_{1} \cr {u}_{2} \cr {u}_{3} } \right ] and v = \left [\array{ {v}_{1} \cr {v}_{2} \cr {v}_{3} } \right ] is determined by the origin and the points ({u}_{1},{u}_{2},{u}_{3}) and ({v}_{1},{v}_{2},{v}_{3}).
  3. If all three vectors lie on the same line, then the span is that line. If one is a linear combination of the other two, but they are not all on the same line, then they will lie in a plane. Otherwise, the span of the set of three vectors will be all of 3-space.

M12 Contributed by Chris Black Statement [394]

  1. If we can find a vector {w}_{1} that is a linear combination of u and v, then \left \langle u,v,{w}_{1}\right \rangle will be the same set as \left \langle u,v\right \rangle . Thus, {w}_{1} can be any linear combination of u and v. One such example is {w}_{1} = 3u − v = \left [\array{ 1 \cr 11 \cr −7 } \right ].
  2. Now we are looking for a vector {w}_{2} that cannot be written as a linear combination of u and v. How can we find such a vector? Any vector that matches two components but not the third of any element of \left \langle u,v\right \rangle will not be in the span (why?). One such example is {w}_{2} = \left [\array{ 4 \cr −4 \cr 1 } \right ] (which is nearly 2v, but not quite).

M21 Contributed by Robert Beezer Statement [395]
If the columns of the coefficient matrix from Archetype C are named {u}_{1},\kern 1.95872pt {u}_{2},\kern 1.95872pt {u}_{3},\kern 1.95872pt {u}_{4} then we can discover the equation

(−2){u}_{1} + (−3){u}_{2} + {u}_{3} + {u}_{4} = 0

by building a homogeneous system of equations and viewing a solution to the system as scalars in a linear combination via Theorem SLSLC. This particular vector equation can be rearranged to read

{u}_{4} = (2){u}_{1} + (3){u}_{2} + (−1){u}_{3}

This can be interpreted to mean that {u}_{4} is unnecessary in \left \langle \left \{{u}_{1},\kern 1.95872pt {u}_{2},\kern 1.95872pt {u}_{3},\kern 1.95872pt {u}_{4}\right \}\right \rangle , so that

\left \langle \left \{{u}_{1},\kern 1.95872pt {u}_{2},\kern 1.95872pt {u}_{3},\kern 1.95872pt {u}_{4}\right \}\right \rangle = \left \langle \left \{{u}_{1},\kern 1.95872pt {u}_{2},\kern 1.95872pt {u}_{3}\right \}\right \rangle

If we try to repeat this process and find a linear combination of {u}_{1},\kern 1.95872pt {u}_{2},\kern 1.95872pt {u}_{3} that equals the zero vector, we will fail. The required homogeneous system of equations (via Theorem SLSLC) has only a trivial solution, which will not provide the kind of equation we need to remove one of the three remaining vectors.

T10 Contributed by Robert Beezer Statement [395]
This is an equality of sets, so Definition SE applies.

First show that X = \left \langle \left \{{v}_{1},\kern 1.95872pt {v}_{2}\right \}\right \rangle ⊆\left \langle \left \{{v}_{1},\kern 1.95872pt {v}_{2},\kern 1.95872pt 5{v}_{1} + 3{v}_{2}\right \}\right \rangle = Y .
Choose x ∈ X. Then x = {a}_{1}{v}_{1} + {a}_{2}{v}_{2} for some scalars {a}_{1} and {a}_{2}. Then,

x = {a}_{1}{v}_{1} + {a}_{2}{v}_{2} = {a}_{1}{v}_{1} + {a}_{2}{v}_{2} + 0(5{v}_{1} + 3{v}_{2})

which qualifies x for membership in Y , as it is a linear combination of {v}_{1},\kern 1.95872pt {v}_{2},\kern 1.95872pt 5{v}_{1} + 3{v}_{2}.

Now show the opposite inclusion, Y = \left \langle \left \{{v}_{1},\kern 1.95872pt {v}_{2},\kern 1.95872pt 5{v}_{1} + 3{v}_{2}\right \}\right \rangle ⊆\left \langle \left \{{v}_{1},\kern 1.95872pt {v}_{2}\right \}\right \rangle = X.
Choose y ∈ Y . Then there are scalars {a}_{1},\kern 1.95872pt {a}_{2},\kern 1.95872pt {a}_{3} such that

y = {a}_{1}{v}_{1} + {a}_{2}{v}_{2} + {a}_{3}(5{v}_{1} + 3{v}_{2})

Rearranging, we obtain,

\eqalignno{ y & = {a}_{1}{v}_{1} + {a}_{2}{v}_{2} + {a}_{3}(5{v}_{1} + 3{v}_{2}) & & \cr & = {a}_{1}{v}_{1} + {a}_{2}{v}_{2} + 5{a}_{3}{v}_{1} + 3{a}_{3}{v}_{2} & &\text{Property DVAC} \cr & = {a}_{1}{v}_{1} + 5{a}_{3}{v}_{1} + {a}_{2}{v}_{2} + 3{a}_{3}{v}_{2} & &\text{Property CC} \cr & = ({a}_{1} + 5{a}_{3}){v}_{1} + ({a}_{2} + 3{a}_{3}){v}_{2} & &\text{Property DSAC} }

This is an expression for y as a linear combination of {v}_{1} and {v}_{2}, earning y membership in X. Since X is a subset of Y , and vice versa, we see that X = Y , as desired.

T20 Contributed by Robert Beezer Statement [395]
No matter what the elements of the set S are, we can choose the scalars in a linear combination to all be zero. Suppose that S = \left \{{v}_{1},\kern 1.95872pt {v}_{2},\kern 1.95872pt {v}_{3},\kern 1.95872pt \mathop{\mathop{…}},\kern 1.95872pt {v}_{p}\right \}. Then compute

\eqalignno{ 0{v}_{1} + 0{v}_{2} + 0{v}_{3} + \mathrel{⋯} + 0{v}_{p} & = 0 + 0 + 0 + \mathrel{⋯} + 0 & & \cr & = 0 & & }

But what if we choose S to be the empty set? The convention is that the empty sum in Definition SSCV evaluates to “zero,” which in this case is the zero vector.