Section SS  Spanning Sets

From A First Course in Linear Algebra
Version 2.20
© 2004.
Licensed under the GNU Free Documentation License.
http://linear.ups.edu/

In this section we will describe a compact way to indicate the elements of an infinite set of vectors, making use of linear combinations. This will give us a convenient way to describe the elements of a set of solutions to a linear system, or the elements of the null space of a matrix, or many other sets of vectors.

Subsection SSV: Span of a Set of Vectors

In Example VFSAL we saw the solution set of a homogeneous system described as all possible linear combinations of two particular vectors. This happens to be a useful way to construct or describe infinite sets of vectors, so we encapsulate this idea in a definition.

Definition SSCV
Span of a Set of Column Vectors
Given a set of vectors S = {u1, u2, u3, …, up}, their span, ⟨S⟩, is the set of all possible linear combinations of u1, u2, u3, …, up. Symbolically,

⟨S⟩ = { α1u1 + α2u2 + α3u3 + ⋯ + αpup | αi ∈ ℂ, 1 ≤ i ≤ p }
    = { Σ_{i=1}^{p} αiui | αi ∈ ℂ, 1 ≤ i ≤ p }

(This definition contains Notation SSV.)

The span is just a set of vectors, though in all but one situation it is an infinite set. (Just when is it not infinite?) So we start with a finite collection of vectors S (p of them, to be precise), and use this finite set to describe an infinite set of vectors, ⟨S⟩. Confusing the finite set S with the infinite set ⟨S⟩ is one of the most pervasive problems in understanding introductory linear algebra. We will see this construction repeatedly, so let’s work through some examples to get comfortable with it. The most obvious question about a set is whether a particular item of the correct type is in the set, or not.
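Since a span is nothing more than the set of all linear combinations, constructing one of its elements is a single computation. Here is a minimal sketch in Python; the helper name and the two example vectors are our own, not from the text.

```python
def linear_combination(scalars, vectors):
    """Return sum_i scalars[i] * vectors[i], computed entrywise."""
    n = len(vectors[0])
    return [sum(a * v[i] for a, v in zip(scalars, vectors)) for i in range(n)]

S = [[1, 2, 1], [-1, 1, 1]]          # a finite set S of two vectors

w = linear_combination([5, -3], S)   # one element of the (infinite) span of S
z = linear_combination([0, 0], S)    # the zero vector is always in the span

print(w)  # [8, 7, 2]
print(z)  # [0, 0, 0]
```

Choosing different scalar lists produces different members of the span; the finite set S never changes.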

Example ABS
A basic span
Consider the set of 5 vectors, S, from ℂ⁴,

S = { (1, 1, 3, 1), (2, 1, 2, −1), (7, 3, 5, −5), (1, 1, −1, 2), (−1, 0, 9, 0) }

where each 4-tuple is to be read as a column vector.

and consider the infinite set of vectors ⟨S⟩ formed from all possible linear combinations of the elements of S. Here are four vectors we definitely know are elements of ⟨S⟩, since we will construct them in accordance with Definition SSCV,

w = (2)(1, 1, 3, 1) + (1)(2, 1, 2, −1) + (−1)(7, 3, 5, −5) + (2)(1, 1, −1, 2) + (3)(−1, 0, 9, 0) = (−4, 2, 28, 10)
x = (5)(1, 1, 3, 1) + (−6)(2, 1, 2, −1) + (−3)(7, 3, 5, −5) + (4)(1, 1, −1, 2) + (2)(−1, 0, 9, 0) = (−26, −6, 2, 34)
y = (1)(1, 1, 3, 1) + (0)(2, 1, 2, −1) + (1)(7, 3, 5, −5) + (0)(1, 1, −1, 2) + (1)(−1, 0, 9, 0) = (7, 4, 17, −4)
z = (0)(1, 1, 3, 1) + (0)(2, 1, 2, −1) + (0)(7, 3, 5, −5) + (0)(1, 1, −1, 2) + (0)(−1, 0, 9, 0) = (0, 0, 0, 0)

The purpose of a set is to collect objects with some common property, and to exclude objects without that property. So the most fundamental question about a set is whether a given object is an element of the set or not. Let’s learn more about ⟨S⟩ by investigating which vectors are elements of the set, and which are not.

First, is u = (−15, −6, 19, 5) an element of ⟨S⟩? We are asking if there are scalars α1, α2, α3, α4, α5 such that

α1(1, 1, 3, 1) + α2(2, 1, 2, −1) + α3(7, 3, 5, −5) + α4(1, 1, −1, 2) + α5(−1, 0, 9, 0) = u = (−15, −6, 19, 5)

Applying Theorem SLSLC we recognize the search for these scalars as a solution to a linear system of equations with augmented matrix

[ 1   2   7   1  −1 | −15 ]
[ 1   1   3   1   0 |  −6 ]
[ 3   2   5  −1   9 |  19 ]
[ 1  −1  −5   2   0 |   5 ]

which row-reduces to

[ 1   0  −1   0   3 |  10 ]
[ 0   1   4   0  −1 |  −9 ]
[ 0   0   0   1  −2 |  −7 ]
[ 0   0   0   0   0 |   0 ]

At this point, we see that the system is consistent (Theorem RCLS), so we know there is a solution for the five scalars α1, α2, α3, α4, α5. This is enough evidence for us to say that u ∈ ⟨S⟩. If we wished further evidence, we could compute an actual solution, say

α1 = 2   α2 = 1   α3 = −2   α4 = −3   α5 = 2

This particular solution allows us to write

(2)(1, 1, 3, 1) + (1)(2, 1, 2, −1) + (−2)(7, 3, 5, −5) + (−3)(1, 1, −1, 2) + (2)(−1, 0, 9, 0) = u = (−15, −6, 19, 5)

making it even more obvious that u ∈ ⟨S⟩.

Let’s do it again. Is v = (3, 1, 2, −1) an element of ⟨S⟩? We are asking if there are scalars α1, α2, α3, α4, α5 such that

α1(1, 1, 3, 1) + α2(2, 1, 2, −1) + α3(7, 3, 5, −5) + α4(1, 1, −1, 2) + α5(−1, 0, 9, 0) = v = (3, 1, 2, −1)

Applying Theorem SLSLC we recognize the search for these scalars as a solution to a linear system of equations with augmented matrix

[ 1   2   7   1  −1 |  3 ]
[ 1   1   3   1   0 |  1 ]
[ 3   2   5  −1   9 |  2 ]
[ 1  −1  −5   2   0 | −1 ]

which row-reduces to

[ 1   0  −1   0   3 | 0 ]
[ 0   1   4   0  −1 | 0 ]
[ 0   0   0   1  −2 | 0 ]
[ 0   0   0   0   0 | 1 ]

At this point, we see that the system is inconsistent by Theorem RCLS, so we know there is not a solution for the five scalars α1, α2, α3, α4, α5. This is enough evidence for us to say that v ∉ ⟨S⟩. End of story.

Example SCAA
Span of the columns of Archetype A
Begin with the finite set of three vectors of size 3

S = {u1, u2, u3} = { (1, 2, 1), (−1, 1, 1), (2, 1, 0) }

and consider the infinite set ⟨S⟩. The vectors of S could have been chosen to be anything, but for reasons that will become clear later, we have chosen the three columns of the coefficient matrix in Archetype A. First, as an example, note that

v = (5)(1, 2, 1) + (−3)(−1, 1, 1) + (7)(2, 1, 0) = (22, 14, 2)

is in ⟨S⟩, since it is a linear combination of u1, u2, u3. We write this succinctly as v ∈ ⟨S⟩. There is nothing magical about the scalars α1 = 5, α2 = −3, α3 = 7; they could have been chosen to be anything. So repeat this part of the example yourself, using different values of α1, α2, α3. What happens if you choose all three scalars to be zero?

So we know how to quickly construct sample elements of the set ⟨S⟩. A slightly different question arises when you are handed a vector of the correct size and asked if it is an element of ⟨S⟩. For example, is w = (1, 8, 5) in ⟨S⟩? More succinctly, w ∈ ⟨S⟩?

To answer this question, we will look for scalars α1,α2,α3 so that

α1u1 + α2u2 + α3u3 = w.

By Theorem SLSLC, solutions to this vector equation are solutions to the system of equations

α1 − α2 + 2α3 = 1
2α1 + α2 + α3 = 8
α1 + α2 = 5

Building the augmented matrix for this linear system, and row-reducing, gives

[ 1   0   1 | 3 ]
[ 0   1  −1 | 2 ]
[ 0   0   0 | 0 ]

This system has infinitely many solutions (the variable α3 is free), but all we need is one solution vector. The solution,

α1 = 2 α2 = 3 α3 = 1

tells us that

(2)u1 + (3)u2 + (1)u3 = w

so we are convinced that w really is in S. Notice that there are an infinite number of ways to answer this question affirmatively. We could choose a different solution, this time choosing the free variable to be zero,

α1 = 3 α2 = 2 α3 = 0

shows us that

(3)u1 + (2)u2 + (0)u3 = w

Verifying the arithmetic in this second solution will make it obvious that w is in this span. And of course, we now realize that there are an infinite number of ways to realize w as an element of ⟨S⟩. Let’s ask the same type of question again, but this time with y = (2, 4, 3), i.e. is y ∈ ⟨S⟩?

So we’ll look for scalars α1,α2,α3 so that

α1u1 + α2u2 + α3u3 = y.

By Theorem SLSLC, solutions to this vector equation are the solutions to the system of equations

α1 − α2 + 2α3 = 2
2α1 + α2 + α3 = 4
α1 + α2 = 3

Building the augmented matrix for this linear system, and row-reducing, gives

[ 1   0   1 | 0 ]
[ 0   1  −1 | 0 ]
[ 0   0   0 | 1 ]

This system is inconsistent (there’s a leading 1 in the last column, Theorem RCLS), so there are no scalars α1, α2, α3 that will create a linear combination of u1, u2, u3 that equals y. More precisely, y ∉ ⟨S⟩.

There are three things to observe in this example. (1) It is easy to construct vectors in ⟨S⟩. (2) It is possible that some vectors are in ⟨S⟩ (e.g. w), while others are not (e.g. y). (3) Deciding if a given vector is in ⟨S⟩ leads to solving a linear system of equations and asking if the system is consistent.

With a computer program in hand to solve systems of linear equations, could you create a program to decide if a vector was, or wasn’t, in the span of a given set of vectors? Is this art or science?
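One possible answer to the question above, sketched in Python: membership of w in the span of u1, …, up is exactly consistency of the linear system with augmented matrix [u1 | ⋯ | up | w] (Theorem SLSLC), which we can test with exact rational row reduction. The function name is our own; the test vectors are the columns of Archetype A’s coefficient matrix, with w and y as in Example SCAA.

```python
from fractions import Fraction

def in_span(vectors, w):
    """Return True if w is a linear combination of the given column vectors."""
    m, p = len(w), len(vectors)
    # build the augmented matrix [u1 | ... | up | w], row by row
    aug = [[Fraction(vectors[j][i]) for j in range(p)] + [Fraction(w[i])]
           for i in range(m)]
    row = 0
    for col in range(p):                       # forward elimination, column by column
        pivot = next((r for r in range(row, m) if aug[r][col] != 0), None)
        if pivot is None:
            continue
        aug[row], aug[pivot] = aug[pivot], aug[row]
        for r in range(m):
            if r != row and aug[r][col] != 0:
                f = aug[r][col] / aug[row][col]
                aug[r] = [a - f * b for a, b in zip(aug[r], aug[row])]
        row += 1
    # inconsistent iff some row is zero in every coefficient, nonzero at the end
    return not any(all(x == 0 for x in r[:-1]) and r[-1] != 0 for r in aug)

u1, u2, u3 = [1, 2, 1], [-1, 1, 1], [2, 1, 0]   # columns of Archetype A
print(in_span([u1, u2, u3], [1, 8, 5]))   # True
print(in_span([u1, u2, u3], [2, 4, 3]))   # False
```

This is science, not art: the answer is completely determined by Theorem RCLS applied to the reduced matrix.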

This example was built on vectors from the columns of the coefficient matrix of Archetype A. Study the determination that v ∈ ⟨S⟩ and see if you can connect it with some of the other properties of Archetype A.

Having analyzed Archetype A in Example SCAA, we will of course subject Archetype B to a similar investigation.

Example SCAB
Span of the columns of Archetype B
Begin with the finite set of three vectors of size 3 that are the columns of the coefficient matrix in Archetype B,

R = {v1, v2, v3} = { (−7, 5, 1), (−6, 5, 0), (−12, 7, 4) }

and consider the infinite set ⟨R⟩. First, as an example, note that

x = (2)(−7, 5, 1) + (4)(−6, 5, 0) + (−3)(−12, 7, 4) = (−2, 9, −10)

is in ⟨R⟩, since it is a linear combination of v1, v2, v3. In other words, x ∈ ⟨R⟩. Try some different values of α1, α2, α3 yourself, and see what vectors you can create as elements of ⟨R⟩.

Now ask if a given vector is an element of ⟨R⟩. For example, is z = (−33, 24, 5) in ⟨R⟩? Is z ∈ ⟨R⟩?

To answer this question, we will look for scalars α1,α2,α3 so that

α1v1 + α2v2 + α3v3 = z.

By Theorem SLSLC, solutions to this vector equation are the solutions to the system of equations

−7α1 − 6α2 − 12α3 = −33
5α1 + 5α2 + 7α3 = 24
α1 + 4α3 = 5

Building the augmented matrix for this linear system, and row-reducing, gives

[ 1   0   0 | −3 ]
[ 0   1   0 |  5 ]
[ 0   0   1 |  2 ]

This system has a unique solution,

α1 = −3   α2 = 5   α3 = 2

telling us that

(−3)v1 + (5)v2 + (2)v3 = z

so we are convinced that z really is in ⟨R⟩. Notice that in this case we have only one way to answer the question affirmatively since the solution is unique.

Let’s ask about another vector, say x = (−7, 8, −3). Is x in ⟨R⟩? Is x ∈ ⟨R⟩?

We desire scalars α1,α2,α3 so that

α1v1 + α2v2 + α3v3 = x.

By Theorem SLSLC, solutions to this vector equation are the solutions to the system of equations

−7α1 − 6α2 − 12α3 = −7
5α1 + 5α2 + 7α3 = 8
α1 + 4α3 = −3

Building the augmented matrix for this linear system, and row-reducing, gives

[ 1   0   0 |  1 ]
[ 0   1   0 |  2 ]
[ 0   0   1 | −1 ]

This system has a unique solution,

α1 = 1   α2 = 2   α3 = −1

telling us that

(1)v1 + (2)v2 + (−1)v3 = x

so we are convinced that x really is in ⟨R⟩. Notice that in this case we again have only one way to answer the question affirmatively since the solution is again unique.

We could continue to test other vectors for membership in ⟨R⟩, but there is no point. A question about membership in ⟨R⟩ inevitably leads to a system of three equations in the three variables α1, α2, α3 with a coefficient matrix whose columns are the vectors v1, v2, v3. This particular coefficient matrix is nonsingular, so by Theorem NMUS, the system is guaranteed to have a solution. (This solution is unique, but that’s not critical here.) So no matter which vector we might have chosen for z, we would have been certain to discover that it was an element of ⟨R⟩. Stated differently, every vector of size 3 is in ⟨R⟩, or ⟨R⟩ = ℂ³.
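The argument above can be automated: for three vectors of size 3, nonsingularity of the matrix having them as columns can be detected with a determinant, and a nonzero determinant guarantees that every membership system is consistent. A sketch, where the matrix entries assume the sign conventions of Archetype B’s coefficient matrix as restored here:

```python
def det3(M):
    """Determinant of a 3x3 matrix given as a list of rows."""
    a, b, c = M[0]
    d, e, f = M[1]
    g, h, i = M[2]
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

# coefficient matrix of Archetype B (signs restored; an assumption of this sketch)
B = [[-7, -6, -12],
     [ 5,  5,   7],
     [ 1,  0,   4]]

print(det3(B))   # -2: nonzero, so the span of the columns is all of C^3
```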

Compare this example with Example SCAA, and see if you can connect z with some aspects of the write-up for Archetype B.

Subsection SSNS: Spanning Sets of Null Spaces

We saw in Example VFSAL that when a system of equations is homogeneous the solution set can be expressed in the form described by Theorem VFSLS where the vector c is the zero vector. We can essentially ignore this vector, so that the remainder of the typical expression for a solution looks like an arbitrary linear combination, where the scalars are the free variables and the vectors are u1, u2, u3, …, u_{n−r}, which sounds a lot like a span. This is the substance of the next theorem.

Theorem SSNS
Spanning Sets for Null Spaces
Suppose that A is an m × n matrix, and B is a row-equivalent matrix in reduced row-echelon form with r nonzero rows. Let D = {d1, d2, d3, …, dr} be the column indices where B has leading 1’s (pivot columns) and F = {f1, f2, f3, …, f_{n−r}} be the set of column indices where B does not have leading 1’s. Construct the n − r vectors zj, 1 ≤ j ≤ n − r, of size n, entry by entry, as

[zj]_i = 1            if i ∈ F, i = fj
       = 0            if i ∈ F, i ≠ fj
       = −[B]_{k,fj}  if i ∈ D, i = dk

Then the null space of A is given by

N(A) = ⟨{z1, z2, z3, …, z_{n−r}}⟩.

Proof   Consider the homogeneous system with A as a coefficient matrix, LS(A, 0). Its set of solutions, S, is, by Definition NSM, the null space of A, N(A). Let B′ denote the result of row-reducing the augmented matrix of this homogeneous system. Since the system is homogeneous, the final column of the augmented matrix will be all zeros, and after any number of row operations (Definition RO), the column will still be all zeros. So B′ has a final column that is totally zeros.

Now apply Theorem VFSLS to B′, after noting that our homogeneous system must be consistent (Theorem HSC). The vector c has zeros for each entry that corresponds to an index in F. For entries that correspond to an index in D, the value is [B′]_{k,n+1}, but for B′ any entry in the final column (index n + 1) is zero. So c = 0. The vectors zj, 1 ≤ j ≤ n − r, are identical to the vectors uj, 1 ≤ j ≤ n − r, described in Theorem VFSLS. Putting it all together and applying Definition SSCV in the final step,

N(A) = S
     = { c + α1u1 + α2u2 + α3u3 + ⋯ + α_{n−r}u_{n−r} | α1, α2, α3, …, α_{n−r} ∈ ℂ }
     = { α1u1 + α2u2 + α3u3 + ⋯ + α_{n−r}u_{n−r} | α1, α2, α3, …, α_{n−r} ∈ ℂ }
     = ⟨{z1, z2, z3, …, z_{n−r}}⟩  ∎

Example SSNS
Spanning set of a null space
Find a set of vectors, S, so that the null space of the matrix A below is the span of S, that is, ⟨S⟩ = N(A).

A =
[ 1   3   3  −1  −5 ]
[ 2   5   7   1   1 ]
[ 1   1   5   1   5 ]
[−1  −4  −2   0   4 ]

The null space of A is the set of all solutions to the homogeneous system LS(A, 0). If we find the vector form of the solutions to this homogeneous system (Theorem VFSLS), then the vectors uj, 1 ≤ j ≤ n − r, in the linear combination are exactly the vectors zj, 1 ≤ j ≤ n − r, described in Theorem SSNS. So we can mimic Example VFSAL to arrive at these vectors (rather than being a slave to the formulas in the statement of the theorem).

Begin by row-reducing A. The result is

[ 1   0   6   0   4 ]
[ 0   1  −1   0  −2 ]
[ 0   0   0   1   3 ]
[ 0   0   0   0   0 ]

With D = {1, 2, 4} and F = {3, 5} we recognize that x3 and x5 are free variables, and we can use each nonzero row to express the dependent variables x1, x2, x4 (respectively) in terms of the free variables x3 and x5. With this we can write the vector form of a solution vector as

(x1, x2, x3, x4, x5) = (−6x3 − 4x5, x3 + 2x5, x3, −3x5, x5) = x3(−6, 1, 1, 0, 0) + x5(−4, 2, 0, −3, 1)

Then in the notation of Theorem SSNS,

z1 = (−6, 1, 1, 0, 0)   z2 = (−4, 2, 0, −3, 1)

and

N(A) = ⟨{z1, z2}⟩ = ⟨{ (−6, 1, 1, 0, 0), (−4, 2, 0, −3, 1) }⟩

Example NSDS
Null space directly as a span
Let’s express the null space of A as the span of a set of vectors, applying Theorem SSNS as economically as possible, without reference to the underlying homogeneous system of equations (in contrast to Example SSNS).

A =
[ 2   1   5   1   5   1 ]
[ 1   1   3   1   6  −1 ]
[−1   1  −1   0   4  −3 ]
[−3   2  −4  −4  −7   0 ]
[ 3  −1   5   2   2   3 ]

Theorem SSNS creates vectors for the span by first row-reducing the matrix in question. The row-reduced version of A is

B =
[ 1   0   2   0  −1   2 ]
[ 0   1   1   0   3  −1 ]
[ 0   0   0   1   4  −2 ]
[ 0   0   0   0   0   0 ]
[ 0   0   0   0   0   0 ]

We will mechanically follow the prescription of Theorem SSNS. Here we go, in two big steps.

First, the non-pivot columns have indices F = {3, 5, 6}, so we will construct the n − r = 6 − 3 = 3 vectors with a pattern of zeros and ones corresponding to the indices in F. This is the realization of the first two lines of the three-case definition of the vectors zj, 1 ≤ j ≤ n − r.

z1 = (∗, ∗, 1, ∗, 0, 0)   z2 = (∗, ∗, 0, ∗, 1, 0)   z3 = (∗, ∗, 0, ∗, 0, 1)

(the entries marked ∗ are yet to be filled in)

Each of these vectors arises due to the presence of a column that is not a pivot column. The remaining entries of each vector are the entries of the corresponding non-pivot column, negated, and distributed into the empty slots in order (these slots have indices in the set D and correspond to pivot columns). This is the realization of the third line of the three-case definition of the vectors zj, 1 j n r.

z1 = (−2, −1, 1, 0, 0, 0)   z2 = (1, −3, 0, −4, 1, 0)   z3 = (−2, 1, 0, 2, 0, 1)

So, by Theorem SSNS, we have

N(A) = ⟨{z1, z2, z3}⟩ = ⟨{ (−2, −1, 1, 0, 0, 0), (1, −3, 0, −4, 1, 0), (−2, 1, 0, 2, 0, 1) }⟩

We know that the null space of A is the solution set of the homogeneous system LS(A, 0), but nowhere in this application of Theorem SSNS have we found occasion to reference the variables or equations of this system. These details are all buried in the proof of Theorem SSNS.
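The mechanical prescription just followed is easy to express as a program. Here is a sketch (the function name is our own) that builds the vectors zj of Theorem SSNS from a matrix already in reduced row-echelon form, illustrated with the RREF from Example SSNS:

```python
def ssns_vectors(B):
    """Return the spanning vectors z_j of the null space, given B in RREF."""
    n = len(B[0])
    # pivot columns: position of the first nonzero entry of each nonzero row
    D = [next(j for j in range(n) if row[j] != 0) for row in B if any(row)]
    F = [j for j in range(n) if j not in D]
    zs = []
    for fj in F:
        z = [0] * n
        z[fj] = 1                       # free variable f_j set to one
        for k, dk in enumerate(D):      # dependent entries: negated B[k][f_j]
            z[dk] = -B[k][fj]
        zs.append(z)
    return zs

# the RREF computed in Example SSNS
B = [[1, 0, 6, 0, 4],
     [0, 1, -1, 0, -2],
     [0, 0, 0, 1, 3],
     [0, 0, 0, 0, 0]]

for z in ssns_vectors(B):
    print(z)   # [-6, 1, 1, 0, 0] and [-4, 2, 0, -3, 1]
```

Indices here are 0-based, where the text’s D and F are 1-based, but the construction is the same three-case definition.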

More advanced computational devices will compute the null space of a matrix. See: Computation NS.MMA.

Here’s an example that will simultaneously exercise the span construction and Theorem SSNS, while also pointing the way to the next section.

Example SCAD
Span of the columns of Archetype D
Begin with the set of four vectors of size 3

T = {w1, w2, w3, w4} = { (2, −3, 1), (1, 4, 1), (7, −5, 4), (−7, −6, −5) }

and consider the infinite set W = ⟨T⟩. The vectors of T have been chosen as the four columns of the coefficient matrix in Archetype D. Check that the vector

z2 = (2, 3, 0, 1)

is a solution to the homogeneous system LS(D, 0) (it is the vector z2 provided by the description of the null space of the coefficient matrix D from Theorem SSNS). Applying Theorem SLSLC, we can write the linear combination,

2w1 + 3w2 + 0w3 + 1w4 = 0

which we can solve for w4,

w4 = (−2)w1 + (−3)w2.

This equation says that whenever we encounter the vector w4, we can replace it with a specific linear combination of the vectors w1 and w2. So using w4 in the set T, along with w1 and w2, is excessive. An example of what we mean here can be illustrated by the computation,

5w1 + (−4)w2 + 6w3 + (−3)w4
  = 5w1 + (−4)w2 + 6w3 + (−3)((−2)w1 + (−3)w2)
  = 5w1 + (−4)w2 + 6w3 + 6w1 + 9w2
  = 11w1 + 5w2 + 6w3

So what began as a linear combination of the vectors w1,w2,w3,w4 has been reduced to a linear combination of the vectors w1,w2,w3. A careful proof using our definition of set equality (Definition SE) would now allow us to conclude that this reduction is possible for any vector in W, so

W = ⟨{w1, w2, w3}⟩.

So the span of our set of vectors, W, has not changed, but we have described it by the span of a set of three vectors, rather than four. Furthermore, we can achieve yet another, similar, reduction.

Check that the vector

z1 = (−3, −1, 1, 0)

is a solution to the homogeneous system LS(D, 0) (it is the vector z1 provided by the description of the null space of the coefficient matrix D from Theorem SSNS). Applying Theorem SLSLC, we can write the linear combination,

(−3)w1 + (−1)w2 + 1w3 = 0

which we can solve for w3,

w3 = 3w1 + 1w2.

This equation says that whenever we encounter the vector w3, we can replace it with a specific linear combination of the vectors w1 and w2. So, as before, the vector w3 is not needed in the description of W, provided we have w1 and w2 available. In particular, a careful proof (such as is done in Example RSC5) would show that

W = ⟨{w1, w2}⟩.

So W began life as the span of a set of four vectors, and we have now shown (utilizing solutions to a homogeneous system) that W can also be described as the span of a set of just two vectors. Convince yourself that we cannot go any further. In other words, it is not possible to dismiss either w1 or w2 in a similar fashion and winnow the set down to just one vector.
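The bookkeeping in the reduction above is purely arithmetic on the scalars. A small sketch: taking w4 = (−2)w1 + (−3)w2, which follows from the homogeneous solution z2 = (2, 3, 0, 1), any linear combination of all four vectors collapses to one using only w1, w2, w3 (the function name is ours):

```python
def eliminate_w4(c1, c2, c3, c4):
    """Rewrite c1*w1 + c2*w2 + c3*w3 + c4*w4 as a combination of w1, w2, w3,
    using the relation w4 = (-2)*w1 + (-3)*w2."""
    return (c1 - 2 * c4, c2 - 3 * c4, c3)

print(eliminate_w4(5, -4, 6, -3))   # (11, 5, 6), matching the worked reduction
```

The same rewrite works for any scalars, which is the heart of the claim that the span is unchanged when w4 is dropped.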

What was it about the original set of four vectors that allowed us to declare certain vectors as surplus? And just which vectors were we able to dismiss? And why did we have to stop once we had two vectors remaining? The answers to these questions motivate “linear independence,” our next section and next definition, and so are worth considering carefully now.

It is possible to have your computational device crank out the vector form of the solution set to a linear system of equations.  See: Computation VFSS.MMA

Subsection READ: Reading Questions

  1. Let S be the set of three vectors below.
    S = 1 2 1 , 3 4 2 , 4 2 1

    Let W = ⟨S⟩ be the span of S. Is the vector 1 8 4 in W? Give an explanation of the reason for your answer.

  2. Use S and W from the previous question. Is the vector 6 5 1 in W? Give an explanation of the reason for your answer.
  3. For the matrix A below, find a set S so that ⟨S⟩ = N(A), where N(A) is the null space of A. (See Theorem SSNS.)
    A = 13 1 9 2 1 3 8 1 115

Subsection EXC: Exercises

C22 For each archetype that is a system of equations, consider the corresponding homogeneous system of equations. Write elements of the solution set to these homogeneous systems in vector form, as guaranteed by Theorem VFSLS. Then write the null space of the coefficient matrix of each system as the span of a set of vectors, as described in Theorem SSNS.
Archetype A
Archetype B
Archetype C
Archetype D/ Archetype E
Archetype F
Archetype G/ Archetype H
Archetype I
Archetype J

 
Contributed by Robert Beezer Solution [383]

C23 Archetype K and Archetype L are defined as matrices. Use Theorem SSNS directly to find a set S so that ⟨S⟩ is the null space of the matrix. Do not make any reference to the associated homogeneous system of equations in your solution.  
Contributed by Robert Beezer Solution [383]

C40 Suppose that S = { (2, −1, 3, 4), (3, 2, −2, 1) }. Let W = ⟨S⟩ and let x = (5, 8, −12, −5). Is x ∈ W? If so, provide an explicit linear combination that demonstrates this.  
Contributed by Robert Beezer Solution [383]

C41 Suppose that S = { (2, −1, 3, 4), (3, 2, −2, 1) }. Let W = ⟨S⟩ and let y = (5, 1, 3, 5). Is y ∈ W? If so, provide an explicit linear combination that demonstrates this.  
Contributed by Robert Beezer Solution [384]

C42 Suppose R = 2 1 3 4 0 , 1 1 2 2 1 , 3 1 0 3 2 . Is y = 1 1 84 3 an element of ⟨R⟩?  
Contributed by Robert Beezer Solution [386]

C43 Suppose R = 2 1 3 4 0 , 1 1 2 2 1 , 3 1 0 3 2 . Is z = 1 1 5 3 1 an element of ⟨R⟩?  
Contributed by Robert Beezer Solution [387]

C44 Suppose that S = 1 2 1 , 3 1 2 , 1 5 4 , 6 5 1 . Let W = ⟨S⟩ and let y = 5 3 0 . Is y ∈ W? If so, provide an explicit linear combination that demonstrates this.  
Contributed by Robert Beezer Solution [389]

C45 Suppose that S = 1 2 1 , 3 1 2 , 1 5 4 , 6 5 1 . Let W = ⟨S⟩ and let w = 2 1 3 . Is w ∈ W? If so, provide an explicit linear combination that demonstrates this.  
Contributed by Robert Beezer Solution [391]

C50 Let A be the matrix below.
(a) Find a set S so that N(A) = ⟨S⟩.
(b) If z = (3, −5, 1, 2), then show directly that z ∈ N(A).
(c) Write z as a linear combination of the vectors in S.

A =
[ 2   3   1   4 ]
[ 1   2   1   3 ]
[−1   0   1   1 ]

 
Contributed by Robert Beezer Solution [392]

C60 For the matrix A below, find a set of vectors S so that the span of S equals the null space of A, ⟨S⟩ = N(A).

A =
[ 1   1   6  −8 ]
[ 1  −2   0   1 ]
[−2   1  −6   7 ]

 
Contributed by Robert Beezer Solution [395]

M10 Consider the set of all size 2 vectors in the Cartesian plane ℝ².

  1. Give a geometric description of the span of a single vector.
  2. How can you tell if two vectors span the entire plane, without doing any row reduction or calculation?

 
Contributed by Chris Black Solution [396]

M11 Consider the set of all size 3 vectors in Cartesian 3-space ℝ³.

  1. Give a geometric description of the span of a single vector.
  2. Describe the possibilities for the span of two vectors.
  3. Describe the possibilities for the span of three vectors.

 
Contributed by Chris Black Solution [397]

M12 Let u = 1 3 2 and v = 2 2 1 .

  1. Find a vector w1, different from u and v, so that ⟨{u, v, w1}⟩ = ⟨{u, v}⟩.
  2. Find a vector w2 so that ⟨{u, v, w2}⟩ ≠ ⟨{u, v}⟩.

 
Contributed by Chris Black Solution [398]

M20 In Example SCAD we began with the four columns of the coefficient matrix of Archetype D, and used these columns in a span construction. Then we methodically argued that we could remove the last column, then the third column, and create the same set by just doing a span construction with the first two columns. We claimed we could not go any further, and had removed as many vectors as possible. Provide a convincing argument for why a third vector cannot be removed.  
Contributed by Robert Beezer

M21 In the spirit of Example SCAD, begin with the four columns of the coefficient matrix of Archetype C, and use these columns in a span construction to build the set S. Argue that S can be expressed as the span of just three of the columns of the coefficient matrix (saying exactly which three) and in the spirit of Exercise SS.M20 argue that no one of these three vectors can be removed and still have a span construction create S.  
Contributed by Robert Beezer Solution [399]

T10 Suppose that v1, v2 ∈ ℂᵐ. Prove that

⟨{v1, v2}⟩ = ⟨{v1, v2, 5v1 + 3v2}⟩

 
Contributed by Robert Beezer Solution [400]

T20 Suppose that S is a set of vectors from ℂᵐ. Prove that the zero vector, 0, is an element of ⟨S⟩.  
Contributed by Robert Beezer Solution [402]

T21 Suppose that S is a set of vectors from ℂᵐ and x, y ∈ ⟨S⟩. Prove that x + y ∈ ⟨S⟩.  
Contributed by Robert Beezer

T22 Suppose that S is a set of vectors from ℂᵐ, α ∈ ℂ, and x ∈ ⟨S⟩. Prove that αx ∈ ⟨S⟩.  
Contributed by Robert Beezer

Subsection SOL: Solutions

C22 Contributed by Robert Beezer Statement [377]
The vector form of the solutions obtained in this manner will involve precisely the vectors described in Theorem SSNS as providing the null space of the coefficient matrix of the system as a span. These vectors occur in each archetype in a description of the null space. Studying Example VFSAL may be of some help.

C23 Contributed by Robert Beezer Statement [377]
Study Example NSDS to understand the correct approach to this question. The solution for each is listed in the Archetypes (Appendix A) themselves.

C40 Contributed by Robert Beezer Statement [377]
Rephrasing the question, we want to know if there are scalars α1 and α2 such that

α1(2, −1, 3, 4) + α2(3, 2, −2, 1) = (5, 8, −12, −5)

Theorem SLSLC allows us to rephrase the question again as a quest for solutions to the system of four equations in two unknowns with an augmented matrix given by

[ 2   3 |   5 ]
[−1   2 |   8 ]
[ 3  −2 | −12 ]
[ 4   1 |  −5 ]

This matrix row-reduces to

[ 1   0 | −2 ]
[ 0   1 |  3 ]
[ 0   0 |  0 ]
[ 0   0 |  0 ]

From the form of this matrix, we can see that α1 = −2 and α2 = 3 is an affirmative answer to our question. More convincingly,

(−2)(2, −1, 3, 4) + (3)(3, 2, −2, 1) = (5, 8, −12, −5)

C41 Contributed by Robert Beezer Statement [377]
Rephrasing the question, we want to know if there are scalars α1 and α2 such that

α1(2, −1, 3, 4) + α2(3, 2, −2, 1) = (5, 1, 3, 5)

Theorem SLSLC allows us to rephrase the question again as a quest for solutions to the system of four equations in two unknowns with an augmented matrix given by

[ 2   3 | 5 ]
[−1   2 | 1 ]
[ 3  −2 | 3 ]
[ 4   1 | 5 ]

This matrix row-reduces to

[ 1   0 | 0 ]
[ 0   1 | 0 ]
[ 0   0 | 1 ]
[ 0   0 | 0 ]

With a leading 1 in the last column of this matrix (Theorem RCLS) we can see that the system of equations has no solution, so there are no values for α1 and α2 that will allow us to conclude that y is in W. So y ∉ W.

C42 Contributed by Robert Beezer Statement [378]
Form a linear combination, with unknown scalars, of R that equals y,

a1 2 1 3 4 0 +a2 1 1 2 2 1 +a3 3 1 0 3 2 = 1 1 84 3

We want to know if there are values for the scalars that make the vector equation true since that is the definition of membership in R. By Theorem SLSLC any such values will also be solutions to the linear system represented by the augmented matrix,

2 1 3 1 1 1 1 1 3 2 0 8 4 2 3 4 0 123

Row-reducing the matrix yields,

1002 0 0 1 0 01 2 0 0 0 0 0 00 0

From this we see that the system of equations is consistent (Theorem RCLS), and has a unique solution. This solution will provide a linear combination of the vectors in R that equals y. So y ∈ ⟨R⟩.

C43 Contributed by Robert Beezer Statement [378]
Form a linear combination, with unknown scalars, of R that equals z,

a1 2 1 3 4 0 +a2 1 1 2 2 1 +a3 3 1 0 3 2 = 1 1 5 3 1

We want to know if there are values for the scalars that make the vector equation true since that is the definition of membership in R. By Theorem SLSLC any such values will also be solutions to the linear system represented by the augmented matrix,

2 1 3 11 1 1 1 3 2 0 5 4 2 3 3 0 121

Row-reducing the matrix yields,

1000 0 0 0 0 010 0 0 0 1 0 000

With a leading 1 in the last column, the system is inconsistent (Theorem RCLS), so there are no scalars a1, a2, a3 that will create a linear combination of the vectors in R that equals z. So z ∉ ⟨R⟩.

C44 Contributed by Robert Beezer Statement [378]
Form a linear combination, with unknown scalars, of S that equals y,

a1 1 2 1 +a2 3 1 2 +a3 1 5 4 +a4 6 5 1 = 5 3 0

We want to know if there are values for the scalars that make the vector equation true since that is the definition of membership in S. By Theorem SLSLC any such values will also be solutions to the linear system represented by the augmented matrix,

13165 2 1 5 5 3 1 24 1 0

Row-reducing the matrix yields,

102 3 2 0 1 1 1 0 00 0 0

From this we see that the system of equations is consistent (Theorem RCLS), and has infinitely many solutions. Any solution will provide a linear combination of the vectors in S that equals y. So y ∈ ⟨S⟩; for example,

(10) 1 2 1 +(2) 3 1 2 +(3) 1 5 4 +(2) 6 5 1 = 5 3 0

C45 Contributed by Robert Beezer Statement [378]
Form a linear combination, with unknown scalars, of S that equals w,

a1 1 2 1 +a2 3 1 2 +a3 1 5 4 +a4 6 5 1 = 2 1 3

We want to know if there are values for the scalars that make the vector equation true since that is the definition of membership in S. By Theorem SLSLC any such values will also be solutions to the linear system represented by the augmented matrix,

13162 2 1 5 5 1 1 24 1 3

Row-reducing the matrix yields,

102 3 0 0 1 1 0 0 00 0 1

With a leading 1 in the last column, the system is inconsistent (Theorem RCLS), so there are no scalars a1, a2, a3, a4 that will create a linear combination of the vectors in S that equals w. So w ∉ ⟨S⟩.

C50 Contributed by Robert Beezer Statement [378]
(a) Theorem SSNS provides formulas for a set S with this property, but first we must row-reduce A

A  →(RREF)→
[ 1   0  −1  −1 ]
[ 0   1   1   2 ]
[ 0   0   0   0 ]

x3 and x4 would be the free variables in the homogeneous system LS(A, 0), and Theorem SSNS provides the set S = {z1, z2} where

z1 = (1, −1, 1, 0)   z2 = (1, −2, 0, 1)

(b) Simply employ the components of the vector z as the variables in the homogeneous system LS(A, 0). The three equations of this system evaluate as follows,

2(3) + 3(−5) + 1(1) + 4(2) = 0
1(3) + 2(−5) + 1(1) + 3(2) = 0
−1(3) + 0(−5) + 1(1) + 1(2) = 0

Since each result is zero, z qualifies for membership in N(A).
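Part (b) is a single matrix-vector product. A sketch in Python, with the entries of A and z read off from the three equations above (their signs are an assumption recovered from that arithmetic):

```python
def mat_vec(A, x):
    """Matrix-vector product: each row of A dotted with x."""
    return [sum(a * b for a, b in zip(row, x)) for row in A]

# entries as recovered from the evaluated equations in part (b)
A = [[ 2, 3, 1, 4],
     [ 1, 2, 1, 3],
     [-1, 0, 1, 1]]
z = [3, -5, 1, 2]

print(mat_vec(A, z))   # [0, 0, 0], so z is in the null space of A
```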

(c) By Theorem SSNS we know this must be possible (that is the moral of this exercise). Find scalars α1 and α2 so that

α1z1 + α2z2 = α1(1, −1, 1, 0) + α2(1, −2, 0, 1) = (3, −5, 1, 2) = z

Theorem SLSLC allows us to convert this question into a question about a system of four equations in two variables. The augmented matrix of this system row-reduces to

[ 1   0 | 1 ]
[ 0   1 | 2 ]
[ 0   0 | 0 ]
[ 0   0 | 0 ]

A solution is α1 = 1 and α2 = 2. (Notice too that this solution is unique!)

C60 Contributed by Robert Beezer Statement [379]
Theorem SSNS says that if we find the vector form of the solutions to the homogeneous system LS(A, 0), then the fixed vectors (one per free variable) will have the desired property. Row-reduce A, viewing it as the augmented matrix of a homogeneous system with an invisible column of zeros as the last column,

[ 1   0   4  −5 ]
[ 0   1   2  −3 ]
[ 0   0   0   0 ]

Moving to the vector form of the solutions (Theorem VFSLS), with free variables x3 and x4, solutions to the consistent system (it is homogeneous, Theorem HSC) can be expressed as

(x1, x2, x3, x4) = x3(−4, −2, 1, 0) + x4(5, 3, 0, 1)

Then with S given by

S = { (−4, −2, 1, 0), (5, 3, 0, 1) }

Theorem SSNS guarantees that

N(A) = ⟨S⟩ = ⟨{ (−4, −2, 1, 0), (5, 3, 0, 1) }⟩

M10 Contributed by Chris Black Statement [380]

  1. The span of a single vector v is the set of all linear combinations of that vector. Thus, ⟨{v}⟩ = {αv | α ∈ ℝ}. This is the line through the origin containing the (geometric) vector v. Thus, if v = (v1, v2), then the span of v is the line through (0, 0) and (v1, v2).
  2. Two vectors will span the entire plane if they point in different directions, meaning that u does not lie on the line through v and vice versa. That is, for vectors u and v in ℝ², ⟨{u, v}⟩ = ℝ² if u is not a multiple of v.
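The “not a multiple” criterion in part 2 has a one-line computational form: u and v span ℝ² exactly when the 2 × 2 determinant u1v2 − u2v1 is nonzero. A sketch with hypothetical vectors of our own choosing:

```python
def spans_plane(u, v):
    """True if u and v span R^2 (neither is a multiple of the other)."""
    return u[0] * v[1] - u[1] * v[0] != 0

print(spans_plane([1, 2], [3, 1]))   # True
print(spans_plane([1, 2], [2, 4]))   # False: [2, 4] = 2 * [1, 2]
```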

M11 Contributed by Chris Black Statement [380]

  1. The span of a single vector v is the set of all linear combinations of that vector. Thus, ⟨{v}⟩ = {αv | α ∈ ℝ}. This is the line through the origin containing the (geometric) vector v. Thus, if v = (v1, v2, v3), then the span of v is the line through (0, 0, 0) and (v1, v2, v3).
  2. If the two vectors point in the same direction, then their span is the line through them. Recall that while two points determine a line, three points determine a plane. Two vectors will span a plane if they point in different directions, meaning that u does not lie on the line through v and vice versa. The plane spanned by u = (u1, u2, u3) and v = (v1, v2, v3) is determined by the origin and the points (u1, u2, u3) and (v1, v2, v3).
  3. If all three vectors lie on the same line, then the span is that line. If one is a linear combination of the other two, but they are not all on the same line, then they will lie in a plane. Otherwise, the span of the set of three vectors will be all of 3-space.

M12 Contributed by Chris Black Statement [380]

  1. If we can find a vector w1 that is a linear combination of u and v, then ⟨{u, v, w1}⟩ will be the same set as ⟨{u, v}⟩. Thus, w1 can be any linear combination of u and v. One such example is w1 = 3u − v = 1 11 7 .
  2. Now we are looking for a vector w2 that cannot be written as a linear combination of u and v. How can we find such a vector? Any vector that matches two components but not the third of any element of ⟨{u, v}⟩ will not be in the span (why?). One such example is w2 = 4 4 1 (which is nearly 2v, but not quite).

M21 Contributed by Robert Beezer Statement [381]
If the columns of the coefficient matrix from Archetype C are named u1,u2,u3,u4 then we can discover the equation

(−2)u1 + (−3)u2 + u3 + u4 = 0

by building a homogeneous system of equations and viewing a solution to the system as scalars in a linear combination via Theorem SLSLC. This particular vector equation can be rearranged to read

u4 = (2)u1 + (3)u2 + (−1)u3

This can be interpreted to mean that u4 is unnecessary in ⟨{u1, u2, u3, u4}⟩, so that

⟨{u1, u2, u3, u4}⟩ = ⟨{u1, u2, u3}⟩

If we try to repeat this process and find a linear combination of u1,u2,u3 that equals the zero vector, we will fail. The required homogeneous system of equations (via Theorem SLSLC) has only a trivial solution, which will not provide the kind of equation we need to remove one of the three remaining vectors.

T10 Contributed by Robert Beezer Statement [381]
This is an equality of sets, so Definition SE applies.

First show that X = ⟨{v1, v2}⟩ ⊆ ⟨{v1, v2, 5v1 + 3v2}⟩ = Y.
Choose x X. Then x = a1v1 + a2v2 for some scalars a1 and a2. Then,

x = a1v1 + a2v2 = a1v1 + a2v2 + 0(5v1 + 3v2)

which qualifies x for membership in Y , as it is a linear combination of v1,v2,5v1 + 3v2.

Now show the opposite inclusion, Y = ⟨{v1, v2, 5v1 + 3v2}⟩ ⊆ ⟨{v1, v2}⟩ = X.
Choose y Y . Then there are scalars a1,a2,a3 such that

y = a1v1 + a2v2 + a3(5v1 + 3v2)

Rearranging, we obtain,

y = a1v1 + a2v2 + a3(5v1 + 3v2)
  = a1v1 + a2v2 + 5a3v1 + 3a3v2     (Property DVAC)
  = (a1v1 + 5a3v1) + (a2v2 + 3a3v2) (Property CC)
  = (a1 + 5a3)v1 + (a2 + 3a3)v2     (Property DSAC)

This is an expression for y as a linear combination of v1 and v2, earning y membership in X. Since X is a subset of Y , and vice versa, we see that X = Y , as desired.

T20 Contributed by Robert Beezer Statement [382]
No matter what the elements of the set S are, we can choose the scalars in a linear combination to all be zero. Suppose that S = {v1, v2, v3, …, vp}. Then compute

0v1 + 0v2 + 0v3 + + 0vp = 0 + 0 + 0 + + 0 = 0

But what if we choose S to be the empty set? The convention is that the empty sum in Definition SSCV evaluates to “zero,” which in this case is the zero vector.