From A First Course in Linear Algebra
Version 2.11
© 2004.
Licensed under the GNU Free Documentation License.
http://linear.ups.edu/
In any linearly dependent set there is always one vector that can be written as a
linear combination of the others. This is the substance of the upcoming
Theorem DLDS. Perhaps this will explain the use of the word “dependent.” In a
linearly dependent set, at least one vector “depends” on the others (via a linear
combination).
Indeed, because Theorem DLDS is an equivalence (Technique E) some authors use this condition as a definition (Technique D) of linear dependence. Then linear independence is defined as the logical opposite of linear dependence. Of course, we have chosen to take Definition LICV as our definition, and then follow with Theorem DLDS as a theorem.
If we use a linearly dependent set to construct a span, then we can always create the same infinite set with a starting set that is one vector smaller in size. We will illustrate this behavior in Example RSC5. However, this will not be possible if we build a span from a linearly independent set. So in a certain sense, using a linearly independent set to formulate a span is the best possible way — there aren’t any extra vectors being used to build up all the necessary linear combinations. OK, here’s the theorem, and then the example.
Theorem DLDS
Dependency in Linearly Dependent Sets
Suppose that S = \left \{{u}_{1},\kern 1.95872pt {u}_{2},\kern 1.95872pt {u}_{3},\kern 1.95872pt \mathop{\mathop{…}},\kern 1.95872pt {u}_{n}\right \} is a set of vectors. Then S is a linearly dependent set if and only if there is an index t, 1 ≤ t ≤ n, such that {u}_{t} is a linear combination of the vectors {u}_{1},\kern 1.95872pt {u}_{2},\kern 1.95872pt {u}_{3},\kern 1.95872pt \mathop{\mathop{…}},\kern 1.95872pt {u}_{t−1},\kern 1.95872pt {u}_{t+1},\kern 1.95872pt \mathop{\mathop{…}},\kern 1.95872pt {u}_{n}.
Proof   (⇒) Suppose that S is linearly dependent, so there is a nontrivial relation of linear dependence on S (Definition LICV); that is, there are scalars {α}_{1},\kern 1.95872pt {α}_{2},\kern 1.95872pt \mathop{\mathop{…}},\kern 1.95872pt {α}_{n}, not all zero, such that

{α}_{1}{u}_{1} + {α}_{2}{u}_{2} + {α}_{3}{u}_{3} + \mathop{\mathop{…}} + {α}_{n}{u}_{n} = 0

Since the {α}_{i} cannot all be zero, choose an index t with {α}_{t}≠0. Isolate the term {α}_{t}{u}_{t} on one side of the equality and multiply by −\frac{1}{{α}_{t}}. Then we have

{u}_{t} = \left (−\frac{{α}_{1}}{{α}_{t}}\right ){u}_{1} + \left (−\frac{{α}_{2}}{{α}_{t}}\right ){u}_{2} + \mathop{\mathop{…}} + \left (−\frac{{α}_{t−1}}{{α}_{t}}\right ){u}_{t−1} + \left (−\frac{{α}_{t+1}}{{α}_{t}}\right ){u}_{t+1} + \mathop{\mathop{…}} + \left (−\frac{{α}_{n}}{{α}_{t}}\right ){u}_{n}

which expresses {u}_{t} as a linear combination of the other vectors of S.

(⇐) Now assume that {u}_{t} is a linear combination of the other vectors of S,

{u}_{t} = {β}_{1}{u}_{1} + {β}_{2}{u}_{2} + \mathop{\mathop{…}} + {β}_{t−1}{u}_{t−1} + {β}_{t+1}{u}_{t+1} + \mathop{\mathop{…}} + {β}_{n}{u}_{n}

Subtracting {u}_{t} from both sides yields a relation of linear dependence on S. So the scalars {β}_{1},\kern 1.95872pt {β}_{2},\kern 1.95872pt {β}_{3},\kern 1.95872pt \mathop{\mathop{…}},\kern 1.95872pt {β}_{t−1},\kern 1.95872pt {β}_{t} = −1,\kern 1.95872pt {β}_{t+1},\kern 1.95872pt \mathop{\mathop{…}},\kern 1.95872pt {β}_{n} are not all zero (at the very least {β}_{t}≠0), so this is a nontrivial relation of linear dependence and S is a linearly dependent set (Definition LICV). ■
This theorem can be used, sometimes repeatedly, to whittle down the size of a set of vectors used in a span construction. We have seen some of this already in Example SCAD, but in the next example we will detail some of the subtleties.
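The computation behind Theorem DLDS is easy to carry out with software. The sketch below is not part of the text; it assumes Python with the sympy library and uses a small made-up set of vectors. It finds a nontrivial relation of linear dependence from the null space of the matrix whose columns are the vectors, and then solves the relation for a vector whose scalar is nonzero.

```python
# Sketch only: made-up vectors, assuming sympy is available.
import sympy as sp

# u3 = u1 + 2*u2 by construction, so {u1, u2, u3} is linearly dependent
u1 = sp.Matrix([1, 0, 2])
u2 = sp.Matrix([0, 1, 1])
u3 = sp.Matrix([1, 2, 4])

A = sp.Matrix.hstack(u1, u2, u3)      # columns are the vectors of the set
alpha = A.nullspace()[0]              # a nontrivial solution of LS(A, 0)
print(alpha.T)                        # [-1, -2, 1]: (-1)u1 + (-2)u2 + 1*u3 = 0

# The scalar on u3 is nonzero, so Theorem DLDS lets us solve for u3
u3_from_others = -(alpha[0]*u1 + alpha[1]*u2) / alpha[2]
assert u3_from_others == u3
```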
Example RSC5
Reducing a span in {ℂ}^{5}
Consider the set of n = 4 vectors from {ℂ}^{5},

R = \left \{{v}_{1},\kern 1.95872pt {v}_{2},\kern 1.95872pt {v}_{3},\kern 1.95872pt {v}_{4}\right \}

(the four specific vectors are not reproduced here) and define V = \left \langle R\right \rangle .

To employ Theorem LIVHS, we form a 5 × 4 coefficient matrix D whose columns are the four vectors of R, and row-reduce to understand solutions to the homogeneous system ℒS\kern -1.95872pt \left (D,\kern 1.95872pt 0\right ). Row-reducing D reveals only three pivot columns, so there is a free variable. We can therefore find infinitely many solutions to this system, most of them nontrivial, and we choose any one we like to build a relation of linear dependence on R, say

{c}_{1}{v}_{1} + {c}_{2}{v}_{2} + {c}_{3}{v}_{3} + {c}_{4}{v}_{4} = 0

with the scalars {c}_{i} not all zero.

Theorem DLDS guarantees that we can solve this relation of linear dependence for some vector in R, but the choice of which one is up to us. Notice, however, that {v}_{2} has a zero scalar in this relation; indeed, no relation of linear dependence on R has a nonzero scalar on {v}_{2}, so we cannot solve for {v}_{2}. OK, if we are convinced that we cannot solve for {v}_{2}, let us instead solve for {v}_{3}, writing it as a linear combination of {v}_{1}, {v}_{2} and {v}_{4}.

We now claim that this particular equation will allow us to write

V = \left \langle R\right \rangle = \left \langle \left \{{v}_{1},\kern 1.95872pt {v}_{2},\kern 1.95872pt {v}_{4}\right \}\right \rangle

in essence declaring {v}_{3} as surplus for the task of building V as a span. This claim is an equality of two sets, so set {R}^{′} = \left \{{v}_{1},\kern 1.95872pt {v}_{2},\kern 1.95872pt {v}_{4}\right \} and {V }^{′} = \left \langle {R}^{′}\right \rangle , and use Definition SE.

First show that {V }^{′} ⊆ V . Every vector of {R}^{′} is also a vector of R, so a linear combination of the vectors of {R}^{′} is also a linear combination of the vectors of R (just use a zero scalar on {v}_{3}). Thus {V }^{′} ⊆ V .

Next show that V ⊆ {V }^{′}. Choose any y ∈ V and write it as a linear combination of the vectors of R with scalars {α}_{1},\kern 1.95872pt {α}_{2},\kern 1.95872pt {α}_{3},\kern 1.95872pt {α}_{4}. In this expression, replace {v}_{3} by its expression as a linear combination of {v}_{1}, {v}_{2} and {v}_{4}, and collect terms. This equation says that y is a linear combination of {v}_{1}, {v}_{2} and {v}_{4}, so y ∈ {V }^{′}. Thus V ⊆ {V }^{′}, and by Definition SE, V = {V }^{′}.

If R had been a linearly independent set, there would have been no nontrivial relation of linear dependence to exploit, and no vector of R could have been discarded in this way. ⊠
In Example RSC5 we used four vectors to create a span. With a relation of linear dependence in hand, we were able to “toss out” one of these four vectors and create the same span from a subset of just three vectors from the original set of four. We did have to take some care as to just which vector we tossed out. In the next example, we will be more methodical about just how we choose to eliminate vectors from a linearly dependent set while preserving a span.
Example COV
Casting out vectors
We begin with a set S of seven vectors from {ℂ}^{4},

S = \left \{{A}_{1},\kern 1.95872pt {A}_{2},\kern 1.95872pt {A}_{3},\kern 1.95872pt {A}_{4},\kern 1.95872pt {A}_{5},\kern 1.95872pt {A}_{6},\kern 1.95872pt {A}_{7}\right \}

(the seven specific vectors are not reproduced here) and define W = \left \langle S\right \rangle .

The set S is linearly dependent by Theorem MVSLD, since we have n = 7 vectors from {ℂ}^{4}. So we can slim S down and still create W as the span of a smaller set of vectors. As a device for identifying relations of linear dependence among the vectors of S, place the seven vectors into a matrix as columns,

A = \left [{A}_{1}|{A}_{2}|{A}_{3}|{A}_{4}|{A}_{5}|{A}_{6}|{A}_{7}\right ]

By Theorem SLSLC a nontrivial solution to ℒS\kern -1.95872pt \left (A,\kern 1.95872pt 0\right ) will give us a nontrivial relation of linear dependence on the columns of A, which are exactly the vectors of S. Row-reducing A produces a matrix B in reduced row-echelon form whose pivot columns are columns 1, 3 and 4, so we can easily create solutions to the homogeneous system ℒS\kern -1.95872pt \left (A,\kern 1.95872pt 0\right ) using the free variables {x}_{2},\kern 1.95872pt {x}_{5},\kern 1.95872pt {x}_{6},\kern 1.95872pt {x}_{7}. Each such solution provides a relation of linear dependence on the vectors of S, and in the spirit of Theorem DLDS lets us solve for one vector of S as a linear combination of the others. We will work through the free variables methodically, one at a time.

Set the free variable {x}_{2} to one, and set the other free variables to zero. The resulting solution to ℒS\kern -1.95872pt \left (A,\kern 1.95872pt 0\right ) can be used to create a linear combination of the vectors of S that equals the zero vector. This can then be arranged and solved for {A}_{2}, expressing {A}_{2} as a linear combination of {A}_{1},\kern 1.95872pt {A}_{3},\kern 1.95872pt {A}_{4}. This means that {A}_{2} is surplus, and we can create W just as well with a smaller set with this vector removed,

W = \left \langle \left \{{A}_{1},\kern 1.95872pt {A}_{3},\kern 1.95872pt {A}_{4},\kern 1.95872pt {A}_{5},\kern 1.95872pt {A}_{6},\kern 1.95872pt {A}_{7}\right \}\right \rangle

Technically, this set equality for W requires a proof, in the spirit of Example RSC5, but we will bypass such a proof here, and in the next few paragraphs.

Now, set the free variable {x}_{5} to one, and set the other free variables to zero. The resulting solution to ℒS\kern -1.95872pt \left (B,\kern 1.95872pt 0\right ) can be used to create a linear combination of the vectors of S that equals the zero vector. This can then be arranged and solved for {A}_{5}, expressing {A}_{5} as a linear combination of {A}_{1},\kern 1.95872pt {A}_{3},\kern 1.95872pt {A}_{4}. This means that {A}_{5} is surplus, and we can create W just as well with a smaller set with this vector removed,

W = \left \langle \left \{{A}_{1},\kern 1.95872pt {A}_{3},\kern 1.95872pt {A}_{4},\kern 1.95872pt {A}_{6},\kern 1.95872pt {A}_{7}\right \}\right \rangle

Do it again: set the free variable {x}_{6} to one, and set the other free variables to zero. The resulting solution to ℒS\kern -1.95872pt \left (B,\kern 1.95872pt 0\right ) gives a linear combination that can be arranged and solved for {A}_{6}, expressing {A}_{6} as a linear combination of {A}_{1},\kern 1.95872pt {A}_{3},\kern 1.95872pt {A}_{4}. This means that {A}_{6} is surplus, and we can create W just as well with a smaller set with this vector removed,

W = \left \langle \left \{{A}_{1},\kern 1.95872pt {A}_{3},\kern 1.95872pt {A}_{4},\kern 1.95872pt {A}_{7}\right \}\right \rangle

Set the free variable {x}_{7} to one, and set the other free variables to zero. The resulting solution to ℒS\kern -1.95872pt \left (B,\kern 1.95872pt 0\right ) gives a linear combination that can be arranged and solved for {A}_{7}, expressing {A}_{7} as a linear combination of {A}_{1},\kern 1.95872pt {A}_{3},\kern 1.95872pt {A}_{4}. This means that {A}_{7} is surplus, and we can create W just as well with a smaller set with this vector removed,

W = \left \langle \left \{{A}_{1},\kern 1.95872pt {A}_{3},\kern 1.95872pt {A}_{4}\right \}\right \rangle

You might think we could keep this up, but we have run out of free variables. And not coincidentally, the set \left \{{A}_{1},\kern 1.95872pt {A}_{3},\kern 1.95872pt {A}_{4}\right \} is linearly independent.

For extra credit, notice that the vector b of constants in the definition of Archetype I makes the system ℒS\kern -1.95872pt \left (A,\kern 1.95872pt b\right ) consistent, since A is the coefficient matrix of Archetype I. By Theorem SLSLC, b is then a linear combination of the columns of A, that is, b ∈ W. Since W is also the span of the much smaller set \left \{{A}_{1},\kern 1.95872pt {A}_{3},\kern 1.95872pt {A}_{4}\right \}, the vector b must be a linear combination of just {A}_{1},\kern 1.95872pt {A}_{3} and {A}_{4}. Can you find such a linear combination? Since \left \{{A}_{1},\kern 1.95872pt {A}_{3},\kern 1.95872pt {A}_{4}\right \} is linearly independent, there is exactly one way to do it. ⊠
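The casting-out procedure in Example COV is mechanical enough to express as a short loop. The sketch below is not taken from the text: it assumes Python with the sympy library and uses made-up vectors rather than the columns of Archetype I. After one row-reduction, each non-pivot column is confirmed to be a linear combination of the vectors kept from the pivot columns, exactly the reason it can be tossed out without shrinking the span.

```python
# Sketch only: made-up vectors, assuming sympy is available.
import sympy as sp

cols = [sp.Matrix(c) for c in ([1, 2, 1], [2, 4, 2], [0, 1, 1], [3, 7, 4], [1, 0, 2])]
A = sp.Matrix.hstack(*cols)
B, pivots = A.rref()                  # pivots holds the indices d_1, d_2, ... (0-based)

kept = list(pivots)
for j in range(A.cols):
    if j in pivots:
        continue
    # column j of B holds the scalars expressing cols[j] in terms of the kept vectors
    combo = sum((B[i, j] * cols[d] for i, d in enumerate(kept)), sp.zeros(3, 1))
    assert combo == cols[j]           # cols[j] is surplus and can be cast out
print("kept (0-based) column indices:", kept)   # [0, 2, 4]
```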
Example COV deserves your careful attention, since this important example motivates the following very fundamental theorem.
Theorem BS
Basis of a Span
Suppose that S = \left \{{v}_{1},\kern 1.95872pt {v}_{2},\kern 1.95872pt {v}_{3},\kern 1.95872pt \mathop{\mathop{…}},\kern 1.95872pt {v}_{n}\right \} is a set of column vectors. Define W = \left \langle S\right \rangle and let A be the matrix whose columns are the vectors from S. Let B be the reduced row-echelon form of A, with D = \left \{{d}_{1},\kern 1.95872pt {d}_{2},\kern 1.95872pt {d}_{3},\kern 1.95872pt \mathop{\mathop{…}},\kern 1.95872pt {d}_{r}\right \} the set of indices for the pivot columns of B. Then
(1) T = \left \{{v}_{{d}_{1}},\kern 1.95872pt {v}_{{d}_{2}},\kern 1.95872pt {v}_{{d}_{3}},\kern 1.95872pt \mathop{\mathop{…}},\kern 1.95872pt {v}_{{d}_{r}}\right \} is a linearly independent set.
(2) W = \left \langle T\right \rangle .
Proof   To prove that T is linearly independent, begin with a relation of linear dependence on T,

0 = {α}_{1}{v}_{{d}_{1}} + {α}_{2}{v}_{{d}_{2}} + {α}_{3}{v}_{{d}_{3}} + \mathop{\mathop{…}} + {α}_{r}{v}_{{d}_{r}}

and we will try to conclude that the only possibility for the scalars is {α}_{1} = {α}_{2} = \mathop{\mathop{…}} = {α}_{r} = 0. Denote the indices of the non-pivot columns of B by F = \left \{{f}_{1},\kern 1.95872pt {f}_{2},\kern 1.95872pt {f}_{3},\kern 1.95872pt \mathop{\mathop{…}},\kern 1.95872pt {f}_{n−r}\right \}. We can preserve the equality above by adding in the remaining vectors of S, each with a zero scalar,

0 = {α}_{1}{v}_{{d}_{1}} + {α}_{2}{v}_{{d}_{2}} + \mathop{\mathop{…}} + {α}_{r}{v}_{{d}_{r}} + 0{v}_{{f}_{1}} + 0{v}_{{f}_{2}} + \mathop{\mathop{…}} + 0{v}_{{f}_{n−r}}

By Theorem SLSLC, the scalars in this linear combination (suitably reordered) are a solution to the homogeneous system ℒS\kern -1.95872pt \left (A,\kern 1.95872pt 0\right ). But notice that this is a solution in which every free variable is zero, and by Theorem VFSLS the only such solution is the trivial solution. So {α}_{i} = 0, 1 ≤ i ≤ r, and by Definition LICV the set T is linearly independent.
The second conclusion of this theorem is an equality of sets (Definition SE). Since T is a subset of S, any linear combination of the vectors in T is also a linear combination of the vectors in S, so \left \langle T\right \rangle ⊆\left \langle S\right \rangle = W. It remains to prove that W = \left \langle S\right \rangle ⊆\left \langle T\right \rangle .

For each k, 1 ≤ k ≤ n − r, form a solution x to ℒS\kern -1.95872pt \left (A,\kern 1.95872pt 0\right ) by setting the free variables as follows:

{x}_{{f}_{1}} = 0\quad {x}_{{f}_{2}} = 0\quad \mathop{\mathop{…}}\quad {x}_{{f}_{k}} = 1\quad \mathop{\mathop{…}}\quad {x}_{{f}_{n−r}} = 0

By Theorem VFSLS, the remainder of this solution vector is given by,

{x}_{{d}_{1}} = −{\left [B\right ]}_{1,{f}_{k}}\quad {x}_{{d}_{2}} = −{\left [B\right ]}_{2,{f}_{k}}\quad {x}_{{d}_{3}} = −{\left [B\right ]}_{3,{f}_{k}}\quad \mathop{\mathop{…}}\quad {x}_{{d}_{r}} = −{\left [B\right ]}_{r,{f}_{k}}
From this solution, we obtain a relation of linear dependence on the columns of A,
−{\left [B\right ]}_{1,{f}_{k}}{v}_{{d}_{1}} −{\left [B\right ]}_{2,{f}_{k}}{v}_{{d}_{2}} −{\left [B\right ]}_{3,{f}_{k}}{v}_{{d}_{3}} −\mathop{\mathop{…}} −{\left [B\right ]}_{r,{f}_{k}}{v}_{{d}_{r}} + 1{v}_{{f}_{k}} = 0
which can be arranged as the equality
{v}_{{f}_{k}} ={ \left [B\right ]}_{1,{f}_{k}}{v}_{{d}_{1}} +{ \left [B\right ]}_{2,{f}_{k}}{v}_{{d}_{2}} +{ \left [B\right ]}_{3,{f}_{k}}{v}_{{d}_{3}} + \mathop{\mathop{…}} +{ \left [B\right ]}_{r,{f}_{k}}{v}_{{d}_{r}}
Now, suppose we take an arbitrary element, w, of W = \left \langle S\right \rangle and write it as a linear combination of the elements of S, but with the terms organized according to the indices in D and F,

w = {α}_{1}{v}_{{d}_{1}} + {α}_{2}{v}_{{d}_{2}} + \mathop{\mathop{…}} + {α}_{r}{v}_{{d}_{r}} + {β}_{1}{v}_{{f}_{1}} + {β}_{2}{v}_{{f}_{2}} + \mathop{\mathop{…}} + {β}_{n−r}{v}_{{f}_{n−r}}

From the above, we can replace each {v}_{{f}_{j}} by a linear combination of the {v}_{{d}_{i}}, and collect terms,

w = \left ({α}_{1} + {β}_{1}{\left [B\right ]}_{1,{f}_{1}} + {β}_{2}{\left [B\right ]}_{1,{f}_{2}} + \mathop{\mathop{…}} + {β}_{n−r}{\left [B\right ]}_{1,{f}_{n−r}}\right ){v}_{{d}_{1}} + \mathop{\mathop{…}} + \left ({α}_{r} + {β}_{1}{\left [B\right ]}_{r,{f}_{1}} + {β}_{2}{\left [B\right ]}_{r,{f}_{2}} + \mathop{\mathop{…}} + {β}_{n−r}{\left [B\right ]}_{r,{f}_{n−r}}\right ){v}_{{d}_{r}}
This mess expresses the vector w as a linear combination of the vectors in
T = \left \{{v}_{{d}_{1}},\kern 1.95872pt {v}_{{d}_{2}},\kern 1.95872pt {v}_{{d}_{3}},\kern 1.95872pt \mathop{\mathop{…}}\kern 1.95872pt {v}_{{d}_{r}}\right \}
thus saying that w ∈\left \langle T\right \rangle . Therefore, W = \left \langle S\right \rangle ⊆\left \langle T\right \rangle . ■
In Example COV, we tossed out vectors one at a time. But in each instance, we rewrote the offending vector as a linear combination of those vectors that corresponded to the pivot columns of the reduced row-echelon form of the matrix of columns. In the proof of Theorem BS, we accomplish this reduction in one big step. In Example COV we arrived at a linearly independent set at exactly the same moment that we ran out of free variables to exploit. This was not a coincidence; it is the substance of our conclusion of linear independence in Theorem BS.
Here’s a straightforward application of Theorem BS.
Example RSC4
Reducing a span in {ℂ}^{4}
Begin with a set of five vectors from
{ℂ}^{4},
S = \left \{\left [\array{ 1\cr 1\cr 2\cr 1 } \right ],\kern 1.95872pt \left [\array{ 2\cr 2\cr 4\cr 2 } \right ],\kern 1.95872pt \left [\array{ 2\cr 0\cr −1\cr 1 } \right ],\kern 1.95872pt \left [\array{ 7\cr 1\cr −1\cr 4 } \right ],\kern 1.95872pt \left [\array{ 0\cr 2\cr 5\cr 1 } \right ]\right \}
and let W = \left \langle S\right \rangle . To arrive at a (smaller) linearly independent set, follow the procedure described in Theorem BS. Place the vectors from S into a matrix as columns, and row-reduce,
\left [\array{ 1&2& 2 & 7 &0\cr 1&2& 0 & 1 &2\cr 2&4&−1&−1&5\cr 1&2& 1 & 4 &1 } \right ]\mathop{\longrightarrow}\limits_{}^{\text{RREF}}\left [\array{ \text{1}&2&0&1& 2\cr 0&0&\text{1}&3&−1\cr 0&0&0&0& 0\cr 0&0&0&0& 0 } \right ]
Columns 1 and 3 are the pivot columns (D = \left \{1,\kern 1.95872pt 3\right \}) so the set
T = \left \{\left [\array{ 1\cr 1\cr 2\cr 1 } \right ],\kern 1.95872pt \left [\array{ 2\cr 0\cr −1\cr 1 } \right ]\right \}
is linearly independent and \left \langle T\right \rangle = \left \langle S\right \rangle = W. Boom!
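For readers who like to confirm such computations with software, here is a minimal sketch of the Theorem BS procedure applied to the set S of this example. It is not part of the text and assumes Python with the sympy library: pack the vectors of S into a matrix as columns, row-reduce, and keep the original vectors indexed by the pivot columns.

```python
# Sketch only, assuming sympy is available; vectors are those of Example RSC4.
import sympy as sp

S = [sp.Matrix(v) for v in ([1, 1, 2, 1], [2, 2, 4, 2], [2, 0, -1, 1],
                            [7, 1, -1, 4], [0, 2, 5, 1])]
A = sp.Matrix.hstack(*S)
_, D = A.rref()            # D == (0, 2), i.e. columns 1 and 3 in the text's numbering
T = [S[d] for d in D]

# T is linearly independent (full column rank) and, since T is a subset of S
# containing rank(A) vectors, <T> = <S>
assert sp.Matrix.hstack(*T).rank() == len(T) == A.rank()
```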
Since the reduced row-echelon form of a matrix is unique (Theorem RREFU), the procedure of Theorem BS leads us to a unique set T. However, there is a wide variety of possibilities for sets T that are linearly independent and which can be employed in a span to create W. Without proof, we list two other possibilities:
Can you prove that {T}^{′} and {T}^{∗} are linearly independent sets and W = \left \langle S\right \rangle = \left \langle {T}^{′}\right \rangle = \left \langle {T}^{∗}\right \rangle ? ⊠
Example RES
Reworking elements of a span
Begin with a set of five vectors from
{ℂ}^{4},
R = \left \{\left [\array{ 2\cr 1\cr 3\cr 2 } \right ],\kern 1.95872pt \left [\array{ −1\cr 1\cr 0\cr 1 } \right ],\kern 1.95872pt \left [\array{ −8\cr −1\cr −9\cr −4 } \right ],\kern 1.95872pt \left [\array{ 3\cr 1\cr −1\cr −2 } \right ],\kern 1.95872pt \left [\array{ −10\cr −1\cr −1\cr 4 } \right ]\right \}
It is easy to create elements of X = \left \langle R\right \rangle — we will create one at random,
y = 6\left [\array{ 2\cr 1\cr 3\cr 2 } \right ]+(−7)\left [\array{ −1\cr 1\cr 0\cr 1 } \right ]+1\left [\array{ −8\cr −1\cr −9\cr −4 } \right ]+6\left [\array{ 3\cr 1\cr −1\cr −2 } \right ]+2\left [\array{ −10\cr −1\cr −1\cr 4 } \right ] = \left [\array{ 9\cr 2\cr 1\cr −3 } \right ]
We know we can replace R by a smaller set (since it is obviously linearly dependent by Theorem MVSLD) that will create the same span. Here goes,
\left [\array{ 2&−1&−8& 3 &−10\cr 1& 1 &−1& 1 & −1\cr 3& 0 &−9&−1& −1\cr 2& 1 &−4&−2& 4 } \right ]\mathop{\longrightarrow}\limits_{}^{\text{RREF}}\left [\array{ \text{1}&0&−3&0&−1\cr 0&\text{1}& 2 &0& 2\cr 0&0& 0 &\text{1}&−2\cr 0&0& 0 &0& 0 } \right ]
So, if we collect the first, second and fourth vectors from R,
P = \left \{\left [\array{ 2\cr 1\cr 3\cr 2 } \right ],\kern 1.95872pt \left [\array{ −1\cr 1\cr 0\cr 1 } \right ],\kern 1.95872pt \left [\array{ 3\cr 1\cr −1\cr −2 } \right ]\right \}
then P is linearly independent and \left \langle P\right \rangle = \left \langle R\right \rangle = X by Theorem BS. Since we built y as an element of \left \langle R\right \rangle it must also be an element of \left \langle P\right \rangle . Can we write y as a linear combination of just the three vectors in P? The answer is, of course, yes. But let’s compute an explicit linear combination just for fun. By Theorem SLSLC we can get such a linear combination by solving a system of equations with the column vectors of P as the columns of a coefficient matrix, and y as the vector of constants. Employing an augmented matrix to solve this system,
\left [\array{ 2&−1& 3 & 9\cr 1& 1 & 1 & 2\cr 3& 0 &−1& 1\cr 2& 1 &−2&−3 } \right ]\mathop{\longrightarrow}\limits_{}^{\text{RREF}}\left [\array{ \text{1}&0&0& 1\cr 0&\text{1}&0&−1\cr 0&0&\text{1}& 2\cr 0&0&0& 0 } \right ]
So we see, as expected, that
1\left [\array{ 2\cr 1\cr 3\cr 2 } \right ]+(−1)\left [\array{ −1\cr 1\cr 0\cr 1 } \right ]+2\left [\array{ 3\cr 1\cr −1\cr −2 } \right ] = \left [\array{ 9\cr 2\cr 1\cr −3 } \right ] = y
A key feature of this example is that the linear combination that expresses y as a linear combination of the vectors in P is unique. This is a consequence of the linear independence of P. The linearly independent set P is smaller than R, but still just (barely) big enough to create elements of the set X = \left \langle R\right \rangle . There are many, many ways to write y as a linear combination of the five vectors in R (the appropriate system of equations to verify this claim has two free variables in the description of the solution set), yet there is precisely one way to write y as a linear combination of the three vectors in P. ⊠
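The uniqueness claim at the end of this example is easy to confirm numerically. The following sketch is not part of the text; it assumes Python with the sympy library. Row-reducing the augmented matrix built from P and y reproduces the unique scalars 1, −1, 2, while the matrix built from all five vectors of R leaves two free variables.

```python
# Sketch only, assuming sympy is available; data taken from Example RES.
import sympy as sp

P = sp.Matrix([[2, -1, 3], [1, 1, 1], [3, 0, -1], [2, 1, -2]])
R = sp.Matrix([[2, -1, -8, 3, -10], [1, 1, -1, 1, -1],
               [3, 0, -9, -1, -1], [2, 1, -4, -2, 4]])
y = sp.Matrix([9, 2, 1, -3])

# every column of P is a pivot column, so the last column of the RREF of [P|y]
# reads off the unique scalars: 1, -1, 2
print(P.row_join(y).rref()[0])

# with all five vectors of R there are 5 - rank(R) = 2 free variables, hence
# infinitely many ways to write y
print(R.cols - R.rank())
```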
S = \left \{\left [\array{ 1\cr 10\cr 100\cr 1000 } \right ],\kern 1.95872pt \left [\array{ 1\cr 1\cr 1\cr 1 } \right ],\kern 1.95872pt \left [\array{ 5\cr 23\cr 203\cr 2003 } \right ]\right \}
Write one vector from S as a linear combination of the other two (you should be able to do this on sight, rather than doing some computations). Convert this expression into a nontrivial relation of linear dependence on S.
C20 Let T be the set of columns of the matrix B below. Define W = \left \langle T\right \rangle . Find a set R so that (1) R has 3 vectors, (2) R is a subset of T, and (3) W = \left \langle R\right \rangle .
B = \left [\array{ −3&1&−2& 7\cr −1&2& 1 & 4\cr 1 &1& 2 &−1 } \right ]
Contributed by Robert Beezer Solution [491]
C40 Verify that the set {R}^{′} = \left \{{v}_{1},\kern 1.95872pt {v}_{2},\kern 1.95872pt {v}_{4}\right \} at the end of Example RSC5 is linearly independent.
Contributed by Robert Beezer
C50 Consider the set of vectors from {ℂ}^{3}, W, given below. Find a linearly independent set T that contains three vectors from W and such that \left \langle W\right \rangle = \left \langle T\right \rangle .
W = \left \{{v}_{1},\kern 1.95872pt {v}_{2},\kern 1.95872pt {v}_{3},\kern 1.95872pt {v}_{4},\kern 1.95872pt {v}_{5}\right \} = \left \{\left [\array{ 2\cr 1\cr 1 } \right ],\kern 1.95872pt \left [\array{ −1\cr −1\cr 1 } \right ],\kern 1.95872pt \left [\array{ 1\cr 2\cr 3 } \right ],\kern 1.95872pt \left [\array{ 3\cr 1\cr 3 } \right ],\kern 1.95872pt \left [\array{ 0\cr 1\cr −3 } \right ]\right \}
Contributed by Robert Beezer Solution [492]
C51 Given the set S below, find a linearly independent set T so that \left \langle T\right \rangle = \left \langle S\right \rangle .
S = \left \{\left [\array{ 2\cr −1\cr 2 } \right ],\kern 1.95872pt \left [\array{ 3\cr 0\cr 1 } \right ],\kern 1.95872pt \left [\array{ 1\cr 1\cr −1 } \right ],\kern 1.95872pt \left [\array{ 5\cr −1\cr 3 } \right ]\right \}
Contributed by Robert Beezer Solution [493]
C52 Let W be the span of the set of vectors S below, W = \left \langle S\right \rangle . Find a set T so that (1) the span of T is W, \left \langle T\right \rangle = W, (2) T is a linearly independent set, and (3) T is a subset of S. (15 points)
Contributed by Robert Beezer Solution [494]
C55 Let T be the set of vectors T = \left \{\left [\array{ 1\cr −1\cr 2 } \right ],\kern 1.95872pt \left [\array{ 3\cr 0\cr 1 } \right ],\kern 1.95872pt \left [\array{ 4\cr 2\cr 3 } \right ],\kern 1.95872pt \left [\array{ 3\cr 0\cr 6 } \right ]\right \}. Find two different subsets of T, named R and S, so that R and S each contain three vectors, and so that \left \langle R\right \rangle = \left \langle T\right \rangle and \left \langle S\right \rangle = \left \langle T\right \rangle . Prove that both R and S are linearly independent.
Contributed by Robert Beezer Solution [495]
C70 Reprise Example RES by creating a new version of the vector y. In other words, form a new, different linear combination of the vectors in R to create a new vector y (but do not simplify the problem too much by choosing any of the five new scalars to be zero). Then express this new y as a combination of the vectors in P.
Contributed by Robert Beezer
M10 At the conclusion of Example RSC4 two alternative solutions, sets {T}^{′} and {T}^{∗}, are proposed. Verify these claims by proving that \left \langle T\right \rangle = \left \langle {T}^{′}\right \rangle and \left \langle T\right \rangle = \left \langle {T}^{∗}\right \rangle .
Contributed by Robert Beezer
T40 Suppose that {v}_{1} and {v}_{2} are any two vectors from {ℂ}^{m}. Prove the following set equality.
\left \langle \left \{{v}_{1},\kern 1.95872pt {v}_{2}\right \}\right \rangle = \left \langle \left \{{v}_{1} + {v}_{2},\kern 1.95872pt {v}_{1} − {v}_{2}\right \}\right \rangle
Contributed by Robert Beezer Solution [497]
C20 Contributed by Robert Beezer Statement [487]
Let T = \left \{{w}_{1},\kern 1.95872pt {w}_{2},\kern 1.95872pt {w}_{3},\kern 1.95872pt {w}_{4}\right \}. The vector \left [\array{ 2\cr −1\cr 0\cr 1 } \right ] is a solution to the homogeneous system with the matrix B as the coefficient matrix (check this!). By Theorem SLSLC it provides the scalars for a linear combination of the columns of B (the vectors in T) that equals the zero vector, a relation of linear dependence on T,

2{w}_{1} + (−1){w}_{2} + (1){w}_{4} = 0

We can rearrange this equation by solving for {w}_{4},

{w}_{4} = (−2){w}_{1} + {w}_{2}
This equation tells us that the vector {w}_{4} is superfluous in the span construction that creates W. So W = \left \langle \left \{{w}_{1},\kern 1.95872pt {w}_{2},\kern 1.95872pt {w}_{3}\right \}\right \rangle . The requested set is R = \left \{{w}_{1},\kern 1.95872pt {w}_{2},\kern 1.95872pt {w}_{3}\right \}.
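The solution above says “check this!”. Here is a quick check, assuming Python with the sympy library (not part of the text), that the proposed vector really does solve the homogeneous system with B as coefficient matrix.

```python
# Sketch only, assuming sympy is available; B and the vector are from Exercise C20.
import sympy as sp

B = sp.Matrix([[-3, 1, -2, 7], [-1, 2, 1, 4], [1, 1, 2, -1]])
x = sp.Matrix([2, -1, 0, 1])
assert B * x == sp.zeros(3, 1)   # a nontrivial solution to LS(B, 0)
```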
C50 Contributed by Robert Beezer Statement [487]
To apply Theorem BS, we formulate a matrix A whose columns are {v}_{1},\kern 1.95872pt {v}_{2},\kern 1.95872pt {v}_{3},\kern 1.95872pt {v}_{4},\kern 1.95872pt {v}_{5}. Then we row-reduce A. After row-reducing, we obtain

\left [\array{ \text{1}&0&0&2&−1\cr 0&\text{1}&0&1&−2\cr 0&0&\text{1}&0& 0 } \right ]
From this we see that the pivot columns are D = \left \{1,\kern 1.95872pt 2,\kern 1.95872pt 3\right \}. Thus
T = \left \{{v}_{1},\kern 1.95872pt {v}_{2},\kern 1.95872pt {v}_{3}\right \} = \left \{\left [\array{ 2\cr 1\cr 1 } \right ],\kern 1.95872pt \left [\array{ −1\cr −1\cr 1 } \right ],\kern 1.95872pt \left [\array{ 1\cr 2\cr 3 } \right ]\right \}
is a linearly independent set and \left \langle T\right \rangle = W. Compare this problem with Exercise LI.M50.
C51 Contributed by Robert Beezer Statement [488]
Theorem BS says we can make a matrix with these four vectors as columns, row-reduce, and just keep the columns with indices in the set D. Here we go, forming the relevant matrix and row-reducing,

\left [\array{ 2 &3& 1 & 5\cr −1&0& 1 &−1\cr 2 &1&−1& 3 } \right ]\mathop{\longrightarrow}\limits_{}^{\text{RREF}}\left [\array{ \text{1}&0&−1&1\cr 0&\text{1}& 1 &1\cr 0&0& 0 &0 } \right ]
Analyzing the row-reduced version of this matrix, we see that the first two columns are pivot columns, so D = \left \{1, 2\right \}. Theorem BS says we need only “keep” the first two columns to create a set with the requisite properties,
T = \left \{\left [\array{ 2\cr −1\cr 2 } \right ],\kern 1.95872pt \left [\array{ 3\cr 0\cr 1 } \right ]\right \}
C52 Contributed by Robert Beezer Statement [488]
This is a straight setup for the conclusion of Theorem BS. The hypotheses of this theorem tell us to pack the vectors of W into the columns of a matrix and row-reduce,
Pivot columns have indices D = \left \{1,\kern 1.95872pt 2,\kern 1.95872pt 4\right \}. Theorem BS tells us to form T with columns 1,\kern 1.95872pt 2 and 4 of S,
C55 Contributed by Robert Beezer Statement [489]
Let A be the matrix whose columns are the vectors in T. Then row-reduce A,

A\mathop{\longrightarrow}\limits_{}^{\text{RREF}}B = \left [\array{ \text{1}&0&0& 2\cr 0&\text{1}&0&−1\cr 0&0&\text{1}& 1 } \right ]
From Theorem BS we can form R by choosing the columns of A that correspond to the pivot columns of B. Theorem BS also guarantees that R will be linearly independent.
R = \left \{\left [\array{ 1\cr −1\cr 2 } \right ],\kern 1.95872pt \left [\array{ 3\cr 0\cr 1 } \right ],\kern 1.95872pt \left [\array{ 4\cr 2\cr 3 } \right ]\right \}
That was easy. To find S will require a bit more work. From B we can obtain a solution to ℒS\kern -1.95872pt \left (A,\kern 1.95872pt 0\right ), which by Theorem SLSLC will provide a nontrivial relation of linear dependence on the columns of A, which are the vectors in T. To wit, choose the free variable {x}_{4} to be 1, then {x}_{1} = −2, {x}_{2} = 1, {x}_{3} = −1, and so
(−2)\left [\array{ 1\cr −1\cr 2 } \right ]+(1)\left [\array{ 3\cr 0\cr 1 } \right ]+(−1)\left [\array{ 4\cr 2\cr 3 } \right ]+(1)\left [\array{ 3\cr 0\cr 6 } \right ] = \left [\array{ 0\cr 0\cr 0 } \right ]
this equation can be rewritten with the second vector staying put, and the other three moving to the other side of the equality,
\left [\array{ 3\cr 0\cr 1 } \right ] = (2)\left [\array{ 1\cr −1\cr 2 } \right ]+(1)\left [\array{ 4\cr 2\cr 3 } \right ]+(−1)\left [\array{ 3\cr 0\cr 6 } \right ]
We could have chosen other vectors to stay put, but may have then needed to divide by a nonzero scalar. This equation is enough to conclude that the second vector in T is “surplus” and can be replaced (see the careful argument in Example RSC5). So set
S = \left \{\left [\array{ 1\cr −1\cr 2 } \right ],\kern 1.95872pt \left [\array{ 4\cr 2\cr 3 } \right ],\kern 1.95872pt \left [\array{ 3\cr 0\cr 6 } \right ]\right \}
and then \left \langle S\right \rangle = \left \langle T\right \rangle . S is also a linearly independent set, which we can show directly. Make a matrix C whose columns are the vectors in S. Row-reduce C and you will obtain the identity matrix {I}_{3}. By Theorem LIVRN, the set S is linearly independent.
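A quick software check of that last claim, assuming Python with the sympy library (not part of the text): the matrix C whose columns are the three vectors of S row-reduces to the identity matrix, so S is linearly independent.

```python
# Sketch only, assuming sympy is available; columns are the vectors of S from Exercise C55.
import sympy as sp

C = sp.Matrix([[1, 4, 3], [-1, 2, 0], [2, 3, 6]])
assert C.rref()[0] == sp.eye(3)
```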
T40 Contributed by Robert Beezer Statement [490]
This is an equality of sets, so Definition SE applies.
The “easy” half first. Show that X = \left \langle \left \{{v}_{1} + {v}_{2},\kern 1.95872pt {v}_{1} − {v}_{2}\right \}\right \rangle ⊆\left \langle \left \{{v}_{1},\kern 1.95872pt {v}_{2}\right \}\right \rangle = Y .
Choose x ∈ X. Then x = {a}_{1}({v}_{1} + {v}_{2}) + {a}_{2}({v}_{1} − {v}_{2}) for some scalars {a}_{1} and {a}_{2}. Then,

x = {a}_{1}({v}_{1} + {v}_{2}) + {a}_{2}({v}_{1} − {v}_{2}) = ({a}_{1} + {a}_{2}){v}_{1} + ({a}_{1} − {a}_{2}){v}_{2}
which qualifies x for membership in Y , as it is a linear combination of {v}_{1},\kern 1.95872pt {v}_{2}.
Now show the opposite inclusion, Y = \left \langle \left \{{v}_{1},\kern 1.95872pt {v}_{2}\right \}\right \rangle ⊆\left \langle \left \{{v}_{1} + {v}_{2},\kern 1.95872pt {v}_{1} − {v}_{2}\right \}\right \rangle = X.
Choose y ∈ Y . Then there are scalars {b}_{1},\kern 1.95872pt {b}_{2} such that y = {b}_{1}{v}_{1} + {b}_{2}{v}_{2}. Rearranging, we obtain,

y = {b}_{1}{v}_{1} + {b}_{2}{v}_{2} = \frac{{b}_{1} + {b}_{2}}{2}\left ({v}_{1} + {v}_{2}\right ) + \frac{{b}_{1} − {b}_{2}}{2}\left ({v}_{1} − {v}_{2}\right )
This is an expression for y as a linear combination of {v}_{1} + {v}_{2} and {v}_{1} − {v}_{2}, earning y membership in X. Since X is a subset of Y , and vice versa, we see that X = Y , as desired.