From A First Course in Linear Algebra
Version 2.12
© 2004.
Licensed under the GNU Free Documentation License.
http://linear.ups.edu/
Once the dimension of a vector space is known, the determination of whether or not a set of vectors is linearly independent, or whether it spans the vector space, can often be much easier. In this section we will state a workhorse theorem and then apply it to the column space and row space of a matrix. It will also help us describe bases for {ℂ}^{m}.
We begin with a useful theorem that we will need later, and in the proof of the main theorem in this subsection. This theorem says that we can extend linearly independent sets, one vector at a time, by adding vectors from outside the span of the linearly independent set, all the while preserving the linear independence of the set.
Theorem ELIS
Extending Linearly Independent Sets
Suppose V is a vector space and S is a linearly independent set of vectors from V . Suppose w is a vector such that w ∉ ⟨S⟩. Then the set S′ = S ∪ {w} is linearly independent. □
Proof Suppose S = {v_1, v_2, v_3, …, v_m} and begin with a relation of linear dependence on S′,

a_1 v_1 + a_2 v_2 + a_3 v_3 + \cdots + a_m v_m + a_{m+1} w = 0.

There are two cases to consider. First suppose that a_{m+1} = 0. Then the relation of linear dependence on S′ becomes

a_1 v_1 + a_2 v_2 + a_3 v_3 + \cdots + a_m v_m = 0,

and by the linear independence of the set S, we conclude that a_1 = a_2 = a_3 = \cdots = a_m = 0. So all of the scalars in the relation of linear dependence on S′ are zero.
In the second case, suppose that a_{m+1} ≠ 0. Then we can rearrange the relation of linear dependence on S′ and divide by a_{m+1} to obtain

w = -\frac{a_1}{a_{m+1}} v_1 - \frac{a_2}{a_{m+1}} v_2 - \cdots - \frac{a_m}{a_{m+1}} v_m.

This equation expresses w as a linear combination of the vectors in S, contrary to the assumption that w ∉ ⟨S⟩, so this case leads to a contradiction.
The first case yielded only a trivial relation of linear dependence on {S}^{′} and the second case led to a contradiction. So {S}^{′} is a linearly independent set since any relation of linear dependence is trivial. ■
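Theorem ELIS is easy to illustrate numerically. The following sketch (our own, not part of the original text, using numpy) takes a linearly independent set in ℝ³, checks that a chosen w lies outside its span by a rank comparison, and confirms that the extended set is still linearly independent; the specific vectors are our own choices.

```python
import numpy as np

# S = {v1, v2}: a linearly independent set in R^3 (the columns of V)
V = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.0, 0.0]])

# w has a nonzero third entry, so w is not in <S>;
# equivalently, appending w to the columns of V raises the rank
w = np.array([[2.0],
              [3.0],
              [1.0]])
assert np.linalg.matrix_rank(np.hstack([V, w])) > np.linalg.matrix_rank(V)

# Theorem ELIS predicts S' = S union {w} is linearly independent:
# the three columns of [V | w] should have rank 3
S_prime = np.hstack([V, w])
print(np.linalg.matrix_rank(S_prime))  # 3
```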
In the story Goldilocks and the Three Bears, the young girl Goldilocks visits the empty house of the three bears while out walking in the woods. One bowl of porridge is too hot, another too cold, the third is just right. One chair is too hard, one too soft, the third is just right. So it is with sets of vectors — some are too big (linearly dependent), some are too small (they don’t span), and some are just right (bases). Here’s Goldilocks’ Theorem.
Theorem G
Goldilocks
Suppose that V is a vector space of dimension t. Let S = {v_1, v_2, v_3, …, v_m} be a set of vectors from V . Then

1. If m > t, then S is linearly dependent.
2. If m < t, then S does not span V .
3. If m = t and S is linearly independent, then S spans V (and so is a basis of V ).
4. If m = t and S spans V , then S is linearly independent (and so is a basis of V ).

□
Proof Let B be a basis of V . Since \mathop{ dim}\nolimits \left (V \right ) = t, Definition B and Theorem BIS imply that B is a linearly independent set of t vectors that spans V .
There is a tension in the construction of a basis. Make a set too big and you will end up with relations of linear dependence among the vectors. Make a set too small and you will not have enough raw material to span the entire vector space. Make a set just the right size (the dimension) and you only need to have linear independence or spanning, and you get the other property for free. These roughly stated ideas are made precise by Theorem G.
The structure and proof of this theorem also deserve comment. The hypotheses seem innocuous. We presume we know the dimension of the vector space in hand, then we mostly just look at the size of the set S. From this we get big conclusions about spanning and linear independence. Each of the four proofs relies on ultimately contradicting Theorem SSLD, so in a way we could think of this entire theorem as a corollary of Theorem SSLD. (See Technique LC.) The proofs of the third and fourth parts parallel each other in style (add w, toss {v}_{k}) and then turn on Theorem ELIS before contradicting Theorem SSLD.
Theorem G is useful in both concrete examples and as a tool in other proofs. We will use it often to bypass verifying linear independence or spanning.
Example BPR
Bases for {P}_{n}, reprised
In Example BP we claimed that
were both bases for {P}_{n} (Example VSP). Suppose we had first verified that B was a basis, so we would then know that \mathop{ dim}\nolimits \left ({P}_{n}\right ) = n + 1. The size of C is n + 1, the right size to be a basis. We could then verify that C is linearly independent. We would not have to make any special efforts to prove that C spans {P}_{n}, since Theorem G would allow us to conclude this property of C directly. Then we would be able to say that C is a basis of {P}_{n} also. ⊠
Example BDM22
Basis by dimension in {M}_{22}
In Example DSM22 we showed that
B = \left\{ \begin{bmatrix} -2 & 1 \\ 1 & 0 \end{bmatrix},\ \begin{bmatrix} -2 & 0 \\ 0 & 1 \end{bmatrix} \right\}

is a basis for the subspace Z of {M}_{22} (Example VSM) given by
Z = \left\{ \begin{bmatrix} a & b \\ c & d \end{bmatrix} \;\middle|\; 2a + b + 3c + 4d = 0,\ -a + 3b - 5c - 2d = 0 \right\}

This tells us that \dim(Z) = 2. In this example we will find another basis. We can construct two new matrices in Z by forming linear combinations of the matrices in B,

2 \begin{bmatrix} -2 & 1 \\ 1 & 0 \end{bmatrix} + (-3) \begin{bmatrix} -2 & 0 \\ 0 & 1 \end{bmatrix} = \begin{bmatrix} 2 & 2 \\ 2 & -3 \end{bmatrix}

3 \begin{bmatrix} -2 & 1 \\ 1 & 0 \end{bmatrix} + 1 \begin{bmatrix} -2 & 0 \\ 0 & 1 \end{bmatrix} = \begin{bmatrix} -8 & 3 \\ 3 & 1 \end{bmatrix}

Then the set
C = \left\{ \begin{bmatrix} 2 & 2 \\ 2 & -3 \end{bmatrix},\ \begin{bmatrix} -8 & 3 \\ 3 & 1 \end{bmatrix} \right\}

has the right size to be a basis of Z. Let’s see if it is a linearly independent set. The relation of linear dependence

a_1 \begin{bmatrix} 2 & 2 \\ 2 & -3 \end{bmatrix} + a_2 \begin{bmatrix} -8 & 3 \\ 3 & 1 \end{bmatrix} = \begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix}

leads to the homogeneous system of equations whose coefficient matrix
\begin{bmatrix} 2 & -8 \\ 2 & 3 \\ 2 & 3 \\ -3 & 1 \end{bmatrix}

row-reduces to
\begin{bmatrix} 1 & 0 \\ 0 & 1 \\ 0 & 0 \\ 0 & 0 \end{bmatrix}

So with a_1 = a_2 = 0 as the only solution, the set is linearly independent. Now we can apply Theorem G to see that C also spans Z and therefore is a second basis for Z. ⊠
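The row reduction in this example is quick to replicate by machine. A sketch (our own illustration, using numpy): flatten each matrix of C row by row into a vector of ℂ⁴, so the relation of linear dependence becomes a homogeneous system whose coefficient matrix is exactly the 4 × 2 matrix above, and check its rank.

```python
import numpy as np

# flatten each 2x2 matrix of C into a vector of R^4 (row by row)
c1 = np.array([2.0, 2.0, 2.0, -3.0])    # [[ 2, 2], [2, -3]]
c2 = np.array([-8.0, 3.0, 3.0, 1.0])    # [[-8, 3], [3,  1]]

# the relation a1*c1 + a2*c2 = 0 has c1, c2 as the columns of its
# coefficient matrix -- the same 4x2 matrix row-reduced in the example
A = np.column_stack([c1, c2])
print(np.linalg.matrix_rank(A))  # 2, so only the trivial relation exists
```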
Example SVP4
Sets of vectors in {P}_{4}
In Example BSP4 we showed that
B = \left\{ x - 2,\ x^2 - 4x + 4,\ x^3 - 6x^2 + 12x - 8,\ x^4 - 8x^3 + 24x^2 - 32x + 16 \right\}

is a basis for W = \left\{ p(x) \mid p \in P_4,\ p(2) = 0 \right\}. So \dim(W) = 4.
The set
\left\{ 3x^2 - 5x - 2,\ 2x^2 - 7x + 6,\ x^3 - 2x^2 + x - 2 \right\}

is a subset of W (check this) and it happens to be linearly independent (check this, too). However, by Theorem G it cannot span W.
The set
\left\{ 3x^2 - 5x - 2,\ 2x^2 - 7x + 6,\ x^3 - 2x^2 + x - 2,\ -x^4 + 2x^3 + 5x^2 - 10x,\ x^4 - 16 \right\}

is another subset of W (check this) and Theorem G tells us that it must be linearly dependent.
The set
\left\{ x - 2,\ x^2 - 2x,\ x^3 - 2x^2,\ x^4 - 2x^3 \right\}

is a third subset of W (check this) and is linearly independent (check this). Since it has the right size to be a basis, and is linearly independent, Theorem G tells us that it also spans W, and therefore is a basis of W. ⊠
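The several “check this” claims in this example are easy to automate. A sketch with sympy (our own verification, not part of the text): evaluate each polynomial of the third subset at x = 2 to confirm membership in W, then test linear independence via the rank of the matrix of coefficient vectors.

```python
import sympy as sp

x = sp.symbols('x')

# the third subset of W from the example
polys = [x - 2, x**2 - 2*x, x**3 - 2*x**2, x**4 - 2*x**3]

# membership in W: each polynomial vanishes at x = 2
assert all(p.subs(x, 2) == 0 for p in polys)

# linear independence: rows are coefficient vectors relative to 1, x, ..., x^4
M = sp.Matrix([[sp.expand(p).coeff(x, k) for k in range(5)] for p in polys])
print(M.rank())  # 4 = dim(W), so by Theorem G the set is a basis of W
```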
A simple consequence of Theorem G is the observation that proper subspaces have strictly smaller dimensions. This may seem intuitively obvious, but it still requires proof, and we will cite this result later.
Theorem PSSD
Proper Subspaces have Smaller Dimension
Suppose that U and V are subspaces of the vector space W, such that U ⊊ V . Then \dim(U) < \dim(V ). □
Proof Suppose that \mathop{ dim}\nolimits \left (U\right ) = m and \mathop{ dim}\nolimits \left (V \right ) = t. Then U has a basis B of size m. If m > t, then by Theorem G, B is linearly dependent, which is a contradiction. If m = t, then by Theorem G, B spans V . Then U = \left \langle B\right \rangle = V , also a contradiction. All that remains is that m < t, which is the desired conclusion. ■
The final theorem of this subsection is an extremely powerful tool for establishing the equality of two sets that are subspaces. Notice that the hypotheses include the equality of two integers (dimensions) while the conclusion is the equality of two sets (subspaces). It is the extra “structure” of a vector space and its dimension that makes possible this huge leap from an integer equality to a set equality.
Theorem EDYES
Equal Dimensions Yields Equal Subspaces
Suppose that U and V are subspaces of the vector space W, such that U ⊆ V and \dim(U) = \dim(V ). Then U = V . □
Proof We give a proof by contradiction (Technique CD). Suppose to the contrary that U\mathrel{≠}V . Since U ⊆ V , there must be a vector v such that v ∈ V and v∉U. Let B = \left \{{u}_{1},\kern 1.95872pt {u}_{2},\kern 1.95872pt {u}_{3},\kern 1.95872pt \mathop{\mathop{…}},\kern 1.95872pt {u}_{t}\right \} be a basis for U. Then, by Theorem ELIS, the set C = B ∪\left \{v\right \} = \left \{{u}_{1},\kern 1.95872pt {u}_{2},\kern 1.95872pt {u}_{3},\kern 1.95872pt \mathop{\mathop{…}},\kern 1.95872pt {u}_{t},\kern 1.95872pt v\right \} is a linearly independent set of t + 1 vectors in V . However, by hypothesis, V has the same dimension as U (namely t) and therefore Theorem G says that C is too big to be linearly independent. This contradiction shows that U = V . ■
We now prove one of the most surprising theorems about matrices. Notice the paucity of hypotheses compared to the precision of the conclusion.
Theorem RMRT
Rank of a Matrix is the Rank of the Transpose
Suppose A is an m × n matrix. Then r(A) = r(A^t). □
Proof Suppose we row-reduce A to the matrix B in reduced row-echelon form, and B has r nonzero rows. The quantity r tells us three things about B: the number of leading 1’s, the number of nonzero rows, and the number of pivot columns. For this proof we will be interested in the latter two.
Theorem BRS and Theorem BCS each has a conclusion that provides a basis, for the row space and the column space, respectively. In each case, these bases contain r vectors. This observation makes the following go.

r(A) = \dim(C(A)) = r = \dim(R(A)) = \dim(C(A^t)) = r(A^t) ■
This says that the row space and the column space of a matrix have the same dimension, which should be very surprising. It does not say that the column space and the row space are identical. Indeed, if the matrix is not square, then the sizes (number of slots) of the vectors in each space are different, so the sets are not even comparable.
It is not hard to construct your own examples of matrices that illustrate Theorem RMRT, since it applies equally well to any matrix. Grab a matrix, row-reduce it, count the nonzero rows or the leading 1’s. That’s the rank. Transpose the matrix, row-reduce that, count the nonzero rows or the leading 1’s. That’s the rank of the transpose. The theorem says the two will be equal. Here’s an example anyway.
Example RRTI
Rank, rank of transpose, Archetype I
Archetype I has a 4 × 7 coefficient matrix which row-reduces to
\begin{bmatrix} 1 & 4 & 0 & 0 & 2 & 1 & -3 \\ 0 & 0 & 1 & 0 & 1 & -3 & 5 \\ 0 & 0 & 0 & 1 & 2 & -6 & 6 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 \end{bmatrix}

so the rank is 3. Row-reducing the transpose yields
\begin{bmatrix} 1 & 0 & 0 & -\frac{31}{7} \\ 0 & 1 & 0 & \frac{12}{7} \\ 0 & 0 & 1 & \frac{13}{7} \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix}

demonstrating that the rank of the transpose is also 3. ⊠
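The “row-reduce and count” recipe is easy to automate. A computational sketch (our own, not part of the text, using sympy's rref; the matrix here is a small example of our own choosing, not Archetype I):

```python
import sympy as sp

def rank_by_rref(M):
    # row-reduce and count the pivot columns, exactly as the text suggests
    _, pivots = M.rref()
    return len(pivots)

# any matrix works; here the third row equals the sum of the first two
A = sp.Matrix([[1, 2, 0, 3],
               [0, 1, 1, 1],
               [1, 3, 1, 4]])

print(rank_by_rref(A), rank_by_rref(A.T))  # 2 2, as Theorem RMRT promises
```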
That the rank of a matrix equals the rank of its transpose is a fundamental and surprising result. Moreover, by applying Theorem FS we can easily determine the dimension of all four fundamental subspaces associated with a matrix.
Theorem DFS
Dimensions of Four Subspaces
Suppose that A is an m × n matrix, and B is a row-equivalent matrix in reduced row-echelon form with r nonzero rows. Then

1. \dim(N(A)) = n − r
2. \dim(C(A)) = r
3. \dim(R(A)) = r
4. \dim(L(A)) = m − r

□
Proof If A row-reduces to a matrix in reduced row-echelon form with r nonzero rows, then the matrix C of extended echelon form (Definition EEF) will be an r × n matrix in reduced row-echelon form with no zero rows and r pivot columns (Theorem PEEF). Similarly, the matrix L of extended echelon form (Definition EEF) will be an (m − r) × m matrix in reduced row-echelon form with no zero rows and m − r pivot columns (Theorem PEEF).
There are many different ways to state and prove this result, and indeed, the equality of the dimensions of the column space and row space is just a slight expansion of Theorem RMRT. However, we have restricted our techniques to applying Theorem FS and then determining dimensions with bases provided by Theorem BNS and Theorem BRS. This provides an appealing symmetry to the results and the proof.
Some of the more advanced ideas in linear algebra are closely related to decomposing (Technique DC) vector spaces into direct sums of subspaces. With our previous results about bases and dimension, now is the right time to state and collect a few results about direct sums, though we will only mention these results in passing until we get to Section NLT, where they will get a heavy workout.
A direct sum is a shorthand way to describe the relationship between a vector space and two, or more, of its subspaces. As we will use it, it is not a way to construct new vector spaces from others.
Definition DS
Direct Sum
Suppose that V is a vector space with two subspaces U and W such that for every v ∈ V ,

1. There exist vectors u ∈ U, w ∈ W such that v = u + w.
2. If v = u_1 + w_1 and v = u_2 + w_2, where u_1, u_2 ∈ U and w_1, w_2 ∈ W, then u_1 = u_2 and w_1 = w_2.

Then V is the direct sum of U and W and we write V = U ⊕ W.
(This definition contains Notation DS.) △
Informally, when we say V is the direct sum of the subspaces U and W, we are saying that each vector of V can always be expressed as the sum of a vector from U and a vector from W, and this expression can only be accomplished in one way (i.e. uniquely). This statement should begin to feel something like our definitions of nonsingular matrices (Definition NM) and linear independence (Definition LI). It should not be hard to imagine the natural extension of this definition to the case of more than two subspaces. Could you provide a careful definition of V = {U}_{1} ⊕ {U}_{2} ⊕ {U}_{3} ⊕\mathop{\mathop{…}} ⊕ {U}_{m} (Exercise PD.M50)?
Example SDS
Simple direct sum
In {ℂ}^{3}, define

Then {ℂ}^{3} = ⟨{v_1, v_2}⟩ ⊕ ⟨{v_3}⟩. This statement derives from the fact that B = {v_1, v_2, v_3} is a basis for {ℂ}^{3}. The spanning property of B yields the decomposition of any vector into a sum of vectors from the two subspaces, and the linear independence of B yields the uniqueness of the decomposition. We will illustrate these claims with a numerical example.
Choose v = \begin{bmatrix} 10 \\ 1 \\ 6 \end{bmatrix}. Then
where we have added parentheses for emphasis. Obviously 1v_3 ∈ ⟨{v_3}⟩, while 2v_1 + (−2)v_2 ∈ ⟨{v_1, v_2}⟩. Theorem VRRB provides the uniqueness of the scalars in these linear combinations. ⊠
Example SDS is easy to generalize into a theorem.
Theorem DSFB
Direct Sum From a Basis
Suppose that V is a vector space with a basis B = {v_1, v_2, v_3, …, v_n} and m ≤ n. Define

U = ⟨{v_1, v_2, v_3, …, v_m}⟩        W = ⟨{v_{m+1}, v_{m+2}, v_{m+3}, …, v_n}⟩

Then V = U ⊕ W. □
Proof Choose any vector v ∈ V . Then by Theorem VRRB there are unique scalars a_1, a_2, a_3, …, a_n such that

v = a_1 v_1 + a_2 v_2 + \cdots + a_n v_n
  = (a_1 v_1 + a_2 v_2 + \cdots + a_m v_m) + (a_{m+1} v_{m+1} + a_{m+2} v_{m+2} + \cdots + a_n v_n)
  = u + w

where we have implicitly defined u and w in the last line. It should be clear that u ∈ U, and similarly, w ∈ W (and not simply by the choice of their names).
Suppose we had another decomposition of v, say v = u^∗ + w^∗. Then we could write u^∗ as a linear combination of v_1 through v_m, say using scalars b_1, b_2, b_3, …, b_m. And we could write w^∗ as a linear combination of v_{m+1} through v_n, say using scalars c_1, c_2, c_3, …, c_{n−m}. These two collections of scalars would then together give a linear combination of v_1 through v_n that equals v. By the uniqueness of a_1, a_2, a_3, …, a_n, we have a_i = b_i for 1 ≤ i ≤ m and a_{m+i} = c_i for 1 ≤ i ≤ n − m. From the equality of these scalars we conclude that u = u^∗ and w = w^∗. So with both conditions of Definition DS fulfilled we see that V = U ⊕ W. ■
Given one subspace of a vector space, we can always find another subspace that will pair with the first to form a direct sum. The main idea of this theorem, and its proof, is the idea of extending a linearly independent subset into a basis with repeated applications of Theorem ELIS.
Theorem DSFOS
Direct Sum From One Subspace
Suppose that U is a subspace of the vector space V . Then there exists a subspace W of V such that V = U ⊕ W. □
Proof If U = V , then choose W = \left \{0\right \}. Otherwise, choose a basis B = \left \{{v}_{1},\kern 1.95872pt {v}_{2},\kern 1.95872pt {v}_{3},\kern 1.95872pt \mathop{\mathop{…}},\kern 1.95872pt {v}_{m}\right \} for U. Then since B is a linearly independent set, Theorem ELIS tells us there is a vector {v}_{m+1} in V , but not in U, such that B ∪\left \{{v}_{m+1}\right \} is linearly independent. Define the subspace {U}_{1} = \left \langle B ∪\left \{{v}_{m+1}\right \}\right \rangle .
We can repeat this procedure, in the case where U_1 ≠ V , creating a new vector v_{m+2} in V , but not in U_1, and a new subspace U_2 = ⟨B ∪ {v_{m+1}, v_{m+2}}⟩. If we continue repeating this procedure, eventually U_k = V for some k, and we can no longer apply Theorem ELIS. No matter; in this case B ∪ {v_{m+1}, v_{m+2}, …, v_{m+k}} is a linearly independent set that spans V , i.e. a basis for V .
Define W = \left \langle \left \{{v}_{m+1},\kern 1.95872pt {v}_{m+2},\kern 1.95872pt \mathop{\mathop{…}},\kern 1.95872pt {v}_{m+k}\right \}\right \rangle . We now are exactly in position to apply Theorem DSFB and see that V = U ⊕ W. ■
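The proof of Theorem DSFOS is constructive, and in ℝⁿ the construction is easy to carry out: keep adjoining vectors from outside the current span until a basis is reached. A sketch in numpy (our own illustration; the candidate pool of standard unit vectors and the sample subspace are our choices):

```python
import numpy as np

def complementary_subspace(U_cols, n):
    """Return columns spanning a W with R^n = U (+) W, where U is spanned
    by the columns of U_cols, by repeatedly adjoining standard unit
    vectors that lie outside the current span (Theorem ELIS)."""
    current = [U_cols[:, j] for j in range(U_cols.shape[1])]
    extra = []
    for i in range(n):
        e = np.zeros(n)
        e[i] = 1.0
        M = np.column_stack(current + extra + [e])
        # keep e only if it enlarges the span (the rank goes up)
        if np.linalg.matrix_rank(M) > len(current) + len(extra):
            extra.append(e)
    return np.column_stack(extra) if extra else np.zeros((n, 0))

U = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])        # a 2-dimensional subspace of R^3
W = complementary_subspace(U, 3)
print(W.shape[1])  # 1, and dim U + dim W = 3, as Theorem DSD requires
```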
There are several different ways to define a direct sum. Our next two theorems give equivalences (Technique E) for direct sums, and therefore could have been employed as definitions. The first should further cement the notion that a direct sum has some connection with linear independence.
Theorem DSZV
Direct Sums and Zero Vectors
Suppose U and W are subspaces of the vector space V . Then V = U ⊕ W if and only if

1. For every v ∈ V , there exist vectors u ∈ U, w ∈ W such that v = u + w.
2. Whenever 0 = u + w with u ∈ U and w ∈ W, then u = w = 0.

□
Proof The first condition is identical in the definition and the theorem, so we only need to establish the equivalence of the second conditions.
( ⇒) Assume that V = U ⊕ W, according to Definition DS. By Property Z, 0 ∈ V and 0 = 0 + 0. If we also assume that 0 = u + w, then the uniqueness of the decomposition gives u = 0 and w = 0.
( ⇐) Suppose that v ∈ V , v = u_1 + w_1 and v = u_2 + w_2 where u_1, u_2 ∈ U and w_1, w_2 ∈ W. Then

0 = v − v = (u_1 + w_1) − (u_2 + w_2) = (u_1 − u_2) + (w_1 − w_2)

By Property AC, u_1 − u_2 ∈ U and w_1 − w_2 ∈ W. We can now apply our hypothesis, the second statement of the theorem, to conclude that

u_1 − u_2 = 0        w_1 − w_2 = 0
u_1 = u_2            w_1 = w_2

which establishes the uniqueness needed for the second condition of the definition. ■
Our second equivalence lends further credence to calling a direct sum a decomposition. The two subspaces of a direct sum have no (nontrivial) elements in common.
Theorem DSZI
Direct Sums and Zero Intersection
Suppose U and W are subspaces of the vector space V . Then V = U ⊕ W if and only if

1. For every v ∈ V , there exist vectors u ∈ U, w ∈ W such that v = u + w.
2. U ∩ W = {0}.

□
Proof The first condition is identical in the definition and the theorem, so we only need to establish the equivalence of the second conditions.
( ⇒) Assume that V = U ⊕ W, according to Definition DS. By Property Z and Definition SI, {0} ⊆ U ∩ W. To establish the opposite inclusion, suppose that x ∈ U ∩ W. Then, since x is an element of both U and W, we can write two decompositions of x as a vector from U plus a vector from W,

x = x + 0        x = 0 + x

By the uniqueness of the decomposition, we see (twice) that x = 0 and U ∩ W ⊆ {0}. Applying Definition SE, we have U ∩ W = {0}.
( ⇐) Assume that U ∩ W = {0}. And assume further that v ∈ V is such that v = u_1 + w_1 and v = u_2 + w_2 where u_1, u_2 ∈ U and w_1, w_2 ∈ W. Define x = u_1 − u_2. Then by Property AC, x ∈ U. Also,

x = u_1 − u_2 = (v − w_1) − (v − w_2) = w_2 − w_1

So x ∈ W by Property AC. Thus, x ∈ U ∩ W = {0} (Definition SI). So x = 0 and

u_1 = u_2        w_1 = w_2

yielding the desired uniqueness of the second condition of the definition. ■
If the statement of Theorem DSZV did not remind you of linear independence, the next theorem should establish the connection.
Theorem DSLI
Direct Sums and Linear Independence
Suppose U and W are subspaces of the vector space V with V = U ⊕ W. Suppose that R is a linearly independent subset of U and S is a linearly independent subset of W. Then R ∪ S is a linearly independent subset of V . □
Proof Let R = {u_1, u_2, u_3, …, u_k} and S = {w_1, w_2, w_3, …, w_ℓ}. Begin with a relation of linear dependence (Definition RLD) on the set R ∪ S using scalars a_1, a_2, a_3, …, a_k and b_1, b_2, b_3, …, b_ℓ. Then,

0 = a_1 u_1 + a_2 u_2 + \cdots + a_k u_k + b_1 w_1 + b_2 w_2 + \cdots + b_ℓ w_ℓ = u + w

where we have made an implicit definition of the vectors u ∈ U, w ∈ W. Applying Theorem DSZV we conclude that

u = a_1 u_1 + a_2 u_2 + \cdots + a_k u_k = 0
w = b_1 w_1 + b_2 w_2 + \cdots + b_ℓ w_ℓ = 0

Now the linear independence of R and S (individually) yields

a_1 = a_2 = \cdots = a_k = 0        b_1 = b_2 = \cdots = b_ℓ = 0

Forced to acknowledge that only a trivial linear combination yields the zero vector, Definition LI says the set R ∪ S is linearly independent in V . ■
Our last theorem in this collection will go some ways towards explaining the word “sum” in the moniker “direct sum,” while also partially explaining why these results appear in a section devoted to a discussion of dimension.
Theorem DSD
Direct Sums and Dimension
Suppose U and W are subspaces of the vector space V with V = U ⊕ W. Then \dim(V ) = \dim(U) + \dim(W). □
Proof We will establish this equality of positive integers with two inequalities. We will need a basis of U (call it B) and a basis of W (call it C).
First, note that B and C have sizes equal to the dimensions of the respective subspaces. The union of these two linearly independent sets, B ∪ C will be linearly independent in V by Theorem DSLI. Further, the two bases have no vectors in common by Theorem DSZI, since B ∩ C ⊆\left \{0\right \} and the zero vector is never an element of a linearly independent set (Exercise LI.T10). So the size of the union is exactly the sum of the dimensions of U and W. By Theorem G the size of B ∪ C cannot exceed the dimension of V without being linearly dependent. These observations give us \mathop{ dim}\nolimits \left (U\right ) +\mathop{ dim}\nolimits \left (W\right ) ≤\mathop{ dim}\nolimits \left (V \right ).
Grab any vector v ∈ V . Then by Theorem DSZI we can write v = u + w with u ∈ U and w ∈ W. Individually, we can write u as a linear combination of the basis elements in B, and similarly, we can write w as a linear combination of the basis elements in C, since the bases are spanning sets for their respective subspaces. These two sets of scalars will provide a linear combination of all of the vectors in B ∪ C which will equal v. The upshot of this is that B ∪ C is a spanning set for V . By Theorem G, the size of B ∪ C cannot be smaller than the dimension of V without failing to span V . These observations give us \mathop{ dim}\nolimits \left (U\right ) +\mathop{ dim}\nolimits \left (W\right ) ≥\mathop{ dim}\nolimits \left (V \right ). ■
There is a certain appealing symmetry in the previous proof, where both linear independence and spanning properties of the bases are used, both of the first two conclusions of Theorem G are employed, and we have quoted both of the two conditions of Theorem DSZI.
One final theorem tells us that we can successively decompose direct sums into sums of smaller and smaller subspaces.
Theorem RDS
Repeated Direct Sums
Suppose V is a vector space with subspaces U and W with V = U ⊕ W. Suppose that X and Y are subspaces of W with W = X ⊕ Y . Then V = U ⊕ X ⊕ Y . □
Proof Suppose that v ∈ V . Then due to V = U ⊕ W, there exist vectors u ∈ U and w ∈ W such that v = u + w. Due to W = X ⊕ Y , there exist vectors x ∈ X and y ∈ Y such that w = x + y. All together,

v = u + w = u + x + y

which would be the first condition of a definition of a three-way direct sum. Now consider the uniqueness. Suppose that

v = u_1 + x_1 + y_1        v = u_2 + x_2 + y_2

Because x_1 + y_1 ∈ W, x_2 + y_2 ∈ W, and V = U ⊕ W, we conclude that

u_1 = u_2        x_1 + y_1 = x_2 + y_2

From the second equality, an application of W = X ⊕ Y yields the conclusions x_1 = x_2 and y_1 = y_2. This establishes the uniqueness of the decomposition of v into a sum of vectors from U, X and Y . ■
Remember that when we write V = U ⊕ W there always needs to be a “superspace,” in this case V . The statement U ⊕ W is meaningless. Writing V = U ⊕ W is simply a shorthand for a somewhat complicated relationship between V , U and W, as described in the two conditions of Definition DS, or Theorem DSZV, or Theorem DSZI. Theorem DSFB and Theorem DSFOS give us sure-fire ways to build direct sums, while Theorem DSLI, Theorem DSD and Theorem RDS tell us interesting properties of direct sums. This subsection has been long on theorems and short on examples. If we were to use the term “lemma” we might have chosen to label some of these results as such, since they will be important tools in other proofs, but may not have much interest on their own (see Technique LC). We will be referencing these results heavily in later sections, and will remind you then to come back for a second look.
A = \begin{bmatrix} 1 & -1 & 2 & 8 & 5 \\ 1 & 1 & 1 & 4 & -1 \\ 0 & 2 & -3 & -8 & -6 \\ 2 & 0 & 1 & 8 & 4 \end{bmatrix}

C10 Example SVP4 leaves several details for the reader to check. Verify
these five claims.
Contributed by Robert Beezer
C40 Determine if the set T = \left\{ x^2 - x + 5,\ 4x^3 - x^2 + 5x,\ 3x + 2 \right\}
spans the vector space of polynomials with degree 4 or less,
{P}_{4}.
(Compare the solution to this exercise with Solution LISS.C40.)
Contributed by Robert Beezer Solution [1131]
M50 Mimic Definition DS and construct a reasonable definition of
V = {U}_{1} ⊕ {U}_{2} ⊕ {U}_{3} ⊕\mathop{\mathop{…}} ⊕ {U}_{m}.
Contributed by Robert Beezer
T05 Trivially, if U and V are two subspaces of W such that U = V , then \dim(U) = \dim(V ).
Combine this fact, Theorem PSSD, and Theorem EDYES all into one grand
combined theorem. You might look to Theorem PIP for stylistic inspiration.
(Notice this problem does not ask you to prove anything. It just asks you to
roll up three theorems into one compact, logically equivalent statement.)
Contributed by Robert Beezer
T10 Prove the following theorem, which could be viewed as a reformulation of parts (3) and (4) of Theorem G, or more appropriately as a corollary of Theorem G (Technique LC).
Suppose V is a vector space and S is a subset of V such that the number of vectors in S equals the dimension of V . Then S is linearly independent if and only if S spans V .
Contributed by Robert Beezer
T15 Suppose that A is an m × n matrix and let \text{min}(m, n) denote the minimum of m and n. Prove that r(A) ≤ \text{min}(m, n).
Contributed by Robert Beezer
T20 Suppose that A is an m × n matrix and b ∈ {ℂ}^{m}. Prove that the linear system ℒS(A, b) is consistent if and only if r(A) = r([A ∣ b]).
Contributed by Robert Beezer Solution [1131]
T25 Suppose that V is a vector space with finite dimension. Let W be any subspace of V . Prove that W has finite dimension.
Contributed by Robert Beezer
T33 Part of Exercise B.T50 is the half of the proof where we assume the matrix A is nonsingular and prove that a set is a basis. In Solution B.T50 we proved directly that the set was both linearly independent and a spanning set. Shorten this part of the proof by applying Theorem G. Be careful, there is one subtlety.
Contributed by Robert Beezer Solution [1131]
T60 Suppose that W is a vector space with dimension 5, and U and V are subspaces of W, each of dimension 3. Prove that U ∩ V contains a nonzero vector. State a more general result.
Contributed by Joe Riegsecker Solution [1132]
C40 Contributed by Robert Beezer Statement [1128]
The vector space {P}_{4} has dimension 5 by Theorem DP. Since T contains only 3 vectors, and 3 < 5, Theorem G tells us that T does not span {P}_{4}.
T20 Contributed by Robert Beezer Statement [1129]
( ⇒) Suppose first that ℒS(A, b) is consistent. Then by Theorem CSCS, b ∈ C(A). This means that C(A) = C([A ∣ b]) and so it follows that r(A) = r([A ∣ b]).
( ⇐) Adding a column to a matrix can never shrink its column space, so in all cases, C(A) ⊆ C([A ∣ b]). However, if we assume that r(A) = r([A ∣ b]), then by Theorem EDYES we conclude that C(A) = C([A ∣ b]). Then b ∈ C([A ∣ b]) = C(A), so by Theorem CSCS, ℒS(A, b) is consistent.
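The criterion of Exercise T20 is easy to test numerically. A quick sketch (our own illustration, not part of the text): compare the rank of A with the rank of the augmented matrix [A ∣ b].

```python
import numpy as np

def is_consistent(A, b):
    """LS(A, b) is consistent exactly when r(A) = r([A | b])."""
    aug = np.hstack([A, b.reshape(-1, 1)])
    return np.linalg.matrix_rank(A) == np.linalg.matrix_rank(aug)

A = np.array([[1.0, 2.0],
              [2.0, 4.0]])                       # rank 1
print(is_consistent(A, np.array([1.0, 2.0])))    # True: b is column 1 of A
print(is_consistent(A, np.array([1.0, 3.0])))    # False: b lies outside C(A)
```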
T33 Contributed by Robert Beezer Statement [1129]
By Theorem DCM we know that {ℂ}^{n} has dimension n. So by Theorem G we need only establish that the set C is linearly independent or a spanning set. However, the hypotheses also require that C be of size n. We assumed that B = {x_1, x_2, x_3, …, x_n} had size n, but there is no guarantee that C = {Ax_1, Ax_2, Ax_3, …, Ax_n} will have size n. There could be some “collapsing” or “collisions.”
Suppose we establish that C is linearly independent. Then C must have n distinct elements or else we could fashion a nontrivial relation of linear dependence involving duplicate elements.
If we instead choose to prove that C is a spanning set, then we could establish the uniqueness of the elements of C quite easily. Suppose that Ax_i = Ax_j. Then

0 = Ax_i − Ax_j = A(x_i − x_j)
Since A is nonsingular, we conclude that {x}_{i} − {x}_{j} = 0, or {x}_{i} = {x}_{j}, contrary to our description of B.
T60 Contributed by Robert Beezer Statement [1129]
Let \left \{{u}_{1},\kern 1.95872pt {u}_{2},\kern 1.95872pt {u}_{3}\right \} and \left \{{v}_{1},\kern 1.95872pt {v}_{2},\kern 1.95872pt {v}_{3}\right \} be bases for U and V (respectively). Then the set \left \{{u}_{1},\kern 1.95872pt {u}_{2},\kern 1.95872pt {u}_{3},\kern 1.95872pt {v}_{1},\kern 1.95872pt {v}_{2},\kern 1.95872pt {v}_{3}\right \} is linearly dependent, since Theorem G says we cannot have 6 linearly independent vectors in a vector space of dimension 5. So we can assert that there is a nontrivial relation of linear dependence,
{a}_{1}{u}_{1} + {a}_{2}{u}_{2} + {a}_{3}{u}_{3} + {b}_{1}{v}_{1} + {b}_{2}{v}_{2} + {b}_{3}{v}_{3} = 0

where {a}_{1},\kern 1.95872pt {a}_{2},\kern 1.95872pt {a}_{3} and {b}_{1},\kern 1.95872pt {b}_{2},\kern 1.95872pt {b}_{3} are not all zero.
We can rearrange this equation as
{a}_{1}{u}_{1} + {a}_{2}{u}_{2} + {a}_{3}{u}_{3} = −{b}_{1}{v}_{1} − {b}_{2}{v}_{2} − {b}_{3}{v}_{3}

This is an equality of two vectors, so we can give this common vector a name, say w,
w = {a}_{1}{u}_{1} + {a}_{2}{u}_{2} + {a}_{3}{u}_{3} = −{b}_{1}{v}_{1} − {b}_{2}{v}_{2} − {b}_{3}{v}_{3}

This is the desired nonzero vector, as we will now show.
First, since w = {a}_{1}{u}_{1} + {a}_{2}{u}_{2} + {a}_{3}{u}_{3}, we can see that w ∈ U. Similarly, w = −{b}_{1}{v}_{1} − {b}_{2}{v}_{2} − {b}_{3}{v}_{3}, so w ∈ V . This establishes that w ∈ U ∩ V (Definition SI).
Is w\mathrel{≠}0? Suppose not; in other words, suppose w = 0. Then
0 = w = {a}_{1}{u}_{1} + {a}_{2}{u}_{2} + {a}_{3}{u}_{3}

Because \left \{{u}_{1},\kern 1.95872pt {u}_{2},\kern 1.95872pt {u}_{3}\right \} is a basis for U, it is a linearly independent set and the relation of linear dependence above means we must conclude that {a}_{1} = {a}_{2} = {a}_{3} = 0. By a similar process, we would conclude that {b}_{1} = {b}_{2} = {b}_{3} = 0. But this is a contradiction since {a}_{1},\kern 1.95872pt {a}_{2},\kern 1.95872pt {a}_{3},\kern 1.95872pt {b}_{1},\kern 1.95872pt {b}_{2},\kern 1.95872pt {b}_{3} were chosen so that some were nonzero. So w\mathrel{≠}0.
How does this generalize? All we really needed was the original relation of linear dependence that resulted because we had “too many” vectors in W. A more general statement would be: Suppose that W is a vector space with dimension n, U is a subspace of dimension p and V is a subspace of dimension q. If p + q > n, then U ∩ V contains a nonzero vector.
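A small worked instance of this generalized statement may help; the subspaces below are illustrative choices, not taken from the text. In {ℂ}^{5} we have n = 5, and we take U and V to be spanned by standard unit vectors:

```latex
% n = 5, p = q = 3, and p + q = 6 > 5, so the generalized statement applies
U = \left\langle\left\{\,e_1,\ e_2,\ e_3\,\right\}\right\rangle
\qquad
V = \left\langle\left\{\,e_3,\ e_4,\ e_5\,\right\}\right\rangle
```

Here the nontrivial relation of linear dependence is simply {e}_{3} − {e}_{3} = 0, and the common vector it produces, w = {e}_{3}, is a nonzero vector lying in U ∩ V , just as the statement predicts.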
Definition VS
The most fundamental object in linear algebra is a vector space. Or else the most
fundamental object is a vector, and a vector space is important because it is a
collection of vectors. Either way, Definition VS is critical. All of our remaining
theorems that assume we are working with a vector space can trace their lineage
back to this definition.
Theorem TSS
Checking all ten properties of a vector space (Definition VS) can get tedious. But if you have a subset of a known vector space, then Theorem TSS considerably shortens the verification. Also, proofs of closure (the last two conditions in Theorem TSS) are a good way to practice a common style of proof.
Theorem VRRB
The proof of uniqueness in this theorem is a very typical employment of the
hypothesis of linear independence. But that’s not why we mention it here. This
theorem is critical to our first section about representations, Section VR, via
Definition VR.
Theorem CNMB
Having just defined a basis (Definition B) we discover that the columns of a nonsingular matrix
form a basis of {ℂ}^{m}.
Much of what we know about nonsingular matrices is either contained in this
statement, or much more evident because of it.
Theorem SSLD
This theorem is a key juncture in our development of linear algebra. You have
probably already realized how useful Theorem G is. All four parts of Theorem G
have proofs that finish with an application of Theorem SSLD.
Theorem RPNC
This simple relationship between the rank, nullity and number of columns of a
matrix might be surprising. But in simplicity comes power, as this theorem can be
very useful. It will be generalized in the very last theorem of Chapter LT,
Theorem RPNDD.
Theorem G
A whimsical title, but the intent is to make sure you don’t miss this one. Much of
the interaction between bases, dimension, linear independence and spanning is
captured in this theorem.
Theorem RMRT
This one is a real surprise. Why should a matrix, and its transpose, both
row-reduce to the same number of nonzero rows?