Section LC Linear Combinations
In Section VO we defined vector addition and scalar multiplication. These two operations combine nicely to give us a construction known as a linear combination, one that we will work with throughout this course.
Subsection LC Linear Combinations
Definition LCCV Linear Combination of Column Vectors
Given $n$ vectors $\vectorlist{u}{n}$ from $\complex{m}$ and $n$ scalars $\alpha_1,\,\alpha_2,\,\alpha_3,\,\ldots,\,\alpha_n$, their linear combination is the vector \begin{equation*} \lincombo{\alpha}{u}{n} \end{equation*}
So this definition takes an equal number of scalars and vectors, combines them using our two new operations (scalar multiplication and vector addition) and creates a single brand-new vector of the same size as the original vectors. When a definition or theorem employs a linear combination, think about the nature of the objects that go into its creation (lists of scalars and vectors), and the type of object that results (a single vector). Computationally, a linear combination is pretty easy.
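Before the first full example, here is a quick warm-up computation (the scalars and vectors below are chosen arbitrarily for illustration). With $\alpha_1=2$ and $\alpha_2=-3$,
\begin{equation*}
2\colvector{1\\0\\2}+(-3)\colvector{0\\1\\-1}
=\colvector{2\\0\\4}+\colvector{0\\-3\\3}
=\colvector{2\\-3\\7}
\end{equation*}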
Example TLC Two linear combinations in $\complex{6}$
Sage LC Linear Combinations
Our next two examples are key ones, and a discussion about decompositions is timely. Have a look at Proof Technique DC before studying the next two examples.
Example ABLC Archetype B as a linear combination
Whenever we discuss Archetype A or Archetype B, we should be sure to contrast it with the other.
Example AALC Archetype A as a linear combination
There is a lot going on in the last two examples. Come back to them in a while and make some connections with the intervening material. For now, we will summarize and explain some of this behavior with a theorem.
Theorem SLSLC Solutions to Linear Systems are Linear Combinations
Denote the columns of the $m\times n$ matrix $A$ as the vectors $\vectorlist{A}{n}$. Then $\vect{x}\in\complex{n}$ is a solution to the linear system of equations $\linearsystem{A}{\vect{b}}$ if and only if $\vect{b}$ equals the linear combination of the columns of $A$ formed with the entries of $\vect{x}$, \begin{equation*} \vectorentry{\vect{x}}{1}\vect{A}_1+ \vectorentry{\vect{x}}{2}\vect{A}_2+ \vectorentry{\vect{x}}{3}\vect{A}_3+ \cdots+ \vectorentry{\vect{x}}{n}\vect{A}_n = \vect{b} \end{equation*}
In other words, this theorem tells us that the entries of a solution to a system of equations are precisely the scalars in a linear combination of the $n$ column vectors of the coefficient matrix ($\vect{A}_j$) which yields the constant vector $\vect{b}$. Or said another way, a solution to a system of equations $\linearsystem{A}{\vect{b}}$ is an answer to the question “How can I form the vector $\vect{b}$ as a linear combination of the columns of $A$?” Look through the archetypes that are systems of equations and examine a few of the advertised solutions. In each case use the solution to form a linear combination of the columns of the coefficient matrix and verify that the result equals the constant vector (see Exercise LC.C21).
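As a quick illustration in Sage, we can form the linear combination of Theorem SLSLC directly and compare it with $\vect{b}$ (a minimal sketch; the matrix and vector of constants below are invented for the occasion, not taken from an archetype):
\begin{verbatim}
A = matrix(QQ, [[1, 0, 2], [1, 1, 1], [0, 1, 2]])
b = vector(QQ, [5, 4, 3])
x = A.solve_right(b)              # one solution to LS(A, b)
cols = A.columns()                # the columns A_1, A_2, A_3
lc = x[0]*cols[0] + x[1]*cols[1] + x[2]*cols[2]
lc == b                           # True, just as Theorem SLSLC predicts
\end{verbatim}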
Sage SLC Solutions and Linear Combinations
Subsection VFSS Vector Form of Solution Sets
We have written solutions to systems of equations as column vectors. For example Archetype B has the solution $x_1 = -3,\,x_2 = 5,\,x_3 = 2$ which we write as \begin{equation*} \vect{x}=\colvector{x_1\\x_2\\x_3}=\colvector{-3\\5\\2} \end{equation*}
Now, we will use column vectors and linear combinations to express all of the solutions to a linear system of equations in a compact and understandable way. First, here are two examples that will motivate our next theorem. This is a valuable technique, almost the equal of row-reducing a matrix, so be sure you get comfortable with it over the course of this section.
Example VFSAD Vector form of solutions for Archetype D
This is such an important and fundamental technique, we will do another example.
Example VFS Vector form of solutions
Did you think a few weeks ago that you could so quickly and easily list all the solutions to a linear system of 5 equations in 7 variables?
We will now formalize the last two (important) examples as a theorem. The statement of this theorem is a bit scary, and the proof is scarier. For now, be sure to convince yourself, by working through the examples and exercises, that the statement just describes the procedure of the two immediately previous examples.
Theorem VFSLS Vector Form of Solutions to Linear Systems
Suppose that $\augmented{A}{\vect{b}}$ is the augmented matrix for a consistent linear system $\linearsystem{A}{\vect{b}}$ of $m$ equations in $n$ variables. Let $B$ be a row-equivalent $m\times (n+1)$ matrix in reduced row-echelon form. Suppose that $B$ has $r$ pivot columns, with indices $D=\set{d_1,\,d_2,\,d_3,\,\ldots,\,d_r}$, while the remaining $n+1-r$ columns are non-pivot columns, with indices $F=\set{f_1,\,f_2,\,f_3,\,\ldots,\,f_{n-r},\,n+1}$ (since the system is consistent, column $n+1$ of $B$ is not a pivot column). Define vectors $\vect{c}$, $\vect{u}_j$, $1\leq j\leq n-r$ of size $n$ by \begin{align*} \vectorentry{\vect{c}}{i}&= \begin{cases} 0&\text{if $i\in F$}\\ \matrixentry{B}{k,n+1}&\text{if $i\in D$, $i=d_k$} \end{cases}\\ \vectorentry{\vect{u}_j}{i}&= \begin{cases} 1&\text{if $i\in F$, $i=f_j$}\\ 0&\text{if $i\in F$, $i\neq f_j$}\\ -\matrixentry{B}{k,f_j}&\text{if $i\in D$, $i=d_k$} \end{cases}. \end{align*}
Then the set of solutions to the system of equations $\linearsystem{A}{\vect{b}}$ is \begin{equation*} S=\setparts{ \vect{c}+\alpha_1\vect{u}_1+\alpha_2\vect{u}_2+\alpha_3\vect{u}_3+\cdots+\alpha_{n-r}\vect{u}_{n-r} }{ \alpha_1,\,\alpha_2,\,\alpha_3,\,\ldots,\,\alpha_{n-r}\in\complex{\null} } \end{equation*}
Note that both halves of the proof of Theorem VFSLS indicate that $\alpha_i=\vectorentry{\vect{x}}{f_i}$. In other words, the arbitrary scalars, $\alpha_i$, in the description of the set $S$ actually have more meaning — they are the values of the free variables $\vectorentry{\vect{x}}{f_i}$, $1\leq i\leq n-r$. So we will often exploit this observation in our descriptions of solution sets.
Theorem VFSLS formalizes what happened in the three steps of Example VFSAD. The theorem will be useful in proving other theorems, and it is useful since it tells us an exact procedure for simply describing an infinite solution set. We could program a computer to implement it, once we have the augmented matrix row-reduced and have checked that the system is consistent. By Knuth's definition, this completes our conversion of linear equation solving from art into science. Notice that it even applies (but is overkill) in the case of a unique solution. However, as a practical matter, I prefer the three-step process of Example VFSAD when I need to describe an infinite solution set. So let us practice some more, but with a bigger example.
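To make the claim about programming a computer concrete, here is one possible Sage implementation (a sketch only: the function name vector_form is our own invention, and Sage numbers rows and columns from 0 while Theorem VFSLS numbers them from 1):
\begin{verbatim}
def vector_form(A, b):
    # Build the vector c and the vectors u_1, ..., u_{n-r} of
    # Theorem VFSLS for the consistent system LS(A, b).
    n = A.ncols()
    B = A.augment(b).rref()       # row-equivalent matrix in RREF
    D = [d for d in B.pivots() if d < n]       # pivot column indices
    F = [f for f in range(n) if f not in D]    # non-pivot (free) indices
    c = vector(QQ, n)
    for k, d in enumerate(D):
        c[d] = B[k, n]            # entry of the last column of B in row k
    U = []
    for f in F:
        u = vector(QQ, n)
        u[f] = 1                  # free variable f gets the value 1
        for k, d in enumerate(D):
            u[d] = -B[k, f]       # negated entry of column f of B
        U.append(u)
    return c, U
\end{verbatim}
With c, U = vector_form(A, b) in hand, every choice of scalars produces a solution, which can be verified with a matrix-vector product such as (when there is at least one free variable) A*(c + 3*U[0]) == b.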
Example VFSAI Vector form of solutions for Archetype I
This technique is so important that we will do one more example. However, an important distinction will be that this system is homogeneous.
Example VFSAL Vector form of solutions for Archetype L
Sage SS2 Solving Systems, Part 2
Subsection PSHS Particular Solutions, Homogeneous Solutions
The next theorem tells us that in order to find all of the solutions to a linear system of equations, it is sufficient to find just one solution, and then find all of the solutions to the corresponding homogeneous system. This explains part of our interest in the null space, the set of all solutions to a homogeneous system.
Theorem PSPHS Particular Solution Plus Homogeneous Solutions
Suppose that $\vect{w}$ is one solution to the linear system of equations $\linearsystem{A}{\vect{b}}$. Then $\vect{y}$ is a solution to $\linearsystem{A}{\vect{b}}$ if and only if $\vect{y}=\vect{w}+\vect{z}$ for some vector $\vect{z}\in\nsp{A}$.
After proving Theorem NMUS we commented (insufficiently) on the negation of one half of that theorem. Nonsingular coefficient matrices lead to unique solutions for every choice of the vector of constants. What does this say about singular matrices? A singular matrix $A$ has a nontrivial null space (Theorem NMTNS). For a given vector of constants, $\vect{b}$, the system $\linearsystem{A}{\vect{b}}$ could be inconsistent, meaning there are no solutions. But if there is at least one solution ($\vect{w}$), then Theorem PSPHS tells us there will be infinitely many solutions, since adding any of the infinitely many vectors in the null space of $A$ to $\vect{w}$ produces another solution. So a system of equations with a singular coefficient matrix never has a unique solution. Notice that this is the contrapositive of the statement in Exercise NM.T31. With a singular coefficient matrix, either there are no solutions or infinitely many solutions, depending on the choice of the vector of constants ($\vect{b}$).
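A small Sage experiment makes this behavior concrete (the singular matrix and vector of constants below are invented for illustration):
\begin{verbatim}
A = matrix(QQ, [[1, 2], [2, 4]])   # singular: second row is twice the first
b = vector(QQ, [3, 6])             # chosen so the system is consistent
w = A.solve_right(b)               # one particular solution
z = A.right_kernel().basis()[0]    # a nonzero homogeneous solution
all(A*(w + a*z) == b for a in range(5))   # True: each w + a*z solves LS(A, b)
\end{verbatim}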
Example PSHS Particular solutions, homogeneous solutions, Archetype D
The ideas of this subsection will appear again in Chapter LT when we discuss pre-images of linear transformations (Definition PI).