\(\newcommand{\orderof}[1]{\sim #1} \newcommand{\Z}{\mathbb{Z}} \newcommand{\reals}{\mathbb{R}} \newcommand{\real}[1]{\mathbb{R}^{#1}} \newcommand{\complexes}{\mathbb{C}} \newcommand{\complex}[1]{\mathbb{C}^{#1}} \newcommand{\conjugate}[1]{\overline{#1}} \newcommand{\modulus}[1]{\left\lvert#1\right\rvert} \newcommand{\zerovector}{\vect{0}} \newcommand{\zeromatrix}{\mathcal{O}} \newcommand{\innerproduct}[2]{\left\langle#1,\,#2\right\rangle} \newcommand{\norm}[1]{\left\lVert#1\right\rVert} \newcommand{\dimension}[1]{\dim\left(#1\right)} \newcommand{\nullity}[1]{n\left(#1\right)} \newcommand{\rank}[1]{r\left(#1\right)} \newcommand{\ds}{\oplus} \newcommand{\detname}[1]{\det\left(#1\right)} \newcommand{\detbars}[1]{\left\lvert#1\right\rvert} \newcommand{\trace}[1]{t\left(#1\right)} \newcommand{\sr}[1]{#1^{1/2}} \newcommand{\spn}[1]{\left\langle#1\right\rangle} \newcommand{\nsp}[1]{\mathcal{N}\!\left(#1\right)} \newcommand{\csp}[1]{\mathcal{C}\!\left(#1\right)} \newcommand{\rsp}[1]{\mathcal{R}\!\left(#1\right)} \newcommand{\lns}[1]{\mathcal{L}\!\left(#1\right)} \newcommand{\per}[1]{#1^\perp} \newcommand{\augmented}[2]{\left\lbrack\left.#1\,\right\rvert\,#2\right\rbrack} \newcommand{\linearsystem}[2]{\mathcal{LS}\!\left(#1,\,#2\right)} \newcommand{\homosystem}[1]{\linearsystem{#1}{\zerovector}} \newcommand{\rowopswap}[2]{R_{#1}\leftrightarrow R_{#2}} \newcommand{\rowopmult}[2]{#1R_{#2}} \newcommand{\rowopadd}[3]{#1R_{#2}+R_{#3}} \newcommand{\leading}[1]{\boxed{#1}} \newcommand{\rref}{\xrightarrow{\text{RREF}}} \newcommand{\elemswap}[2]{E_{#1,#2}} \newcommand{\elemmult}[2]{E_{#2}\left(#1\right)} \newcommand{\elemadd}[3]{E_{#2,#3}\left(#1\right)} \newcommand{\scalarlist}[2]{{#1}_{1},\,{#1}_{2},\,{#1}_{3},\,\ldots,\,{#1}_{#2}} \newcommand{\vect}[1]{\mathbf{#1}} \newcommand{\colvector}[1]{\begin{bmatrix}#1\end{bmatrix}} \newcommand{\vectorcomponents}[2]{\colvector{#1_{1}\\#1_{2}\\#1_{3}\\\vdots\\#1_{#2}}} 
\newcommand{\vectorlist}[2]{\vect{#1}_{1},\,\vect{#1}_{2},\,\vect{#1}_{3},\,\ldots,\,\vect{#1}_{#2}} \newcommand{\vectorentry}[2]{\left\lbrack#1\right\rbrack_{#2}} \newcommand{\matrixentry}[2]{\left\lbrack#1\right\rbrack_{#2}} \newcommand{\lincombo}[3]{#1_{1}\vect{#2}_{1}+#1_{2}\vect{#2}_{2}+#1_{3}\vect{#2}_{3}+\cdots +#1_{#3}\vect{#2}_{#3}} \newcommand{\matrixcolumns}[2]{\left\lbrack\vect{#1}_{1}|\vect{#1}_{2}|\vect{#1}_{3}|\ldots|\vect{#1}_{#2}\right\rbrack} \newcommand{\transpose}[1]{#1^{t}} \newcommand{\inverse}[1]{#1^{-1}} \newcommand{\submatrix}[3]{#1\left(#2|#3\right)} \newcommand{\adj}[1]{\transpose{\left(\conjugate{#1}\right)}} \newcommand{\adjoint}[1]{#1^\ast} \newcommand{\set}[1]{\left\{#1\right\}} \newcommand{\setparts}[2]{\left\lbrace#1\,\middle|\,#2\right\rbrace} \newcommand{\card}[1]{\left\lvert#1\right\rvert} \newcommand{\setcomplement}[1]{\overline{#1}} \newcommand{\charpoly}[2]{p_{#1}\left(#2\right)} \newcommand{\eigenspace}[2]{\mathcal{E}_{#1}\left(#2\right)} \newcommand{\eigensystem}[3]{\lambda&=#2&\eigenspace{#1}{#2}&=\spn{\set{#3}}} \newcommand{\geneigenspace}[2]{\mathcal{G}_{#1}\left(#2\right)} \newcommand{\algmult}[2]{\alpha_{#1}\left(#2\right)} \newcommand{\geomult}[2]{\gamma_{#1}\left(#2\right)} \newcommand{\indx}[2]{\iota_{#1}\left(#2\right)} \newcommand{\ltdefn}[3]{#1\colon #2\rightarrow#3} \newcommand{\lteval}[2]{#1\left(#2\right)} \newcommand{\ltinverse}[1]{#1^{-1}} \newcommand{\restrict}[2]{{#1}|_{#2}} \newcommand{\preimage}[2]{#1^{-1}\left(#2\right)} \newcommand{\rng}[1]{\mathcal{R}\!\left(#1\right)} \newcommand{\krn}[1]{\mathcal{K}\!\left(#1\right)} \newcommand{\compose}[2]{{#1}\circ{#2}} \newcommand{\vslt}[2]{\mathcal{LT}\left(#1,\,#2\right)} \newcommand{\isomorphic}{\cong} \newcommand{\similar}[2]{\inverse{#2}#1#2} \newcommand{\vectrepname}[1]{\rho_{#1}} \newcommand{\vectrep}[2]{\lteval{\vectrepname{#1}}{#2}} \newcommand{\vectrepinvname}[1]{\ltinverse{\vectrepname{#1}}} 
\newcommand{\vectrepinv}[2]{\lteval{\ltinverse{\vectrepname{#1}}}{#2}} \newcommand{\matrixrep}[3]{M^{#1}_{#2,#3}} \newcommand{\matrixrepcolumns}[4]{\left\lbrack \left.\vectrep{#2}{\lteval{#1}{\vect{#3}_{1}}}\right|\left.\vectrep{#2}{\lteval{#1}{\vect{#3}_{2}}}\right|\left.\vectrep{#2}{\lteval{#1}{\vect{#3}_{3}}}\right|\ldots\left|\vectrep{#2}{\lteval{#1}{\vect{#3}_{#4}}}\right.\right\rbrack} \newcommand{\cbm}[2]{C_{#1,#2}} \newcommand{\jordan}[2]{J_{#1}\left(#2\right)} \newcommand{\hadamard}[2]{#1\circ #2} \newcommand{\hadamardidentity}[1]{J_{#1}} \newcommand{\hadamardinverse}[1]{\widehat{#1}} \newcommand{\lt}{<} \newcommand{\gt}{>} \newcommand{\amp}{&} \)

Section LC Linear Combinations

In Section VO we defined vector addition and scalar multiplication. These two operations combine nicely to give us a construction known as a linear combination, a construct that we will work with throughout this course.

Subsection LC Linear Combinations

Definition LCCV Linear Combination of Column Vectors

Given \(n\) vectors \(\vectorlist{u}{n}\) from \(\complex{m}\) and \(n\) scalars \(\alpha_1,\,\alpha_2,\,\alpha_3,\,\ldots,\,\alpha_n\text{,}\) their linear combination is the vector \begin{equation*} \lincombo{\alpha}{u}{n} \end{equation*}

So this definition takes an equal number of scalars and vectors, combines them using our two new operations (scalar multiplication and vector addition) and creates a single brand-new vector, of the same size as the original vectors. When a definition or theorem employs a linear combination, think about the nature of the objects that go into its creation (lists of scalars and vectors), and the type of object that results (a single vector). Computationally, a linear combination is pretty easy.
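Since a linear combination is so easy to compute, it is worth seeing the arithmetic once in code. The following is a minimal sketch in Python, with hypothetical vectors and scalars chosen only to illustrate Definition LCCV; the function name and data are our own, not part of the text.

```python
def linear_combination(scalars, vectors):
    """Return a1*u1 + a2*u2 + ... + an*un as a list of components.

    Each vector is a list of numbers; all vectors have the same size,
    and there are as many scalars as vectors, as Definition LCCV requires.
    """
    m = len(vectors[0])
    return [sum(a * v[i] for a, v in zip(scalars, vectors)) for i in range(m)]

# Hypothetical data: three vectors from C^4 (with real entries) and three scalars.
u1 = [1, 0, 2, -1]
u2 = [3, 1, 0, 2]
u3 = [-1, 2, 1, 0]
w = linear_combination([2, -1, 3], [u1, u2, u3])
# w is 2*u1 + (-1)*u2 + 3*u3, a single new vector of size 4.
```

Note that the inputs are a list of scalars and a list of vectors, while the output is a single vector of the same size as the inputs, exactly the shape of objects described after the definition.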

Our next two examples are key ones, and a discussion about decompositions is timely. Have a look at Proof Technique DC before studying the next two examples.

In any discussion of Archetype A or Archetype B, we should be sure to contrast the one with the other.

There is a lot going on in the last two examples. Come back to them in a while and make some connections with the intervening material. For now, we will summarize and explain some of this behavior with a theorem.

Proof

In other words, this theorem tells us that solutions to systems of equations are linear combinations of the \(n\) column vectors of the coefficient matrix (\(\vect{A}_j\)) which yield the constant vector \(\vect{b}\text{.}\) Or said another way, a solution to a system of equations \(\linearsystem{A}{\vect{b}}\) is an answer to the question “How can I form the vector \(\vect{b}\) as a linear combination of the columns of \(A\text{?}\)” Look through the Archetypes that are systems of equations and examine a few of the advertised solutions. In each case use the solution to form a linear combination of the columns of the coefficient matrix and verify that the result equals the constant vector (see Exercise LC.C21).
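The verification suggested above is a one-line computation. Here is a minimal sketch in Python using a small hypothetical system (the matrix, constants, and solution are invented for illustration, not taken from an Archetype): the entries of a solution are the scalars in a linear combination of the columns of the coefficient matrix that equals the constant vector.

```python
# Hypothetical 2x2 system to illustrate Theorem SLSLC.
A = [[1, 2],
     [3, 4]]          # coefficient matrix, stored by rows
b = [5, 11]           # vector of constants
x = [1, 2]            # a solution: 1*1 + 2*2 = 5 and 1*3 + 2*4 = 11

# Extract the columns of A, then form the linear combination x1*A_1 + x2*A_2.
cols = [[row[j] for row in A] for j in range(len(A[0]))]
lc = [sum(x[j] * cols[j][i] for j in range(len(x))) for i in range(len(b))]

assert lc == b        # the linear combination of columns equals the constants
```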

Subsection VFSS Vector Form of Solution Sets

We have written solutions to systems of equations as column vectors. For example Archetype B has the solution \(x_1 = -3,\,x_2 = 5,\,x_3 = 2\) which we write as \begin{equation*} \vect{x}=\colvector{x_1\\x_2\\x_3}=\colvector{-3\\5\\2}\text{.} \end{equation*} Now, we will use column vectors and linear combinations to express all of the solutions to a linear system of equations in a compact and understandable way. First, here are two examples that will motivate our next theorem. This is a valuable technique, almost the equal of row-reducing a matrix, so be sure you get comfortable with it over the course of this section.

This is such an important and fundamental technique, we will do another example.

Did you think a few weeks ago that you could so quickly and easily list all the solutions to a linear system of 5 equations in 7 variables?

We will now formalize the last two (important) examples as a theorem. The statement of this theorem is a bit scary, and the proof is scarier. For now, be sure to convince yourself, by working through the examples and exercises, that the statement just describes the procedure of the two immediately previous examples.

Proof

Note that both halves of the proof of Theorem VFSLS indicate that \(\alpha_i=\vectorentry{\vect{x}}{f_i}\text{.}\) In other words, the arbitrary scalars, \(\alpha_i\text{,}\) in the description of the set \(S\) actually have more meaning — they are the values of the free variables \(\vectorentry{\vect{x}}{f_i}\text{,}\) \(1\leq i\leq n-r\text{.}\) So we will often exploit this observation in our descriptions of solution sets.

Theorem VFSLS formalizes what happened in the three steps of Example VFSAD. The theorem will be useful in proving other theorems, and it is useful since it tells us an exact procedure for simply describing an infinite solution set. We could program a computer to implement it, once we have the augmented matrix row-reduced and have checked that the system is consistent. By Knuth's definition, this completes our conversion of linear equation solving from art into science. Notice that it even applies (but is overkill) in the case of a unique solution. However, as a practical matter, I prefer the three-step process of Example VFSAD when I need to describe an infinite solution set. So let us practice some more, but with a bigger example.
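As a sketch of such a computer program, here is a short Python function (with a hypothetical input matrix, invented for illustration) that reads off the vector form of the solution set from a row-reduced, consistent augmented matrix, following the recipe of Theorem VFSLS: pivot entries of the particular solution come from the column of constants, and each free variable contributes one direction vector.

```python
def vector_form(B):
    """Given the RREF of a consistent augmented matrix (as a list of rows,
    with any zero rows removed), return (c, us): a particular solution c
    and a list of direction vectors us, one per free variable, so that the
    solutions are exactly c plus linear combinations of the vectors in us.
    """
    r = len(B)
    n = len(B[0]) - 1                      # number of variables
    pivots = [next(j for j, e in enumerate(row) if e != 0) for row in B]
    free = [j for j in range(n) if j not in pivots]

    c = [0] * n                            # particular solution: free vars 0
    for i, d in enumerate(pivots):
        c[d] = B[i][n]                     # pivot entries from the constants

    us = []
    for f in free:
        u = [0] * n
        u[f] = 1                           # this free variable set to 1
        for i, d in enumerate(pivots):
            u[d] = -B[i][f]                # pivot entries negate RREF entries
        us.append(u)
    return c, us

# Hypothetical RREF: x1 + 2*x2 = 3 and x3 = 4, so x2 is free.
c, us = vector_form([[1, 2, 0, 3],
                     [0, 0, 1, 4]])
```

For this input the solutions are c plus any scalar multiple of the single direction vector, matching the fact that setting the free variable x2 to a gives x1 = 3 - 2a, x3 = 4.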

This technique is so important that we will do one more example. However, an important distinction will be that this system is homogeneous.

Subsection PSHS Particular Solutions, Homogeneous Solutions

The next theorem tells us that in order to find all of the solutions to a linear system of equations, it is sufficient to find just one solution, and then find all of the solutions to the homogeneous system with the same coefficient matrix. This explains part of our interest in the null space, the set of all solutions to a homogeneous system.

Proof

After proving Theorem NMUS we commented (insufficiently) on the negation of one half of the theorem. Nonsingular coefficient matrices lead to unique solutions for every choice of the vector of constants. What does this say about singular matrices? A singular matrix \(A\) has a nontrivial null space (Theorem NMTNS). For a given vector of constants, \(\vect{b}\text{,}\) the system \(\linearsystem{A}{\vect{b}}\) could be inconsistent, meaning there are no solutions. But if there is at least one solution (\(\vect{w}\)), then Theorem PSPHS tells us there will be infinitely many solutions because of the role of the infinite null space for a singular matrix. So a system of equations with a singular coefficient matrix never has a unique solution. Notice that this is the contrapositive of the statement in Exercise NM.T31. With a singular coefficient matrix, either there are no solutions, or infinitely many solutions, depending on the choice of the vector of constants (\(\vect{b}\)).
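The mechanism behind this discussion is easy to check numerically. Below is a minimal sketch in Python with a hypothetical singular coefficient matrix (invented for illustration): adding any element of the null space to one known solution produces another solution, which is why a nontrivial null space forces infinitely many solutions whenever there is at least one.

```python
# Hypothetical singular matrix: the second row is twice the first.
A = [[1, 2],
     [2, 4]]
b = [3, 6]
w = [3, 0]                    # a particular solution of LS(A, b)
z = [-2, 1]                   # a nontrivial null space element: A*z = 0

def matvec(M, v):
    """Matrix-vector product, with M stored by rows."""
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

assert matvec(A, w) == b               # w solves the system
assert matvec(A, z) == [0, 0]          # z solves the homogeneous system
y = [wi + zi for wi, zi in zip(w, z)]  # w + z ...
assert matvec(A, y) == b               # ... is another solution
```

Scaling z by any scalar before adding it to w works just as well, which is the source of the infinitely many solutions.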

The ideas of this subsection will appear again in Chapter LT when we discuss pre-images of linear transformations (Definition PI).

Sage PSHS Particular Solutions, Homogeneous Solutions

Subsection Reading Questions

1

Earlier, a reading question asked you to solve the system of equations \begin{align*} 2x_1 + 3x_2 - x_3&= 0\\ x_1 + 2x_2 + x_3&= 3\\ x_1 + 3x_2 + 3x_3&= 7\text{.} \end{align*} Use a linear combination to rewrite this system of equations as a vector equality.

2

Find a linear combination of the vectors \begin{equation*} S=\set{\colvector{1\\3\\-1},\,\colvector{2\\0\\4},\,\colvector{-1\\3\\-5}} \end{equation*} that equals the vector \(\colvector{1\\-9\\11}\text{.}\)

3

The matrix below is the augmented matrix of a system of equations, row-reduced to reduced row-echelon form. Write the vector form of the solutions to the system. \begin{equation*} \begin{bmatrix} \leading{1}&3&0&6&0&9\\ 0&0&\leading{1}&-2&0&-8\\ 0&0&0&0&\leading{1}&3 \end{bmatrix} \end{equation*}

Subsection Exercises

C21

Consider each archetype that is a system of equations. For individual solutions listed (both for the original system and the corresponding homogeneous system) express the vector of constants as a linear combination of the columns of the coefficient matrix, as guaranteed by Theorem SLSLC. Verify this equality by computing the linear combination. For systems with no solutions, recognize that it is then impossible to write the vector of constants as a linear combination of the columns of the coefficient matrix. Note too, for homogeneous systems, that the solutions give rise to linear combinations that equal the zero vector.

Archetype A, Archetype B, Archetype C, Archetype D, Archetype E, Archetype F, Archetype G, Archetype H, Archetype I, Archetype J

Solution
C22

Consider each archetype that is a system of equations. Write elements of the solution set in vector form, as guaranteed by Theorem VFSLS.

Archetype A, Archetype B, Archetype C, Archetype D, Archetype E, Archetype F, Archetype G, Archetype H, Archetype I, Archetype J

Solution
C40

Find the vector form of the solutions to the system of equations below. \begin{align*} 2x_1-4x_2+3x_3+x_5&=6\\ x_1-2x_2-2x_3+14x_4-4x_5&=15\\ x_1-2x_2+x_3+2x_4+x_5&=-1\\ -2x_1+4x_2-12x_4+x_5&=-7 \end{align*}

Solution
C41

Find the vector form of the solutions to the system of equations below. \begin{align*} -2 x_1 -1 x_2 -8 x_3+ 8 x_4+ 4 x_5 -9 x_6 -1 x_7 -1 x_8 -18 x_9 &= 3\\ 3 x_1 -2 x_2+ 5 x_3+ 2 x_4 -2 x_5 -5 x_6+ 1 x_7+ 2 x_8+ 15 x_9 &= 10\\ 4 x_1 -2 x_2+ 8 x_3+ 2 x_5 -14 x_6 -2 x_8+ 2 x_9 &= 36\\ -1 x_1+ 2 x_2+ 1 x_3 -6 x_4+ 7 x_6 -1 x_7 -3 x_9 &= -8\\ 3 x_1+ 2 x_2+ 13 x_3 -14 x_4 -1 x_5+ 5 x_6 -1 x_8+ 12 x_9 &= 15\\ -2 x_1+ 2 x_2 -2 x_3 -4 x_4+ 1 x_5+ 6 x_6 -2 x_7 -2 x_8 -15 x_9 &= -7 \end{align*}

Solution
M10

Example TLC asks if the vector \begin{equation*} \vect{w}=\colvector{13\\15\\5\\-17\\2\\25} \end{equation*} can be written as a linear combination of the four vectors \begin{align*} \vect{u}_1&=\colvector{2\\4\\-3\\1\\2\\9}& \vect{u}_2&=\colvector{6\\3\\0\\-2\\1\\4}& \vect{u}_3&=\colvector{-5\\2\\1\\1\\-3\\0}& \vect{u}_4&=\colvector{3\\2\\-5\\7\\1\\3}\text{.} \end{align*} Can it? Can any vector in \(\complex{6}\) be written as a linear combination of the four vectors \(\vect{u}_1,\,\vect{u}_2,\,\vect{u}_3,\,\vect{u}_4\text{?}\)

Solution
M11

At the end of Example VFS, the vector \(\vect{w}\) is claimed to be a solution to the linear system under discussion. Verify that \(\vect{w}\) really is a solution. Then determine the four scalars that express \(\vect{w}\) as a linear combination of \(\vect{c}\text{,}\) \(\vect{u}_1\text{,}\) \(\vect{u}_2\text{,}\) \(\vect{u}_3\text{.}\)

Solution