From A First Course in Linear Algebra

Version 1.08

© 2004.

Licensed under the GNU Free Documentation License.

http://linear.ups.edu/

We will now be more careful about analyzing the reduced row-echelon form
derived from the augmented matrix of a system of linear equations. In
particular, we will see how to systematically handle the situation when we
have infinitely many solutions to a system, and we will prove that every
system of linear equations has either zero, one or infinitely many solutions.
With these tools, we will be able to solve any system by a well-described
method.

The computer scientist Donald Knuth said, “Science is what we understand well enough to explain to a computer. Art is everything else.” In this section we’ll move solving systems of equations out of the realm of art and into the realm of science. We begin with a definition.

Definition CS

Consistent System

A system of linear equations is consistent if it has at least
one solution. Otherwise, the system is called inconsistent.
$\triangle$

We will want to first recognize when a system is inconsistent or consistent, and in the case of consistent systems we will be able to further refine the types of solutions possible. We will do this by analyzing the reduced row-echelon form of a matrix, using the value of $r$, and the sets of column indices, $D$ and $F$, first defined back in Definition RREF.

Use of the notation for the elements of $D$ and $F$ can be a bit confusing, since we have subscripted variables that are in turn equal to integers used to index the matrix. However, many questions about matrices and systems of equations can be answered once we know $r$, $D$ and $F$. The choice of the letters $D$ and $F$ refers to our upcoming definition of dependent and free variables (Definition IDV). An example will help us begin to get comfortable with this aspect of reduced row-echelon form.

Example RREFN

Reduced row-echelon form notation

For the $5\times 9$
matrix

in reduced row-echelon form we have

$$\begin{array}{l}
r = 4\\
d_1 = 1, \quad d_2 = 3, \quad d_3 = 4, \quad d_4 = 7\\
f_1 = 2, \quad f_2 = 5, \quad f_3 = 6, \quad f_4 = 8, \quad f_5 = 9.
\end{array}$$Notice that the sets $D=\{d_1, d_2, d_3, d_4\}=\{1, 3, 4, 7\}$ and $F=\{f_1, f_2, f_3, f_4, f_5\}=\{2, 5, 6, 8, 9\}$ have nothing in common and together account for all of the columns of $B$ (we say these two sets partition the set of column indices). $\boxtimes$

The number $r$ is the single most important piece of information we can get from the reduced row-echelon form of a matrix. It is defined as the number of non-zero rows, but since each non-zero row has a leading 1, it is also the number of leading 1’s present. For each leading 1, we have a pivot column, so $r$ is also the number of pivot columns. Repeating ourselves, $r$ is the number of leading 1’s, the number of non-zero rows and the number of pivot columns. Across different situations, each of these interpretations of the meaning of $r$ will be useful.
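To make these interpretations of $r$ concrete, here is a short computational sketch (in Python, with function and variable names of our own choosing) that recovers $r$, $D$ and $F$ from a matrix already in reduced row-echelon form. We use the $4\times 8$ row-reduced matrix of Example ISSI as input.

```python
def rref_indices(B):
    """Given a matrix B already in reduced row-echelon form, return
    (r, D, F): the number of nonzero rows, the pivot-column indices,
    and the remaining column indices (1-based, matching the text)."""
    ncols = len(B[0])
    D = []
    for row in B:
        for j, entry in enumerate(row):
            if entry != 0:          # the first nonzero entry of a row is its leading 1
                D.append(j + 1)     # record the 1-based pivot column index
                break
    F = [j for j in range(1, ncols + 1) if j not in D]
    return len(D), D, F

# the row-reduced augmented matrix of Example ISSI
B = [[1, 4, 0, 0, 2, 1, -3, 4],
     [0, 0, 1, 0, 1, -3, 5, 2],
     [0, 0, 0, 1, 2, -6, 6, 1],
     [0, 0, 0, 0, 0, 0, 0, 0]]
r, D, F = rref_indices(B)
# r = 3, D = [1, 3, 4], F = [2, 5, 6, 7, 8]
```

Notice that $r$ falls out three ways at once, just as in the paragraph above: it is the length of $D$, the number of rows that contribute a leading 1, and the number of nonzero rows.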

Before proving some theorems about the possibilities for solution sets to systems of equations, let’s analyze one particular system with an infinite solution set very carefully as an example. We’ll use this technique frequently, and shortly we’ll refine it slightly.

Archetypes I and J are both fairly large for doing computations by hand (though not impossibly large). Their properties are very similar, so we will frequently analyze the situation in Archetype I, and leave you the joy of analyzing Archetype J yourself. So work through Archetype I with the text, by hand and/or with a computer, and then tackle Archetype J yourself (and check your results with those listed). Notice too that the archetypes describing systems of equations each list the values of $r$, $D$ and $F$. Here we go…

Example ISSI

Describing infinite solution sets, Archetype I

Archetype I is the system of $m=4$
equations in $n=7$
variables.

This system has a $4\times 8$ augmented matrix that is row-equivalent to the following matrix (check this!), and which is in reduced row-echelon form (the existence of this matrix is guaranteed by Theorem REMEF),

$$\left[\begin{array}{cccccccc}
1 & 4 & 0 & 0 & 2 & 1 & -3 & 4\\
0 & 0 & 1 & 0 & 1 & -3 & 5 & 2\\
0 & 0 & 0 & 1 & 2 & -6 & 6 & 1\\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0
\end{array}\right].$$So we find that $r=3$ and

$$D=\{d_1, d_2, d_3\}=\{1, 3, 4\} \qquad F=\{f_1, f_2, f_3, f_4, f_5\}=\{2, 5, 6, 7, 8\}.$$Let $i$ denote one of the $r=3$ non-zero rows. Then we can solve the equation represented by this row for the variable $x_{d_i}$ and write it as a linear function of the variables $x_{f_1}, x_{f_2}, x_{f_3}, x_{f_4}$ (notice that $f_5=8$ does not reference a variable). We’ll do this now, but you can already see how the subscripts upon subscripts take some getting used to.

$$\begin{array}{ll}
(i=1) & x_{d_1} = x_1 = 4 - 4x_2 - 2x_5 - x_6 + 3x_7\\
(i=2) & x_{d_2} = x_3 = 2 - x_5 + 3x_6 - 5x_7\\
(i=3) & x_{d_3} = x_4 = 1 - 2x_5 + 6x_6 - 6x_7
\end{array}$$Each element of the set $F=\{f_1, f_2, f_3, f_4, f_5\}=\{2, 5, 6, 7, 8\}$ is the index of a variable, except for $f_5=8$. We refer to $x_{f_1}=x_2$, $x_{f_2}=x_5$, $x_{f_3}=x_6$ and $x_{f_4}=x_7$ as “free” (or “independent”) variables, since they may assume any combination of values whatsoever, and from each such choice we can build a solution to the system by solving the individual equations above for the values of the other (“dependent”) variables.

Each element of the set $D=\left\{{d}_{1},\phantom{\rule{0em}{0ex}}{d}_{2},\phantom{\rule{0em}{0ex}}{d}_{3}\right\}=\left\{1,\phantom{\rule{0em}{0ex}}3,\phantom{\rule{0em}{0ex}}4\right\}$ is the index of a variable. We refer to the variables ${x}_{{d}_{1}}={x}_{1}$, ${x}_{{d}_{2}}={x}_{3}$ and ${x}_{{d}_{3}}={x}_{4}$ as “dependent” variables since they depend on the independent variables. More precisely, for each possible choice of values for the independent variables we get exactly one set of values for the dependent variables that combine to form a solution of the system.

To express the solutions as a set, we write

$$\left\{\left[\begin{array}{c}
4 - 4x_2 - 2x_5 - x_6 + 3x_7\\
x_2\\
2 - x_5 + 3x_6 - 5x_7\\
1 - 2x_5 + 6x_6 - 6x_7\\
x_5\\
x_6\\
x_7
\end{array}\right] \,\middle|\, x_2, x_5, x_6, x_7 \in \mathbb{C} \right\}$$

The condition that $x_2, x_5, x_6, x_7 \in \mathbb{C}$ is how we specify that the variables $x_2, x_5, x_6, x_7$ are “free” to assume any possible values.

This systematic approach to solving a system of equations will allow us to create a precise description of the solution set for any consistent system once we have found the reduced row-echelon form of the augmented matrix. It will work just as well when the set of free variables is empty and we get just a single solution. And we could program a computer to do it! Now have a whack at Archetype J (Exercise TSS.T10), mimicking the discussion in this example. We’ll still be here when you get back. $\boxtimes$
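As a check on Example ISSI, the recipe above can be carried out mechanically: pick any values at all for the free variables, compute the dependent variables from the three equations, and verify the result against the row-reduced augmented matrix. Here is a sketch in Python; the function names are our own.

```python
def solution(x2, x5, x6, x7):
    """One solution of Archetype I for each choice of the free
    variables, via the three equations of Example ISSI."""
    x1 = 4 - 4*x2 - 2*x5 - x6 + 3*x7
    x3 = 2 - x5 + 3*x6 - 5*x7
    x4 = 1 - 2*x5 + 6*x6 - 6*x7
    return [x1, x2, x3, x4, x5, x6, x7]

def check(B, x):
    """Does x satisfy every equation encoded by the augmented
    matrix B (last column holds the constants)?"""
    return all(sum(a*v for a, v in zip(row[:-1], x)) == row[-1]
               for row in B)

# the row-reduced augmented matrix of Example ISSI
B = [[1, 4, 0, 0, 2, 1, -3, 4],
     [0, 0, 1, 0, 1, -3, 5, 2],
     [0, 0, 0, 1, 2, -6, 6, 1],
     [0, 0, 0, 0, 0, 0, 0, 0]]
print(check(B, solution(0, 0, 0, 0)))    # True
print(check(B, solution(1, -2, 3, 7)))   # True
```

Any four values for $x_2, x_5, x_6, x_7$ give a solution, which is exactly what it means for the solution set to be infinite.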

Using the reduced row-echelon form of the augmented matrix of a system of equations to determine the nature of the solution set of the system is a key idea. So let’s look at one more example like the last one. But first a definition, and then the example. We mix our metaphors a bit when we call variables free versus dependent. Maybe we should call dependent variables “enslaved”?

Definition IDV

Independent and Dependent Variables

Suppose $A$
is the augmented matrix of a consistent system of linear equations and
$B$
is a row-equivalent matrix in reduced row-echelon form. Suppose
$j$ is the index of
a column of $B$
that contains the leading 1 for some row (i.e. column
$j$ is a
pivot column), and this column is not the last column. Then the variable
${x}_{j}$ is
dependent. A variable that is not dependent is called independent or free.
$\triangle$

Example FDV

Free and dependent variables

Consider the system of five equations in five variables,

whose augmented matrix row-reduces to

$$\left[\begin{array}{cccccc}
1 & -1 & 0 & 0 & 3 & 6\\
0 & 0 & 1 & 0 & -2 & 1\\
0 & 0 & 0 & 1 & 4 & 9\\
0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0
\end{array}\right]$$

There are leading 1’s in columns 1, 3 and 4, so $D=\left\{1,\phantom{\rule{0em}{0ex}}3,\phantom{\rule{0em}{0ex}}4\right\}$. From this we know that the variables ${x}_{1}$, ${x}_{3}$ and ${x}_{4}$ will be dependent variables, and each of the $r=3$ nonzero rows of the row-reduced matrix will yield an expression for one of these three variables. The set $F$ is all the remaining column indices, $F=\left\{2,\phantom{\rule{0em}{0ex}}5,\phantom{\rule{0em}{0ex}}6\right\}$. Since $6\in F$ we know there is no leading 1 in the final column, so the system is consistent by Theorem RCLS. The remaining indices in $F$ will correspond to free variables, so ${x}_{2}$ and ${x}_{5}$ are our free variables. The resulting three equations that describe our solution set are then,

$$\begin{array}{ll}
(x_{d_1} = x_1) & x_1 = 6 + x_2 - 3x_5\\
(x_{d_2} = x_3) & x_3 = 1 + 2x_5\\
(x_{d_3} = x_4) & x_4 = 9 - 4x_5
\end{array}$$Make sure you understand where these three equations came from, and notice how the location of the leading 1’s determined the variables on the left-hand side of each equation. We can compactly describe the solution set as,

$$S=\left\{\left[\begin{array}{c}
6 + x_2 - 3x_5\\
x_2\\
1 + 2x_5\\
9 - 4x_5\\
x_5
\end{array}\right] \,\middle|\, x_2, x_5 \in \mathbb{C} \right\}$$

Notice how we express the freedom for $x_2$ and $x_5$: $x_2, x_5 \in \mathbb{C}$. $\boxtimes$

Sets are an important part of algebra, and we’ve seen a few already. Being comfortable with sets is important for understanding and writing proofs. If you haven’t already, pay a visit now to Section SET.

We can now use the values of $m$, $n$, $r$, and the independent and dependent variables to categorize the solution sets for linear systems through a sequence of theorems. Through the following sequence of proofs, you will want to consult three proof techniques. See Technique E. See Technique N. See Technique CP.

First we have a theorem that explores the distinction between consistent and inconsistent linear systems.

Theorem RCLS

Recognizing Consistency of a Linear System

Suppose $A$
is the augmented matrix of a system of linear equations with
$m$ equations in
$n$ variables.
Suppose also that $B$
is a row-equivalent matrix in reduced row-echelon form with
$r$ rows that are
not zero rows. Then the system of equations is inconsistent if and only if the leading 1 of
row $r$ is located
in column $n+1$
of $B$.
$\square $

Proof ($\Leftarrow $) The first half of the proof begins with the assumption that the leading 1 of row $r$ is located in column $n+1$ of $B$. Then row $r$ of $B$ begins with $n$ consecutive zeros, finishing with the leading 1. This is a representation of the equation $0=1$, which is false. Since this equation is false for any collection of values we might choose for the variables, there are no solutions for the system of equations, and it is inconsistent.

($\Rightarrow $) For the second half of the proof, we wish to show that if we assume the system is inconsistent, then the final leading 1 is located in the last column. But instead of proving this directly, we’ll form the logically equivalent statement that is the contrapositive, and prove that instead (see Technique CP). Turning the implication around, and negating each portion, we arrive at the logically equivalent statement: If the leading 1 of row $r$ is not in column $n+1$, then the system of equations is consistent.

If the leading 1 for row $r$ is located somewhere in columns 1 through $n$, then every preceding row’s leading 1 is also located in columns 1 through $n$. In other words, since the last leading 1 is not in the last column, no leading 1 for any row is in the last column, due to the echelon layout of the leading 1’s. Let $b_{i,n+1}$, $1\le i\le r$, denote the entries of the last column of $B$ for the first $r$ rows. Employ our notation for columns of the reduced row-echelon form of a matrix (see Notation RREFA) to $B$ and set $x_{f_i}=0$, $1\le i\le n-r$, and then set $x_{d_i}=b_{i,n+1}$, $1\le i\le r$. In other words, set the dependent variables equal to the corresponding values in the final column and set all the free variables to zero. These values for the variables make the equations represented by the first $r$ rows all true (convince yourself of this). Rows $r+1$ through $m$ (if any) are all zero rows, hence represent the equation $0=0$ and are also all true. We have now identified one solution to the system, so we can say the system is consistent. $\blacksquare$

The beauty of this theorem being an equivalence is that we can unequivocally test to see if a system is consistent or inconsistent by looking at just a single entry of the reduced row-echelon form matrix. We could program a computer to do it!
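As a sketch of such a program (the function name and conventions are our own), the test of Theorem RCLS amounts to scanning the row-reduced augmented matrix for a leading 1 in the final column:

```python
def consistent(B, n):
    """Theorem RCLS as a test: with n variables, the system is
    inconsistent exactly when some row of the row-reduced augmented
    matrix B has its leading 1 in column n+1 (0-based index n)."""
    for row in B:
        nz = [j for j, e in enumerate(row) if e != 0]
        if nz and nz[0] == n:    # leading 1 sits in the last column
            return False
    return True

# the row-reduced augmented matrix of Example ISSI (n = 7): consistent
B_I = [[1, 4, 0, 0, 2, 1, -3, 4],
       [0, 0, 1, 0, 1, -3, 5, 2],
       [0, 0, 0, 1, 2, -6, 6, 1],
       [0, 0, 0, 0, 0, 0, 0, 0]]
print(consistent(B_I, 7))                                 # True

# a 2-variable system whose RREF ends with the row [0 0 1], i.e. 0 = 1
print(consistent([[1, 0, 0], [0, 1, 0], [0, 0, 1]], 2))   # False
```

Only the position of a single leading 1 matters, which is exactly the point of the theorem.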

Notice that for a consistent system the row-reduced augmented matrix has $n+1\in F$, so the largest element of $F$ does not refer to a variable. Also, for an inconsistent system, $n+1\in D$, and it then does not make much sense to discuss whether or not variables are free or dependent since there is no solution. With the characterization of Theorem RCLS, we can explore the relationships between $r$ and $n$ in light of the consistency of a system of equations. First, a situation where we can quickly conclude the inconsistency of a system.

Theorem ISRN

Inconsistent Systems, $r$
and $n$

Suppose $A$
is the augmented matrix of a system of linear equations with
$m$ equations in
$n$ variables.
Suppose also that $B$
is a row-equivalent matrix in reduced row-echelon form with
$r$ rows that are not
completely zeros. If $r=n+1$,
then the system of equations is inconsistent.
$\square $

Proof If $r=n+1$, then $D=\{1, 2, 3, \dots, n, n+1\}$ and every column of $B$ contains a leading 1 and is a pivot column. In particular, the entry of column $n+1$ for row $r=n+1$ is a leading 1. Theorem RCLS then says that the system is inconsistent. $\blacksquare$

Do not confuse Theorem ISRN with its converse! Go check out Technique CV right now.

Next, if a system is consistent, we can distinguish between a unique solution and infinitely many solutions, and furthermore, we recognize that these are the only two possibilities.

Theorem CSRN

Consistent Systems, $r$
and $n$

Suppose $A$
is the augmented matrix of a consistent system of linear equations with
$m$ equations in
$n$ variables.
Suppose also that $B$
is a row-equivalent matrix in reduced row-echelon form with
$r$ rows that are not
zero rows. Then $r\le n$. If
$r=n$, then the system has a
unique solution, and if $r<n$,
then the system has infinitely many solutions.
$\square $

Proof This theorem contains three implications that we must establish. Notice first that $B$ has $n+1$ columns, so there can be at most $n+1$ pivot columns, i.e. $r\le n+1$. If $r=n+1$, then Theorem ISRN tells us that the system is inconsistent, contrary to our hypothesis. We are left with $r\le n$.

When $r=n$, we find $n-r=0$ free variables (i.e. $F=\left\{n+1\right\}$) and any solution must equal the unique solution given by the first $n$ entries of column $n+1$ of $B$.

When $r<n$, we have $n-r>0$ free variables, corresponding to columns of $B$ without a leading 1, excepting the final column, which also does not contain a leading 1 by Theorem RCLS. By varying the values of the free variables suitably, we can demonstrate infinitely many solutions. $\blacksquare$

The next theorem simply states a conclusion from the final paragraph of the previous proof, allowing us to state explicitly the number of free variables for a consistent system.

Theorem FVCS

Free Variables for Consistent Systems

Suppose $A$
is the augmented matrix of a consistent system of linear equations with
$m$ equations in
$n$ variables.
Suppose also that $B$
is a row-equivalent matrix in reduced row-echelon form with
$r$ rows
that are not completely zeros. Then the solution set can be described with
$n-r$ free
variables. $\square $

Proof See the proof of Theorem CSRN. $\blacksquare$

Example CFV

Counting free variables

For each archetype that is a system of equations, the values of
$n$ and
$r$ are
listed. Many also contain a few sample solutions. We can use this information
profitably, as illustrated by four examples.

- Archetype A has $n=3$ and $r=2$. It can be seen to be consistent by the sample solutions given. Its solution set then has $n-r=1$ free variables, and therefore will be infinite.
- Archetype B has $n=3$ and $r=3$. It can be seen to be consistent by the single sample solution given. Its solution set can then be described with $n-r=0$ free variables, and therefore will have just the single solution.
- Archetype H has $n=2$ and $r=3$. In this case, $r=n+1$, so Theorem ISRN says the system is inconsistent. We should not try to apply Theorem FVCS to count free variables, since the theorem only applies to consistent systems. (What would happen if you did?)
- Archetype E has $n=3$ and $r=3$. However, by looking at the reduced row-echelon form of the augmented matrix, we find a leading 1 in row 3, column 4. By Theorem RCLS we recognize the system is then inconsistent. (Why doesn’t this example contradict Theorem ISRN?)
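The case analysis of Example CFV can be captured in a few lines. This is a sketch of our own devising, combining Theorems ISRN, CSRN and FVCS; the function name and its return strings are invented for illustration.

```python
def describe(n, r, consistent):
    """Classify a solution set from n (number of variables), r (nonzero
    rows in the RREF of the augmented matrix) and whether the system is
    known to be consistent."""
    if r == n + 1 or not consistent:
        return "inconsistent"                    # Theorem ISRN / Definition CS
    if r == n:
        return "unique solution"                 # Theorem CSRN
    return f"infinitely many solutions ({n - r} free)"   # Theorem FVCS

# three of the archetypes of Example CFV
print(describe(3, 2, True))    # Archetype A: infinitely many solutions (1 free)
print(describe(3, 3, True))    # Archetype B: unique solution
print(describe(2, 3, False))   # Archetype H: inconsistent
```

Archetype E is omitted because its inconsistency is detected by inspecting the matrix itself (Theorem RCLS), not from $n$ and $r$ alone.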

We have accomplished a lot so far, but our main goal has been the following theorem, which is now very simple to prove. The proof is so simple that we ought to call it a corollary, but the result is important enough that it deserves to be called a theorem. (See Technique LC.) Notice that this theorem was presaged first by Example TTS and further foreshadowed by other examples.

Theorem PSSLS

Possible Solution Sets for Linear Systems

A system of linear equations has no solutions, a unique solution or infinitely many
solutions. $\square $

Proof By definition, a system is either inconsistent or consistent. The first case describes systems with no solutions. For consistent systems, we have the remaining two possibilities as guaranteed by, and described in, Theorem CSRN. $\blacksquare$

We have one more theorem to round out our set of tools for determining solution sets to systems of linear equations.

Theorem CMVEI

Consistent, More Variables than Equations, Infinite solutions

Suppose a consistent system of linear equations has
$m$ equations in
$n$ variables. If
$n>m$, then the system has
infinitely many solutions. $\square $

Proof Suppose that the augmented matrix of the system of equations is row-equivalent to $B$, a matrix in reduced row-echelon form with $r$ nonzero rows. Because $B$ has $m$ rows in total, the number of nonzero rows can be no larger, so $r\le m$. Follow this with the hypothesis that $n>m$ and we find that the system has a solution set described by at least one free variable because

$$n-r\ge n-m>0.$$

A consistent system with free variables will have an infinite number of solutions, as given by Theorem CSRN. $\blacksquare$

Notice that to use this theorem we need only know that the system is consistent, together with the values of $m$ and $n$. We do not necessarily have to compute a row-equivalent reduced row-echelon form matrix, even though we discussed such a matrix in the proof. This is the substance of the following example.

Example OSGMD

One solution gives many, Archetype D

Archetype D is the system of $m=3$
equations in $n=4$
variables,

and the solution $x_1=0$, $x_2=1$, $x_3=2$, $x_4=1$ can be checked easily by substitution. Having been handed this solution, we know the system is consistent. This, together with $n>m$, allows us to apply Theorem CMVEI and conclude that the system has infinitely many solutions. $\boxtimes$

These theorems give us the procedures and implications that allow us to completely solve any system of linear equations. The main computational tool is using row operations to convert an augmented matrix into reduced row-echelon form. Here’s a broad outline of how we would instruct a computer to solve a system of linear equations.

- Represent a system of linear equations by an augmented matrix (an array is the appropriate data structure in most computer languages).
- Convert the matrix to a row-equivalent matrix in reduced row-echelon form using the procedure from the proof of Theorem REMEF.
- Determine $r$ and locate the leading 1 of row $r$. If it is in column $n+1$, output the statement that the system is inconsistent and halt.
- With the leading 1 of row $r$
not in column $n+1$,
there are two possibilities:
- $r=n$ and the solution is unique. It can be read off directly from the entries in rows 1 through $n$ of column $n+1$.
- $r<n$ and there are infinitely many solutions. If only a single solution is needed, set all the free variables to zero and read off the dependent variable values from column $n+1$, as in the second half of the proof of Theorem RCLS. If the entire solution set is required, figure out some nice compact way to describe it, since your finite computer is not big enough to hold all the solutions (we’ll have such a way soon).
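Here is one way the outline might be realized in Python (a sketch under our own naming conventions, not a production routine). Using exact rational arithmetic from the standard-library `fractions` module sidesteps round-off entirely, at the cost of speed.

```python
from fractions import Fraction

def rref(M):
    """Row-reduce a matrix to reduced row-echelon form with exact
    rational arithmetic (a sketch of the procedure of Theorem REMEF)."""
    A = [[Fraction(e) for e in row] for row in M]
    m, cols = len(A), len(A[0])
    lead = 0
    for i in range(m):
        # find the next column with a nonzero entry at or below row i
        pivot = None
        while lead < cols:
            pivot = next((k for k in range(i, m) if A[k][lead] != 0), None)
            if pivot is not None:
                break
            lead += 1
        if pivot is None:
            break                                  # no more pivots: done
        A[i], A[pivot] = A[pivot], A[i]            # swap the pivot row up
        A[i] = [e / A[i][lead] for e in A[i]]      # scale to get a leading 1
        for k in range(m):                         # zero out the rest of the column
            if k != i and A[k][lead] != 0:
                A[k] = [a - A[k][lead] * b for a, b in zip(A[k], A[i])]
        lead += 1
    return A

def solve(M, n):
    """Follow the outline: row-reduce the augmented matrix M of a system
    in n variables; return None if inconsistent, otherwise one solution
    (all free variables set to zero, as in the proof of Theorem RCLS)."""
    B = rref(M)
    dependents = {}
    for row in B:
        nz = [j for j, e in enumerate(row) if e != 0]
        if nz:
            if nz[0] == n:        # leading 1 in column n+1
                return None       # Theorem RCLS: inconsistent
            dependents[nz[0]] = row[-1]
    return [dependents.get(j, Fraction(0)) for j in range(n)]

# a small consistent system with a free variable: x1 + 2 x2 = 3 (twice over)
print(solve([[1, 2, 3], [2, 4, 6]], 2))   # [Fraction(3, 1), Fraction(0, 1)]
# an inconsistent system: x1 + x2 = 2 and x1 + x2 = 3
print(solve([[1, 1, 2], [1, 1, 3]], 2))   # None
```

Because `Fraction` division is exact, entries that should be zero really are zero; a floating-point version would need the error-control strategies mentioned next.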

The above makes it all sound a bit simpler than it really is. In practice, row operations employ division (usually to get a leading entry of a row to convert to a leading 1) and that will introduce round-off errors. Entries that should be zero sometimes end up being very, very small nonzero entries, or small entries lead to overflow errors when used as divisors. A variety of strategies can be employed to minimize these sorts of errors, and this is one of the main topics in the important subject known as numerical linear algebra.

Solving a linear system is such a fundamental problem in so many areas of mathematics, and its applications, that any computational device worth using for linear algebra will have a built-in routine to do just that. See: Computation LS.MMA. In this section we’ve gained a foolproof procedure for solving any system of linear equations, no matter how many equations or variables. We also have a handful of theorems that allow us to determine partial information about a solution set without actually constructing the whole set itself. Donald Knuth would be proud.

- How do we recognize when a system of linear equations is inconsistent?
- Suppose we have converted the augmented matrix of a system of equations into reduced row-echelon form. How do we then identify the dependent and independent (free) variables?
- What are the possible solution sets for a system of linear equations?

C10 In the spirit of Example ISSI, describe the infinite solution set for
Archetype J.

Contributed by Robert Beezer

M45 Prove that Archetype J has infinitely many solutions without row-reducing
the augmented matrix.

Contributed by Robert Beezer Solution [148]

For Exercises M51–M57 say as much as possible about each system’s
solution set. Be sure to make it clear which theorems you are using to reach your
conclusions.

M51 A consistent system of 8 equations in 6 variables.

Contributed by Robert Beezer Solution [148]

M52 A consistent system of 6 equations in 8 variables.

Contributed by Robert Beezer Solution [148]

M53 A system of 5 equations in 9 variables.

Contributed by Robert Beezer Solution [148]

M54 A system with 12 equations in 35 variables.

Contributed by Robert Beezer Solution [148]

M56 A system with 6 equations in 12 variables.

Contributed by Robert Beezer Solution [148]

M57 A system with 8 equations and 6 variables. The reduced row-echelon form
of the augmented matrix of the system has 7 pivot columns.

Contributed by Robert Beezer Solution [149]

M60 Without doing any computations, and without examining any solutions,
say as much as possible about the form of the solution set for each archetype that
is a system of equations.

Archetype A

Archetype B

Archetype C

Archetype D

Archetype E

Archetype F

Archetype G

Archetype H

Archetype I

Archetype J

Contributed by Robert Beezer

T10 An inconsistent system may have
$r>n$. If we
try (incorrectly!) to apply Theorem FVCS to such a system, how many free
variables would we discover?

Contributed by Robert Beezer Solution [149]

T40 Suppose that the coefficient matrix of a system of linear equations has two
columns that are identical. Prove that the system has infinitely many solutions.

Contributed by Robert Beezer Solution [149]

M45 Contributed by Robert Beezer Statement [146]

Demonstrate that the system is consistent by verifying any
one of the four sample solutions provided. Then because
$n=9>6=m$,
Theorem CMVEI gives us the conclusion that the system has infinitely many
solutions.

Notice that we only know the system will have at least $9-6=3$ free variables, but it could very well have more. We do not know that $r=6$, only that $r\le 6$.

M51 Contributed by Robert Beezer Statement [146]

Consistent means there is at least one solution (Definition CS). It will have either
a unique solution or infinitely many solutions (Theorem PSSLS).

M52 Contributed by Robert Beezer Statement [146]

With 6 rows in the augmented matrix, the row-reduced version will have
$r\le 6$.
Since the system is consistent, apply Theorem CSRN to see that
$n-r\ge 2$
implies infinitely many solutions.

M53 Contributed by Robert Beezer Statement [146]

The system could be inconsistent. If it is consistent, then because it has more
variables than equations Theorem CMVEI implies that there would be infinitely
many solutions. So, of all the possibilities in Theorem PSSLS, only the case of a
unique solution can be ruled out.

M54 Contributed by Robert Beezer Statement [146]

The system could be inconsistent. If it is consistent, then Theorem CMVEI tells
us the solution set will be infinite. So we can be certain that there is not a unique
solution.

M56 Contributed by Robert Beezer Statement [146]

The system could be inconsistent. If it is consistent, and since
$12>6$, then
Theorem CMVEI says we will have infinitely many solutions. So there are two
possibilities. Equivalently, Theorem PSSLS allows us to state that a unique
solution is an impossibility.

M57 Contributed by Robert Beezer Statement [146]

7 pivot columns implies that there are
$r=7$
nonzero rows (so row 8 is all zeros in the reduced row-echelon form). Then
$n+1=6+1=7=r$ and
Theorem ISRN allows us to conclude that the system is inconsistent.

T10 Contributed by Robert Beezer Statement [147]

Theorem FVCS will indicate a negative number of free variables, but we can say even more.
If $r>n$, then the only
possibility is that $r=n+1$,
and then we compute $n-r=n-\left(n+1\right)=-1$
free variables.

T40 Contributed by Robert Beezer Statement [147]

Since the system is consistent, we know there is either a unique solution, or
infinitely many solutions (Theorem PSSLS). If we perform row operations
(Definition RO) on the augmented matrix of the system, the two equal columns
of the coefficient matrix will suffer the same fate, and remain equal in the final
reduced row-echelon form. Suppose both of these columns are pivot columns
(Definition RREF). Then there would be a single row containing the two leading
1’s of the two pivot columns, a violation of reduced row-echelon form
(Definition RREF). So at least one of these columns is not a pivot column, and
the column index indicates a free variable in the description of the solution set
(Definition IDV). With a free variable, we arrive at an infinite solution set
(Theorem FVCS).