\(\newcommand{\orderof}[1]{\sim #1} \newcommand{\Z}{\mathbb{Z}} \newcommand{\reals}{\mathbb{R}} \newcommand{\real}[1]{\mathbb{R}^{#1}} \newcommand{\complexes}{\mathbb{C}} \newcommand{\complex}[1]{\mathbb{C}^{#1}} \newcommand{\conjugate}[1]{\overline{#1}} \newcommand{\modulus}[1]{\left\lvert#1\right\rvert} \newcommand{\zerovector}{\vect{0}} \newcommand{\zeromatrix}{\mathcal{O}} \newcommand{\innerproduct}[2]{\left\langle#1,\,#2\right\rangle} \newcommand{\norm}[1]{\left\lVert#1\right\rVert} \newcommand{\dimension}[1]{\dim\left(#1\right)} \newcommand{\nullity}[1]{n\left(#1\right)} \newcommand{\rank}[1]{r\left(#1\right)} \newcommand{\ds}{\oplus} \newcommand{\detname}[1]{\det\left(#1\right)} \newcommand{\detbars}[1]{\left\lvert#1\right\rvert} \newcommand{\trace}[1]{t\left(#1\right)} \newcommand{\sr}[1]{#1^{1/2}} \newcommand{\spn}[1]{\left\langle#1\right\rangle} \newcommand{\nsp}[1]{\mathcal{N}\!\left(#1\right)} \newcommand{\csp}[1]{\mathcal{C}\!\left(#1\right)} \newcommand{\rsp}[1]{\mathcal{R}\!\left(#1\right)} \newcommand{\lns}[1]{\mathcal{L}\!\left(#1\right)} \newcommand{\per}[1]{#1^\perp} \newcommand{\augmented}[2]{\left\lbrack\left.#1\,\right\rvert\,#2\right\rbrack} \newcommand{\linearsystem}[2]{\mathcal{LS}\!\left(#1,\,#2\right)} \newcommand{\homosystem}[1]{\linearsystem{#1}{\zerovector}} \newcommand{\rowopswap}[2]{R_{#1}\leftrightarrow R_{#2}} \newcommand{\rowopmult}[2]{#1R_{#2}} \newcommand{\rowopadd}[3]{#1R_{#2}+R_{#3}} \newcommand{\leading}[1]{\boxed{#1}} \newcommand{\rref}{\xrightarrow{\text{RREF}}} \newcommand{\elemswap}[2]{E_{#1,#2}} \newcommand{\elemmult}[2]{E_{#2}\left(#1\right)} \newcommand{\elemadd}[3]{E_{#2,#3}\left(#1\right)} \newcommand{\scalarlist}[2]{{#1}_{1},\,{#1}_{2},\,{#1}_{3},\,\ldots,\,{#1}_{#2}} \newcommand{\vect}[1]{\mathbf{#1}} \newcommand{\colvector}[1]{\begin{bmatrix}#1\end{bmatrix}} \newcommand{\vectorcomponents}[2]{\colvector{#1_{1}\\#1_{2}\\#1_{3}\\\vdots\\#1_{#2}}} \newcommand{\vectorlist}[2]{\vect{#1}_{1},\,\vect{#1}_{2},\,\vect{#1}_{3},\,\ldots,\,\vect{#1}_{#2}} \newcommand{\vectorentry}[2]{\left\lbrack#1\right\rbrack_{#2}} \newcommand{\matrixentry}[2]{\left\lbrack#1\right\rbrack_{#2}} \newcommand{\lincombo}[3]{#1_{1}\vect{#2}_{1}+#1_{2}\vect{#2}_{2}+#1_{3}\vect{#2}_{3}+\cdots +#1_{#3}\vect{#2}_{#3}} \newcommand{\matrixcolumns}[2]{\left\lbrack\vect{#1}_{1}|\vect{#1}_{2}|\vect{#1}_{3}|\ldots|\vect{#1}_{#2}\right\rbrack} \newcommand{\transpose}[1]{#1^{t}} \newcommand{\inverse}[1]{#1^{-1}} \newcommand{\submatrix}[3]{#1\left(#2|#3\right)} \newcommand{\adj}[1]{\transpose{\left(\conjugate{#1}\right)}} \newcommand{\adjoint}[1]{#1^\ast} \newcommand{\set}[1]{\left\{#1\right\}} \newcommand{\setparts}[2]{\left\lbrace#1\,\middle|\,#2\right\rbrace} \newcommand{\card}[1]{\left\lvert#1\right\rvert} \newcommand{\setcomplement}[1]{\overline{#1}} \newcommand{\charpoly}[2]{p_{#1}\left(#2\right)} \newcommand{\eigenspace}[2]{\mathcal{E}_{#1}\left(#2\right)} \newcommand{\eigensystem}[3]{\lambda&=#2&\eigenspace{#1}{#2}&=\spn{\set{#3}}} \newcommand{\geneigenspace}[2]{\mathcal{G}_{#1}\left(#2\right)} \newcommand{\algmult}[2]{\alpha_{#1}\left(#2\right)} \newcommand{\geomult}[2]{\gamma_{#1}\left(#2\right)} \newcommand{\indx}[2]{\iota_{#1}\left(#2\right)} \newcommand{\ltdefn}[3]{#1\colon #2\rightarrow#3} \newcommand{\lteval}[2]{#1\left(#2\right)} \newcommand{\ltinverse}[1]{#1^{-1}} \newcommand{\restrict}[2]{{#1}|_{#2}} \newcommand{\preimage}[2]{#1^{-1}\left(#2\right)} \newcommand{\rng}[1]{\mathcal{R}\!\left(#1\right)} \newcommand{\krn}[1]{\mathcal{K}\!\left(#1\right)} 
\newcommand{\compose}[2]{{#1}\circ{#2}} \newcommand{\vslt}[2]{\mathcal{LT}\left(#1,\,#2\right)} \newcommand{\isomorphic}{\cong} \newcommand{\similar}[2]{\inverse{#2}#1#2} \newcommand{\vectrepname}[1]{\rho_{#1}} \newcommand{\vectrep}[2]{\lteval{\vectrepname{#1}}{#2}} \newcommand{\vectrepinvname}[1]{\ltinverse{\vectrepname{#1}}} \newcommand{\vectrepinv}[2]{\lteval{\ltinverse{\vectrepname{#1}}}{#2}} \newcommand{\matrixrep}[3]{M^{#1}_{#2,#3}} \newcommand{\matrixrepcolumns}[4]{\left\lbrack \left.\vectrep{#2}{\lteval{#1}{\vect{#3}_{1}}}\right|\left.\vectrep{#2}{\lteval{#1}{\vect{#3}_{2}}}\right|\left.\vectrep{#2}{\lteval{#1}{\vect{#3}_{3}}}\right|\ldots\left|\vectrep{#2}{\lteval{#1}{\vect{#3}_{#4}}}\right.\right\rbrack} \newcommand{\cbm}[2]{C_{#1,#2}} \newcommand{\jordan}[2]{J_{#1}\left(#2\right)} \newcommand{\hadamard}[2]{#1\circ #2} \newcommand{\hadamardidentity}[1]{J_{#1}} \newcommand{\hadamardinverse}[1]{\widehat{#1}} \newcommand{\lt}{<} \newcommand{\gt}{>} \newcommand{\amp}{&} \)

Section RREF: Reduced Row-Echelon Form

After solving a few systems of equations, you will recognize that it does not matter so much what we call our variables as which numbers act as their coefficients. A system in the variables \(x_1,\,x_2,\,x_3\) would behave the same if we changed the names of the variables to \(a,\,b,\,c\) and kept all the constants the same and in the same places. In this section, we will isolate the key bits of information about a system of equations into something called a matrix, and then use this matrix to systematically solve the equations. Along the way we will obtain one of our most important and useful computational tools.

Subsection MVNSE: Matrix and Vector Notation for Systems of Equations

Definition M: Matrix

An \(m\times n\) matrix is a rectangular layout of numbers from \(\complexes\) having \(m\) rows and \(n\) columns. We will use upper-case Latin letters from the start of the alphabet (\(A,\,B,\,C,\dotsc\)) to denote matrices and squared-off brackets to delimit the layout. Many use large parentheses instead of brackets — the distinction is not important. Rows of a matrix will be referenced starting at the top and working down (i.e. row 1 is at the top) and columns will be referenced starting from the left (i.e. column 1 is at the left). For a matrix \(A\text{,}\) the notation \(\matrixentry{A}{ij}\) will refer to the complex number in row \(i\) and column \(j\) of \(A\text{.}\)

Be careful with this notation for individual entries, since it is easy to think that \(\matrixentry{A}{ij}\) refers to the whole matrix. It does not. It is just a number, but is a convenient way to talk about the individual entries simultaneously. This notation will get a heavy workout once we get to Chapter M.
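For example (a small illustrative matrix), with
\begin{equation*} A= \begin{bmatrix} 1 & 2 & 3\\ 4 & 5 & 6 \end{bmatrix} \end{equation*}
we have \(\matrixentry{A}{12}=2\) and \(\matrixentry{A}{23}=6\text{,}\) each just a single number.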

When we do equation operations on systems of equations, the names of the variables really are not very important. Use \(x_1\text{,}\) \(x_2\text{,}\) \(x_3\text{,}\) or \(a\text{,}\) \(b\text{,}\) \(c\text{,}\) or \(x\text{,}\) \(y\text{,}\) \(z\text{;}\) it really does not matter. In this subsection we will describe some notation that will make it easier to describe linear systems, solve them, and describe their solution sets. Here is a list of definitions, laden with notation.

Definition CV: Column Vector

A column vector of size \(m\) is an ordered list of \(m\) numbers, which is written in order vertically, starting at the top and proceeding to the bottom. At times, we will refer to a column vector as simply a vector. Column vectors will be written in bold, usually with lower case Latin letters from the end of the alphabet such as \(\vect{u}\text{,}\) \(\vect{v}\text{,}\) \(\vect{w}\text{,}\) \(\vect{x}\text{,}\) \(\vect{y}\text{,}\) \(\vect{z}\text{.}\) Some books like to write vectors with arrows, such as \(\vec{u}\text{.}\) Writing by hand, some like to put arrows on top of the symbol, or a tilde underneath the symbol, as in \(\underset{\sim}{\textstyle u}\text{.}\) To refer to the entry or component of vector \(\vect{v}\) in location \(i\) of the list, we write \(\vectorentry{\vect{v}}{i}\text{.}\)

Be careful with this notation. While the symbols \(\vectorentry{\vect{v}}{i}\) might look somewhat substantial, as an object this represents just one entry of a vector, which is just a single complex number.
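For example, if \(\vect{v}=\colvector{2\\-1\\4}\text{,}\) then \(\vectorentry{\vect{v}}{2}=-1\text{,}\) a single number.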

Definition ZCV: Zero Column Vector

The zero vector of size \(m\) is the column vector of size \(m\) where each entry is the number zero, \begin{gather*} \zerovector=\colvector{0\\0\\0\\\vdots\\0} \end{gather*} or defined much more compactly, \(\vectorentry{\zerovector}{i}=0\) for \(1\leq i\leq m\text{.}\)

Definition CM: Coefficient Matrix

For a system of linear equations, \begin{align*} a_{11}x_1+a_{12}x_2+a_{13}x_3+\dots+a_{1n}x_n&=b_1\\ a_{21}x_1+a_{22}x_2+a_{23}x_3+\dots+a_{2n}x_n&=b_2\\ a_{31}x_1+a_{32}x_2+a_{33}x_3+\dots+a_{3n}x_n&=b_3\\ \vdots&\\ a_{m1}x_1+a_{m2}x_2+a_{m3}x_3+\dots+a_{mn}x_n&=b_m \end{align*} the coefficient matrix is the \(m\times n\) matrix \begin{equation*} A= \begin{bmatrix} a_{11}&a_{12}&a_{13}&\dots&a_{1n}\\ a_{21}&a_{22}&a_{23}&\dots&a_{2n}\\ a_{31}&a_{32}&a_{33}&\dots&a_{3n}\\ \vdots&\\ a_{m1}&a_{m2}&a_{m3}&\dots&a_{mn}\\ \end{bmatrix}\text{.} \end{equation*}

Definition VOC: Vector of Constants

For a system of linear equations, \begin{align*} a_{11}x_1+a_{12}x_2+a_{13}x_3+\dots+a_{1n}x_n&=b_1\\ a_{21}x_1+a_{22}x_2+a_{23}x_3+\dots+a_{2n}x_n&=b_2\\ a_{31}x_1+a_{32}x_2+a_{33}x_3+\dots+a_{3n}x_n&=b_3\\ \vdots&\\ a_{m1}x_1+a_{m2}x_2+a_{m3}x_3+\dots+a_{mn}x_n&=b_m \end{align*} the vector of constants is the column vector of size \(m\) \begin{equation*} \vect{b}= \begin{bmatrix} b_1\\ b_2\\ b_3\\ \vdots\\ b_m\\ \end{bmatrix}\text{.} \end{equation*}

Definition SOLV: Solution Vector

For a system of linear equations, \begin{align*} a_{11}x_1+a_{12}x_2+a_{13}x_3+\dots+a_{1n}x_n&=b_1\\ a_{21}x_1+a_{22}x_2+a_{23}x_3+\dots+a_{2n}x_n&=b_2\\ a_{31}x_1+a_{32}x_2+a_{33}x_3+\dots+a_{3n}x_n&=b_3\\ \vdots&\\ a_{m1}x_1+a_{m2}x_2+a_{m3}x_3+\dots+a_{mn}x_n&=b_m \end{align*} the solution vector is the column vector of size \(n\) \begin{equation*} \vect{x}= \begin{bmatrix} x_1\\ x_2\\ x_3\\ \vdots\\ x_n\\ \end{bmatrix}\text{.} \end{equation*}

The solution vector may do double-duty on occasion. It might refer to a list of variable quantities at one point, and subsequently refer to values of those variables that actually form a particular solution to that system.

Definition MRLS: Matrix Representation of a Linear System

If \(A\) is the coefficient matrix of a system of linear equations and \(\vect{b}\) is the vector of constants, then we will write \(\linearsystem{A}{\vect{b}}\) as a shorthand expression for the system of linear equations, which we will refer to as the matrix representation of the linear system.

Definition AM: Augmented Matrix

Suppose we have a system of \(m\) equations in \(n\) variables, with coefficient matrix \(A\) and vector of constants \(\vect{b}\text{.}\) Then the augmented matrix of the system of equations is the \(m\times(n+1)\) matrix whose first \(n\) columns are the columns of \(A\) and whose last column (\(n+1\)) is the column vector \(\vect{b}\text{.}\) This matrix will be written as \(\augmented{A}{\vect{b}}\text{.}\)

The augmented matrix represents all the important information in the system of equations, since the names of the variables have been ignored, and the only connection with the variables is the location of their coefficients in the matrix. It is important to realize that the augmented matrix is just that, a matrix, and not a system of equations. In particular, the augmented matrix does not have any “solutions,” though it will be useful for finding solutions to the system of equations that it is associated with. (Think about your objects, and review Proof Technique L.) However, notice that an augmented matrix always belongs to some system of equations, and vice versa, so it is tempting to try and blur the distinction between the two. Here is a quick example.
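For the system (a small one chosen here just for illustration)
\begin{align*} x_1+2x_2&=4\\ 3x_1-x_2&=5 \end{align*}
the coefficient matrix, vector of constants, and augmented matrix are
\begin{align*} A&= \begin{bmatrix} 1 & 2\\ 3 & -1 \end{bmatrix} & \vect{b}&=\colvector{4\\5} & \augmented{A}{\vect{b}}&= \begin{bmatrix} 1 & 2 & 4\\ 3 & -1 & 5 \end{bmatrix} \end{align*}
and the system itself may be written compactly as \(\linearsystem{A}{\vect{b}}\text{.}\)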

Subsection RO: Row Operations

An augmented matrix for a system of equations will save us the tedium of continually writing down the names of the variables as we solve the system. It will also release us from any dependence on the actual names of the variables. We have seen how certain operations we can perform on equations (Definition EO) will preserve their solutions (Theorem EOPSS). The next two definitions and the following theorem carry over these ideas to augmented matrices.

Definition RO: Row Operations

The following three operations will transform an \(m\times n\) matrix into a different matrix of the same size, and each is known as a row operation.

  1. Swap the locations of two rows.
  2. Multiply each entry of a single row by a nonzero quantity.
  3. Multiply each entry of one row by some quantity, and add these values to the entries in the same columns of a second row. Leave the first row the same after this operation, but replace the second row by the new values.

We will use a symbolic shorthand to describe these row operations.

  1. \(\rowopswap{i}{j}\text{:}\) Swap the location of rows \(i\) and \(j\text{.}\)
  2. \(\rowopmult{\alpha}{i}\text{:}\) Multiply row \(i\) by the nonzero scalar \(\alpha\text{.}\)
  3. \(\rowopadd{\alpha}{i}{j}\text{:}\) Multiply row \(i\) by the scalar \(\alpha\) and add to row \(j\text{.}\)
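To make these operations concrete, here is a minimal Python sketch of the three row operations acting on a matrix stored as a list of rows. The function names are our own invention for illustration, and rows are numbered from 1 to match the text.

from fractions import Fraction

def row_swap(A, i, j):
    # R_i <-> R_j: swap the locations of rows i and j
    A[i - 1], A[j - 1] = A[j - 1], A[i - 1]

def row_scale(A, alpha, i):
    # alpha R_i: multiply each entry of row i by the nonzero scalar alpha
    A[i - 1] = [alpha * e for e in A[i - 1]]

def row_add(A, alpha, i, j):
    # alpha R_i + R_j: add alpha times row i to row j, replacing row j
    A[j - 1] = [a + alpha * b for a, b in zip(A[j - 1], A[i - 1])]

A = [[1, 2, 4], [3, -1, 5]]
row_add(A, -3, 1, 2)   # A is now [[1, 2, 4], [0, -7, -7]]
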
Definition REM: Row-Equivalent Matrices

Two matrices, \(A\) and \(B\text{,}\) are row-equivalent if one can be obtained from the other by a sequence of row operations.

Notice that each of the three row operations is reversible (Exercise RREF.T10), so we do not have to be careful about the distinction between “\(A\) is row-equivalent to \(B\)” and “\(B\) is row-equivalent to \(A\text{.}\)” (Exercise RREF.T11)

The preceding definitions are designed to make the following theorem possible. It says that row-equivalent matrices represent systems of linear equations that have identical solution sets.

Theorem REMES: Row-Equivalent Matrices represent Equivalent Systems

Suppose that \(A\) and \(B\) are row-equivalent augmented matrices. Then the systems of linear equations that they represent are equivalent systems.

So at this point, our strategy is to begin with a system of equations, represent the system by an augmented matrix, perform row operations (which will preserve solutions for the system) to get a “simpler” augmented matrix, convert back to a “simpler” system of equations and then solve that system, knowing that its solutions are those of the original system. Here is a rehash of Example US as an exercise in using our new tools.
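Continuing the small system used for illustration above (not a reproduction of Example US), the strategy looks like this, using the shorthand of Definition RO:
\begin{align*} \begin{bmatrix} 1 & 2 & 4\\ 3 & -1 & 5 \end{bmatrix} &\xrightarrow{\rowopadd{-3}{1}{2}} \begin{bmatrix} 1 & 2 & 4\\ 0 & -7 & -7 \end{bmatrix} \xrightarrow{\rowopmult{-\frac{1}{7}}{2}} \begin{bmatrix} 1 & 2 & 4\\ 0 & 1 & 1 \end{bmatrix} \xrightarrow{\rowopadd{-2}{2}{1}} \begin{bmatrix} 1 & 0 & 2\\ 0 & 1 & 1 \end{bmatrix} \end{align*}
The final matrix represents the system \(x_1=2\text{,}\) \(x_2=1\text{,}\) which is therefore the unique solution of the original system.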

Subsection RREF: Reduced Row-Echelon Form

The preceding example amply illustrates the definitions and theorems we have seen so far. But it still leaves two questions unanswered. Exactly what is this “simpler” form for a matrix, and just how do we get it? Here is the answer to the first question, a definition of reduced row-echelon form.

Definition RREF: Reduced Row-Echelon Form

A matrix is in reduced row-echelon form if it meets all of the following conditions:

  1. If there is a row where every entry is zero, then this row lies below any other row that contains a nonzero entry.
  2. The leftmost nonzero entry of a row is equal to 1.
  3. The leftmost nonzero entry of a row is the only nonzero entry in its column.
  4. Consider any two different leftmost nonzero entries, one located in row \(i\text{,}\) column \(j\) and the other located in row \(s\text{,}\) column \(t\text{.}\) If \(s\gt i\text{,}\) then \(t\gt j\text{.}\)

A row of only zero entries is called a zero row and the leftmost nonzero entry of a nonzero row is a leading 1. A column containing a leading 1 will be called a pivot column. The number of nonzero rows will be denoted by \(r\text{,}\) which is also equal to the number of leading 1's and the number of pivot columns.

The set of column indices for the pivot columns will be denoted by \(D=\set{d_1,\,d_2,\,d_3,\,\ldots,\,d_r}\) where \(d_1\lt d_2\lt d_3\lt\cdots\lt d_r\text{,}\) while the columns that are not pivot columns will be denoted as \(F=\set{f_1,\,f_2,\,f_3,\,\ldots,\,f_{n-r}}\) where \(f_1\lt f_2\lt f_3\lt\cdots\lt f_{n-r}\text{.}\)
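For example (an illustrative matrix), the matrix
\begin{equation*} B= \begin{bmatrix} 1 & 5 & 0 & 0 & 8\\ 0 & 0 & 1 & 0 & -1\\ 0 & 0 & 0 & 1 & 2\\ 0 & 0 & 0 & 0 & 0 \end{bmatrix} \end{equation*}
is in reduced row-echelon form, with \(r=3\) nonzero rows, pivot columns \(D=\set{1,\,3,\,4}\) and non-pivot columns \(F=\set{2,\,5}\text{.}\)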

The principal feature of reduced row-echelon form is the pattern of leading 1's guaranteed by conditions (2) and (4), reminiscent of a flight of geese, or steps in a staircase, or water cascading down a mountain stream.

There are a number of new terms and notation introduced in this definition, which should make you suspect that this is an important definition. Given all there is to digest here, we will mostly save the use of \(D\) and \(F\) until Section TSS. However, one important point to make here is that all of these terms and notation apply to a matrix. Sometimes we will employ these terms and sets for an augmented matrix, and other times it might be a coefficient matrix. So always give some thought to exactly which type of matrix you are analyzing.

Our next theorem has a “constructive” proof. Learn about the meaning of this term in Proof Technique C.

Theorem REMEF: Row-Equivalent Matrix in Echelon Form

Suppose \(A\) is a matrix. Then there is a matrix \(B\) so that \(A\) and \(B\) are row-equivalent, and \(B\) is in reduced row-echelon form.

The procedure given in the proof of Theorem REMEF can be more precisely described using a pseudo-code version of a computer program. Single-letter variables, like m, n, i, j, r, have the same meanings as above. The symbol := denotes assignment of the value on the right to the variable on the left, A[i,j] is the equivalent of the matrix entry \(\matrixentry{A}{ij}\text{,}\) == is an equality test, and <> is a “not equals” test.

input m, n and A
r := 0
for j := 1 to n
   i := r+1
   while i <= m and A[i,j] == 0
       i := i+1
   if i < m+1
       r := r+1
       swap rows i and r of A (row op 1)
       scale A[r,j] to a leading 1 (row op 2)
       for k := 1 to m, k <> r
           make A[k,j] zero (row op 3, employing row r)
output r and A

Notice that as a practical matter the “and” used in the conditional statement of the while statement should be of the “short-circuit” variety so that the array access that follows is not out-of-bounds.
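For readers who want to experiment, here is a direct Python translation of the pseudocode, shifted to 0-indexing and using exact rational arithmetic; it is a minimal sketch for exploration, not a production routine. Note that Python's and operator short-circuits, as the preceding paragraph requires.

from fractions import Fraction

def row_reduce(A):
    # Bring A (a list of rows) to reduced row-echelon form in place,
    # following the pseudocode above; return r, the number of pivot columns.
    m, n = len(A), len(A[0])
    A[:] = [[Fraction(e) for e in row] for row in A]   # exact arithmetic
    r = 0
    for j in range(n):                     # for j := 1 to n
        i = r                              # i := r + 1, shifted to 0-indexing
        while i < m and A[i][j] == 0:      # search column j for a nonzero entry
            i += 1
        if i < m:                          # found one: column j is a pivot column
            r += 1
            A[i], A[r - 1] = A[r - 1], A[i]            # row op 1: swap rows i and r
            pivot = A[r - 1][j]
            A[r - 1] = [e / pivot for e in A[r - 1]]   # row op 2: make a leading 1
            for k in range(m):                         # row op 3: clear column j
                if k != r - 1:
                    factor = A[k][j]
                    A[k] = [a - factor * b for a, b in zip(A[k], A[r - 1])]
    return r

A = [[1, 2, 4], [3, -1, 5]]
r = row_reduce(A)    # r == 2 and A == [[1, 0, 2], [0, 1, 1]]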

So now we can put it all together. Begin with a system of linear equations (Definition SLE), and represent the system by its augmented matrix (Definition AM). Use row operations (Definition RO) to convert this matrix into reduced row-echelon form (Definition RREF), using the procedure outlined in the proof of Theorem REMEF. Theorem REMEF also tells us we can always accomplish this, and that the result is row-equivalent (Definition REM) to the original augmented matrix. Since the matrix in reduced-row echelon form has the same solution set, we can analyze the row-reduced version instead of the original matrix, viewing it as the augmented matrix of a different system of equations. The beauty of augmented matrices in reduced row-echelon form is that the solution sets to the systems they represent can be easily determined, as we will see in the next few examples and in the next section.
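In practice, a computer algebra system carries out this pipeline for us. For instance, assuming the SymPy library is available, its Matrix.rref method returns the reduced row-echelon form together with the (0-indexed) pivot columns:

from sympy import Matrix

M = Matrix([[1, 2, 4], [3, -1, 5]])   # augmented matrix of the small system above
R, pivots = M.rref()
# R == Matrix([[1, 0, 2], [0, 1, 1]]) and pivots == (0, 1)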

We will see through the course that almost every interesting property of a matrix can be discerned by looking at a row-equivalent matrix in reduced row-echelon form. For this reason it is important to know that the matrix \(B\) guaranteed to exist by Theorem REMEF is also unique.

Two proof techniques are applicable to the proof of this result. Before reading it, visit Proof Technique CD and Proof Technique U.

Theorem RREFU: Reduced Row-Echelon Form is Unique

Suppose that \(A\) is an \(m\times n\) matrix and that \(B\) and \(C\) are \(m\times n\) matrices that are row-equivalent to \(A\) and in reduced row-echelon form. Then \(B=C\text{.}\)

We will now run through some examples of using these definitions and theorems to solve some systems of equations. From now on, when we have a matrix in reduced row-echelon form, we will mark the leading 1's with a small box. This will help you count, and identify, the pivot columns. In your work, you can box 'em, circle 'em or write 'em in a different color — just identify 'em somehow. This device will prove very useful later and is a very good habit to start developing right now.

Archetypes A and B are meant to contrast each other in many respects. So let us solve Archetype A now.

These three examples (Example SAB, Example SAA, Example SAE) illustrate the full range of possibilities for a system of linear equations — no solutions, one solution, or infinitely many solutions. In the next section we will examine these three scenarios more closely.

We (and everybody else) will often speak of “row-reducing” a matrix. This is an informal way of saying we begin with a matrix \(A\) and then analyze the matrix \(B\) that is row-equivalent to \(A\) and in reduced row-echelon form. So the term row-reduce is used as a verb, but describes something a bit more complicated, since we do not really change \(A\text{.}\) Theorem REMEF tells us that this process will always be successful and Theorem RREFU tells us that \(B\) will be unambiguous. Typically, an investigation of \(A\) will proceed by analyzing \(B\) and applying theorems whose hypotheses include the row-equivalence of \(A\) and \(B\text{,}\) and usually the hypothesis that \(B\) is in reduced row-echelon form.

Subsection: Reading Questions

1

Is the matrix below in reduced row-echelon form? Why or why not? \begin{equation*} \begin{bmatrix} 1 & 5 & 0 & 6 & 8\\ 0 & 0 & 1 & 2 & 0\\ 0 & 0 & 0 & 0 & 1 \end{bmatrix} \end{equation*}

2

Use row operations to convert the matrix below to reduced row-echelon form and report the final matrix. \begin{equation*} \begin{bmatrix} 2 & 1 & 8\\ -1 & 1 & -1\\ -2 & 5 & 4 \end{bmatrix} \end{equation*}

3

Find all the solutions to the system below by using an augmented matrix and row operations. Report your final matrix in reduced row-echelon form and the set of solutions. \begin{align*} 2x_1 + 3x_2 - x_3&= 0\\ x_1 + 2x_2 + x_3&= 3\\ x_1 + 3x_2 + 3x_3&= 7 \end{align*}

Subsection: Exercises

C05

Each archetype below is a system of equations. Form the augmented matrix of the system of equations, convert the matrix to reduced row-echelon form by using row operations and then describe the solution set of the original system of equations.

Archetype A, Archetype B, Archetype C, Archetype D, Archetype E, Archetype F, Archetype G, Archetype H, Archetype I, Archetype J

For problems C10–C19, find all solutions to the system of linear equations. Use your favorite computing device to row-reduce the augmented matrices for the systems, and write the solutions as a set, using correct set notation.

C10

\begin{align*} 2x_1-3x_2+x_3+7x_4&=14\\ 2x_1+8x_2-4x_3+5x_4&=-1\\ x_1+3x_2-3x_3&=4 \\ -5x_1+2x_2+3x_3+4x_4&=-19 \end{align*}

C11

\begin{align*} 3x_1+4x_2-x_3+2x_4&=6\\ x_1-2x_2+3x_3+x_4&=2\\ 10x_2-10x_3-x_4&=1 \end{align*}

C12

\begin{align*} 2x_1+4x_2+5x_3+7x_4&=-26\\ x_1+2x_2+x_3-x_4&=-4\\ -2x_1-4x_2+x_3+11x_4&=-10 \end{align*}

C13

\begin{align*} x_1+2x_2+8x_3-7x_4&=-2\\ 3x_1+2x_2+12x_3-5x_4&=6\\ -x_1+x_2+x_3-5x_4&=-10 \end{align*}

C14

\begin{align*} 2x_1+x_2+7x_3-2x_4&=4\\ 3x_1-2x_2+11x_4&=13\\ x_1+x_2+5x_3-3x_4&=1 \end{align*}

C15

\begin{align*} 2x_1+ 3x_2-x_3 -9x_4&=-16\\ x_1+ 2x_2+ x_3&=0\\ -x_1+ 2x_2+ 3x_3+ 4x_4&=8 \end{align*}

C16

\begin{align*} 2x_1+ 3x_2+19x_3 -4x_4&=2\\ x_1+ 2x_2+ 12x_3-3x_4&=1\\ -x_1+ 2x_2+ 8x_3-5x_4&=1 \end{align*}

C17

\begin{align*} -x_1+5x_2&=-8\\ -2x_1+5x_2+5x_3+2x_4&=9\\ -3x_1-x_2+3x_3+x_4&=3\\ 7x_1+6x_2+5x_3+x_4&=30 \end{align*}

C18

\begin{align*} x_1+2x_2-4x_3-x_4&=32\\ x_1+3x_2-7x_3-x_5&=33\\ x_1+2x_3-2x_4+3x_5&=22 \end{align*}

C19

\begin{align*} 2x_1 + x_2 &= 6 \\ -x_1 - x_2 &= -2 \\ 3x_1 + 4x_2 &= 4 \\ 3x_1 + 5x_2 &= 2 \end{align*}


For problems C30–C33, row-reduce the matrix without the aid of a calculator, indicating the row operations you are using at each step using the notation of Definition RO.

C30

\begin{align*} \begin{bmatrix} 2 & 1 & 5 & 10\\ 1 & -3 & -1 & -2\\ 4 & -2 & 6 & 12 \end{bmatrix} \end{align*}

C31

\begin{align*} \begin{bmatrix} 1 & 2 & -4 \\ -3 & -1 & -3 \\ -2 & 1 & -7 \end{bmatrix} \end{align*}

C32

\begin{align*} \begin{bmatrix} 1 & 1 & 1 \\ -4 & -3 & -2 \\ 3 & 2 & 1 \end{bmatrix} \end{align*}

C33

\begin{align*} \begin{bmatrix} 1 & 2 & -1 & -1 \\ 2 & 4 & -1 & 4 \\ -1 & -2 & 3 & 5 \end{bmatrix} \end{align*}

M40

Consider the two \(3\times 4\) matrices below \begin{align*} B&= \begin{bmatrix} 1 & 3 & -2 & 2 \\ -1 & -2 & -1 & -1 \\ -1 & -5 & 8 & -3 \end{bmatrix} & C&= \begin{bmatrix} 1 & 2 & 1 & 2 \\ 1 & 1 & 4 & 0 \\ -1 & -1 & -4 & 1 \end{bmatrix} \end{align*}

  1. Row-reduce each matrix and determine that the reduced row-echelon forms of \(B\) and \(C\) are identical. From this argue that \(B\) and \(C\) are row-equivalent.
  2. In the proof of Theorem RREFU, we begin by arguing that entries of row-equivalent matrices are related by way of certain scalars and sums. In this example, we would write that entries of \(B\) from row \(i\) that are in column \(j\) are linearly related to the entries of \(C\) in column \(j\) from all three rows \begin{align*} \matrixentry{B}{ij} &= \delta_{i1}\matrixentry{C}{1j}+ \delta_{i2}\matrixentry{C}{2j}+ \delta_{i3}\matrixentry{C}{3j} & 1&\leq j\leq 4 \end{align*} For each \(1\leq i\leq 3\) find the corresponding three scalars in this relationship. So your answer will be nine scalars, determined three at a time.
M45

You keep a number of lizards, mice and peacocks as pets. There are a total of 108 legs and 30 tails in your menagerie. You have twice as many mice as lizards. How many of each creature do you have?

M50

A parking lot has 66 vehicles (cars, trucks, motorcycles and bicycles) in it. There are four times as many cars as trucks. The total number of tires (4 per car or truck, 2 per motorcycle or bicycle) is 252. How many cars are there? How many bicycles?

T10

Prove that each of the three row operations (Definition RO) is reversible. More precisely, if the matrix \(B\) is obtained from \(A\) by application of a single row operation, show that there is a single row operation that will transform \(B\) back into \(A\text{.}\)

T11

Suppose that \(A\text{,}\) \(B\) and \(C\) are \(m\times n\) matrices. Use the definition of row-equivalence (Definition REM) to prove the following three facts.

  1. \(A\) is row-equivalent to \(A\text{.}\)
  2. If \(A\) is row-equivalent to \(B\text{,}\) then \(B\) is row-equivalent to \(A\text{.}\)
  3. If \(A\) is row-equivalent to \(B\text{,}\) and \(B\) is row-equivalent to \(C\text{,}\) then \(A\) is row-equivalent to \(C\text{.}\)

A relationship that satisfies these three properties is known as an equivalence relation, an important idea in the study of various algebras. This is a formal way of saying that a relationship behaves like equality, without requiring the relationship to be as strict as equality itself. We will see it again in Theorem SER.

T12

Suppose that \(B\) is an \(m\times n\) matrix in reduced row-echelon form. Build a new, likely smaller, \(k\times\ell\) matrix \(C\) as follows. Keep any collection of \(k\) adjacent rows, \(k\leq m\text{.}\) From these rows, keep columns \(1\) through \(\ell\text{,}\) \(\ell\leq n\text{.}\) Prove that \(C\) is in reduced row-echelon form.

T13

Generalize Exercise RREF.T12 by just keeping any \(k\) rows, and not requiring the rows to be adjacent. Prove that any such matrix \(C\) is in reduced row-echelon form.