\(\newcommand{\orderof}[1]{\sim #1} \newcommand{\Z}{\mathbb{Z}} \newcommand{\reals}{\mathbb{R}} \newcommand{\real}[1]{\mathbb{R}^{#1}} \newcommand{\complexes}{\mathbb{C}} \newcommand{\complex}[1]{\mathbb{C}^{#1}} \newcommand{\conjugate}[1]{\overline{#1}} \newcommand{\modulus}[1]{\left\lvert#1\right\rvert} \newcommand{\zerovector}{\vect{0}} \newcommand{\zeromatrix}{\mathcal{O}} \newcommand{\innerproduct}[2]{\left\langle#1,\,#2\right\rangle} \newcommand{\norm}[1]{\left\lVert#1\right\rVert} \newcommand{\dimension}[1]{\dim\left(#1\right)} \newcommand{\nullity}[1]{n\left(#1\right)} \newcommand{\rank}[1]{r\left(#1\right)} \newcommand{\ds}{\oplus} \newcommand{\detname}[1]{\det\left(#1\right)} \newcommand{\detbars}[1]{\left\lvert#1\right\rvert} \newcommand{\trace}[1]{t\left(#1\right)} \newcommand{\sr}[1]{#1^{1/2}} \newcommand{\spn}[1]{\left\langle#1\right\rangle} \newcommand{\nsp}[1]{\mathcal{N}\!\left(#1\right)} \newcommand{\csp}[1]{\mathcal{C}\!\left(#1\right)} \newcommand{\rsp}[1]{\mathcal{R}\!\left(#1\right)} \newcommand{\lns}[1]{\mathcal{L}\!\left(#1\right)} \newcommand{\per}[1]{#1^\perp} \newcommand{\augmented}[2]{\left\lbrack\left.#1\,\right\rvert\,#2\right\rbrack} \newcommand{\linearsystem}[2]{\mathcal{LS}\!\left(#1,\,#2\right)} \newcommand{\homosystem}[1]{\linearsystem{#1}{\zerovector}} \newcommand{\rowopswap}[2]{R_{#1}\leftrightarrow R_{#2}} \newcommand{\rowopmult}[2]{#1R_{#2}} \newcommand{\rowopadd}[3]{#1R_{#2}+R_{#3}} \newcommand{\leading}[1]{\boxed{#1}} \newcommand{\rref}{\xrightarrow{\text{RREF}}} \newcommand{\elemswap}[2]{E_{#1,#2}} \newcommand{\elemmult}[2]{E_{#2}\left(#1\right)} \newcommand{\elemadd}[3]{E_{#2,#3}\left(#1\right)} \newcommand{\scalarlist}[2]{{#1}_{1},\,{#1}_{2},\,{#1}_{3},\,\ldots,\,{#1}_{#2}} \newcommand{\vect}[1]{\mathbf{#1}} \newcommand{\colvector}[1]{\begin{bmatrix}#1\end{bmatrix}} \newcommand{\vectorcomponents}[2]{\colvector{#1_{1}\\#1_{2}\\#1_{3}\\\vdots\\#1_{#2}}} \newcommand{\vectorlist}[2]{\vect{#1}_{1},\,\vect{#1}_{2},\,\vect{#1}_{3},\,\ldots,\,\vect{#1}_{#2}} \newcommand{\vectorentry}[2]{\left\lbrack#1\right\rbrack_{#2}} \newcommand{\matrixentry}[2]{\left\lbrack#1\right\rbrack_{#2}} \newcommand{\lincombo}[3]{#1_{1}\vect{#2}_{1}+#1_{2}\vect{#2}_{2}+#1_{3}\vect{#2}_{3}+\cdots +#1_{#3}\vect{#2}_{#3}} \newcommand{\matrixcolumns}[2]{\left\lbrack\vect{#1}_{1}|\vect{#1}_{2}|\vect{#1}_{3}|\ldots|\vect{#1}_{#2}\right\rbrack} \newcommand{\transpose}[1]{#1^{t}} \newcommand{\inverse}[1]{#1^{-1}} \newcommand{\submatrix}[3]{#1\left(#2|#3\right)} \newcommand{\adj}[1]{\transpose{\left(\conjugate{#1}\right)}} \newcommand{\adjoint}[1]{#1^\ast} \newcommand{\set}[1]{\left\{#1\right\}} \newcommand{\setparts}[2]{\left\lbrace#1\,\middle|\,#2\right\rbrace} \newcommand{\card}[1]{\left\lvert#1\right\rvert} \newcommand{\setcomplement}[1]{\overline{#1}} \newcommand{\charpoly}[2]{p_{#1}\left(#2\right)} \newcommand{\eigenspace}[2]{\mathcal{E}_{#1}\left(#2\right)} \newcommand{\eigensystem}[3]{\lambda&=#2&\eigenspace{#1}{#2}&=\spn{\set{#3}}} \newcommand{\geneigenspace}[2]{\mathcal{G}_{#1}\left(#2\right)} \newcommand{\algmult}[2]{\alpha_{#1}\left(#2\right)} \newcommand{\geomult}[2]{\gamma_{#1}\left(#2\right)} \newcommand{\indx}[2]{\iota_{#1}\left(#2\right)} \newcommand{\ltdefn}[3]{#1\colon #2\rightarrow#3} \newcommand{\lteval}[2]{#1\left(#2\right)} \newcommand{\ltinverse}[1]{#1^{-1}} \newcommand{\restrict}[2]{{#1}|_{#2}} \newcommand{\preimage}[2]{#1^{-1}\left(#2\right)} \newcommand{\rng}[1]{\mathcal{R}\!\left(#1\right)} \newcommand{\krn}[1]{\mathcal{K}\!\left(#1\right)} 
\newcommand{\compose}[2]{{#1}\circ{#2}} \newcommand{\vslt}[2]{\mathcal{LT}\left(#1,\,#2\right)} \newcommand{\isomorphic}{\cong} \newcommand{\similar}[2]{\inverse{#2}#1#2} \newcommand{\vectrepname}[1]{\rho_{#1}} \newcommand{\vectrep}[2]{\lteval{\vectrepname{#1}}{#2}} \newcommand{\vectrepinvname}[1]{\ltinverse{\vectrepname{#1}}} \newcommand{\vectrepinv}[2]{\lteval{\ltinverse{\vectrepname{#1}}}{#2}} \newcommand{\matrixrep}[3]{M^{#1}_{#2,#3}} \newcommand{\matrixrepcolumns}[4]{\left\lbrack \left.\vectrep{#2}{\lteval{#1}{\vect{#3}_{1}}}\right|\left.\vectrep{#2}{\lteval{#1}{\vect{#3}_{2}}}\right|\left.\vectrep{#2}{\lteval{#1}{\vect{#3}_{3}}}\right|\ldots\left|\vectrep{#2}{\lteval{#1}{\vect{#3}_{#4}}}\right.\right\rbrack} \newcommand{\cbm}[2]{C_{#1,#2}} \newcommand{\jordan}[2]{J_{#1}\left(#2\right)} \newcommand{\hadamard}[2]{#1\circ #2} \newcommand{\hadamardidentity}[1]{J_{#1}} \newcommand{\hadamardinverse}[1]{\widehat{#1}} \newcommand{\lt}{<} \newcommand{\gt}{>} \newcommand{\amp}{&} \)

Section MISLE Matrix Inverses and Systems of Linear Equations

The inverse of a square matrix, and solutions to linear systems with square coefficient matrices, are intimately connected.

Subsection SI Solutions and Inverses

We begin with a familiar example, performed in a novel way.

The matrix \(B\) of the previous example is called the inverse of \(A\text{.}\) When \(A\) and \(B\) are combined via matrix multiplication, the result is the identity matrix, which can be inserted “in front” of \(\vect{x}\) as the first step in finding the solution. This is entirely analogous to how we might solve a single linear equation like \(3x=12\text{.}\) \begin{equation*} x=1x=\left(\frac{1}{3}\left(3\right)\right)x=\frac{1}{3}\left(3x\right)=\frac{1}{3}\left(12\right)=4 \end{equation*}

Here we have obtained a solution by employing the “multiplicative inverse” of \(3\text{,}\) \(3^{-1}=\frac{1}{3}\text{.}\) This works fine for any scalar multiple of \(x\text{,}\) except for zero, since zero does not have a multiplicative inverse. Consider separately the two linear equations \begin{align*} 0x&=12 & 0x&=0\text{.} \end{align*}

The first has no solutions, while the second has infinitely many solutions. For matrices, it is all just a little more complicated. Some matrices have inverses, some do not. And when a matrix does have an inverse, just how would we compute it? In other words, just where did that matrix \(B\) in the last example come from? Are there other matrices that might have worked just as well?
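The same maneuver is available for matrices. If we can lay hands on a matrix \(B\) with \(BA=I_n\text{,}\) then a solution to \(\linearsystem{A}{\vect{b}}\) can be extracted just as \(x\) was above, which is precisely the computation at work in Example SABMI. \begin{equation*} \vect{x}=I_n\vect{x}=\left(BA\right)\vect{x}=B\left(A\vect{x}\right)=B\vect{b} \end{equation*}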

Subsection IM Inverse of a Matrix

Definition MI Matrix Inverse

Suppose \(A\) and \(B\) are square matrices of size \(n\) such that \(AB=I_n\) and \(BA=I_n\text{.}\) Then \(A\) is invertible and \(B\) is the inverse of \(A\text{.}\) In this situation, we write \(B=\inverse{A}\text{.}\)

Notice that if \(B\) is the inverse of \(A\text{,}\) then we can just as easily say \(A\) is the inverse of \(B\text{,}\) or \(A\) and \(B\) are inverses of each other.
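For instance, the two matrices below (a small pair chosen here purely for illustration, not one of the archetypes) are inverses of each other, as a quick computation of both products confirms. \begin{align*} \begin{bmatrix} 1 & 1\\ 1 & 2 \end{bmatrix} \begin{bmatrix} 2 & -1\\ -1 & 1 \end{bmatrix} &= \begin{bmatrix} 1 & 0\\ 0 & 1 \end{bmatrix} & \begin{bmatrix} 2 & -1\\ -1 & 1 \end{bmatrix} \begin{bmatrix} 1 & 1\\ 1 & 2 \end{bmatrix} &= \begin{bmatrix} 1 & 0\\ 0 & 1 \end{bmatrix} \end{align*}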

Not every square matrix has an inverse. In Example SABMI the matrix \(B\) is the inverse of the coefficient matrix of Archetype B. To see this, it only remains to check that \(AB=I_3\text{.}\) What about Archetype A? It is an example of a square matrix without an inverse.

Let us look at one more matrix inverse before we embark on a more systematic study.

We will now concern ourselves less with whether or not an inverse of a matrix exists, and more with how to find one when it does. In Section MINM we will have some theorems that allow us to more quickly and easily determine just when a matrix is invertible.

Subsection CIM Computing the Inverse of a Matrix

We have seen that the matrices from Archetype B and Archetype K both have inverses, but these inverse matrices have just dropped from the sky. How would we compute an inverse? And just when is a matrix invertible, and when is it not? Writing a putative inverse with \(n^2\) unknowns and solving the resulting \(n^2\) equations is one approach. Applying this approach to \(2\times 2\) matrices can get us somewhere, so just for fun, let us do it.
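The outcome of that computation is a formula, together with a condition telling us exactly when the formula applies (this is the content of Theorem TTMI): the matrix \(A=\begin{bmatrix} a & b\\ c & d \end{bmatrix}\) is invertible if and only if \(ad-bc\neq 0\text{,}\) in which case \begin{equation*} \inverse{A}=\frac{1}{ad-bc} \begin{bmatrix} d & -b\\ -c & a \end{bmatrix}\text{.} \end{equation*}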

Proof

There are several ways one could try to prove this theorem, but there is a continual temptation to divide by one of the eight entries involved (\(a\) through \(h\)), and we can never be sure whether these numbers are zero or not. This could lead to an analysis by cases, which is messy, messy, messy. Note how the above proof never divides, but always multiplies, and how zero/nonzero considerations are handled. Pay attention to the expression \(ad-bc\text{,}\) as we will see it again in a while (Chapter D).

This theorem is cute, and it is nice to have a formula for the inverse, and a condition that tells us when we can use it. However, this approach becomes impractical for larger matrices, even though it is possible to demonstrate that, in theory, there is a general formula. (Think for a minute about extending this result to just \(3\times 3\) matrices. For starters, we need 18 letters!) Instead, we will work column-by-column. Let us first work an example that will motivate the main theorem and remove some of the previous mystery.
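The idea behind working column-by-column is this: if \(AB=I_n\text{,}\) then reading the product one column at a time shows that column \(j\) of the putative inverse \(B\) solves the linear system whose vector of constants is column \(j\) of the identity matrix, \begin{equation*} A\vect{B}_j=\vect{e}_j\text{,}\quad 1\leq j\leq n\text{,} \end{equation*} and all \(n\) of these systems have the same coefficient matrix \(A\text{.}\)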

Notice how the five systems of equations in the preceding example were all solved by exactly the same sequence of row operations. Would it not be nice to avoid this obvious duplication of effort? Our main theorem for this section follows, and it mimics this previous example, while also avoiding all the overhead.

Proof

We have to be just a bit careful here about both what this theorem says and what it does not say. If \(A\) is a nonsingular matrix, then we are guaranteed a matrix \(B\) such that \(AB=I_n\text{,}\) and the proof gives us a process for constructing \(B\text{.}\) However, the definition of the inverse of a matrix (Definition MI) requires that \(BA=I_n\) also. So at this juncture we must compute the matrix product in the “opposite” order before we claim \(B\) as the inverse of \(A\text{.}\) However, we will soon see that this is always the case, in Theorem OSIS, so the title of this theorem is not inaccurate.

What if \(A\) is singular? At this point we only know that Theorem CINM cannot be applied. The question of \(A\)'s inverse is still open. (But see Theorem NI in the next section.)
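To make the procedure concrete, here is a brief computational sketch (a hypothetical SymPy session, not part of the text; the matrix is an arbitrary nonsingular example): form \(\augmented{A}{I_n}\text{,}\) row-reduce, and inspect the left half.

import sympy as sp

# Sketch of the procedure in Theorem CINM: row-reduce [A | I_n].
# The matrix A is an arbitrary nonsingular example, not from the text.
A = sp.Matrix([[1, 2], [3, 5]])
n = A.rows
aug = A.row_join(sp.eye(n))    # the augmented matrix [A | I_n]
R, pivots = aug.rref()         # reduced row-echelon form
if R[:, :n] == sp.eye(n):      # left half is I_n exactly when A is nonsingular
    B = R[:, n:]               # then the right half is the inverse
    assert A * B == sp.eye(n)  # AB = I_n by construction ...
    assert B * A == sp.eye(n)  # ... and BA = I_n too (Theorem OSIS)
else:
    print("A is singular; Theorem CINM does not apply")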

We will finish by computing the inverse for the coefficient matrix of Archetype B, the one we just pulled from a hat in Example SABMI. There are more examples in the Archetypes (Appendix A) to practice with, though notice that it is silly to ask for the inverse of a rectangular matrix (the sizes are not right) and not every square matrix has an inverse (remember Example MWIAA?).

Subsection PMI Properties of Matrix Inverses

The inverse of a matrix enjoys some nice properties. We collect a few here. First, a matrix can have but one inverse.

Proof
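In brief, the core of the argument is a one-line computation: if \(B\) and \(C\) are both inverses of \(A\text{,}\) then \begin{equation*} B=BI_n=B\left(AC\right)=\left(BA\right)C=I_nC=C\text{,} \end{equation*} so there is simply no room for two different inverses.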

When most of us dress in the morning, we put on our socks first, followed by our shoes. In the evening we must then first remove our shoes, followed by our socks. Try to connect the conclusion of the following theorem with this everyday example.
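The analogy suggests, correctly, that the inverse of a product reverses the order of its factors: \(\inverse{\left(AB\right)}=\inverse{B}\inverse{A}\) (this is Theorem SS). The claim is easy to check directly. \begin{align*} \left(AB\right)\left(\inverse{B}\inverse{A}\right)&=A\left(B\inverse{B}\right)\inverse{A}=AI_n\inverse{A}=A\inverse{A}=I_n\\ \left(\inverse{B}\inverse{A}\right)\left(AB\right)&=\inverse{B}\left(\inverse{A}A\right)B=\inverse{B}I_nB=\inverse{B}B=I_n \end{align*}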

Proof
Proof
Proof
Proof

Notice that there are some likely theorems that are missing here. For example, it would be tempting to think that \(\inverse{(A+B)}=\inverse{A}+\inverse{B}\text{,}\) but this is false. Can you find a counterexample? (See Exercise MISLE.T10.)

Subsection Reading Questions

1

Compute the inverse of the matrix below. \begin{equation*} \begin{bmatrix} -2 & 3\\ -3 & 4 \end{bmatrix} \end{equation*}

2

Compute the inverse of the matrix below. \begin{equation*} \begin{bmatrix} 2 & 3 & 1\\ 1 & -2 & -3\\ -2 & 4 & 6 \end{bmatrix} \end{equation*}

3

Explain why Theorem SS has the title it does. (Do not just state the theorem, explain the choice of the title making reference to the theorem itself.)

Subsection Exercises

C16

If it exists, find the inverse of \(A\text{,}\) and check your answer. \begin{equation*} A = \begin{bmatrix} 1 & 0 & 1 \\ 1 & 1 & 1 \\ 2 & -1 & 1\end{bmatrix} \end{equation*}

Solution
C17

If it exists, find the inverse of \(A = \begin{bmatrix} 2 & -1 & 1\\ 1 & 2 & 1\\3 & 1 & 2\end{bmatrix}\text{,}\) and check your answer.

Solution
C18

If it exists, find the inverse of \(A = \begin{bmatrix} 1 & 3 & 1 \\ 1 & 2 & 1 \\ 2 & 2 & 1\end{bmatrix}\text{,}\) and check your answer.

Solution
C19

If it exists, find the inverse of \(A = \begin{bmatrix} 1 & 3 & 1 \\ 0 & 2 & 1 \\ 2 & 2 & 1\end{bmatrix}\text{,}\) and check your answer.

Solution
C21

Verify that \(B\) is the inverse of \(A\text{.}\) \begin{align*} A&= \begin{bmatrix} 1 & 1 & -1 & 2\\ -2 & -1 & 2 & -3\\ 1 & 1 & 0 & 2\\ -1 & 2 & 0 & 2 \end{bmatrix} & B&= \begin{bmatrix} 4 & 2 & 0 & -1\\ 8 & 4 & -1 & -1\\ -1 & 0 & 1 & 0\\ -6 & -3 & 1 & 1 \end{bmatrix} \end{align*}

Solution
C22

Recycle the matrices \(A\) and \(B\) from Exercise MISLE.C21 and set \begin{align*} \vect{c}&=\colvector{2\\1\\-3\\2}&\vect{d}&=\colvector{1\\1\\1\\1}\text{.} \end{align*} Employ the matrix \(B\) to solve the two linear systems \(\linearsystem{A}{\vect{c}}\) and \(\linearsystem{A}{\vect{d}}\text{.}\)

Solution
C23

If it exists, find the inverse of the \(2\times 2\) matrix \(A\) and check your answer. (See Theorem TTMI.) \begin{align*} A=\begin{bmatrix} 7&3\\5&2 \end{bmatrix} \end{align*}

C24

If it exists, find the inverse of the \(2\times 2\) matrix \(A\) and check your answer. (See Theorem TTMI.) \begin{align*} A=\begin{bmatrix} 6&3\\4&2 \end{bmatrix} \end{align*}

C25

At the conclusion of Example CMI, verify that \(BA=I_5\) by computing the matrix product.

C26

Let \begin{equation*} D=\begin{bmatrix} 1 & -1 & 3 & -2 & 1\\ -2 & 3 & -5 & 3 & 0\\ 1 & -1 & 4 & -2 & 2\\ -1 & 4 & -1 & 0 & 4\\ 1 & 0 & 5 & -2 & 5 \end{bmatrix}\text{.} \end{equation*} Compute the inverse of \(D\text{,}\) \(\inverse{D}\text{,}\) by forming the \(5\times 10\) matrix \(\augmented{D}{I_5}\) and row-reducing (Theorem CINM). Then use a calculator to compute \(\inverse{D}\) directly.
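For the direct computation, any matrix-capable calculator or computer algebra system will do; as one illustration (a hypothetical SymPy session, not prescribed by the exercise):

import sympy as sp

# Direct computation of the inverse, for comparison with row-reducing [D | I_5].
D = sp.Matrix([
    [ 1, -1,  3, -2, 1],
    [-2,  3, -5,  3, 0],
    [ 1, -1,  4, -2, 2],
    [-1,  4, -1,  0, 4],
    [ 1,  0,  5, -2, 5],
])
Dinv = D.inv()  # raises an error if D were singular
assert D * Dinv == sp.eye(5) and Dinv * D == sp.eye(5)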

Solution
C27

Let \begin{equation*} E=\begin{bmatrix} 1 & -1 & 3 & -2 & 1\\ -2 & 3 & -5 & 3 & -1\\ 1 & -1 & 4 & -2 & 2\\ -1 & 4 & -1 & 0 & 2\\ 1 & 0 & 5 & -2 & 4 \end{bmatrix}\text{.} \end{equation*} Compute the inverse of \(E\text{,}\) \(\inverse{E}\text{,}\) by forming the \(5\times 10\) matrix \(\augmented{E}{I_5}\) and row-reducing (Theorem CINM). Then use a calculator to compute \(\inverse{E}\) directly.

Solution
C28

Let \begin{equation*} C= \begin{bmatrix} 1 & 1 & 3 & 1\\ -2 & -1 & -4 & -1\\ 1 & 4 & 10 & 2\\ -2 & 0 & -4 & 5 \end{bmatrix}\text{.} \end{equation*} Compute the inverse of \(C\text{,}\) \(\inverse{C}\text{,}\) by forming the \(4\times 8\) matrix \(\augmented{C}{I_4}\) and row-reducing (Theorem CINM). Then use a calculator to compute \(\inverse{C}\) directly.

Solution
C40

Find all solutions to the system of equations below, making use of the matrix inverse found in Exercise MISLE.C28. \begin{align*} x_1+x_2+3x_3+x_4&=-4\\ -2x_1-x_2-4x_3-x_4&=4\\ x_1+4x_2+10x_3+2x_4&=-20\\ -2x_1-4x_3+5x_4&=9 \end{align*}

Solution
C41

Use the inverse of a matrix to find all the solutions to the following system of equations. \begin{align*} x_1 + 2 x_2 - x_3 &= -3\\ 2 x_1 + 5 x_2 - x_3 &= -4\\ -x_1 - 4 x_2 &= 2 \end{align*}

Solution
C42

Use a matrix inverse to solve the linear system of equations. \begin{align*} x_1-x_2+2x_3&=5\\ x_1-2x_3&=-8\\ 2x_1-x_2-x_3&=-6 \end{align*}

Solution
T10

Construct an example to demonstrate that \(\inverse{(A+B)}=\inverse{A}+\inverse{B}\) is not true for all square matrices \(A\) and \(B\) of the same size.

Solution