Section MISLE  Matrix Inverses and Systems of Linear Equations

From A First Course in Linear Algebra
Version 2.23
© 2004.
Licensed under the GNU Free Documentation License.
http://linear.ups.edu/

We begin with a familiar example, performed in a novel way.

Example SABMI
Solutions to Archetype B with a matrix inverse
Archetype B is the system of m = 3 linear equations in n = 3 variables,

\eqalignno{ − 7{x}_{1} − 6{x}_{2} − 12{x}_{3} & = −33 & & \cr 5{x}_{1} + 5{x}_{2} + 7{x}_{3} & = 24 & & \cr {x}_{1} + 4{x}_{3} & = 5 & & }

By Theorem SLEMM we can represent this system of equations as

Ax = b

where

\eqalignno{ A = \left [\array{ −7&−6&−12\cr 5 & 5 & 7 \cr 1 & 0 & 4 } \right ] & &x = \left [\array{ {x}_{1} \cr {x}_{2} \cr {x}_{3} } \right ] & &b = \left [\array{ −33\cr 24 \cr 5 } \right ] }

We’ll pull a rabbit out of our hat and present the 3 × 3 matrix B,

B = \left [\array{ −10&−12&−9 \cr {13\over 2} & 8 & {11\over 2} \cr {5\over 2} & 3 & {5\over 2} } \right ]

and note that

BA = \left [\array{ −10&−12&−9 \cr {13\over 2} & 8 & {11\over 2} \cr {5\over 2} & 3 & {5\over 2} } \right ]\left [\array{ −7&−6&−12\cr 5 & 5 & 7 \cr 1 & 0 & 4 } \right ] = \left [\array{ 1&0&0\cr 0&1 &0 \cr 0&0&1} \right ]

Now apply this computation to the problem of solving the system of equations,

\eqalignno{ x & = {I}_{3}x & &\text{Theorem MMIM} \cr & = (BA)x & &\text{Substitution} \cr & = B(Ax) & &\text{Theorem MMA} \cr & = Bb & &\text{Substitution} }

So we have

x = Bb = \left [\array{ −10&−12&−9 \cr {13\over 2} & 8 & {11\over 2} \cr {5\over 2} & 3 & {5\over 2} } \right ]\left [\array{ −33\cr 24 \cr 5 } \right ] = \left [\array{ −3\cr 5 \cr 2 } \right ]

So with the help and assistance of B we have been able to determine a solution to the system represented by Ax = b through judicious use of matrix multiplication. We know by Theorem NMUS that since the coefficient matrix in this example is nonsingular, there would be a unique solution, no matter what the choice of b. The derivation above amplifies this result, since we were forced to conclude that x = Bb and the solution couldn’t be anything else. You should notice that this argument would hold for any particular value of b.
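
For readers following along with software, here is a brief computational sketch in Python with NumPy (one tool among many; nothing in it is part of the text's development) that reproduces the calculation above, using exactly the matrices A and B and the vector b displayed in this example.

    import numpy as np

    A = np.array([[-7, -6, -12],
                  [ 5,  5,   7],
                  [ 1,  0,   4]], dtype=float)
    B = np.array([[-10, -12, -9],
                  [ 6.5,  8,  5.5],
                  [ 2.5,  3,  2.5]])
    b = np.array([-33, 24, 5], dtype=float)

    print(np.allclose(B @ A, np.eye(3)))  # True: BA is the 3 x 3 identity
    x = B @ b                             # the derivation forces x = Bb
    print(x)                              # [-3.  5.  2.]
    print(np.allclose(A @ x, b))          # True: x really solves Ax = b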

The matrix B of the previous example is called the inverse of A. When A and B are combined via matrix multiplication, the result is the identity matrix, which can be inserted “in front” of x as the first step in finding the solution. This is entirely analogous to how we might solve a single linear equation like 3x = 12.

x = 1x = \left ({1\over 3}\left (3\right )\right )x = {1\over 3}\left (3x\right ) = {1\over 3}\left (12\right ) = 4

Here we have obtained a solution by employing the “multiplicative inverse” of 3, {3}^{−1} = {1\over 3}. This works fine for any scalar multiple of x, except for zero, since zero does not have a multiplicative inverse. Consider separately the two linear equations,

\eqalignno{ 0x & = 12 &0x & = 0 & & & & }

The first has no solutions, while the second has infinitely many solutions. For matrices, it is all just a little more complicated. Some matrices have inverses, some do not. And when a matrix does have an inverse, just how would we compute it? In other words, just where did that matrix B in the last example come from? Are there other matrices that might have worked just as well?

Subsection IM: Inverse of a Matrix

Definition MI
Matrix Inverse
Suppose A and B are square matrices of size n such that AB = {I}_{n} and BA = {I}_{n}. Then A is invertible and B is the inverse of A. In this situation, we write B = {A}^{−1}.

(This definition contains Notation MI.)

Notice that if B is the inverse of A, then we can just as easily say A is the inverse of B, or A and B are inverses of each other.

Not every square matrix has an inverse. In Example SABMI the matrix B is the inverse of the coefficient matrix of Archetype B. To see this, it only remains to check that AB = {I}_{3}. What about Archetype A? It is an example of a square matrix without an inverse.

Example MWIAA
A matrix without an inverse, Archetype A
Consider the coefficient matrix from Archetype A,

A = \left [\array{ 1&−1&2\cr 2& 1 &1 \cr 1& 1 &0 } \right ]

Suppose that A is invertible, with inverse B. Choose the vector of constants

b = \left [\array{ 1\cr 3 \cr 2 } \right ]

and consider the system of equations ℒS(A, b). Just as in Example SABMI, this vector equation would have the unique solution x = Bb.

However, the system ℒS(A, b) is inconsistent. Form the augmented matrix \left [\,A\mid b\,\right ] and row-reduce to

\left [\array{ \text{1}&0& 1 &0\cr 0&\text{1 } &−1 &0 \cr 0&0& 0 &\text{1} } \right ]

which allows us to recognize the inconsistency by Theorem RCLS.

So the assumption that A has an inverse leads to a logical inconsistency (the system can't be both consistent and inconsistent), so our assumption is false. A is not invertible.

It's possible this example is less than satisfying. Just where did that particular choice of the vector b come from anyway? Stay tuned for an application of the future Theorem CSCS in Example CSAA.

Let’s look at one more matrix inverse before we embark on a more systematic study.

Example MI
Matrix inverse
Consider the matrices,

\eqalignno{ A & = \left [\array{ 1 & 2 & 1 & 2 & 1\cr −2 &−3 & 0 &−5 &−1 \cr 1 & 1 & 0 & 2 & 1\cr −2 &−3 &−1 &−3 &−2 \cr −1&−3&−1&−3& 1 } \right ] &B & = \left [\array{ −3& 3 & 6 &−1&−2\cr 0 &−2 &−5 &−1 & 1 \cr 1 & 2 & 4 & 1 &−1\cr 1 & 0 & 1 & 1 & 0 \cr 1 &−1&−2& 0 & 1 } \right ] & & & & }

Then

\eqalignno{ AB & = \left [\array{ 1 & 2 & 1 & 2 & 1\cr −2 &−3 & 0 &−5 &−1 \cr 1 & 1 & 0 & 2 & 1\cr −2 &−3 &−1 &−3 &−2 \cr −1&−3&−1&−3& 1 } \right ]\left [\array{ −3& 3 & 6 &−1&−2\cr 0 &−2 &−5 &−1 & 1 \cr 1 & 2 & 4 & 1 &−1\cr 1 & 0 & 1 & 1 & 0 \cr 1 &−1&−2& 0 & 1 } \right ] = \left [\array{ 1&0&0&0&0\cr 0&1 &0 &0 &0 \cr 0&0&1&0&0\cr 0&0 &0 &1 &0 \cr 0&0&0&0&1 } \right ] & & \text{and} \cr BA & = \left [\array{ −3& 3 & 6 &−1&−2\cr 0 &−2 &−5 &−1 & 1 \cr 1 & 2 & 4 & 1 &−1\cr 1 & 0 & 1 & 1 & 0 \cr 1 &−1&−2& 0 & 1 } \right ]\left [\array{ 1 & 2 & 1 & 2 & 1\cr −2 &−3 & 0 &−5 &−1 \cr 1 & 1 & 0 & 2 & 1\cr −2 &−3 &−1 &−3 &−2 \cr −1&−3&−1&−3& 1 } \right ] = \left [\array{ 1&0&0&0&0\cr 0&1 &0 &0 &0 \cr 0&0&1&0&0\cr 0&0 &0 &1 &0 \cr 0&0&0&0&1 } \right ] & & }

so by Definition MI, we can say that A is invertible and write B = {A}^{−1}.

We will now concern ourselves less with whether or not an inverse of a matrix exists, and more with how to find one when it does exist. In Section MINM we will have some theorems that allow us to more quickly and easily determine just when a matrix is invertible.

Subsection CIM: Computing the Inverse of a Matrix

We’ve seen that the matrices from Archetype B and Archetype K both have inverses, but these inverse matrices have just dropped from the sky. How would we compute an inverse? And just when is a matrix invertible, and when is it not? Writing a putative inverse with {n}^{2} unknowns and solving the resultant {n}^{2} equations is one approach. Applying this approach to 2 × 2 matrices can get us somewhere, so just for fun, let’s do it.

Theorem TTMI
Two-by-Two Matrix Inverse
Suppose

A = \left [\array{ a&b\cr c&d } \right ]

Then A is invertible if and only if ad − bc ≠ 0. When A is invertible, then

{ A}^{−1} = {1\over ad − bc}\left [\array{ d &−b\cr −c & a } \right ]

Proof   (⇐) Assume that ad − bc ≠ 0. We will use the definition of the inverse of a matrix to establish that A has an inverse (Definition MI). Note that if ad − bc ≠ 0 then the displayed formula for {A}^{−1} is legitimate, since we are not dividing by zero. Using this proposed formula for the inverse of A, we compute

\eqalignno{ A{A}^{−1} & = \left [\array{ a&b \cr c&d } \right ]\left ( {1\over ad − bc}\left [\array{ d &−b\cr −c & a } \right ]\right ) = {1\over ad − bc}\left [\array{ ad − bc& 0\cr 0 &ad − bc } \right ] = \left [\array{ 1&0\cr 0&1 } \right ] & & \text{and} \cr {A}^{−1}A & = {1\over ad − bc}\left [\array{ d &−b\cr −c & a } \right ]\left [\array{ a&b\cr c&d } \right ] = {1\over ad − bc}\left [\array{ ad − bc& 0\cr 0 &ad − bc } \right ] = \left [\array{ 1&0\cr 0&1 } \right ] & & }

By Definition MI this is sufficient to establish that A is invertible, and that the expression for {A}^{−1} is correct.

(⇒) Assume that A is invertible, and proceed with a proof by contradiction (Technique CD) by assuming also that ad − bc = 0. This translates to ad = bc. Let

B = \left [\array{ e&f\cr g&h } \right ]

be a putative inverse of A. This means that

{ I}_{2} = AB = \left [\array{ a&b\cr c&d } \right ]\left [\array{ e&f\cr g&h } \right ] = \left [\array{ ae + bg&af + bh\cr ce + dg &cf + dh } \right ]

Working on the matrices on both ends of this equation, we will multiply the top row by c and the bottom row by a.

\left [\array{ c&0\cr 0&a } \right ] = \left [\array{ ace + bcg&acf + bch\cr ace + adg &acf + adh } \right ]

We are assuming that ad = bc, so we can replace two occurrences of ad by bc in the bottom row of the right matrix.

\left [\array{ c&0\cr 0&a } \right ] = \left [\array{ ace + bcg&acf + bch\cr ace + bcg &acf + bch } \right ]

The matrix on the right now has two rows that are identical, and therefore the same must be true of the matrix on the left. Identical rows for the matrix on the left implies that a = 0 and c = 0.

With this information, the product AB becomes

\left [\array{ 1&0\cr 0&1 } \right ] = {I}_{2} = AB = \left [\array{ ae + bg&af + bh\cr ce + dg &cf + dh } \right ] = \left [\array{ bg&bh\cr dg &dh } \right ]

So bg = dh = 1 and thus b, g, d, h are all nonzero. But then bh and dg (the “other corners”) must also be nonzero, while as the off-diagonal entries of {I}_{2} they must equal zero. This is (finally) a contradiction. So our assumption was false and we see that ad − bc ≠ 0 whenever A has an inverse.

There are several ways one could try to prove this theorem, but each carries a continual temptation to divide by one of the eight entries involved (a through h), and we can never be sure if these numbers are zero or not. This could lead to an analysis by cases, which is messy, messy, messy. Note how the above proof never divides, but always multiplies, and how zero/nonzero considerations are handled. Pay attention to the expression ad − bc, as we will see it again in a while (Chapter D).
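
As a quick illustration of Theorem TTMI, here is a small Python sketch (the function name two_by_two_inverse is our own, and the sample matrix is an arbitrary choice, not from the text) that applies the formula, declining to proceed when ad − bc = 0.

    import numpy as np

    def two_by_two_inverse(A):
        # Theorem TTMI: A is invertible exactly when ad - bc is nonzero,
        # and then the inverse is (1 / (ad - bc)) * [[d, -b], [-c, a]].
        (a, b), (c, d) = A
        det = a * d - b * c
        if det == 0:
            raise ValueError("ad - bc = 0, so A has no inverse")
        return (1.0 / det) * np.array([[d, -b], [-c, a]])

    A = np.array([[1.0, 2.0], [3.0, 4.0]])   # ad - bc = 4 - 6 = -2
    Ainv = two_by_two_inverse(A)
    print(np.allclose(A @ Ainv, np.eye(2)))  # True
    print(np.allclose(Ainv @ A, np.eye(2)))  # True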

This theorem is cute, and it is nice to have a formula for the inverse, and a condition that tells us when we can use it. However, this approach becomes impractical for larger matrices, even though it is possible to demonstrate that, in theory, there is a general formula. (Think for a minute about extending this result to just 3 × 3 matrices. For starters, we need 18 letters!) Instead, we will work column-by-column. Let’s first work an example that will motivate the main theorem and remove some of the previous mystery.

Example CMI
Computing a matrix inverse
Consider the matrix defined in Example MI as,

A = \left [\array{ 1 & 2 & 1 & 2 & 1\cr −2 &−3 & 0 &−5 &−1 \cr 1 & 1 & 0 & 2 & 1\cr −2 &−3 &−1 &−3 &−2 \cr −1&−3&−1&−3& 1 } \right ]

For its inverse, we desire a matrix B so that AB = {I}_{5}. Emphasizing the structure of the columns and employing the definition of matrix multiplication (Definition MM),

\eqalignno{ AB & = {I}_{5} & & \cr A[{B}_{1}|{B}_{2}|{B}_{3}|{B}_{4}|{B}_{5}] & = [{e}_{1}|{e}_{2}|{e}_{3}|{e}_{4}|{e}_{5}] & & \cr [A{B}_{1}|A{B}_{2}|A{B}_{3}|A{B}_{4}|A{B}_{5}] & = [{e}_{1}|{e}_{2}|{e}_{3}|{e}_{4}|{e}_{5}]. & & }

Equating the matrices column-by-column we have

\eqalignno{ A{B}_{1} = {e}_{1} & &A{B}_{2} = {e}_{2} & &A{B}_{3} = {e}_{3} & &A{B}_{4} = {e}_{4} & &A{B}_{5} = {e}_{5}. }

Since the matrix B is what we are trying to compute, we can view each column, {B}_{i}, as a column vector of unknowns. Then we have five systems of equations to solve, each with 5 equations in 5 variables. Notice that all 5 of these systems have the same coefficient matrix. We’ll now solve each system in turn,

Row-reduce the augmented matrix of the linear system ℒS(A, {e}_{1}),

\left [\array{ 1 & 2 & 1 & 2 & 1 &1\cr −2&−3& 0 &−5&−1&0 \cr 1 & 1 & 0 & 2 & 1 &0\cr −2&−3&−1&−3&−2&0 \cr −1&−3&−1&−3& 1 &0 } \right ]\xrightarrow{\text{RREF}}\left [\array{ \text{1}&0&0&0&0&−3\cr 0&\text{1}&0&0&0& 0 \cr 0&0&\text{1}&0&0& 1\cr 0&0&0&\text{1}&0& 1 \cr 0&0&0&0&\text{1}& 1 } \right ]\quad\text{so }{B}_{1} = \left [\array{ −3\cr 0 \cr 1\cr 1 \cr 1 } \right ]

Row-reduce the augmented matrix of the linear system ℒS(A, {e}_{2}),

\left [\array{ 1 & 2 & 1 & 2 & 1 &0\cr −2&−3& 0 &−5&−1&1 \cr 1 & 1 & 0 & 2 & 1 &0\cr −2&−3&−1&−3&−2&0 \cr −1&−3&−1&−3& 1 &0 } \right ]\xrightarrow{\text{RREF}}\left [\array{ \text{1}&0&0&0&0& 3\cr 0&\text{1}&0&0&0&−2 \cr 0&0&\text{1}&0&0& 2\cr 0&0&0&\text{1}&0& 0 \cr 0&0&0&0&\text{1}&−1 } \right ]\quad\text{so }{B}_{2} = \left [\array{ 3\cr −2 \cr 2\cr 0 \cr −1 } \right ]

Row-reduce the augmented matrix of the linear system ℒS(A, {e}_{3}),

\left [\array{ 1 & 2 & 1 & 2 & 1 &0\cr −2&−3& 0 &−5&−1&0 \cr 1 & 1 & 0 & 2 & 1 &1\cr −2&−3&−1&−3&−2&0 \cr −1&−3&−1&−3& 1 &0 } \right ]\xrightarrow{\text{RREF}}\left [\array{ \text{1}&0&0&0&0& 6\cr 0&\text{1}&0&0&0&−5 \cr 0&0&\text{1}&0&0& 4\cr 0&0&0&\text{1}&0& 1 \cr 0&0&0&0&\text{1}&−2 } \right ]\quad\text{so }{B}_{3} = \left [\array{ 6\cr −5 \cr 4\cr 1 \cr −2 } \right ]

Row-reduce the augmented matrix of the linear system ℒS(A, {e}_{4}),

\left [\array{ 1 & 2 & 1 & 2 & 1 &0\cr −2&−3& 0 &−5&−1&0 \cr 1 & 1 & 0 & 2 & 1 &0\cr −2&−3&−1&−3&−2&1 \cr −1&−3&−1&−3& 1 &0 } \right ]\xrightarrow{\text{RREF}}\left [\array{ \text{1}&0&0&0&0&−1\cr 0&\text{1}&0&0&0&−1 \cr 0&0&\text{1}&0&0& 1\cr 0&0&0&\text{1}&0& 1 \cr 0&0&0&0&\text{1}& 0 } \right ]\quad\text{so }{B}_{4} = \left [\array{ −1\cr −1 \cr 1\cr 1 \cr 0 } \right ]

Row-reduce the augmented matrix of the linear system ℒS(A, {e}_{5}),

\left [\array{ 1 & 2 & 1 & 2 & 1 &0\cr −2&−3& 0 &−5&−1&0 \cr 1 & 1 & 0 & 2 & 1 &0\cr −2&−3&−1&−3&−2&0 \cr −1&−3&−1&−3& 1 &1 } \right ]\xrightarrow{\text{RREF}}\left [\array{ \text{1}&0&0&0&0&−2\cr 0&\text{1}&0&0&0& 1 \cr 0&0&\text{1}&0&0&−1\cr 0&0&0&\text{1}&0& 0 \cr 0&0&0&0&\text{1}& 1 } \right ]\quad\text{so }{B}_{5} = \left [\array{ −2\cr 1 \cr −1\cr 0 \cr 1 } \right ]

We can now collect our 5 solution vectors into the matrix B,

\eqalignno{ B = &[{B}_{1}|{B}_{2}|{B}_{3}|{B}_{4}|{B}_{5}] & & \cr = &\left [\left [\array{ −3\cr 0 \cr 1\cr 1 \cr 1 } \right ]\left \vert \left [\array{ 3\cr −2 \cr 2\cr 0 \cr −1 } \right ]\right .\left \vert \left [\array{ 6\cr −5 \cr 4\cr 1 \cr −2 } \right ]\right .\left \vert \left [\array{ −1\cr −1 \cr 1\cr 1 \cr 0 } \right ]\right .\left \vert \left [\array{ −2\cr 1 \cr −1\cr 0 \cr 1 } \right ]\right .\right ] & & \cr & = \left [\array{ −3& 3 & 6 &−1&−2\cr 0 &−2 &−5 &−1 & 1 \cr 1 & 2 & 4 & 1 &−1\cr 1 & 0 & 1 & 1 & 0 \cr 1 &−1&−2& 0 & 1 } \right ] & & }

By this method, we know that AB = {I}_{5}. Check that BA = {I}_{5}, and then we will know that we have the inverse of A.
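
Since all five systems share the same coefficient matrix, a computational shortcut presents itself. In the Python sketch below (one tool among many), np.linalg.solve accepts a whole matrix of right-hand sides, so a single call solves all five systems and returns the columns of B at once.

    import numpy as np

    A = np.array([[ 1,  2,  1,  2,  1],
                  [-2, -3,  0, -5, -1],
                  [ 1,  1,  0,  2,  1],
                  [-2, -3, -1, -3, -2],
                  [-1, -3, -1, -3,  1]], dtype=float)

    # The right-hand sides e_1, ..., e_5 are the columns of I_5, so one
    # call solves all five systems; the solutions are the columns of B.
    B = np.linalg.solve(A, np.eye(5))
    print(np.round(B).astype(int))
    # [[-3  3  6 -1 -2]
    #  [ 0 -2 -5 -1  1]
    #  [ 1  2  4  1 -1]
    #  [ 1  0  1  1  0]
    #  [ 1 -1 -2  0  1]]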

Notice how the five systems of equations in the preceding example were all solved by exactly the same sequence of row operations. Wouldn’t it be nice to avoid this obvious duplication of effort? Our main theorem for this section follows, and it mimics this previous example, while also avoiding all the overhead.

Theorem CINM
Computing the Inverse of a Nonsingular Matrix
Suppose A is a nonsingular square matrix of size n. Create the n × 2n matrix M by placing the n × n identity matrix {I}_{n} to the right of the matrix A. Let N be a matrix that is row-equivalent to M and in reduced row-echelon form. Finally, let J be the matrix formed from the final n columns of N. Then AJ = {I}_{n}.

Proof   A is nonsingular, so by Theorem NMRRI there is a sequence of row operations that will convert A into {I}_{n}. It is this same sequence of row operations that will convert M into N, since having the identity matrix in the first n columns of N is sufficient to guarantee that N is in reduced row-echelon form.

If we consider the systems of linear equations ℒS(A, {e}_{i}), 1 ≤ i ≤ n, we see that the aforementioned sequence of row operations will also bring the augmented matrix of each of these systems into reduced row-echelon form. Furthermore, the unique solution to ℒS(A, {e}_{i}) appears in column n + 1 of the row-reduced augmented matrix of the system and is identical to column n + i of N. Let {N}_{1}, {N}_{2}, {N}_{3}, …, {N}_{2n} denote the columns of N. So we find,

\eqalignno{ AJ & = A[{N}_{n+1}|{N}_{n+2}|{N}_{n+3}|…|{N}_{n+n}] \cr & = [A{N}_{n+1}|A{N}_{n+2}|A{N}_{n+3}|…|A{N}_{n+n}] & &\text{Definition MM} \cr & = [{e}_{1}|{e}_{2}|{e}_{3}|…|{e}_{n}] \cr & = {I}_{n} & &\text{Definition IM} }

as desired.

We have to be just a bit careful here about both what this theorem says and what it doesn’t say. If A is a nonsingular matrix, then we are guaranteed a matrix B such that AB = {I}_{n}, and the proof gives us a process for constructing B. However, the definition of the inverse of a matrix (Definition MI) requires that BA = {I}_{n} also. So at this juncture we must compute the matrix product in the “opposite” order before we claim B as the inverse of A. However, we’ll soon see that this is always the case, in Theorem OSIS, so the title of this theorem is not inaccurate.
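
The procedure of Theorem CINM is straightforward to mechanize. The sketch below (Python with NumPy; the function name is our own, and partial pivoting is included only so the elimination never divides by zero — it is an illustration, not a numerically robust implementation) forms M = [A | I_n], row-reduces, and returns the final n columns. Applied to the coefficient matrix of Archetype B, it reproduces the matrix B of Example SABMI.

    import numpy as np

    def inverse_by_row_reduction(A):
        # Theorem CINM: row-reduce M = [A | I_n], keep the last n columns.
        n = A.shape[0]
        M = np.hstack([A.astype(float), np.eye(n)])
        for j in range(n):
            p = j + np.argmax(np.abs(M[j:, j]))   # choose a usable pivot row
            if np.isclose(M[p, j], 0.0):
                raise ValueError("A is singular; Theorem CINM does not apply")
            M[[j, p]] = M[[p, j]]                 # swap it into position
            M[j] /= M[j, j]                       # scale to get a leading 1
            for i in range(n):
                if i != j:
                    M[i] -= M[i, j] * M[j]        # clear the rest of column j
        return M[:, n:]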

What if A is singular? At this point we only know that Theorem CINM cannot be applied. The question of A’s inverse is still open. (But see Theorem NI in the next section.) We’ll finish by computing the inverse for the coefficient matrix of Archetype B, the one we just pulled from a hat in Example SABMI. There are more examples in the Archetypes (Appendix A) to practice with, though notice that it is silly to ask for the inverse of a rectangular matrix (the sizes aren’t right) and not every square matrix has an inverse (remember Example MWIAA?).

Example CMIAB
Computing a matrix inverse, Archetype B
Archetype B has a coefficient matrix given as

B = \left [\array{ −7&−6&−12\cr 5 & 5 & 7 \cr 1 & 0 & 4 } \right ]

Exercising Theorem CINM we set

M = \left [\array{ −7&−6&−12&1&0&0\cr 5 & 5 & 7 &0&1&0 \cr 1 & 0 & 4 &0&0&1 } \right ]

which row-reduces to

N = \left [\array{ 1&0&0&−10&−12&−9 \cr 0&1&0& {13\over 2} & 8 & {11\over 2} \cr 0&0&1& {5\over 2} & 3 & {5\over 2} } \right ]

So

{B}^{−1} = \left [\array{ −10&−12&−9 \cr {13\over 2} & 8 & {11\over 2} \cr {5\over 2} & 3 & {5\over 2} } \right ]

once we check that {B}^{−1}B = {I}_{ 3} (the product in the opposite order is a consequence of the theorem).

While we can use a row-reducing procedure to compute any needed inverse, most computational devices have a built-in procedure to compute the inverse of a matrix straightaway.   See: Computation MI.MMA Computation MI.SAGE
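
For instance, with NumPy (one computational device among many; the Mathematica and Sage procedures referenced above play the same role), the built-in routine numpy.linalg.inv recovers the inverse of Example SABMI directly.

    import numpy as np

    A = np.array([[-7, -6, -12],
                  [ 5,  5,   7],
                  [ 1,  0,   4]], dtype=float)
    print(np.linalg.inv(A))
    # [[-10.  -12.   -9. ]
    #  [  6.5   8.    5.5]
    #  [  2.5   3.    2.5]]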

Subsection PMI: Properties of Matrix Inverses

The inverse of a matrix enjoys some nice properties. We collect a few here. First, a matrix can have but one inverse.

Theorem MIU
Matrix Inverse is Unique
Suppose the square matrix A has an inverse. Then {A}^{−1} is unique.

Proof   As described in Technique U, we will assume that A has two inverses. The hypothesis tells us there is at least one. Suppose then that B and C are both inverses for A, so we know by Definition MI that AB = BA = {I}_{n} and AC = CA = {I}_{n}. Then we have,

\eqalignno{ B & = B{I}_{n} & &\text{Theorem MMIM} \cr & = B(AC) & &\text{Definition MI} \cr & = (BA)C & &\text{Theorem MMA} \cr & = {I}_{n}C & &\text{Definition MI} \cr & = C & &\text{Theorem MMIM} }

So we conclude that B and C are the same matrix; they cannot be different. So any matrix that acts like an inverse must be the inverse.

When most of us dress in the morning, we put on our socks first, followed by our shoes. In the evening we must then first remove our shoes, followed by our socks. Try to connect the conclusion of the following theorem with this everyday example.

Theorem SS
Socks and Shoes
Suppose A and B are invertible matrices of size n. Then AB is an invertible matrix and {(AB)}^{−1} = {B}^{−1}{A}^{−1}.

Proof   At the risk of carrying our everyday analogies too far, the proof of this theorem is quite easy when we compare it to the workings of a dating service. We have a statement about the inverse of the matrix AB, which for all we know right now might not even exist. Suppose AB was to sign up for a dating service with two requirements for a compatible date. Upon multiplication on the left, and on the right, the result should be the identity matrix. In other words, AB’s ideal date would be its inverse.

Now along comes the matrix {B}^{−1}{A}^{−1} (which we know exists because our hypothesis says both A and B are invertible and we can form the product of these two matrices), also looking for a date. Let’s see if {B}^{−1}{A}^{−1} is a good match for AB. First they meet at a non-committal neutral location, say a coffee shop, for quiet conversation:

\eqalignno{ ({B}^{−1}{A}^{−1})(AB) & = {B}^{−1}({A}^{−1}A)B & &\text{Theorem MMA} \cr & = {B}^{−1}{I}_{n}B & &\text{Definition MI} \cr & = {B}^{−1}B & &\text{Theorem MMIM} \cr & = {I}_{n} & &\text{Definition MI} }

The first date having gone smoothly, a second, more serious, date is arranged, say dinner and a show:

\eqalignno{ (AB)({B}^{−1}{A}^{−1}) & = A(B{B}^{−1}){A}^{−1} & &\text{Theorem MMA} \cr & = A{I}_{n}{A}^{−1} & &\text{Definition MI} \cr & = A{A}^{−1} & &\text{Theorem MMIM} \cr & = {I}_{n} & &\text{Definition MI} }

So the matrix {B}^{−1}{A}^{−1} has met all of the requirements to be AB’s inverse (date) and with the ensuing marriage proposal we can announce that {(AB)}^{−1} = {B}^{−1}{A}^{−1}.
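
Theorem SS is easy to test numerically. A short Python sketch (the two matrices are our own choices, not from the text) confirms that the inverse of AB matches {B}^{−1}{A}^{−1}, while the tempting-but-wrong order {A}^{−1}{B}^{−1} does not.

    import numpy as np

    A = np.array([[1.0, 1.0], [0.0, 1.0]])  # invertible, det(A) = 1
    B = np.array([[1.0, 0.0], [2.0, 1.0]])  # invertible, det(B) = 1

    lhs = np.linalg.inv(A @ B)
    # Socks and shoes: the inverse of AB is B^{-1} A^{-1} ...
    print(np.allclose(lhs, np.linalg.inv(B) @ np.linalg.inv(A)))  # True
    # ... and in general it is not A^{-1} B^{-1}.
    print(np.allclose(lhs, np.linalg.inv(A) @ np.linalg.inv(B)))  # False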

Theorem MIMI
Matrix Inverse of a Matrix Inverse
Suppose A is an invertible matrix. Then {A}^{−1} is invertible and {({A}^{−1})}^{−1} = A.

Proof   As with the proof of Theorem SS, we examine if A is a suitable inverse for {A}^{−1} (by definition, the opposite is true).

\eqalignno{ A{A}^{−1} & = {I}_{n} & &\text{Definition MI, and} \cr {A}^{−1}A & = {I}_{n} & &\text{Definition MI} }

The matrix A has met all the requirements to be the inverse of {A}^{−1}, and so is invertible and we can write A = {({A}^{−1})}^{−1}.

Theorem MIT
Matrix Inverse of a Transpose
Suppose A is an invertible matrix. Then {A}^{t} is invertible and {({A}^{t})}^{−1} = {({A}^{−1})}^{t}.

Proof   As with the proof of Theorem SS, we see if {({A}^{−1})}^{t} is a suitable inverse for {A}^{t}. Apply Theorem MMT to see that

\eqalignno{ {({A}^{−1})}^{t}{A}^{t} & = {(A{A}^{−1})}^{t} & &\text{Theorem MMT} \cr & = {I}_{n}^{t} & &\text{Definition MI} \cr & = {I}_{n} & &\text{Definition SYM, and} \cr {A}^{t}{({A}^{−1})}^{t} & = {({A}^{−1}A)}^{t} & &\text{Theorem MMT} \cr & = {I}_{n}^{t} & &\text{Definition MI} \cr & = {I}_{n} & &\text{Definition SYM} }

The matrix {({A}^{−1})}^{t} has met all the requirements to be the inverse of {A}^{t}, and so is invertible and we can write {({A}^{t})}^{−1} = {({A}^{−1})}^{t}.

Theorem MISM
Matrix Inverse of a Scalar Multiple
Suppose A is an invertible matrix and α is a nonzero scalar. Then αA is invertible and {\left (αA\right )}^{−1} = {1\over α}{A}^{−1}.

Proof   As with the proof of Theorem SS, we see if {1\over α}{A}^{−1} is a suitable inverse for αA.

\eqalignno{ \left ( {1\over α}{A}^{−1}\right )\left (αA\right ) & = \left ( {1\over α}α\right )\left ({A}^{−1}A\right ) & &\text{Theorem MMSMM} \cr & = 1{I}_{n} & &\text{Scalar multiplicative inverses} \cr & = {I}_{n} & &\text{Property OM, and} \cr \left (αA\right )\left ( {1\over α}{A}^{−1}\right ) & = \left (α{1\over α}\right )\left (A{A}^{−1}\right ) & &\text{Theorem MMSMM} \cr & = 1{I}_{n} & &\text{Scalar multiplicative inverses} \cr & = {I}_{n} & &\text{Property OM} }

The matrix {1\over α}{A}^{−1} has met all the requirements to be the inverse of αA, so we can write {\left (αA\right )}^{−1} = {1\over α}{A}^{−1}.
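
Theorems MIMI, MIT and MISM can all be spot-checked in a few lines; here is a Python sketch using an invertible matrix of our own choosing.

    import numpy as np

    A = np.array([[2.0, 1.0], [1.0, 1.0]])  # invertible, det(A) = 1
    Ainv = np.linalg.inv(A)
    alpha = 5.0

    print(np.allclose(np.linalg.inv(Ainv), A))                  # Theorem MIMI
    print(np.allclose(np.linalg.inv(A.T), Ainv.T))              # Theorem MIT
    print(np.allclose(np.linalg.inv(alpha * A), Ainv / alpha))  # Theorem MISM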

Notice that there are some likely theorems that are missing here. For example, it would be tempting to think that {(A + B)}^{−1} = {A}^{−1} + {B}^{−1}, but this is false. Can you find a counterexample? (See Exercise MISLE.T10.)

Subsection READ: Reading Questions

  1. Compute the inverse of the matrix below.
    \left [\array{ 4&10\cr 2& 6} \right ]
  2. Compute the inverse of the matrix below.
    \left [\array{ 2 & 3 & 1\cr 1 &−2 &−3 \cr −2& 4 & 6} \right ]
  3. Explain why Theorem SS has the title it does. (Do not just state the theorem, explain the choice of the title making reference to the theorem itself.)

Subsection EXC: Exercises

C16 If it exists, find the inverse of A = \left [\array{ 1& 0 &1\cr 1& 1 &1 \cr 2&−1&1 } \right ], and check your answer.  
Contributed by Chris Black Solution

C17 If it exists, find the inverse of A = \left [\array{ 2&−1&1\cr 1& 2 &1 \cr 3& 1 &2 } \right ], and check your answer.  
Contributed by Chris Black Solution

C18 If it exists, find the inverse of A = \left [\array{ 1&3&1\cr 1&2 &1 \cr 2&2&1} \right ], and check your answer.  
Contributed by Chris Black Solution

C19 If it exists, find the inverse of A = \left [\array{ 1&3&1\cr 0&2 &1 \cr 2&2&1} \right ], and check your answer.  
Contributed by Chris Black Solution

C21 Verify that B is the inverse of A.

\eqalignno{ A & = \left [\array{ 1 & 1 &−1& 2\cr −2 &−1 & 2 &−3 \cr 1 & 1 & 0 & 2\cr −1 & 2 & 0 & 2 } \right ] &B & = \left [\array{ 4 & 2 & 0 &−1\cr 8 & 4 &−1 &−1 \cr −1& 0 & 1 & 0\cr −6 &−3 & 1 & 1 } \right ] & & & & }

 
Contributed by Robert Beezer Solution

C22 Recycle the matrices A and B from Exercise MISLE.C21 and set

\eqalignno{ c & = \left [\array{ 2\cr 1 \cr −3\cr 2 } \right ] &d & = \left [\array{ 1\cr 1 \cr 1\cr 1 } \right ] & & & & }

Employ the matrix B to solve the two linear systems ℒS(A, c) and ℒS(A, d).
Contributed by Robert Beezer Solution

C23 If it exists, find the inverse of the 2 × 2 matrix

\eqalignno{ A = \left [\array{ 7&3\cr 5&2 } \right ] & & }

and check your answer. (See Theorem TTMI.)  
Contributed by Robert Beezer

C24 If it exists, find the inverse of the 2 × 2 matrix

\eqalignno{ A = \left [\array{ 6&3\cr 4&2 } \right ] & & }

and check your answer. (See Theorem TTMI.)  
Contributed by Robert Beezer

C25 At the conclusion of Example CMI, verify that BA = {I}_{5} by computing the matrix product.  
Contributed by Robert Beezer

C26 Let

D = \left [\array{ 1 &−1& 3 &−2&1\cr −2 & 3 &−5 & 3 &0 \cr 1 &−1& 4 &−2&2\cr −1 & 4 &−1 & 0 &4 \cr 1 & 0 & 5 &−2&5 } \right ]

Compute the inverse of D, {D}^{−1}, by forming the 5 × 10 matrix \left [\,D\mid {I}_{5}\,\right ] and row-reducing (Theorem CINM). Then use a calculator to compute {D}^{−1} directly.
Contributed by Robert Beezer Solution

C27 Let

E = \left [\array{ 1 &−1& 3 &−2& 1\cr −2 & 3 &−5 & 3 &−1 \cr 1 &−1& 4 &−2& 2\cr −1 & 4 &−1 & 0 & 2 \cr 1 & 0 & 5 &−2& 4 } \right ]

Compute the inverse of E, {E}^{−1}, by forming the 5 × 10 matrix \left [\,E\mid {I}_{5}\,\right ] and row-reducing (Theorem CINM). Then use a calculator to compute {E}^{−1} directly.
Contributed by Robert Beezer Solution

C28 Let

C = \left [\array{ 1 & 1 & 3 & 1\cr −2 &−1 &−4 &−1 \cr 1 & 4 &10& 2\cr −2 & 0 &−4 & 5 } \right ]

Compute the inverse of C, {C}^{−1}, by forming the 4 × 8 matrix \left [\,C\mid {I}_{4}\,\right ] and row-reducing (Theorem CINM). Then use a calculator to compute {C}^{−1} directly.
Contributed by Robert Beezer Solution

C40 Find all solutions to the system of equations below, making use of the matrix inverse found in Exercise MISLE.C28.

\eqalignno{ {x}_{1} + {x}_{2} + 3{x}_{3} + {x}_{4} & = −4 & & \cr − 2{x}_{1} − {x}_{2} − 4{x}_{3} − {x}_{4} & = 4 & & \cr {x}_{1} + 4{x}_{2} + 10{x}_{3} + 2{x}_{4} & = −20 & & \cr − 2{x}_{1} − 4{x}_{3} + 5{x}_{4} & = 9 & & }

 
Contributed by Robert Beezer Solution

C41 Use the inverse of a matrix to find all the solutions to the following system of equations.

\eqalignno{ {x}_{1} + 2{x}_{2} − {x}_{3} & = −3 & & \cr 2{x}_{1} + 5{x}_{2} − {x}_{3} & = −4 & & \cr − {x}_{1} − 4{x}_{2} & = 2 & & }

 
Contributed by Robert Beezer Solution

C42 Use a matrix inverse to solve the linear system of equations.

\eqalignno{ {x}_{1} − {x}_{2} + 2{x}_{3} & = 5 & & \cr {x}_{1} − 2{x}_{3} & = −8 & & \cr 2{x}_{1} − {x}_{2} − {x}_{3} & = −6 & & }

 
Contributed by Robert Beezer Solution

T10 Construct an example to demonstrate that {(A + B)}^{−1} = {A}^{−1} + {B}^{−1} is not true for all square matrices A and B of the same size.  
Contributed by Robert Beezer Solution

Subsection SOL: Solutions

C16 Contributed by Chris Black Statement
Answer: {A}^{−1} = \left [\array{ −2& 1 & 1\cr −1 & 1 & 0 \cr 3 &−1&−1 } \right ].

C17 Contributed by Chris Black Statement
The procedure we have for finding a matrix inverse fails for this matrix A since A does not row-reduce to {I}_{3}. We suspect in this case that A is not invertible, although we do not yet know that concretely. (Stay tuned for upcoming revelations in Section MINM!)

C18 Contributed by Chris Black Statement
Answer: {A}^{−1} = \left [\array{ 0 &−1& 1\cr 1 &−1 & 0 \cr −2& 4 &−1 } \right ]

C19 Contributed by Chris Black Statement
Answer: {A}^{−1} = \left [\array{ 0 &−1∕2& 1∕2\cr 1 &−1∕2 &−1∕2 \cr −2& 2 & 1 } \right ]

C21 Contributed by Robert Beezer Statement
Check that both matrix products (Definition MM) AB and BA equal the 4 × 4 identity matrix {I}_{4} (Definition IM).

C22 Contributed by Robert Beezer Statement
Represent each of the two systems by a vector equality, Ax = c and Ay = d. Then in the spirit of Example SABMI, solutions are given by

\eqalignno{ x & = Bc = \left [\array{ 8\cr 21 \cr −5\cr −16 } \right ] &y & = Bd = \left [\array{ 5\cr 10 \cr 0\cr −7 } \right ] & & & & }

Notice how we could solve many more systems having A as the coefficient matrix, and how each such system has a unique solution. You might check your work by substituting the solutions back into the systems of equations, or forming the linear combinations of the columns of A suggested by Theorem SLSLC.

C26 Contributed by Robert Beezer Statement
The inverse of D is

{ D}^{−1} = \left [\array{ −7&−6&−3& 2 & 1\cr −7 &−4 & 2 & 2 &−1 \cr −5&−2& 3 & 1 &−1\cr −6 &−3 & 1 & 1 & 0 \cr 4 & 2 &−2&−1& 1 } \right ]

C27 Contributed by Robert Beezer Statement
The matrix E has no inverse, though we do not yet have a theorem that allows us to reach this conclusion. However, when row-reducing the matrix \left [\,E\mid {I}_{5}\,\right ], the first 5 columns will not row-reduce to the 5 × 5 identity matrix, so we are at a loss as to how we might compute the inverse. When requesting that your calculator compute {E}^{−1}, it should give some indication that E does not have an inverse.

C28 Contributed by Robert Beezer Statement
Employ Theorem CINM,

\left [\array{ 1 & 1 & 3 & 1 &1&0&0&0\cr −2&−1&−4&−1&0&1&0&0 \cr 1 & 4 &10& 2 &0&0&1&0\cr −2& 0 &−4& 5 &0&0&0&1 } \right ]\xrightarrow{\text{RREF}}\left [\array{ \text{1}&0&0&0& 38 & 18 & −5 &−2\cr 0&\text{1}&0&0& 96 & 47 &−12&−5 \cr 0&0&\text{1}&0&−39&−19& 5 & 2\cr 0&0&0&\text{1}&−16& −8 & 2 & 1 } \right ]

And therefore we see that C is nonsingular (C row-reduces to the identity matrix, Theorem NMRRI) and by Theorem CINM,

{ C}^{−1} = \left [\array{ 38 & 18 & −5 &−2\cr 96 & 47 &−12 &−5 \cr −39&−19& 5 & 2\cr −16 & −8 & 2 & 1 } \right ]

C40 Contributed by Robert Beezer Statement
View this system as ℒS(C, b), where C is the 4 × 4 matrix from Exercise MISLE.C28 and b = \left [\array{ −4\cr 4 \cr −20\cr 9 } \right ]. Since C was seen to be nonsingular in Exercise MISLE.C28, Theorem SNCM says the solution, which is unique by Theorem NMUS, is given by

{ C}^{−1}b = \left [\array{ 38 & 18 & −5 &−2\cr 96 & 47 &−12 &−5 \cr −39&−19& 5 & 2\cr −16 & −8 & 2 & 1 } \right ]\left [\array{ −4\cr 4 \cr −20\cr 9 } \right ] = \left [\array{ 2\cr −1 \cr −2\cr 1 } \right ]

Notice that this solution can be easily checked in the original system of equations.

C41 Contributed by Robert Beezer Statement
The coefficient matrix of this system of equations is

A = \left [\array{ 1 & 2 &−1\cr 2 & 5 &−1 \cr −1&−4& 0 } \right ]

and the vector of constants is b = \left [\array{ −3\cr −4 \cr 2 } \right ]. So by Theorem SLEMM we can convert the system to the form Ax = b. Row-reducing this matrix yields the identity matrix so by Theorem NMRRI we know A is nonsingular. This allows us to apply Theorem SNCM to find the unique solution as

x = {A}^{−1}b = \left [\array{ −4& 4 & 3\cr 1 &−1 &−1 \cr −3& 2 & 1 } \right ]\left [\array{ −3\cr −4 \cr 2 } \right ] = \left [\array{ 2\cr −1 \cr 3 } \right ]

Remember, you can check this solution easily by evaluating the matrix-vector product Ax (Definition MVP).

C42 Contributed by Robert Beezer Statement
We can reformulate the linear system as a vector equality with a matrix-vector product via Theorem SLEMM. The system is then represented by Ax = b where

\eqalignno{ A & = \left [\array{ 1&−1& 2\cr 1& 0 &−2 \cr 2&−1&−1 } \right ] &b & = \left [\array{ 5\cr −8 \cr −6 } \right ] & & & & }

According to Theorem SNCM, if A is nonsingular then the (unique) solution will be given by {A}^{−1}b. We attempt the computation of {A}^{−1} through Theorem CINM, or with our favorite computational device, and obtain

\eqalignno{ {A}^{−1} = \left [\array{ 2&3&−2\cr 3&5 &−4 \cr 1&1&−1 } \right ] & & }

So by Theorem NI, we know A is nonsingular, and so the unique solution is

\eqalignno{ {A}^{−1}b = \left [\array{ 2&3&−2\cr 3&5 &−4 \cr 1&1&−1 } \right ]\left [\array{ 5\cr −8 \cr −6 } \right ] = \left [\array{ −2\cr −1 \cr 3 } \right ] & & }

T10 Contributed by Robert Beezer Statement
For a large collection of small examples, let D be any 2 × 2 matrix that has an inverse (Theorem TTMI can help you construct such a matrix; {I}_{2} is a simple choice). Set A = D and B = (−1)D. While {A}^{−1} and {B}^{−1} both exist, what is {\left (A + B\right )}^{−1}?

For a large collection of examples of any size, consider A = B = {I}_{n}. Can the proposed statement be salvaged to become a theorem?
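
A quick numerical confirmation of the suggestion above, as a Python sketch: with A = B = {I}_{2} we get A + B = 2{I}_{2}, so {(A + B)}^{−1} = {1\over 2}{I}_{2}, while {A}^{−1} + {B}^{−1} = 2{I}_{2}.

    import numpy as np

    A = np.eye(2)
    B = np.eye(2)                  # A = B = I_2, both invertible
    lhs = np.linalg.inv(A + B)     # (2 I_2)^{-1} = (1/2) I_2
    rhs = np.linalg.inv(A) + np.linalg.inv(B)   # 2 I_2
    print(np.allclose(lhs, rhs))   # False: the proposed identity fails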