Section MISLE Matrix Inverses and Systems of Linear Equations
Subsection SI Solutions and Inverses
We begin with a familiar example, performed in a novel way.
Example SABMI. Solutions to Archetype B with a matrix inverse.
Archetype B is the system of \(m=3\) linear equations in \(n=3\) variables
\begin{align*} -7x_1 -6x_2 -12x_3 &=-33\\ 5x_1 +5x_2 +7x_3 &=24\\ x_1 +4x_3 &=5\text{.} \end{align*}
By Theorem SLEMM we can represent this system of equations as \(A\vect{x}=\vect{b}\text{,}\) where
\begin{align*} A=\begin{bmatrix} -7&-6&-12\\ 5&5&7\\ 1&0&4 \end{bmatrix}&& \vect{x}=\colvector{x_1\\x_2\\x_3}&& \vect{b}=\colvector{-33\\24\\5}\text{.} \end{align*}
Now, entirely unmotivated, we define the \(3\times 3\) matrix \(B\text{,}\)
\begin{align*} B=\begin{bmatrix} -10&-12&-9\\ \frac{13}{2}&8&\frac{11}{2}\\ \frac{5}{2}&3&\frac{5}{2} \end{bmatrix} \end{align*}
and note the remarkable fact that
\begin{align*} BA= \begin{bmatrix} -10&-12&-9\\ \frac{13}{2}&8&\frac{11}{2}\\ \frac{5}{2}&3&\frac{5}{2} \end{bmatrix} \begin{bmatrix} -7&-6&-12\\ 5&5&7\\ 1&0&4 \end{bmatrix} = \begin{bmatrix} 1&0&0\\ 0&1&0\\ 0&0&1 \end{bmatrix}\text{.} \end{align*}
Now apply this computation to the problem of solving the system of equations
\begin{align*} \vect{x} &=I_3\vect{x}&& \knowl{./knowl/theorem-MMIM.html}{\text{Theorem MMIM}}\\ &=(BA)\vect{x}&&\text{Substitution}\\ &=B(A\vect{x})&& \knowl{./knowl/theorem-MMA.html}{\text{Theorem MMA}}\\ &=B\vect{b}&&\text{Substitution.} \end{align*}
So we have
\begin{align*} \vect{x}=B\vect{b}= \begin{bmatrix} -10&-12&-9\\ \frac{13}{2}&8&\frac{11}{2}\\ \frac{5}{2}&3&\frac{5}{2} \end{bmatrix} \colvector{-33\\24\\5} = \colvector{-3\\5\\2}\text{.} \end{align*}
So with the assistance of \(B\) we have been able to determine a solution to the system represented by \(A\vect{x}=\vect{b}\) through judicious use of matrix multiplication. We know by Theorem NMUS that since the coefficient matrix in this example is nonsingular, there would be a unique solution, no matter what the choice of \(\vect{b}\text{.}\) The derivation above amplifies this result, since we were forced to conclude that \(\vect{x}=B\vect{b}\) and the solution could not be anything else. Notice that this argument would hold for any particular choice of \(\vect{b}\text{.}\)
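For readers with Sage at hand, the computation is easy to replicate; this sketch just re-enters the data of the example:
```python
# Sage sketch: verify BA = I3 and recover the solution as x = B*b
A = matrix(QQ, [[-7, -6, -12], [5, 5, 7], [1, 0, 4]])
B = matrix(QQ, [[-10, -12, -9], [13/2, 8, 11/2], [5/2, 3, 5/2]])
b = vector(QQ, [-33, 24, 5])
B * A == identity_matrix(QQ, 3)  # the remarkable fact
x = B * b                        # x = (-3, 5, 2)
A * x == b                       # True: x solves the system
```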
Subsection IM Inverse of a Matrix
Definition MI. Matrix Inverse.
Suppose \(A\) and \(B\) are square matrices of size \(n\) such that \(AB=I_n\) and \(BA=I_n\text{.}\) Then \(A\) is invertible and \(B\) is the inverse of \(A\text{.}\) In this situation, we write \(B=\inverse{A}\text{.}\)
Example MWIAA. A matrix without an inverse, Archetype A.
Consider the coefficient matrix from Archetype A,
\begin{align*} A= \begin{bmatrix} 1&-1&2\\ 2&1&1\\ 1&1&0 \end{bmatrix}\text{.} \end{align*}
We will show that \(A\) is a matrix with no inverse, with a proof by contradiction. To this end, suppose that \(A\) is invertible, and call its inverse the matrix \(B\text{.}\) Choose the vector of constants
\begin{align*} \vect{b}=\colvector{1\\3\\2} \end{align*}
and consider the system of equations \(\linearsystem{A}{\vect{b}}\text{.}\) We could now proceed exactly as we did in Example SABMI, and employ the matrix \(B\) to determine a unique solution to this vector equation. Namely, the solution would be \(\vect{x}=B\vect{b}\text{.}\) In other words, the system is consistent.
However, we will now show the system \(\linearsystem{A}{\vect{b}}\) has no solutions. In other words, this system is inconsistent. Form the augmented matrix \(\augmented{A}{\vect{b}}\) and row-reduce to
\begin{align*} \begin{bmatrix} \leading{1} & 0 & 1 & 0\\ 0 & \leading{1} & -1 & 0\\ 0 & 0 & 0 & \leading{1} \end{bmatrix} \end{align*}
which allows us to recognize the inconsistency by Theorem RCLS.
So the assumption that \(A\) has an inverse leads to a logical inconsistency, as the system cannot be both consistent and inconsistent. Our assumption of an inverse is therefore false, and \(A\) is a matrix with no inverse (provably). We say \(A\) is not invertible.
It is possible this example is less than satisfying. Just where did that particular choice of the vector \(\vect{b}\) come from anyway? Stay tuned for an application of the future Theorem CSCS in Example CSAA.
Example MI. Matrix inverse.
Consider the matrices
\begin{align*} A=\begin{bmatrix} 1 & 2 & 1 & 2 & 1 \\ -2 & -3 & 0 & -5 & -1 \\ 1 & 1 & 0 & 2 & 1 \\ -2 & -3 & -1 & -3 & -2 \\ -1 & -3 & -1 & -3 & 1 \end{bmatrix}&& B=\begin{bmatrix} -3 & 3 & 6 & -1 & -2 \\ 0 & -2 & -5 & -1 & 1 \\ 1 & 2 & 4 & 1 & -1 \\ 1 & 0 & 1 & 1 & 0 \\ 1 & -1 & -2 & 0 & 1 \end{bmatrix}\text{.} \end{align*}
Then
\begin{align*} AB &= \begin{bmatrix} 1 & 2 & 1 & 2 & 1 \\ -2 & -3 & 0 & -5 & -1 \\ 1 & 1 & 0 & 2 & 1 \\ -2 & -3 & -1 & -3 & -2 \\ -1 & -3 & -1 & -3 & 1 \end{bmatrix} \begin{bmatrix} -3 & 3 & 6 & -1 & -2 \\ 0 & -2 & -5 & -1 & 1 \\ 1 & 2 & 4 & 1 & -1 \\ 1 & 0 & 1 & 1 & 0 \\ 1 & -1 & -2 & 0 & 1 \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 & 0 & 0\\ 0 & 1 & 0 & 0 & 0\\ 0 & 0 & 1 & 0 & 0\\ 0 & 0 & 0 & 1 & 0\\ 0 & 0 & 0 & 0 & 1 \end{bmatrix} \end{align*}
and
\begin{align*} BA &= \begin{bmatrix} -3 & 3 & 6 & -1 & -2 \\ 0 & -2 & -5 & -1 & 1 \\ 1 & 2 & 4 & 1 & -1 \\ 1 & 0 & 1 & 1 & 0 \\ 1 & -1 & -2 & 0 & 1 \end{bmatrix} \begin{bmatrix} 1 & 2 & 1 & 2 & 1 \\ -2 & -3 & 0 & -5 & -1 \\ 1 & 1 & 0 & 2 & 1 \\ -2 & -3 & -1 & -3 & -2 \\ -1 & -3 & -1 & -3 & 1 \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 & 0 & 0\\ 0 & 1 & 0 & 0 & 0\\ 0 & 0 & 1 & 0 & 0\\ 0 & 0 & 0 & 1 & 0\\ 0 & 0 & 0 & 0 & 1 \end{bmatrix} \end{align*}so by Definition MI, we can say that \(A\) is invertible and write \(B=\inverse{A}\text{.}\)
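Both products are quickly confirmed in Sage (a sketch, re-entering the two matrices):
```python
# Sage sketch: check AB = I5 and BA = I5 for Example MI
A = matrix(QQ, [[1, 2, 1, 2, 1], [-2, -3, 0, -5, -1], [1, 1, 0, 2, 1],
                [-2, -3, -1, -3, -2], [-1, -3, -1, -3, 1]])
B = matrix(QQ, [[-3, 3, 6, -1, -2], [0, -2, -5, -1, 1], [1, 2, 4, 1, -1],
                [1, 0, 1, 1, 0], [1, -1, -2, 0, 1]])
A * B == identity_matrix(QQ, 5) and B * A == identity_matrix(QQ, 5)  # True
```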
Subsection CIM Computing the Inverse of a Matrix
We have just seen inverses of matrices in Example SABMI and Example MI, but these inverse matrices have just dropped from the sky. How would we compute an inverse? And just when is a matrix invertible, and when is it not? Writing a putative inverse with \(n^2\) unknowns and solving the resulting \(n^2\) equations is one approach. Applying this approach to \(2\times 2\) matrices can get us somewhere, so just for fun, let us do it.
Theorem TTMI. Two-by-Two Matrix Inverse.
Suppose
\begin{align*} A=\begin{bmatrix}a&b\\c&d\end{bmatrix}\text{.} \end{align*}
Then \(A\) is invertible if and only if \(ad-bc\neq 0\text{.}\) When \(A\) is invertible, then
\begin{align*} \inverse{A}=\frac{1}{ad-bc}\begin{bmatrix}d&-b\\-c&a\end{bmatrix}\text{.} \end{align*}
Proof.
(⇐)
Assume that \(ad-bc\neq 0\text{.}\) We will use the definition of the inverse of a matrix to establish that \(A\) has an inverse (Definition MI). Note that if \(ad-bc\neq 0\) then the displayed formula for \(\inverse{A}\) is legitimate, since we are not dividing by zero. Using this proposed formula for the inverse of \(A\text{,}\) we simply compute
\begin{align*} A\inverse{A} &= \begin{bmatrix}a&b\\c&d\end{bmatrix} \left(\frac{1}{ad-bc} \begin{bmatrix}d&-b\\-c&a\end{bmatrix}\right) = \frac{1}{ad-bc} \begin{bmatrix}ad-bc&0\\0&ad-bc\end{bmatrix} = \begin{bmatrix}1&0\\0&1\end{bmatrix} \end{align*}
and
\begin{align*} \inverse{A}A &= \frac{1}{ad-bc} \begin{bmatrix}d&-b\\-c&a\end{bmatrix} \begin{bmatrix}a&b\\c&d\end{bmatrix} = \frac{1}{ad-bc} \begin{bmatrix}ad-bc&0\\0&ad-bc\end{bmatrix} = \begin{bmatrix}1&0\\0&1\end{bmatrix}\text{.} \end{align*}By Definition MI this is sufficient to establish that \(A\) is invertible, and that the expression for \(\inverse{A}\) is correct.
(⇒)
Assume that \(A\) is invertible, and proceed with a proof by contradiction (Proof Technique CD), by assuming also that \(ad-bc=0\text{.}\) This translates to \(ad=bc\text{.}\) Let
\begin{align*} B=\begin{bmatrix}e&f\\g&h\end{bmatrix} \end{align*}
be a putative inverse of \(A\text{.}\)
This means that
\begin{align*} \begin{bmatrix}1&0\\0&1\end{bmatrix} =I_2 =AB =\begin{bmatrix}a&b\\c&d\end{bmatrix} \begin{bmatrix}e&f\\g&h\end{bmatrix} =\begin{bmatrix}ae+bg&af+bh\\ce+dg&cf+dh\end{bmatrix}\text{.} \end{align*}
Working on the matrices on the two ends of this equation, we will multiply the top row by \(c\) and the bottom row by \(a\text{.}\)
\begin{align*} \begin{bmatrix}c&0\\0&a\end{bmatrix} =\begin{bmatrix}ace+bcg&acf+bch\\ace+adg&acf+adh\end{bmatrix} \end{align*}
We are assuming that \(ad=bc\text{,}\) so we can replace two occurrences of \(ad\) by \(bc\) in the bottom row of the right matrix.
\begin{align*} \begin{bmatrix}c&0\\0&a\end{bmatrix} =\begin{bmatrix}ace+bcg&acf+bch\\ace+bcg&acf+bch\end{bmatrix} \end{align*}
The matrix on the right now has two rows that are identical, and therefore the same must be true of the matrix on the left. Identical rows for the matrix on the left implies that \(a=0\) and \(c=0\text{.}\)
With this information, the product \(AB\) becomes
\begin{align*} \begin{bmatrix}1&0\\0&1\end{bmatrix} =I_2 =AB =\begin{bmatrix}0&b\\0&d\end{bmatrix} \begin{bmatrix}e&f\\g&h\end{bmatrix} =\begin{bmatrix}bg&bh\\dg&dh\end{bmatrix}\text{.} \end{align*}
So \(bg=dh=1\) and thus \(b,g,d,h\) are all nonzero. But then \(bh\) and \(dg\) (the “other corners”) must also be nonzero, even though as entries of \(I_2\) they equal zero, so this is (finally) a contradiction. So our assumption was false and we see that \(ad-bc\neq 0\) whenever \(A\) has an inverse.
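As a concrete illustration of the formula, with a matrix invented here just for this purpose, take
\begin{align*} A=\begin{bmatrix}1&2\\3&4\end{bmatrix} \end{align*}
so that \(ad-bc=(1)(4)-(2)(3)=-2\neq 0\) and
\begin{align*} \inverse{A}=\frac{1}{-2}\begin{bmatrix}4&-2\\-3&1\end{bmatrix} =\begin{bmatrix}-2&1\\\frac{3}{2}&-\frac{1}{2}\end{bmatrix}\text{,} \end{align*}
which is easily checked against Definition MI.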
Example CMI. Computing a matrix inverse.
Consider the matrix defined in Example MI,
\begin{align*} A=\begin{bmatrix} 1 & 2 & 1 & 2 & 1 \\ -2 & -3 & 0 & -5 & -1 \\ 1 & 1 & 0 & 2 & 1 \\ -2 & -3 & -1 & -3 & -2 \\ -1 & -3 & -1 & -3 & 1 \end{bmatrix}\text{.} \end{align*}
For its inverse, we desire a matrix \(B\) so that \(AB=I_5\text{.}\) Emphasizing the structure of the columns and employing the definition of matrix multiplication (Definition MM), we have
\begin{align*} AB&=I_5\\ A[\vect{B}_1|\vect{B}_2|\vect{B}_3|\vect{B}_4|\vect{B}_5]&=[\vect{e}_1|\vect{e}_2|\vect{e}_3|\vect{e}_4|\vect{e}_5]\\ [A\vect{B}_1|A\vect{B}_2|A\vect{B}_3|A\vect{B}_4|A\vect{B}_5]&=[\vect{e}_1|\vect{e}_2|\vect{e}_3|\vect{e}_4|\vect{e}_5]\text{.} \end{align*}
Equating the matrices column-by-column we have
\begin{align*} A\vect{B}_1=\vect{e}_1&& A\vect{B}_2=\vect{e}_2&& A\vect{B}_3=\vect{e}_3&& A\vect{B}_4=\vect{e}_4&& A\vect{B}_5=\vect{e}_5\text{.} \end{align*}
Since the matrix \(B\) is what we are trying to compute, we can view each column, \(\vect{B}_i\text{,}\) as a column vector of unknowns in a linear system of equations. Then we have five systems of equations to solve, each with 5 equations in 5 variables. Notice that all 5 of these systems have the same coefficient matrix. We will now solve each system in turn.
Row-reduce the augmented matrix of the linear system \(\linearsystem{A}{\vect{e}_1}\text{,}\)
\begin{align*} \begin{bmatrix} 1 & 2 & 1 & 2 & 1 & 1\\ -2 & -3 & 0 & -5 & -1 & 0\\ 1 & 1 & 0 & 2 & 1 & 0\\ -2 & -3 & -1 & -3 & -2 & 0\\ -1 & -3 & -1 & -3 & 1 & 0 \end{bmatrix} \rref \begin{bmatrix} \leading{1} & 0 & 0 & 0 & 0 & -3\\ 0 & \leading{1} & 0 & 0 & 0 & 0\\ 0 & 0 & \leading{1} & 0 & 0 & 1\\ 0 & 0 & 0 & \leading{1} & 0 & 1\\ 0 & 0 & 0 & 0 & \leading{1} & 1 \end{bmatrix} ; \vect{B}_1=\colvector{-3\\0\\1\\1\\1}\\ \end{align*}Row-reduce the augmented matrix of the linear system \(\linearsystem{A}{\vect{e}_2}\text{,}\)
\begin{align*} \begin{bmatrix} 1 & 2 & 1 & 2 & 1 & 0\\ -2 & -3 & 0 & -5 & -1 & 1\\ 1 & 1 & 0 & 2 & 1 & 0\\ -2 & -3 & -1 & -3 & -2 & 0\\ -1 & -3 & -1 & -3 & 1 & 0 \end{bmatrix} \rref \begin{bmatrix} \leading{1} & 0 & 0 & 0 & 0 & 3\\ 0 & \leading{1} & 0 & 0 & 0 & -2\\ 0 & 0 & \leading{1} & 0 & 0 & 2\\ 0 & 0 & 0 & \leading{1} & 0 & 0\\ 0 & 0 & 0 & 0 & \leading{1} & -1 \end{bmatrix} ; \vect{B}_2=\colvector{3\\-2\\2\\0\\-1}\\ \end{align*}Row-reduce the augmented matrix of the linear system \(\linearsystem{A}{\vect{e}_3}\text{,}\)
\begin{align*} \begin{bmatrix} 1 & 2 & 1 & 2 & 1 & 0\\ -2 & -3 & 0 & -5 & -1 & 0\\ 1 & 1 & 0 & 2 & 1 & 1\\ -2 & -3 & -1 & -3 & -2 & 0\\ -1 & -3 & -1 & -3 & 1 & 0 \end{bmatrix} \rref \begin{bmatrix} \leading{1} & 0 & 0 & 0 & 0 & 6\\ 0 & \leading{1} & 0 & 0 & 0 & -5\\ 0 & 0 & \leading{1} & 0 & 0 & 4\\ 0 & 0 & 0 & \leading{1} & 0 & 1\\ 0 & 0 & 0 & 0 & \leading{1} & -2 \end{bmatrix} ; \vect{B}_3=\colvector{6\\-5\\4\\1\\-2}\\ \end{align*}Row-reduce the augmented matrix of the linear system \(\linearsystem{A}{\vect{e}_4}\text{,}\)
\begin{align*} \begin{bmatrix} 1 & 2 & 1 & 2 & 1 & 0\\ -2 & -3 & 0 & -5 & -1 & 0\\ 1 & 1 & 0 & 2 & 1 & 0\\ -2 & -3 & -1 & -3 & -2 & 1\\ -1 & -3 & -1 & -3 & 1 & 0 \end{bmatrix} \rref \begin{bmatrix} \leading{1} & 0 & 0 & 0 & 0 & -1\\ 0 & \leading{1} & 0 & 0 & 0 & -1\\ 0 & 0 & \leading{1} & 0 & 0 & 1\\ 0 & 0 & 0 & \leading{1} & 0 & 1\\ 0 & 0 & 0 & 0 & \leading{1} & 0 \end{bmatrix} ; \vect{B}_4=\colvector{-1\\-1\\1\\1\\0}\\ \end{align*}Row-reduce the augmented matrix of the linear system \(\linearsystem{A}{\vect{e}_5}\text{,}\)
\begin{align*} \begin{bmatrix} 1 & 2 & 1 & 2 & 1 & 0\\ -2 & -3 & 0 & -5 & -1 & 0\\ 1 & 1 & 0 & 2 & 1 & 0\\ -2 & -3 & -1 & -3 & -2 & 0\\ -1 & -3 & -1 & -3 & 1 & 1 \end{bmatrix} \rref \begin{bmatrix} \leading{1} & 0 & 0 & 0 & 0 & -2\\ 0 & \leading{1} & 0 & 0 & 0 & 1\\ 0 & 0 & \leading{1} & 0 & 0 & -1\\ 0 & 0 & 0 & \leading{1} & 0 & 0\\ 0 & 0 & 0 & 0 & \leading{1} & 1 \end{bmatrix} ; \vect{B}_5=\colvector{-2\\1\\-1\\0\\1} \end{align*}We can now collect our 5 solution vectors into the matrix \(B\text{,}\)
\begin{align*} B= [\vect{B}_1|\vect{B}_2|\vect{B}_3|\vect{B}_4|\vect{B}_5] = \begin{bmatrix} -3 & 3 & 6 & -1 & -2 \\ 0 & -2 & -5 & -1 & 1 \\ 1 & 2 & 4 & 1 & -1 \\ 1 & 0 & 1 & 1 & 0 \\ 1 & -1 & -2 & 0 & 1 \end{bmatrix}\text{.} \end{align*}
By this method, we know that \(AB=I_5\text{.}\) Check that \(BA=I_5\text{,}\) and then we will know that we have the inverse of \(A\text{.}\)
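The five solves can be bundled into a short Sage computation (a sketch; the list comprehension solves \(\linearsystem{A}{\vect{e}_i}\) for each \(i\)):
```python
# Sage sketch: build the inverse column-by-column, as in Example CMI
A = matrix(QQ, [[1, 2, 1, 2, 1], [-2, -3, 0, -5, -1], [1, 1, 0, 2, 1],
                [-2, -3, -1, -3, -2], [-1, -3, -1, -3, 1]])
I5 = identity_matrix(QQ, 5)
cols = [A.solve_right(I5.column(i)) for i in range(5)]  # solve A*B_i = e_i
B = column_matrix(QQ, cols)                             # columns B_1,...,B_5
A * B == I5 and B * A == I5                             # True
```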
Theorem CINM. Computing the Inverse of a Nonsingular Matrix.
Suppose \(A\) is a nonsingular square matrix of size \(n\text{.}\) Create the \(n\times 2n\) matrix \(M\) by placing the \(n\times n\) identity matrix \(I_n\) to the right of the matrix \(A\text{.}\) Let \(N\) be a matrix that is row-equivalent to \(M\) and in reduced row-echelon form. Finally, let \(J\) be the matrix formed from the final \(n\) columns of \(N\text{.}\) Then \(AJ=I_n\text{.}\)
Proof.
\(A\) is nonsingular, so by Theorem NMRRI there is a sequence of row operations that will convert \(A\) into \(I_n\text{.}\) It is this same sequence of row operations that will convert \(M\) into \(N\text{,}\) since having the identity matrix in the first \(n\) columns of \(N\) is sufficient to guarantee that \(N\) is in reduced row-echelon form.
If we consider the systems of linear equations, \(\linearsystem{A}{\vect{e}_i}\text{,}\) \(1\leq i\leq n\text{,}\) we see that the aforementioned sequence of row operations will also bring the augmented matrix of each of these systems into reduced row-echelon form. Furthermore, the unique solution to \(\linearsystem{A}{\vect{e}_i}\) appears in column \(n+1\) of the row-reduced augmented matrix of the system and is identical to column \(n+i\) of \(N\text{.}\) Let \(\vectorlist{N}{2n}\) denote the columns of \(N\text{.}\) So we find,
\begin{align*} AJ&=A[\vect{N}_{n+1}|\vect{N}_{n+2}|\vect{N}_{n+3}|\ldots|\vect{N}_{n+n}]\\ &=[A\vect{N}_{n+1}|A\vect{N}_{n+2}|A\vect{N}_{n+3}|\ldots|A\vect{N}_{n+n}]\\ &=[\vect{e}_1|\vect{e}_2|\vect{e}_3|\ldots|\vect{e}_n]\\ &=I_n \end{align*}
as desired.
Example CMIAB. Computing a matrix inverse, Archetype B.
Archetype B has a coefficient matrix given as
\begin{align*} B= \begin{bmatrix} -7&-6&-12\\ 5&5&7\\ 1&0&4 \end{bmatrix}\text{.} \end{align*}
Exercising Theorem CINM, we form the matrix \(M\) and row-reduce,
\begin{align*} M=& \begin{bmatrix} -7&-6&-12&1&0&0\\ 5&5&7&0&1&0\\ 1&0&4&0&0&1 \end{bmatrix} &\rref N=& \begin{bmatrix} 1&0&0&-10 & -12 & -9\\ 0&1&0&\frac{13}{2} & 8 & \frac{11}{2}\\ 0&0&1&\frac{5}{2} & 3 & \frac{5}{2} \end{bmatrix}\\ \end{align*}So
\begin{align*} \inverse{B}=& \begin{bmatrix} -10 & -12 & -9\\ \frac{13}{2} & 8 & \frac{11}{2}\\ \frac{5}{2} & 3 & \frac{5}{2} \end{bmatrix} \end{align*}once we check that \(\inverse{B}B=I_3\) (the product in the opposite order is a consequence of the theorem).
Sage MISLE. Matrix Inverse, Systems of Equations.
We can use the computational method described in this section in hopes of finding a matrix inverse, as Theorem CINM gets us halfway there. We will continue with the matrix from Example MI. First we check that the matrix is nonsingular so we can apply the theorem, then we get “half” an inverse, and verify that it also behaves as a “full” inverse by meeting the full definition of a matrix inverse (Definition MI).
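A sketch of such a cell follows (the matrix is the one from Example MI; the names M, N, J mirror Theorem CINM):
```python
# Sage sketch: "half" an inverse via Theorem CINM, then the full check
A = matrix(QQ, [[1, 2, 1, 2, 1], [-2, -3, 0, -5, -1], [1, 1, 0, 2, 1],
                [-2, -3, -1, -3, -2], [-1, -3, -1, -3, 1]])
A.is_singular()                          # False, so Theorem CINM applies
M = A.augment(identity_matrix(QQ, 5))    # the 5 x 10 matrix [A | I5]
N = M.rref()                             # row-reduce
J = N.matrix_from_columns(range(5, 10))  # the last 5 columns of N
A * J == identity_matrix(QQ, 5) and J * A == identity_matrix(QQ, 5)  # True
```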
Note that the matrix J is constructed by taking the last 5 columns of N (numbered 5 through 9) and using them in the matrix_from_columns() matrix method. What happens if you apply the procedure above to a singular matrix? That would be an instructive experiment to conduct.
With an inverse of a coefficient matrix in hand, we can easily solve systems of equations, in the style of Example SABMI. We will recycle the matrix A and its inverse, J, from above, so be sure to run those compute cells first if you are playing along. We consider a system with A as a coefficient matrix and solve a linear system twice, once the old way and once the new way. Recall that with a nonsingular coefficient matrix, the solution will be unique for any choice of const, so you can experiment by changing the vector of constants and re-executing the code.
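A sketch of such a cell, assuming A and J from the cell above, with a vector of constants invented for illustration:
```python
# Sage sketch: solve A*x = const twice, the old way and the new way
const = vector(QQ, [3, -1, 0, 2, 1])   # an arbitrary choice; experiment!
x_old = A.solve_right(const)           # the old way: Sage solves the system
x_new = J * const                      # the new way: multiply by the inverse
x_old == x_new and A * x_new == const  # True
```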
Subsection PMI Properties of Matrix Inverses
The inverse of a matrix enjoys some nice properties. We collect a few here. First, a matrix can have but one inverse.
Theorem MIU. Matrix Inverse is Unique.
Suppose the square matrix \(A\) has an inverse. Then \(\inverse{A}\) is unique.
Proof.
As described in Proof Technique U, we will assume that \(A\) has two inverses. The hypothesis tells us there is at least one. Suppose then that \(B\) and \(C\) are both inverses for \(A\text{,}\) so we know by Definition MI that \(AB=BA=I_n\) and \(AC=CA=I_n\text{.}\) Then we have
\begin{align*} B&=BI_n&& \knowl{./knowl/theorem-MMIM.html}{\text{Theorem MMIM}}\\ &=B(AC)&& \knowl{./knowl/definition-MI.html}{\text{Definition MI}}\\ &=(BA)C&& \knowl{./knowl/theorem-MMA.html}{\text{Theorem MMA}}\\ &=I_nC&& \knowl{./knowl/definition-MI.html}{\text{Definition MI}}\\ &=C&& \knowl{./knowl/theorem-MMIM.html}{\text{Theorem MMIM}}\text{.} \end{align*}
So we conclude that \(B\) and \(C\) are the same, and cannot be different. Any matrix that acts like an inverse must be the inverse.
Theorem SS. Socks and Shoes.
Suppose \(A\) and \(B\) are invertible matrices of size \(n\text{.}\) Then \(AB\) is an invertible matrix and \(\inverse{(AB)}=\inverse{B}\inverse{A}\text{.}\)
Proof.
At the risk of carrying our everyday analogies too far, the proof of this theorem is quite easy when we compare it to the workings of a dating service. We have a statement about the inverse of the matrix \(AB\text{,}\) which for all we know right now might not even exist. Suppose \(AB\) were to sign up for a dating service with two requirements for a compatible date. Upon multiplication on the left, and on the right, the result should be the identity matrix. In other words, \(AB\)'s ideal date would be its inverse.
Now along comes the matrix \(\inverse{B}\inverse{A}\) (which we know exists because our hypothesis says both \(A\) and \(B\) are invertible and we can form the product of these two matrices), also looking for a date. Let us see if \(\inverse{B}\inverse{A}\) is a good match for \(AB\text{.}\) First they meet at a noncommittal neutral location, say a coffee shop, for quiet conversation,
\begin{align*} (\inverse{B}\inverse{A})(AB) &=\inverse{B}(\inverse{A}A)B&& \knowl{./knowl/theorem-MMA.html}{\text{Theorem MMA}}\\ &=\inverse{B}I_nB&& \knowl{./knowl/definition-MI.html}{\text{Definition MI}}\\ &=\inverse{B}B&& \knowl{./knowl/theorem-MMIM.html}{\text{Theorem MMIM}}\\ &=I_n&& \knowl{./knowl/definition-MI.html}{\text{Definition MI}} \end{align*}
The first date having gone smoothly, a second, more serious, date is arranged, say dinner and a show,
\begin{align*} (AB)(\inverse{B}\inverse{A}) &=A(B\inverse{B})\inverse{A}&& \knowl{./knowl/theorem-MMA.html}{\text{Theorem MMA}}\\ &=AI_n\inverse{A}&& \knowl{./knowl/definition-MI.html}{\text{Definition MI}}\\ &=A\inverse{A}&& \knowl{./knowl/theorem-MMIM.html}{\text{Theorem MMIM}}\\ &=I_n&& \knowl{./knowl/definition-MI.html}{\text{Definition MI}} \end{align*}So the matrix \(\inverse{B}\inverse{A}\) has met all of the requirements to be \(AB\)'s inverse (date) and with the ensuing marriage proposal we can announce that \(\inverse{(AB)}=\inverse{B}\inverse{A}\text{.}\)
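A small numerical sanity check in Sage, with two invertible matrices invented for illustration:
```python
# Sage sketch: the inverse of a product reverses the order of the factors
A = matrix(QQ, [[1, 2], [3, 4]])
B = matrix(QQ, [[0, 1], [1, 1]])
(A * B).inverse() == B.inverse() * A.inverse()  # True
(A * B).inverse() == A.inverse() * B.inverse()  # False here: order matters
```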
Theorem MIMI. Matrix Inverse of a Matrix Inverse.
Suppose \(A\) is an invertible matrix. Then \(\inverse{A}\) is invertible and \(\inverse{\left(\inverse{A}\right)}=A\text{.}\)
Proof.
As with the proof of Theorem SS, we examine if \(A\) is a suitable inverse for \(\inverse{A}\) (by definition, the opposite is true).
\begin{align*} A\inverse{A}&=I_n&& \knowl{./knowl/definition-MI.html}{\text{Definition MI}} \end{align*}
and
\begin{align*} \inverse{A}A&=I_n&& \knowl{./knowl/definition-MI.html}{\text{Definition MI}} \end{align*}The matrix \(A\) has met all the requirements to be the inverse of \(\inverse{A}\text{,}\) and so is invertible and we can write \(A=\inverse{(\inverse{A})}\text{.}\)
Theorem MIT. Matrix Inverse of a Transpose.
Suppose \(A\) is an invertible matrix. Then \(\transpose{A}\) is invertible and \(\inverse{\left(\transpose{A}\right)}=\transpose{\left(\inverse{A}\right)}\text{.}\)
Proof.
As with the proof of Theorem SS, we see if \(\transpose{(\inverse{A})}\) is a suitable inverse for \(\transpose{A}\text{.}\) Apply Theorem MMT to see that
\begin{align*} \transpose{(\inverse{A})}\transpose{A} &=\transpose{(A\inverse{A})}&& \knowl{./knowl/theorem-MMT.html}{\text{Theorem MMT}}\\ &=\transpose{I_n}&& \knowl{./knowl/definition-MI.html}{\text{Definition MI}}\\ &=I_n&& \knowl{./knowl/definition-SYM.html}{\text{Definition SYM}} \end{align*}
and
\begin{align*} \transpose{A}\transpose{(\inverse{A})} &=\transpose{(\inverse{A}A)}&& \knowl{./knowl/theorem-MMT.html}{\text{Theorem MMT}}\\ &=\transpose{I_n}&& \knowl{./knowl/definition-MI.html}{\text{Definition MI}}\\ &=I_n&& \knowl{./knowl/definition-SYM.html}{\text{Definition SYM}}\text{.} \end{align*}The matrix \(\transpose{(\inverse{A})}\) has met all the requirements to be the inverse of \(\transpose{A}\text{,}\) and so is invertible and we can write \(\inverse{(\transpose{A})}=\transpose{(\inverse{A})}\text{.}\)
Theorem MISM. Matrix Inverse of a Scalar Multiple.
Suppose \(A\) is an invertible matrix and \(\alpha\) is a nonzero scalar. Then \(\inverse{\left(\alpha A\right)}=\frac{1}{\alpha}\inverse{A}\) and \(\alpha A\) is invertible.
Proof.
As with the proof of Theorem SS, we see if \(\frac{1}{\alpha}\inverse{A}\) is a suitable inverse for \(\alpha A\text{.}\)
\begin{align*} \left(\frac{1}{\alpha}\inverse{A}\right)\left(\alpha A\right) &=\left(\frac{1}{\alpha}\alpha\right)\left(\inverse{A}A\right)&& \knowl{./knowl/theorem-MMSMM.html}{\text{Theorem MMSMM}}\\ &=1I_n&&\text{Scalar multiplicative inverses}\\ &=I_n&& \knowl{./knowl/property-OM.html}{\text{Property OM}} \end{align*}
and
\begin{align*} \left(\alpha A\right)\left(\frac{1}{\alpha}\inverse{A}\right)&= \left(\alpha\frac{1}{\alpha}\right)\left(A\inverse{A}\right)&& \knowl{./knowl/theorem-MMSMM.html}{\text{Theorem MMSMM}}\\ &=1I_n&&\text{Scalar multiplicative inverses}\\ &=I_n&& \knowl{./knowl/property-OM.html}{\text{Property OM}} \end{align*}The matrix \(\frac{1}{\alpha}\inverse{A}\) has met all the requirements to be the inverse of \(\alpha A\text{,}\) so we can write \(\inverse{\left(\alpha A\right)}=\frac{1}{\alpha}\inverse{A}\text{.}\)
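All three of these properties are easy to test in Sage (a sketch, with one convenient invertible matrix):
```python
# Sage sketch: inverses of an inverse, a transpose, and a scalar multiple
A = matrix(QQ, [[1, 2], [3, 4]])
A.inverse().inverse() == A                          # Theorem MIMI
A.transpose().inverse() == A.inverse().transpose()  # Theorem MIT
(5 * A).inverse() == (1/5) * A.inverse()            # Theorem MISM
```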
Reading Questions MISLE Reading Questions
1.
Compute the inverse of the matrix below.
2.
Compute the inverse of the matrix below.
3.
Explain why Theorem SS has the title it does. (Do not just state the theorem, explain the choice of the title making reference to the theorem itself.)
Exercises MISLE Exercises
C16.
If it exists, find the inverse of A, and check your answer.
C17.
If it exists, find the inverse of \(A=\begin{bmatrix}2&-1&1\\1&2&1\\3&1&2\end{bmatrix}\text{,}\) and check your answer.
The procedure we have for finding a matrix inverse fails for this matrix \(A\) since \(A\) does not row-reduce to \(I_3\text{.}\) We suspect in this case that \(A\) is not invertible, although we do not yet know that concretely. (Stay tuned for upcoming revelations in Section MINM!)
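One way to witness the failure concretely in Sage (a sketch with the matrix re-entered):
```python
# Sage sketch: the procedure of Theorem CINM stalls on this matrix
A = matrix(QQ, [[2, -1, 1], [1, 2, 1], [3, 1, 2]])
A.rref()         # not the identity matrix, so Theorem CINM does not apply
A.is_singular()  # True
```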
C18.
If it exists, find the inverse of \(A=\begin{bmatrix}1&3&1\\1&2&1\\2&2&1\end{bmatrix}\text{,}\) and check your answer.
C19.
If it exists, find the inverse of \(A=\begin{bmatrix}1&3&1\\0&2&1\\2&2&1\end{bmatrix}\text{,}\) and check your answer.
C21.
Verify that B is the inverse of A.
Check that both matrix products (Definition MM) \(AB\) and \(BA\) equal the \(4\times 4\) identity matrix \(I_4\) (Definition IM).
C22.
Recycle the matrices A and B from Exercise MISLE.C21 and set
Employ the matrix \(B\) to solve the two linear systems \(\linearsystem{A}{\vect{c}}\) and \(\linearsystem{A}{\vect{d}}\text{.}\)
Represent each of the two systems by a vector equality, \(A\vect{x}=\vect{c}\) and \(A\vect{y}=\vect{d}\text{.}\) Then in the spirit of Example SABMI, solutions are given by
Notice how we could solve many more systems having \(A\) as the coefficient matrix, and how each such system has a unique solution. You might check your work by substituting the solutions back into the systems of equations, or forming the linear combinations of the columns of \(A\) suggested by Theorem SLSLC.
C23.
If it exists, find the inverse of the \(2\times 2\) matrix \(A\text{,}\) and check your answer. (See Theorem TTMI.)
C24.
If it exists, find the inverse of the \(2\times 2\) matrix \(A\text{,}\) and check your answer. (See Theorem TTMI.)
C25.
At the conclusion of Example CMI, verify that \(BA=I_5\) by computing the matrix product.
C26.
Let
Compute the inverse of \(D\text{,}\) \(\inverse{D}\text{,}\) by forming the \(5\times 10\) matrix \(\augmented{D}{I_5}\) and row-reducing (Theorem CINM). Then use a calculator to compute \(\inverse{D}\) directly.
The inverse of \(D\) is
C27.
Let
Compute the inverse of \(E\text{,}\) \(\inverse{E}\text{,}\) by forming the \(5\times 10\) matrix \(\augmented{E}{I_5}\) and row-reducing (Theorem CINM). Then use a calculator to compute \(\inverse{E}\) directly.
The matrix \(E\) has no inverse, though we do not yet have a theorem that allows us to reach this conclusion. However, when row-reducing the matrix \(\augmented{E}{I_5}\text{,}\) the first 5 columns will not row-reduce to the \(5\times 5\) identity matrix, so we are at a loss on how we might compute the inverse. When requesting that your calculator compute \(\inverse{E}\text{,}\) it should give some indication that \(E\) does not have an inverse.
C28.
Let
Compute the inverse of \(C\text{,}\) \(\inverse{C}\text{,}\) by forming the \(4\times 8\) matrix \(\augmented{C}{I_4}\) and row-reducing (Theorem CINM). Then use a calculator to compute \(\inverse{C}\) directly.
Employ Theorem CINM,
And therefore we see that \(C\) is nonsingular (\(C\) row-reduces to the identity matrix, Theorem NMRRI) and by Theorem CINM,
C40.
Find all solutions to the system of equations below, making use of the matrix inverse found in Exercise MISLE.C28.
View this system as \(\linearsystem{C}{\vect{b}}\text{,}\) where \(C\) is the \(4\times 4\) matrix from Exercise MISLE.C28 and \(\vect{b}=\colvector{-4\\4\\-20\\9}\text{.}\) Since \(C\) was seen to be nonsingular in Exercise MISLE.C28, Theorem SNCM says the solution, which is unique by Theorem NMUS, is given by
Notice that this solution can be easily checked in the original system of equations.
C41.
Use the inverse of a matrix to find all the solutions to the following system of equations.
The coefficient matrix of this system of equations is
and the vector of constants is \(\vect{b}=\colvector{-3\\-4\\2}\text{.}\) So by Theorem SLEMM we can convert the system to the form \(A\vect{x}=\vect{b}\text{.}\) Row-reducing this matrix yields the identity matrix so by Theorem NMRRI we know \(A\) is nonsingular. This allows us to apply Theorem SNCM to find the unique solution as
Remember, you can check this solution easily by evaluating the matrix-vector product \(A\vect{x}\) (Definition MVP).
C42.
Use a matrix inverse to solve the linear system of equations.
We can reformulate the linear system as a vector equality with a matrix-vector product via Theorem SLEMM. The system is then represented by \(A\vect{x}=\vect{b}\) where
According to Theorem SNCM, if \(A\) is nonsingular then the (unique) solution will be given by \(\inverse{A}\vect{b}\text{.}\) We attempt the computation of \(\inverse{A}\) through Theorem CINM, or with our favorite computational device and obtain,
So by Theorem NI, we know \(A\) is nonsingular, and so the unique solution is
T10.
Construct an example to demonstrate that \(\inverse{(A+B)}=\inverse{A}+\inverse{B}\) is not true for all square matrices \(A\) and \(B\) of the same size.
For a large collection of small examples, let \(D\) be any \(2\times 2\) matrix that has an inverse (Theorem TTMI can help you construct such a matrix, \(I_2\) is a simple choice). Set \(A=D\) and \(B=(-1)D\text{.}\) While \(\inverse{A}\) and \(\inverse{B}\) both exist, what is \(\inverse{\left(A+B\right)}\text{?}\)
For a large collection of examples of any size, consider \(A=B=I_n\text{.}\) Can the proposed statement be salvaged to become a theorem?
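The simplest choice is quickly confirmed in Sage (a sketch):
```python
# Sage sketch: A and B invertible, yet A + B has no inverse
A = identity_matrix(QQ, 2)
B = -A
A.is_invertible() and B.is_invertible()  # True
(A + B).is_invertible()                  # False: A + B is the zero matrix
```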