Section MINM Matrix Inverses and Nonsingular Matrices
Subsection NMI Nonsingular Matrices are Invertible
We need a couple of technical results for starters. Some books would call these minor, but essential, results lemmas. We'll just call 'em theorems. See Proof Technique LC for more on the distinction.

The first of these technical results is interesting in that the hypothesis says something about a product of two square matrices, and the conclusion then says the same thing about each individual matrix in the product. This result has an analogy in the algebra of complex numbers: for α,β∈C, we have αβ≠0 if and only if α≠0 and β≠0 (see Theorem ZPZF). We can view this result as suggesting that the term “nonsingular” for matrices is like the term “nonzero” for scalars.

Consider too that we know singular matrices, as coefficient matrices for systems of equations, will sometimes lead to systems with no solutions, or systems with infinitely many solutions (Theorem NMUS). What do linear equations with zero look like? Consider 0x=5, which has no solution, and 0x=0, which has infinitely many solutions. In the algebra of scalars, zero is exceptional (meaning different, not better), and in the algebra of matrices, singular matrices are also the exception. While there is only one zero scalar and there are infinitely many singular matrices, we will see that singular matrices are still a distinct minority.

You will recall that at the end of Section NM we remarked that “nonsingular” seemed an odd term for matrices that lead to systems with a single solution. Rather than thinking of “nonsingular” that way, think of it as being used for matrices much the way “nonzero” is used for scalars.

Theorem NPNF. Nonsingular Product has Nonsingular Factors.
Suppose that A and B are square matrices of size n. The product AB is nonsingular if and only if A and B are both nonsingular.
Proof.
(⇒)
For this portion of the proof we will form the logically equivalent contrapositive and prove that statement using two cases. “\(AB\) is nonsingular implies \(A\) and \(B\) are both nonsingular” becomes “\(A\) or \(B\) is singular implies \(AB\) is singular.” (Be sure to understand why the “and” became an “or”; see Proof Technique CP.)
Case 1. \(B\) is singular.
Because \(B\) is singular there must be a nonzero vector \(\vect{z}\) that is a solution to \(\homosystem{B}\text{.}\) So \((AB)\vect{z}=A(B\vect{z})=A\zerovector=\zerovector\text{,}\) and \(\vect{z}\) is a nonzero solution to \(\homosystem{AB}\text{,}\) so by Definition NM, \(AB\) is singular.

Case 2. \(B\) is nonsingular.
Since \(B\) is nonsingular, our hypothesis implies that \(A\) must be singular. Because \(A\) is singular, there is a nonzero vector \(\vect{y}\) that is a solution to \(\homosystem{A}\text{.}\) Now consider the linear system \(\linearsystem{B}{\vect{y}}\text{.}\) Since \(B\) is nonsingular, the system has a unique solution (Theorem NMUS), which we will denote as \(\vect{w}\text{.}\) We first claim \(\vect{w}\) is not the zero vector either. Assuming the opposite, suppose that \(\vect{w}=\zerovector\) (Proof Technique CD). Then \(\vect{y}=B\vect{w}=B\zerovector=\zerovector\text{,}\) contrary to \(\vect{y}\) being nonzero, so \(\vect{w}\neq\zerovector\text{.}\) Furthermore, \((AB)\vect{w}=A(B\vect{w})=A\vect{y}=\zerovector\text{,}\) so \(\vect{w}\) is a nonzero solution to \(\homosystem{AB}\text{,}\) and by Definition NM, \(AB\) is singular.

(⇐)
Now assume that both \(A\) and \(B\) are nonsingular. Suppose that \(\vect{x}\in\complex{n}\) is a solution to \(\homosystem{AB}\text{.}\) Then \(\zerovector=(AB)\vect{x}=A(B\vect{x})\text{.}\)
By Theorem SLEMM, \(B\vect{x}\) is a solution to \(\homosystem{A}\text{,}\) and by the definition of a nonsingular matrix (Definition NM), we conclude that \(B\vect{x}=\zerovector\text{.}\) Now, by an entirely similar argument, the nonsingularity of \(B\) forces us to conclude that \(\vect{x}=\zerovector\text{.}\) So the only solution to \(\homosystem{AB}\) is the zero vector and we conclude that \(AB\) is nonsingular by Definition NM.
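If you would like a quick numeric illustration of Theorem NPNF outside of Sage, here is a sketch in plain Python with \(2\times 2\) matrices of our own choosing. It uses the determinant as a stand-in for the row-reduction test of singularity (determinants appear later in the text; a matrix is singular exactly when its determinant is zero).

```python
def det2(M):
    """Determinant of a 2x2 matrix given as [[a, b], [c, d]]."""
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

def matmul(A, B):
    """Product of two 2x2 matrices."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[1, 2], [3, 5]]      # nonsingular: det = -1
B = [[2, 0], [1, 3]]      # nonsingular: det = 6
S = [[1, 2], [2, 4]]      # singular: det = 0

# Both factors nonsingular, so the product is nonsingular ...
assert det2(matmul(A, B)) != 0
# ... while a singular factor forces a singular product.
assert det2(matmul(A, S)) == 0
assert det2(matmul(S, B)) == 0
```

Of course this only illustrates the theorem for a few matrices; the proof above is what establishes it in general.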
Theorem OSIS. One-Sided Inverse is Sufficient.
Suppose A and B are square matrices of size n such that AB=In. Then BA=In.
Proof.
The matrix \(I_n\) is nonsingular (since it row-reduces easily to \(I_n\text{,}\) Theorem NMRRI). So \(A\) and \(B\) are nonsingular by Theorem NPNF, so in particular \(B\) is nonsingular. We can therefore apply Theorem CINM to assert the existence of a matrix \(C\) so that \(BC=I_n\text{.}\) This application of Theorem CINM could be a bit confusing, mostly because of the names of the matrices involved. \(B\) is nonsingular, so there must be a “right-inverse” for \(B\text{,}\) and we are calling it \(C\text{.}\)
Now \(BA=(BA)I_n=(BA)(BC)=B(AB)C=BI_nC=BC=I_n\text{,}\)
which is the desired conclusion.
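Theorem OSIS can be spot-checked in plain Python (not Sage). Below, \(B\) is a right-inverse of a matrix \(A\) of our own choosing, computed by hand; exact fractions keep the arithmetic clean.

```python
# Numeric illustration of Theorem OSIS: if AB = I2 then BA = I2 as well.
from fractions import Fraction as F

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

A = [[F(1), F(2)], [F(3), F(5)]]
# B is a right-inverse of A, found by hand (det A = -1).
B = [[F(-5), F(2)], [F(3), F(-1)]]

I2 = [[F(1), F(0)], [F(0), F(1)]]
assert matmul(A, B) == I2   # the hypothesis: AB = I2
assert matmul(B, A) == I2   # the conclusion: BA = I2
```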
Theorem NI. Nonsingularity is Invertibility.
Suppose that A is a square matrix. Then A is nonsingular if and only if A is invertible.
Proof.
(⇐)
Since \(A\) is invertible, we can write \(I_n=A\inverse{A}\) (Definition MI). Notice that \(I_n\) is nonsingular (Theorem NMRRI) so Theorem NPNF implies that \(A\) (and \(\inverse{A}\)) is nonsingular.
(⇒)
Suppose now that \(A\) is nonsingular. By Theorem CINM we find \(B\) so that \(AB=I_n\text{.}\) Then Theorem OSIS tells us that \(BA=I_n\text{.}\) So \(B\) is \(A\)'s inverse, and by construction, \(A\) is invertible.
Theorem NME3. Nonsingular Matrix Equivalences, Round 3.
Suppose that A is a square matrix of size n. The following are equivalent.
- A is nonsingular.
- A row-reduces to the identity matrix.
- The null space of A contains only the zero vector, N(A)={0}.
- The linear system LS(A,b) has a unique solution for every possible choice of b.
- The columns of A are a linearly independent set.
- A is invertible.
Proof.
We can update our list of equivalences for nonsingular matrices (Theorem NME2) with the equivalent condition from Theorem NI.
Theorem SNCM. Solution with Nonsingular Coefficient Matrix.
Suppose that A is nonsingular. Then the unique solution to LS(A,b) is A−1b.
Proof.
By Theorem NMUS we know already that \(\linearsystem{A}{\vect{b}}\) has a unique solution for every choice of \(\vect{b}\text{.}\) We need to show that the expression stated is indeed a solution (the solution). That is easy, just “plug it in” to the vector equation representation of the system (Theorem SLEMM): \(A\left(\inverse{A}\vect{b}\right)=\left(A\inverse{A}\right)\vect{b}=I_n\vect{b}=\vect{b}\text{.}\)
Since \(A\vect{x}=\vect{b}\) is true when we substitute \(\inverse{A}\vect{b}\) for \(\vect{x}\text{,}\) \(\inverse{A}\vect{b}\) is a (the!) solution to \(\linearsystem{A}{\vect{b}}\text{.}\) See the solution to Exercise MM.T38 for an alternate approach to the uniqueness of the system's solution.
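Here is Theorem SNCM carried out in plain Python (not Sage) on a small system of our own choosing; the \(2\times 2\) inverse is written out by hand.

```python
# Theorem SNCM in action: for nonsingular A, the unique solution of
# LS(A, b) is A^{-1} b.
from fractions import Fraction as F

A = [[F(2), F(1)], [F(5), F(3)]]
b = [F(4), F(11)]

# Inverse of a 2x2 matrix [[a, b], [c, d]] is (1/det) [[d, -b], [-c, a]].
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]          # det = 1
Ainv = [[A[1][1] / det, -A[0][1] / det],
        [-A[1][0] / det, A[0][0] / det]]

# The claimed solution x = A^{-1} b.
x = [sum(Ainv[i][j] * b[j] for j in range(2)) for i in range(2)]

# Substituting x back in recovers b, so x really solves the system.
assert [sum(A[i][j] * x[j] for j in range(2)) for i in range(2)] == b
```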
Theorem ITMT. Inverse of a Triangular Matrix is Triangular.
Suppose that A is a matrix that is nonsingular and triangular. Then the inverse of A, A−1, is triangular of the same type. Furthermore, the diagonal entries of A−1 are the reciprocals of the corresponding diagonal entries of A. More precisely, \(\matrixentry{\inverse{A}}{ii}=\matrixentry{A}{ii}^{-1}\text{.}\)
Proof.
We give the proof for the case when \(A\) is lower triangular of size \(n\text{,}\) and leave the case when \(A\) is upper triangular for you. Consider the process for computing the inverse of a matrix that is outlined in the proof of Theorem CINM. We augment \(A\) with the size \(n\) identity matrix, \(I_n\text{,}\) and row-reduce the \(n\times 2n\) matrix to reduced row-echelon form via the algorithm in Theorem REMEF. The proof involves tracking the peculiarities of this process in the case of a lower triangular matrix. Let \(M=\augmented{A}{I_n}\text{.}\)
First, by Theorem NTM, none of the diagonal elements of \(A\) are zero. We follow the procedure of Theorem REMEF in just a slightly different order, first creating a row-equivalent matrix \(M^\prime\text{.}\) For each \(1\leq i\leq n\text{,}\) multiply row \(i\) by the nonzero scalar \(\matrixentry{A}{ii}^{-1}\text{.}\) This sets \(\matrixentry{M^\prime}{ii}=1\) and \(\matrixentry{M^\prime}{i,n+i}=\matrixentry{A}{ii}^{-1}\text{,}\) and leaves every zero entry of \(M\) unchanged.
For \(1\leq j\leq n\text{,}\) let \(M_j\) denote the matrix obtained from \(M^\prime\) after converting columns \(1\) through \(j\) to pivot columns, with \(M_0=M^\prime\text{.}\) We can convert column \(j\) of \(M_{j-1}\) into a pivot column with a set of \(n-j\) row operations of the form \(\rowopadd{\alpha}{j}{k}\) with \(j+1\leq k\leq n\text{.}\) The key observation here is that we add multiples of row \(j\) only to higher-numbered rows. This means that none of the entries in rows \(1\) through \(j-1\) is changed, and since row \(j\) has zeros in columns \(j+1\) through \(n\text{,}\) none of the entries in rows \(j+1\) through \(n\) is changed in columns \(j+1\) through \(n\text{.}\) The first \(n\) columns of \(M^\prime\) form a lower triangular matrix with 1's on the diagonal. In its conversion to the identity matrix through this sequence of row operations, it remains lower triangular with 1's on the diagonal.
What happens in columns \(n+1\) through \(2n\) of \(M^\prime\text{?}\) These columns began in \(M\) as the identity matrix, and in \(M^\prime\) each diagonal entry was scaled to a reciprocal of the corresponding diagonal entry of \(A\text{.}\) Notice that trivially, these final \(n\) columns of \(M^\prime\) form a lower triangular matrix. Just as we argued for the first \(n\) columns, the row operations that convert \(M_{j-1}\) into \(M_j\) will preserve the lower triangular form in the final \(n\) columns and preserve the exact values of the diagonal entries. By Theorem CINM, the final \(n\) columns of \(M_n\) form the inverse of \(A\text{,}\) and this matrix has the properties advertised in the conclusion of this theorem.
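Theorem ITMT can be illustrated in plain Python (not Sage) on a lower triangular matrix of our own choosing. Forward substitution, solving \(AX=I_n\) one column at a time, stands in for the row-reduction of the proof.

```python
# Invert a lower triangular matrix and confirm the inverse is lower
# triangular with reciprocal diagonal entries (Theorem ITMT).
from fractions import Fraction as F

A = [[F(2), F(0), F(0)],
     [F(4), F(3), F(0)],
     [F(1), F(5), F(6)]]
n = 3

# Solve A X = I_n column by column, top row downward (forward substitution).
X = [[F(0)] * n for _ in range(n)]
for j in range(n):
    for i in range(n):
        e = F(1) if i == j else F(0)
        X[i][j] = (e - sum(A[i][k] * X[k][j] for k in range(i))) / A[i][i]

# A^{-1} is lower triangular ...
assert all(X[i][j] == 0 for i in range(n) for j in range(i + 1, n))
# ... and its diagonal entries are reciprocals of those of A.
assert all(X[i][i] == 1 / A[i][i] for i in range(n))
```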
Sage MI. Matrix Inverse.
Now we know that invertibility is equivalent to nonsingularity, and that the procedure outlined in Theorem CINM will always yield an inverse for a nonsingular matrix. But rather than using that procedure, Sage implements a .inverse() method. In the following, we compute the inverse of a \(3\times 3\) matrix, and then purposely convert it to a singular matrix by replacing the last column by a linear combination of the first two.
Notice how the failure to invert C is explained by the matrix being singular.
Systems with nonsingular coefficient matrices can be solved easily with the matrix inverse. We will recycle A as a coefficient matrix, so be sure to execute the code above.
If you find it more convenient, you can use the same notation as the text for a matrix inverse.
Sage NME3. Nonsingular Matrix Equivalences, Round 3.
For square matrices, Sage has the methods .is_singular() and .is_invertible(). By Theorem NI we know these two methods to be logical opposites. One way to express this is that they will always return different values. Here we demonstrate with a nonsingular matrix and a singular matrix. The comparison != is “not equal.”
We could test other properties of the matrix inverse, such as Theorem SS.
Subsection UM Unitary Matrices
Recall that the adjoint of a matrix is \(\adjoint{A}=\transpose{\left(\conjugate{A}\right)}\) (Definition A).

Definition UM. Unitary Matrices.
Suppose that U is a square matrix of size n such that U∗U=In. Then we say U is unitary.
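Definition UM can be checked directly in plain Python (not Sage). The \(2\times 2\) complex matrix below is our own small example, not the matrix of Example UM3: form the adjoint (conjugate transpose) and verify that \(U^\ast U=I_2\).

```python
# Verify Definition UM for a small complex matrix: U* U = I2.
from cmath import isclose

s = 2 ** -0.5   # 1/sqrt(2)
U = [[s, s], [s * 1j, -s * 1j]]

# Adjoint: conjugate every entry of the transpose.
Uadj = [[complex(U[j][i]).conjugate() for j in range(2)] for i in range(2)]

P = [[sum(Uadj[i][k] * U[k][j] for k in range(2)) for j in range(2)]
     for i in range(2)]

# P should be I2, up to floating-point roundoff.
for i in range(2):
    for j in range(2):
        expected = 1 if i == j else 0
        assert isclose(P[i][j], expected, abs_tol=1e-12)
```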
Example UM3. Unitary matrix of size 3.
Let
The computations get a bit tiresome, but if you work your way through the computation of \(\adjoint{U}U\text{,}\) you will arrive at the \(3\times 3\) identity matrix \(I_3\text{.}\)
Example UPM. Unitary permutation matrix.
The matrix
is unitary as can be easily checked. Notice that it is just a rearrangement of the columns of the \(5\times 5\) identity matrix, \(I_5\) (Definition IM).
An interesting exercise is to build another \(5\times 5\) unitary matrix, \(R\text{,}\) using a different rearrangement of the columns of \(I_5\text{.}\) Then form the product \(PR\text{.}\) This will be another unitary matrix (Exercise MINM.T10). If you were to build all \(5!=5\times 4\times 3\times 2\times 1=120\) matrices of this type you would have a set that remains closed under matrix multiplication. It is an example of another algebraic structure known as a group since together the set and the one operation (matrix multiplication here) is closed, associative, has an identity (\(I_5\)), and inverses (Theorem UMI). Notice though that the operation in this group is not commutative!
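The closure claim above is easy to test in plain Python (not Sage). We encode a permutation of \(\{0,\dots,4\}\) as a tuple, build the corresponding permutation matrix, and confirm that a product of two permutation matrices is the permutation matrix of the composition; the particular permutations below are our own choices.

```python
# Permutation matrices are closed under multiplication, and the
# multiplication is not commutative.

def perm_matrix(p):
    """Matrix whose column j has a single 1, in row p[j]."""
    n = len(p)
    return [[1 if p[j] == i else 0 for j in range(n)] for i in range(n)]

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

p = (1, 2, 0, 4, 3)
q = (4, 3, 2, 1, 0)
P, Q = perm_matrix(p), perm_matrix(q)

# Column j of PQ is P applied to e_{q[j]}, i.e. e_{p[q[j]]}: the product
# is the permutation matrix of the composition of p and q.
assert matmul(P, Q) == perm_matrix(tuple(p[q[j]] for j in range(5)))
# The group is not commutative.
assert matmul(P, Q) != matmul(Q, P)
```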
Theorem UMI. Unitary Matrices are Invertible.
Suppose that U is a unitary matrix of size n. Then U is nonsingular, and U−1=U∗.
Proof.
By Definition UM, we know that \(\adjoint{U}U=I_n\text{.}\) The matrix \(I_n\) is nonsingular (since it row-reduces easily to \(I_n\text{,}\) Theorem NMRRI). So by Theorem NPNF, \(U\) and \(\adjoint{U}\) are both nonsingular matrices.
The equation \(\adjoint{U}U=I_n\) gets us halfway to an inverse of \(U\text{,}\) and Theorem OSIS tells us that then \(U\adjoint{U}=I_n\) also. So \(U\) and \(\adjoint{U}\) are inverses of each other (Definition MI).
Theorem CUMOS. Columns of Unitary Matrices are Orthonormal Sets.
Suppose that S={A1,A2,A3,…,An} is the set of columns of a square matrix A of size n. Then A is a unitary matrix if and only if S is an orthonormal set.
Proof.
The proof revolves around recognizing that a typical entry of the product \(\adjoint{A}A\) is an inner product of columns of \(A\text{.}\) Here are the details to support this claim.
We now employ this equality in a chain of equivalences,
Example OSMC. Orthonormal set from matrix columns.
The matrix
from Example UM3 is a unitary matrix. By Theorem CUMOS, its columns
form an orthonormal set. You might find checking the six inner products of pairs of these vectors easier than doing the matrix product \(\adjoint{U}U\text{.}\) Or, because the inner product is anti-commutative (Theorem IPAC) you only need check three inner products (see Exercise MINM.T12).
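Theorem CUMOS reduces a unitarity check to inner products of columns. Here is that check in plain Python (not Sage) on a small matrix of our own making (not the \(U\) of Example UM3), with the inner product conjugating the first argument as in the text.

```python
# Check orthonormality of the columns of a unitary matrix by inner
# products alone (Theorem CUMOS), and observe the Theorem IPAC symmetry.

s = 2 ** -0.5
cols = [(s, s * 1j), (s, -s * 1j)]   # columns of a 2x2 unitary matrix

def ip(u, v):
    """Inner product, conjugating the first argument."""
    return sum(complex(a).conjugate() * b for a, b in zip(u, v))

for i in range(2):
    for j in range(2):
        expected = 1 if i == j else 0
        assert abs(ip(cols[i], cols[j]) - expected) < 1e-12

# Theorem IPAC: <v, u> is the conjugate of <u, v>, so only the inner
# products with i <= j ever need to be computed.
assert ip(cols[1], cols[0]) == ip(cols[0], cols[1]).conjugate()
```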
Theorem UMPIP. Unitary Matrices Preserve Inner Products.
Suppose that U is a unitary matrix of size n and u and v are two vectors from Cn. Then ⟨Uu,Uv⟩=⟨u,v⟩ and ∥Uv∥=∥v∥.
Proof.
For the first conclusion, \(\innerproduct{U\vect{u}}{U\vect{v}}=\adjoint{\left(U\vect{u}\right)}U\vect{v}=\adjoint{\vect{u}}\adjoint{U}U\vect{v}=\adjoint{\vect{u}}I_n\vect{v}=\adjoint{\vect{u}}\vect{v}=\innerproduct{\vect{u}}{\vect{v}}\text{.}\) The second conclusion is just a specialization of the first conclusion, with \(\vect{u}=\vect{v}\text{.}\)
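A numeric check of Theorem UMPIP in plain Python (not Sage): the unitary \(U\) and the two vectors below are our own small examples.

```python
# A unitary matrix preserves inner products, hence also norms.

s = 2 ** -0.5
U = [[s, s], [s * 1j, -s * 1j]]   # unitary: columns are orthonormal

def ip(u, v):
    """Inner product, conjugating the first argument."""
    return sum(complex(a).conjugate() * b for a, b in zip(u, v))

def apply(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

u = [1 + 2j, 3 - 1j]
v = [2 - 1j, 1j]

# Inner products agree, up to floating-point roundoff ...
assert abs(ip(apply(U, u), apply(U, v)) - ip(u, v)) < 1e-12
# ... and so do norms (the square roots of <x, x>).
assert abs(abs(ip(apply(U, v), apply(U, v))) ** 0.5
           - abs(ip(v, v)) ** 0.5) < 1e-12
```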
Sage UM. Unitary Matrices.
No surprise about how we check if a matrix is unitary. Here is Example UM3,
We can verify Theorem UMPIP, where the vectors u and v are created randomly. Try evaluating this compute cell with your own choices.
If you want to experiment with permutation matrices, Sage has these too. We can create a permutation matrix from a list that indicates for each column the row with a one in it. Notice that the product here of two permutation matrices is again a permutation matrix.
Reading Questions MINM Reading Questions
1.
Compute the inverse of the coefficient matrix of the system of equations below and use the inverse to solve the system.
2.
In the reading questions for Section MISLE you were asked to find the inverse of the 3×3 matrix below.
Because the matrix was not nonsingular, you had no theorems at that point that would allow you to compute the inverse. Explain why you now know that the inverse does not exist (which is different than not being able to compute it) by quoting the relevant theorem's acronym.
3.
Is the matrix A unitary? Why?
Exercises MINM Exercises
C20.
Verify that AB is nonsingular.
C40.
Solve the system of equations below using the inverse of a matrix.
The coefficient matrix and vector of constants for the system are
\(\inverse{A}\) can be computed by using a calculator, or by the method of Theorem CINM. Then Theorem SNCM says the unique solution is
M10.
Find values of x, y, z so that matrix A is invertible.
There are an infinite number of possible answers. We want to find a vector \(\colvector{x \\ y \\ z}\) so that the set
is a linearly independent set. We need a vector not in the span of the first two columns, which geometrically means that we need it to not be in the same plane as the first two columns of \(A\text{.}\) We can choose any values we want for \(x\) and \(y\text{,}\) and then choose a value of \(z\) that makes the three vectors independent.
I will (arbitrarily) choose \(x = 1\text{,}\) \(y = 1\text{.}\) Then, we have
which is invertible if and only if \(4-6z \ne 0\text{.}\) Thus, we can choose any value as long as \(z \ne \frac{2}{3}\text{,}\) so we choose \(z = 0\text{,}\) and we have found a matrix \(A = \begin{bmatrix} 1 & 2 & 1\\ 3 & 0 & 1 \\ 1 & 1 & 0 \end{bmatrix}\) that is invertible.
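The determinant computation quoted in the solution is easy to confirm in plain Python (determinants appear later in the text; here they only check the arithmetic that produced \(4-6z\)).

```python
# With x = y = 1, the determinant of [[1, 2, 1], [3, 0, 1], [1, 1, z]]
# is 4 - 6z, so any z other than 2/3 gives an invertible matrix.
from fractions import Fraction as F

def det3(M):
    """Cofactor expansion along the first row of a 3x3 matrix."""
    a, b, c = M[0]
    d, e, f = M[1]
    g, h, i = M[2]
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

for z in [F(0), F(1), F(-2), F(2, 3)]:
    M = [[F(1), F(2), F(1)], [F(3), F(0), F(1)], [F(1), F(1), z]]
    assert det3(M) == 4 - 6 * z

# z = 2/3 is exactly the singular case.
assert det3([[F(1), F(2), F(1)], [F(3), F(0), F(1)], [F(1), F(1), F(2, 3)]]) == 0
```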
M11.
Find values of x, y z so that matrix A is singular.
There are an infinite number of possible answers. We need the set of vectors
to be linearly dependent. One way to do this by inspection is to have \(\colvector{x \\ y \\ z} =\colvector{1 \\ 4 \\ 5}\text{.}\) Thus, if we let \(x = 1\text{,}\) \(y = 4\text{,}\) \(z = 5\text{,}\) then the matrix \(A = \begin{bmatrix} 1 & 1 & 1\\ 1 & 4 & 4 \\ 0 & 5 & 5 \end{bmatrix}\) is singular.
M15.
If A and B are n×n matrices, A is nonsingular, and B is singular, show directly that AB is singular, without using Theorem NPNF.
If \(B\) is singular, then there exists a vector \(\vect{x}\ne\zerovector\) so that \(\vect{x}\in \nsp{B}\text{.}\) Thus, \(B\vect{x} = \vect{0}\text{,}\) so \(A(B\vect{x}) = (AB)\vect{x} = \zerovector\text{,}\) so \(\vect{x}\in\nsp{AB}\text{.}\) Since the null space of \(AB\) is not trivial, \(AB\) is a singular matrix.
M20.
Construct an example of a 4×4 unitary matrix.
The \(4\times 4\) identity matrix, \(I_4\text{,}\) would be one example (Definition IM). Any of the 23 other rearrangements of the columns of \(I_4\) would be a simple, but less trivial, example. See Example UPM.
M80.
Matrix multiplication interacts nicely with many operations. But not always with transforming a matrix to reduced row-echelon form. Suppose that A is an m×n matrix and B is an n×p matrix. Let P be a matrix that is row-equivalent to A and in reduced row-echelon form, Q be a matrix that is row-equivalent to B and in reduced row-echelon form, and let R be a matrix that is row-equivalent to AB and in reduced row-echelon form. Is PQ=R? (In other words, with nonstandard notation, is rref(A)rref(B)=rref(AB)?)
Construct a counterexample to show that, in general, this statement is false. Then find a large class of matrices where if A and B are in the class, then the statement is true.
Take
Then \(A\) is already in reduced row-echelon form, and by swapping rows, \(B\) row-reduces to \(A\text{.}\) So the product of the reduced row-echelon forms of \(A\) and \(B\) is \(AA=A\neq\zeromatrix\text{.}\) However, the product \(AB\) is the \(2\times 2\) zero matrix, which is in reduced row-echelon form, and not equal to \(AA\text{.}\) When you get there, Theorem PEEF or Theorem EMDRO might shed some light on why we would not expect this statement to be true in general.
If \(A\) and \(B\) are nonsingular, then \(AB\) is nonsingular (Theorem NPNF), and all three matrices \(A\text{,}\) \(B\) and \(AB\) row-reduce to the identity matrix (Theorem NMRRI). By Theorem MMIM, the desired relationship is true.
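The counterexample arithmetic can be checked in plain Python; the specific pair below is our own choice fitting the description in the solution (\(A\) already in reduced row-echelon form, \(B\) obtained from \(A\) by a row swap), not necessarily the matrices displayed above.

```python
# rref(A) rref(B) = AA = A, while AB is the zero matrix, which equals
# its own reduced row-echelon form -- so rref(A) rref(B) != rref(AB).

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

A = [[1, 0], [0, 0]]            # already in reduced row-echelon form
B = [[0, 0], [1, 0]]            # swapping rows gives A, so rref(B) = A
Z = [[0, 0], [0, 0]]

assert matmul(A, B) == Z        # AB = 0, so rref(AB) = 0 ...
assert matmul(A, A) == A        # ... but rref(A) rref(B) = AA = A
assert matmul(A, A) != Z
```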
T10.
Suppose that Q and P are unitary matrices of size n. Prove that QP is a unitary matrix.
T11.
Prove that Hermitian matrices (Definition HM) have real entries on the diagonal. More precisely, suppose that A is a Hermitian matrix of size n. Then [A]ii∈R, 1≤i≤n.
T12.
Suppose that we are checking if a square matrix of size n is unitary. Show that a straightforward application of Theorem CUMOS requires the computation of \(n^2\) inner products when the matrix is unitary, and fewer when the matrix is not. Then show that this maximum number of inner products can be reduced to \(\frac{1}{2}n(n+1)\) in light of Theorem IPAC.
T25.
The notation \(A^k\) means a repeated matrix product between \(k\) copies of the square matrix \(A\text{.}\)
- Assume \(A\) is an \(n\times n\) matrix where \(A^2=\zeromatrix\) (which does not imply that \(A=\zeromatrix\)). Prove that \(I_n-A\) is invertible by showing that \(I_n+A\) is an inverse of \(I_n-A\text{.}\)
- Assume that \(A\) is an \(n\times n\) matrix where \(A^3=\zeromatrix\text{.}\) Prove that \(I_n-A\) is invertible.
- Form a general theorem based on your observations from parts (1) and (2) and provide a proof.
T50.
This exercise asks you to construct a new proof of a result similar to Theorem NPNF by adding extra assumptions and using less powerful results than we did in this section. Add the hypothesis that A and B are both upper triangular matrices. Establish the same equivalence as in Theorem NPNF, but make use of Theorem NTM, Theorem PTMT, and Exercise MM.T28. In particular, you should not need any results that occur after Section MM.