From A First Course in Linear Algebra
Version 0.94
© 2004.
Licensed under the GNU Free Documentation License.
http://linear.ups.edu/
We saw in Theorem CINM that if a square matrix $A$ is nonsingular, then there is a matrix $B$ so that $AB = I_n$. In other words, $B$ is halfway to being an inverse of $A$. We will see in this section that $B$ automatically fulfills the second condition ($BA = I_n$).
Example MWIAA showed us that the coefficient matrix from Archetype A had
no inverse. Not coincidentally, this coefficient matrix is singular. We’ll make all
these connections precise now. Not many examples or definitions in this section,
just theorems.
We need a couple of technical results for starters. Some books would call these minor, but essential, results “lemmas.” We’ll just call ’em theorems. See Technique LC for more on the distinction.
The first of these technical results is interesting in that the hypothesis says something about a product of two square matrices and the conclusion then says the same thing about each individual matrix in the product.
Theorem NPNT
Nonsingular Product has Nonsingular Terms
Suppose that $A$ and $B$ are square matrices of size $n$ and the product $AB$ is nonsingular. Then $A$ and $B$ are both nonsingular.
Proof We’ll do the proof in two parts, each as a proof by contradiction (Technique CD). Establishing that $B$ is nonsingular is the easier part, so we will do it first, but in reality, we will need to know that $B$ is nonsingular when we prove that $A$ is nonsingular.
You can also think of this proof as being a study of four possible conclusions in the table below. One of the four rows must happen (the list is exhaustive). In the proof we learn that the first three rows lead to contradictions, and so are impossible. That leaves the fourth row as a certainty, which is our desired conclusion.
$A$ | $B$ | Case
Singular | Singular | 1
Nonsingular | Singular | 1
Singular | Nonsingular | 2
Nonsingular | Nonsingular |
Case 1. Suppose $B$ is singular. Then there is a nonzero vector $\mathbf{z}$ that is a solution to $\mathcal{LS}(B,\,\mathbf{0})$. So

$$(AB)\mathbf{z} = A(B\mathbf{z}) = A\mathbf{0} = \mathbf{0}$$

Because $\mathbf{z}$ is a nonzero solution to $\mathcal{LS}(AB,\,\mathbf{0})$, we conclude that $AB$ is singular (Definition NM). This is a contradiction, so $B$ is nonsingular, as desired.
Case 2. Suppose $A$ is singular. Then there is a nonzero vector $\mathbf{y}$ that is a solution to $\mathcal{LS}(A,\,\mathbf{0})$. Now use this vector and consider the linear system $\mathcal{LS}(B,\,\mathbf{y})$. Since we know $B$ is nonsingular (from Case 1), the system has a unique solution, which we will call $\mathbf{w}$. We claim $\mathbf{w}$ is not the zero vector either. Assuming the opposite, suppose that $\mathbf{w} = \mathbf{0}$. Then

$$\mathbf{y} = B\mathbf{w} = B\mathbf{0} = \mathbf{0}$$

contrary to $\mathbf{y}$ being nonzero, so $\mathbf{w} \neq \mathbf{0}$. Now

$$(AB)\mathbf{w} = A(B\mathbf{w}) = A\mathbf{y} = \mathbf{0}$$

So $\mathbf{w}$ is a nonzero solution to $\mathcal{LS}(AB,\,\mathbf{0})$, and thus we can say that $AB$ is singular (Definition NM). This is a contradiction, so $A$ is nonsingular, as desired.
This is a powerful result, because it allows us to begin with a hypothesis that something complicated (the matrix product $AB$) has the property of being nonsingular, and we can then conclude that the simpler constituents ($A$ and $B$ individually) also have the property of being nonsingular. If we had thought that the matrix product was an artificial construction, results like this would make us begin to think twice.
The contrapositive of this result is equally interesting. It says that if either $A$ or $B$ (or both) is a singular matrix, then the product $AB$ is also singular. Notice how the negation of the theorem’s conclusion ($A$ and $B$ both nonsingular) becomes the statement “at least one of $A$ and $B$ is singular.” (See Technique CP.)
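Neither the theorem nor its contrapositive requires any computation, but a quick numerical experiment can make the contrapositive vivid. The sketch below is our own illustration (Python with NumPy, using made-up matrices, none of which appear in the text): it manufactures a singular $B$, pairs it with a random $A$, and checks that the product $AB$ is singular by testing whether each matrix has full rank.

```python
import numpy as np

rng = np.random.default_rng(1)

n = 4
B = rng.standard_normal((n, n))
B[:, 3] = B[:, 0] + B[:, 1]        # force a column dependence: B is singular
A = rng.standard_normal((n, n))    # A is (almost surely) nonsingular

def is_nonsingular(M):
    # A square matrix is nonsingular exactly when it has full rank.
    return np.linalg.matrix_rank(M) == M.shape[0]

print(is_nonsingular(A))        # True (with probability 1)
print(is_nonsingular(B))        # False, by construction
print(is_nonsingular(A @ B))    # False: a singular factor forces a singular product
```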
Theorem OSIS
One-Sided Inverse is Sufficient
Suppose $A$ and $B$ are square matrices of size $n$ such that $AB = I_n$. Then $BA = I_n$.
Proof The matrix $I_n$ is nonsingular (since it row-reduces easily to $I_n$, Theorem NMRRI). So $A$ and $B$ are nonsingular by Theorem NPNT, so in particular $B$ is nonsingular. We can therefore apply Theorem CINM to assert the existence of a matrix $C$ so that $BC = I_n$. This application of Theorem CINM could be a bit confusing, mostly because of the names of the matrices involved. $B$ is nonsingular, so there must be a “right-inverse” for $B$, and we’re calling it $C$.
Now

$$BA = (BA)I_n = (BA)(BC) = B(AB)C = BI_nC = BC = I_n$$

which is the desired conclusion.
So Theorem OSIS tells us that if $A$ is nonsingular, then the matrix $B$ guaranteed by Theorem CINM will be both a “right-inverse” and a “left-inverse” for $A$, so $A$ is invertible and $A^{-1} = B$.
So if you have a nonsingular matrix, $A$, you can use the procedure described in Theorem CINM to find an inverse for $A$. If $A$ is singular, then the procedure in Theorem CINM will fail, as the first $n$ columns of the augmented matrix $\left[\,A \mid I_n\,\right]$ will not row-reduce to the identity matrix. However, we can say a bit more. When $A$ is singular, then $A$ does not have an inverse (which is very different from saying that the procedure in Theorem CINM fails to find an inverse). This may feel like we are splitting hairs, but it’s important that we do not make unfounded assumptions. These observations motivate the next theorem.
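If you would like to experiment with the procedure of Theorem CINM, here is a minimal sketch, written in Python with NumPy (our own illustration, not part of the text). It row-reduces the augmented matrix built from $A$ and the identity; when $A$ is nonsingular the right half of the result is $A^{-1}$, and thanks to Theorem OSIS it is a two-sided inverse. When $A$ is singular the reduction stalls at a missing pivot, which is exactly the failure described above.

```python
import numpy as np

def inverse_via_row_reduction(A):
    """Row-reduce the augmented matrix [A | I] (the procedure of
    Theorem CINM).  If A is nonsingular the left half becomes the
    identity and the right half is the inverse; if A is singular
    the reduction stalls and we report failure."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    M = np.hstack([A, np.eye(n)])           # the n x 2n matrix [A | I_n]
    for j in range(n):
        # find a pivot in column j at or below row j
        p = j + np.argmax(np.abs(M[j:, j]))
        if np.isclose(M[p, j], 0.0):
            return None                     # A is singular: no pivot here
        M[[j, p]] = M[[p, j]]               # swap pivot row into place
        M[j] = M[j] / M[j, j]               # scale the pivot to 1
        for i in range(n):                  # clear the rest of column j
            if i != j:
                M[i] = M[i] - M[i, j] * M[j]
    return M[:, n:]                         # right half is the inverse

A = np.array([[1.0, 2.0], [3.0, 5.0]])      # a small nonsingular example
B = inverse_via_row_reduction(A)
print(np.allclose(A @ B, np.eye(2)))        # right-inverse: AB = I
print(np.allclose(B @ A, np.eye(2)))        # left-inverse too (Theorem OSIS)

S = np.array([[1.0, 2.0], [2.0, 4.0]])      # singular: second row twice the first
print(inverse_via_row_reduction(S))         # None: the left half cannot reach I
```

Returning None for a singular input mirrors the distinction drawn above: the procedure does not merely fail to find the inverse; the next theorem guarantees there is no inverse to find.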
Theorem NI
Nonsingularity is Invertibility
Suppose that $A$ is a square matrix. Then $A$ is nonsingular if and only if $A$ is invertible.
Proof (⇐) Suppose $A$ is invertible, and suppose that $\mathbf{x}$ is any solution to the homogeneous system $\mathcal{LS}(A,\,\mathbf{0})$. Then

$$\mathbf{x} = I_n\mathbf{x} = \left(A^{-1}A\right)\mathbf{x} = A^{-1}\left(A\mathbf{x}\right) = A^{-1}\mathbf{0} = \mathbf{0}$$

So the only solution to $\mathcal{LS}(A,\,\mathbf{0})$ is the zero vector, so by Definition NM, $A$ is nonsingular.
(⇒) Suppose now that $A$ is nonsingular. By Theorem CINM we find $B$ so that $AB = I_n$. Then Theorem OSIS tells us that $BA = I_n$. So $B$ is $A$’s inverse, and by construction, $A$ is invertible.
So for a square matrix, the properties of having an inverse and of having a trivial null space are one and the same. Can’t have one without the other.
Theorem NME3
Nonsingular Matrix Equivalences, Round 3
Suppose that $A$ is a square matrix of size $n$. The following are equivalent.

1. $A$ is nonsingular.
2. $A$ row-reduces to the identity matrix.
3. The null space of $A$ contains only the zero vector, $\mathcal{N}(A) = \{\mathbf{0}\}$.
4. The linear system $\mathcal{LS}(A,\,\mathbf{b})$ has a unique solution for every possible choice of $\mathbf{b}$.
5. The columns of $A$ are a linearly independent set.
6. $A$ is invertible.
Proof We can update our list of equivalences for nonsingular matrices (Theorem NME2) with the equivalent condition from Theorem NI.
In the case that $A$ is a nonsingular coefficient matrix of a system of equations, the inverse allows us to very quickly compute the unique solution, for any vector of constants $\mathbf{b}$.
Theorem SNCM
Solution with Nonsingular Coefficient Matrix
Suppose that $A$ is nonsingular. Then the unique solution to $\mathcal{LS}(A,\,\mathbf{b})$ is $A^{-1}\mathbf{b}$.
Proof By Theorem NMUS we know already that $\mathcal{LS}(A,\,\mathbf{b})$ has a unique solution for every choice of $\mathbf{b}$. We need to show that the expression stated is indeed a solution (the solution). That’s easy, just “plug it in” to the corresponding vector equation representation (Theorem SLEMM),

$$A\left(A^{-1}\mathbf{b}\right) = \left(AA^{-1}\right)\mathbf{b} = I_n\mathbf{b} = \mathbf{b}$$

Since $A\mathbf{x} = \mathbf{b}$ is true when we substitute $A^{-1}\mathbf{b}$ for $\mathbf{x}$, $A^{-1}\mathbf{b}$ is a (the!) solution to $\mathcal{LS}(A,\,\mathbf{b})$.
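Here is a small computational illustration of Theorem SNCM (ours, with a made-up system). It forms $A^{-1}\mathbf{b}$ and confirms that the result actually solves the system. As a design note, numerical practice favors a direct solver over explicitly forming the inverse, but the two agree for a nonsingular coefficient matrix.

```python
import numpy as np

A = np.array([[2.0, 1.0, 7.0],
              [1.0, 0.0, 3.0],
              [4.0, 2.0, 9.0]])    # a nonsingular coefficient matrix (made up)
b = np.array([1.0, 2.0, 3.0])

x = np.linalg.inv(A) @ b            # Theorem SNCM: the unique solution is A^{-1} b
print(np.allclose(A @ x, b))        # True: it really does solve the system
print(np.allclose(x, np.linalg.solve(A, b)))  # same answer as a direct solver
```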
Definition UM
Unitary Matrices
Suppose that $U$ is a square matrix of size $n$ such that $\overline{U}^{\,t}U = I_n$. Then we say $U$ is unitary.
This condition may seem rather far-fetched at first glance. Would there be any matrix that behaved this way? Well, yes, here’s one.
Example UM3
Unitary matrix of size 3
The computations get a bit tiresome, but if you work your way through $\overline{U}^{\,t}U$, you will arrive at the identity matrix $I_3$.
Unitary matrices do not have to look quite so gruesome. Here’s a larger one that is a bit more pleasing.
Example UPM
Unitary permutation matrix
The matrix
is unitary, as can be easily checked. Notice that it is just a rearrangement of the columns of the identity matrix (Definition IM).
An interesting exercise is to build another unitary matrix using a different rearrangement of the columns of the identity matrix, and then form the product of these two matrices, as in the sketch below. This will be another unitary matrix (Exercise MINM.T10). If you were to build all matrices of this type you would have a set that remains closed under matrix multiplication. It is an example of another algebraic structure known as a group, since the set together with the one operation (matrix multiplication here) is closed, is associative, has an identity ($I_n$), and has inverses (Theorem UMI). Notice though that the operation in this group is not commutative!
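The following sketch (our own, in Python with NumPy) carries out this exercise for size 4: it builds two permutation matrices as rearrangements of the columns of the identity, checks each is unitary (with real entries this is the orthogonal condition $U^{\,t}U = I_n$), verifies that their product is again unitary, and exhibits the non-commutativity mentioned above.

```python
import numpy as np

def permutation_matrix(perm):
    # Rearrange the columns of the identity matrix according to perm.
    return np.eye(len(perm))[:, perm]

P = permutation_matrix([1, 2, 3, 0])
Q = permutation_matrix([2, 0, 3, 1])

def is_unitary(U):
    # Real entries, so unitary reduces to U^t U = I (an orthogonal matrix).
    return np.allclose(U.T @ U, np.eye(U.shape[0]))

print(is_unitary(P), is_unitary(Q))   # True True
R = P @ Q
print(is_unitary(R))                  # True: the product is unitary too
print(np.allclose(P @ Q, Q @ P))      # False: the group is not commutative
```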
If a matrix $U$ has only real number entries (we say it is a real matrix) then the defining property of being unitary simplifies to $U^{\,t}U = I_n$. In this case we, and everybody else, call the matrix orthogonal, so you may often encounter this term in your other reading.
Unitary matrices have easily computed inverses. They also have columns that form orthonormal sets. Here are the theorems that show us that unitary matrices are not as strange as they might initially appear.
Theorem UMI
Unitary Matrices are Invertible
Suppose that $U$ is a unitary matrix of size $n$. Then $U$ is nonsingular, and $U^{-1} = \overline{U}^{\,t}$.
Proof By Definition UM, we know that $\overline{U}^{\,t}U = I_n$. The matrix $I_n$ is nonsingular (since it row-reduces easily to $I_n$, Theorem NMRRI). So by Theorem NPNT, $\overline{U}^{\,t}$ and $U$ are both nonsingular.
The equation $\overline{U}^{\,t}U = I_n$ gets us halfway to an inverse of $U$, and Theorem OSIS tells us that $U\overline{U}^{\,t} = I_n$ also. So $U$ and $\overline{U}^{\,t}$ are inverses of each other (Definition MI).
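Numerically, Theorem UMI is easy to watch in action. The sketch below (ours) manufactures a unitary matrix with complex entries by taking the $Q$ factor of a QR factorization of a random matrix (a standard trick, not from the text), and checks that its conjugate transpose is a two-sided inverse.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 4
M = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
U, _ = np.linalg.qr(M)              # the Q factor of a QR factorization is unitary

adjoint = U.conj().T                # conjugate transpose of U
print(np.allclose(adjoint @ U, np.eye(n)))      # Definition UM holds
print(np.allclose(U @ adjoint, np.eye(n)))      # and the other order (Theorem OSIS)
print(np.allclose(adjoint, np.linalg.inv(U)))   # so the adjoint is the inverse
```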
Theorem CUMOS
Columns of Unitary Matrices are Orthonormal Sets
Suppose that $U$ is a square matrix of size $n$ with columns $S = \{A_1,\,A_2,\,A_3,\,\ldots,\,A_n\}$. Then $U$ is a unitary matrix if and only if $S$ is an orthonormal set.
Proof The proof revolves around recognizing that a typical entry of the product $\overline{U}^{\,t}U$ is an inner product of columns of $U$. Here are the details to support this claim. For $1 \le i,\,j \le n$,

$$\left[\overline{U}^{\,t}U\right]_{ij} = \sum_{k=1}^{n}\left[\overline{U}^{\,t}\right]_{ik}\left[U\right]_{kj} = \sum_{k=1}^{n}\overline{\left[U\right]_{ki}}\left[U\right]_{kj} = \langle A_i,\,A_j\rangle$$

We now employ this equality in a chain of equivalences: $S$ is an orthonormal set exactly when $\langle A_i,\,A_j\rangle = 0$ for $i \neq j$ and $\langle A_i,\,A_i\rangle = 1$ for each $i$, which happens exactly when $\left[\overline{U}^{\,t}U\right]_{ij} = \left[I_n\right]_{ij}$ for all $1 \le i,\,j \le n$, which happens exactly when $\overline{U}^{\,t}U = I_n$, which is precisely the statement that $U$ is a unitary matrix (Definition UM).
Example OSMC
Orthonormal Set from Matrix Columns
The matrix $U$ from Example UM3 is a unitary matrix, so by Theorem CUMOS its columns form an orthonormal set. You might find checking the six inner products of pairs of these vectors easier than doing the matrix product $\overline{U}^{\,t}U$. Or, because the inner product is anti-commutative (Theorem IPAC), you only need check three inner products (see Exercise MINM.T12).
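Since the entries of the matrix from Example UM3 are not reproduced here, the sketch below (ours) performs the same check on a stand-in unitary matrix of size 3, again manufactured via a QR factorization: the six inner products with $i \le j$ come out to 1 on the diagonal and 0 off it, just as Theorem CUMOS promises.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 3
M = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
U, _ = np.linalg.qr(M)             # a stand-in unitary matrix of size 3

cols = [U[:, j] for j in range(n)]
for i in range(n):
    for j in range(i, n):          # Theorem IPAC: pairs with i <= j suffice
        ip = np.vdot(cols[i], cols[j])   # inner product; conjugates the first vector
        expected = 1.0 if i == j else 0.0
        print(i, j, np.isclose(ip, expected))   # True, six times
```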
When using vectors and matrices that only have real number entries, orthogonal matrices are those matrices with inverses that equal their transpose. Similarly, the inner product is the familiar dot product. Keep this special case in mind as you read the next theorem.
Theorem UMPIP
Unitary Matrices Preserve Inner Products
Suppose that $U$ is a unitary matrix of size $n$ and $\mathbf{u}$ and $\mathbf{v}$ are two vectors from $\mathbb{C}^{n}$. Then

$$\langle U\mathbf{u},\,U\mathbf{v}\rangle = \langle\mathbf{u},\,\mathbf{v}\rangle \qquad \text{and} \qquad \lVert U\mathbf{v}\rVert = \lVert\mathbf{v}\rVert$$
Proof Using the defining property $\overline{U}^{\,t}U = I_n$ (Definition UM),

$$\langle U\mathbf{u},\,U\mathbf{v}\rangle = \overline{(U\mathbf{u})}^{\,t}\,U\mathbf{v} = \overline{\mathbf{u}}^{\,t}\,\overline{U}^{\,t}U\mathbf{v} = \overline{\mathbf{u}}^{\,t}\,I_n\mathbf{v} = \overline{\mathbf{u}}^{\,t}\mathbf{v} = \langle\mathbf{u},\,\mathbf{v}\rangle$$

The second conclusion is just a specialization of the first conclusion, since $\lVert U\mathbf{v}\rVert = \sqrt{\langle U\mathbf{v},\,U\mathbf{v}\rangle} = \sqrt{\langle\mathbf{v},\,\mathbf{v}\rangle} = \lVert\mathbf{v}\rVert$.
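And a quick numerical confirmation (ours, not the book's): a unitary matrix leaves inner products, and therefore norms, unchanged.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 4
M = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
U, _ = np.linalg.qr(M)              # a unitary matrix to test with

u = rng.standard_normal(n) + 1j * rng.standard_normal(n)
v = rng.standard_normal(n) + 1j * rng.standard_normal(n)

print(np.isclose(np.vdot(U @ u, U @ v), np.vdot(u, v)))      # inner product preserved
print(np.isclose(np.linalg.norm(U @ v), np.linalg.norm(v)))  # norm preserved
```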
Definition A
Adjoint
If $A$ is a square matrix, then its adjoint is $A^{\ast} = \overline{A}^{\,t}$.
Sometimes a matrix is equal to its adjoint. One of the most common situations where this occurs is when a matrix has only real number entries. Then we are simply talking about symmetric matrices (Definition SYM).
Definition HM
Hermitian Matrix
The square matrix $A$ is Hermitian (or self-adjoint) if $A = A^{\ast}$.
Again, the real matrices that are Hermitian are exactly the symmetric matrices. In Section PEE we will uncover some amazing properties of Hermitian matrices, so when you get there, run back here to remind yourself of this definition.
A final reminder: the terms “dot product,” “orthogonal matrix” and “symmetric matrix” used in reference to vectors or matrices with real number entries correspond to the terms inner product, unitary matrix and Hermitian matrix when we generalize to include complex number entries.
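To round out these definitions, here is a short sketch (ours, with a made-up matrix) confirming the Hermitian property and the fact, which you will prove in Exercise MINM.T11, that the diagonal entries of a Hermitian matrix must be real.

```python
import numpy as np

# A made-up Hermitian matrix: equal to its own conjugate transpose.
A = np.array([[2.0 + 0j, 3.0 - 1j, 5.0 + 2j],
              [3.0 + 1j, 7.0 + 0j, 1.0 - 4j],
              [5.0 - 2j, 1.0 + 4j, 9.0 + 0j]])

print(np.allclose(A, A.conj().T))          # Hermitian: A equals its adjoint
print(np.allclose(np.diag(A).imag, 0.0))   # diagonal entries are real
```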
Because the matrix was not nonsingular, you had no theorems at that point that would allow you to compute the inverse. Explain why you now know that the inverse does not exist (which is different than not being able to compute it) by quoting the relevant theorem’s acronym.
C40 Solve the system of equations below using the inverse of a matrix.
Contributed by Robert Beezer. Solution below.
M20 Construct an example of a $4 \times 4$ unitary matrix.
Contributed by Robert Beezer. Solution below.
T10 Suppose that $Q$ and $P$ are unitary matrices of size $n$. Prove that $QP$ is a unitary matrix.
Contributed by Robert Beezer
T11 Prove that Hermitian matrices (Definition HM) have real entries on the diagonal. More precisely, suppose that $A$ is a Hermitian matrix of size $n$. Then $\left[A\right]_{ii} \in \mathbb{R}$ for $1 \le i \le n$.
Contributed by Robert Beezer
T12 Suppose that we are checking if a square matrix of size $n$ is unitary. Show that a straightforward application of Theorem CUMOS requires the computation of $n^{2}$ inner products when the matrix is unitary, and fewer when the matrix is not unitary. Then show that this maximum number of inner products can be reduced to $\frac{n^{2}+n}{2}$ in light of Theorem IPAC.
Contributed by Robert Beezer
C40 Contributed by Robert Beezer. Statement above.
The coefficient matrix $A$ and vector of constants $\mathbf{b}$ can be read directly from the system. $A^{-1}$ can be computed by using a calculator, or by the method of Theorem CINM. Then Theorem SNCM says the unique solution is $A^{-1}\mathbf{b}$.
M20 Contributed by Robert Beezer. Statement above.
The $4 \times 4$ identity matrix, $I_4$, would be one example (Definition IM). Any of the 23 other rearrangements of the columns of $I_4$ would be a simple, but less trivial, example. See Example UPM.