From A First Course in Linear Algebra

Version 2.20

© 2004.

Licensed under the GNU Free Documentation License.

http://linear.ups.edu/

We saw in Theorem CINM that if a square matrix
$A$ is nonsingular, then
there is a matrix $B$ so
that $AB={I}_{n}$. In other words,
$B$ is halfway to being an
inverse of $A$. We will see
in this section that $B$
automatically fulfills the second condition
($BA={I}_{n}$).
Example MWIAA showed us that the coefficient matrix from Archetype A had
no inverse. Not coincidentally, this coefficient matrix is singular. We’ll make all
these connections precise now. Not many examples or definitions in this section,
just theorems.

We need a couple of technical results for starters. Some books would call these minor, but essential, results “lemmas.” We’ll just call ’em theorems. See Technique LC for more on the distinction.

The first of these technical results is interesting in that the hypothesis says something about a product of two square matrices and the conclusion then says the same thing about each individual matrix in the product. This result has an analogy in the algebra of complex numbers: suppose $\alpha,\,\beta\in\mathbb{C}$; then $\alpha\beta\ne 0$ if and only if $\alpha\ne 0$ and $\beta\ne 0$. We can view this result as suggesting that the term “nonsingular” for matrices is like the term “nonzero” for scalars.

Theorem NPNT

Nonsingular Product has Nonsingular Terms

Suppose that $A$ and
$B$ are square matrices
of size $n$. The product
$AB$ is nonsingular
if and only if $A$
and $B$ are both
nonsingular. $\square $

Proof $\left(\Rightarrow \right)$ We’ll do this portion of the proof in two parts, each as a proof by contradiction (Technique CD). Assume that $AB$ is nonsingular. Establishing that $B$ is nonsingular is the easier part, so we will do it first, but in reality, we will need to know that $B$ is nonsingular when we prove that $A$ is nonsingular.

You can also think of this proof as being a study of four possible conclusions in the table below. One of the four rows must happen (the list is exhaustive). In the proof we learn that the first three rows lead to contradictions, and so are impossible. That leaves the fourth row as a certainty, which is our desired conclusion.

| $A$ | $B$ | Case |
|-------------|-------------|------|
| Singular | Singular | 1 |
| Nonsingular | Singular | 1 |
| Singular | Nonsingular | 2 |
| Nonsingular | Nonsingular | |

Part 1. Suppose $B$ is singular. Then there is a nonzero vector $z$ that is a solution to $\mathcal{LS}(B,\,0)$. So

$$\begin{aligned}
(AB)z &= A(Bz) && \text{Theorem MMA}\\
&= A0 && \text{Theorem SLEMM}\\
&= 0 && \text{Theorem MMZM}
\end{aligned}$$

Because $z$ is a nonzero solution to $\mathcal{LS}(AB,\,0)$, we conclude that $AB$ is singular (Definition NM). This is a contradiction, so $B$ is nonsingular, as desired.

Part 2. Suppose $A$ is singular. Then there is a nonzero vector $y$ that is a solution to $\mathcal{LS}(A,\,0)$. Now consider the linear system $\mathcal{LS}(B,\,y)$. Since we know $B$ is nonsingular from Part 1, the system has a unique solution (Theorem NMUS), which we will denote as $w$. We first claim $w$ is not the zero vector either. Assuming the opposite, suppose that $w=0$ (Technique CD). Then

$$\begin{aligned}
y &= Bw && \text{Theorem SLEMM}\\
&= B0 && \text{Hypothesis}\\
&= 0 && \text{Theorem MMZM}
\end{aligned}$$

contrary to $y$ being nonzero. So $w\ne 0$. The pieces are in place, so here we go,

$$\begin{aligned}
(AB)w &= A(Bw) && \text{Theorem MMA}\\
&= Ay && \text{Theorem SLEMM}\\
&= 0 && \text{Theorem SLEMM}
\end{aligned}$$

So $w$ is a nonzero solution to $\mathcal{LS}(AB,\,0)$, and thus we can say that $AB$ is singular (Definition NM). This is a contradiction, so $A$ is nonsingular, as desired.

($\Leftarrow$) Now assume that both $A$ and $B$ are nonsingular. Suppose that $x\in\mathbb{C}^n$ is a solution to $\mathcal{LS}(AB,\,0)$. Then

$$\begin{aligned}
0 &= (AB)x && \text{Theorem SLEMM}\\
&= A(Bx) && \text{Theorem MMA}
\end{aligned}$$

By Theorem SLEMM, $Bx$ is a solution to $\mathcal{LS}(A,\,0)$, and by the definition of a nonsingular matrix (Definition NM), we conclude that $Bx=0$. Now, by an entirely similar argument, the nonsingularity of $B$ forces us to conclude that $x=0$. So the only solution to $\mathcal{LS}(AB,\,0)$ is the zero vector and we conclude that $AB$ is nonsingular by Definition NM. $\blacksquare$

This is a powerful result in the “forward” direction, because it allows us to begin with a hypothesis that something complicated (the matrix product $AB$) has the property of being nonsingular, and we can then conclude that the simpler constituents ($A$ and $B$ individually) then also have the property of being nonsingular. If we had thought that the matrix product was an artificial construction, results like this would make us begin to think twice.

The contrapositive of this result is equally interesting. It says that $A$ or $B$ (or both) is a singular matrix if and only if the product $AB$ is singular. Notice how the negation of the theorem’s conclusion ($A$ and $B$ both nonsingular) becomes the statement “at least one of $A$ and $B$ is singular.” (See Technique CP.)
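Theorem NPNT lends itself to a quick numerical sanity check. The sketch below uses $2\times 2$ matrices and tests nonsingularity with the $2\times 2$ determinant formula; treating a nonzero determinant as equivalent to nonsingularity is an assumption here, since the book develops that connection elsewhere.

```python
def det2(M):
    # determinant of a 2x2 matrix; nonzero exactly when the matrix is nonsingular
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

def matmul(A, B):
    # product of two square matrices of the same size
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

A = [[1, 2], [3, 5]]   # nonsingular: det2(A) == -1
B = [[2, 1], [1, 1]]   # nonsingular: det2(B) == 1
S = [[1, 2], [2, 4]]   # singular: det2(S) == 0

assert det2(matmul(A, B)) != 0   # both terms nonsingular: product nonsingular
assert det2(matmul(A, S)) == 0   # a singular term forces a singular product
assert det2(matmul(S, B)) == 0
```

The last two assertions are the contrapositive in action: once either term is singular, so is the product.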

Theorem OSIS

One-Sided Inverse is Sufficient

Suppose $A$ and
$B$ are square
matrices of size $n$
such that $AB={I}_{n}$.
Then $BA={I}_{n}$.
$\square $

Proof The matrix ${I}_{n}$ is nonsingular (since it row-reduces easily to ${I}_{n}$, Theorem NMRRI). So $A$ and $B$ are nonsingular by Theorem NPNT, so in particular $B$ is nonsingular. We can therefore apply Theorem CINM to assert the existence of a matrix $C$ so that $BC={I}_{n}$. This application of Theorem CINM could be a bit confusing, mostly because of the names of the matrices involved. $B$ is nonsingular, so there must be a “right-inverse” for $B$, and we’re calling it $C$.

Now

$$\begin{aligned}
BA &= (BA)I_n && \text{Theorem MMIM}\\
&= (BA)(BC) && \text{Theorem CINM}\\
&= B(AB)C && \text{Theorem MMA}\\
&= BI_nC && \text{Hypothesis}\\
&= BC && \text{Theorem MMIM}\\
&= I_n && \text{Theorem CINM}
\end{aligned}$$

which is the desired conclusion. $\blacksquare$

So Theorem OSIS tells us that if $A$ is nonsingular, then the matrix $B$ guaranteed by Theorem CINM will be both a “right-inverse” and a “left-inverse” for $A$, so $A$ is invertible and ${A}^{-1}=B$.
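Theorem OSIS can be illustrated numerically. In this sketch (with a hypothetical $2\times 2$ example), $B$ is constructed only as a right-inverse of $A$, yet multiplying in the other order also produces the identity:

```python
def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

A = [[1, 2], [3, 5]]
B = [[-5, 2], [3, -1]]          # chosen so that A B = I_2
I2 = [[1, 0], [0, 1]]

assert matmul(A, B) == I2       # the hypothesis: AB = I_n
assert matmul(B, A) == I2       # Theorem OSIS: BA = I_n comes for free
```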

So if you have a nonsingular matrix, $A$, you can use the procedure described in Theorem CINM to find an inverse for $A$. If $A$ is singular, then the procedure in Theorem CINM will fail, as the first $n$ columns of $M$ will not row-reduce to the identity matrix. However, we can say a bit more. When $A$ is singular, then $A$ does not have an inverse (which is very different from saying that the procedure in Theorem CINM fails to find an inverse). This may feel like we are splitting hairs, but it's important that we do not make unfounded assumptions. These observations motivate the next theorem.

Theorem NI

Nonsingularity is Invertibility

Suppose that $A$ is a
square matrix. Then $A$ is
nonsingular if and only if $A$
is invertible. $\square $

Proof ($\Leftarrow$) Suppose $A$ is invertible, and suppose that $x$ is any solution to the homogeneous system $\mathcal{LS}(A,\,0)$. Then

$$\begin{aligned}
x &= I_nx && \text{Theorem MMIM}\\
&= (A^{-1}A)x && \text{Definition MI}\\
&= A^{-1}(Ax) && \text{Theorem MMA}\\
&= A^{-1}0 && \text{Theorem SLEMM}\\
&= 0 && \text{Theorem MMZM}
\end{aligned}$$

So the only solution to $\mathcal{LS}(A,\,0)$ is the zero vector, so by Definition NM, $A$ is nonsingular.

($\Rightarrow $) Suppose now that $A$ is nonsingular. By Theorem CINM we find $B$ so that $AB={I}_{n}$. Then Theorem OSIS tells us that $BA={I}_{n}$. So $B$ is $A$’s inverse, and by construction, $A$ is invertible. $\u25a0$

So for a square matrix, the properties of having an inverse and of having a trivial null space are one and the same. Can’t have one without the other.

Theorem NME3

Nonsingular Matrix Equivalences, Round 3

Suppose that $A$ is a
square matrix of size $n$.
The following are equivalent.

- $A$ is nonsingular.
- $A$ row-reduces to the identity matrix.
- The null space of $A$ contains only the zero vector, $\mathcal{N}(A)=\{0\}$.
- The linear system $\mathcal{LS}(A,\,b)$ has a unique solution for every possible choice of $b$.
- The columns of $A$ are a linearly independent set.
- $A$ is invertible.

Proof We can update our list of equivalences for nonsingular matrices (Theorem NME2) with the equivalent condition from Theorem NI. $\u25a0$

In the case that $A$ is a nonsingular coefficient matrix of a system of equations, the inverse allows us to very quickly compute the unique solution, for any vector of constants.

Theorem SNCM

Solution with Nonsingular Coefficient Matrix

Suppose that $A$
is nonsingular. Then the unique solution to
$\mathcal{LS}(A,\,b)$ is
${A}^{-1}b$.
$\square $

Proof By Theorem NMUS we know already that $\mathcal{LS}(A,\,b)$ has a unique solution for every choice of $b$. We need to show that the expression stated is indeed a solution (the solution). That's easy, just “plug it in” to the corresponding vector equation representation (Theorem SLEMM),

$$\begin{aligned}
A(A^{-1}b) &= (AA^{-1})b && \text{Theorem MMA}\\
&= I_nb && \text{Definition MI}\\
&= b && \text{Theorem MMIM}
\end{aligned}$$

Since $Ax=b$ is true when we substitute $A^{-1}b$ for $x$, $A^{-1}b$ is a (the!) solution to $\mathcal{LS}(A,\,b)$. $\blacksquare$
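Theorem SNCM in action, as a small sketch: with the inverse in hand, each new vector of constants costs only a matrix-vector product. The matrix and its inverse below are a hypothetical illustration.

```python
A = [[1, 2], [3, 5]]
Ainv = [[-5, 2], [3, -1]]        # the inverse of A
b = [8, 19]

# the unique solution to LS(A, b) is A^{-1} b  (Theorem SNCM)
x = [sum(Ainv[i][j] * b[j] for j in range(2)) for i in range(2)]
assert x == [-2, 5]

# confirm that A x = b
assert [sum(A[i][j] * x[j] for j in range(2)) for i in range(2)] == b
```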

Recall that the adjoint of a matrix is ${A}^{\ast}={\left(\overline{A}\right)}^{t}$ (Definition A).

Definition UM

Unitary Matrices

Suppose that $U$ is a
square matrix of size $n$
such that ${U}^{\ast}U={I}_{n}$.
Then we say $U$
is unitary. $\triangle$

This condition may seem rather far-fetched at first glance. Would there be any matrix that behaved this way? Well, yes, here’s one.

Example UM3

Unitary matrix of size 3

$$U=\begin{bmatrix}
\frac{1+i}{\sqrt{5}} & \frac{3+2i}{\sqrt{55}} & \frac{2+2i}{\sqrt{22}}\\
\frac{1-i}{\sqrt{5}} & \frac{2+2i}{\sqrt{55}} & \frac{-3+i}{\sqrt{22}}\\
\frac{i}{\sqrt{5}} & \frac{3-5i}{\sqrt{55}} & -\frac{2}{\sqrt{22}}
\end{bmatrix}$$

The computations get a bit tiresome, but if you work your way through the computation of ${U}^{\ast}U$, you will arrive at the $3\times 3$ identity matrix ${I}_{3}$. $\boxtimes$
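Rather than grinding through $U^{\ast}U$ by hand, a short script can do the tiresome part. This sketch builds the matrix of Example UM3 with Python complex literals and checks that $U^{\ast}U$ is $I_3$, up to floating-point roundoff:

```python
from math import sqrt

s5, s55, s22 = sqrt(5), sqrt(55), sqrt(22)
U = [[(1 + 1j) / s5, (3 + 2j) / s55, (2 + 2j) / s22],
     [(1 - 1j) / s5, (2 + 2j) / s55, (-3 + 1j) / s22],
     [1j / s5,       (3 - 5j) / s55, -2 / s22]]

# adjoint: the conjugate transpose of U
Ustar = [[U[j][i].conjugate() for j in range(3)] for i in range(3)]

# entries of the product U* U
prod = [[sum(Ustar[i][k] * U[k][j] for k in range(3)) for j in range(3)]
        for i in range(3)]

for i in range(3):
    for j in range(3):
        assert abs(prod[i][j] - (1 if i == j else 0)) < 1e-12
```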

Unitary matrices do not have to look quite so gruesome. Here’s a larger one that is a bit more pleasing.

Example UPM

Unitary permutation matrix

The matrix

$$P=\begin{bmatrix}
0&1&0&0&0\\
0&0&0&1&0\\
1&0&0&0&0\\
0&0&0&0&1\\
0&0&1&0&0
\end{bmatrix}$$

is unitary as can be easily checked. Notice that it is just a rearrangement of the columns of the $5\times 5$ identity matrix, ${I}_{5}$ (Definition IM).

An interesting exercise is to build another $5\times 5$ unitary matrix, $R$, using a different rearrangement of the columns of ${I}_{5}$. Then form the product $PR$. This will be another unitary matrix (Exercise MINM.T10). If you were to build all $5!=5\times 4\times 3\times 2\times 1=120$ matrices of this type you would have a set that remains closed under matrix multiplication. It is an example of another algebraic structure known as a group, since the set together with the one operation (matrix multiplication here) is closed, associative, has an identity (${I}_{5}$), and has inverses (Theorem UMI). Notice though that the operation in this group is not commutative! $\boxtimes$
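The group described above can be generated exhaustively in a few lines. This sketch builds all $120$ permutation matrices of size $5$, confirms closure for the particular product $PR$, and exhibits the failure of commutativity; the matrix $R$ is one hypothetical choice of rearrangement.

```python
from itertools import permutations

def perm_matrix(p):
    # row i of the identity, with its 1 moved to column p[i]
    n = len(p)
    return tuple(tuple(1 if j == p[i] else 0 for j in range(n)) for i in range(n))

def matmul(A, B):
    n = len(A)
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n))
                 for i in range(n))

group = {perm_matrix(p) for p in permutations(range(5))}
assert len(group) == 120                # all 5! rearrangements of I_5

P = perm_matrix((1, 3, 0, 4, 2))        # the matrix P of Example UPM
R = perm_matrix((4, 3, 2, 1, 0))        # another rearrangement of I_5

assert matmul(P, R) in group            # closed under matrix multiplication
assert matmul(P, R) != matmul(R, P)     # but the operation is not commutative
```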

If a matrix $A$ has only real number entries (we say it is a real matrix) then the defining property of being unitary simplifies to ${A}^{t}A={I}_{n}$. In this case we, and everybody else, calls the matrix orthogonal, so you may often encounter this term in your other reading when the complex numbers are not under consideration.

Unitary matrices have easily computed inverses. They also have columns that form orthonormal sets. Here are the theorems that show us that unitary matrices are not as strange as they might initially appear.

Theorem UMI

Unitary Matrices are Invertible

Suppose that $U$ is a
unitary matrix of size $n$.
Then $U$ is
nonsingular, and ${U}^{-1}={U}^{\ast}$.
$\square $

Proof By Definition UM, we know that ${U}^{\ast}U={I}_{n}$. The matrix ${I}_{n}$ is nonsingular (since it row-reduces easily to ${I}_{n}$, Theorem NMRRI). So by Theorem NPNT, $U$ and ${U}^{\ast}$ are both nonsingular matrices.

The equation ${U}^{\ast}U={I}_{n}$ gets us halfway to an inverse of $U$, and Theorem OSIS tells us that then $U{U}^{\ast}={I}_{n}$ also. So $U$ and ${U}^{\ast}$ are inverses of each other (Definition MI). $\u25a0$
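Since the permutation matrix $P$ of Example UPM is real, its adjoint is just its transpose, and Theorem UMI says that transpose is a two-sided inverse. A minimal check:

```python
P = [[0, 1, 0, 0, 0],
     [0, 0, 0, 1, 0],
     [1, 0, 0, 0, 0],
     [0, 0, 0, 0, 1],
     [0, 0, 1, 0, 0]]

Pt = [[P[j][i] for j in range(5)] for i in range(5)]   # transpose = adjoint here
I5 = [[1 if i == j else 0 for j in range(5)] for i in range(5)]

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

assert matmul(Pt, P) == I5    # the defining property U* U = I_n
assert matmul(P, Pt) == I5    # Theorem OSIS: the product in the other order too
```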

Theorem CUMOS

Columns of Unitary Matrices are Orthonormal Sets

Suppose that $A$ is a
square matrix of size $n$
with columns $S=\left\{{A}_{1},\phantom{\rule{0.3em}{0ex}}{A}_{2},\phantom{\rule{0.3em}{0ex}}{A}_{3},\phantom{\rule{0.3em}{0ex}}\dots ,\phantom{\rule{0.3em}{0ex}}{A}_{n}\right\}$. Then
$A$ is a unitary matrix
if and only if $S$ is an
orthonormal set. $\square $

Proof The proof revolves around recognizing that a typical entry of the product ${A}^{\ast}A$ is an inner product of columns of $A$. Here are the details to support this claim.

$$\begin{aligned}
[A^{\ast}A]_{ij} &= \sum_{k=1}^{n}[A^{\ast}]_{ik}[A]_{kj} && \text{Theorem EMP}\\
&= \sum_{k=1}^{n}\left[(\overline{A})^t\right]_{ik}[A]_{kj} && \text{Definition A}\\
&= \sum_{k=1}^{n}[\overline{A}]_{ki}[A]_{kj} && \text{Definition TM}\\
&= \sum_{k=1}^{n}\overline{[A]_{ki}}\,[A]_{kj} && \text{Definition CCM}\\
&= \sum_{k=1}^{n}[A]_{kj}\,\overline{[A]_{ki}} && \text{Property CMCN}\\
&= \sum_{k=1}^{n}[A_j]_k\,\overline{[A_i]_k}\\
&= \langle A_j,\, A_i\rangle && \text{Definition IP}
\end{aligned}$$

We now employ this equality in a chain of equivalences,

$$\begin{aligned}
&S=\{A_1,\, A_2,\, A_3,\, \dots,\, A_n\}\text{ is an orthonormal set}\\
&\iff \langle A_j,\, A_i\rangle = \begin{cases}0 & \text{if } i\ne j\\ 1 & \text{if } i=j\end{cases} && \text{Definition ONS}\\
&\iff [A^{\ast}A]_{ij} = \begin{cases}0 & \text{if } i\ne j\\ 1 & \text{if } i=j\end{cases}\\
&\iff [A^{\ast}A]_{ij} = [I_n]_{ij},\quad 1\le i\le n,\ 1\le j\le n && \text{Definition IM}\\
&\iff A^{\ast}A = I_n && \text{Definition ME}\\
&\iff A\text{ is a unitary matrix} && \text{Definition UM}
\end{aligned}$$

$\blacksquare$

Example OSMC

Orthonormal set from matrix columns

The matrix

$$U=\begin{bmatrix}
\frac{1+i}{\sqrt{5}} & \frac{3+2i}{\sqrt{55}} & \frac{2+2i}{\sqrt{22}}\\
\frac{1-i}{\sqrt{5}} & \frac{2+2i}{\sqrt{55}} & \frac{-3+i}{\sqrt{22}}\\
\frac{i}{\sqrt{5}} & \frac{3-5i}{\sqrt{55}} & -\frac{2}{\sqrt{22}}
\end{bmatrix}$$

from Example UM3 is a unitary matrix. By Theorem CUMOS, its columns

$$\left\{\begin{bmatrix}\frac{1+i}{\sqrt{5}}\\ \frac{1-i}{\sqrt{5}}\\ \frac{i}{\sqrt{5}}\end{bmatrix},\,
\begin{bmatrix}\frac{3+2i}{\sqrt{55}}\\ \frac{2+2i}{\sqrt{55}}\\ \frac{3-5i}{\sqrt{55}}\end{bmatrix},\,
\begin{bmatrix}\frac{2+2i}{\sqrt{22}}\\ \frac{-3+i}{\sqrt{22}}\\ -\frac{2}{\sqrt{22}}\end{bmatrix}\right\}$$

form an orthonormal set. You might find checking the six inner products of pairs of these vectors easier than doing the matrix product ${U}^{\ast}U$. Or, because the inner product is anti-commutative (Theorem IPAC), you only need check three inner products (see Exercise MINM.T12). $\boxtimes$

When using vectors and matrices that only have real number entries, orthogonal matrices are those matrices with inverses that equal their transpose. Similarly, the inner product is the familiar dot product. Keep this special case in mind as you read the next theorem.

Theorem UMPIP

Unitary Matrices Preserve Inner Products

Suppose that $U$ is a unitary matrix of size $n$, and $u$ and $v$ are two vectors from $\mathbb{C}^{n}$. Then

$$\langle Uu,\, Uv\rangle = \langle u,\, v\rangle \qquad\text{and}\qquad \Vert Uv\Vert = \Vert v\Vert$$

$\square$

Proof

$$\begin{aligned}
\langle Uu,\, Uv\rangle &= (Uu)^t\overline{Uv} && \text{Theorem MMIP}\\
&= u^tU^t\overline{Uv} && \text{Theorem MMT}\\
&= u^tU^t\overline{U}\,\overline{v} && \text{Theorem MMCC}\\
&= u^t\left(\overline{\overline{U}}\right)^t\overline{U}\,\overline{v} && \text{Theorem CCT}\\
&= u^t\overline{(\overline{U})^t}\,\overline{U}\,\overline{v} && \text{Theorem MCT}\\
&= u^t\overline{(\overline{U})^tU}\,\overline{v} && \text{Theorem MMCC}\\
&= u^t\overline{U^{\ast}U}\,\overline{v} && \text{Definition A}\\
&= u^t\overline{I_n}\,\overline{v} && \text{Definition UM}\\
&= u^tI_n\overline{v} && \text{Definition IM}\\
&= u^t\overline{v} && \text{Theorem MMIM}\\
&= \langle u,\, v\rangle && \text{Theorem MMIP}
\end{aligned}$$

The second conclusion is just a specialization of the first conclusion.

$$\begin{aligned}
\Vert Uv\Vert &= \sqrt{\Vert Uv\Vert^2}\\
&= \sqrt{\langle Uv,\, Uv\rangle} && \text{Theorem IPN}\\
&= \sqrt{\langle v,\, v\rangle}\\
&= \sqrt{\Vert v\Vert^2} && \text{Theorem IPN}\\
&= \Vert v\Vert
\end{aligned}$$

$\blacksquare$

Aside from the inherent interest in this theorem, it makes a bigger statement about unitary matrices. When we view vectors geometrically as directions or forces, then the norm equates to a notion of length. If we transform a vector by multiplication with a unitary matrix, then the length (norm) of that vector stays the same. If we consider column vectors with two or three slots containing only real numbers, then the inner product of two such vectors is just the dot product, and this quantity can be used to compute the angle between two vectors. When two vectors are multiplied (transformed) by the same unitary matrix, their dot product is unchanged and their individual lengths are unchanged. This results in the angle between the two vectors remaining unchanged as well.
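Theorem UMPIP can also be checked numerically. This sketch reuses the unitary matrix of Example UM3 and two arbitrary (hypothetical) vectors, computing the book's inner product $\langle u,\, v\rangle = u^t\overline{v}$ directly:

```python
from math import sqrt

s5, s55, s22 = sqrt(5), sqrt(55), sqrt(22)
U = [[(1 + 1j) / s5, (3 + 2j) / s55, (2 + 2j) / s22],
     [(1 - 1j) / s5, (2 + 2j) / s55, (-3 + 1j) / s22],
     [1j / s5,       (3 - 5j) / s55, -2 / s22]]

def matvec(M, x):
    return [sum(M[i][j] * x[j] for j in range(len(x))) for i in range(len(M))]

def ip(u, v):
    # the book's inner product: <u, v> = u^t (entrywise conjugate of v)
    return sum(a * b.conjugate() for a, b in zip(u, v))

u = [1 + 2j, 3 + 0j, -1j]
v = [2 + 0j, -1 + 1j, 4 + 0j]

# inner products are preserved ...
assert abs(ip(matvec(U, u), matvec(U, v)) - ip(u, v)) < 1e-12
# ... and so are norms
assert abs(sqrt(ip(matvec(U, v), matvec(U, v)).real) - sqrt(ip(v, v).real)) < 1e-12
```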

A “unitary transformation” (matrix-vector products with unitary matrices) thus preserves geometrical relationships among vectors representing directions, forces, or other physical quantities. In the case of a two-slot vector with real entries, this is simply a rotation. These sorts of computations are exceedingly important in computer graphics such as games and real-time simulations, especially when increased realism is achieved by performing many such computations quickly. We will see unitary matrices again in subsequent sections (especially Theorem OD) and in each instance, consider the interpretation of the unitary matrix as a sort of geometry-preserving transformation. Some authors use the term isometry to highlight this behavior. We will speak loosely of a unitary matrix as being a sort of generalized rotation.

A final reminder: the terms “dot product,” “symmetric matrix” and “orthogonal matrix” used in reference to vectors or matrices with real number entries correspond to the terms “inner product,” “Hermitian matrix” and “unitary matrix” when we generalize to include complex number entries, so keep that in mind as you read elsewhere.

- Compute the inverse of the coefficient matrix of the system of equations below and use the inverse to solve the system. $$\begin{aligned}4x_1+10x_2&=12\\ 2x_1+6x_2&=4\end{aligned}$$
- In the reading questions for Section MISLE you were asked to find the inverse of
the $3\times 3$
matrix below.
$$\begin{bmatrix}2&3&1\\ 1&-2&-3\\ -2&4&6\end{bmatrix}$$ Because the matrix was not nonsingular, you had no theorems at that point that would allow you to compute the inverse. Explain why you now know that the inverse does not exist (which is different than not being able to compute it) by quoting the relevant theorem's acronym.

- Is the matrix $A$
unitary? Why?
$$A=\begin{bmatrix}\frac{1}{\sqrt{22}}\left(4+2i\right) & \frac{1}{\sqrt{374}}\left(5+3i\right)\\ \frac{1}{\sqrt{22}}\left(-1-i\right) & \frac{1}{\sqrt{374}}\left(12+14i\right)\end{bmatrix}$$

C20 Let $A=\begin{bmatrix}1&2&1\\ 0&1&1\\ 1&0&2\end{bmatrix}$ and $B=\begin{bmatrix}-1&1&0\\ 1&2&1\\ 0&1&1\end{bmatrix}$. Verify that $AB$ is nonsingular.

Contributed by Chris Black
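One way to verify C20 numerically (a NumPy sketch, not part of the text) is to use the determinant: a square matrix is nonsingular exactly when its determinant is nonzero.

```python
import numpy as np

A = np.array([[1, 2, 1],
              [0, 1, 1],
              [1, 0, 2]])
B = np.array([[-1, 1, 0],
              [1, 2, 1],
              [0, 1, 1]])

# det(AB) = det(A) det(B) = 3 * (-2) = -6, which is nonzero
det_AB = np.linalg.det(A @ B)
assert not np.isclose(det_AB, 0)   # AB is nonsingular
assert np.isclose(det_AB, -6)
```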

C40 Solve the system of equations below using the inverse of a matrix.

$$\begin{array}{lll}\hfill {x}_{1}+{x}_{2}+3{x}_{3}+{x}_{4}=5& \phantom{\rule{2em}{0ex}}& \hfill \\ \hfill -2{x}_{1}-{x}_{2}-4{x}_{3}-{x}_{4}=-7& \phantom{\rule{2em}{0ex}}& \hfill \\ \hfill {x}_{1}+4{x}_{2}+10{x}_{3}+2{x}_{4}=9& \phantom{\rule{2em}{0ex}}& \hfill \\ \hfill -2{x}_{1}-4{x}_{3}+5{x}_{4}=9& \phantom{\rule{2em}{0ex}}& \hfill \end{array}$$

Contributed by Robert Beezer Solution [717]

M10 Find values of $x$,
$y$, and
$z$ so that
matrix $A=\left[\begin{array}{ccc}\hfill 1\hfill & \hfill 2\hfill & \hfill x\hfill \\ \hfill 3\hfill & \hfill 0\hfill & \hfill y\hfill \\ \hfill 1\hfill & \hfill 1\hfill & \hfill z\hfill \end{array}\right]$
is invertible.

Contributed by Chris Black Solution [718]

M11 Find values of $x$,
$y$, and
$z$ so that
matrix $A=\left[\begin{array}{ccc}\hfill 1\hfill & \hfill x\hfill & \hfill 1\hfill \\ \hfill 1\hfill & \hfill y\hfill & \hfill 4\hfill \\ \hfill 0\hfill & \hfill z\hfill & \hfill 5\hfill \end{array}\right]$
is singular.

Contributed by Chris Black Solution [719]

M15 If $A$
and $B$ are
$n\times n$ matrices,
$A$ is nonsingular,
and $B$ is singular,
show directly that $AB$
is singular, without using Theorem NPNT.

Contributed by Chris Black Solution [720]

M20 Construct an example of a $4\times 4$
unitary matrix.

Contributed by Robert Beezer Solution [717]

M80 Matrix multiplication interacts nicely with many operations. But not always with transforming a matrix to reduced row-echelon form. Suppose that $A$ is an $m\times n$ matrix and $B$ is an $n\times p$ matrix. Let $P$ be a matrix that is row-equivalent to $A$ and in reduced row-echelon form, $Q$ be a matrix that is row-equivalent to $B$ and in reduced row-echelon form, and let $R$ be a matrix that is row-equivalent to $AB$ and in reduced row-echelon form. Is $PQ=R$? (In other words, with nonstandard notation, is $\text{rref}\left(A\right)\text{rref}\left(B\right)=\text{rref}\left(AB\right)$?)

Construct a counterexample to show that, in general, this
statement is false. Then find a large class of matrices where if
$A$ and
$B$ are in
the class, then the statement is true.

Contributed by Mark Hamrick Solution [720]

T10 Suppose that $Q$
and $P$ are unitary
matrices of size $n$.
Prove that $QP$
is a unitary matrix.

Contributed by Robert Beezer
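One possible route for T10 (a sketch, not from the text): use the characterization of a unitary matrix $U$ by ${U}^{\ast}U={I}_{n}$, together with the fact that the adjoint of a product reverses the order of the factors. Then
$${\left(QP\right)}^{\ast}\left(QP\right)={P}^{\ast}{Q}^{\ast}QP={P}^{\ast}{I}_{n}P={P}^{\ast}P={I}_{n}$$
so $QP$ is unitary.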

T11 Prove that Hermitian matrices (Definition HM) have
real entries on the diagonal. More precisely, suppose that
$A$ is a Hermitian
matrix of size $n$.
Then ${\left[A\right]}_{ii}\in \mathbb{R}$,
$1\le i\le n$.

Contributed by Robert Beezer

T12 Suppose that we are checking if a square matrix of size
$n$ is unitary.
Show that a straightforward application of Theorem CUMOS requires the computation
of ${n}^{2}$
inner products when the matrix is unitary, and fewer when the matrix is not
unitary. Then show that this maximum number of inner products can be reduced
to $\frac{1}{2}n\left(n+1\right)$ in
light of Theorem IPAC.

Contributed by Robert Beezer

T25 The notation ${A}^{k}$
means a repeated matrix product between
$k$ copies of the
square matrix $A$.

(a) Assume $A$ is
an $n\times n$ matrix where
${A}^{2}=\mathcal{O}$ (which does not
imply that $A=\mathcal{O}$). Prove
that ${I}_{n}-A$ is invertible
by showing that ${I}_{n}+A$
is an inverse of ${I}_{n}-A$.

(b) Assume that $A$
is an $n\times n$ matrix
where ${A}^{3}=\mathcal{O}$.
Prove that ${I}_{n}-A$
is invertible.

(c) Form a general theorem based on your observations from parts (a) and (b)
and provide a proof.

Contributed by Manley Perkel
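Before attempting a proof of T25, a numerical sanity check may help (a NumPy sketch, not from the text; the nilpotent matrices here are my own choices). The telescoping products $\left({I}_{n}-A\right)\left({I}_{n}+A\right)={I}_{n}-{A}^{2}$ and $\left({I}_{n}-A\right)\left({I}_{n}+A+{A}^{2}\right)={I}_{n}-{A}^{3}$ are the key observation:

```python
import numpy as np

I2 = np.eye(2)
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])         # A^2 = O, but A != O

assert np.allclose(A @ A, np.zeros((2, 2)))
# (I - A)(I + A) = I - A^2 = I, so I + A is an inverse of I - A
assert np.allclose((I2 - A) @ (I2 + A), I2)

I3 = np.eye(3)
B = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [0.0, 0.0, 0.0]])    # B^3 = O
# Candidate inverse of I - B: I + B + B^2 (the product telescopes to I - B^3)
assert np.allclose((I3 - B) @ (I3 + B + B @ B), I3)
```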

C40 Contributed by Robert Beezer Statement [713]

The coefficient matrix and vector of constants for the system are

$$A=\left[\begin{array}{cccc}\hfill 1\hfill & \hfill 1\hfill & \hfill 3\hfill & \hfill 1\hfill \\ \hfill -2\hfill & \hfill -1\hfill & \hfill -4\hfill & \hfill -1\hfill \\ \hfill 1\hfill & \hfill 4\hfill & \hfill 10\hfill & \hfill 2\hfill \\ \hfill -2\hfill & \hfill 0\hfill & \hfill -4\hfill & \hfill 5\hfill \end{array}\right]\phantom{\rule{2em}{0ex}}b=\left[\begin{array}{c}\hfill 5\hfill \\ \hfill -7\hfill \\ \hfill 9\hfill \\ \hfill 9\hfill \end{array}\right]$$

${A}^{-1}$ can be computed by using a calculator, or by the method of Theorem CINM. Then Theorem SNCM says the unique solution is

$${A}^{-1}b=\left[\begin{array}{cccc}\hfill 38\hfill & \hfill 18\hfill & \hfill -5\hfill & \hfill -2\hfill \\ \hfill 96\hfill & \hfill 47\hfill & \hfill -12\hfill & \hfill -5\hfill \\ \hfill -39\hfill & \hfill -19\hfill & \hfill 5\hfill & \hfill 2\hfill \\ \hfill -16\hfill & \hfill -8\hfill & \hfill 2\hfill & \hfill 1\hfill \end{array}\right]\left[\begin{array}{c}\hfill 5\hfill \\ \hfill -7\hfill \\ \hfill 9\hfill \\ \hfill 9\hfill \end{array}\right]=\left[\begin{array}{c}\hfill 1\hfill \\ \hfill -2\hfill \\ \hfill 1\hfill \\ \hfill 3\hfill \end{array}\right]$$
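This computation is easy to double-check by machine (a NumPy sketch, not part of the text), building the coefficient matrix and vector of constants directly from the system in C40:

```python
import numpy as np

# Coefficient matrix and vector of constants from exercise C40
A = np.array([[1, 1, 3, 1],
              [-2, -1, -4, -1],
              [1, 4, 10, 2],
              [-2, 0, -4, 5]])
b = np.array([5, -7, 9, 9])

x = np.linalg.inv(A) @ b
assert np.allclose(x, [1, -2, 1, 3])   # matches the stated unique solution
assert np.allclose(A @ x, b)           # and indeed solves the system
```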

M20 Contributed by Robert Beezer Statement [714]

The $4\times 4$ identity
matrix, ${I}_{4}$,
would be one example (Definition IM). Any of the 23 other rearrangements of the
columns of ${I}_{4}$
would be a simple, but less trivial, example. See Example UPM.

M10 Contributed by Chris Black Statement [713]

There are an infinite number of possible answers. We want to find a vector $\left[\begin{array}{c}\hfill x\hfill \\ \hfill y\hfill \\ \hfill z\hfill \end{array}\right]$ so that the set

$$\left\{\left[\begin{array}{c}\hfill 1\hfill \\ \hfill 3\hfill \\ \hfill 1\hfill \end{array}\right],\phantom{\rule{0.3em}{0ex}}\left[\begin{array}{c}\hfill 2\hfill \\ \hfill 0\hfill \\ \hfill 1\hfill \end{array}\right],\phantom{\rule{0.3em}{0ex}}\left[\begin{array}{c}\hfill x\hfill \\ \hfill y\hfill \\ \hfill z\hfill \end{array}\right]\right\}$$

is a linearly independent set. We need a vector not in the span of the first two columns, which geometrically means that we need it to not be in the same plane as the first two columns of $A$. We can choose any values we want for $x$ and $y$, and then choose a value of $z$ that makes the three vectors linearly independent.

We will (arbitrarily) choose $x=1$ and $y=1$. Then, we have

$$\begin{array}{llll}\hfill A=\left[\begin{array}{ccc}\hfill 1\hfill & \hfill 2\hfill & \hfill 1\hfill \\ \hfill 3\hfill & \hfill 0\hfill & \hfill 1\hfill \\ \hfill 1\hfill & \hfill 1\hfill & \hfill z\hfill \end{array}\right]& \underset{}{\overset{\text{RREF}}{\to}}\left[\begin{array}{ccc}\hfill \text{1}\hfill & \hfill 0\hfill & \hfill 2z-1\hfill \\ \hfill 0\hfill & \hfill \text{1}\hfill & \hfill 1-z\hfill \\ \hfill 0\hfill & \hfill 0\hfill & \hfill 4-6z\hfill \end{array}\right]\phantom{\rule{2em}{0ex}}& \hfill & \phantom{\rule{2em}{0ex}}\end{array}$$which is invertible if and only if $4-6z\ne 0$. Thus, we can choose any value as long as $z\ne \frac{2}{3}$, so we choose $z=0$, and we have found a matrix $A=\left[\begin{array}{ccc}\hfill 1\hfill & \hfill 2\hfill & \hfill 1\hfill \\ \hfill 3\hfill & \hfill 0\hfill & \hfill 1\hfill \\ \hfill 1\hfill & \hfill 1\hfill & \hfill 0\hfill \end{array}\right]$ that is invertible.

M11 Contributed by Chris Black Statement [714]

There are an infinite number of possible answers. We need the set of vectors

$$\left\{\left[\begin{array}{c}\hfill 1\hfill \\ \hfill 1\hfill \\ \hfill 0\hfill \end{array}\right],\phantom{\rule{0.3em}{0ex}}\left[\begin{array}{c}\hfill x\hfill \\ \hfill y\hfill \\ \hfill z\hfill \end{array}\right],\phantom{\rule{0.3em}{0ex}}\left[\begin{array}{c}\hfill 1\hfill \\ \hfill 4\hfill \\ \hfill 5\hfill \end{array}\right]\right\}$$

to be linearly dependent. One way to do this by inspection is to have $\left[\begin{array}{c}\hfill x\hfill \\ \hfill y\hfill \\ \hfill z\hfill \end{array}\right]=\left[\begin{array}{c}\hfill 1\hfill \\ \hfill 4\hfill \\ \hfill 5\hfill \end{array}\right]$, so the second and third columns are equal. Thus, if we let $x=1$, $y=4$, $z=5$, then the matrix $A=\left[\begin{array}{ccc}\hfill 1\hfill & \hfill 1\hfill & \hfill 1\hfill \\ \hfill 1\hfill & \hfill 4\hfill & \hfill 4\hfill \\ \hfill 0\hfill & \hfill 5\hfill & \hfill 5\hfill \end{array}\right]$ is singular.

M15 Contributed by Chris Black Statement [714]

If $B$ is singular, then
there exists a vector $x\ne 0$
so that $x\in \mathcal{N}\phantom{\rule{0.3em}{0ex}}\left(B\right)$.
Thus, $Bx=0$,
so $A\left(Bx\right)=\left(AB\right)x=0$, so
$x\in \mathcal{N}\phantom{\rule{0.3em}{0ex}}\left(AB\right)$. Since the
null space of $AB$
is not trivial, $AB$
is a singular matrix.
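The argument can be seen concretely (a NumPy sketch, not part of the text; the particular matrices are my own choices): a nonzero vector in the null space of $B$ is automatically in the null space of $AB$.

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [0.0, 1.0]])   # nonsingular (determinant 1)
B = np.array([[1.0, 2.0],
              [1.0, 2.0]])   # singular: the columns are dependent

x = np.array([2.0, -1.0])    # a nonzero vector in N(B)
assert np.allclose(B @ x, 0)
# The same x lies in N(AB), so N(AB) is nontrivial and AB is singular
assert np.allclose((A @ B) @ x, 0)
```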

M80 Contributed by Robert Beezer Statement [714]

Take

$$A=\left[\begin{array}{cc}\hfill 1\hfill & \hfill 0\hfill \\ \hfill 0\hfill & \hfill 0\hfill \end{array}\right]\phantom{\rule{2em}{0ex}}B=\left[\begin{array}{cc}\hfill 0\hfill & \hfill 0\hfill \\ \hfill 1\hfill & \hfill 0\hfill \end{array}\right]$$

Then $A$ is already in reduced row-echelon form, and by swapping rows, $B$ row-reduces to $A$. So the product of the reduced row-echelon forms of $A$ and $B$ is $AA=A\ne \mathcal{O}$. However, the product $AB$ is the $2\times 2$ zero matrix, which is in reduced row-echelon form, and not equal to $AA$. When you get there, Theorem PEEF or Theorem EMDRO might shed some light on why we would not expect this statement to be true in general.
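A counterexample with the behavior described in this solution can be checked mechanically (a SymPy sketch, not part of the text; the matrices are one standard choice consistent with the description):

```python
import sympy as sp

# A is already in RREF; B row-reduces (by a row swap) to A; yet AB = O
A = sp.Matrix([[1, 0], [0, 0]])
B = sp.Matrix([[0, 0], [1, 0]])

rrefA, _ = A.rref()
rrefB, _ = B.rref()
rrefAB, _ = (A * B).rref()

assert rrefA * rrefB == A        # rref(A) rref(B) = AA = A, which is nonzero
assert rrefAB == sp.zeros(2, 2)  # but rref(AB) is the zero matrix
```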

If $A$ and $B$ are nonsingular, then $AB$ is nonsingular (Theorem NPNT), and all three matrices $A$, $B$ and $AB$ row-reduce to the identity matrix (Theorem NMRRI). By Theorem MMIM, the desired relationship is true.