
We have seen how to compute the determinant of a matrix, and the incredible fact that we can perform expansion about *any* row *or* column to make this computation. In this largely theoretical section, we will state and prove several more intriguing properties about determinants. Our main goal will be the two results in Theorem SMZD and Theorem DRMM, but more specifically, we will see how the value of a determinant will allow us to gain insight into the various properties of a square matrix.

We start easy with a straightforward theorem whose proof presages the style of subsequent proofs in this subsection.

##### Theorem DZRC: Determinant with Zero Row or Column

Suppose that \(A\) is a square matrix with a row where every entry is zero, or a column where every entry is zero. Then \(\detname{A}=0\text{.}\)
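The claim is easy to check numerically. Here is a minimal Python sketch (not from the text), using a small helper `det2` of our own that applies the \(2\times 2\) formula \(ad-bc\) of Theorem DMST.

```python
def det2(m):
    # 2 x 2 determinant via Theorem DMST: ad - bc
    (a, b), (c, d) = m
    return a * d - b * c

# A zero row or a zero column forces a zero determinant.
print(det2([[0, 0], [3, 4]]))  # zero row    -> 0
print(det2([[0, 5], [0, 7]]))  # zero column -> 0
```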

##### Theorem DRCS: Determinant with Row or Column Swap

Suppose that \(A\) is a square matrix. Let \(B\) be the square matrix obtained from \(A\) by interchanging the location of two rows, or interchanging the location of two columns. Then \(\detname{B}=-\detname{A}\text{.}\)
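A quick numerical illustration (our own, not the book's), again with a `det2` helper for \(2\times 2\) determinants:

```python
def det2(m):
    # 2 x 2 determinant via Theorem DMST: ad - bc
    (a, b), (c, d) = m
    return a * d - b * c

A = [[1, 2], [3, 4]]
B = [[3, 4], [1, 2]]   # A with its two rows interchanged
print(det2(A))  # -> -2
print(det2(B))  # ->  2, the negative of det(A)
```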

So Theorem DRCS tells us the effect of the first row operation (Definition RO) on the determinant of a matrix. Here is the effect of the second row operation.

##### Theorem DRCM: Determinant with Row or Column Multiples

Suppose that \(A\) is a square matrix. Let \(B\) be the square matrix obtained from \(A\) by multiplying a single row by the scalar \(\alpha\text{,}\) or by multiplying a single column by the scalar \(\alpha\text{.}\) Then \(\detname{B}=\alpha\detname{A}\text{.}\)
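Another small sketch of ours: scaling one row by \(\alpha=5\) scales the determinant by \(5\).

```python
def det2(m):
    # 2 x 2 determinant via Theorem DMST: ad - bc
    (a, b), (c, d) = m
    return a * d - b * c

A = [[1, 2], [3, 4]]      # det = -2
B = [[1, 2], [15, 20]]    # A with its second row multiplied by 5
print(det2(B))  # -> -10, which is 5 * det(A)
```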

Let us now work toward understanding the effect of all three row operations on the determinant. First we need an intermediate result, though it is an easy one.

##### Theorem DERC: Determinant with Equal Rows or Columns

Suppose that \(A\) is a square matrix with two equal rows, or two equal columns. Then \(\detname{A}=0\text{.}\)
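Numerically (an illustration of ours, not from the text), repeating a row or a column kills the determinant:

```python
def det2(m):
    # 2 x 2 determinant via Theorem DMST: ad - bc
    (a, b), (c, d) = m
    return a * d - b * c

print(det2([[1, 2], [1, 2]]))  # equal rows    -> 0
print(det2([[3, 3], [7, 7]]))  # equal columns -> 0
```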

Now we can explain the effect of the third row operation. Here we go.

##### Theorem DRCMA: Determinant with Row or Column Multiples and Addition

Suppose that \(A\) is a square matrix. Let \(B\) be the square matrix obtained from \(A\) by multiplying a row by the scalar \(\alpha\) and then adding it to another row, or by multiplying a column by the scalar \(\alpha\) and then adding it to another column. Then \(\detname{B}=\detname{A}\text{.}\)
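One more sketch of ours: adding twice the first row to the second leaves the determinant alone.

```python
def det2(m):
    # 2 x 2 determinant via Theorem DMST: ad - bc
    (a, b), (c, d) = m
    return a * d - b * c

A = [[1, 2], [3, 4]]   # det = -2
B = [[1, 2], [5, 8]]   # second row of A plus 2 times the first row
print(det2(B))  # -> -2, unchanged
```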

Is this what you expected? We could argue that the third row operation is the most popular, and yet it has no effect whatsoever on the determinant of a matrix! We can exploit this, along with our understanding of the other two row operations, to provide another approach to computing a determinant. We will explain this approach in the context of an example.
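The book's own worked example does not survive in this excerpt, so here is a hedged Python sketch of the approach (our code, our function name `det_by_row_reduction`): reduce to a triangular matrix using only row swaps (each negating the determinant, Theorem DRCS) and the third row operation (which changes nothing), then multiply the diagonal entries, since repeated expansion about the first column shows the determinant of a triangular matrix is the product of its diagonal. The test matrix is the \(4\times 4\) matrix \(A\) from the exercise below, for which \(\detname{A}=-120\text{.}\)

```python
from fractions import Fraction

def det_by_row_reduction(A):
    # Exact determinant via row reduction to upper-triangular form.
    A = [[Fraction(x) for x in row] for row in A]
    n, sign = len(A), 1
    for k in range(n):
        # Find a nonzero pivot in column k, at or below row k.
        pivot = next((i for i in range(k, n) if A[i][k] != 0), None)
        if pivot is None:
            return Fraction(0)       # no pivot available: determinant is zero
        if pivot != k:
            A[k], A[pivot] = A[pivot], A[k]
            sign = -sign             # each row swap negates the determinant
        for i in range(k + 1, n):
            m = A[i][k] / A[k][k]
            # Third row operation: determinant is unchanged.
            A[i] = [a - m * b for a, b in zip(A[i], A[k])]
    prod = Fraction(sign)
    for k in range(n):
        prod *= A[k][k]              # triangular: product of diagonal entries
    return prod

print(det_by_row_reduction([[0, 8, 3, -4],
                            [-1, 2, -2, 5],
                            [-2, 8, 4, 3],
                            [0, -4, 2, -3]]))  # -> -120
```

Exact rational arithmetic via `fractions.Fraction` sidesteps the round-off concerns mentioned later in this section.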

As a final preparation for our two most important theorems about determinants, we prove a handful of facts about the interplay of row operations and matrix multiplication with elementary matrices with regard to the determinant. But first, a simple, but crucial, fact about the identity matrix.

##### Theorem DIM: Determinant of the Identity Matrix

For every \(n\geq 1\text{,}\) \(\detname{I_n}=1\text{.}\)

##### Theorem DEM: Determinants of Elementary Matrices

For the three possible versions of an elementary matrix (Definition ELEM) we have the determinants:

- \(\detname{\elemswap{i}{j}}=-1\)
- \(\detname{\elemmult{\alpha}{i}}=\alpha\)
- \(\detname{\elemadd{\alpha}{i}{j}}=1\)
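These three values are easy to confirm in the \(2\times 2\) case; a sketch of ours (with `det2` again computing \(ad-bc\)):

```python
def det2(m):
    # 2 x 2 determinant via Theorem DMST: ad - bc
    (a, b), (c, d) = m
    return a * d - b * c

alpha = 7
E_swap = [[0, 1], [1, 0]]       # interchange the two rows of I_2
E_mult = [[alpha, 0], [0, 1]]   # multiply row 1 of I_2 by alpha
E_add  = [[1, 0], [alpha, 1]]   # add alpha times row 1 to row 2 of I_2
print(det2(E_swap))  # -> -1
print(det2(E_mult))  # ->  7
print(det2(E_add))   # ->  1
```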

##### Theorem DEMMM: Determinants, Elementary Matrices, Matrix Multiplication

Suppose that \(A\) is a square matrix of size \(n\) and \(E\) is any elementary matrix of size \(n\text{.}\) Then \begin{equation*} \detname{EA}=\detname{E}\detname{A}\text{.} \end{equation*}
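A numerical spot-check of ours (helpers `det2` and `matmul2` are our own names), with a row-swap elementary matrix:

```python
def det2(m):
    # 2 x 2 determinant via Theorem DMST: ad - bc
    (a, b), (c, d) = m
    return a * d - b * c

def matmul2(X, Y):
    # Product of two 2 x 2 matrices.
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

E = [[0, 1], [1, 0]]   # elementary row-swap matrix, det = -1
A = [[1, 2], [3, 4]]   # det = -2
print(det2(matmul2(E, A)), det2(E) * det2(A))  # -> 2 2
```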

If you asked someone with substantial experience working with matrices about the value of the determinant, the following theorem would likely be the first thing to come to mind.

##### Theorem SMZD: Singular Matrices have Zero Determinants

Let \(A\) be a square matrix. Then \(A\) is singular if and only if \(\detname{A}=0\text{.}\)
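For instance (a small illustration of ours, not from the text):

```python
def det2(m):
    # 2 x 2 determinant via Theorem DMST: ad - bc
    (a, b), (c, d) = m
    return a * d - b * c

# Second row is twice the first, so the matrix is singular ...
print(det2([[1, 2], [2, 4]]))  # -> 0
# ... while this matrix row-reduces to I_2, so it is nonsingular.
print(det2([[1, 2], [3, 4]]))  # -> -2
```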

For the case of \(2\times 2\) matrices you might compare the application of Theorem SMZD with the combination of the results stated in Theorem DMST and Theorem TTMI.

In Section MINM we said “singular matrices are a distinct minority.” If you built a random matrix and took its determinant, how likely would it be that you got zero?

Since Theorem SMZD is an equivalence (Proof Technique E) we can expand on our growing list of equivalences about nonsingular matrices. The addition of the condition \(\detname{A}\neq 0\) is one of the best motivations for learning about determinants.

##### Theorem NME7: Nonsingular Matrix Equivalences, Round 7

Suppose that \(A\) is a square matrix of size \(n\text{.}\) The following are equivalent.

- \(A\) is nonsingular.
- \(A\) row-reduces to the identity matrix.
- The null space of \(A\) contains only the zero vector, \(\nsp{A}=\set{\zerovector}\text{.}\)
- The linear system \(\linearsystem{A}{\vect{b}}\) has a unique solution for every possible choice of \(\vect{b}\text{.}\)
- The columns of \(A\) are a linearly independent set.
- \(A\) is invertible.
- The column space of \(A\) is \(\complex{n}\text{,}\) \(\csp{A}=\complex{n}\text{.}\)
- The columns of \(A\) are a basis for \(\complex{n}\text{.}\)
- The rank of \(A\) is \(n\text{,}\) \(\rank{A}=n\text{.}\)
- The nullity of \(A\) is zero, \(\nullity{A}=0\text{.}\)
- The determinant of \(A\) is nonzero, \(\detname{A}\neq 0\text{.}\)

Computationally, row-reducing a matrix is the most efficient way to determine if a matrix is nonsingular, though the effect of using division in a computer can lead to round-off errors that confuse small quantities with critical zero quantities. Conceptually, the determinant may seem the most efficient way to determine if a matrix is nonsingular. The definition of a determinant uses just addition, subtraction and multiplication, so division is never a problem. And the final test is easy: is the determinant zero or not? However, the number of operations involved in computing a determinant by the definition very quickly becomes so excessive as to be impractical.
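To make the operation-count remark concrete, here is a rough count of ours (an illustration, not from the text) of the multiplications used by cofactor expansion about a single row: each of the \(n\) entries is multiplied by its cofactor, and each cofactor requires an \((n-1)\times(n-1)\) determinant computed the same way, so the count grows like \(n!\text{,}\) while row reduction needs on the order of \(n^3\) operations.

```python
def cofactor_mults(n):
    # Multiplications performed by cofactor expansion about one row of an
    # n x n matrix (a rough count, ignoring the signs): n products of an
    # entry with its cofactor, each cofactor costing cofactor_mults(n - 1).
    if n == 1:
        return 0
    return n * (1 + cofactor_mults(n - 1))

# Compare with the roughly n**3 operations of row reduction.
for n in (2, 5, 10):
    print(n, cofactor_mults(n), n ** 3)
```

Already at \(n=10\) the cofactor count is in the millions; by \(n=20\) it is astronomically large.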

Now for the *coup de grâce*. We will generalize Theorem DEMMM to the case of *any* two square matrices. You may recall thinking that matrix multiplication was defined in a needlessly complicated manner. For sure the definition of a determinant seems even stranger. (Though Theorem SMZD might be forcing you to reconsider.) Read the statement of the next theorem and contemplate how nicely matrix multiplication and determinants play with each other.

##### Theorem DRMM: Determinant Respects Matrix Multiplication

Suppose that \(A\) and \(B\) are square matrices of the same size. Then \(\detname{AB}=\detname{A}\detname{B}\text{.}\)
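A quick check of ours in the \(2\times 2\) case (helpers `det2` and `matmul2` are our own names):

```python
def det2(m):
    # 2 x 2 determinant via Theorem DMST: ad - bc
    (a, b), (c, d) = m
    return a * d - b * c

def matmul2(X, Y):
    # Product of two 2 x 2 matrices.
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[1, 2], [3, 4]]   # det = -2
B = [[5, 6], [7, 8]]   # det = -2
print(det2(matmul2(A, B)))   # -> 4
print(det2(A) * det2(B))     # -> 4
```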

It is amazing that matrix multiplication and the determinant interact this way. Might it also be true that \(\detname{A+B}=\detname{A}+\detname{B}\text{?}\) (Exercise PDM.M30)

Consider the two matrices below, and suppose you already have computed \(\detname{A}=-120\text{.}\) What is \(\detname{B}\text{?}\) Why? \begin{align*} A&= \begin{bmatrix} 0 & 8 & 3 & -4 \\ -1 & 2 & -2 & 5 \\ -2 & 8 & 4 & 3 \\ 0 & -4 & 2 & -3 \end{bmatrix} & B&= \begin{bmatrix} 0 & 8 & 3 & -4 \\ 0 & -4 & 2 & -3 \\ -2 & 8 & 4 & 3 \\ -1 & 2 & -2 & 5 \end{bmatrix} \end{align*}

State the theorem that allows us to make yet another extension to our NMEx series of theorems.

What is amazing about the interaction between matrix multiplication and the determinant?

Each of the archetypes below is a system of equations with a square coefficient matrix, or is a square matrix itself. Compute the determinant of each matrix, noting how Theorem SMZD indicates when the matrix is singular or nonsingular.

Archetype A, Archetype B, Archetype F, Archetype K, Archetype L

Construct a \(3\times 3\) nonsingular matrix and call it \(A\text{.}\) Then, for each entry of the matrix, compute the corresponding cofactor, and create a new \(3\times 3\) matrix full of these cofactors by placing the cofactor of an entry in the same location as the entry it was based on. Once complete, call this matrix \(C\text{.}\) Compute \(A\transpose{C}\text{.}\) Any observations? Repeat with a new matrix, or perhaps with a \(4\times 4\) matrix.

Construct an example to show that the following statement is not true for all square matrices \(A\) and \(B\) of the same size: \(\detname{A+B}=\detname{A}+\detname{B}\text{.}\)

Theorem NPNT says that if the product of square matrices \(AB\) is nonsingular, then the individual matrices \(A\) and \(B\) are nonsingular also. Construct a new proof of this result making use of theorems about determinants of matrices.

Use Theorem DRCM to prove Theorem DZRC as a corollary. (See Proof Technique LC.)

Suppose that \(A\) is a square matrix of size \(n\) and \(\alpha\in\complexes\) is a scalar. Prove that \(\detname{\alpha A}=\alpha^n\detname{A}\text{.}\)
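A numeric sanity check of ours (not a proof, and no substitute for the exercise) for \(n=2\) and \(\alpha=3\text{:}\)

```python
def det2(m):
    # 2 x 2 determinant via Theorem DMST: ad - bc
    (a, b), (c, d) = m
    return a * d - b * c

alpha, n = 3, 2
A = [[1, 2], [3, 4]]                        # det = -2
aA = [[alpha * x for x in row] for row in A]
print(det2(aA), alpha ** n * det2(A))       # -> -18 -18
```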

Employ Theorem DT to construct the second half of the proof of Theorem DRCM (the portion about a multiple of a column).