\(\newcommand{\orderof}[1]{\sim #1} \newcommand{\Z}{\mathbb{Z}} \newcommand{\reals}{\mathbb{R}} \newcommand{\real}[1]{\mathbb{R}^{#1}} \newcommand{\complexes}{\mathbb{C}} \newcommand{\complex}[1]{\mathbb{C}^{#1}} \newcommand{\conjugate}[1]{\overline{#1}} \newcommand{\modulus}[1]{\left\lvert#1\right\rvert} \newcommand{\zerovector}{\vect{0}} \newcommand{\zeromatrix}{\mathcal{O}} \newcommand{\innerproduct}[2]{\left\langle#1,\,#2\right\rangle} \newcommand{\norm}[1]{\left\lVert#1\right\rVert} \newcommand{\dimension}[1]{\dim\left(#1\right)} \newcommand{\nullity}[1]{n\left(#1\right)} \newcommand{\rank}[1]{r\left(#1\right)} \newcommand{\ds}{\oplus} \newcommand{\detname}[1]{\det\left(#1\right)} \newcommand{\detbars}[1]{\left\lvert#1\right\rvert} \newcommand{\trace}[1]{t\left(#1\right)} \newcommand{\sr}[1]{#1^{1/2}} \newcommand{\spn}[1]{\left\langle#1\right\rangle} \newcommand{\nsp}[1]{\mathcal{N}\!\left(#1\right)} \newcommand{\csp}[1]{\mathcal{C}\!\left(#1\right)} \newcommand{\rsp}[1]{\mathcal{R}\!\left(#1\right)} \newcommand{\lns}[1]{\mathcal{L}\!\left(#1\right)} \newcommand{\per}[1]{#1^\perp} \newcommand{\augmented}[2]{\left\lbrack\left.#1\,\right\rvert\,#2\right\rbrack} \newcommand{\linearsystem}[2]{\mathcal{LS}\!\left(#1,\,#2\right)} \newcommand{\homosystem}[1]{\linearsystem{#1}{\zerovector}} \newcommand{\rowopswap}[2]{R_{#1}\leftrightarrow R_{#2}} \newcommand{\rowopmult}[2]{#1R_{#2}} \newcommand{\rowopadd}[3]{#1R_{#2}+R_{#3}} \newcommand{\leading}[1]{\boxed{#1}} \newcommand{\rref}{\xrightarrow{\text{RREF}}} \newcommand{\elemswap}[2]{E_{#1,#2}} \newcommand{\elemmult}[2]{E_{#2}\left(#1\right)} \newcommand{\elemadd}[3]{E_{#2,#3}\left(#1\right)} \newcommand{\scalarlist}[2]{{#1}_{1},\,{#1}_{2},\,{#1}_{3},\,\ldots,\,{#1}_{#2}} \newcommand{\vect}[1]{\mathbf{#1}} \newcommand{\colvector}[1]{\begin{bmatrix}#1\end{bmatrix}} \newcommand{\vectorcomponents}[2]{\colvector{#1_{1}\\#1_{2}\\#1_{3}\\\vdots\\#1_{#2}}} \newcommand{\vectorlist}[2]{\vect{#1}_{1},\,\vect{#1}_{2},\,\vect{#1}_{3},\,\ldots,\,\vect{#1}_{#2}} \newcommand{\vectorentry}[2]{\left\lbrack#1\right\rbrack_{#2}} \newcommand{\matrixentry}[2]{\left\lbrack#1\right\rbrack_{#2}} \newcommand{\lincombo}[3]{#1_{1}\vect{#2}_{1}+#1_{2}\vect{#2}_{2}+#1_{3}\vect{#2}_{3}+\cdots +#1_{#3}\vect{#2}_{#3}} \newcommand{\matrixcolumns}[2]{\left\lbrack\vect{#1}_{1}|\vect{#1}_{2}|\vect{#1}_{3}|\ldots|\vect{#1}_{#2}\right\rbrack} \newcommand{\transpose}[1]{#1^{t}} \newcommand{\inverse}[1]{#1^{-1}} \newcommand{\submatrix}[3]{#1\left(#2|#3\right)} \newcommand{\adj}[1]{\transpose{\left(\conjugate{#1}\right)}} \newcommand{\adjoint}[1]{#1^\ast} \newcommand{\set}[1]{\left\{#1\right\}} \newcommand{\setparts}[2]{\left\lbrace#1\,\middle|\,#2\right\rbrace} \newcommand{\card}[1]{\left\lvert#1\right\rvert} \newcommand{\setcomplement}[1]{\overline{#1}} \newcommand{\charpoly}[2]{p_{#1}\left(#2\right)} \newcommand{\eigenspace}[2]{\mathcal{E}_{#1}\left(#2\right)} \newcommand{\eigensystem}[3]{\lambda&=#2&\eigenspace{#1}{#2}&=\spn{\set{#3}}} \newcommand{\geneigenspace}[2]{\mathcal{G}_{#1}\left(#2\right)} \newcommand{\algmult}[2]{\alpha_{#1}\left(#2\right)} \newcommand{\geomult}[2]{\gamma_{#1}\left(#2\right)} \newcommand{\indx}[2]{\iota_{#1}\left(#2\right)} \newcommand{\ltdefn}[3]{#1\colon #2\rightarrow#3} \newcommand{\lteval}[2]{#1\left(#2\right)} \newcommand{\ltinverse}[1]{#1^{-1}} \newcommand{\restrict}[2]{{#1}|_{#2}} \newcommand{\preimage}[2]{#1^{-1}\left(#2\right)} \newcommand{\rng}[1]{\mathcal{R}\!\left(#1\right)} \newcommand{\krn}[1]{\mathcal{K}\!\left(#1\right)} 
\newcommand{\compose}[2]{{#1}\circ{#2}} \newcommand{\vslt}[2]{\mathcal{LT}\left(#1,\,#2\right)} \newcommand{\isomorphic}{\cong} \newcommand{\similar}[2]{\inverse{#2}#1#2} \newcommand{\vectrepname}[1]{\rho_{#1}} \newcommand{\vectrep}[2]{\lteval{\vectrepname{#1}}{#2}} \newcommand{\vectrepinvname}[1]{\ltinverse{\vectrepname{#1}}} \newcommand{\vectrepinv}[2]{\lteval{\ltinverse{\vectrepname{#1}}}{#2}} \newcommand{\matrixrep}[3]{M^{#1}_{#2,#3}} \newcommand{\matrixrepcolumns}[4]{\left\lbrack \left.\vectrep{#2}{\lteval{#1}{\vect{#3}_{1}}}\right|\left.\vectrep{#2}{\lteval{#1}{\vect{#3}_{2}}}\right|\left.\vectrep{#2}{\lteval{#1}{\vect{#3}_{3}}}\right|\ldots\left|\vectrep{#2}{\lteval{#1}{\vect{#3}_{#4}}}\right.\right\rbrack} \newcommand{\cbm}[2]{C_{#1,#2}} \newcommand{\jordan}[2]{J_{#1}\left(#2\right)} \newcommand{\hadamard}[2]{#1\circ #2} \newcommand{\hadamardidentity}[1]{J_{#1}} \newcommand{\hadamardinverse}[1]{\widehat{#1}} \newcommand{\lt}{<} \newcommand{\gt}{>} \newcommand{\amp}{&} \)

Section DM Determinant of a Matrix

Before we define the determinant of a matrix, we take a slight detour to introduce elementary matrices. These will bring us back to the beginning of the course and our old friend, row operations.

Subsection EM Elementary Matrices

Elementary matrices are very simple, as you might have suspected from their name. Their purpose is to effect row operations (Definition RO) on a matrix through matrix multiplication (Definition MM). Their definitions look much more complicated than they really are, so be sure to skip over them on your first reading and head right for the explanation that follows and the first example.

Definition ELEM Elementary Matrices

  1. For \(i\neq j\text{,}\) \(\elemswap{i}{j}\) is the square matrix of size \(n\) with \begin{equation*} \matrixentry{\elemswap{i}{j}}{k\ell}= \begin{cases} 0 & k\neq i, k\neq j, \ell\neq k\\ 1 & k\neq i, k\neq j, \ell=k\\ 0 & k=i, \ell\neq j\\ 1 & k=i, \ell=j\\ 0 & k=j, \ell\neq i\\ 1 & k=j, \ell=i \end{cases}\text{.} \end{equation*}
  2. For \(\alpha\neq 0\text{,}\) \(\elemmult{\alpha}{i}\) is the square matrix of size \(n\) with \begin{equation*} \matrixentry{\elemmult{\alpha}{i}}{k\ell}= \begin{cases} 0 & \ell\neq k\\ 1 & k\neq i, \ell=k\\ \alpha & k=i, \ell=i \end{cases}\text{.} \end{equation*}
  3. For \(i\neq j\text{,}\) \(\elemadd{\alpha}{i}{j}\) is the square matrix of size \(n\) with \begin{equation*} \matrixentry{\elemadd{\alpha}{i}{j}}{k\ell}= \begin{cases} 0 & k\neq j, \ell\neq k\\ 1 & k\neq j, \ell=k\\ 0 & k=j, \ell\neq i, \ell\neq j\\ 1 & k=j, \ell=j\\ \alpha & k=j, \ell=i \end{cases}\text{.} \end{equation*}

Again, these matrices are not as complicated as their definitions suggest, since they are just small perturbations of the \(n\times n\) identity matrix (Definition IM). \(\elemswap{i}{j}\) is the identity matrix with rows \(i\) and \(j\) trading places, \(\elemmult{\alpha}{i}\) is the identity matrix where the diagonal entry in row \(i\) and column \(i\) has been replaced by \(\alpha\text{,}\) and \(\elemadd{\alpha}{i}{j}\) is the identity matrix where the entry in row \(j\) and column \(i\) has been replaced by \(\alpha\text{.}\) (Yes, those subscripts look backwards in the description of \(\elemadd{\alpha}{i}{j}\)). Notice that our notation makes no reference to the size of the elementary matrix, since this will always be apparent from the context, or unimportant.

The raison d'être of elementary matrices is to “do” row operations on matrices with matrix multiplication. So here is an example where we will both see some elementary matrices and see how they accomplish row operations.
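To make the idea concrete, here is a minimal sketch in Python with NumPy (an assumption of convenience, since the text prescribes no language) that builds each type of elementary matrix and applies it by left-multiplication. The helper names are hypothetical, and rows are 0-indexed where the text counts from 1.

```python
import numpy as np

def elem_swap(n, i, j):
    """E_{i,j}: the identity with rows i and j exchanged."""
    E = np.eye(n)
    E[[i, j]] = E[[j, i]]
    return E

def elem_mult(n, alpha, i):
    """E_i(alpha): the identity with alpha in the (i, i) entry."""
    E = np.eye(n)
    E[i, i] = alpha
    return E

def elem_add(n, alpha, i, j):
    """E_{i,j}(alpha): the identity with alpha in row j, column i."""
    E = np.eye(n)
    E[j, i] = alpha
    return E

A = np.array([[1., 2., 3.],
              [4., 5., 6.],
              [7., 8., 9.]])

print(elem_swap(3, 0, 1) @ A)       # rows 1 and 2 of A exchanged
print(elem_mult(3, 2.0, 2) @ A)     # row 3 of A doubled
print(elem_add(3, -4.0, 0, 1) @ A)  # -4(row 1) of A added to row 2
```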

The next three theorems establish that each elementary matrix effects a row operation via matrix multiplication.


Later in this section we will need two facts about elementary matrices.


Notice that we have now made use of the nonzero restriction on \(\alpha\) in the definition of \(\elemmult{\alpha}{i}\text{.}\) One more key property of elementary matrices.
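Numerically these facts are easy to spot-check. The sketch below (NumPy again, 0-indexed) exhibits an inverse for each type: a swap undoes itself, \(\elemmult{\alpha}{i}\) is undone by \(\elemmult{1/\alpha}{i}\) (which is exactly where \(\alpha\neq 0\) matters), and \(\elemadd{\alpha}{i}{j}\) is undone by \(\elemadd{-\alpha}{i}{j}\text{.}\)

```python
import numpy as np

n, alpha, i, j = 4, 5.0, 1, 2   # size and rows chosen arbitrarily (0-indexed)

E_swap = np.eye(n); E_swap[[i, j]] = E_swap[[j, i]]    # E_{i,j}
E_mult = np.eye(n); E_mult[i, i] = alpha               # E_i(alpha)
E_add  = np.eye(n); E_add[j, i]  = alpha               # E_{i,j}(alpha)

E_mult_inv = np.eye(n); E_mult_inv[i, i] = 1 / alpha   # needs alpha != 0
E_add_inv  = np.eye(n); E_add_inv[j, i]  = -alpha

for E, E_inv in [(E_swap, E_swap), (E_mult, E_mult_inv), (E_add, E_add_inv)]:
    assert np.allclose(E @ E_inv, np.eye(n))           # each product is I
print("each elementary matrix has an elementary inverse")
```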


Subsection DD Definition of the Determinant

We will now turn to the definition of a determinant and do some sample computations. The definition of the determinant function is recursive, that is, the determinant of a large matrix is defined in terms of the determinant of smaller matrices. To this end, we will make a few definitions.

Definition SM SubMatrix

Suppose that \(A\) is an \(m\times n\) matrix. Then the submatrix \(\submatrix{A}{i}{j}\) is the \((m-1)\times (n-1)\) matrix obtained from \(A\) by removing row \(i\) and column \(j\text{.}\)
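As a quick illustration, here is a sketch of this definition in Python (the function name is hypothetical, and the indices are 1-based to match the text):

```python
def submatrix(A, i, j):
    """A(i|j): A with row i and column j removed (1-indexed, as in the text)."""
    return [[A[r][c] for c in range(len(A[0])) if c != j - 1]
            for r in range(len(A)) if r != i - 1]

A = [[1, 2, 3],
     [4, 5, 6],
     [7, 8, 9]]
print(submatrix(A, 1, 2))   # [[4, 6], [7, 9]]
```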

Definition DM Determinant of a Matrix

Suppose \(A\) is a square matrix. Then its determinant, \(\detname{A}=\detbars{A}\text{,}\) is an element of \(\complexes\) defined recursively by:

  1. If \(A\) is a \(1\times 1\) matrix, then \(\detname{A}=\matrixentry{A}{11}\text{.}\)
  2. If \(A\) is a matrix of size \(n\) with \(n\geq 2\text{,}\) then \begin{align*} \detname{A}&= \matrixentry{A}{11}\detname{\submatrix{A}{1}{1}} -\matrixentry{A}{12}\detname{\submatrix{A}{1}{2}} +\matrixentry{A}{13}\detname{\submatrix{A}{1}{3}}-\\ &\quad \matrixentry{A}{14}\detname{\submatrix{A}{1}{4}} +\cdots +(-1)^{n+1}\matrixentry{A}{1n}\detname{\submatrix{A}{1}{n}}\text{.} \end{align*}

So to compute the determinant of a \(5\times 5\) matrix we must build 5 submatrices, each of size \(4\text{.}\) To compute the determinants of each of these \(4\times 4\) matrices we need to create 4 submatrices each, these now of size \(3\text{,}\) and so on. To compute the determinant of a \(10\times 10\) matrix would require computing the determinant of \(10!=10\times 9\times 8\times 7\times 6\times 5\times 4\times 3\times 2=3,628,800\) \(1\times 1\) matrices. Fortunately there are better ways. However, this does suggest an excellent computer programming exercise: write a recursive procedure to compute a determinant.
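That exercise might look like the sketch below: plain Python, expanding about the first row exactly as in Definition DM, and deliberately paying the exponential cost counted above.

```python
def det(A):
    """Determinant by recursive expansion about the first row."""
    n = len(A)
    if n == 1:                                          # 1 x 1 base case
        return A[0][0]
    total = 0
    for j in range(n):
        sub = [row[:j] + row[j + 1:] for row in A[1:]]  # the submatrix A(1|j+1)
        total += (-1) ** j * A[0][j] * det(sub)         # sign (-1)^{1+(j+1)}
    return total

print(det([[1, 2], [3, 4]]))                       # -2
print(det([[1, 2, 3], [4, 5, 6], [7, 8, 10]]))     # -3
```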

Let us compute the determinant of a reasonably sized matrix by hand.

In practice it is a bit silly to decompose a \(2\times 2\) matrix into a couple of \(1\times 1\) matrices and then compute the exceedingly easy determinants of these puny matrices. So here is a simple theorem: if \(A=\begin{bmatrix} a & b\\ c & d \end{bmatrix}\text{,}\) then \(\detname{A}=ad-bc\text{.}\)


Do you recall seeing the expression \(ad-bc\) before? (Hint: Theorem TTMI.)

Subsection CD Computing Determinants

There are a variety of ways to compute the determinant. We will first establish that we can mimic our definition of the determinant, but using matrix entries and submatrices based on a row other than the first one.
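In code, the claim is that expanding about any row produces the same value. Here is a sketch under the same assumptions as before (hypothetical helper, rows 1-indexed as in the text):

```python
def det_about_row(A, i):
    """Expand det(A) about row i (1-indexed); submatrices use the first row."""
    n = len(A)
    if n == 1:
        return A[0][0]
    total = 0
    for j in range(1, n + 1):
        sub = [[A[r][c] for c in range(n) if c != j - 1]
               for r in range(n) if r != i - 1]          # the submatrix A(i|j)
        total += (-1) ** (i + j) * A[i - 1][j - 1] * det_about_row(sub, 1)
    return total

A = [[1, 2, 3], [4, 5, 6], [7, 8, 10]]
print([det_about_row(A, i) for i in (1, 2, 3)])   # [-3, -3, -3]: any row works
```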


We can also obtain a formula that computes a determinant by expansion about a column, but this will be simpler if we first prove a result about the interplay of determinants and transposes. Notice how the following proof makes use of the ability to compute a determinant by expanding about any row.


Now we can easily get the result that a determinant can be computed by expansion about any column as well.


That the determinant of an \(n\times n\) matrix can be computed in \(2n\) different (albeit similar) ways is nothing short of remarkable. For the doubters among us, we will do an example, computing the determinant of a \(4\times 4\) matrix in two different ways.
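As a smaller stand-in for that example, the sketch below computes a single \(3\times 3\) determinant all six ways, three row expansions and three column expansions, and once more for the transpose (helper name hypothetical, 0-indexed):

```python
def minor_det(M, i, j):
    """det of the 2 x 2 submatrix M(i|j) of a 3 x 3 matrix M (0-indexed)."""
    sub = [[M[r][c] for c in range(3) if c != j]
           for r in range(3) if r != i]
    return sub[0][0] * sub[1][1] - sub[0][1] * sub[1][0]

A = [[1, 2, 3], [4, 5, 6], [7, 8, 10]]
by_rows = [sum((-1) ** (i + j) * A[i][j] * minor_det(A, i, j)
               for j in range(3)) for i in range(3)]    # expand about row i
by_cols = [sum((-1) ** (i + j) * A[i][j] * minor_det(A, i, j)
               for i in range(3)) for j in range(3)]    # expand about column j
AT = [list(row) for row in zip(*A)]                     # the transpose of A
det_AT = sum((-1) ** j * AT[0][j] * minor_det(AT, 0, j) for j in range(3))
print(by_rows, by_cols, det_AT)   # six copies of -3, and -3 for the transpose
```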

When a matrix has all zeros above (or below) the diagonal, exploiting the zeros by expanding about the proper row or column makes computing a determinant insanely easy.
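For instance, expanding repeatedly about the first column of an upper triangular matrix peels off the diagonal entries one at a time, \begin{equation*} \begin{vmatrix} 2 & 5 & 7\\ 0 & 3 & 1\\ 0 & 0 & 4 \end{vmatrix} =2\begin{vmatrix} 3 & 1\\ 0 & 4 \end{vmatrix} =2\left(3(4)-1(0)\right) =24\text{,} \end{equation*} which is just the product of the diagonal entries.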

When you consult other texts in your study of determinants, you may run into the terms minor and cofactor, especially in a discussion centered on expansion about rows and columns. We have chosen not to make these definitions formally since we have been able to get along without them. However, informally, a minor is a determinant of a submatrix, specifically \(\detname{\submatrix{A}{i}{j}}\) and is usually referenced as the minor of the matrix entry \(\matrixentry{A}{ij}\text{.}\) A cofactor is a signed minor, specifically the cofactor of the matrix entry \(\matrixentry{A}{ij}\) is \((-1)^{i+j}\detname{\submatrix{A}{i}{j}}\text{.}\)
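In code these informal definitions are one-liners. Here is a sketch (1-indexed to match \(\matrixentry{A}{ij}\text{;}\) the recursive det is repeated from the earlier sketch so this one stands alone):

```python
def det(A):
    """First-row expansion, as in the earlier sketch."""
    if len(A) == 1:
        return A[0][0]
    return sum((-1) ** j * A[0][j] * det([r[:j] + r[j + 1:] for r in A[1:]])
               for j in range(len(A)))

def minor(A, i, j):
    """The minor of [A]_{ij}: the determinant of the submatrix A(i|j)."""
    return det([r[:j - 1] + r[j:] for k, r in enumerate(A, start=1) if k != i])

def cofactor(A, i, j):
    """The cofactor of [A]_{ij}: the signed minor."""
    return (-1) ** (i + j) * minor(A, i, j)

A = [[1, 2, 3], [4, 5, 6], [7, 8, 10]]
print(minor(A, 2, 2), cofactor(A, 2, 2))   # -11 and -11, since (-1)^{2+2} = 1
```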

Subsection Reading Questions

1

Construct the elementary matrix that will effect the row operation \(\rowopadd{-6}{2}{3}\) on a \(4\times 7\) matrix.

2

Compute the determinant of the matrix \begin{equation*} \begin{bmatrix} 2&3&-1\\ 3&8&2\\ 4&-1&-3 \end{bmatrix}\text{.} \end{equation*}

3

Compute the determinant of the matrix \begin{equation*} \begin{bmatrix} 3 & 9 & -2 & 4 & 2 \\ 0 & 1 & 4 & -2 & 7 \\ 0 & 0 & -2 & 5 & 2 \\ 0 & 0 & 0 & -1 & 6 \\ 0 & 0 & 0 & 0 & 4 \end{bmatrix}\text{.} \end{equation*}

Subsection Exercises

C21

Doing the computations by hand, find the determinant of the matrix below. \begin{equation*} \begin{bmatrix} 1 & 3\\ 6 & 2 \end{bmatrix} \end{equation*}

C22

Doing the computations by hand, find the determinant of the matrix below. \begin{equation*} \begin{bmatrix} 1 & 3\\ 2 & 6 \end{bmatrix} \end{equation*}

C23

Doing the computations by hand, find the determinant of the matrix below. \begin{equation*} \begin{bmatrix} 1 & 3 & 2 \\ 4 & 1 & 3 \\ 1 & 0 & 1 \end{bmatrix} \end{equation*}

C24

Doing the computations by hand, find the determinant of the matrix below. \begin{equation*} \begin{bmatrix} -2 & 3 & -2 \\ -4 & -2 & 1 \\ 2 & 4 & 2 \end{bmatrix} \end{equation*}

C25

Doing the computations by hand, find the determinant of the matrix below. \begin{equation*} \begin{bmatrix} 3 & -1 & 4\\ 2 & 5 & 1\\ 2 & 0 & 6 \end{bmatrix} \end{equation*}

C26

Doing the computations by hand, find the determinant of the matrix \(A\text{.}\) \begin{equation*} A= \begin{bmatrix} 2 & 0 & 3 & 2 \\ 5 & 1 & 2 & 4 \\ 3 & 0 & 1 & 2 \\ 5 & 3 & 2 & 1 \end{bmatrix} \end{equation*}

C27

Doing the computations by hand, find the determinant of the matrix \(A\text{.}\) \begin{equation*} A= \begin{bmatrix} 1 & 0 & 1 & 1\\ 2 & 2 & -1 & 1\\ 2 & 1 & 3 & 0\\ 1 & 1 & 0 & 1 \end{bmatrix} \end{equation*}

C28

Doing the computations by hand, find the determinant of the matrix \(A\text{.}\) \begin{equation*} A= \begin{bmatrix} 1 & 0 & 1 & 1\\ 2 & -1 & -1 & 1\\ 2 & 5 & 3 & 0\\ 1 & -1 & 0 & 1 \end{bmatrix} \end{equation*}

C29

Doing the computations by hand, find the determinant of the matrix \(A\text{.}\) \begin{equation*} A= \begin{bmatrix} 2 & 3 & 0 & 2 & 1\\ 0 & 1 & 1 & 1 & 2\\ 0 & 0 & 1 & 2 & 3\\ 0 & 1 & 2 & 1 & 0\\ 0 & 0 & 0 & 1 & 2 \end{bmatrix} \end{equation*}

C30

Doing the computations by hand, find the determinant of the matrix \(A\text{.}\) \begin{equation*} A= \begin{bmatrix} 2 & 1 & 1 & 0 & 1\\ 2 & 1 & 2 & -1 & 1\\ 0 & 0 & 1 & 2 & 0\\ 1 & 0 & 3 & 1 & 1\\ 2 & 1 & 1 & 2 & 1 \end{bmatrix} \end{equation*}

M10

Find a value of \(k\) so that the matrix \begin{equation*} A = \begin{bmatrix} 2 & 4 \\ 3 & k\end{bmatrix} \end{equation*} has \(\det(A) = 0\text{,}\) or explain why it is not possible.

M11

Find a value of \(k\) so that the matrix \begin{equation*} A = \begin{bmatrix} 1 & 2 & 1\\ 2 & 0 & 1 \\ 2 & 3 & k \end{bmatrix} \end{equation*} has \(\det(A) = 0\text{,}\) or explain why it is not possible.

M15

Given the matrix \begin{equation*} B=\begin{bmatrix} 2 - x & 1 \\ 4 & 2 - x \end{bmatrix}\text{,} \end{equation*} find all values of \(x\) that are solutions of \(\det(B) = 0\text{.}\)

M16

Given the matrix \begin{equation*} B =\begin{bmatrix} 4 - x & -4 & -4\\ 2 & -2 - x & -4\\ 3 & -3 & -4 -x \end{bmatrix}\text{,} \end{equation*} find all values of \(x\) that are solutions of \(\det(B) = 0\text{.}\)

M30

The two matrices below are row-equivalent. How would you confirm this? Since the matrices are row-equivalent, there is a sequence of row operations that converts \(X\) into \(Y\text{;}\) the corresponding product of elementary matrices, \(M\text{,}\) satisfies \(MX=Y\text{.}\) Find \(M\text{.}\) (This approach could be used to find the “9 scalars” of the very early Exercise RREF.M40.)

Hint: Compute the extended echelon form for both matrices, and then use the property from Theorem PEEF that reads \(B=JA\text{,}\) where \(A\) is the original matrix, \(B\) is the echelon form of the matrix and \(J\) is a nonsingular matrix obtained from extended echelon form. Combine the two square matrices in the right way to obtain \(M\text{.}\) \begin{align*} X&=\begin{bmatrix} -1 & 3 & 1 & -2 & 8 \\ -1 & 3 & 2 & -1 & 4 \\ 2 & -4 & -3 & 2 & -7 \\ -2 & 5 & 3 & -2 & 8 \end{bmatrix} & Y&=\begin{bmatrix} -1 & 2 & 2 & 0 & 0 \\ -3 & 6 & 8 & -1 & 1 \\ 0 & 1 & -2 & -2 & 9 \\ -1 & 4 & -3 & -3 & 16 \end{bmatrix}\text{.} \end{align*}