First, a slight detour, as we introduce elementary matrices, which will bring us
back to the beginning of the course and our old friend, row operations.
Elementary matrices are very simple, as you might have suspected from their name. Their purpose is to effect row operations (Definition RO) on a matrix through matrix multiplication (Definition MM). Their definitions look more complicated than they really are, so be sure to read ahead after you read the definition for some explanations and an example.
Definition ELEM
Elementary Matrices
For i ≠ j, {E}_{i,j} is the square matrix of size n with entries
{\left [{E}_{i,j}\right ]}_{kℓ} = \begin{cases} 0 & k ≠ i,\ k ≠ j,\ ℓ ≠ k\\ 1 & k ≠ i,\ k ≠ j,\ ℓ = k\\ 0 & k = i,\ ℓ ≠ j\\ 1 & k = i,\ ℓ = j\\ 0 & k = j,\ ℓ ≠ i\\ 1 & k = j,\ ℓ = i \end{cases}
For α ≠ 0, {E}_{i}\left (α\right ) is the square matrix of size n with entries
{\left [{E}_{i}\left (α\right )\right ]}_{kℓ} = \begin{cases} 0 & k ≠ i,\ ℓ ≠ k\\ 1 & k ≠ i,\ ℓ = k\\ α & k = i,\ ℓ = i \end{cases}
For i ≠ j, {E}_{i,j}\left (α\right ) is the square matrix of size n with entries
{\left [{E}_{i,j}\left (α\right )\right ]}_{kℓ} = \begin{cases} 0 & k ≠ j,\ ℓ ≠ k\\ 1 & k ≠ j,\ ℓ = k\\ 0 & k = j,\ ℓ ≠ i,\ ℓ ≠ j\\ 1 & k = j,\ ℓ = j\\ α & k = j,\ ℓ = i \end{cases}
(This definition contains Notation ELEM.)
Again, these matrices are not as complicated as they appear, since they are mostly perturbations of the n × n identity matrix (Definition IM). {E}_{i,j} is the identity matrix with rows (or columns) i and j trading places, {E}_{i}\left (α\right ) is the identity matrix where the diagonal entry in row i and column i has been replaced by α, and {E}_{i,j}\left (α\right ) is the identity matrix where the entry in row j and column i has been replaced by α. (Yes, those subscripts look backwards in the description of {E}_{i,j}\left (α\right )). Notice that our notation makes no reference to the size of the elementary matrix, since this will always be apparent from the context, or unimportant.
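As a concrete illustration (a minimal sketch in Python with NumPy; the helper names below are ours and not part of the text), each elementary matrix can be built by perturbing an identity matrix exactly as just described:

import numpy as np

def elem_swap(n, i, j):
    """E_{i,j}: the n x n identity with rows i and j swapped (1-based indices)."""
    E = np.eye(n)
    E[[i - 1, j - 1]] = E[[j - 1, i - 1]]
    return E

def elem_scale(n, i, alpha):
    """E_i(alpha): the n x n identity with the diagonal entry in row i replaced by alpha."""
    E = np.eye(n)
    E[i - 1, i - 1] = alpha
    return E

def elem_add(n, i, j, alpha):
    """E_{i,j}(alpha): the n x n identity with alpha placed in row j, column i."""
    E = np.eye(n)
    E[j - 1, i - 1] = alpha
    return E

print(elem_swap(3, 1, 3))   # the 3 x 3 identity with rows 1 and 3 traded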
The raison d’être for elementary matrices is to “do” row operations on matrices with matrix multiplication. So here is an example where we will both see some elementary matrices and see how they can accomplish row operations.
Example EMRO
Elementary matrices and row operations
We will perform a sequence of row operations (Definition RO) on the
3 × 4 matrix
A,
while also multiplying the matrix on the left by the appropriate
3 × 3
elementary matrix.
A = \begin{bmatrix} 2 & 1 & 3 & 1\\ 1 & 3 & 2 & 4\\ 5 & 0 & 3 & 1 \end{bmatrix}
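A quick numerical check (again our own sketch with NumPy; the particular row operations chosen below are illustrative, not necessarily the sequence used in this example) shows left multiplication by each kind of elementary matrix carrying out the corresponding row operation on A:

import numpy as np

A = np.array([[2, 1, 3, 1],
              [1, 3, 2, 4],
              [5, 0, 3, 1]], dtype=float)

# E_{1,3}: identity with rows 1 and 3 swapped
E_13 = np.eye(3)
E_13[[0, 2]] = E_13[[2, 0]]
print(E_13 @ A)      # rows 1 and 3 of A trade places

# E_2(7): identity with the (2,2) entry replaced by 7
E_2 = np.eye(3)
E_2[1, 1] = 7.0
print(E_2 @ A)       # row 2 of A is multiplied by 7

# E_{3,1}(-2): identity with -2 placed in row 1, column 3
E_31 = np.eye(3)
E_31[0, 2] = -2.0
print(E_31 @ A)      # -2 times row 3 of A is added to row 1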
The next three theorems establish that each elementary matrix effects a row operation via matrix multiplication.
Theorem EMDRO
Elementary Matrices Do Row Operations
Suppose that A is an m × n matrix, and B is a matrix of the same size that is obtained from A by a single row operation (Definition RO). Then there is an elementary matrix of size m that will convert A to B via matrix multiplication on the left. More precisely,
1. If the row operation swaps rows i and j, then B = {E}_{i,j}A.
2. If the row operation multiplies row i by α, then B = {E}_{i}\left (α\right )A.
3. If the row operation multiplies row i by α and adds the result to row j, then B = {E}_{i,j}\left (α\right )A. □
Proof In each of the three conclusions, performing the row operation on A will create the matrix B where only one or two rows will have changed. So we will establish the equality of the matrix entries row by row, first for the unchanged rows, then for the changed rows, showing in each case that the result of the matrix product is the same as the result of the row operation. Here we go.
Row k of the product {E}_{i,j}A, where k\mathrel{≠}i, k\mathrel{≠}j, is unchanged from A,
Row i of the product {E}_{i,j}A is row j of A,
Row j of the product {E}_{i,j}A is row i of A,
So the matrix product {E}_{i,j}A is the same as the row operation that swaps rows i and j.
Row k of the product {E}_{i}\left (α\right )A, where k\mathrel{≠}i, is unchanged from A,
Row i of the product {E}_{i}\left (α\right )A is α times row i of A,
So the matrix product {E}_{i}\left (α\right )A is the same as the row operation that multiplies row i by α.
Row k of the product {E}_{i,j}\left (α\right )A, where k\mathrel{≠}j, is unchanged from A,
Row j of the product {E}_{i,j}\left (α\right )A is α times row i of A added to row j of A,
So the matrix product {E}_{i,j}\left (α\right )A is the same as the row operation that multiplies row i by α and adds the result to row j. ■
Later in this section we will need two facts about elementary matrices.
Theorem EMN
Elementary Matrices are Nonsingular
If E is an elementary
matrix, then E is
nonsingular. □
Proof We show that we can row-reduce each elementary matrix to the identity matrix. Given an elementary matrix of the form {E}_{i,j}, perform the row operation that swaps row j with row i. Given an elementary matrix of the form {E}_{i}\left (α\right ), with α\mathrel{≠}0, perform the row operation that multiplies row i by 1∕α. Given an elementary matrix of the form {E}_{i,j}\left (α\right ), with α\mathrel{≠}0, perform the row operation that multiplies row i by − α and adds it to row j. In each case, the result of the single row operation is the identity matrix. So each elementary matrix is row-equivalent to the identity matrix, and by Theorem NMRRI is nonsingular. ■
Notice that we have now made use of the nonzero restriction on α in the definition of {E}_{i}\left (α\right ). One more key property of elementary matrices.
Theorem NMPEM
Nonsingular Matrices are Products of Elementary Matrices
Suppose that A
is a nonsingular matrix. Then there exist elementary matrices {E}_{1}, {E}_{2}, {E}_{3}, …, {E}_{t} so that A = {E}_{1}{E}_{2}{E}_{3}\cdots {E}_{t}.
□
Proof Since A is nonsingular, it is row-equivalent to the identity matrix by Theorem NMRRI, so there is a sequence of t row operations that converts I to A. For each of these row operations, form the associated elementary matrix from Theorem EMDRO and denote these matrices by {E}_{1}, {E}_{2}, {E}_{3}, …, {E}_{t}. Applying the first row operation to I yields the matrix {E}_{1}I. The second row operation yields {E}_{2}({E}_{1}I), and the third row operation creates {E}_{3}{E}_{2}{E}_{1}I. The result of the full sequence of t row operations will yield A, so
A = {E}_{t}\cdots {E}_{3}{E}_{2}{E}_{1}I = {E}_{t}\cdots {E}_{3}{E}_{2}{E}_{1}
Other than the cosmetic matter of re-indexing these elementary matrices in the opposite order, this is the desired result. ■
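For instance (a small illustration of our own, in the notation of Definition ELEM), the reader can check directly that
\begin{bmatrix} 1 & 2\\ 3 & 4 \end{bmatrix} = {E}_{1,2}\left (3\right )\,{E}_{2}\left (−2\right )\,{E}_{2,1}\left (2\right ) = \begin{bmatrix} 1 & 0\\ 3 & 1 \end{bmatrix}\begin{bmatrix} 1 & 0\\ 0 & −2 \end{bmatrix}\begin{bmatrix} 1 & 2\\ 0 & 1 \end{bmatrix}
so this particular nonsingular matrix is a product of three elementary matrices.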
We’ll now turn to the definition of a determinant and do some sample computations. The definition of the determinant function is recursive, that is, the determinant of a large matrix is defined in terms of the determinant of smaller matrices. To this end, we will make a few definitions.
Definition SM
SubMatrix
Suppose that A is
an m × n matrix. Then
the submatrix A\left (i|j\right )
is the (m − 1) × (n − 1) matrix
obtained from A
by removing row i
and column j.
(This definition contains Notation SM.) △
Example SS
Some submatrices
For the matrix
A = \begin{bmatrix} 1 & −2 & 3 & 9\\ 4 & −2 & 0 & 1\\ 3 & 5 & 2 & 1 \end{bmatrix}
we have the submatrices
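A\left (2|3\right ) = \begin{bmatrix} 1 & −2 & 9\\ 3 & 5 & 1 \end{bmatrix}\qquad A\left (3|1\right ) = \begin{bmatrix} −2 & 3 & 9\\ −2 & 0 & 1 \end{bmatrix}
(to pick just two of the twelve possibilities).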
Definition DM
Determinant of a Matrix
Suppose A is a square matrix. Then its determinant, \mathop{ det} \left (A\right ) = \left \vert A\right \vert , is an element of ℂ defined recursively by:
1. If A is a 1 × 1 matrix, then \mathop{ det} \left (A\right ) ={ \left [A\right ]}_{11}.
2. If A is a matrix of size n with n ≥ 2, then
\mathop{ det} \left (A\right ) ={ \left [A\right ]}_{11}\mathop{ det} \left (A\left (1|1\right )\right ) −{\left [A\right ]}_{12}\mathop{ det} \left (A\left (1|2\right )\right ) +{ \left [A\right ]}_{13}\mathop{ det} \left (A\left (1|3\right )\right ) −\cdots + {(−1)}^{n+1}{\left [A\right ]}_{1n}\mathop{ det} \left (A\left (1|n\right )\right )
(This definition contains Notation DM.) △
So to compute the determinant of a 5 × 5 matrix we must build 5 submatrices, each of size 4. To compute the determinants of each of the 4 × 4 matrices we need to create 4 submatrices each, these now of size 3, and so on. To compute the determinant of a 10 × 10 matrix would require computing the determinant of 10! = 10 × 9 × 8 × 7 × 6 × 5 × 4 × 3 × 2 = 3,628,800 1 × 1 matrices. Fortunately there are better ways. However, this does suggest an excellent computer programming exercise to write a recursive procedure to compute a determinant.
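As a rough sketch of that programming exercise (our own, in plain Python, not code from the text), a recursive procedure that follows Definition DM literally might look like this:

def det(A):
    """Determinant by expansion about the first row, following Definition DM.
    A is a square matrix given as a list of lists.  Wildly inefficient, but a
    direct transcription of the recursive definition."""
    n = len(A)
    if n == 1:
        return A[0][0]
    total = 0
    for j in range(n):
        # submatrix A(1|j+1): delete row 1 and column j+1
        sub = [row[:j] + row[j + 1:] for row in A[1:]]
        total += (-1) ** j * A[0][j] * det(sub)
    return total

print(det([[1, 2], [3, 4]]))   # prints -2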
Let’s compute the determinant of a reasonably sized matrix by hand.
Example D33M
Determinant of a 3 × 3
matrix
Suppose that we have the 3 × 3
matrix
A = \begin{bmatrix} 3 & 2 & −1\\ 4 & 1 & 6\\ −3 & −1 & 2 \end{bmatrix}
Then
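\det\left (A\right ) = 3\det\begin{bmatrix} 1 & 6\\ −1 & 2 \end{bmatrix} − 2\det\begin{bmatrix} 4 & 6\\ −3 & 2 \end{bmatrix} + (−1)\det\begin{bmatrix} 4 & 1\\ −3 & −1 \end{bmatrix} = 3(8) − 2(26) + (−1)(−1) = 24 − 52 + 1 = −27
(here each 2 × 2 determinant has itself been expanded down to 1 × 1 matrices, as the definition requires).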
In practice it is a bit silly to decompose a 2 × 2 matrix down into a couple of 1 × 1 matrices and then compute the exceedingly easy determinant of these puny matrices. So here is a simple theorem.
Theorem DMST
Determinant of Matrices of Size Two
Suppose that A = \begin{bmatrix} a & b\\ c & d \end{bmatrix}. Then \mathop{ det} \left (A\right ) = ad − bc. □
Proof Applying Definition DM,
\left \vert \begin{matrix} a & b\\ c & d \end{matrix}\right \vert = a\left \vert d\right \vert − b\left \vert c\right \vert = ad − bc ■
Do you recall seeing the expression ad − bc before? (Hint: Theorem TTMI)
There are a variety of ways to compute the determinant. We will establish first that we can choose to mimic our definition of the determinant, but by using matrix entries and submatrices based on a row other than the first one.
Theorem DER
Determinant Expansion about Rows
Suppose that A is a square matrix of size n. Then, for 1 ≤ i ≤ n,
\mathop{ det} \left (A\right ) = {(−1)}^{i+1}{\left [A\right ]}_{i1}\mathop{ det} \left (A\left (i|1\right )\right ) + {(−1)}^{i+2}{\left [A\right ]}_{i2}\mathop{ det} \left (A\left (i|2\right )\right ) + \cdots + {(−1)}^{i+n}{\left [A\right ]}_{in}\mathop{ det} \left (A\left (i|n\right )\right )
which is known as expansion about row i. □
Proof First, the statement of the theorem coincides with Definition DM when i = 1, so throughout, we need only consider i > 1.
Given the recursive definition of the determinant, it should be no surprise that we will use induction for this proof (Technique I). When n = 1, there is nothing to prove since there is but one row. When n = 2, we just examine expansion about the second row,
So the theorem is true for matrices of size n = 1 and n = 2. Now assume the result is true for all matrices of size n − 1 as we derive an expression for expansion about row i for a matrix of size n. We will abuse our notation for a submatrix slightly, so A\left ({i}_{1},{i}_{2}|{j}_{1},{j}_{2}\right ) will denote the matrix formed by removing rows {i}_{1} and {i}_{2}, along with removing columns {j}_{1} and {j}_{2}. Also, as we take a determinant of a submatrix, we will need to “jump up” the index of summation partway through as we “skip over” a missing column. To do this smoothly we will set
{ϵ}_{ℓj} = \begin{cases} 0 & ℓ < j\\ 1 & ℓ > j \end{cases}
Now,
We can also obtain a formula that computes a determinant by expansion about a column, but this will be simpler if we first prove a result about the interplay of determinants and transposes. Notice how the following proof makes use of the ability to compute a determinant by expanding about any row.
Theorem DT
Determinant of the Transpose
Suppose that A is a
square matrix. Then \mathop{ det} \left ({A}^{t}\right ) =\mathop{ det} \left (A\right ).
□
Proof With our definition of the determinant (Definition DM) and theorems like Theorem DER, using induction (Technique I) is a natural approach to proving properties of determinants. And so it is here. Let n be the size of the matrix A, and we will use induction on n.
For n = 1, the transpose of a matrix is identical to the original matrix, so vacuously, the determinants are equal.
Now assume the result is true for matrices of size n − 1. Then,
Now we can easily get the result that a determinant can be computed by expansion about any column as well.
Theorem DEC
Determinant Expansion about Columns
Suppose that A is a square matrix of size n. Then, for 1 ≤ j ≤ n,
\mathop{ det} \left (A\right ) = {(−1)}^{1+j}{\left [A\right ]}_{1j}\mathop{ det} \left (A\left (1|j\right )\right ) + {(−1)}^{2+j}{\left [A\right ]}_{2j}\mathop{ det} \left (A\left (2|j\right )\right ) + \cdots + {(−1)}^{n+j}{\left [A\right ]}_{nj}\mathop{ det} \left (A\left (n|j\right )\right )
which is known as expansion about column j. □
Proof Apply Theorem DT, so that \mathop{ det} \left (A\right ) =\mathop{ det} \left ({A}^{t}\right ), and then expand \mathop{ det} \left ({A}^{t}\right ) about row j with Theorem DER. The entries of row j of {A}^{t} are the entries of column j of A, and the submatrices that arise are the transposes of the submatrices A\left (k|j\right ), so a second application of Theorem DT yields the formula above. ■
That the determinant of an n × n matrix can be computed in 2n different (albeit similar) ways is nothing short of remarkable. For the doubters among us, we will do an example, computing the determinant of a 4 × 4 matrix in two different ways.
Example TCSD
Two computations, same determinant
Let
A = \begin{bmatrix} −2 & 3 & 0 & 1\\ 9 & −2 & 0 & 1\\ 1 & 3 & −2 & −1\\ 4 & 1 & 2 & 6 \end{bmatrix}
Then expanding about the fourth row (Theorem DER with i = 4) yields,
while expanding about column 3 (Theorem DEC with j = 3) gives
Notice how much easier the second computation was. By choosing to expand about the third column, we have two entries that are zero, so two 3 × 3 determinants need not be computed at all! ⊠
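As a numerical cross-check (our own sketch with NumPy, not part of the text), the helper below expands a determinant about any chosen row, and, through Theorem DT, about any column via the transpose; for this matrix all of the results agree:

import numpy as np

def det_by_row(A, i):
    """Expand det(A) about row i (1-based), per Theorem DER."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    if n == 1:
        return A[0, 0]
    total = 0.0
    for j in range(1, n + 1):
        sub = np.delete(np.delete(A, i - 1, axis=0), j - 1, axis=1)
        total += (-1) ** (i + j) * A[i - 1, j - 1] * det_by_row(sub, 1)
    return total

A = [[-2, 3, 0, 1],
     [9, -2, 0, 1],
     [1, 3, -2, -1],
     [4, 1, 2, 6]]

print(det_by_row(A, 4))                          # expansion about row 4
print(det_by_row(np.transpose(A), 3))            # column 3 of A = row 3 of A^t
print(np.linalg.det(np.array(A, dtype=float)))   # library value for comparison
# all three print (approximately) 92.0 for this matrix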
When a matrix has all zeros above (or below) the diagonal, exploiting the zeros by expanding about the proper row or column makes computing a determinant insanely easy.
Example DUTM
Determinant of an upper triangular matrix
Suppose that
T = \begin{bmatrix} 2 & 3 & −1 & 3 & 3\\ 0 & −1 & 5 & 2 & −1\\ 0 & 0 & 3 & 9 & 2\\ 0 & 0 & 0 & −1 & 3\\ 0 & 0 & 0 & 0 & 5 \end{bmatrix}
We will compute the determinant of this 5 × 5 matrix by consistently expanding about the first column for each submatrix that arises and does not have a zero entry multiplying it.
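\det\left (T\right ) = 2\det\begin{bmatrix} −1 & 5 & 2 & −1\\ 0 & 3 & 9 & 2\\ 0 & 0 & −1 & 3\\ 0 & 0 & 0 & 5 \end{bmatrix} = 2(−1)\det\begin{bmatrix} 3 & 9 & 2\\ 0 & −1 & 3\\ 0 & 0 & 5 \end{bmatrix} = 2(−1)(3)\det\begin{bmatrix} −1 & 3\\ 0 & 5 \end{bmatrix} = 2(−1)(3)(−1)(5) = 30
Notice that for this triangular matrix the determinant is simply the product of the diagonal entries.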
If you consult other texts in your study of determinants, you may run into the terms “minor” and “cofactor,” especially in a discussion centered on expansion about rows and columns. We’ve chosen not to make these definitions formally since we’ve been able to get along without them. However, informally, a minor is a determinant of a submatrix, specifically \mathop{ det} \left (A\left (i|j\right )\right ) and is usually referenced as the minor of {\left [A\right ]}_{ij}. A cofactor is a signed minor, specifically the cofactor of {\left [A\right ]}_{ij} is {(−1)}^{i+j}\mathop{ det} \left (A\left (i|j\right )\right ).
\begin{bmatrix} 2 & 3 & −1\\ 3 & 8 & 2\\ 4 & −1 & −3 \end{bmatrix}
\begin{bmatrix} 3 & 9 & −2 & 4 & 2\\ 0 & 1 & 4 & −2 & 7\\ 0 & 0 & −2 & 5 & 2\\ 0 & 0 & 0 & −1 & 6\\ 0 & 0 & 0 & 0 & 4 \end{bmatrix}
C21 Doing the computations by hand, find the determinant of the matrix below.
\begin{bmatrix} 1 & 3\\ 6 & 2 \end{bmatrix}
Contributed by Chris Black Solution [1221]
C22 Doing the computations by hand, find the determinant of the matrix below.
\begin{bmatrix} 1 & 3\\ 2 & 6 \end{bmatrix}
Contributed by Chris Black Solution [1221]
C23 Doing the computations by hand, find the determinant of the matrix below.
\begin{bmatrix} 1 & 3 & 2\\ 4 & 1 & 3\\ 1 & 0 & 1 \end{bmatrix}
Contributed by Chris Black Solution [1222]
C24 Doing the computations by hand, find the determinant of the matrix below.
\begin{bmatrix} −2 & 3 & −2\\ −4 & −2 & 1\\ 2 & 4 & 2 \end{bmatrix}
Contributed by Robert Beezer Solution [1222]
C25 Doing the computations by hand, find the determinant of the matrix below.
\begin{bmatrix} 3 & −1 & 4\\ 2 & 5 & 1\\ 2 & 0 & 6 \end{bmatrix}
Contributed by Robert Beezer Solution [1222]
C26 Doing the computations by hand, find the determinant of the matrix A.
A = \begin{bmatrix} 2 & 0 & 3 & 2\\ 5 & 1 & 2 & 4\\ 3 & 0 & 1 & 2\\ 5 & 3 & 2 & 1 \end{bmatrix}
Contributed by Robert Beezer Solution [1223]
C27 Doing the computations by hand, find the determinant of the matrix A.
A = \begin{bmatrix} 1 & 0 & 1 & 1\\ 2 & 2 & −1 & 1\\ 2 & 1 & 3 & 0\\ 1 & 1 & 0 & 1 \end{bmatrix}
Contributed by Chris Black Solution [1224]
C28 Doing the computations by hand, find the determinant of the matrix A.
A = \begin{bmatrix} 1 & 0 & 1 & 1\\ 2 & −1 & −1 & 1\\ 2 & 5 & 3 & 0\\ 1 & −1 & 0 & 1 \end{bmatrix}
Contributed by Chris Black Solution [1224]
C29 Doing the computations by hand, find the determinant of the matrix A.
A = \begin{bmatrix} 2 & 3 & 0 & 2 & 1\\ 0 & 1 & 1 & 1 & 2\\ 0 & 0 & 1 & 2 & 3\\ 0 & 1 & 2 & 1 & 0\\ 0 & 0 & 0 & 1 & 2 \end{bmatrix}
Contributed by Chris Black Solution [1225]
C30 Doing the computations by hand, find the determinant of the matrix A.
A = \begin{bmatrix} 2 & 1 & 1 & 0 & 1\\ 2 & 1 & 2 & −1 & 1\\ 0 & 0 & 1 & 2 & 0\\ 1 & 0 & 3 & 1 & 1\\ 2 & 1 & 1 & 2 & 1 \end{bmatrix}
Contributed by Chris Black Solution [1225]
M10 Find a value of k so that the matrix A = \begin{bmatrix} 2 & 4\\ 3 & k \end{bmatrix} has \mathop{ det}(A) = 0, or explain why it is not possible.
Contributed by Chris Black Solution [1226]
M11 Find a value of k so that the matrix A = \begin{bmatrix} 1 & 2 & 1\\ 2 & 0 & 1\\ 2 & 3 & k \end{bmatrix} has \mathop{ det}(A) = 0, or explain why it is not possible.
Contributed by Chris Black Solution [1227]
M15 Given the matrix B = \begin{bmatrix} 2 − x & 1\\ 4 & 2 − x \end{bmatrix}, find all values of x that are solutions of \mathop{ det}(B) = 0.
Contributed by Chris Black Solution [1228]
M16 Given the matrix B = \begin{bmatrix} 4 − x & −4 & −4\\ 2 & −2 − x & −4\\ 3 & −3 & −4 − x \end{bmatrix}, find all values of x that are solutions of \mathop{ det}(B) = 0.
Contributed by Chris Black Solution [1228]
C21 Contributed by Chris Black Statement [1215]
Using the formula in Theorem DMST we have
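\det\begin{bmatrix} 1 & 3\\ 6 & 2 \end{bmatrix} = (1)(2) − (3)(6) = 2 − 18 = −16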
C22 Contributed by Chris Black Statement [1215]
Using the formula in Theorem DMST we have
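\det\begin{bmatrix} 1 & 3\\ 2 & 6 \end{bmatrix} = (1)(6) − (3)(2) = 6 − 6 = 0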
C23 Contributed by Chris Black Statement [1215]
We can compute the determinant by expanding about any row or
column; the most efficient ones to choose are either the second
column or the third row. In any case, the determinant will be
− 4.
C24 Contributed by Robert Beezer Statement [1216]
We’ll expand about the first row since there are no zeros to exploit,
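(−2)\det\begin{bmatrix} −2 & 1\\ 4 & 2 \end{bmatrix} − 3\det\begin{bmatrix} −4 & 1\\ 2 & 2 \end{bmatrix} + (−2)\det\begin{bmatrix} −4 & −2\\ 2 & 4 \end{bmatrix} = (−2)(−8) − 3(−10) + (−2)(−12) = 16 + 30 + 24 = 70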
C25 Contributed by Robert Beezer Statement [1216]
We can expand about any row or column, so the zero entry in the middle of the
last row is attractive. Let’s expand about column 2. By Theorem DER and
Theorem DEC you will get the same result by expanding about a different row or
column. We will use Theorem DMST twice.
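(−1)^{1+2}(−1)\det\begin{bmatrix} 2 & 1\\ 2 & 6 \end{bmatrix} + (−1)^{2+2}(5)\det\begin{bmatrix} 3 & 4\\ 2 & 6 \end{bmatrix} + (−1)^{3+2}(0)\det\begin{bmatrix} 3 & 4\\ 2 & 1 \end{bmatrix} = (1)(10) + (5)(10) + 0 = 60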
C26 Contributed by Robert Beezer Statement [1217]
With two zeros in column 2, we choose to expand about that column
(Theorem DEC),
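\det\left (A\right ) = (−1)^{2+2}(1)\det\begin{bmatrix} 2 & 3 & 2\\ 3 & 1 & 2\\ 5 & 2 & 1 \end{bmatrix} + (−1)^{4+2}(3)\det\begin{bmatrix} 2 & 3 & 2\\ 5 & 2 & 4\\ 3 & 1 & 2 \end{bmatrix} = (1)(17) + (3)(4) = 29
(the two terms coming from the zero entries in column 2 have been omitted).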
C27 Contributed by Chris Black Statement [1217]
Expanding on the first row, we have
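\det\left (A\right ) = (1)\det\begin{bmatrix} 2 & −1 & 1\\ 1 & 3 & 0\\ 1 & 0 & 1 \end{bmatrix} − (0)\det\begin{bmatrix} 2 & −1 & 1\\ 2 & 3 & 0\\ 1 & 0 & 1 \end{bmatrix} + (1)\det\begin{bmatrix} 2 & 2 & 1\\ 2 & 1 & 0\\ 1 & 1 & 1 \end{bmatrix} − (1)\det\begin{bmatrix} 2 & 2 & −1\\ 2 & 1 & 3\\ 1 & 1 & 0 \end{bmatrix} = 4 + 0 + (−1) − (−1) = 4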
C28 Contributed by Chris Black Statement [1218]
Expanding along the first row, we have
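\det\left (A\right ) = (1)\det\begin{bmatrix} −1 & −1 & 1\\ 5 & 3 & 0\\ −1 & 0 & 1 \end{bmatrix} − (0)\det\begin{bmatrix} 2 & −1 & 1\\ 2 & 3 & 0\\ 1 & 0 & 1 \end{bmatrix} + (1)\det\begin{bmatrix} 2 & −1 & 1\\ 2 & 5 & 0\\ 1 & −1 & 1 \end{bmatrix} − (1)\det\begin{bmatrix} 2 & −1 & −1\\ 2 & 5 & 3\\ 1 & −1 & 0 \end{bmatrix} = 5 + 0 + 5 − 10 = 0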
C29 Contributed by Chris Black Statement [1218]
Expanding along the first column, we have
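\det\left (A\right ) = (2)\det\begin{bmatrix} 1 & 1 & 1 & 2\\ 0 & 1 & 2 & 3\\ 1 & 2 & 1 & 0\\ 0 & 0 & 1 & 2 \end{bmatrix} = (2)\left [(1)\det\begin{bmatrix} 1 & 2 & 3\\ 2 & 1 & 0\\ 0 & 1 & 2 \end{bmatrix} + (1)\det\begin{bmatrix} 1 & 1 & 2\\ 1 & 2 & 3\\ 0 & 1 & 2 \end{bmatrix}\right ] = (2)\left [0 + 1\right ] = 2
(the 4 × 4 determinant has itself been expanded about its first column).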
C30 Contributed by Chris Black Statement [1219]
In order to exploit the zeros, let’s expand along row 3. We then have
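\det\left (A\right ) = (1)\det\begin{bmatrix} 2 & 1 & 0 & 1\\ 2 & 1 & −1 & 1\\ 1 & 0 & 1 & 1\\ 2 & 1 & 2 & 1 \end{bmatrix} − (2)\det\begin{bmatrix} 2 & 1 & 1 & 1\\ 2 & 1 & 2 & 1\\ 1 & 0 & 3 & 1\\ 2 & 1 & 1 & 1 \end{bmatrix} = (1)(0) − (2)(0) = 0
Each of the two 4 × 4 determinants is zero (the second matrix has identical first and fourth rows, and in the first matrix the fourth row equals 3(row one) − 2(row two)), so \det\left (A\right ) = 0.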
M10 Contributed by Chris Black Statement [1219]
There is only one value of k
that will make this matrix have a zero determinant.
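\det\left (A\right ) = \det\begin{bmatrix} 2 & 4\\ 3 & k \end{bmatrix} = 2k − 12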
so \mathop{ det} \left (A\right ) = 0 only when k = 6.
M11 Contributed by Chris Black Statement [1220]
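Expanding about the first row (Theorem DER),
\det\left (A\right ) = (1)\det\begin{bmatrix} 0 & 1\\ 3 & k \end{bmatrix} − (2)\det\begin{bmatrix} 2 & 1\\ 2 & k \end{bmatrix} + (1)\det\begin{bmatrix} 2 & 0\\ 2 & 3 \end{bmatrix} = (0 − 3) − 2(2k − 2) + (6 − 0) = 7 − 4k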
Thus, \mathop{ det} \left (A\right ) = 0 only when k = {7\over 4}.
M15 Contributed by Chris Black Statement [1220]
Using the formula for the determinant of a
2 × 2
matrix given in Theorem DMST, we have
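\det\left (B\right ) = (2 − x)(2 − x) − (1)(4) = {x}^{2} − 4x = x(x − 4)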
and thus \mathop{ det} (B) = 0 only when x = 0 or x = 4.
M16 Contributed by Chris Black Statement [1220]
And thus, \mathop{ det} \left (B\right ) = 0 when x = 0, x = 2, or x = 4.