From A First Course in Linear Algebra
Version 2.10
© 2004.
Licensed under the GNU Free Documentation License.
http://linear.ups.edu/
We know how to add vectors and how to multiply them by scalars. Together,
these operations give us the possibility of making linear combinations. Similarly,
we know how to add matrices and how to multiply matrices by scalars. In this
section we mix all these ideas together and produce an operation known as
“matrix multiplication.” This will lead to some results that are both surprising
and central. We begin with a definition of how to multiply a vector by a
matrix.
We have repeatedly seen the importance of forming linear combinations of the columns of a matrix. As one example of this, the oft-used Theorem SLSLC says that every solution to a system of linear equations gives rise to a linear combination of the column vectors of the coefficient matrix that equals the vector of constants. This theorem, and others, motivate the following central definition.
Definition MVP
Matrix-Vector Product
Suppose A is an m × n matrix with columns A_1, A_2, A_3, …, A_n and u is a vector of size n. Then the matrix-vector product of A with u is the linear combination

Au = [u]_1 A_1 + [u]_2 A_2 + [u]_3 A_3 + ⋯ + [u]_n A_n
(This definition contains Notation MVP.) △
So, the matrix-vector product is yet another version of “multiplication,” at least in the sense that we have yet again overloaded juxtaposition of two symbols as our notation. Remember your objects: an m × n matrix times a vector of size n will create a vector of size m. So if A is rectangular, then the size of the vector changes. With all the linear combinations we have performed so far, this computation should now seem second nature.
Example MTV
A matrix times a vector
Consider

A = \begin{bmatrix} 1 & 4 & 2 & 3 & 4 \\ -3 & 2 & 0 & 1 & -2 \\ 1 & 6 & -3 & -1 & 5 \end{bmatrix} \qquad u = \begin{bmatrix} 2 \\ 1 \\ -2 \\ 3 \\ -1 \end{bmatrix}

Then

Au = 2\begin{bmatrix} 1 \\ -3 \\ 1 \end{bmatrix} + 1\begin{bmatrix} 4 \\ 2 \\ 6 \end{bmatrix} + (-2)\begin{bmatrix} 2 \\ 0 \\ -3 \end{bmatrix} + 3\begin{bmatrix} 3 \\ 1 \\ -1 \end{bmatrix} + (-1)\begin{bmatrix} 4 \\ -2 \\ 5 \end{bmatrix} = \begin{bmatrix} 7 \\ 1 \\ 6 \end{bmatrix}. ⊠
We can now represent systems of linear equations compactly with a matrix-vector product (Definition MVP) and column vector equality (Definition CVE). This finally yields a very popular alternative to our unconventional ℒS(A, b) notation.
Theorem SLEMM
Systems of Linear Equations as Matrix Multiplication
The set of solutions to the linear system ℒS(A, b) equals the set of solutions for x in the vector equation Ax = b.
□
Proof This theorem says that two sets (of solutions) are equal. So we need to show that one set of solutions is a subset of the other, and vice versa (Definition SE). Let A_1, A_2, A_3, …, A_n be the columns of A. Both of these set inclusions then follow from the following chain of equivalences (Technique E):
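x is a solution to ℒS(A, b)
⟺ [x]_1 A_1 + [x]_2 A_2 + [x]_3 A_3 + ⋯ + [x]_n A_n = b    (Theorem SLSLC)
⟺ x is a solution to Ax = b    (Definition MVP)

■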
Example MNSLE
Matrix notation for systems of linear equations
The system of linear equations from Example NSLE,

2x_1 + 4x_2 - 3x_3 + 5x_4 + x_5 = 9
3x_1 + x_2 + x_4 - 3x_5 = 0
-2x_1 + 7x_2 - 5x_3 + 2x_4 + 2x_5 = -3

has coefficient matrix
A = \begin{bmatrix} 2 & 4 & -3 & 5 & 1 \\ 3 & 1 & 0 & 1 & -3 \\ -2 & 7 & -5 & 2 & 2 \end{bmatrix}
and vector of constants
b = \begin{bmatrix} 9 \\ 0 \\ -3 \end{bmatrix}
and so will be described compactly by the vector equation Ax = b. ⊠
The matrix-vector product is a very natural computation. We have motivated it by its connections with systems of equations, but here is another example.
Example MBC
Money’s best cities
Every year Money magazine selects several cities in the United States as the “best” cities to live in, based on a wide array of statistics about each city. This is an example of how the editors of Money might arrive at a single number that consolidates the statistics about a city. We will analyze Los Angeles, Chicago and New York City, based on four criteria: average high temperature in July (Fahrenheit), number of colleges and universities in a 30-mile radius, number of toxic waste sites in the Superfund environmental clean-up program and a personal crime index based on FBI statistics (average = 100, smaller is safer). It should be apparent how to generalize the example to a greater number of cities and a greater number of statistics.
We begin by building a table of statistics. The rows will be labeled with the cities, and the columns with statistical categories. These values are from Money’s website in early 2005.
City        | Temp | Colleges | Superfund | Crime
Los Angeles |   77 |       28 |        93 |   254
Chicago     |   84 |       38 |        85 |   363
New York    |   84 |       99 |         1 |   193
Conceivably these data might reside in a spreadsheet. Now we must combine the statistics for each city. We could accomplish this by weighting each category, scaling the values and summing them. The sizes of the weights would depend upon the numerical size of each statistic generally, but more importantly, they would reflect the editors’ opinions or beliefs about which statistics were most important to their readers. Is the crime index more important than the number of colleges and universities? Of course, there is no right answer to this question.
Suppose the editors finally decide on the following weights to employ: temperature, 0.23; colleges, 0.46; Superfund, − 0.05; crime, − 0.20. Notice how negative weights are used for undesirable statistics. Then, for example, the editors would compute for Los Angeles,
(0.23)(77) + (0.46)(28) + (−0.05)(93) + (−0.20)(254) = −24.86
This computation might remind you of an inner product, but we will produce the computations for all of the cities as a matrix-vector product. Write the table of raw statistics as a matrix
T = \begin{bmatrix} 77 & 28 & 93 & 254 \\ 84 & 38 & 85 & 363 \\ 84 & 99 & 1 & 193 \end{bmatrix}
and the weights as a vector
w = \begin{bmatrix} 0.23 \\ 0.46 \\ -0.05 \\ -0.20 \end{bmatrix}
then the matrix-vector product (Definition MVP) yields
Tw = (0.23)\begin{bmatrix} 77 \\ 84 \\ 84 \end{bmatrix} + (0.46)\begin{bmatrix} 28 \\ 38 \\ 99 \end{bmatrix} + (-0.05)\begin{bmatrix} 93 \\ 85 \\ 1 \end{bmatrix} + (-0.20)\begin{bmatrix} 254 \\ 363 \\ 193 \end{bmatrix} = \begin{bmatrix} -24.86 \\ -40.05 \\ 26.21 \end{bmatrix}
This vector contains a single number for each of the cities being studied, so the editors would rank New York best (26.21), Los Angeles next ( − 24.86), and Chicago third ( − 40.05). Of course, the mayor’s offices in Chicago and Los Angeles are free to counter with a different set of weights that cause their city to be ranked best. These alternative weights would be chosen to play to each city’s strengths and minimize its problem areas.
If a spreadsheet were used to make these computations, a row of weights would be entered somewhere near the table of data and the formulas in the spreadsheet would effect a matrix-vector product. This example is meant to illustrate how “linear” computations (addition, multiplication) can be organized as a matrix-vector product.
Another example would be the matrix of numerical scores on examinations and exercises for students in a class. The rows would correspond to students and the columns to exams and assignments. The instructor could then assign weights to the different exams and assignments, and via a matrix-vector product, compute a single score for each student. ⊠
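If a computer were handy instead of a spreadsheet, the same consolidation is a one-line matrix-vector product. Here is a minimal sketch in Python with NumPy (NumPy is an assumption of this illustration, not something the text relies on; the array values are the ones above):

    import numpy as np

    # Rows are cities (Los Angeles, Chicago, New York);
    # columns are Temp, Colleges, Superfund, Crime.
    T = np.array([[77, 28, 93, 254],
                  [84, 38, 85, 363],
                  [84, 99,  1, 193]])

    # The editors' weights, negative for undesirable statistics.
    w = np.array([0.23, 0.46, -0.05, -0.20])

    # One matrix-vector product consolidates every city at once.
    print(T @ w)  # approximately [-24.86 -40.05  26.21]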
Later (much later) we will need the following theorem, which is really a technical lemma (see Technique LC). Since we are in a position to prove it now, we will. But you can safely skip it for the moment, if you promise to come back later to study the proof when the theorem is employed. At that point you will also be able to understand the comments in the paragraph following the proof.
Theorem EMMVP
Equal Matrices and Matrix-Vector Products
Suppose that A and B are m × n matrices such that Ax = Bx for every x ∈ ℂ^n. Then A = B.
□
Proof We are assuming Ax = Bx for all x ∈ ℂ^n, so we can employ this equality for any choice of the vector x. However, we’ll limit our use of this equality to the standard unit vectors, e_j, 1 ≤ j ≤ n (Definition SUV). For all 1 ≤ j ≤ n, 1 ≤ i ≤ m,
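[A]_{ij} = [A_j]_i = [Ae_j]_i    (Definition MVP, since Ae_j is column j of A)
= [Be_j]_i    (hypothesis, with x = e_j)
= [B_j]_i = [B]_{ij}    (Definition MVP)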
So by Definition ME the matrices A and B are equal, as desired. ■
You might notice that the hypotheses of this theorem could be “weakened” (i.e. made less restrictive). We could suppose the equality of the matrix-vector products for just the standard unit vectors (Definition SUV) or any other spanning set (Definition TSVS) of {ℂ}^{n} (Exercise LISS.T40). However, in practice, when we apply this theorem we will only need this weaker form. (If we made the hypothesis less restrictive, we would call the theorem “stronger.”)
We now define how to multiply two matrices together. Stop for a minute and think about how you might define this new operation.
Many books would present this definition much earlier in the course. However, we have taken great care to delay it as long as possible and to present as many ideas as practical based mostly on the notion of linear combinations. Towards the conclusion of the course, or when you perhaps take a second course in linear algebra, you may be in a position to appreciate the reasons for this. For now, understand that matrix multiplication is a central definition and perhaps you will appreciate its importance more by having saved it for later.
Definition MM
Matrix Multiplication
Suppose A is an m × n matrix and B is an n × p matrix with columns B_1, B_2, B_3, …, B_p. Then the matrix product of A with B is the m × p matrix where column i is the matrix-vector product AB_i. Symbolically,

AB = A\left[B_1 | B_2 | B_3 | \dots | B_p\right] = \left[AB_1 | AB_2 | AB_3 | \dots | AB_p\right].

△
Example PTM
Product of two matrices
Set

A = \begin{bmatrix} 1 & 2 & -1 & 4 & 6 \\ 0 & -4 & 1 & 2 & 3 \\ -5 & 1 & 2 & -3 & 4 \end{bmatrix} \qquad B = \begin{bmatrix} 1 & 6 & 2 & 1 \\ -1 & 4 & 3 & 2 \\ 1 & 1 & 2 & 3 \\ 6 & 4 & -1 & 2 \\ 1 & -2 & 3 & 0 \end{bmatrix}

Then

AB = \left[ A\begin{bmatrix} 1 \\ -1 \\ 1 \\ 6 \\ 1 \end{bmatrix} \,\middle|\, A\begin{bmatrix} 6 \\ 4 \\ 1 \\ 4 \\ -2 \end{bmatrix} \,\middle|\, A\begin{bmatrix} 2 \\ 3 \\ 2 \\ -1 \\ 3 \end{bmatrix} \,\middle|\, A\begin{bmatrix} 1 \\ 2 \\ 3 \\ 2 \\ 0 \end{bmatrix} \right] = \begin{bmatrix} 28 & 17 & 20 & 10 \\ 20 & -13 & -3 & -1 \\ -18 & -44 & 12 & -3 \end{bmatrix}. ⊠
Is this the definition of matrix multiplication you expected? Perhaps our previous operations for matrices caused you to think that we might multiply two matrices of the same size, entry-by-entry? Notice that our current definition uses matrices of different sizes (though the number of columns in the first must equal the number of rows in the second), and the result is of a third size. Notice too in the previous example that we cannot even consider the product BA, since the sizes of the two matrices in this order aren’t right.
But it gets weirder than that. Many of your old ideas about “multiplication” won’t apply to matrix multiplication, but some still will. So make no assumptions, and don’t do anything until you have a theorem that says you can. Even if the sizes are right, matrix multiplication is not commutative — order matters.
Example MMNC
Matrix multiplication is not commutative
Set
Then we have two square, 2 × 2 matrices, so Definition MM allows us to multiply them in either order. We find
and AB ≠ BA. Not even close. It should not be hard for you to construct other pairs of matrices that do not commute (try a couple of 3 × 3’s). Can you find a pair of non-identical matrices that do commute? ⊠
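To make the point concrete, here is one pair of matrices (an illustrative pair constructed for this purpose, not necessarily the pair used above) that fails to commute:

A = \begin{bmatrix} 1 & 1 \\ 0 & 1 \end{bmatrix} \qquad B = \begin{bmatrix} 1 & 0 \\ 1 & 1 \end{bmatrix} \qquad AB = \begin{bmatrix} 2 & 1 \\ 1 & 1 \end{bmatrix} \qquad BA = \begin{bmatrix} 1 & 1 \\ 1 & 2 \end{bmatrix}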
Matrix multiplication is fundamental, so it is a natural procedure for any computational device. See: Computation MM.MMA.
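For instance, a minimal sketch in Python with NumPy (one computational system among many, and not the one referenced above; the matrices are the illustrative non-commuting pair just shown):

    import numpy as np

    # The illustrative non-commuting pair from above.
    A = np.array([[1, 1],
                  [0, 1]])
    B = np.array([[1, 0],
                  [1, 1]])

    print(A @ B)  # [[2 1], [1 1]]
    print(B @ A)  # [[1 1], [1 2]], so order matters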
While certain “natural” properties of multiplication don’t hold, many more do. In the next subsection, we’ll state and prove the relevant theorems. But first, we need a theorem that provides an alternate means of multiplying two matrices. In many texts, this would be given as the definition of matrix multiplication. We prefer to turn it around and have the following formula as a consequence of our definition. It will prove useful for proofs of matrix equality, where we need to examine products of matrices, entry-by-entry.
Theorem EMP
Entries of Matrix Products
Suppose A is an m × n matrix and B is an n × p matrix. Then for 1 ≤ i ≤ m, 1 ≤ j ≤ p, the individual entries of AB are given by

[AB]_{ij} = [A]_{i1}[B]_{1j} + [A]_{i2}[B]_{2j} + [A]_{i3}[B]_{3j} + ⋯ + [A]_{in}[B]_{nj} = \sum_{k=1}^{n} [A]_{ik}[B]_{kj}

□
Proof Denote the columns of A as the vectors A_1, A_2, A_3, …, A_n and the columns of B as the vectors B_1, B_2, B_3, …, B_p. Then for 1 ≤ i ≤ m, 1 ≤ j ≤ p,
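[AB]_{ij} = [AB_j]_i    (Definition MM)
= \left[ [B_j]_1 A_1 + [B_j]_2 A_2 + ⋯ + [B_j]_n A_n \right]_i    (Definition MVP)
= [B_j]_1 [A_1]_i + [B_j]_2 [A_2]_i + ⋯ + [B_j]_n [A_n]_i    (Definitions CVA, CVSM)
= [B]_{1j}[A]_{i1} + [B]_{2j}[A]_{i2} + ⋯ + [B]_{nj}[A]_{in}
= \sum_{k=1}^{n} [A]_{ik}[B]_{kj}

■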
Example PTMEE
Product of two matrices, entry-by-entry
Consider again the two matrices from Example PTM. Suppose we just wanted the entry of AB in the second row, third column:
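[AB]_{23} = [A]_{21}[B]_{13} + [A]_{22}[B]_{23} + [A]_{23}[B]_{33} + [A]_{24}[B]_{43} + [A]_{25}[B]_{53}    (Theorem EMP)
= (0)(2) + (-4)(3) + (1)(2) + (2)(-1) + (3)(3) = -3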
Notice how there are 5 terms in the sum, since 5 is the common dimension of the two matrices (column count for A, row count for B). In the conclusion of Theorem EMP, it would be the index k that would run from 1 to 5 in this computation. Here’s a bit more practice.
The entry in the third row, first column:
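[AB]_{31} = [A]_{31}[B]_{11} + [A]_{32}[B]_{21} + [A]_{33}[B]_{31} + [A]_{34}[B]_{41} + [A]_{35}[B]_{51}    (Theorem EMP)
= (-5)(1) + (1)(-1) + (2)(1) + (-3)(6) + (4)(1) = -18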
To get some more practice on your own, complete the computation of the other 10 entries of this product. Construct some other pairs of matrices (of compatible sizes) and compute their product two ways. First use Definition MM. Since linear combinations are straightforward for you now, this should be easy to do and to do correctly. Then do it again, using Theorem EMP. Since this process may take some practice, use your first computation to check your work. ⊠
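If you want to check such hand computations with software, here is a minimal sketch in Python with NumPy (an assumption of this illustration; any system would do) that computes AB both ways, by columns via Definition MM and by entries via Theorem EMP, using the matrices of Example PTM:

    import numpy as np

    # The matrices of Example PTM.
    A = np.array([[ 1,  2, -1,  4,  6],
                  [ 0, -4,  1,  2,  3],
                  [-5,  1,  2, -3,  4]])
    B = np.array([[ 1,  6,  2,  1],
                  [-1,  4,  3,  2],
                  [ 1,  1,  2,  3],
                  [ 6,  4, -1,  2],
                  [ 1, -2,  3,  0]])

    # Definition MM: column j of AB is the matrix-vector product A B_j.
    by_columns = np.column_stack([A @ B[:, j] for j in range(B.shape[1])])

    # Theorem EMP: entry (i, j) of AB is sum over k of A[i, k] * B[k, j].
    by_entries = np.array([[sum(A[i, k] * B[k, j] for k in range(A.shape[1]))
                            for j in range(B.shape[1])]
                           for i in range(A.shape[0])])

    assert (by_columns == by_entries).all()
    print(by_columns)  # matches the product computed in Example PTM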
Theorem EMP is the way many people compute matrix products by hand. It will also be very useful for the theorems we are going to prove shortly. However, the definition (Definition MM) is frequently the most useful for its connections with deeper ideas like the null space and the upcoming column space.
In this subsection, we collect properties of matrix multiplication and its interaction with the zero matrix (Definition ZM), the identity matrix (Definition IM), matrix addition (Definition MA), scalar matrix multiplication (Definition MSM), the inner product (Definition IP), conjugation (Theorem MMCC), and the transpose (Definition TM). Whew! Here we go. These are great proofs to practice with, so try to concoct the proofs before reading them; they’ll get progressively more complicated as we go.
Theorem MMZM
Matrix Multiplication and the Zero Matrix
Suppose A
is an m × n
matrix. Then
1. A{O}_{n×p} = {O}_{m×p}
2. {O}_{p×m}A = {O}_{p×n}
□
Proof We’ll prove (1) and leave (2) to you. Entry-by-entry, for 1 ≤ i ≤ m, 1 ≤ j ≤ p,
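[A O_{n×p}]_{ij} = \sum_{k=1}^{n} [A]_{ik} [O_{n×p}]_{kj}    (Theorem EMP)
= \sum_{k=1}^{n} [A]_{ik} \, 0    (Definition ZM)
= 0 = [O_{m×p}]_{ij}    (Definition ZM)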
So by the definition of matrix equality (Definition ME), the matrices A{O}_{n×p} and {O}_{m×p} are equal. ■
Theorem MMIM
Matrix Multiplication and Identity Matrix
Suppose A
is an m × n
matrix. Then
1. A{I}_{n} = A
2. {I}_{m}A = A
□
Proof Again, we’ll prove (1) and leave (2) to you. Entry-by-entry, for 1 ≤ i ≤ m, 1 ≤ j ≤ n,
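[A I_n]_{ij} = \sum_{k=1}^{n} [A]_{ik} [I_n]_{kj}    (Theorem EMP)
= [A]_{ij} [I_n]_{jj} + \sum_{k=1,\,k≠j}^{n} [A]_{ik} [I_n]_{kj}
= [A]_{ij}(1) + \sum_{k=1,\,k≠j}^{n} [A]_{ik}(0)    (Definition IM)
= [A]_{ij}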
So the matrices A and A{I}_{n} are equal, entry-by-entry, and by the definition of matrix equality (Definition ME) we can say they are equal matrices. ■
It is this theorem that gives the identity matrix its name. It is a matrix that behaves with matrix multiplication like the scalar 1 does with scalar multiplication. To multiply by the identity matrix is to have no effect on the other matrix.
Theorem MMDAA
Matrix Multiplication Distributes Across Addition
Suppose A is an m × n matrix, B and C are n × p matrices and D is a p × s matrix. Then

1. A(B + C) = AB + AC
2. (B + C)D = BD + CD

□
Proof We’ll do (1), you do (2). Entry-by-entry, for 1 ≤ i ≤ m, 1 ≤ j ≤ p,
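[A(B + C)]_{ij} = \sum_{k=1}^{n} [A]_{ik} [B + C]_{kj}    (Theorem EMP)
= \sum_{k=1}^{n} [A]_{ik} \left([B]_{kj} + [C]_{kj}\right)    (Definition MA)
= \sum_{k=1}^{n} [A]_{ik}[B]_{kj} + \sum_{k=1}^{n} [A]_{ik}[C]_{kj}
= [AB]_{ij} + [AC]_{ij}    (Theorem EMP)
= [AB + AC]_{ij}    (Definition MA)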
So the matrices A(B + C) and AB + AC are equal, entry-by-entry, and by the definition of matrix equality (Definition ME) we can say they are equal matrices. ■
Theorem MMSMM
Matrix Multiplication and Scalar Matrix Multiplication
Suppose A is an m × n matrix and B is an n × p matrix. Let α be a scalar. Then α(AB) = (αA)B = A(αB).
□
Proof These are equalities of matrices. We’ll do the first one, the second is similar and will be good practice for you. For 1 ≤ i ≤ m, 1 ≤ j ≤ p,
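[α(AB)]_{ij} = α [AB]_{ij}    (Definition MSM)
= α \sum_{k=1}^{n} [A]_{ik}[B]_{kj}    (Theorem EMP)
= \sum_{k=1}^{n} α [A]_{ik}[B]_{kj}    (distributivity in ℂ)
= \sum_{k=1}^{n} [αA]_{ik}[B]_{kj}    (Definition MSM)
= [(αA)B]_{ij}    (Theorem EMP)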
So the matrices α(AB) and (αA)B are equal, entry-by-entry, and by the definition of matrix equality (Definition ME) we can say they are equal matrices. ■
Theorem MMA
Matrix Multiplication is Associative
Suppose A
is an m × n
matrix, B is
an n × p matrix
and D is a
p × s matrix.
Then A(BD) = (AB)D.
□
Proof A matrix equality, so we’ll go entry-by-entry, no surprise there. For 1 ≤ i ≤ m, 1 ≤ j ≤ s,
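[A(BD)]_{ij} = \sum_{k=1}^{n} [A]_{ik}[BD]_{kj}    (Theorem EMP)
= \sum_{k=1}^{n} [A]_{ik} \left( \sum_{l=1}^{p} [B]_{kl}[D]_{lj} \right)    (Theorem EMP)
= \sum_{k=1}^{n} \sum_{l=1}^{p} [A]_{ik}[B]_{kl}[D]_{lj}    (distributivity in ℂ)
= \sum_{l=1}^{p} \sum_{k=1}^{n} [A]_{ik}[B]_{kl}[D]_{lj}    (commutativity of addition in ℂ)
= \sum_{l=1}^{p} \left( \sum_{k=1}^{n} [A]_{ik}[B]_{kl} \right) [D]_{lj}
= \sum_{l=1}^{p} [AB]_{il}[D]_{lj}    (Theorem EMP)
= [(AB)D]_{ij}    (Theorem EMP)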
So the matrices (AB)D and A(BD) are equal, entry-by-entry, and by the definition of matrix equality (Definition ME) we can say they are equal matrices. ■
The statement of our next theorem is technically inaccurate. If we upgrade the vectors u, v to matrices with a single column, then the expression u^t \overline{v} is a 1 × 1 matrix, though we will treat this small matrix as if it were simply the scalar quantity in its lone entry. When we apply Theorem MMIP there should not be any confusion.
Theorem MMIP
Matrix Multiplication and Inner Products
If we consider the vectors u, v ∈ ℂ^m as m × 1 matrices, then

⟨u, v⟩ = u^t \overline{v}

□
Proof
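⟨u, v⟩ = \sum_{k=1}^{m} [u]_k \overline{[v]_k}    (Definition IP)
= \sum_{k=1}^{m} [u^t]_{1k} [\overline{v}]_{k1}    (Definition TM, Definition CCCV)
= [u^t \overline{v}]_{11}    (Theorem EMP)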
To finish we just blur the distinction between a 1 × 1 matrix (u^t \overline{v}) and its lone entry. ■
Theorem MMCC
Matrix Multiplication and Complex Conjugation
Suppose A is an m × n matrix and B is an n × p matrix. Then \overline{AB} = \overline{A}\,\overline{B}.
□
Proof To obtain this matrix equality, we will work entry-by-entry. For 1 ≤ i ≤ m, 1 ≤ j ≤ p,
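[\overline{AB}]_{ij} = \overline{[AB]_{ij}}    (Definition CCM)
= \overline{\sum_{k=1}^{n} [A]_{ik}[B]_{kj}}    (Theorem EMP)
= \sum_{k=1}^{n} \overline{[A]_{ik}[B]_{kj}}    (Theorem CCRA)
= \sum_{k=1}^{n} \overline{[A]_{ik}}\;\overline{[B]_{kj}}    (Theorem CCRM)
= \sum_{k=1}^{n} [\overline{A}]_{ik}[\overline{B}]_{kj}    (Definition CCM)
= [\overline{A}\,\overline{B}]_{ij}    (Theorem EMP)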
So the matrices \overline{AB} and \overline{A}\kern 1.95872pt \overline{B} are equal, entry-by-entry, and by the definition of matrix equality (Definition ME) we can say they are equal matrices. ■
Another theorem in this style, and it’s a good one. If you’ve been practicing with the previous proofs you should be able to do this one yourself.
Theorem MMT
Matrix Multiplication and Transposes
Suppose A is an m × n matrix and B is an n × p matrix. Then (AB)^t = B^t A^t.
□
□
Proof This theorem may be surprising but if we check the sizes of the matrices involved, then maybe it will not seem so far-fetched. First, AB has size m × p, so its transpose has size p × m. The product of {B}^{t} with {A}^{t} is a p × n matrix times an n × m matrix, also resulting in a p × m matrix. So at least our objects are compatible for equality (and would not be, in general, if we didn’t reverse the order of the matrix multiplication).
Here we go again, entry-by-entry. Since (AB)^t has size p × m, we take 1 ≤ i ≤ p, 1 ≤ j ≤ m,
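[(AB)^t]_{ij} = [AB]_{ji}    (Definition TM)
= \sum_{k=1}^{n} [A]_{jk}[B]_{ki}    (Theorem EMP)
= \sum_{k=1}^{n} [B]_{ki}[A]_{jk}    (commutativity in ℂ)
= \sum_{k=1}^{n} [B^t]_{ik}[A^t]_{kj}    (Definition TM)
= [B^t A^t]_{ij}    (Theorem EMP)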
So the matrices {(AB)}^{t} and {B}^{t}{A}^{t} are equal, entry-by-entry, and by the definition of matrix equality (Definition ME) we can say they are equal matrices. ■
This theorem seems odd at first glance, since we have to switch the order of A and B. But if we simply consider the sizes of the matrices involved, we can see that the switch is necessary for this reason alone. That the individual entries of the products then come along to be equal is a bonus.
As the adjoint of a matrix is a composition of a conjugate and a transpose, its interaction with matrix multiplication is similar to that of a transpose. Here’s the last of our long list of basic properties of matrix multiplication.
Theorem MMAD
Matrix Multiplication and Adjoints
Suppose A is an m × n matrix and B is an n × p matrix. Then (AB)^∗ = B^∗ A^∗.
□
Proof
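(AB)^∗ = \left(\overline{AB}\right)^t    (Definition A)
= \left(\overline{A}\,\overline{B}\right)^t    (Theorem MMCC)
= \left(\overline{B}\right)^t \left(\overline{A}\right)^t    (Theorem MMT)
= B^∗ A^∗    (Definition A)

■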
Notice how none of these proofs above relied on writing out huge general matrices with lots of ellipses (“…”) and trying to formulate the equalities a whole matrix at a time. This messy business is a “proof technique” to be avoided at all costs. Notice too how the proof of Theorem MMAD does not use an entry-by-entry approach, but simply builds on previous results about matrix multiplication’s interaction with conjugation and transposes.
These theorems, along with Theorem VSPM and the other results in Section MO, give you the “rules” for how matrices interact with the various operations we have defined on matrices (addition, scalar multiplication, matrix multiplication, conjugation, transposes and adjoints). Use them and use them often. But don’t try to do anything with a matrix that you don’t have a rule for. Together, we would informally call all these operations, and the attendant theorems, “the algebra of matrices.” Notice, too, that every column vector is just an n × 1 matrix, so these theorems apply to column vectors also. Finally, these results, taken as a whole, may make us feel that the definition of matrix multiplication is not so unnatural.
The adjoint of a matrix has a basic property when employed in a matrix-vector product as part of an inner product. At this point, you could even use the following result as a motivation for the definition of an adjoint.
Theorem AIP
Adjoint and Inner Product
Suppose that A is an m × n matrix and x ∈ ℂ^n, y ∈ ℂ^m. Then ⟨Ax, y⟩ = ⟨x, A^∗y⟩.
□
Proof
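⟨Ax, y⟩ = (Ax)^t \overline{y}    (Theorem MMIP)
= x^t A^t \overline{y}    (Theorem MMT)
= x^t \overline{\overline{A^t}}\,\overline{y}    (conjugating a matrix twice leaves it unchanged)
= x^t \overline{\overline{A^t}\,y}    (Theorem MMCC)
= x^t \overline{A^∗ y}    (Definition A)
= ⟨x, A^∗y⟩    (Theorem MMIP)

■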
Sometimes a matrix is equal to its adjoint (Definition A), and these matrices have interesting properties. One of the most common situations where this occurs is when a matrix has only real number entries. Then we are simply talking about symmetric matrices (Definition SYM), so you can view this as a generalization of a symmetric matrix.
Definition HM
Hermitian Matrix
The square matrix A is Hermitian (or self-adjoint) if A = A^∗.
△
Again, the set of real matrices that are Hermitian is exactly the set of symmetric matrices. In Section PEE we will uncover some amazing properties of Hermitian matrices, so when you get there, run back here to remind yourself of this definition. Further properties will also appear in various sections of the Topics (Part T). Right now we prove a fundamental result about Hermitian matrices, matrix-vector products and inner products. As a characterization, this could be employed as a definition of a Hermitian matrix and some authors take this approach.
Theorem HMIP
Hermitian Matrices and Inner Products
Suppose that A is a
square matrix of size n.
Then A is Hermitian
if and only if \left \langle Ax,\kern 1.95872pt y\right \rangle = \left \langle x,\kern 1.95872pt Ay\right \rangle
for all x,\kern 1.95872pt y ∈ {ℂ}^{n}.
□
Proof ( ⇒) This is the “easy half” of the proof, and makes the rationale for a definition of Hermitian matrices most obvious. Assume A is Hermitian,
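so for all x, y ∈ ℂ^n,

⟨Ax, y⟩ = ⟨x, A^∗y⟩    (Theorem AIP)
= ⟨x, Ay⟩    (Definition HM)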
( ⇐) This “half” will take a bit more work. Assume that ⟨Ax, y⟩ = ⟨x, Ay⟩ for all x, y ∈ ℂ^n. Choose any x ∈ ℂ^n. We want to show that A = A^∗ by establishing that Ax = A^∗x. With only this much motivation, consider the inner product,
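⟨Ax − A^∗x, Ax − A^∗x⟩ = ⟨Ax, Ax − A^∗x⟩ − ⟨A^∗x, Ax − A^∗x⟩    (Theorem IPVA)
= ⟨x, A(Ax − A^∗x)⟩ − ⟨A^∗x, Ax − A^∗x⟩    (hypothesis)
= ⟨x, A(Ax − A^∗x)⟩ − ⟨x, A(Ax − A^∗x)⟩    (Theorem AIP, with (A^∗)^∗ = A)
= 0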
Because this inner product equals zero, and has the same vector in each argument (Ax − A^∗x), Theorem PIP gives the conclusion that Ax − A^∗x = 0. With Ax = A^∗x for all x ∈ ℂ^n, Theorem EMMVP says A = A^∗, which is the defining property of a Hermitian matrix (Definition HM). ■
So, informally, Hermitian matrices are those that can be tossed around from one side of an inner product to the other with reckless abandon. We’ll see later what this buys us.
C20 Compute the product of the two matrices below, AB. Do this using the definitions of the matrix-vector product (Definition MVP) and the definition of matrix multiplication (Definition MM).
Contributed by Robert Beezer Solution [657]
C21 Compute the product AB of the two matrices below using both the definition of the matrix-vector product (Definition MVP) and the definition of matrix multiplication (Definition MM).
Contributed by Chris Black Solution [657]
C22 Compute the product AB of the two matrices below using both the definition of the matrix-vector product (Definition MVP) and the definition of matrix multiplication (Definition MM).
Contributed by Chris Black Solution [658]
C23 Compute the product AB of the two matrices below using both the definition of the matrix-vector product (Definition MVP) and the definition of matrix multiplication (Definition MM).
Contributed by Chris Black Solution [658]
C24 Compute the product AB of the two matrices below.
Contributed by Chris Black Solution [658]
C25 Compute the product AB of the two matrices below.
Contributed by Chris Black Solution [658]
C26 Compute the product AB of the two matrices below using both the definition of the matrix-vector product (Definition MVP) and the definition of matrix multiplication (Definition MM).
Contributed by Chris Black Solution [658]
C30 For the matrix A = \begin{bmatrix} 1 & 2 \\ 0 & 1 \end{bmatrix}, find A^2, A^3, A^4. Find a general formula for A^n for any positive integer n.
Contributed by Chris Black Solution [658]
C31 For the matrix A = \begin{bmatrix} 1 & -1 \\ 0 & 1 \end{bmatrix}, find A^2, A^3, A^4. Find a general formula for A^n for any positive integer n.
Contributed by Chris Black Solution [658]
C32 For the matrix A = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 3 \end{bmatrix}, find A^2, A^3, A^4. Find a general formula for A^n for any positive integer n.
Contributed by Chris Black Solution [659]
C33 For the matrix A = \begin{bmatrix} 0 & 1 & 2 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{bmatrix}, find A^2, A^3, A^4. Find a general formula for A^n for any positive integer n.
Contributed by Chris Black Solution [659]
M30 Let A be a nonsingular n × n matrix, and let B be any n × p matrix. Show that if x ∈ 𝒩(B), then x ∈ 𝒩(AB).
Contributed by Chris Black Solution [659]
M31 Let A be a nonsingular n × n matrix, and let B be any n × p matrix. Show that if x ∈ 𝒩(AB), then x ∈ 𝒩(B).
Contributed by Chris Black Solution [660]
T10 Suppose that A is a square matrix and there is a vector, b, such that ℒS(A, b) has a unique solution. Prove that A is nonsingular. Give a direct proof (perhaps appealing to Theorem PSPHS) rather than just negating a sentence from the text discussing a similar situation.
Contributed by Robert Beezer Solution [660]
T20 Prove the second part of Theorem MMZM.
Contributed by Robert Beezer
T21 Prove the second part of Theorem MMIM.
Contributed by Robert Beezer
T22 Prove the second part of Theorem MMDAA.
Contributed by Robert Beezer
T23 Prove the second part of Theorem MMSMM.
Contributed by Robert Beezer Solution [660]
T31 Suppose that A is an m × n matrix and x, y ∈ 𝒩(A). Prove that x + y ∈ 𝒩(A).
Contributed by Robert Beezer
T32 Suppose that A is an m × n matrix, α ∈ ℂ, and x ∈ 𝒩(A). Prove that αx ∈ 𝒩(A).
Contributed by Robert Beezer
T40 Suppose that A is an m × n matrix and B is an n × p matrix. Prove that the null space of B is a subset of the null space of AB, that is, 𝒩(B) ⊆ 𝒩(AB). Provide an example where the opposite is false, in other words give an example where 𝒩(AB) ⊈ 𝒩(B).
Contributed by Robert Beezer Solution [661]
T41 Suppose that A is an n × n nonsingular matrix and B is an n × p matrix. Prove that the null space of B is equal to the null space of AB, that is, 𝒩(B) = 𝒩(AB). (Compare with Exercise MM.T40.)
Contributed by Robert Beezer Solution [662]
T50 Suppose u and v are any two solutions of the linear system ℒS(A, b). Prove that u − v is an element of the null space of A, that is, u − v ∈ 𝒩(A).
Contributed by Robert Beezer
T51 Give a new proof of Theorem PSPHS replacing applications of
Theorem SLSLC with matrix-vector products (Theorem SLEMM).
Contributed by Robert Beezer Solution [663]
T52 Suppose that x, y ∈ ℂ^n, b ∈ ℂ^m and A is an m × n matrix. If x, y and x + y are each a solution to the linear system ℒS(A, b), what interesting thing can you say about b? Form an implication with the existence of the three solutions as the hypothesis and an interesting statement about ℒS(A, b) as the conclusion, and then give a proof.
Contributed by Robert Beezer Solution [664]
C20 Contributed by Robert Beezer Statement [649]
By Definition MM,
C21 Contributed by Chris Black Statement [649]
AB = \begin{bmatrix} 13 & 3 & 15 \\ 1 & 0 & 5 \\ 1 & 0 & 1 \end{bmatrix}.
C22 Contributed by Chris Black Statement [650]
AB = \begin{bmatrix} 2 & 3 \\ 0 & 0 \end{bmatrix}.
C23 Contributed by Chris Black Statement [650]
AB = \begin{bmatrix} -5 & 5 \\ 10 & 10 \\ 2 & 16 \\ 5 & 5 \end{bmatrix}.
C24 Contributed by Chris Black Statement [651]
AB = \begin{bmatrix} 7 \\ 2 \\ 9 \end{bmatrix}.
C25 Contributed by Chris Black Statement [652]
AB = \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix}.
C26 Contributed by Chris Black Statement [652]
AB = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}.
C30 Contributed by Chris Black Statement [653]
A^2 = \begin{bmatrix} 1 & 4 \\ 0 & 1 \end{bmatrix}, A^3 = \begin{bmatrix} 1 & 6 \\ 0 & 1 \end{bmatrix}, A^4 = \begin{bmatrix} 1 & 8 \\ 0 & 1 \end{bmatrix}. From this pattern, we see that A^n = \begin{bmatrix} 1 & 2n \\ 0 & 1 \end{bmatrix}.
C31 Contributed by Chris Black Statement [653]
A^2 = \begin{bmatrix} 1 & -2 \\ 0 & 1 \end{bmatrix}, A^3 = \begin{bmatrix} 1 & -3 \\ 0 & 1 \end{bmatrix}, A^4 = \begin{bmatrix} 1 & -4 \\ 0 & 1 \end{bmatrix}. From this pattern, we see that A^n = \begin{bmatrix} 1 & -n \\ 0 & 1 \end{bmatrix}.
C32 Contributed by Chris Black Statement [653]
A^2 = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 4 & 0 \\ 0 & 0 & 9 \end{bmatrix}, A^3 = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 8 & 0 \\ 0 & 0 & 27 \end{bmatrix}, and A^4 = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 16 & 0 \\ 0 & 0 & 81 \end{bmatrix}. The pattern emerges, and we see that A^n = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 2^n & 0 \\ 0 & 0 & 3^n \end{bmatrix}.
C33 Contributed by Chris Black Statement [653]
We quickly compute A^2 = \begin{bmatrix} 0 & 0 & 1 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}, and we then see that A^3 and all subsequent powers of A are the 3 × 3 zero matrix; that is, A^n = O_{3×3} for n ≥ 3.
M30 Contributed by Chris Black Statement [654]
We are given that A is a nonsingular n × n matrix, and B is any n × p matrix. We are hypothesizing that x ∈ 𝒩(B), which means that Bx = 0, the zero vector in ℂ^n. Then (AB)x = A(Bx) = A0 = 0, and so x ∈ 𝒩(AB).
M31 Contributed by Chris Black Statement [654]
We are given that A is a nonsingular n × n matrix, and B is any n × p matrix. We are hypothesizing that x ∈ 𝒩(AB), which means that ABx = 0, the zero vector in ℂ^n. But ABx = A(Bx), so the vector Bx is in 𝒩(A). However, A is nonsingular, so we know that 𝒩(A) = {0}, and thus Bx is the zero vector. We then have x ∈ 𝒩(B), and we are done.
T10 Contributed by Robert Beezer Statement [654]
Since ℒS(A, b) has at least one solution, we can apply Theorem PSPHS. Because the solution is assumed to be unique, the null space of A must be trivial. Then Theorem NMTNS implies that A is nonsingular.
The converse of this statement is a trivial application of Theorem NMUS. That said, we could extend our NSMxx series of theorems with an added equivalence for nonsingularity, “Given a single vector of constants, b, the system ℒS(A, b) has a unique solution.”
T23 Contributed by Robert Beezer Statement [655]
We’ll run the proof entry-by-entry.
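[α(AB)]_{ij} = α [AB]_{ij}    (Definition MSM)
= α \sum_{k=1}^{n} [A]_{ik}[B]_{kj}    (Theorem EMP)
= \sum_{k=1}^{n} [A]_{ik}\, α [B]_{kj}    (commutativity in ℂ)
= \sum_{k=1}^{n} [A]_{ik} [αB]_{kj}    (Definition MSM)
= [A(αB)]_{ij}    (Theorem EMP)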
So the matrices α(AB) and A(αB) are equal, entry-by-entry, and by the definition of matrix equality (Definition ME) we can say they are equal matrices.
T40 Contributed by Robert Beezer Statement [655]
To prove that one set is a subset of another, we start with an element of the smaller set and see if we can determine that it is a member of the larger set (Definition SSET). Suppose x ∈ 𝒩(B). Then we know that Bx = 0 by Definition NSM. Consider
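(AB)x = A(Bx)    (Theorem MMA)
= A0    (hypothesis)
= 0    (Theorem MMZM)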
This establishes that x ∈ 𝒩(AB), so 𝒩(B) ⊆ 𝒩(AB).
To show that the inclusion does not hold in the opposite direction, choose B to be any nonsingular matrix of size n. Then 𝒩(B) = {0} by Theorem NMTNS. Let A be the square zero matrix, O, of the same size. Then AB = OB = O by Theorem MMZM and therefore 𝒩(AB) = ℂ^n, which is not a subset of 𝒩(B) = {0}.
T41 Contributed by David Braithwaite Statement [655]
From the solution to Exercise MM.T40 we know that 𝒩(B) ⊆ 𝒩(AB). So to establish the set equality (Definition SE) we need to show that 𝒩(AB) ⊆ 𝒩(B).
Suppose x ∈ 𝒩(AB). Then we know that ABx = 0 by Definition NSM. Consider
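A(Bx) = (AB)x    (Theorem MMA)
= 0    (Definition NSM)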
So, Bx ∈ 𝒩(A). Because A is nonsingular, it has a trivial null space (Theorem NMTNS) and we conclude that Bx = 0. This establishes that x ∈ 𝒩(B), so 𝒩(AB) ⊆ 𝒩(B) and combined with the solution to Exercise MM.T40 we have 𝒩(B) = 𝒩(AB) when A is nonsingular.
T51 Contributed by Robert Beezer Statement [656]
We will work with the vector equality representations of the relevant systems of
equations, as described by Theorem SLEMM.
( ⇐) Suppose y = w + z and z ∈ 𝒩(A). Then
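Ay = A(w + z)
= Aw + Az    (Theorem MMDAA)
= b + 0    (w is a solution, Theorem SLEMM; z ∈ 𝒩(A))
= b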
demonstrating that y is a solution.
( ⇒) Suppose y is a solution to ℒS(A, b). Then
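A(y − w) = Ay − Aw    (Theorem MMDAA)
= b − b    (y and w are both solutions, Theorem SLEMM)
= 0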
which says that y − w ∈ 𝒩(A). In other words, y − w = z for some vector z ∈ 𝒩(A). Rewritten, this is y = w + z, as desired.
T52 Contributed by Robert Beezer Statement [656]
ℒS(A, b) must be homogeneous. To see this, consider that
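b = A(x + y)    (x + y is a solution, Theorem SLEMM)
= Ax + Ay    (Theorem MMDAA)
= b + b    (x and y are solutions, Theorem SLEMM)

Subtracting b from both sides leaves b = 0, the zero vector.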
By Definition HS we see that ℒS(A, b) is homogeneous.