
Section MM Matrix Multiplication

We know how to add vectors and how to multiply them by scalars. Together, these operations give us the possibility of making linear combinations. Similarly, we know how to add matrices and how to multiply matrices by scalars. In this section we mix all these ideas together and produce an operation known as matrix multiplication. This will lead to some results that are both surprising and central. We begin with a definition of how to multiply a vector by a matrix.

Subsection MVP Matrix-Vector Product

We have repeatedly seen the importance of forming linear combinations of the columns of a matrix. As one example of this, the oft-used Theorem SLSLC said that every solution to a system of linear equations gives rise to a linear combination of the column vectors of the coefficient matrix that equals the vector of constants. This theorem, and others, motivate the following central definition.

Definition MVP. Matrix-Vector Product.

Suppose \(A\) is an \(m\times n\) matrix with columns \(\vectorlist{A}{n}\) and \(\vect{u}\) is a vector of size \(n\text{.}\) Then the matrix-vector product of \(A\) with \(\vect{u}\) is the linear combination

\begin{equation*} A\vect{u}= \vectorentry{\vect{u}}{1}\vect{A}_1+ \vectorentry{\vect{u}}{2}\vect{A}_2+ \vectorentry{\vect{u}}{3}\vect{A}_3+ \cdots+ \vectorentry{\vect{u}}{n}\vect{A}_n\text{.} \end{equation*}

So, the matrix-vector product is yet another version of “multiplication,” at least in the sense that we have yet again overloaded juxtaposition of two symbols as our notation. Remember your objects: an \(m\times n\) matrix times a vector of size \(n\) will create a vector of size \(m\text{.}\) So if \(A\) is rectangular, then the size of the “output” vector is different from the size of the “input” vector. With all the linear combinations we have performed so far, this computation should now seem second nature.

Consider

\begin{align*} A= \begin{bmatrix} 1 & 4 & 2 & 3 & 4\\ -3 & 2 & 0 & 1 & -2\\ 1 & 6 & -3 & -1 & 5 \end{bmatrix} && \vect{u}=\colvector{2\\1\\-2\\3\\-1}\text{.} \end{align*}

Then

\begin{equation*} A\vect{u}= 2\colvector{1\\-3\\1}+ 1\colvector{4\\2\\6}+ (-2)\colvector{2\\0\\-3}+ 3\colvector{3\\1\\-1}+ (-1)\colvector{4\\-2\\5} = \colvector{7\\1\\6}\text{.} \end{equation*}

We can now represent systems of linear equations compactly with a matrix-vector product (Definition MVP) and column vector equality (Definition CVE). This finally yields a very popular alternative to our unconventional \(\linearsystem{A}{\vect{b}}\) notation.

This theorem says that two sets (of solutions) are equal. So we need to show that one set of solutions is a subset of the other, and vice versa (Definition SE). Let \(\vectorlist{A}{n}\) be the columns of \(A\text{.}\) Both of these set inclusions then follow from the following chain of equivalences (Proof Technique E).

\begin{align*} &\vect{x}\text{ is a solution to }\linearsystem{A}{\vect{b}}\\ &\iff \vectorentry{\vect{x}}{1}\vect{A}_1+ \vectorentry{\vect{x}}{2}\vect{A}_2+ \vectorentry{\vect{x}}{3}\vect{A}_3+ \cdots+ \vectorentry{\vect{x}}{n}\vect{A}_n =\vect{b}&&\knowl{./knowl/theorem-SLSLC.html}{\text{Theorem SLSLC}}\\ &\iff \vect{x}\text{ is a solution to }A\vect{x}=\vect{b}&&\knowl{./knowl/definition-MVP.html}{\text{Definition MVP}} \end{align*}

The system of linear equations from Example NSLE,

\begin{align*} 2x_1+4x_2-3x_3+5x_4+x_5&=9\\ 3x_1+x_2+x_4-3x_5&=0\\ -2x_1+7x_2-5x_3+2x_4+2x_5&=-3 \end{align*}

has coefficient matrix and vector of constants

\begin{align*} A&= \begin{bmatrix} 2 & 4 & -3 & 5 & 1\\ 3 & 1 & 0 & 1 & -3\\ -2 & 7 & -5 & 2 & 2 \end{bmatrix} &\vect{b}&=\colvector{9\\0\\-3} \end{align*}

and so will be described compactly by the vector equation \(A\vect{x}=\vect{b}\text{.}\)

The matrix-vector product is a very natural computation. We have motivated it by its connections with systems of equations, but here is another example.

Every year Money magazine selects several cities in the United States as the “best” cities to live in, based on a wide array of statistics about each city. This is a small example of how the editors of Money might arrive at a single number that consolidates the statistics about a city. We will analyze Los Angeles, Chicago and New York City, based on four criteria: average high temperature in July (Fahrenheit), number of colleges and universities in a 30-mile radius, number of toxic waste sites in the Superfund environmental clean-up program and a personal crime index based on FBI statistics (average = 100, smaller is safer). It should be apparent how to generalize the example to a greater number of cities and a greater number of statistics.

We begin by building a table of statistics. The rows will be labeled with the cities, and the columns with statistical categories. These values are from Money's website in early 2005.

City          Temp   Colleges   Superfund   Crime
Los Angeles    77       28          93        254
Chicago        84       38          85        363
New York       84       99           1        193

Conceivably these data might reside in a spreadsheet. Now we must combine the statistics for each city. We could accomplish this by weighting each category, scaling the values and summing them. The sizes of the weights would depend upon the numerical size of each statistic generally, but more importantly, they would reflect the editors' opinions or beliefs about which statistics were most important to their readers. Is the crime index more important than the number of colleges and universities? Of course, there is no right answer to this question.

Suppose the editors finally decide on the following weights to employ: temperature, \(0.23\text{;}\) colleges, \(0.46\text{;}\) Superfund, \(-0.05\text{;}\) crime, \(-0.20\text{.}\) Notice how negative weights are used for undesirable statistics. Then, for example, the editors would compute for Los Angeles

\begin{equation*} (0.23)(77) + (0.46)(28) + (-0.05)(93) + (-0.20)(254) = -24.86 \end{equation*}

This computation might remind you of an inner product, but we will produce the computations for all of the cities as a matrix-vector product. Write the table of raw statistics as a matrix

\begin{equation*} T= \begin{bmatrix} 77 & 28 & 93 & 254\\ 84 & 38 & 85 & 363\\ 84 & 99 & 1 & 193 \end{bmatrix} \end{equation*}

and the weights as a vector

\begin{equation*} \vect{w}=\colvector{0.23\\0.46\\-0.05\\-0.20} \end{equation*}

then the matrix-vector product (Definition MVP) yields

\begin{equation*} T\vect{w}= (0.23)\colvector{77\\84\\84}+ (0.46)\colvector{28\\38\\99}+ (-0.05)\colvector{93\\85\\1}+ (-0.20)\colvector{254\\363\\193} = \colvector{-24.86\\-40.05\\26.21}\text{.} \end{equation*}

This vector contains a single number for each of the cities being studied, so the editors would rank New York best (\(26.21\)), Los Angeles next (\(-24.86\)), and Chicago third (\(-40.05\)). Of course, the mayor's offices in Chicago and Los Angeles are free to counter with a different set of weights that cause their city to be ranked best. These alternative weights would be chosen to play to each city's strengths, and minimize its problem areas.

Notice how the vector of weights, \(\vect{w}\text{,}\) is naturally indexed by the four statistics used, while the matrix-vector product, \(T\vect{w}\text{,}\) is naturally indexed by the three cities. If a spreadsheet were used to make these computations, a row of weights would be entered somewhere near the table of data and the formulas in the spreadsheet would effect a matrix-vector product. This example is meant to illustrate how “linear” computations (addition, multiplication) can be organized as a matrix-vector product.
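In Sage, the whole ranking computation is a single matrix-vector product. Here is a minimal sketch, assuming the data and weights above are entered exactly as given; the fractional weights keep the arithmetic exact.

T = matrix(QQ, [[77, 28, 93, 254],
                [84, 38, 85, 363],
                [84, 99,  1, 193]])
w = vector(QQ, [23/100, 46/100, -5/100, -20/100])
T*w    # the rationals equal to (-24.86, -40.05, 26.21)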

Another example would be the matrix of numerical scores on examinations and exercises for students in a class. The rows would be indexed by students and the columns would be indexed by exams and assignments. The instructor could then assign weights to the different exams and assignments, and via a matrix-vector product, compute a single score for each student.

Later (much later) we will need the following theorem, which is really a technical lemma (see Proof Technique LC). Since we are in a position to prove it now, we will. But you can safely skip it for the moment, if you promise to come back later to study the proof when the theorem is employed. At that point you will also be able to understand the comments in the paragraph following the proof.

We are assuming \(A\vect{x}=B\vect{x}\) for all \(\vect{x}\in\complex{n}\text{,}\) so we can employ this equality for any choice of the vector \(\vect{x}\text{.}\) However, we will limit our use of this equality to the standard unit vectors, \(\vect{e}_j\text{,}\) \(1\leq j\leq n\) (Definition SUV). For all \(1\leq j\leq n\text{,}\) \(1\leq i\leq m\text{,}\)

\begin{align*} &\matrixentry{A}{ij}\\ &= 0\matrixentry{A}{i1}+ \cdots+ 0\matrixentry{A}{i,j-1}+ 1\matrixentry{A}{ij}+ 0\matrixentry{A}{i,j+1}+ \cdots+ 0\matrixentry{A}{in}&& \knowl{./knowl/theorem-PCNA.html}{\text{Theorem PCNA}}\\ &= \vectorentry{\vect{e}_j}{1}\matrixentry{A}{i1}+ \vectorentry{\vect{e}_j}{2}\matrixentry{A}{i2}+ \vectorentry{\vect{e}_j}{3}\matrixentry{A}{i3}+ \cdots+ \vectorentry{\vect{e}_j}{n}\matrixentry{A}{in}&& \knowl{./knowl/property-CMCN.html}{\text{Property CMCN}}\\ &= \matrixentry{A}{i1}\vectorentry{\vect{e}_j}{1}+ \matrixentry{A}{i2}\vectorentry{\vect{e}_j}{2}+ \matrixentry{A}{i3}\vectorentry{\vect{e}_j}{3}+ \cdots+ \matrixentry{A}{in}\vectorentry{\vect{e}_j}{n}&& \knowl{./knowl/definition-SUV.html}{\text{Definition SUV}}\\ &= \vectorentry{A\vect{e}_j}{i}&& \knowl{./knowl/definition-MVP.html}{\text{Definition MVP}}\\ &= \vectorentry{B\vect{e}_j}{i}&&\text{Hypothesis}\\ &= \matrixentry{B}{i1}\vectorentry{\vect{e}_j}{1}+ \matrixentry{B}{i2}\vectorentry{\vect{e}_j}{2}+ \matrixentry{B}{i3}\vectorentry{\vect{e}_j}{3}+ \cdots+ \matrixentry{B}{in}\vectorentry{\vect{e}_j}{n}&& \knowl{./knowl/definition-MVP.html}{\text{Definition MVP}}\\ &= \vectorentry{\vect{e}_j}{1}\matrixentry{B}{i1}+ \vectorentry{\vect{e}_j}{2}\matrixentry{B}{i2}+ \vectorentry{\vect{e}_j}{3}\matrixentry{B}{i3}+ \cdots+ \vectorentry{\vect{e}_j}{n}\matrixentry{B}{in}&& \knowl{./knowl/property-CMCN.html}{\text{Property CMCN}}\\ &= 0\matrixentry{B}{i1}+ \cdots+ 0\matrixentry{B}{i,j-1}+ 1\matrixentry{B}{ij}+ 0\matrixentry{B}{i,j+1}+ \cdots+ 0\matrixentry{B}{in}&& \knowl{./knowl/definition-SUV.html}{\text{Definition SUV}}\\ &=\matrixentry{B}{ij}&& \knowl{./knowl/theorem-PCNA.html}{\text{Theorem PCNA}}\text{.} \end{align*}

So by Definition ME the matrices \(A\) and \(B\) are equal, as desired.

You might notice from studying the proof that the hypotheses of this theorem could be “weakened” (i.e. made less restrictive). We need only suppose the equality of the matrix-vector products for just the standard unit vectors (Definition SUV) or any other spanning set (Definition SSVS) of \(\complex{n}\) (Exercise LISS.T40). However, in practice, when we apply this theorem the stronger hypothesis will be in effect so this version of the theorem will suffice for our purposes. (If we changed the statement of the theorem to have the less restrictive hypothesis, then we would call the theorem “stronger.”)

A matrix-vector product is very natural in Sage, and we can check the result against a linear combination of the columns.
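For instance, here is a sketch reusing the matrix and vector of Example MTV; the final comparison is against the linear combination of the columns given in Definition MVP.

A = matrix(QQ, [[1, 4, 2, 3, 4],
                [-3, 2, 0, 1, -2],
                [1, 6, -3, -1, 5]])
u = vector(QQ, [2, 1, -2, 3, -1])
A*u                          # (7, 1, 6)
cols = A.columns()
lincombo = u[0]*cols[0] + u[1]*cols[1] + u[2]*cols[2] + u[3]*cols[3] + u[4]*cols[4]
A*u == lincombo              # True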

Notice that when a matrix is square, a vector of the correct size can be used in Sage in a product with a matrix by placing the vector on either side of the matrix. However, the two results are not the same, and we will not have occasion to place the vector on the left any time soon. So, despite the possibility, be certain to keep your vectors on the right side of a matrix in a product.

Since a matrix-vector product forms a linear combination of columns of a matrix, it is now very easy to check if a vector is a solution to a system of equations. This is basically the substance of Theorem SLEMM. Here we construct a system of equations and construct two solutions and one non-solution by applying Theorem PSPHS. Then we use a matrix-vector product to verify that the vectors are, or are not, solutions.
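One way such a computation might look is sketched below, reusing the system of Example MNSLE; the particular solution, the null space vector, and the non-solution are choices made here for illustration, not the vectors of the original cell.

A = matrix(QQ, [[2, 4, -3, 5, 1],
                [3, 1, 0, 1, -3],
                [-2, 7, -5, 2, 2]])
b = vector(QQ, [9, 0, -3])
x1 = A.solve_right(b)              # one particular solution
z = A.right_kernel().basis()[0]    # a vector from the null space of A
x2 = x1 + z                        # a second solution, by Theorem PSPHS
A*x1 == b, A*x2 == b               # (True, True)
x3 = x1 + vector(QQ, [1, 0, 0, 0, 0])
A*x3 == b                          # False, since (1, 0, 0, 0, 0) is not in the null space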

We can now explain the difference between “left” and “right” variants of various Sage commands for linear algebra. Generally, the direction refers to where the vector is placed in a matrix-vector product. We place a vector on the right and understand this to mean a linear combination of the columns of the matrix. Placing a vector to the left of a matrix can be understood, in a manner totally consistent with our upcoming definition of matrix multiplication, as a linear combination of the rows of the matrix.

So the difference between A.solve_right(v) and A.solve_left(v) is that the former asks for a vector x such that A*x == v, while the latter asks for a vector x such that x*A == v. Given Sage's preference for rows, a direction-neutral version of a command, if it exists, will be the “left” version. For example, there is a .right_kernel() matrix method, while the .left_kernel() and .kernel() methods are identical — the names are synonyms for the exact same routine.
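As a small sketch with a hypothetical \(2\times 2\) system (any nonsingular matrix will do):

A = matrix(QQ, [[1, 2], [3, 4]])
v = vector(QQ, [1, 1])
x = A.solve_right(v)
A*x == v                         # True: x solves A*x == v
y = A.solve_left(v)
y*A == v                         # True: y solves x*A == v
A.kernel() == A.left_kernel()    # True: the two names are synonyms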

So when you see a Sage command that comes in “left” and “right” variants figure out just what part of the defined object involves a matrix-vector product and form an interpretation from that.

Subsection MM Matrix Multiplication

We now define how to multiply two matrices together. Stop for a minute and think about how you might define this new operation.

Many books would present this definition much earlier in the course. However, we have taken great care to delay it as long as possible and to present as many ideas as practical based mostly on the notion of linear combinations. Towards the conclusion of the course, or when you perhaps take a second course in linear algebra, you may be in a position to appreciate the reasons for this. For now, understand that matrix multiplication is a central definition and perhaps you will appreciate its importance more by having saved it for later.

Definition MM. Matrix Multiplication.

Suppose \(A\) is an \(m\times n\) matrix and \(\vectorlist{B}{p}\) are the columns of an \(n\times p\) matrix \(B\text{.}\) Then the matrix product of \(A\) with \(B\) is the \(m\times p\) matrix where column \(i\) is the matrix-vector product \(A\vect{B}_i\text{.}\) Symbolically

\begin{equation*} AB=A\matrixcolumns{B}{p}=\left[A\vect{B}_1|A\vect{B}_2|A\vect{B}_3|\ldots|A\vect{B}_p\right]\text{.} \end{equation*}

Set

\begin{align*} A= \begin{bmatrix} 1 & 2 & -1 & 4 & 6\\ 0 & -4 & 1 & 2 & 3\\ -5 & 1 & 2 & -3 & 4 \end{bmatrix} && B= \begin{bmatrix} 1 & 6 & 2 & 1\\ -1 & 4 & 3 & 2\\ 1 & 1 & 2 & 3\\ 6 & 4 & -1 & 2\\ 1 & -2 & 3 & 0 \end{bmatrix} &\text{.} \end{align*}

Then

\begin{equation*} AB= \left[ A\colvector{1\\-1\\1\\6\\1} \left\lvert A\colvector{6\\4\\1\\4\\-2}\right. \left\lvert A\colvector{2\\3\\2\\-1\\3}\right. \left\lvert A\colvector{1\\2\\3\\2\\0}\right. \right] = \begin{bmatrix} 28 & 17 & 20 & 10\\ 20 & -13 & -3 & -1\\ -18 & -44 & 12 & -3 \end{bmatrix}\text{.} \end{equation*}

Is this the definition of matrix multiplication you expected? Perhaps our previous operations for matrices caused you to think that we might multiply two matrices of the same size, entry-by-entry? Notice that our current definition uses matrices of different sizes (though the number of columns in the first must equal the number of rows in the second), and the result is of a third size. Notice too in the previous example that we cannot even consider the product \(BA\text{,}\) since the sizes of the two matrices in this order are not right.

But it gets weirder than that. Many of your old ideas about “multiplication” will not apply to matrix multiplication, but some still will. So make no assumptions, and do not do anything until you have a theorem that says you can. Even if the sizes are right, matrix multiplication is not commutative — order matters.

Set

\begin{align*} A= \begin{bmatrix} 1 & 3\\ -1 & 2 \end{bmatrix} && B= \begin{bmatrix} 4&0\\ 5&1 \end{bmatrix}\text{.} \end{align*}

Then we have two square, \(2\times 2\) matrices, so Definition MM allows us to multiply them in either order. We find

\begin{align*} AB= \begin{bmatrix} 19 & 3\\ 6 & 2 \end{bmatrix} && BA= \begin{bmatrix} 4 & 12\\ 4 & 17 \end{bmatrix} \end{align*}

so \(AB\neq BA\text{.}\) Not even close. It should not be hard for you to construct other pairs of matrices that do not commute (try a couple of \(3\times 3\)'s). Can you find a pair of non-identical matrices that do commute?

Subsection MMEE Matrix Multiplication, Entry-by-Entry

While certain “natural” properties of multiplication do not hold, many more do. In the next subsection, we will state and prove the relevant theorems. But first, we need a theorem that provides an alternate means of multiplying two matrices. In many texts, this would be given as the definition of matrix multiplication. We prefer to turn it around and have the following formula as a consequence of our definition. It will prove useful for proofs of matrix equality, where we need to examine products of matrices, entry-by-entry.

Let the vectors \(\vectorlist{A}{n}\) denote the columns of \(A\) and let the vectors \(\vectorlist{B}{p}\) denote the columns of \(B\text{.}\) Then for \(1\leq i\leq m\text{,}\) \(1\leq j\leq p\text{,}\)

\begin{align*} \matrixentry{AB}{ij} &=\vectorentry{A\vect{B}_j}{i}&& \knowl{./knowl/definition-MM.html}{\text{Definition MM}}\\ &=\vectorentry{ \vectorentry{\vect{B}_j}{1}\vect{A}_1+ \vectorentry{\vect{B}_j}{2}\vect{A}_2+ \cdots+ \vectorentry{\vect{B}_j}{n}\vect{A}_n }{i}&& \knowl{./knowl/definition-MVP.html}{\text{Definition MVP}}\\ &= \vectorentry{\vectorentry{\vect{B}_j}{1}\vect{A}_1}{i}+ \vectorentry{\vectorentry{\vect{B}_j}{2}\vect{A}_2}{i}+ \cdots+ \vectorentry{\vectorentry{\vect{B}_j}{n}\vect{A}_n}{i}&& \knowl{./knowl/definition-CVA.html}{\text{Definition CVA}}\\ &= \vectorentry{\vect{B}_j}{1}\vectorentry{\vect{A}_1}{i}+ \vectorentry{\vect{B}_j}{2}\vectorentry{\vect{A}_2}{i}+ \cdots+ \vectorentry{\vect{B}_j}{n}\vectorentry{\vect{A}_n}{i}&& \knowl{./knowl/definition-CVSM.html}{\text{Definition CVSM}}\\ &= \matrixentry{B}{1j}\matrixentry{A}{i1}+ \matrixentry{B}{2j}\matrixentry{A}{i2}+ \cdots+ \matrixentry{B}{nj}\matrixentry{A}{in}&& \knowl{./knowl/definition-M.html}{\text{Definition M}}\\ &= \matrixentry{A}{i1}\matrixentry{B}{1j}+ \matrixentry{A}{i2}\matrixentry{B}{2j}+ \cdots+ \matrixentry{A}{in}\matrixentry{B}{nj}&& \knowl{./knowl/property-CMCN.html}{\text{Property CMCN}}\\ &=\sum_{k=1}^{n}\matrixentry{A}{ik}\matrixentry{B}{kj}&& \end{align*}

Consider again the two matrices from Example PTM.

\begin{align*} A= \begin{bmatrix} 1 & 2 & -1 & 4 & 6\\ 0 & -4 & 1 & 2 & 3\\ -5 & 1 & 2 & -3 & 4 \end{bmatrix} && B= \begin{bmatrix} 1 & 6 & 2 & 1\\ -1 & 4 & 3 & 2\\ 1 & 1 & 2 & 3\\ 6 & 4 & -1 & 2\\ 1 & -2 & 3 & 0 \end{bmatrix} & \end{align*}

Then suppose we just wanted the entry of \(AB\) in the second row, third column.

\begin{align*} \matrixentry{AB}{23}=& \matrixentry{A}{21}\matrixentry{B}{13}+ \matrixentry{A}{22}\matrixentry{B}{23}+ \matrixentry{A}{23}\matrixentry{B}{33}+ \matrixentry{A}{24}\matrixentry{B}{43}+ \matrixentry{A}{25}\matrixentry{B}{53}\\ =&(0)(2)+(-4)(3)+(1)(2)+(2)(-1)+(3)(3)=-3 \end{align*}

Notice how there are 5 terms in the sum, since 5 is the common dimension of the two matrices (column count for \(A\text{,}\) row count for \(B\)). In the conclusion of Theorem EMP, it would be the index \(k\) that would run from 1 to 5 in this computation. Here is a bit more practice.

The entry in the third row, first column.

\begin{align*} \matrixentry{AB}{31}=& \matrixentry{A}{31}\matrixentry{B}{11}+ \matrixentry{A}{32}\matrixentry{B}{21}+ \matrixentry{A}{33}\matrixentry{B}{31}+ \matrixentry{A}{34}\matrixentry{B}{41}+ \matrixentry{A}{35}\matrixentry{B}{51}\\ =&(-5)(1)+(1)(-1)+(2)(1)+(-3)(6)+(4)(1)=-18 \end{align*}

To get some more practice on your own, complete the computation of the other 10 entries of this product. Construct some other pairs of matrices (of compatible sizes) and compute their product two ways. First use Definition MM. Since linear combinations are straightforward for you now, this should be easy to do and to do correctly. Then do it again, using Theorem EMP. Since this process may take some practice, use your first computation to check your work.

Theorem EMP is the way many people compute matrix products by hand. It will also be very useful for the theorems we are going to prove shortly. However, the definition (Definition MM) is frequently the most useful for its connections with deeper ideas like the null space and the upcoming column space.

Matrix multiplication is very natural in Sage, and is just as easy as multiplying two numbers. We illustrate Theorem EMP by using it to compute the entry in the first row and third column.
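A sketch of such a cell follows; the matrices here are a small pair chosen only for illustration, not the matrices of the original cell.

A = matrix(QQ, [[3, -1, 2],
                [0, 4, 1]])
B = matrix(QQ, [[1, 2, 0],
                [5, 1, -3],
                [2, 0, 4]])
A*B                                                 # the full 2 x 3 product
(A*B)[0, 2]                                         # first row, third column (Sage indexes from 0)
sum([A[0, k]*B[k, 2] for k in range(A.ncols())])    # the same entry, via Theorem EMP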

Note in the final statement, we could replace A.ncols() by B.nrows() since these two quantities must be identical. You can experiment with the last statement by editing it to compute any of the five other entries of the matrix product.

Square matrices can be multiplied in either order, but the result will almost always be different. Execute repeatedly the following products of two random \(4\times 4\) matrices, with a check on the equality of the two products in either order. It is possible, but highly unlikely, that the two products will be equal. So if this compute cell ever produces True it will be a minor miracle.
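A sketch of such an experiment:

A = random_matrix(QQ, 4, 4)
B = random_matrix(QQ, 4, 4)
A*B == B*A    # almost certainly False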

Subsection PMM Properties of Matrix Multiplication

In this subsection, we collect properties of matrix multiplication and its interaction with the zero matrix (Definition ZM), the identity matrix (Definition IM), matrix addition (Definition MA), scalar matrix multiplication (Definition MSM), the inner product (Definition IP), conjugation (Theorem MMCC), and the transpose (Definition TM). Whew! Here we go. These are great proofs to practice with, so try to concoct the proofs before reading them; they will get progressively more complicated as we go.

We will prove (1) and leave (2) to you. Entry-by-entry, for \(1\leq i\leq m\text{,}\) \(1\leq j\leq p\text{,}\)

\begin{align*} \matrixentry{A\zeromatrix_{n\times p}}{ij} &=\sum_{k=1}^{n}\matrixentry{A}{ik}\matrixentry{\zeromatrix_{n\times p}}{kj}&& \knowl{./knowl/theorem-EMP.html}{\text{Theorem EMP}}\\ &=\sum_{k=1}^{n}\matrixentry{A}{ik}0&& \knowl{./knowl/definition-ZM.html}{\text{Definition ZM}}\\ &=\sum_{k=1}^{n}0\\ &=0&& \knowl{./knowl/property-ZCN.html}{\text{Property ZCN}}\\ &=\matrixentry{\zeromatrix_{m\times p}}{ij}&& \knowl{./knowl/definition-ZM.html}{\text{Definition ZM}}\text{.} \end{align*}

So by the definition of matrix equality (Definition ME), the matrices \(A\zeromatrix_{n\times p}\) and \(\zeromatrix_{m\times p}\) are equal.

Again, we will prove (1) and leave (2) to you. Entry-by-entry, for \(1\leq i\leq m\text{,}\) \(1\leq j\leq n\text{,}\)

\begin{align*} \matrixentry{AI_n}{ij}=&\sum_{k=1}^{n}\matrixentry{A}{ik}\matrixentry{I_n}{kj}&& \knowl{./knowl/theorem-EMP.html}{\text{Theorem EMP}}\\ &=\matrixentry{A}{ij}\matrixentry{I_n}{jj}+\sum_{\substack{k=1\\k\neq j}}^{n}\matrixentry{A}{ik}\matrixentry{I_n}{kj}&& \knowl{./knowl/property-CACN.html}{\text{Property CACN}}\\ &=\matrixentry{A}{ij}(1)+\sum_{k=1, k\neq j}^{n}\matrixentry{A}{ik}(0)&& \knowl{./knowl/definition-IM.html}{\text{Definition IM}}\\ &=\matrixentry{A}{ij}+\sum_{k=1, k\neq j}^{n}0\\ &=\matrixentry{A}{ij} \end{align*}

So the matrices \(A\) and \(AI_n\) are equal, entry-by-entry, and by the definition of matrix equality (Definition ME) we can say they are equal matrices.

It is this theorem that gives the identity matrix its name. It is a matrix that behaves with matrix multiplication like the scalar \(1\) does with scalar multiplication. To multiply by the identity matrix is to have no effect on the other matrix.
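A quick Sage check of this behavior, sketched with the matrix \(A\) of Example PTM:

A = matrix(QQ, [[1, 2, -1, 4, 6],
                [0, -4, 1, 2, 3],
                [-5, 1, 2, -3, 4]])
A*identity_matrix(5) == A    # True
identity_matrix(3)*A == A    # True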

We will do (1), you do (2). Entry-by-entry, for \(1\leq i\leq m\text{,}\) \(1\leq j\leq p\text{,}\)

\begin{align*} \matrixentry{A(B+C)}{ij}&=\sum_{k=1}^{n}\matrixentry{A}{ik}\matrixentry{B+C}{kj}&& \knowl{./knowl/theorem-EMP.html}{\text{Theorem EMP}}\\ &=\sum_{k=1}^{n}\matrixentry{A}{ik}(\matrixentry{B}{kj}+\matrixentry{C}{kj})&& \knowl{./knowl/definition-MA.html}{\text{Definition MA}}\\ &=\sum_{k=1}^{n}\matrixentry{A}{ik}\matrixentry{B}{kj}+\matrixentry{A}{ik}\matrixentry{C}{kj}&&\knowl{./knowl/property-DCN.html}{\text{Property DCN}}\\ &=\sum_{k=1}^{n}\matrixentry{A}{ik}\matrixentry{B}{kj}+\sum_{k=1}^{n}\matrixentry{A}{ik}\matrixentry{C}{kj}&& \knowl{./knowl/property-CACN.html}{\text{Property CACN}}\\ &=\matrixentry{AB}{ij}+\matrixentry{AC}{ij}&& \knowl{./knowl/theorem-EMP.html}{\text{Theorem EMP}}\\ &=\matrixentry{AB+AC}{ij}&& \knowl{./knowl/definition-MA.html}{\text{Definition MA}} \end{align*}

So the matrices \(A(B+C)\) and \(AB+AC\) are equal, entry-by-entry, and by the definition of matrix equality (Definition ME) we can say they are equal matrices.

These are equalities of matrices. We will do the first one, the second is similar and will be good practice for you. For \(1\leq i\leq m\text{,}\) \(1\leq j\leq p\text{,}\)

\begin{align*} \matrixentry{\alpha(AB)}{ij}&=\alpha\matrixentry{AB}{ij}&& \knowl{./knowl/definition-MSM.html}{\text{Definition MSM}}\\ &=\alpha\sum_{k=1}^{n}\matrixentry{A}{ik}\matrixentry{B}{kj}&& \knowl{./knowl/theorem-EMP.html}{\text{Theorem EMP}}\\ &=\sum_{k=1}^{n}\alpha\matrixentry{A}{ik}\matrixentry{B}{kj}&& \knowl{./knowl/property-DCN.html}{\text{Property DCN}}\\ &=\sum_{k=1}^{n}\matrixentry{\alpha A}{ik}\matrixentry{B}{kj}&& \knowl{./knowl/definition-MSM.html}{\text{Definition MSM}}\\ &=\matrixentry{(\alpha A)B}{ij}&& \knowl{./knowl/theorem-EMP.html}{\text{Theorem EMP}}\text{.} \end{align*}

So the matrices \(\alpha(AB)\) and \((\alpha A)B\) are equal, entry-by-entry, and by the definition of matrix equality (Definition ME) we can say they are equal matrices.

If you want to test your facility creating proofs about matrix multiplication using Theorem EMP, the next proof is a good one to attempt yourself. So give it a try before reading the proof.

A matrix equality, so we will go entry-by-entry, no surprise there. For \(1\leq i\leq m\text{,}\) \(1\leq j\leq s\text{,}\)

\begin{align*} \matrixentry{A(BD)}{ij}&=\sum_{k=1}^{n}\matrixentry{A}{ik}\matrixentry{BD}{kj}&& \knowl{./knowl/theorem-EMP.html}{\text{Theorem EMP}}\\ &=\sum_{k=1}^{n}\matrixentry{A}{ik}\left(\sum_{\ell=1}^{p}\matrixentry{B}{k\ell}\matrixentry{D}{\ell j}\right)&& \knowl{./knowl/theorem-EMP.html}{\text{Theorem EMP}}\\ &=\sum_{k=1}^{n}\sum_{\ell=1}^{p}\matrixentry{A}{ik}\matrixentry{B}{k\ell}\matrixentry{D}{\ell j}&& \knowl{./knowl/property-DCN.html}{\text{Property DCN}}\\ \end{align*}

We can switch the order of the summation since these are finite sums,

\begin{align*} &=\sum_{\ell=1}^{p}\sum_{k=1}^{n}\matrixentry{A}{ik}\matrixentry{B}{k\ell}\matrixentry{D}{\ell j}&& \knowl{./knowl/property-CACN.html}{\text{Property CACN}}\\ \end{align*}

As \(\matrixentry{D}{\ell j}\) does not depend on the index \(k\text{,}\) we can use distributivity to move it outside of the inner sum,

\begin{align*} &=\sum_{\ell=1}^{p}\matrixentry{D}{\ell j}\left(\sum_{k=1}^{n}\matrixentry{A}{ik}\matrixentry{B}{k\ell}\right)&& \knowl{./knowl/property-DCN.html}{\text{Property DCN}}\\ &=\sum_{\ell=1}^{p}\matrixentry{D}{\ell j}\matrixentry{AB}{i\ell}&& \knowl{./knowl/theorem-EMP.html}{\text{Theorem EMP}}\\ &=\sum_{\ell=1}^{p}\matrixentry{AB}{i\ell}\matrixentry{D}{\ell j}&& \knowl{./knowl/property-CMCN.html}{\text{Property CMCN}}\\ &=\matrixentry{(AB)D}{ij}&& \knowl{./knowl/theorem-EMP.html}{\text{Theorem EMP}}\text{.} \end{align*}

So the matrices \((AB)D\) and \(A(BD)\) are equal, entry-by-entry, and by the definition of matrix equality (Definition ME) we can say they are equal matrices.

Since Theorem MMA says matrix multiplication is associative, it means we do not have to be careful about the order in which we perform matrix multiplication, nor how we parenthesize an expression with just several matrices multiplied together. So this is where we draw the line on explaining every last detail in a proof. We will frequently add, remove, or rearrange parentheses with no comment. Indeed, we only see about a dozen places where Theorem MMA is cited in a proof. You could try to count how many times we avoid making a reference to this theorem.

It could be obvious already that properties of upper triangular matrices will have analogues for lower triangular matrices. Rather than stating two very similar theorems each time, we will economize and say that matrices are “triangular of the same type” as a convenient shorthand to cover both possibilities and then typically give a proof for just one type. Like the proof of Theorem MMA, the proof of the next theorem is another good one to test your facility with Theorem EMP as a tool for constructing proofs about matrix multiplication. Just by considering sizes we can see that the product of two square matrices of the same size is again a square matrix of that size. For triangular matrices it is even better.

We prove this for lower triangular matrices and leave the proof for upper triangular matrices to you. Suppose that \(A\) and \(B\) are both lower triangular of size \(n\text{.}\) We need only establish that certain entries of the product \(AB\) are zero. Suppose that \(i\lt j\text{,}\) then

\begin{align*} \matrixentry{AB}{ij} &= \sum_{k=1}^{n}\matrixentry{A}{ik}\matrixentry{B}{kj}&& \knowl{./knowl/theorem-EMP.html}{\text{Theorem EMP}}\\ &= \sum_{k=1}^{j-1}\matrixentry{A}{ik}\matrixentry{B}{kj} + \sum_{k=j}^{n}\matrixentry{A}{ik}\matrixentry{B}{kj}&& \knowl{./knowl/property-AACN.html}{\text{Property AACN}}\\ &= \sum_{k=1}^{j-1}\matrixentry{A}{ik}0 + \sum_{k=j}^{n}\matrixentry{A}{ik}\matrixentry{B}{kj}&& k\lt j\text{, }\knowl{./knowl/definition-LTM.html}{\text{Definition LTM}}\\ &= \sum_{k=1}^{j-1}\matrixentry{A}{ik}0 + \sum_{k=j}^{n}0\matrixentry{B}{kj}&& i\lt j\leq k\text{, }\knowl{./knowl/definition-LTM.html}{\text{Definition LTM}}\\ &= \sum_{k=1}^{j-1}0+\sum_{k=j}^{n}0\\ &= 0\text{.} \end{align*}

Since \(\matrixentry{AB}{ij}=0\) whenever \(i\lt j\text{,}\) by Definition LTM, \(AB\) is lower triangular.
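A quick check in Sage, sketched with two small lower triangular matrices chosen for illustration:

A = matrix(QQ, [[2, 0, 0],
                [1, 3, 0],
                [-1, 4, 5]])
B = matrix(QQ, [[1, 0, 0],
                [2, -2, 0],
                [0, 1, 3]])
C = A*B
C                                                               # again lower triangular
all(C[i, j] == 0 for i in range(3) for j in range(i + 1, 3))    # True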

The statement of our next theorem is technically inaccurate. If we upgrade the vectors \(\vect{u},\,\vect{v}\) to matrices with a single column, then the expression \(\transpose{\conjugate{\vect{u}}}\vect{v}\) is a \(1\times 1\) matrix, though we will treat this small matrix as if it was simply the scalar quantity in its lone entry. When we apply Theorem MMIP there should not be any confusion. Notice that if we treat a column vector as a matrix with a single column, then we can also construct the adjoint of a vector, though we will not make this a common practice.

\begin{align*} \innerproduct{\vect{u}}{\vect{v}}&=\sum_{k=1}^{m}\conjugate{\vectorentry{\vect{u}}{k}}\vectorentry{\vect{v}}{k}&& \knowl{./knowl/definition-IP.html}{\text{Definition IP}}\\ &=\sum_{k=1}^{m}\conjugate{\matrixentry{\vect{u}}{k1}}\matrixentry{\vect{v}}{k1}&& \text{Column vectors as matrices}\\ &=\sum_{k=1}^{m}\matrixentry{\conjugate{\vect{u}}}{k1}\matrixentry{\vect{v}}{k1}&& \knowl{./knowl/definition-CCM.html}{\text{Definition CCM}}\\ &=\sum_{k=1}^{m}\matrixentry{\transpose{\conjugate{\vect{u}}}}{1k}\matrixentry{\vect{v}}{k1}&& \knowl{./knowl/definition-TM.html}{\text{Definition TM}}\\ &=\matrixentry{\transpose{\conjugate{\vect{u}}}\vect{v}}{11}&& \knowl{./knowl/theorem-EMP.html}{\text{Theorem EMP}} \end{align*}

To finish we just blur the distinction between a \(1\times 1\) matrix (\(\transpose{\conjugate{\vect{u}}}\vect{v}\)) and its lone entry.

To obtain this matrix equality, we will work entry-by-entry. For \(1\leq i\leq m\text{,}\) \(1\leq j\leq p\text{,}\)

\begin{align*} \matrixentry{\conjugate{AB}}{ij}&=\conjugate{\matrixentry{AB}{ij}}&& \knowl{./knowl/definition-CCM.html}{\text{Definition CCM}}\\ &=\conjugate{\sum_{k=1}^{n}\matrixentry{A}{ik}\matrixentry{B}{kj}}&& \knowl{./knowl/theorem-EMP.html}{\text{Theorem EMP}}\\ &=\sum_{k=1}^{n}\conjugate{\matrixentry{A}{ik}\matrixentry{B}{kj}}&& \knowl{./knowl/theorem-CCRA.html}{\text{Theorem CCRA}}\\ &=\sum_{k=1}^{n}\conjugate{\matrixentry{A}{ik}}\,\,\conjugate{\matrixentry{B}{kj}}&& \knowl{./knowl/theorem-CCRM.html}{\text{Theorem CCRM}}\\ &=\sum_{k=1}^{n}\matrixentry{\conjugate{A}}{ik}\matrixentry{\conjugate{B}}{kj}&& \knowl{./knowl/definition-CCM.html}{\text{Definition CCM}}\\ &=\matrixentry{\conjugate{A}\,\conjugate{B}}{ij}&& \knowl{./knowl/theorem-EMP.html}{\text{Theorem EMP}}\text{.} \end{align*}

So the matrices \(\conjugate{AB}\) and \(\conjugate{A}\,\conjugate{B}\) are equal, entry-by-entry, and by the definition of matrix equality (Definition ME) we can say they are equal matrices.

Another theorem in this style, and it is a good one. If you have been practicing with the previous proofs you should be able to do this one yourself.

This theorem may be surprising but if we check the sizes of the matrices involved, then maybe it will not seem so far-fetched. First, \(AB\) has size \(m\times p\text{,}\) so its transpose has size \(p\times m\text{.}\) The product of \(\transpose{B}\) with \(\transpose{A}\) is a \(p\times n\) matrix times an \(n\times m\) matrix, also resulting in a \(p\times m\) matrix. So at least our objects are compatible for equality (and would not be, in general, if we did not reverse the order of the matrix multiplication).

Here we go again, entry-by-entry. For \(1\leq i\leq m\text{,}\) \(1\leq j\leq p\text{,}\)

\begin{align*} \matrixentry{\transpose{(AB)}}{ji}=&\matrixentry{AB}{ij}&& \knowl{./knowl/definition-TM.html}{\text{Definition TM}}\\ &=\sum_{k=1}^{n}\matrixentry{A}{ik}\matrixentry{B}{kj}&& \knowl{./knowl/theorem-EMP.html}{\text{Theorem EMP}}\\ &=\sum_{k=1}^{n}\matrixentry{B}{kj}\matrixentry{A}{ik}&& \knowl{./knowl/property-CMCN.html}{\text{Property CMCN}}\\ &=\sum_{k=1}^{n}\matrixentry{\transpose{B}}{jk}\matrixentry{\transpose{A}}{ki}&& \knowl{./knowl/definition-TM.html}{\text{Definition TM}}\\ &=\matrixentry{\transpose{B}\transpose{A}}{ji}&& \knowl{./knowl/theorem-EMP.html}{\text{Theorem EMP}}\text{.} \end{align*}

So the matrices \(\transpose{(AB)}\) and \(\transpose{B}\transpose{A}\) are equal, entry-by-entry, and by the definition of matrix equality (Definition ME) we can say they are equal matrices.

This theorem seems odd at first glance, since we have to switch the order of \(A\) and \(B\text{.}\) But if we simply consider the sizes of the matrices involved, we can see that the switch is necessary for this reason alone. That the individual entries of the products then come along to be equal is a bonus.

As the adjoint of a matrix is a composition of a conjugate and a transpose, its interaction with matrix multiplication is similar to that of a transpose. Here is the last of our long list of basic properties of matrix multiplication.

\begin{align*} \adjoint{(AB)}&=\transpose{\left(\conjugate{AB}\right)}&& \knowl{./knowl/definition-A.html}{\text{Definition A}}\\ &=\transpose{\left(\conjugate{A}\,\conjugate{B}\right)}&& \knowl{./knowl/theorem-MMCC.html}{\text{Theorem MMCC}}\\ &=\transpose{\left(\conjugate{B}\right)}\transpose{\left(\conjugate{A}\right)}&& \knowl{./knowl/theorem-MMT.html}{\text{Theorem MMT}}\\ &=\adjoint{B}\adjoint{A}&& \knowl{./knowl/definition-A.html}{\text{Definition A}} \end{align*}

Notice how none of these proofs above relied on writing out huge general matrices with lots of ellipses (“…”) and trying to formulate the equalities a whole matrix at a time. This messy business is a “proof technique” to be avoided at all costs. Notice too how the proof of Theorem MMAD does not use an entry-by-entry approach, but simply builds on previous results about matrix multiplication's interaction with conjugation and transposes.

These theorems, along with Theorem VSPM and the other results in Section MO, give you the “rules” for how matrices interact with the various operations we have defined on matrices (addition, scalar multiplication, matrix multiplication, conjugation, transposes and adjoints). Use them and use them often. But do not try to do anything with a matrix that you do not have a rule for. Together, we would informally call all these operations, and the attendant theorems, “the algebra of matrices.” Notice, too, that every column vector is just an \(n\times 1\) matrix, so these theorems apply to column vectors also. Finally, these results, taken as a whole, may make us feel that the definition of matrix multiplication is not so unnatural.

As before, we can use Sage to demonstrate theorems. With randomly-generated matrices, these verifications might be even more believable. Some of the above results should feel fairly routine, but some are perhaps contrary to intuition. For example, Theorem MMT might at first glance seem surprising due to the requirement that the order of the product is reversed. Here is how we would investigate this theorem in Sage. The following compute cell should always return True. Repeated experimental evidence does not make a proof, but certainly gives us confidence.
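One such investigation might be sketched as follows, with random matrices of compatible sizes.

A = random_matrix(QQ, 3, 5)
B = random_matrix(QQ, 5, 4)
(A*B).transpose() == B.transpose()*A.transpose()    # True, by Theorem MMT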

By now, you can probably guess the matrix method for checking if a matrix is Hermitian.
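As you might guess, it is .is_hermitian(). A small sketch, with an explicit Hermitian matrix over QQbar:

H = matrix(QQbar, [[2, 1 + I], [1 - I, 3]])
H.is_hermitian()                # True
H == H.conjugate_transpose()    # the same check, directly from Definition HM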

We can illustrate the most fundamental property of a Hermitian matrix. The vectors x and y below are random, but according to Theorem HMIP the final command should produce True for any possible values of these two vectors. (You would be right to think that using random vectors over QQbar would be a better idea, but at this writing, these vectors are not as “random” as one would like, and are insufficient to perform an accurate test here.)
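A sketch of one such check appears below, using an explicit Hermitian matrix and explicit (rather than random) vectors over QQbar so that the comparison is exact.

H = matrix(QQbar, [[3, 2 + I, 0],
                   [2 - I, 1, I],
                   [0, -I, 2]])
H.is_hermitian()                                                      # True
x = vector(QQbar, [1, 2*I, -1])
y = vector(QQbar, [I, 1, 1 + I])
(H*x).hermitian_inner_product(y) == x.hermitian_inner_product(H*y)    # True, by Theorem HMIP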

Subsection HM Hermitian Matrices

The adjoint of a matrix has a basic property when employed in a matrix-vector product as part of an inner product. At this point, you could even use the following result as a motivation for the definition of an adjoint.

\begin{align*} \innerproduct{A\vect{x}}{\vect{y}}&=\transpose{\left(\conjugate{A\vect{x}}\right)}\vect{y}&& \knowl{./knowl/theorem-MMIP.html}{\text{Theorem MMIP}}\\ &=\transpose{\left(\conjugate{A}\,\conjugate{\vect{x}}\right)}\vect{y}&& \knowl{./knowl/theorem-MMCC.html}{\text{Theorem MMCC}}\\ &=\transpose{\conjugate{\vect{x}}}\transpose{\conjugate{A}}\vect{y}&& \knowl{./knowl/theorem-MMT.html}{\text{Theorem MMT}}\\ &=\transpose{\conjugate{\vect{x}}}\left(\adjoint{A}\vect{y}\right)&& \knowl{./knowl/definition-A.html}{\text{Definition A}}\\ &=\innerproduct{\vect{x}}{\adjoint{A}\vect{y}}&& \knowl{./knowl/theorem-MMIP.html}{\text{Theorem MMIP}} \end{align*}

Sometimes a matrix is equal to its adjoint (Definition A), and these matrices have interesting properties. One of the most common situations where this occurs is when a matrix has only real number entries. Then we are simply talking about symmetric matrices (Definition SYM), so you can view this as a generalization of a symmetric matrix.

Definition HM. Hermitian Matrix.

The square matrix \(A\) is Hermitian (or self-adjoint) if \(A=\adjoint{A}\text{.}\)

Again, the set of real matrices that are Hermitian is exactly the set of symmetric matrices. In Section PEE we will uncover some amazing properties of Hermitian matrices, so when you get there, run back here to remind yourself of this definition. Further properties will also appear in Section OD. Right now we prove a fundamental result about Hermitian matrices, matrix-vector products and inner products. As a characterization, this could be employed as a definition of a Hermitian matrix and some authors take this approach.

(⇒) 

This is the “easy half” of the proof, and makes the rationale for a definition of Hermitian matrices most obvious. Assume \(A\) is Hermitian, then

\begin{align*} \innerproduct{A\vect{x}}{\vect{y}} &=\innerproduct{\vect{x}}{\adjoint{A}\vect{y}}&& \knowl{./knowl/theorem-AIP.html}{\text{Theorem AIP}}\\ &=\innerproduct{\vect{x}}{A\vect{y}}&& \knowl{./knowl/definition-HM.html}{\text{Definition HM}}\text{.} \end{align*}
(⇐) 

This “half” will take a bit more work. Assume that \(\innerproduct{A\vect{x}}{\vect{y}}=\innerproduct{\vect{x}}{A\vect{y}}\) for all \(\vect{x},\,\vect{y}\in\complex{n}\text{.}\) We show that \(A=\adjoint{A}\) by establishing that \(A\vect{x}=\adjoint{A}\vect{x}\) for all \(\vect{x}\text{,}\) so we can then apply Theorem EMMVP. With only this much motivation, consider the inner product for any \(\vect{x}\in\complex{n}\text{.}\)

\begin{align*} \innerproduct{A\vect{x}-\adjoint{A}\vect{x}}{A\vect{x}-\adjoint{A}\vect{x}} &=\innerproduct{A\vect{x}-\adjoint{A}\vect{x}}{A\vect{x}}- \innerproduct{A\vect{x}-\adjoint{A}\vect{x}}{\adjoint{A}\vect{x}}&& \knowl{./knowl/theorem-IPVA.html}{\text{Theorem IPVA}}\\ &=\innerproduct{A\vect{x}-\adjoint{A}\vect{x}}{A\vect{x}}- \innerproduct{A\left(A\vect{x}-\adjoint{A}\vect{x}\right)}{\vect{x}}&& \knowl{./knowl/theorem-AIP.html}{\text{Theorem AIP}}\\ &= \innerproduct{A\vect{x}-\adjoint{A}\vect{x}}{A\vect{x}}- \innerproduct{A\vect{x}-\adjoint{A}\vect{x}}{A\vect{x}}&& \text{Hypothesis}\\ &=0&& \knowl{./knowl/property-AICN.html}{\text{Property AICN}} \end{align*}

Because this first inner product equals zero, and has the same vector in each argument (\(A\vect{x}-\adjoint{A}\vect{x}\)), Theorem PIP gives the conclusion that \(A\vect{x}-\adjoint{A}\vect{x}=\zerovector\text{.}\) With \(A\vect{x}=\adjoint{A}\vect{x}\) for all \(\vect{x}\in\complex{n}\text{,}\) Theorem EMMVP says \(A=\adjoint{A}\text{,}\) which is the defining property of a Hermitian matrix (Definition HM).

So, informally, Hermitian matrices are those that can be tossed around from one side of an inner product to the other with reckless abandon. We will see later what this buys us.

Reading Questions MM Reading Questions

1.

Form the matrix-vector product of

\begin{align*} \begin{bmatrix} 2 & 3 & -1 & 0\\ 1 & -2 & 7 & 3\\ 1 & 5 & 3 & 2\\ \end{bmatrix} &&\text{with}&& \colvector{2\\-3\\0\\5}\text{.} \end{align*}
2.

Multiply together the two matrices below (in the order given).

\begin{align*} \begin{bmatrix} 2 & 3 & -1 & 0\\ 1 & -2 & 7 & 3\\ 1 & 5 & 3 & 2\\ \end{bmatrix} && \begin{bmatrix} 2 & 6\\ -3 & -4\\ 0 & 2\\ 3 & -1\\ \end{bmatrix} \end{align*}
3.

Rewrite the system of linear equations below as a vector equality and using a matrix-vector product. (This question does not ask for a solution to the system. But it does ask you to express the system of equations in a new form using tools from this section.)

\begin{align*} 2x_1 + 3x_2 - x_3 &= 0\\ x_1 + 2x_2 + x_3 &= 3\\ x_1 + 3x_2 + 3x_3 &= 7 \end{align*}

Exercises MM Exercises

C20.

Compute the product of the two matrices below, \(AB\text{.}\) Do this using the definitions of the matrix-vector product (Definition MVP) and the definition of matrix multiplication (Definition MM).

\begin{align*} A= \begin{bmatrix} 2&5\\ -1&3\\ 2&-2 \end{bmatrix} && B=\begin{bmatrix} 1&5&-3&4\\ 2&0&2&-3 \end{bmatrix} \end{align*}
Solution

By Definition MM

\begin{align*} AB&= \left[ \left. \begin{bmatrix} 2&5\\ -1&3\\ 2&-2 \end{bmatrix} \colvector{1\\2} \right\vert\ \left. \begin{bmatrix} 2&5\\ -1&3\\ 2&-2 \end{bmatrix} \colvector{5\\0} \right\vert\ \left. \begin{bmatrix} 2&5\\ -1&3\\ 2&-2 \end{bmatrix} \colvector{-3\\2} \right\vert\ \left. \begin{bmatrix} 2&5\\ -1&3\\ 2&-2 \end{bmatrix} \colvector{4\\-3} \right. \right]\text{.} \end{align*}

Repeated applications of Definition MVP give

\begin{align*} &= \left[ \left.1\colvector{2\\-1\\2}+2\colvector{5\\3\\-2}\right\vert\ \left.5\colvector{2\\-1\\2}+0\colvector{5\\3\\-2}\right\vert\ \left.-3\colvector{2\\-1\\2}+2\colvector{5\\3\\-2}\right\vert\ \left.4\colvector{2\\-1\\2}+(-3)\colvector{5\\3\\-2}\right. \right]\\ &= \begin{bmatrix} 12 & 10 & 4 & -7\\ 5 & -5 & 9 & -13\\ -2 & 10 & -10 & 14 \end{bmatrix}\text{.} \end{align*}
C21.

Compute the product \(AB\) of the two matrices below using both the definition of the matrix-vector product (Definition MVP) and the definition of matrix multiplication (Definition MM).

\begin{align*} A &= \begin{bmatrix} 1 & 3 & 2 \\ -1 & 2 & 1 \\ 0 & 1 & 0 \end{bmatrix} & B &= \begin{bmatrix} 4 & 1 & 2\\ 1 & 0 & 1\\3 & 1 & 5 \end{bmatrix} \end{align*}
Solution

\(AB = \begin{bmatrix} 13 & 3 & 15 \\ 1 & 0 & 5\\1 & 0 & 1 \end{bmatrix}\text{.}\)

C22.

Compute the product \(AB\) of the two matrices below using both the definition of the matrix-vector product (Definition MVP) and the definition of matrix multiplication (Definition MM).

\begin{align*} A&= \begin{bmatrix} 1 & 0 \\ -2 & 1 \end{bmatrix} & B&= \begin{bmatrix} 2 & 3 \\ 4 & 6 \end{bmatrix} \end{align*}
Solution

\(AB = \begin{bmatrix} 2 & 3 \\ 0 & 0 \end{bmatrix}\text{.}\)

C23.

Compute the product \(AB\) of the two matrices below using both the definition of the matrix-vector product (Definition MVP) and the definition of matrix multiplication (Definition MM).

\begin{align*} A&= \begin{bmatrix} 3 & 1 \\ 2 & 4 \\ 6 & 5\\1 & 2 \end{bmatrix} & B&= \begin{bmatrix} -3 & 1 \\ 4 & 2 \end{bmatrix} \end{align*}
Solution

\(AB = \begin{bmatrix} -5&5\\ 10&10\\2&16\\5&5\end{bmatrix}\text{.}\)

C24.

Compute the product \(AB\) of the two matrices below.

\begin{align*} A&= \begin{bmatrix} 1 & 2 & 3 & -2 \\ 0 & 1 & -2 & -1\\ 1 & 1 & 3 & 1 \end{bmatrix} & B&= \begin{bmatrix} 3\\ 4 \\ 0 \\ 2 \end{bmatrix} \end{align*}
Solution

\(AB = \begin{bmatrix} 7\\2\\9\end{bmatrix}\text{.}\)

C25.

Compute the product \(AB\) of the two matrices below.

\begin{align*} A&= \begin{bmatrix} 1 & 2 & 3 & -2 \\ 0 & 1 & -2 & -1\\ 1 & 1 & 3 & 1 \end{bmatrix} & B&= \begin{bmatrix} -7\\ 3 \\ 1 \\ 1 \end{bmatrix} \end{align*}
Solution

\(AB = \begin{bmatrix} 0\\ 0 \\ 0 \end{bmatrix}\text{.}\)

C26.

Compute the product \(AB\) of the two matrices below using both the definition of the matrix-vector product (Definition MVP) and the definition of matrix multiplication (Definition MM).

\begin{align*} A&= \begin{bmatrix} 1 & 3 & 1\\ 0 & 1 & 0\\1 & 1 & 2 \end{bmatrix} & B&= \begin{bmatrix} 2 & -5 & -1 \\ 0 & 1 & 0\\-1 & 2 & 1 \end{bmatrix} \end{align*}
Solution

\(AB = \begin{bmatrix} 1 & 0 & 0\\ 0 & 1 & 0\\ 0 & 0 & 1 \end{bmatrix}\text{.}\)

C30.

For the matrix \(A\text{,}\) find \(A^2\text{,}\) \(A^3\text{,}\) \(A^4\text{.}\) Find a general formula for \(A^n\) for any positive integer \(n\text{.}\)

\begin{equation*} A = \begin{bmatrix} 1 & 2 \\ 0 & 1 \end{bmatrix} \end{equation*}
Solution
\begin{align*} A^2 &= \begin{bmatrix}1 & 4\\ 0 & 1\end{bmatrix} & A^3 &= \begin{bmatrix}1 & 6\\ 0 & 1\end{bmatrix} & A^4 &= \begin{bmatrix}1 & 8\\ 0 & 1\end{bmatrix} \end{align*}

From this pattern, we see that

\begin{equation*} A^n = \begin{bmatrix}1 & 2n\\ 0 & 1\end{bmatrix}\text{.} \end{equation*}
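The pattern can be confirmed with induction: if the formula holds for \(A^n\text{,}\) then one more multiplication gives

\begin{equation*} A^{n+1} = A^{n}A = \begin{bmatrix}1 & 2n\\ 0 & 1\end{bmatrix} \begin{bmatrix}1 & 2\\ 0 & 1\end{bmatrix} = \begin{bmatrix}1 & 2n+2\\ 0 & 1\end{bmatrix} = \begin{bmatrix}1 & 2(n+1)\\ 0 & 1\end{bmatrix}\text{.} \end{equation*}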
C31.

For the matrix \(A\text{,}\) find \(A^2\text{,}\) \(A^3\text{,}\) \(A^4\text{.}\) Find a general formula for \(A^n\) for any positive integer \(n\text{.}\)

\begin{equation*} A = \begin{bmatrix} 1 & -1 \\ 0 & 1 \end{bmatrix} \end{equation*}
Solution
\begin{align*} A^2 &= \begin{bmatrix}1 & -2\\ 0 & 1\end{bmatrix} & A^3 &= \begin{bmatrix}1 & -3\\ 0 & 1\end{bmatrix} & A^4 &= \begin{bmatrix}1 & -4\\ 0 & 1\end{bmatrix} \end{align*}

From this pattern, we see that

\begin{equation*} A^n = \begin{bmatrix}1 & -n\\ 0 & 1\end{bmatrix}\text{.} \end{equation*}
C32.

For the matrix \(A\text{,}\) find \(A^2\text{,}\) \(A^3\text{,}\) \(A^4\text{.}\) Find a general formula for \(A^n\) for any positive integer \(n\text{.}\)

\begin{equation*} A = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 3 \end{bmatrix} \end{equation*}
Solution
\begin{align*} A^{2} &= \begin{bmatrix} 1 & 0 & 0 \\ 0 & 4 & 0\\ 0 & 0 & 9\end{bmatrix} & A^{3} &= \begin{bmatrix} 1 & 0 & 0 \\ 0 & 8 & 0\\ 0 & 0 & 27\end{bmatrix} & A^{4} &= \begin{bmatrix} 1 & 0 & 0 \\ 0 & 16 & 0\\ 0 & 0 & 81\end{bmatrix} \end{align*}

The pattern emerges, and we see that

\begin{equation*} A^{n} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 2^n & 0\\ 0 & 0 & 3^n\end{bmatrix}\text{.} \end{equation*}
C33.

For the matrix \(A\text{,}\) find \(A^2\text{,}\) \(A^3\text{,}\) \(A^4\text{.}\) Find a general formula for \(A^n\) for any positive integer \(n\text{.}\)

\begin{equation*} A = \begin{bmatrix} 0 & 1 & 2 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{bmatrix} \end{equation*}
Solution

We quickly compute

\begin{align*} A^2 &= \begin{bmatrix} 0 & 0 & 1\\ 0 & 0 & 0\\ 0 & 0 & 0\end{bmatrix} & A^3 &=\begin{bmatrix} 0 & 0 & 0\\ 0 & 0 & 0\\ 0 & 0 & 0\end{bmatrix} \end{align*}

and we then see that all subsequent powers of \(A\) are the \(3 \times 3\) zero matrix. That is, \(A^n = \zeromatrix\) for \(n\geq 3\text{.}\)

T10.

Suppose that \(A\) is a square matrix and there is a vector, \(\vect{b}\text{,}\) such that \(\linearsystem{A}{\vect{b}}\) has a unique solution. Prove that \(A\) is nonsingular. Give a direct proof (perhaps appealing to Theorem PSPHS) rather than just negating a sentence from the text discussing a similar situation.

Solution

Since \(\linearsystem{A}{\vect{b}}\) has at least one solution, we can apply Theorem PSPHS. Because the solution is assumed to be unique, the null space of \(A\) must be trivial. Then Theorem NMTNS implies that \(A\) is nonsingular.

The converse of this statement is a trivial application of Theorem NMUS. That said, we could extend our NSMxx series of theorems with an added equivalence for nonsingularity, “Given a single vector of constants, \(\vect{b}\text{,}\) the system \(\linearsystem{A}{\vect{b}}\) has a unique solution.”

T12.

The conclusion of Theorem AIP is \(\innerproduct{A\vect{x}}{\vect{y}}=\innerproduct{\vect{x}}{\adjoint{A}\vect{y}}\text{.}\) Use the same hypotheses, and prove the similar conclusion: \(\innerproduct{\adjoint{A}\vect{x}}{\vect{y}}=\innerproduct{\vect{x}}{A\vect{y}}\text{.}\) Two different approaches can both be based on an application of Theorem AIP. The first uses Theorem AA, while a second approach uses Theorem IPAC. Can you provide two proofs?

T23.

Prove the second part of Theorem MMSMM.

Solution

We will run the proof entry-by-entry.

\begin{align*} \matrixentry{\alpha(AB)}{ij}=& \alpha\matrixentry{AB}{ij}&& \knowl{./knowl/definition-MSM.html}{\text{Definition MSM}}\\ =&\alpha\sum_{k=1}^{n}\matrixentry{A}{ik}\matrixentry{B}{kj}&& \knowl{./knowl/theorem-EMP.html}{\text{Theorem EMP}}\\ =&\sum_{k=1}^{n}\alpha\matrixentry{A}{ik}\matrixentry{B}{kj}&& \knowl{./knowl/property-DCN.html}{\text{Property DCN}}\\ =&\sum_{k=1}^{n}\matrixentry{A}{ik}\alpha\matrixentry{B}{kj}&& \knowl{./knowl/property-CMCN.html}{\text{Property CMCN}}\\ =&\sum_{k=1}^{n}\matrixentry{A}{ik}\matrixentry{\alpha B}{kj}&& \knowl{./knowl/definition-MSM.html}{\text{Definition MSM}}\\ =&\matrixentry{A(\alpha B)}{ij}&&\knowl{./knowl/theorem-EMP.html}{\text{Theorem EMP}} \end{align*}

So the matrices \(\alpha(AB)\) and \(A(\alpha B)\) are equal, entry-by-entry, and by the definition of matrix equality (Definition ME) we can say they are equal matrices.

T28.

Suppose that \(A\) and \(B\) are both upper triangular matrices (Definition UTM) of size \(n\text{.}\) Use Theorem EMP to prove that each diagonal entry of the matrix product \(AB\) is the scalar product of the individual entries on the diagonal of each matrix. More precisely, show that

\begin{equation*} \matrixentry{AB}{ii}=\matrixentry{A}{ii}\matrixentry{B}{ii},\quad 1\leq i\leq n\text{.} \end{equation*}

Note: the matrix product is only this simple for the diagonal entries, and only for the special case of triangular matrices!

T31.

Suppose that \(A\) is an \(m\times n\) matrix and \(\vect{x},\,\vect{y}\in\nsp{A}\text{.}\) Prove that \(\vect{x}+\vect{y}\in\nsp{A}\text{.}\)

T32.

Suppose that \(A\) is an \(m\times n\) matrix, \(\alpha\in\complexes\text{,}\) and \(\vect{x}\in\nsp{A}\text{.}\) Prove that \(\alpha\vect{x}\in\nsp{A}\text{.}\)

T35.

Suppose that \(A\) is an \(n\times n\) matrix. Prove that \(\adjoint{A}A\) and \(A\adjoint{A}\) are Hermitian matrices.

T38.

Theorem NMUS says that a linear system with a nonsingular coefficient matrix always has a unique solution. We will soon see exactly what that solution is in Theorem SNCM, but you do not need that result for this exercise. Use results from this section to give another proof that a solution to a linear system with a nonsingular coefficient matrix is unique.

Solution

The statement lets us assume we have a solution to \(\linearsystem{A}{\vect{b}}\) and we are to show this solution is unique. Consult Proof Technique U.

Assume that we have two solutions, say \(\vect{x}_1\) and \(\vect{x}_2\text{.}\) Then, Theorem MMDAA and Theorem SLEMM allows us to write

\begin{equation*} A\left(\vect{x}_1 - \vect{x}_2\right) =A\vect{x}_1 - A\vect{x}_2 =\vect{b} - \vect{b} =\zerovector\text{.} \end{equation*}

Since \(A\) is nonsingular the only solution to \(\homosystem{A}\) is the zero vector and thus \(\vect{x}_1 - \vect{x}_2 = \zerovector\) and so \(\vect{x}_1 = \vect{x}_2\text{.}\) The two solutions are really the same.

T40.

Suppose that \(A\) is an \(m\times n\) matrix and \(B\) is an \(n\times p\) matrix. Prove that the null space of \(B\) is a subset of the null space of \(AB\text{,}\) that is \(\nsp{B}\subseteq\nsp{AB}\text{.}\) Provide an example where the opposite is false, in other words give an example where \(\nsp{AB}\not\subseteq\nsp{B}\text{.}\)

Solution

To prove that one set is a subset of another, we start with an element of the smaller set and see if we can determine that it is a member of the larger set (Definition SSET). Suppose \(\vect{x}\in\nsp{B}\text{.}\) Then we know that \(B\vect{x}=\zerovector\) by Definition NSM. Consider

\begin{align*} (AB)\vect{x}&=A(B\vect{x})&& \knowl{./knowl/theorem-MMA.html}{\text{Theorem MMA}}\\ &=A\zerovector&& \text{Hypothesis}\\ &=\zerovector&& \knowl{./knowl/theorem-MMZM.html}{\text{Theorem MMZM}}\text{.} \end{align*}

This establishes that \(\vect{x}\in\nsp{AB}\text{,}\) so \(\nsp{B}\subseteq\nsp{AB}\text{.}\)

To show that the inclusion does not hold in the opposite direction, choose \(B\) to be any nonsingular matrix of size \(n\text{.}\) Then \(\nsp{B}=\set{\zerovector}\) by Theorem NMTNS. Let \(A\) be the square zero matrix, \(\zeromatrix\text{,}\) of the same size. Then \(AB=\zeromatrix B=\zeromatrix\) by Theorem MMZM and therefore \(\nsp{AB}=\complex{n}\text{,}\) and is not a subset of \(\nsp{B}=\set{\zerovector}\text{.}\)

T41.

Suppose that \(A\) is an \(n\times n\) nonsingular matrix and \(B\) is an \(n\times p\) matrix. Prove that the null space of \(B\) is equal to the null space of \(AB\text{,}\) that is \(\nsp{B}=\nsp{AB}\text{.}\) (Compare with Exercise MM.T40.)

Solution

From the solution to Exercise MM.T40 we know that \(\nsp{B}\subseteq\nsp{AB}\text{.}\) So to establish the set equality (Definition SE) we need to show that \(\nsp{AB}\subseteq\nsp{B}\text{.}\)

Suppose \(\vect{x}\in\nsp{AB}\text{.}\) Then we know that \(AB\vect{x}=\zerovector\) by Definition NSM. Consider

\begin{align*} \zerovector &=\left(AB\right)\vect{x}&& \knowl{./knowl/definition-NSM.html}{\text{Definition NSM}}\\ &=A\left(B\vect{x}\right)&& \knowl{./knowl/theorem-MMA.html}{\text{Theorem MMA}}\text{.} \end{align*}

So, \(B\vect{x}\in\nsp{A}\text{.}\) Because \(A\) is nonsingular, it has a trivial null space (Theorem NMTNS) and we conclude that \(B\vect{x}=\zerovector\text{.}\) This establishes that \(\vect{x}\in\nsp{B}\text{,}\) so \(\nsp{AB}\subseteq\nsp{B}\) and combined with the solution to Exercise MM.T40 we have \(\nsp{B}=\nsp{AB}\) when \(A\) is nonsingular.

T50.

Suppose \(\vect{u}\) and \(\vect{v}\) are any two solutions of the linear system \(\linearsystem{A}{\vect{b}}\text{.}\) Prove that \(\vect{u}-\vect{v}\) is an element of the null space of \(A\text{,}\) that is, \(\vect{u}-\vect{v}\in\nsp{A}\text{.}\)

T51.

Give a new proof of Theorem PSPHS replacing applications of Theorem SLSLC with matrix-vector products (Theorem SLEMM).

Solution

We will work with the vector equality representations of the relevant systems of equations, as described by Theorem SLEMM.

(⇐) 

Suppose \(\vect{y}=\vect{w}+\vect{z}\) and \(\vect{z}\in\nsp{A}\text{.}\) Then

\begin{align*} A\vect{y}&=A(\vect{w}+\vect{z})&& \text{Substitution}\\ &=A\vect{w}+A\vect{z}&& \knowl{./knowl/theorem-MMDAA.html}{\text{Theorem MMDAA}}\\ &=\vect{b}+\zerovector&& \vect{z}\in\nsp{A}\\ &=\vect{b}&& \knowl{./knowl/property-ZC.html}{\text{Property ZC}} \end{align*}

demonstrating that \(\vect{y}\) is a solution.

(⇒) 

Suppose \(\vect{y}\) is a solution to \(\linearsystem{A}{\vect{b}}\text{.}\) Then

\begin{align*} A(\vect{y}-\vect{w})&=A\vect{y}-A\vect{w}&& \knowl{./knowl/theorem-MMDAA.html}{\text{Theorem MMDAA}}\\ &=\vect{b}-\vect{b}&& \vect{y},\,\vect{w}\text{ solutions to }A\vect{x}=\vect{b}\\ &=\zerovector&& \knowl{./knowl/property-AIC.html}{\text{Property AIC}} \end{align*}

which says that \(\vect{y}-\vect{w}\in\nsp{A}\text{.}\) In other words, \(\vect{y}-\vect{w}=\vect{z}\) for some vector \(\vect{z}\in\nsp{A}\text{.}\) Rewritten, this is \(\vect{y}=\vect{w}+\vect{z}\text{,}\) as desired.

T52.

Suppose that \(\vect{x},\,\vect{y}\in\complex{n}\text{,}\) \(\vect{b}\in\complex{m}\) and \(A\) is an \(m\times n\) matrix. If \(\vect{x}\text{,}\) \(\vect{y}\) and \(\vect{x}+\vect{y}\) are each a solution to the linear system \(\linearsystem{A}{\vect{b}}\text{,}\) what can you say that is interesting about \(\vect{b}\text{?}\) Form an implication with the existence of the three solutions as the hypothesis and an interesting statement about \(\linearsystem{A}{\vect{b}}\) as the conclusion, and then give a proof.

Solution

\(\linearsystem{A}{\vect{b}}\) must be homogeneous. To see this, consider that

\begin{align*} \vect{b}&=A\vect{x}&& \knowl{./knowl/theorem-SLEMM.html}{\text{Theorem SLEMM}}\\ &=A\vect{x}+\zerovector&& \knowl{./knowl/property-ZC.html}{\text{Property ZC}}\\ &=A\vect{x}+A\vect{y}-A\vect{y}&& \knowl{./knowl/property-AIC.html}{\text{Property AIC}}\\ &=A\left(\vect{x}+\vect{y}\right)-A\vect{y}&& \knowl{./knowl/theorem-MMDAA.html}{\text{Theorem MMDAA}}\\ &=\vect{b}-\vect{b}&& \knowl{./knowl/theorem-SLEMM.html}{\text{Theorem SLEMM}}\\ &=\zerovector&& \knowl{./knowl/property-AIC.html}{\text{Property AIC}}\text{.} \end{align*}

By Definition HS we see that \(\linearsystem{A}{\vect{b}}\) is homogeneous.

T80.

Suppose that \(A\) is a nonsingular matrix of size \(n\text{.}\) Let \(T=\set{\vectorlist{x}{m}}\subseteq\complex{n}\) be a linearly independent set of vectors. Prove that

\begin{equation*} R = \set{A\vect{x}_1,\,A\vect{x}_2,\,A\vect{x}_3,\,\ldots,\,A\vect{x}_m}\subseteq\complex{n} \end{equation*}

is a linearly independent set.

Solution

Begin with a relation of linear dependence on the set \(R\text{.}\)

\begin{align*} \zerovector &=a_1A\vect{x}_1 + a_2A\vect{x}_2 + a_3A\vect{x}_3 + \cdots + a_mA\vect{x}_m &&\knowl{./knowl/definition-RLDCV.html}{\text{Definition RLDCV}}\\ &=Aa_1\vect{x}_1 + Aa_2\vect{x}_2 + Aa_3\vect{x}_3 + \cdots + Aa_m\vect{x}_m &&\knowl{./knowl/theorem-MMSMM.html}{\text{Theorem MMSMM}}\\ &=A\left(a_1\vect{x}_1 + a_2\vect{x}_2 + a_3\vect{x}_3 + \cdots + a_m\vect{x}_m\right) &&\knowl{./knowl/theorem-MMDAA.html}{\text{Theorem MMDAA}} \end{align*}

In light of Theorem SLEMM, we can think of this equation as a homogeneous system of equations with \(A\) as the coefficient matrix. Then because \(A\) is nonsingular, Definition NM implies

\begin{equation*} a_1\vect{x}_1 + a_2\vect{x}_2 + a_3\vect{x}_3 + \cdots + a_m\vect{x}_m = \zerovector\text{.} \end{equation*}

This is a relation of linear dependence on \(T\text{,}\) a linearly independent set, so by Definition LICV,

\begin{equation*} a_1=a_2=a_3=\cdots=a_m=0\text{.} \end{equation*}

This means there is only a trivial relation of linear dependence on \(R\text{,}\) so again by Definition LICV we can conclude that \(R\) is linearly independent.