\(\newcommand{\orderof}[1]{\sim #1} \newcommand{\Z}{\mathbb{Z}} \newcommand{\reals}{\mathbb{R}} \newcommand{\real}[1]{\mathbb{R}^{#1}} \newcommand{\complexes}{\mathbb{C}} \newcommand{\complex}[1]{\mathbb{C}^{#1}} \newcommand{\conjugate}[1]{\overline{#1}} \newcommand{\modulus}[1]{\left\lvert#1\right\rvert} \newcommand{\zerovector}{\vect{0}} \newcommand{\zeromatrix}{\mathcal{O}} \newcommand{\innerproduct}[2]{\left\langle#1,\,#2\right\rangle} \newcommand{\norm}[1]{\left\lVert#1\right\rVert} \newcommand{\dimension}[1]{\dim\left(#1\right)} \newcommand{\nullity}[1]{n\left(#1\right)} \newcommand{\rank}[1]{r\left(#1\right)} \newcommand{\ds}{\oplus} \newcommand{\detname}[1]{\det\left(#1\right)} \newcommand{\detbars}[1]{\left\lvert#1\right\rvert} \newcommand{\trace}[1]{t\left(#1\right)} \newcommand{\sr}[1]{#1^{1/2}} \newcommand{\spn}[1]{\left\langle#1\right\rangle} \newcommand{\nsp}[1]{\mathcal{N}\!\left(#1\right)} \newcommand{\csp}[1]{\mathcal{C}\!\left(#1\right)} \newcommand{\rsp}[1]{\mathcal{R}\!\left(#1\right)} \newcommand{\lns}[1]{\mathcal{L}\!\left(#1\right)} \newcommand{\per}[1]{#1^\perp} \newcommand{\augmented}[2]{\left\lbrack\left.#1\,\right\rvert\,#2\right\rbrack} \newcommand{\linearsystem}[2]{\mathcal{LS}\!\left(#1,\,#2\right)} \newcommand{\homosystem}[1]{\linearsystem{#1}{\zerovector}} \newcommand{\rowopswap}[2]{R_{#1}\leftrightarrow R_{#2}} \newcommand{\rowopmult}[2]{#1R_{#2}} \newcommand{\rowopadd}[3]{#1R_{#2}+R_{#3}} \newcommand{\leading}[1]{\boxed{#1}} \newcommand{\rref}{\xrightarrow{\text{RREF}}} \newcommand{\elemswap}[2]{E_{#1,#2}} \newcommand{\elemmult}[2]{E_{#2}\left(#1\right)} \newcommand{\elemadd}[3]{E_{#2,#3}\left(#1\right)} \newcommand{\scalarlist}[2]{{#1}_{1},\,{#1}_{2},\,{#1}_{3},\,\ldots,\,{#1}_{#2}} \newcommand{\vect}[1]{\mathbf{#1}} \newcommand{\colvector}[1]{\begin{bmatrix}#1\end{bmatrix}} \newcommand{\vectorcomponents}[2]{\colvector{#1_{1}\\#1_{2}\\#1_{3}\\\vdots\\#1_{#2}}} \newcommand{\vectorlist}[2]{\vect{#1}_{1},\,\vect{#1}_{2},\,\vect{#1}_{3},\,\ldots,\,\vect{#1}_{#2}} \newcommand{\vectorentry}[2]{\left\lbrack#1\right\rbrack_{#2}} \newcommand{\matrixentry}[2]{\left\lbrack#1\right\rbrack_{#2}} \newcommand{\lincombo}[3]{#1_{1}\vect{#2}_{1}+#1_{2}\vect{#2}_{2}+#1_{3}\vect{#2}_{3}+\cdots +#1_{#3}\vect{#2}_{#3}} \newcommand{\matrixcolumns}[2]{\left\lbrack\vect{#1}_{1}|\vect{#1}_{2}|\vect{#1}_{3}|\ldots|\vect{#1}_{#2}\right\rbrack} \newcommand{\transpose}[1]{#1^{t}} \newcommand{\inverse}[1]{#1^{-1}} \newcommand{\submatrix}[3]{#1\left(#2|#3\right)} \newcommand{\adj}[1]{\transpose{\left(\conjugate{#1}\right)}} \newcommand{\adjoint}[1]{#1^\ast} \newcommand{\set}[1]{\left\{#1\right\}} \newcommand{\setparts}[2]{\left\lbrace#1\,\middle|\,#2\right\rbrace} \newcommand{\card}[1]{\left\lvert#1\right\rvert} \newcommand{\setcomplement}[1]{\overline{#1}} \newcommand{\charpoly}[2]{p_{#1}\left(#2\right)} \newcommand{\eigenspace}[2]{\mathcal{E}_{#1}\left(#2\right)} \newcommand{\eigensystem}[3]{\lambda&=#2&\eigenspace{#1}{#2}&=\spn{\set{#3}}} \newcommand{\geneigenspace}[2]{\mathcal{G}_{#1}\left(#2\right)} \newcommand{\algmult}[2]{\alpha_{#1}\left(#2\right)} \newcommand{\geomult}[2]{\gamma_{#1}\left(#2\right)} \newcommand{\indx}[2]{\iota_{#1}\left(#2\right)} \newcommand{\ltdefn}[3]{#1\colon #2\rightarrow#3} \newcommand{\lteval}[2]{#1\left(#2\right)} \newcommand{\ltinverse}[1]{#1^{-1}} \newcommand{\restrict}[2]{{#1}|_{#2}} \newcommand{\preimage}[2]{#1^{-1}\left(#2\right)} \newcommand{\rng}[1]{\mathcal{R}\!\left(#1\right)} \newcommand{\krn}[1]{\mathcal{K}\!\left(#1\right)} 
\newcommand{\compose}[2]{{#1}\circ{#2}} \newcommand{\vslt}[2]{\mathcal{LT}\left(#1,\,#2\right)} \newcommand{\isomorphic}{\cong} \newcommand{\similar}[2]{\inverse{#2}#1#2} \newcommand{\vectrepname}[1]{\rho_{#1}} \newcommand{\vectrep}[2]{\lteval{\vectrepname{#1}}{#2}} \newcommand{\vectrepinvname}[1]{\ltinverse{\vectrepname{#1}}} \newcommand{\vectrepinv}[2]{\lteval{\ltinverse{\vectrepname{#1}}}{#2}} \newcommand{\matrixrep}[3]{M^{#1}_{#2,#3}} \newcommand{\matrixrepcolumns}[4]{\left\lbrack \left.\vectrep{#2}{\lteval{#1}{\vect{#3}_{1}}}\right|\left.\vectrep{#2}{\lteval{#1}{\vect{#3}_{2}}}\right|\left.\vectrep{#2}{\lteval{#1}{\vect{#3}_{3}}}\right|\ldots\left|\vectrep{#2}{\lteval{#1}{\vect{#3}_{#4}}}\right.\right\rbrack} \newcommand{\cbm}[2]{C_{#1,#2}} \newcommand{\jordan}[2]{J_{#1}\left(#2\right)} \newcommand{\hadamard}[2]{#1\circ #2} \newcommand{\hadamardidentity}[1]{J_{#1}} \newcommand{\hadamardinverse}[1]{\widehat{#1}} \newcommand{\lt}{<} \newcommand{\gt}{>} \newcommand{\amp}{&} \)

Section MM Matrix Multiplication

We know how to add vectors and how to multiply them by scalars. Together, these operations give us the possibility of making linear combinations. Similarly, we know how to add matrices and how to multiply matrices by scalars. In this section we mix all these ideas together and produce an operation known as matrix multiplication. This will lead to some results that are both surprising and central. We begin with a definition of how to multiply a vector by a matrix.

Subsection MVP Matrix-Vector Product

We have repeatedly seen the importance of forming linear combinations of the columns of a matrix. As one example of this, the oft-used Theorem SLSLC says that every solution to a system of linear equations gives rise to a linear combination of the column vectors of the coefficient matrix that equals the vector of constants. This theorem, and others, motivates the following central definition.

Definition MVP Matrix-Vector Product

Suppose \(A\) is an \(m\times n\) matrix with columns \(\vectorlist{A}{n}\) and \(\vect{u}\) is a vector of size \(n\text{.}\) Then the matrix-vector product of \(A\) with \(\vect{u}\) is the linear combination \begin{equation*} A\vect{u}= \vectorentry{\vect{u}}{1}\vect{A}_1+ \vectorentry{\vect{u}}{2}\vect{A}_2+ \vectorentry{\vect{u}}{3}\vect{A}_3+ \cdots+ \vectorentry{\vect{u}}{n}\vect{A}_n\text{.} \end{equation*}
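For instance (with a matrix and vector invented purely for this illustration), a \(3\times 2\) matrix times a vector of size 2 unwinds as a linear combination of the two columns, \begin{equation*} \begin{bmatrix} 1 & 2\\ 0 & -1\\ 3 & 4 \end{bmatrix}\colvector{5\\2}= 5\colvector{1\\0\\3}+ 2\colvector{2\\-1\\4}= \colvector{9\\-2\\23}\text{.} \end{equation*}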

So, the matrix-vector product is yet another version of “multiplication,” at least in the sense that we have yet again overloaded juxtaposition of two symbols as our notation. Remember your objects: an \(m\times n\) matrix times a vector of size \(n\) will create a vector of size \(m\text{.}\) So if \(A\) is rectangular, then the size of the “output” vector is different from the size of the “input” vector. With all the linear combinations we have performed so far, this computation should now seem second nature.

We can now represent systems of linear equations compactly with a matrix-vector product (Definition MVP) and column vector equality (Definition CVE). This finally yields a very popular alternative to our unconventional \(\linearsystem{A}{\vect{b}}\) notation.
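Restated in symbols (a paraphrase in this section's notation; see the theorem itself for the official statement): \(\vect{x}\) is a solution to \(\linearsystem{A}{\vect{b}}\) if and only if \(A\vect{x}=\vect{b}\text{.}\) So, for a small system invented just for illustration, \begin{align*} 2x_1+3x_2&=7\\ x_1-x_2&=1 \end{align*} is the same as asking for solutions to \begin{equation*} \begin{bmatrix} 2 & 3\\ 1 & -1 \end{bmatrix}\colvector{x_1\\x_2}=\colvector{7\\1}\text{.} \end{equation*}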


The matrix-vector product is a very natural computation. We have motivated it by its connections with systems of equations, but here is another example.

Later (much later) we will need the following theorem, which is really a technical lemma (see Proof Technique LC). Since we are in a position to prove it now, we will. But you can safely skip it for the moment, if you promise to come back later to study the proof when the theorem is employed. At that point you will also be able to understand the comments in the paragraph following the proof.
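In rough paraphrase (not a verbatim restatement; consult the theorem itself when you return to it), the result says: if \(A\) and \(B\) are \(m\times n\) matrices and \(A\vect{x}=B\vect{x}\) for every \(\vect{x}\in\complex{n}\text{,}\) then \(A=B\text{.}\)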


You might notice from studying the proof that the hypotheses of this theorem could be “weakened” (i.e. made less restrictive). We need only suppose the equality of the matrix-vector products for just the standard unit vectors (Definition SUV) or any other spanning set (Definition SSVS) of \(\complex{n}\) (Exercise LISS.T40). However, in practice, when we apply this theorem the stronger hypothesis will be in effect so this version of the theorem will suffice for our purposes. (If we changed the statement of the theorem to have the less restrictive hypothesis, then we would call the theorem “stronger.”)

Subsection MM Matrix Multiplication

We now define how to multiply two matrices together. Stop for a minute and think about how you might define this new operation.

Many books would present this definition much earlier in the course. However, we have taken great care to delay it as long as possible and to present as many ideas as practical based mostly on the notion of linear combinations. Towards the conclusion of the course, or when you perhaps take a second course in linear algebra, you may be in a position to appreciate the reasons for this. For now, understand that matrix multiplication is a central definition and perhaps you will appreciate its importance more by having saved it for later.

Definition MM Matrix Multiplication

Suppose \(A\) is an \(m\times n\) matrix and \(\vectorlist{B}{p}\) are the columns of an \(n\times p\) matrix \(B\text{.}\) Then the matrix product of \(A\) with \(B\) is the \(m\times p\) matrix where column \(i\) is the matrix-vector product \(A\vect{B}_i\text{.}\) Symbolically \begin{equation*} AB=A\matrixcolumns{B}{p}=\left[A\vect{B}_1|A\vect{B}_2|A\vect{B}_3|\ldots|A\vect{B}_p\right]\text{.} \end{equation*}
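As a small illustration (these particular matrices are invented only for this sketch), take \(A\) of size \(3\times 2\) and \(B\) of size \(2\times 4\) and build \(AB\) one column at a time from matrix-vector products, \begin{align*} A&=\begin{bmatrix} 1 & 2\\ 0 & 1\\ -1 & 3 \end{bmatrix} & B&=\begin{bmatrix} 2 & 1 & 0 & -1\\ 1 & -1 & 2 & 3 \end{bmatrix} & AB&=\left[A\vect{B}_1|A\vect{B}_2|A\vect{B}_3|A\vect{B}_4\right]= \begin{bmatrix} 4 & -1 & 4 & 5\\ 1 & -1 & 2 & 3\\ 1 & -4 & 6 & 10 \end{bmatrix}\text{.} \end{align*} For example, the first column is \(A\vect{B}_1=2\colvector{1\\0\\-1}+1\colvector{2\\1\\3}=\colvector{4\\1\\1}\text{,}\) and the sizes work out as \(3\times 2\) times \(2\times 4\) giving \(3\times 4\text{.}\)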

Is this the definition of matrix multiplication you expected? Perhaps our previous operations for matrices caused you to think that we might multiply two matrices of the same size, entry-by-entry? Notice that our current definition uses matrices of different sizes (though the number of columns in the first must equal the number of rows in the second), and the result is of a third size. Notice too in the previous example that we cannot even consider the product \(BA\text{,}\) since the sizes of the two matrices in this order are not right.

But it gets weirder than that. Many of your old ideas about “multiplication” will not apply to matrix multiplication, but some still will. So make no assumptions, and do not do anything until you have a theorem that says you can. Even if the sizes are right, matrix multiplication is not commutative — order matters.
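As a quick check (two matrices concocted only for this purpose), even square matrices of the same size can fail to commute, \begin{equation*} \begin{bmatrix} 1 & 1\\ 0 & 1 \end{bmatrix} \begin{bmatrix} 1 & 0\\ 1 & 1 \end{bmatrix}= \begin{bmatrix} 2 & 1\\ 1 & 1 \end{bmatrix}\neq \begin{bmatrix} 1 & 1\\ 1 & 2 \end{bmatrix}= \begin{bmatrix} 1 & 0\\ 1 & 1 \end{bmatrix} \begin{bmatrix} 1 & 1\\ 0 & 1 \end{bmatrix}\text{.} \end{equation*}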

Subsection MMEE Matrix Multiplication, Entry-by-Entry

While certain “natural” properties of multiplication do not hold, many more do. In the next subsection, we will state and prove the relevant theorems. But first, we need a theorem that provides an alternate means of multiplying two matrices. In many texts, this would be given as the definition of matrix multiplication. We prefer to turn it around and have the following formula as a consequence of our definition. It will prove useful for proofs of matrix equality, where we need to examine products of matrices, entry-by-entry.
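In this book's notation the formula reads (a restatement from memory; the theorem itself is the official version): if \(A\) is an \(m\times n\) matrix and \(B\) is an \(n\times p\) matrix, then for \(1\leq i\leq m\) and \(1\leq j\leq p\text{,}\) \begin{equation*} \matrixentry{AB}{ij}=\sum_{k=1}^{n}\matrixentry{A}{ik}\matrixentry{B}{kj}\text{.} \end{equation*}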


Theorem EMP is the way many people compute matrix products by hand. It will also be very useful for the theorems we are going to prove shortly. However, the definition (Definition MM) is frequently the most useful for its connections with deeper ideas like the null space and the upcoming column space.

Subsection PMM Properties of Matrix Multiplication

In this subsection, we collect properties of matrix multiplication and its interaction with the zero matrix (Definition ZM), the identity matrix (Definition IM), matrix addition (Definition MA), scalar matrix multiplication (Definition MSM), the inner product (Definition IP), conjugation (Theorem MMCC), and the transpose (Definition TM). Whew! Here we go. These are great proofs to practice with, so try to concoct the proofs before reading them; they will get progressively more complicated as we go.


It is this theorem that gives the identity matrix its name. It is a matrix that behaves with matrix multiplication like the scalar 1 does with scalar multiplication. To multiply by the identity matrix is to have no effect on the other matrix.
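In symbols (a paraphrase, not a quotation of the theorem): for any \(m\times n\) matrix \(A\text{,}\) \(AI_n=A\) and \(I_mA=A\text{.}\)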


Since Theorem MMA says matrix multiplication is associative, we do not have to be careful about the order in which we perform matrix multiplication, nor about how we parenthesize an expression with several matrices multiplied together. So this is where we draw the line on explaining every last detail in a proof. We will frequently add, remove, or rearrange parentheses with no comment. Indeed, we only see about a dozen places where Theorem MMA is cited in a proof. You could try to count how many times we avoid making a reference to this theorem.

The statement of our next theorem is technically inaccurate. If we upgrade the vectors \(\vect{u},\,\vect{v}\) to matrices with a single column, then the expression \(\transpose{\conjugate{\vect{u}}}\vect{v}\) is a \(1\times 1\) matrix, though we will treat this small matrix as if it were simply the scalar quantity in its lone entry. When we apply Theorem MMIP there should not be any confusion. Notice that if we treat a column vector as a matrix with a single column, then we can also construct the adjoint of a vector, though we will not make this a common practice.


Another theorem in this style, and it is a good one. If you have been practicing with the previous proofs you should be able to do this one yourself.


This theorem seems odd at first glance, since we have to switch the order of \(A\) and \(B\text{.}\) But if we simply consider the sizes of the matrices involved, we can see that the switch is necessary for this reason alone. That the individual entries of the products then come along to be equal is a bonus.
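Here is the size bookkeeping alluded to (a sketch only, not part of the theorem's official statement): if \(A\) is \(m\times n\) and \(B\) is \(n\times p\text{,}\) then \(AB\) is \(m\times p\) and \(\transpose{\left(AB\right)}\) is \(p\times m\text{.}\) Meanwhile \(\transpose{B}\) is \(p\times n\) and \(\transpose{A}\) is \(n\times m\text{,}\) so \(\transpose{B}\transpose{A}\) is also \(p\times m\text{,}\) while the product \(\transpose{A}\transpose{B}\) in the original order is not even defined unless \(p=m\text{.}\)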

As the adjoint of a matrix is a composition of a conjugate and a transpose, its interaction with matrix multiplication is similar to that of a transpose. Here is the last of our long list of basic properties of matrix multiplication.


Notice how none of these proofs above relied on writing out huge general matrices with lots of ellipses (“…”) and trying to formulate the equalities a whole matrix at a time. This messy business is a “proof technique” to be avoided at all costs. Notice too how the proof of Theorem MMAD does not use an entry-by-entry approach, but simply builds on previous results about matrix multiplication's interaction with conjugation and transposes.

These theorems, along with Theorem VSPM and the other results in Section MO, give you the “rules” for how matrices interact with the various operations we have defined on matrices (addition, scalar multiplication, matrix multiplication, conjugation, transposes and adjoints). Use them and use them often. But do not try to do anything with a matrix that you do not have a rule for. Together, we would informally call all these operations, and the attendant theorems, “the algebra of matrices.” Notice, too, that every column vector is just an \(n\times 1\) matrix, so these theorems apply to column vectors also. Finally, these results, taken as a whole, may make us feel that the definition of matrix multiplication is not so unnatural.

Subsection HM Hermitian Matrices

The adjoint of a matrix has a basic property when employed in a matrix-vector product as part of an inner product. At this point, you could even use the following result as a motivation for the definition of an adjoint.
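In symbols (as quoted later in Exercise MM.T12), the conclusion of Theorem AIP is \begin{equation*} \innerproduct{A\vect{x}}{\vect{y}}=\innerproduct{\vect{x}}{\adjoint{A}\vect{y}} \end{equation*} where, presumably, \(A\) is an \(m\times n\) matrix, \(\vect{x}\in\complex{n}\) and \(\vect{y}\in\complex{m}\text{.}\)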


Sometimes a matrix is equal to its adjoint (Definition A), and these matrices have interesting properties. One of the most common situations where this occurs is when a matrix has only real number entries. Then we are simply talking about symmetric matrices (Definition SYM), so you can view this as a generalization of a symmetric matrix.

Definition HM Hermitian Matrix

The square matrix \(A\) is Hermitian (or self-adjoint) if \(A=\adjoint{A}\text{.}\)
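For example (a matrix chosen only to illustrate the definition), \begin{equation*} A=\begin{bmatrix} 2 & 3-4i\\ 3+4i & 7 \end{bmatrix} \end{equation*} is Hermitian: conjugating every entry and then transposing returns the same matrix. Notice that the diagonal entries of a Hermitian matrix must be real numbers, since each one has to equal its own conjugate.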

Again, the set of real matrices that are Hermitian is exactly the set of symmetric matrices. In Section PEE we will uncover some amazing properties of Hermitian matrices, so when you get there, run back here to remind yourself of this definition. Further properties will also appear in Section OD. Right now we prove a fundamental result about Hermitian matrices, matrix-vector products and inner products. As a characterization, this could be employed as a definition of a Hermitian matrix and some authors take this approach.
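Stated symbolically (a paraphrase of the characterization, not a verbatim restatement): a square matrix \(A\) of size \(n\) is Hermitian if and only if \(\innerproduct{A\vect{x}}{\vect{y}}=\innerproduct{\vect{x}}{A\vect{y}}\) for every \(\vect{x},\,\vect{y}\in\complex{n}\text{.}\)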


So, informally, Hermitian matrices are those that can be tossed around from one side of an inner product to the other with reckless abandon. We will see later what this buys us.

Subsection Reading Questions

1

Form the matrix-vector product of \begin{align*} \begin{bmatrix} 2 & 3 & -1 & 0\\ 1 & -2 & 7 & 3\\ 1 & 5 & 3 & 2\\ \end{bmatrix} &&\text{with}&& \colvector{2\\-3\\0\\5}\text{.} \end{align*}

2

Multiply together the two matrices below (in the order given). \begin{align*} \begin{bmatrix} 2 & 3 & -1 & 0\\ 1 & -2 & 7 & 3\\ 1 & 5 & 3 & 2\\ \end{bmatrix} && \begin{bmatrix} 2 & 6\\ -3 & -4\\ 0 & 2\\ 3 & -1\\ \end{bmatrix} \end{align*}

3

Rewrite the system of linear equations below as a vector equality using a matrix-vector product. (This question does not ask for a solution to the system. But it does ask you to express the system of equations in a new form using tools from this section.) \begin{align*} 2x_1 + 3x_2 - x_3 &= 0\\ x_1 + 2x_2 + x_3 &= 3\\ x_1 + 3x_2 + 3x_3 &= 7 \end{align*}

Subsection Exercises

C20

Compute the product of the two matrices below, \(AB\text{.}\) Do this using the definition of the matrix-vector product (Definition MVP) and the definition of matrix multiplication (Definition MM). \begin{align*} A= \begin{bmatrix} 2&5\\ -1&3\\ 2&-2 \end{bmatrix} && B=\begin{bmatrix} 1&5&-3&4\\ 2&0&2&-3 \end{bmatrix} \end{align*}

C21

Compute the product \(AB\) of the two matrices below using both the definition of the matrix-vector product (Definition MVP) and the definition of matrix multiplication (Definition MM). \begin{align*} A &= \begin{bmatrix} 1 & 3 & 2 \\ -1 & 2 & 1 \\ 0 & 1 & 0 \end{bmatrix} & B &= \begin{bmatrix} 4 & 1 & 2\\ 1 & 0 & 1\\3 & 1 & 5 \end{bmatrix} \end{align*}

C22

Compute the product \(AB\) of the two matrices below using both the definition of the matrix-vector product (Definition MVP) and the definition of matrix multiplication (Definition MM). \begin{align*} A&= \begin{bmatrix} 1 & 0 \\ -2 & 1 \end{bmatrix} & B&= \begin{bmatrix} 2 & 3 \\ 4 & 6 \end{bmatrix} \end{align*}

C23

Compute the product \(AB\) of the two matrices below using both the definition of the matrix-vector product (Definition MVP) and the definition of matrix multiplication (Definition MM). \begin{align*} A&= \begin{bmatrix} 3 & 1 \\ 2 & 4 \\ 6 & 5\\1 & 2 \end{bmatrix} & B&= \begin{bmatrix} -3 & 1 \\ 4 & 2 \end{bmatrix} \end{align*}

C24

Compute the product \(AB\) of the two matrices below. \begin{align*} A&= \begin{bmatrix} 1 & 2 & 3 & -2 \\ 0 & 1 & -2 & -1\\ 1 & 1 & 3 & 1 \end{bmatrix} & B&= \begin{bmatrix} 3\\ 4 \\ 0 \\ 2 \end{bmatrix} \end{align*}

C25

Compute the product \(AB\) of the two matrices below. \begin{align*} A&= \begin{bmatrix} 1 & 2 & 3 & -2 \\ 0 & 1 & -2 & -1\\ 1 & 1 & 3 & 1 \end{bmatrix} & B&= \begin{bmatrix} -7\\ 3 \\ 1 \\ 1 \end{bmatrix} \end{align*}

C26

Compute the product \(AB\) of the two matrices below using both the definition of the matrix-vector product (Definition MVP) and the definition of matrix multiplication (Definition MM). \begin{align*} A&= \begin{bmatrix} 1 & 3 & 1\\ 0 & 1 & 0\\1 & 1 & 2 \end{bmatrix} & B&= \begin{bmatrix} 2 & -5 & -1 \\ 0 & 1 & 0\\-1 & 2 & 1 \end{bmatrix} \end{align*}

C30

For the matrix \(A\text{,}\) find \(A^2\text{,}\) \(A^3\text{,}\) \(A^4\text{.}\) Find a general formula for \(A^n\) for any positive integer \(n\text{.}\) \begin{equation*} A = \begin{bmatrix} 1 & 2 \\ 0 & 1 \end{bmatrix} \end{equation*}

C31

For the matrix \(A\text{,}\) find \(A^2\text{,}\) \(A^3\text{,}\) \(A^4\text{.}\) Find a general formula for \(A^n\) for any positive integer \(n\text{.}\) \begin{equation*} A = \begin{bmatrix} 1 & -1 \\ 0 & 1 \end{bmatrix} \end{equation*}

C32

For the matrix \(A\text{,}\) find \(A^2\text{,}\) \(A^3\text{,}\) \(A^4\text{.}\) Find a general formula for \(A^n\) for any positive integer \(n\text{.}\) \begin{equation*} A = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 3 \end{bmatrix} \end{equation*}

C33

For the matrix \(A\text{,}\) find \(A^2\text{,}\) \(A^3\text{,}\) \(A^4\text{.}\) Find a general formula for \(A^n\) for any positive integer \(n\text{.}\) \begin{equation*} A = \begin{bmatrix} 0 & 1 & 2 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{bmatrix} \end{equation*}

T10

Suppose that \(A\) is a square matrix and there is a vector, \(\vect{b}\text{,}\) such that \(\linearsystem{A}{\vect{b}}\) has a unique solution. Prove that \(A\) is nonsingular. Give a direct proof (perhaps appealing to Theorem PSPHS) rather than just negating a sentence from the text discussing a similar situation.

T12

The conclusion of Theorem AIP is \(\innerproduct{A\vect{x}}{\vect{y}}=\innerproduct{\vect{x}}{\adjoint{A}\vect{y}}\text{.}\) Use the same hypotheses, and prove the similar conclusion: \(\innerproduct{\adjoint{A}\vect{x}}{\vect{y}}=\innerproduct{\vect{x}}{A\vect{y}}\text{.}\) Two different approaches can both be based on an application of Theorem AIP. The first uses Theorem AA, while a second approach uses Theorem IPAC. Can you provide two proofs?

T23

Prove the second part of Theorem MMSMM.

T31

Suppose that \(A\) is an \(m\times n\) matrix and \(\vect{x},\,\vect{y}\in\nsp{A}\text{.}\) Prove that \(\vect{x}+\vect{y}\in\nsp{A}\text{.}\)

T32

Suppose that \(A\) is an \(m\times n\) matrix, \(\alpha\in\complexes\text{,}\) and \(\vect{x}\in\nsp{A}\text{.}\) Prove that \(\alpha\vect{x}\in\nsp{A}\text{.}\)

T35

Suppose that \(A\) is an \(n\times n\) matrix. Prove that \(\adjoint{A}A\) and \(A\adjoint{A}\) are Hermitian matrices.

T40

Suppose that \(A\) is an \(m\times n\) matrix and \(B\) is an \(n\times p\) matrix. Prove that the null space of \(B\) is a subset of the null space of \(AB\text{,}\) that is, \(\nsp{B}\subseteq\nsp{AB}\text{.}\) Provide an example where the opposite inclusion fails; in other words, give an example where \(\nsp{AB}\not\subseteq\nsp{B}\text{.}\)

T41

Suppose that \(A\) is an \(n\times n\) nonsingular matrix and \(B\) is an \(n\times p\) matrix. Prove that the null space of \(B\) is equal to the null space of \(AB\text{,}\) that is \(\nsp{B}=\nsp{AB}\text{.}\) (Compare with Exercise MM.T40.)

T50

Suppose \(\vect{u}\) and \(\vect{v}\) are any two solutions of the linear system \(\linearsystem{A}{\vect{b}}\text{.}\) Prove that \(\vect{u}-\vect{v}\) is an element of the null space of \(A\text{,}\) that is, \(\vect{u}-\vect{v}\in\nsp{A}\text{.}\)

T51

Give a new proof of Theorem PSPHS replacing applications of Theorem SLSLC with matrix-vector products (Theorem SLEMM).

T52

Suppose that \(\vect{x},\,\vect{y}\in\complex{n}\text{,}\) \(\vect{b}\in\complex{m}\) and \(A\) is an \(m\times n\) matrix. If \(\vect{x}\text{,}\) \(\vect{y}\) and \(\vect{x}+\vect{y}\) are each a solution to the linear system \(\linearsystem{A}{\vect{b}}\text{,}\) what can you say that is interesting about \(\vect{b}\text{?}\) Form an implication with the existence of the three solutions as the hypothesis and an interesting statement about \(\linearsystem{A}{\vect{b}}\) as the conclusion, and then give a proof.
