
In this section we define some new operations involving vectors, and collect some basic properties of these operations. Begin by recalling our definition of a column vector as an ordered list of complex numbers, written vertically (Definition CV). The collection of all possible vectors of a fixed size is a commonly used set, so we start with its definition.

##### Definition VSCV. Vector Space of Column Vectors

The vector space \(\complex{m}\) is the set of all column vectors (Definition CV) of size \(m\) with entries from the set of complex numbers, \(\complexes\text{.}\)

When a set similar to this is defined using only column vectors where all the entries are from the real numbers, it is written as \({\mathbb R}^m\) and is known as *Euclidean \(m\)-space*.

The term *vector* is used in a variety of different ways. We have defined it as an ordered list written vertically. It could simply be an ordered list of numbers, and perhaps written as \(\left\langle 2,\,3,\,-1,\,6\right\rangle\text{.}\) Or it could be interpreted as a point in \(m\) dimensions, such as \(\left(3,\,4,\,-2\right)\) representing a point in three dimensions relative to \(x\text{,}\) \(y\) and \(z\) axes. With an interpretation as a point, we can construct an arrow from the origin to the point which is consistent with the notion that a vector has direction and magnitude.

All of these ideas can be shown to be related and equivalent, so keep that in mind as you connect the ideas of this course with ideas from other disciplines. For now, we will stick with the idea that a vector is just a list of numbers, in some particular order.

We start our study of this set by first defining what it means for two vectors to be the same.

##### Definition CVE. Column Vector Equality

Suppose that \(\vect{u},\,\vect{v}\in\complex{m}\text{.}\) Then \(\vect{u}\) and \(\vect{v}\) are *equal*, written \(\vect{u}=\vect{v}\text{,}\) if
\begin{gather*}
\vectorentry{\vect{u}}{i}=\vectorentry{\vect{v}}{i}
\end{gather*}
for all \(1\leq i\leq m\text{.}\)

Now this may seem like a silly (or even stupid) thing to say so carefully. Of course two vectors are equal if they are equal for each corresponding entry! Well, this is not as silly as it appears. We will see a few occasions later where the obvious definition is *not* the right one. And besides, in doing mathematics we need to be very careful about making all the necessary definitions and making them unambiguous. And we have done that here.

Notice that the symbol “=” is now doing triple-duty. We know from our earlier education what it means for two numbers (real or complex) to be equal, and we take this for granted. In Definition SE we defined what it meant for two sets to be equal. Now we have defined what it means for two vectors to be equal, and this new definition builds on our prior definition for when two numbers are equal when we use the condition \(\vectorentry{\vect{u}}{i}=\vectorentry{\vect{v}}{i}\) repeatedly, for all \(1\leq i\leq m\text{.}\) So think carefully about your objects when you see an equal sign and think about just which notion of equality you have encountered. This will be especially important when you are asked to construct proofs whose conclusion states that two objects are equal. If you have an electronic copy of the book, such as the PDF version, searching on “Definition CVE” can be an instructive exercise. See how often, and where, the definition is employed.

OK, let us do an example of vector equality that begins to hint at the utility of this definition.
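To make the entrywise condition of the definition concrete, here is a minimal Python sketch; the helper name `vec_eq` is ours, purely for illustration, and vectors are modeled as Python lists of complex numbers.

```python
def vec_eq(u, v):
    """Vector equality as defined above: same size, and the entries
    agree for every index i."""
    return len(u) == len(v) and all(ui == vi for ui, vi in zip(u, v))

u = [3 + 2j, -1, 4]
v = [3 + 2j, -1, 4]
w = [3 + 2j, -1, 5]

print(vec_eq(u, v))  # True: entries agree for every i
print(vec_eq(u, w))  # False: the third entries differ
```

Note that the definition in the text only speaks of two vectors of a common size \(m\); the size check in the sketch simply makes vectors of different sizes unequal.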

We will now define two operations on the set \(\complex{m}\text{.}\) By this we mean well-defined procedures that somehow convert vectors into other vectors. Here are two of the most basic definitions of the entire course.

##### Definition CVA. Column Vector Addition

Suppose that \(\vect{u},\,\vect{v}\in\complex{m}\text{.}\) The *sum* of \(\vect{u}\) and \(\vect{v}\) is the vector \(\vect{u}+\vect{v}\) defined by
\begin{gather*}
\vectorentry{\vect{u}+\vect{v}}{i}=\vectorentry{\vect{u}}{i}+\vectorentry{\vect{v}}{i}
\end{gather*}
for \(1\leq i\leq m\text{.}\)

So vector addition takes two vectors of the same size and combines them (in a natural way!) to create a new vector of the same size. Notice that this definition is required, even if we agree that this is the obvious, right, natural or correct way to do it. Notice too that the symbol ‘+’ is being recycled. We all know how to add *numbers*, but now the same symbol is doing double-duty, indicating how to add two new objects, vectors. And this definition of our new addition of vectors is built on our prior meaning of addition of numbers in the expressions \(\vectorentry{\vect{u}}{i}+\vectorentry{\vect{v}}{i}\text{.}\) Think about your objects, especially when doing proofs. Vector addition is easy; here is an example from \(\complex{4}\text{.}\)
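The definition of vector addition translates directly into code. The following sketch (helper name `vec_add` is ours) adds two vectors from \(\complex{4}\) entrywise:

```python
def vec_add(u, v):
    """Vector addition as defined above: entrywise sum of two
    vectors of the same size."""
    assert len(u) == len(v), "vectors must have the same size"
    return [ui + vi for ui, vi in zip(u, v)]

u = [2 - 3j, 1, 4 + 1j, 0]
v = [1 + 1j, -2, 5, 3j]
print(vec_add(u, v))  # [(3-2j), -1, (9+1j), 3j]
```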

Our second operation takes two objects of different types, specifically a number and a vector, and combines them to create another vector. In this context we call the number a *scalar* in order to emphasize that it is not a vector.

##### Definition CVSM. Column Vector Scalar Multiplication

Suppose \(\vect{u}\in\complex{m}\) and \(\alpha\in\complexes\text{.}\) Then the *scalar multiple* of \(\vect{u}\) by \(\alpha\) is the vector \(\alpha\vect{u}\) defined by
\begin{gather*}
\vectorentry{\alpha\vect{u}}{i}=\alpha\vectorentry{\vect{u}}{i}
\end{gather*}
for \(1\leq i\leq m\text{.}\)

Notice that we are doing a kind of multiplication here, but we are *defining* a new type, perhaps in what appears to be a natural way. We use juxtaposition (smashing two symbols together side-by-side) to denote this operation rather than using a symbol like we did with vector addition. So this can be another source of confusion. When two symbols are next to each other, are we doing regular old multiplication, the kind we have done for years, or are we doing scalar vector multiplication, the operation we just defined? Think about your objects — if the first object is a scalar, and the second is a vector, then it *must* be that we are doing our new operation, and the *result* of this operation will be another vector.

Notice how consistency in notation can be an aid here. If we write scalars as lower case Greek letters from the start of the alphabet (such as \(\alpha\text{,}\) \(\beta\text{,}\) …) and write vectors in bold Latin letters from the end of the alphabet (\(\vect{u}\text{,}\) \(\vect{v}\text{,}\) …), then we have some hints about what type of objects we are working with. This can be a blessing *and* a curse, since when we go read another book about linear algebra, or read an application in another discipline (physics, economics, …) the types of notation employed may be very different and hence unfamiliar.

Again, computationally, vector scalar multiplication is very easy.
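As a quick computational illustration of scalar multiplication (again, the helper name is ours and the vectors are modeled as lists of complex numbers):

```python
def scalar_mult(alpha, u):
    """Scalar multiplication as defined above: multiply every
    entry of u by the scalar alpha."""
    return [alpha * ui for ui in u]

print(scalar_mult(3, [1 + 2j, -1, 4]))   # [(3+6j), -3, 12]
print(scalar_mult(2j, [1, 3 - 1j]))      # [2j, (2+6j)]
```

Note how the second call pairs a complex scalar with a vector whose entries are also complex; the types of the two arguments are what tell us this is scalar multiplication and not ordinary multiplication of numbers.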

With definitions of vector addition and scalar multiplication we can state, and prove, several properties of each operation, and some properties that involve their interplay. We now collect ten of them here for later reference.

##### Theorem VSPCV. Vector Space Properties of Column Vectors

Suppose that \(\complex{m}\) is the set of column vectors of size \(m\) (Definition VSCV) with addition and scalar multiplication as defined in Definition CVA and Definition CVSM. Then

- ACC Additive Closure, Column Vectors
If \(\vect{u},\,\vect{v}\in\complex{m}\text{,}\) then \(\vect{u}+\vect{v}\in\complex{m}\text{.}\)

- SCC Scalar Closure, Column Vectors
If \(\alpha\in\complexes\) and \(\vect{u}\in\complex{m}\text{,}\) then \(\alpha\vect{u}\in\complex{m}\text{.}\)

- CC Commutativity, Column Vectors
If \(\vect{u},\,\vect{v}\in\complex{m}\text{,}\) then \(\vect{u}+\vect{v}=\vect{v}+\vect{u}\text{.}\)

- AAC Additive Associativity, Column Vectors
If \(\vect{u},\,\vect{v},\,\vect{w}\in\complex{m}\text{,}\) then \(\vect{u}+\left(\vect{v}+\vect{w}\right)=\left(\vect{u}+\vect{v}\right)+\vect{w}\text{.}\)

- ZC Zero Vector, Column Vectors
There is a vector, \(\zerovector\text{,}\) called the *zero vector*, such that \(\vect{u}+\zerovector=\vect{u}\) for all \(\vect{u}\in\complex{m}\text{.}\)

- AIC Additive Inverses, Column Vectors
If \(\vect{u}\in\complex{m}\text{,}\) then there exists a vector \(\vect{-u}\in\complex{m}\) so that \(\vect{u}+(\vect{-u})=\zerovector\text{.}\)

- SMAC Scalar Multiplication Associativity, Column Vectors
If \(\alpha,\,\beta\in\complexes\) and \(\vect{u}\in\complex{m}\text{,}\) then \(\alpha(\beta\vect{u})=(\alpha\beta)\vect{u}\text{.}\)

- DVAC Distributivity across Vector Addition, Column Vectors
If \(\alpha\in\complexes\) and \(\vect{u},\,\vect{v}\in\complex{m}\text{,}\) then \(\alpha(\vect{u}+\vect{v})=\alpha\vect{u}+\alpha\vect{v}\text{.}\)

- DSAC Distributivity across Scalar Addition, Column Vectors
If \(\alpha,\,\beta\in\complexes\) and \(\vect{u}\in\complex{m}\text{,}\) then \((\alpha+\beta)\vect{u}=\alpha\vect{u}+\beta\vect{u}\text{.}\)

- OC One, Column Vectors
If \(\vect{u}\in\complex{m}\text{,}\) then \(1\vect{u}=\vect{u}\text{.}\)
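The ten properties can be spot-checked numerically. The following sketch (helper names are ours) verifies several of them on random vectors from \(\complex{4}\) with small integer parts, so the arithmetic is exact; a passing check is of course evidence, not a proof.

```python
import random

def vec_add(u, v):
    """Entrywise vector addition."""
    return [ui + vi for ui, vi in zip(u, v)]

def scalar_mult(a, u):
    """Multiply each entry of u by the scalar a."""
    return [a * ui for ui in u]

def rand_vec(m):
    """A random vector from C^m with small integer parts (exact arithmetic)."""
    return [complex(random.randint(-5, 5), random.randint(-5, 5)) for _ in range(m)]

m = 4
u, v, w = rand_vec(m), rand_vec(m), rand_vec(m)
alpha, beta = 2 + 1j, -3j

# CC: commutativity of vector addition
assert vec_add(u, v) == vec_add(v, u)
# AAC: associativity of vector addition
assert vec_add(u, vec_add(v, w)) == vec_add(vec_add(u, v), w)
# ZC: the zero vector is an additive identity
assert vec_add(u, [0] * m) == u
# SMAC: alpha(beta u) = (alpha beta)u
assert scalar_mult(alpha, scalar_mult(beta, u)) == scalar_mult(alpha * beta, u)
# DVAC: alpha(u + v) = alpha u + alpha v
assert scalar_mult(alpha, vec_add(u, v)) == vec_add(scalar_mult(alpha, u), scalar_mult(alpha, v))
# DSAC: (alpha + beta)u = alpha u + beta u
assert scalar_mult(alpha + beta, u) == vec_add(scalar_mult(alpha, u), scalar_mult(beta, u))

print("all spot checks passed")
```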

Many of the conclusions of our theorems can be characterized as “identities,” especially when we are establishing basic properties of operations such as those in this section. Most of the properties listed in Theorem VSPCV are examples. So some advice about the style we use for proving identities is appropriate right now. Have a look at Proof Technique PI.

Be careful with the notion of the vector \(\vect{-u}\text{.}\) This is a vector that we add to \(\vect{u}\) so that the result is the particular vector \(\zerovector\text{.}\) This is basically a property of vector addition. It happens that we can compute \(\vect{-u}\) using the *other* operation, scalar multiplication. We can prove this directly by writing that
\begin{gather*}
\vectorentry{\vect{-u}}{i}
=-\vectorentry{\vect{u}}{i}
=(-1)\vectorentry{\vect{u}}{i}
=\vectorentry{(-1)\vect{u}}{i}
\end{gather*}
for \(1\leq i\leq m\text{,}\) so by Definition CVE we conclude that \(\vect{-u}=(-1)\vect{u}\text{.}\) We will see later how to derive this property as a *consequence* of several of the ten properties listed in Theorem VSPCV.

Similarly, we will often write something you would immediately recognize as *vector subtraction*. This could be placed on a firm theoretical foundation — as you can do yourself with Exercise VO.T30.
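Both facts, \(\vect{-u}=(-1)\vect{u}\) and the description of subtraction in Exercise VO.T30, are easy to illustrate computationally; the helper names below are ours, and a numeric check like this illustrates the identities rather than proving them.

```python
def vec_add(u, v):
    """Entrywise vector addition."""
    return [ui + vi for ui, vi in zip(u, v)]

def scalar_mult(a, u):
    """Multiply each entry of u by the scalar a."""
    return [a * ui for ui in u]

def vec_sub(u, v):
    """Entrywise subtraction, as defined in Exercise VO.T30."""
    return [ui - vi for ui, vi in zip(u, v)]

u = [5 + 1j, 2, -3]
v = [1 - 1j, 4, -3]

# The additive inverse -v computed with the *other* operation: (-1)v
neg_v = scalar_mult(-1, v)
assert vec_add(v, neg_v) == [0, 0, 0]

# T30's claim: u - v equals u + (-1)v
assert vec_sub(u, v) == vec_add(u, neg_v)
print(vec_sub(u, v))  # [(4+2j), -2, 0]
```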

A final note. Property AAC implies that we do not have to be careful about how we “parenthesize” the addition of vectors. In other words, there is nothing to be gained by writing \(\left(\vect{u}+\vect{v}\right)+\left(\vect{w}+\left(\vect{x}+\vect{y}\right)\right)\) rather than \(\vect{u}+\vect{v}+\vect{w}+\vect{x}+\vect{y}\text{,}\) since we get the same result no matter which order we choose to perform the four additions. So we will not be careful about using parentheses this way.
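A small check of this parenthesization claim, comparing one arbitrary grouping against a plain left-to-right accumulation (helper names are ours):

```python
from functools import reduce

def vec_add(u, v):
    """Entrywise vector addition."""
    return [ui + vi for ui, vi in zip(u, v)]

u, v, w, x, y = [1j, 2], [3, -1], [0, 4 + 1j], [-2, 5], [1, 1j]

# One arbitrary grouping of the four additions...
grouped = vec_add(vec_add(u, v), vec_add(w, vec_add(x, y)))
# ...versus a plain left-to-right accumulation
left_to_right = reduce(vec_add, [u, v, w, x, y])

assert grouped == left_to_right
print(left_to_right)  # [(2+1j), (10+2j)]
```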

Where have you seen vectors used before in other courses? How were they different?

In words only, when are two vectors equal?

Perform the following computation with vector operations \begin{equation*} 2\colvector{1\\5\\0} + (-3)\colvector{7\\6\\5} \end{equation*}

Compute \begin{equation*} 4\colvector{2\\-3\\4\\1\\0}+ (-2)\colvector{1\\2\\-5\\2\\4}+ \colvector{-1\\3\\0\\1\\2}\text{.} \end{equation*}

Solve the given vector equation for \(x\text{,}\) or explain why no solution exists. \begin{equation*} 3\colvector{1\\2\\-1}+ 4\colvector{2\\0\\x}= \colvector{11\\6\\17} \end{equation*}

Solve the given vector equation for \(\alpha\text{,}\) or explain why no solution exists. \begin{equation*} \alpha\colvector{1\\2\\-1}+ 4\colvector{3\\4\\2} = \colvector{-1\\0\\4} \end{equation*}

Solve the given vector equation for \(\alpha\text{,}\) or explain why no solution exists. \begin{equation*} \alpha\colvector{3\\2\\-2}+ \colvector{6\\1\\2} = \colvector{0\\-3\\6} \end{equation*}

Find \(\alpha\) and \(\beta\) that solve the vector equation \begin{equation*} \alpha\colvector{1\\0}+\beta\colvector{0\\1} = \colvector{3\\2}\text{.} \end{equation*}

Find \(\alpha\) and \(\beta\) that solve the vector equation. \begin{equation*} \alpha\colvector{2\\1}+ \beta\colvector{1\\3 }= \colvector{5\\0} \end{equation*}

Provide reasons (mostly vector space properties) as justification for each of the seven steps of the proof of the following theorem.

For any vectors \(\vect{u},\,\vect{v},\,\vect{w}\in\complex{m}\text{,}\) if \(\vect{u} + \vect{v} = \vect{u} + \vect{w}\text{,}\) then \(\vect{v} = \vect{w}\text{.}\)

Proof: Let \(\vect{u},\,\vect{v},\,\vect{w}\in\complex{m}\text{,}\) and suppose \(\vect{u} + \vect{v} = \vect{u} + \vect{w}\text{.}\) \begin{align*} \vect{v}&=\zerovector + \vect{v}&&\underline{\hspace{9.090909090909092em}}\\ &=(-\vect{u} + \vect{u}) + \vect{v}&&\underline{\hspace{9.090909090909092em}}\\ &=-\vect{u} + (\vect{u} + \vect{v})&&\underline{\hspace{9.090909090909092em}}\\ &=-\vect{u} + (\vect{u} + \vect{w})&&\underline{\hspace{9.090909090909092em}}\\ &=(-\vect{u} + \vect{u}) + \vect{w}&&\underline{\hspace{9.090909090909092em}}\\ &=\zerovector + \vect{w}&&\underline{\hspace{9.090909090909092em}}\\ &=\vect{w}&&\underline{\hspace{9.090909090909092em}} \end{align*}

Provide reasons (mostly vector space properties) as justification for each of the six steps of the proof of the following theorem.

For any vector \(\vect{u}\in\complex{m}\text{,}\) \(0\vect{u}=\zerovector\text{.}\)

Proof: Let \(\vect{u}\in\complex{m}\text{.}\) \begin{align*} \zerovector&= 0\vect{u} +(-0\vect{u})&&\underline{\hspace{9.090909090909092em}}\\ &= (0+0)\vect{u} + (-0\vect{u})&&\underline{\hspace{9.090909090909092em}}\\ &= (0\vect{u}+0\vect{u}) + (-0\vect{u})&&\underline{\hspace{9.090909090909092em}}\\ &= 0\vect{u} + (0\vect{u} + (-0\vect{u}))&&\underline{\hspace{9.090909090909092em}}\\ &= 0\vect{u} + \zerovector&&\underline{\hspace{9.090909090909092em}}\\ &= 0\vect{u}&&\underline{\hspace{9.090909090909092em}} \end{align*}

Provide reasons (mostly vector space properties) as justification for each of the six steps of the proof of the following theorem.

For any scalar \(c\text{,}\) \(c\,\zerovector = \zerovector\text{.}\)

Proof: Let \(c\) be an arbitrary scalar. \begin{align*} \zerovector&= c\zerovector + (-c\zerovector)&&\underline{\hspace{9.090909090909092em}}\\ &= c(\zerovector + \zerovector) + (-c\zerovector)&&\underline{\hspace{9.090909090909092em}}\\ &= (c\zerovector + c\zerovector) + (-c\zerovector)&&\underline{\hspace{9.090909090909092em}}\\ &= c\zerovector + (c\zerovector + (-c\zerovector))&&\underline{\hspace{9.090909090909092em}}\\ &= c\zerovector + \zerovector&&\underline{\hspace{9.090909090909092em}}\\ &= c\zerovector&&\underline{\hspace{9.090909090909092em}} \end{align*}

Prove Property CC of Theorem VSPCV. Write your proof in the style of the proof of Property DSAC given in this section.

Prove Property SMAC of Theorem VSPCV. Write your proof in the style of the proof of Property DSAC given in this section.

Prove Property DVAC of Theorem VSPCV. Write your proof in the style of the proof of Property DSAC given in this section.

Exercises T30, T31 and T32 are about making a careful definition of *vector subtraction*.

Suppose \(\vect{u}\) and \(\vect{v}\) are two vectors in \(\complex{m}\text{.}\) Define a new operation, called “subtraction,” as the new vector denoted \(\vect{u}-\vect{v}\) and defined by \begin{gather*} \vectorentry{\vect{u}-\vect{v}}{i}=\vectorentry{\vect{u}}{i}-\vectorentry{\vect{v}}{i} \end{gather*} for \(1\leq i\leq m\text{.}\) Prove that we can express the subtraction of two vectors in terms of our two basic operations. More precisely, prove that \(\vect{u}-\vect{v}=\vect{u}+(-1)\vect{v}\text{.}\) So in a sense, subtraction is not something new and different, but is just a convenience. Mimic the style of similar proofs in this section.

Prove, by giving counterexamples, that vector subtraction is not commutative and not associative.

Prove that vector subtraction obeys a distributive property. Specifically, prove that \(\alpha\left(\vect{u}-\vect{v}\right)=\alpha\vect{u}-\alpha\vect{v}\text{.}\)

Can you give two different proofs? Distinguish your two proofs by using the alternate descriptions of vector subtraction provided by Exercise VO.T30.