\(\newcommand{\orderof}[1]{\sim #1} \newcommand{\Z}{\mathbb{Z}} \newcommand{\reals}{\mathbb{R}} \newcommand{\real}[1]{\mathbb{R}^{#1}} \newcommand{\complexes}{\mathbb{C}} \newcommand{\complex}[1]{\mathbb{C}^{#1}} \newcommand{\conjugate}[1]{\overline{#1}} \newcommand{\modulus}[1]{\left\lvert#1\right\rvert} \newcommand{\zerovector}{\vect{0}} \newcommand{\zeromatrix}{\mathcal{O}} \newcommand{\innerproduct}[2]{\left\langle#1,\,#2\right\rangle} \newcommand{\norm}[1]{\left\lVert#1\right\rVert} \newcommand{\dimension}[1]{\dim\left(#1\right)} \newcommand{\nullity}[1]{n\left(#1\right)} \newcommand{\rank}[1]{r\left(#1\right)} \newcommand{\ds}{\oplus} \newcommand{\detname}[1]{\det\left(#1\right)} \newcommand{\detbars}[1]{\left\lvert#1\right\rvert} \newcommand{\trace}[1]{t\left(#1\right)} \newcommand{\sr}[1]{#1^{1/2}} \newcommand{\spn}[1]{\left\langle#1\right\rangle} \newcommand{\nsp}[1]{\mathcal{N}\!\left(#1\right)} \newcommand{\csp}[1]{\mathcal{C}\!\left(#1\right)} \newcommand{\rsp}[1]{\mathcal{R}\!\left(#1\right)} \newcommand{\lns}[1]{\mathcal{L}\!\left(#1\right)} \newcommand{\per}[1]{#1^\perp} \newcommand{\augmented}[2]{\left\lbrack\left.#1\,\right\rvert\,#2\right\rbrack} \newcommand{\linearsystem}[2]{\mathcal{LS}\!\left(#1,\,#2\right)} \newcommand{\homosystem}[1]{\linearsystem{#1}{\zerovector}} \newcommand{\rowopswap}[2]{R_{#1}\leftrightarrow R_{#2}} \newcommand{\rowopmult}[2]{#1R_{#2}} \newcommand{\rowopadd}[3]{#1R_{#2}+R_{#3}} \newcommand{\leading}[1]{\boxed{#1}} \newcommand{\rref}{\xrightarrow{\text{RREF}}} \newcommand{\elemswap}[2]{E_{#1,#2}} \newcommand{\elemmult}[2]{E_{#2}\left(#1\right)} \newcommand{\elemadd}[3]{E_{#2,#3}\left(#1\right)} \newcommand{\scalarlist}[2]{{#1}_{1},\,{#1}_{2},\,{#1}_{3},\,\ldots,\,{#1}_{#2}} \newcommand{\vect}[1]{\mathbf{#1}} \newcommand{\colvector}[1]{\begin{bmatrix}#1\end{bmatrix}} \newcommand{\vectorcomponents}[2]{\colvector{#1_{1}\\#1_{2}\\#1_{3}\\\vdots\\#1_{#2}}} 
\newcommand{\vectorlist}[2]{\vect{#1}_{1},\,\vect{#1}_{2},\,\vect{#1}_{3},\,\ldots,\,\vect{#1}_{#2}} \newcommand{\vectorentry}[2]{\left\lbrack#1\right\rbrack_{#2}} \newcommand{\matrixentry}[2]{\left\lbrack#1\right\rbrack_{#2}} \newcommand{\lincombo}[3]{#1_{1}\vect{#2}_{1}+#1_{2}\vect{#2}_{2}+#1_{3}\vect{#2}_{3}+\cdots +#1_{#3}\vect{#2}_{#3}} \newcommand{\matrixcolumns}[2]{\left\lbrack\vect{#1}_{1}|\vect{#1}_{2}|\vect{#1}_{3}|\ldots|\vect{#1}_{#2}\right\rbrack} \newcommand{\transpose}[1]{#1^{t}} \newcommand{\inverse}[1]{#1^{-1}} \newcommand{\submatrix}[3]{#1\left(#2|#3\right)} \newcommand{\adj}[1]{\transpose{\left(\conjugate{#1}\right)}} \newcommand{\adjoint}[1]{#1^\ast} \newcommand{\set}[1]{\left\{#1\right\}} \newcommand{\setparts}[2]{\left\lbrace#1\,\middle|\,#2\right\rbrace} \newcommand{\card}[1]{\left\lvert#1\right\rvert} \newcommand{\setcomplement}[1]{\overline{#1}} \newcommand{\charpoly}[2]{p_{#1}\left(#2\right)} \newcommand{\eigenspace}[2]{\mathcal{E}_{#1}\left(#2\right)} \newcommand{\eigensystem}[3]{\lambda&=#2&\eigenspace{#1}{#2}&=\spn{\set{#3}}} \newcommand{\geneigenspace}[2]{\mathcal{G}_{#1}\left(#2\right)} \newcommand{\algmult}[2]{\alpha_{#1}\left(#2\right)} \newcommand{\geomult}[2]{\gamma_{#1}\left(#2\right)} \newcommand{\indx}[2]{\iota_{#1}\left(#2\right)} \newcommand{\ltdefn}[3]{#1\colon #2\rightarrow#3} \newcommand{\lteval}[2]{#1\left(#2\right)} \newcommand{\ltinverse}[1]{#1^{-1}} \newcommand{\restrict}[2]{{#1}|_{#2}} \newcommand{\preimage}[2]{#1^{-1}\left(#2\right)} \newcommand{\rng}[1]{\mathcal{R}\!\left(#1\right)} \newcommand{\krn}[1]{\mathcal{K}\!\left(#1\right)} \newcommand{\compose}[2]{{#1}\circ{#2}} \newcommand{\vslt}[2]{\mathcal{LT}\left(#1,\,#2\right)} \newcommand{\isomorphic}{\cong} \newcommand{\similar}[2]{\inverse{#2}#1#2} \newcommand{\vectrepname}[1]{\rho_{#1}} \newcommand{\vectrep}[2]{\lteval{\vectrepname{#1}}{#2}} \newcommand{\vectrepinvname}[1]{\ltinverse{\vectrepname{#1}}} 
\newcommand{\vectrepinv}[2]{\lteval{\ltinverse{\vectrepname{#1}}}{#2}} \newcommand{\matrixrep}[3]{M^{#1}_{#2,#3}} \newcommand{\matrixrepcolumns}[4]{\left\lbrack \left.\vectrep{#2}{\lteval{#1}{\vect{#3}_{1}}}\right|\left.\vectrep{#2}{\lteval{#1}{\vect{#3}_{2}}}\right|\left.\vectrep{#2}{\lteval{#1}{\vect{#3}_{3}}}\right|\ldots\left|\vectrep{#2}{\lteval{#1}{\vect{#3}_{#4}}}\right.\right\rbrack} \newcommand{\cbm}[2]{C_{#1,#2}} \newcommand{\jordan}[2]{J_{#1}\left(#2\right)} \newcommand{\hadamard}[2]{#1\circ #2} \newcommand{\hadamardidentity}[1]{J_{#1}} \newcommand{\hadamardinverse}[1]{\widehat{#1}} \newcommand{\lt}{<} \newcommand{\gt}{>} \newcommand{\amp}{&} \)

Section VO Vector Operations

In this section we define some new operations involving vectors, and collect some basic properties of these operations. Begin by recalling our definition of a column vector as an ordered list of complex numbers, written vertically (Definition CV). The collection of all possible vectors of a fixed size is a commonly used set, so we start with its definition.

Subsection CV Column Vectors

Definition VSCV Vector Space of Column Vectors

The vector space \(\complex{m}\) is the set of all column vectors (Definition CV) of size \(m\) with entries from the set of complex numbers, \(\complexes\text{.}\)

When a set similar to this is defined using only column vectors where all the entries are from the real numbers, it is written as \({\mathbb R}^m\) and is known as Euclidean \(m\)-space.

The term vector is used in a variety of different ways. We have defined it as an ordered list written vertically. It could simply be an ordered list of numbers, and perhaps written as \(\left\langle 2,\,3,\,-1,\,6\right\rangle\text{.}\) Or it could be interpreted as a point in \(m\) dimensions, such as \(\left(3,\,4,\,-2\right)\) representing a point in three dimensions relative to \(x\text{,}\) \(y\) and \(z\) axes. With an interpretation as a point, we can construct an arrow from the origin to the point which is consistent with the notion that a vector has direction and magnitude.

All of these ideas can be shown to be related and equivalent, so keep that in mind as you connect the ideas of this course with ideas from other disciplines. For now, we will stick with the idea that a vector is just a list of numbers, in some particular order.

Subsection VEASM Vector Equality, Addition, Scalar Multiplication

We begin our study of this set by defining what it means for two vectors to be the same.

Definition CVE Column Vector Equality

Suppose that \(\vect{u},\,\vect{v}\in\complex{m}\text{.}\) Then \(\vect{u}\) and \(\vect{v}\) are equal, written \(\vect{u}=\vect{v}\text{,}\) if \begin{gather*} \vectorentry{\vect{u}}{i}=\vectorentry{\vect{v}}{i} \end{gather*} for all \(1\leq i\leq m\text{.}\)

Now this may seem like a silly (or even stupid) thing to say so carefully. Of course two vectors are equal if they are equal for each corresponding entry! Well, this is not as silly as it appears. We will see a few occasions later where the obvious definition is not the right one. And besides, in doing mathematics we need to be very careful about making all the necessary definitions and making them unambiguous. And we have done that here.

Notice that the symbol “=” is now doing triple-duty. We know from our earlier education what it means for two numbers (real or complex) to be equal, and we take this for granted. In Definition SE we defined what it meant for two sets to be equal. Now we have defined what it means for two vectors to be equal, and this new definition builds on our prior notion of equality of numbers, since we apply the condition \(\vectorentry{\vect{u}}{i}=\vectorentry{\vect{v}}{i}\) repeatedly, for all \(1\leq i\leq m\text{.}\) So think carefully about your objects when you see an equal sign, and consider just which notion of equality you have encountered. This will be especially important when you are asked to construct proofs whose conclusion states that two objects are equal. If you have an electronic copy of the book, such as the PDF version, searching on “Definition CVE” can be an instructive exercise. See how often, and where, the definition is employed.

OK, let us do an example of vector equality that begins to hint at the utility of this definition.
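Here is a small illustration in that spirit (the particular numbers are chosen here for concreteness and are not the text's own example). By Definition CVE, the single vector equation \begin{equation*} \colvector{x_1+x_2\\ 2x_1-x_2}=\colvector{3\\0} \end{equation*} holds exactly when both scalar equations \(x_1+x_2=3\) and \(2x_1-x_2=0\) hold. So one statement of vector equality quietly packages an entire system of equations, which hints at why the definition will prove useful.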

We will now define two operations on the set \(\complex{m}\text{.}\) By this we mean well-defined procedures that somehow convert vectors into other vectors. Here are two of the most basic definitions of the entire course.

Definition CVA Column Vector Addition

Suppose that \(\vect{u},\,\vect{v}\in\complex{m}\text{.}\) The sum of \(\vect{u}\) and \(\vect{v}\) is the vector \(\vect{u}+\vect{v}\) defined by \begin{gather*} \vectorentry{\vect{u}+\vect{v}}{i}=\vectorentry{\vect{u}}{i}+\vectorentry{\vect{v}}{i} \end{gather*} for \(1\leq i\leq m\text{.}\)

So vector addition takes two vectors of the same size and combines them (in a natural way!) to create a new vector of the same size. Notice that this definition is required, even if we agree that this is the obvious, right, natural or correct way to do it. Notice too that the symbol ‘+’ is being recycled. We all know how to add numbers, but now we have the same symbol extended to double-duty and we use it to indicate how to add two new objects, vectors. And this definition of our new addition of vectors is built on our prior meaning of addition of numbers in the expressions \(\vectorentry{\vect{u}}{i}+\vectorentry{\vect{v}}{i}\text{.}\) Think about your objects, especially when doing proofs. Vector addition is easy; here is an example from \(\complex{4}\text{.}\)
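A worked sketch of such a sum (the entries are chosen here for illustration): \begin{equation*} \colvector{2\\-3\\4\\2}+\colvector{1\\5\\2\\-7} =\colvector{2+1\\-3+5\\4+2\\2+(-7)} =\colvector{3\\2\\6\\-5}\text{.} \end{equation*} Each entry of the sum is computed by Definition CVA from the corresponding entries of the two summands.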

Our second operation takes two objects of different types, specifically a number and a vector, and combines them to create another vector. In this context we call the number a scalar in order to emphasize that it is not a vector.

Definition CVSM Column Vector Scalar Multiplication

Suppose \(\vect{u}\in\complex{m}\) and \(\alpha\in\complexes\text{.}\) Then the scalar multiple of \(\vect{u}\) by \(\alpha\) is the vector \(\alpha\vect{u}\) defined by \begin{gather*} \vectorentry{\alpha\vect{u}}{i}=\alpha\vectorentry{\vect{u}}{i} \end{gather*} for \(1\leq i\leq m\text{.}\)

Notice that we are doing a kind of multiplication here, but we are defining a new type, perhaps in what appears to be a natural way. We use juxtaposition (smashing two symbols together side-by-side) to denote this operation rather than using a symbol like we did with vector addition. So this can be another source of confusion. When two symbols are next to each other, are we doing regular old multiplication, the kind we have done for years, or are we doing scalar vector multiplication, the operation we just defined? Think about your objects — if the first object is a scalar, and the second is a vector, then it must be that we are doing our new operation, and the result of this operation will be another vector.

Notice how consistency in notation can be an aid here. If we write scalars as lower case Greek letters from the start of the alphabet (such as \(\alpha\text{,}\) \(\beta\text{,}\) …) and write vectors in bold Latin letters from the end of the alphabet (\(\vect{u}\text{,}\) \(\vect{v}\text{,}\) …), then we have some hints about what type of objects we are working with. This can be a blessing and a curse, since when we go read another book about linear algebra, or read an application in another discipline (physics, economics, …) the types of notation employed may be very different and hence unfamiliar.

Again, computationally, vector scalar multiplication is very easy.
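The entrywise definitions of both operations translate directly into code. Here is a minimal Python sketch (the function names `vec_add` and `scalar_mult` are our own, not notation from the text), representing a column vector as a list and using Python's built-in complex numbers:

```python
def vec_add(u, v):
    """Entrywise sum of two vectors of the same size (Definition CVA)."""
    if len(u) != len(v):
        raise ValueError("vectors must have the same size")
    return [ui + vi for ui, vi in zip(u, v)]

def scalar_mult(alpha, u):
    """Entrywise scalar multiple of a vector (Definition CVSM)."""
    return [alpha * ui for ui in u]

# An example in C^3 with complex entries.
u = [2 + 3j, 1 - 1j, 4]
v = [-1 + 0j, 2j, 5]
print(vec_add(u, v))        # each entry is the sum of corresponding entries
print(scalar_mult(2j, u))   # each entry is multiplied by the scalar 2i
```

Note that, just as in the definitions, the size check belongs to vector addition alone; a scalar multiple is defined for a vector of any size.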

Subsection VSP Vector Space Properties

With definitions of vector addition and scalar multiplication we can state, and prove, several properties of each operation, and some properties that involve their interplay. We now collect ten of them here for later reference.


Many of the conclusions of our theorems can be characterized as “identities,” especially when we are establishing basic properties of operations such as those in this section. Most of the properties listed in Theorem VSPCV are examples. So some advice about the style we use for proving identities is appropriate right now. Have a look at Proof Technique PI.

Be careful with the notion of the vector \(-\vect{u}\text{.}\) This is a vector that we add to \(\vect{u}\) so that the result is the particular vector \(\zerovector\text{.}\) This is basically a property of vector addition. It happens that we can compute \(-\vect{u}\) using the other operation, scalar multiplication. We can prove this directly by writing, for \(1\leq i\leq m\text{,}\) \begin{gather*} \vectorentry{-\vect{u}}{i} =-\vectorentry{\vect{u}}{i} =(-1)\vectorentry{\vect{u}}{i} =\vectorentry{(-1)\vect{u}}{i} \end{gather*} so that \(-\vect{u}=(-1)\vect{u}\) by Definition CVE. We will see later how to derive this property as a consequence of several of the ten properties listed in Theorem VSPCV.
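As a quick numeric sanity check (which illustrates, but of course does not prove, the identity above), we can confirm on one vector that adding \((-1)\vect{u}\) to \(\vect{u}\) yields the zero vector; the helper functions here are our own illustrative names:

```python
def vec_add(u, v):
    """Entrywise sum of two vectors of the same size (Definition CVA)."""
    return [ui + vi for ui, vi in zip(u, v)]

def scalar_mult(alpha, u):
    """Entrywise scalar multiple of a vector (Definition CVSM)."""
    return [alpha * ui for ui in u]

u = [3 - 2j, 1j, -5]
neg_u = scalar_mult(-1, u)   # candidate for -u, computed as (-1)u
print(vec_add(u, neg_u))     # every entry is zero
```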

Similarly, we will often write something you would immediately recognize as vector subtraction. This could be placed on a firm theoretical foundation — as you can do yourself with Exercise VO.T30.

A final note. Property AAC implies that we do not have to be careful about how we “parenthesize” the addition of vectors. In other words, there is nothing to be gained by writing \(\left(\vect{u}+\vect{v}\right)+\left(\vect{w}+\left(\vect{x}+\vect{y}\right)\right)\) rather than \(\vect{u}+\vect{v}+\vect{w}+\vect{x}+\vect{y}\text{,}\) since we get the same result no matter which order we choose to perform the four additions. So we will not be careful about using parentheses this way.

Subsection Reading Questions


Where have you seen vectors used before in other courses? How were they different?


In words only, when are two vectors equal?


Perform the following computation with vector operations. \begin{equation*} 2\colvector{1\\5\\0} + (-3)\colvector{7\\6\\5} \end{equation*}



Compute \begin{equation*} 4\colvector{2\\-3\\4\\1\\0}+ (-2)\colvector{1\\2\\-5\\2\\4}+ \colvector{-1\\3\\0\\1\\2}\text{.} \end{equation*}


Solve the given vector equation for \(x\text{,}\) or explain why no solution exists. \begin{equation*} 3\colvector{1\\2\\-1}+ 4\colvector{2\\0\\x}= \colvector{11\\6\\17} \end{equation*}


Solve the given vector equation for \(\alpha\text{,}\) or explain why no solution exists. \begin{equation*} \alpha\colvector{1\\2\\-1}+ 4\colvector{3\\4\\2} = \colvector{-1\\0\\4} \end{equation*}


Solve the given vector equation for \(\alpha\text{,}\) or explain why no solution exists. \begin{equation*} \alpha\colvector{3\\2\\-2}+ \colvector{6\\1\\2} = \colvector{0\\-3\\6} \end{equation*}


Find \(\alpha\) and \(\beta\) that solve the vector equation \begin{equation*} \alpha\colvector{1\\0}+\beta\colvector{0\\1} = \colvector{3\\2}\text{.} \end{equation*}


Find \(\alpha\) and \(\beta\) that solve the vector equation. \begin{equation*} \alpha\colvector{2\\1}+ \beta\colvector{1\\3 }= \colvector{5\\0} \end{equation*}


Provide reasons (mostly vector space properties) as justification for each of the seven steps of the proof of the following theorem.

For any vectors \(\vect{u},\,\vect{v},\,\vect{w}\in\complex{m}\text{,}\) if \(\vect{u} + \vect{v} = \vect{u} + \vect{w}\text{,}\) then \(\vect{v} = \vect{w}\text{.}\)

Proof: Let \(\vect{u},\,\vect{v},\,\vect{w}\in\complex{m}\text{,}\) and suppose \(\vect{u} + \vect{v} = \vect{u} + \vect{w}\text{.}\) \begin{align*} \vect{v}&=\zerovector + \vect{v}&&\underline{\hspace{9em}}\\ &=(-\vect{u} + \vect{u}) + \vect{v}&&\underline{\hspace{9em}}\\ &=-\vect{u} + (\vect{u} + \vect{v})&&\underline{\hspace{9em}}\\ &=-\vect{u} + (\vect{u} + \vect{w})&&\underline{\hspace{9em}}\\ &=(-\vect{u} + \vect{u}) + \vect{w}&&\underline{\hspace{9em}}\\ &=\zerovector + \vect{w}&&\underline{\hspace{9em}}\\ &=\vect{w}&&\underline{\hspace{9em}} \end{align*}


Provide reasons (mostly vector space properties) as justification for each of the six steps of the proof of the following theorem.

For any vector \(\vect{u}\in\complex{m}\text{,}\) \(0\vect{u}=\zerovector\text{.}\)

Proof: Let \(\vect{u}\in\complex{m}\text{.}\) \begin{align*} \zerovector&= 0\vect{u} +(-0\vect{u})&&\underline{\hspace{9em}}\\ &= (0+0)\vect{u} + (-0\vect{u})&&\underline{\hspace{9em}}\\ &= (0\vect{u}+0\vect{u}) + (-0\vect{u})&&\underline{\hspace{9em}}\\ &= 0\vect{u} + (0\vect{u} + (-0\vect{u}))&&\underline{\hspace{9em}}\\ &= 0\vect{u} + \zerovector&&\underline{\hspace{9em}}\\ &= 0\vect{u}&&\underline{\hspace{9em}} \end{align*}


Provide reasons (mostly vector space properties) as justification for each of the six steps of the proof of the following theorem.

For any scalar \(c\text{,}\) \(c\,\zerovector = \zerovector\text{.}\)

Proof: Let \(c\) be an arbitrary scalar. \begin{align*} \zerovector&= c\zerovector + (-c\zerovector)&&\underline{\hspace{9em}}\\ &= c(\zerovector + \zerovector) + (-c\zerovector)&&\underline{\hspace{9em}}\\ &= (c\zerovector + c\zerovector) + (-c\zerovector)&&\underline{\hspace{9em}}\\ &= c\zerovector + (c\zerovector + (-c\zerovector))&&\underline{\hspace{9em}}\\ &= c\zerovector + \zerovector&&\underline{\hspace{9em}}\\ &= c\zerovector&&\underline{\hspace{9em}} \end{align*}


Prove Property CC of Theorem VSPCV. Write your proof in the style of the proof of Property DSAC given in this section.


Exercises T30, T31 and T32 are about making a careful definition of vector subtraction.


Suppose \(\vect{u}\) and \(\vect{v}\) are two vectors in \(\complex{m}\text{.}\) Define a new operation, called “subtraction,” which produces the new vector denoted \(\vect{u}-\vect{v}\) and defined by \begin{gather*} \vectorentry{\vect{u}-\vect{v}}{i}=\vectorentry{\vect{u}}{i}-\vectorentry{\vect{v}}{i} \end{gather*} for \(1\leq i\leq m\text{.}\) Prove that we can express the subtraction of two vectors in terms of our two basic operations. More precisely, prove that \(\vect{u}-\vect{v}=\vect{u}+(-1)\vect{v}\text{.}\) So in a sense, subtraction is not something new and different, but is just a convenience. Mimic the style of similar proofs in this section.


Prove, by giving counterexamples, that vector subtraction is not commutative and not associative.


Prove that vector subtraction obeys a distributive property. Specifically, prove that \(\alpha\left(\vect{u}-\vect{v}\right)=\alpha\vect{u}-\alpha\vect{v}\text{.}\)

Can you give two different proofs? Distinguish your two proofs by using the alternate descriptions of vector subtraction provided by Exercise VO.T30.