\(\newcommand{\orderof}[1]{\sim #1} \newcommand{\Z}{\mathbb{Z}} \newcommand{\reals}{\mathbb{R}} \newcommand{\real}[1]{\mathbb{R}^{#1}} \newcommand{\complexes}{\mathbb{C}} \newcommand{\complex}[1]{\mathbb{C}^{#1}} \newcommand{\conjugate}[1]{\overline{#1}} \newcommand{\modulus}[1]{\left\lvert#1\right\rvert} \newcommand{\zerovector}{\vect{0}} \newcommand{\zeromatrix}{\mathcal{O}} \newcommand{\innerproduct}[2]{\left\langle#1,\,#2\right\rangle} \newcommand{\norm}[1]{\left\lVert#1\right\rVert} \newcommand{\dimension}[1]{\dim\left(#1\right)} \newcommand{\nullity}[1]{n\left(#1\right)} \newcommand{\rank}[1]{r\left(#1\right)} \newcommand{\ds}{\oplus} \newcommand{\detname}[1]{\det\left(#1\right)} \newcommand{\detbars}[1]{\left\lvert#1\right\rvert} \newcommand{\trace}[1]{t\left(#1\right)} \newcommand{\sr}[1]{#1^{1/2}} \newcommand{\spn}[1]{\left\langle#1\right\rangle} \newcommand{\nsp}[1]{\mathcal{N}\!\left(#1\right)} \newcommand{\csp}[1]{\mathcal{C}\!\left(#1\right)} \newcommand{\rsp}[1]{\mathcal{R}\!\left(#1\right)} \newcommand{\lns}[1]{\mathcal{L}\!\left(#1\right)} \newcommand{\per}[1]{#1^\perp} \newcommand{\augmented}[2]{\left\lbrack\left.#1\,\right\rvert\,#2\right\rbrack} \newcommand{\linearsystem}[2]{\mathcal{LS}\!\left(#1,\,#2\right)} \newcommand{\homosystem}[1]{\linearsystem{#1}{\zerovector}} \newcommand{\rowopswap}[2]{R_{#1}\leftrightarrow R_{#2}} \newcommand{\rowopmult}[2]{#1R_{#2}} \newcommand{\rowopadd}[3]{#1R_{#2}+R_{#3}} \newcommand{\leading}[1]{\boxed{#1}} \newcommand{\rref}{\xrightarrow{\text{RREF}}} \newcommand{\elemswap}[2]{E_{#1,#2}} \newcommand{\elemmult}[2]{E_{#2}\left(#1\right)} \newcommand{\elemadd}[3]{E_{#2,#3}\left(#1\right)} \newcommand{\scalarlist}[2]{{#1}_{1},\,{#1}_{2},\,{#1}_{3},\,\ldots,\,{#1}_{#2}} \newcommand{\vect}[1]{\mathbf{#1}} \newcommand{\colvector}[1]{\begin{bmatrix}#1\end{bmatrix}} \newcommand{\vectorcomponents}[2]{\colvector{#1_{1}\\#1_{2}\\#1_{3}\\\vdots\\#1_{#2}}} 
\newcommand{\vectorlist}[2]{\vect{#1}_{1},\,\vect{#1}_{2},\,\vect{#1}_{3},\,\ldots,\,\vect{#1}_{#2}} \newcommand{\vectorentry}[2]{\left\lbrack#1\right\rbrack_{#2}} \newcommand{\matrixentry}[2]{\left\lbrack#1\right\rbrack_{#2}} \newcommand{\lincombo}[3]{#1_{1}\vect{#2}_{1}+#1_{2}\vect{#2}_{2}+#1_{3}\vect{#2}_{3}+\cdots +#1_{#3}\vect{#2}_{#3}} \newcommand{\matrixcolumns}[2]{\left\lbrack\vect{#1}_{1}|\vect{#1}_{2}|\vect{#1}_{3}|\ldots|\vect{#1}_{#2}\right\rbrack} \newcommand{\transpose}[1]{#1^{t}} \newcommand{\inverse}[1]{#1^{-1}} \newcommand{\submatrix}[3]{#1\left(#2|#3\right)} \newcommand{\adj}[1]{\transpose{\left(\conjugate{#1}\right)}} \newcommand{\adjoint}[1]{#1^\ast} \newcommand{\set}[1]{\left\{#1\right\}} \newcommand{\setparts}[2]{\left\lbrace#1\,\middle|\,#2\right\rbrace} \newcommand{\card}[1]{\left\lvert#1\right\rvert} \newcommand{\setcomplement}[1]{\overline{#1}} \newcommand{\charpoly}[2]{p_{#1}\left(#2\right)} \newcommand{\eigenspace}[2]{\mathcal{E}_{#1}\left(#2\right)} \newcommand{\eigensystem}[3]{\lambda&=#2&\eigenspace{#1}{#2}&=\spn{\set{#3}}} \newcommand{\geneigenspace}[2]{\mathcal{G}_{#1}\left(#2\right)} \newcommand{\algmult}[2]{\alpha_{#1}\left(#2\right)} \newcommand{\geomult}[2]{\gamma_{#1}\left(#2\right)} \newcommand{\indx}[2]{\iota_{#1}\left(#2\right)} \newcommand{\ltdefn}[3]{#1\colon #2\rightarrow#3} \newcommand{\lteval}[2]{#1\left(#2\right)} \newcommand{\ltinverse}[1]{#1^{-1}} \newcommand{\restrict}[2]{{#1}|_{#2}} \newcommand{\preimage}[2]{#1^{-1}\left(#2\right)} \newcommand{\rng}[1]{\mathcal{R}\!\left(#1\right)} \newcommand{\krn}[1]{\mathcal{K}\!\left(#1\right)} \newcommand{\compose}[2]{{#1}\circ{#2}} \newcommand{\vslt}[2]{\mathcal{LT}\left(#1,\,#2\right)} \newcommand{\isomorphic}{\cong} \newcommand{\similar}[2]{\inverse{#2}#1#2} \newcommand{\vectrepname}[1]{\rho_{#1}} \newcommand{\vectrep}[2]{\lteval{\vectrepname{#1}}{#2}} \newcommand{\vectrepinvname}[1]{\ltinverse{\vectrepname{#1}}} 
\newcommand{\vectrepinv}[2]{\lteval{\ltinverse{\vectrepname{#1}}}{#2}} \newcommand{\matrixrep}[3]{M^{#1}_{#2,#3}} \newcommand{\matrixrepcolumns}[4]{\left\lbrack \left.\vectrep{#2}{\lteval{#1}{\vect{#3}_{1}}}\right|\left.\vectrep{#2}{\lteval{#1}{\vect{#3}_{2}}}\right|\left.\vectrep{#2}{\lteval{#1}{\vect{#3}_{3}}}\right|\ldots\left|\vectrep{#2}{\lteval{#1}{\vect{#3}_{#4}}}\right.\right\rbrack} \newcommand{\cbm}[2]{C_{#1,#2}} \newcommand{\jordan}[2]{J_{#1}\left(#2\right)} \newcommand{\hadamard}[2]{#1\circ #2} \newcommand{\hadamardidentity}[1]{J_{#1}} \newcommand{\hadamardinverse}[1]{\widehat{#1}} \newcommand{\lt}{<} \newcommand{\gt}{>} \newcommand{\amp}{&} \)

Section VS Vector Spaces

In this section we present a formal definition of a vector space, which will lead to an extra increment of abstraction. Once defined, we study its most basic properties.

Subsection VS Vector Spaces

Here is one of the two most important definitions in the entire course.

Definition VS Vector Space

Suppose that \(V\) is a set upon which we have defined two operations: (1) vector addition, which combines two elements of \(V\) and is denoted by “+”, and (2) scalar multiplication, which combines a complex number with an element of \(V\) and is denoted by juxtaposition. Then \(V\text{,}\) along with the two operations, is a vector space over \(\complexes\) if the following ten properties hold.

AC Additive Closure

If \(\vect{u},\,\vect{v}\in V\text{,}\) then \(\vect{u}+\vect{v}\in V\text{.}\)

SC Scalar Closure

If \(\alpha\in\complexes\) and \(\vect{u}\in V\text{,}\) then \(\alpha\vect{u}\in V\text{.}\)

C Commutativity

If \(\vect{u},\,\vect{v}\in V\text{,}\) then \(\vect{u}+\vect{v}=\vect{v}+\vect{u}\text{.}\)

AA Additive Associativity

If \(\vect{u},\,\vect{v},\,\vect{w}\in V\text{,}\) then \(\vect{u}+\left(\vect{v}+\vect{w}\right)=\left(\vect{u}+\vect{v}\right)+\vect{w}\text{.}\)

Z Zero Vector

There is a vector, \(\zerovector\text{,}\) called the zero vector, such that \(\vect{u}+\zerovector=\vect{u}\) for all \(\vect{u}\in V\text{.}\)

AI Additive Inverses

If \(\vect{u}\in V\text{,}\) then there exists a vector \(-\vect{u}\in V\) so that \(\vect{u}+(-\vect{u})=\zerovector\text{.}\)

SMA Scalar Multiplication Associativity

If \(\alpha,\,\beta\in\complexes\) and \(\vect{u}\in V\text{,}\) then \(\alpha(\beta\vect{u})=(\alpha\beta)\vect{u}\text{.}\)

DVA Distributivity across Vector Addition

If \(\alpha\in\complexes\) and \(\vect{u},\,\vect{v}\in V\text{,}\) then \(\alpha(\vect{u}+\vect{v})=\alpha\vect{u}+\alpha\vect{v}\text{.}\)

DSA Distributivity across Scalar Addition

If \(\alpha,\,\beta\in\complexes\) and \(\vect{u}\in V\text{,}\) then \((\alpha+\beta)\vect{u}=\alpha\vect{u}+\beta\vect{u}\text{.}\)

O One

If \(\vect{u}\in V\text{,}\) then \(1\vect{u}=\vect{u}\text{.}\)

The objects in \(V\) are called vectors, no matter what else they might really be, simply by virtue of being elements of a vector space.
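None of the ten properties can be verified by computation in general, but for a concrete candidate they can at least be spot-checked. The sketch below does this for \(\complex{3}\) with the usual entry-wise operations; it is a sanity check on a few sample vectors, not a proof, and the names `vadd` and `smult` are illustrative choices, not notation from the text.

```python
# Spot-check (not a proof!) of several vector space properties for C^3,
# modeling column vectors as Python tuples of complex numbers.

def vadd(u, v):
    """Entry-wise vector addition."""
    return tuple(a + b for a, b in zip(u, v))

def smult(alpha, u):
    """Entry-wise scalar multiplication."""
    return tuple(alpha * a for a in u)

u = (1 + 2j, 3.0, -1j)
v = (2.0, -1 + 1j, 4.0)
w = (0.5j, 2.0, 1 + 1j)
zero = (0.0, 0.0, 0.0)
alpha, beta = 2 - 1j, 3.0

assert vadd(u, v) == vadd(v, u)                                # Property C
assert vadd(u, vadd(v, w)) == vadd(vadd(u, v), w)              # Property AA
assert vadd(u, zero) == u                                      # Property Z
assert smult(alpha, smult(beta, u)) == smult(alpha * beta, u)  # Property SMA
assert smult(1, u) == u                                        # Property O
```

A check like this can falsify a candidate (one failed assertion settles the matter) but can never establish the properties for all vectors and scalars; that requires proof.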

Now, there are several important observations to make. Many of these will be easier to understand on a second or third reading, and especially after carefully studying the examples in Subsection VS.EVS.

An axiom is often a “self-evident” truth, something so fundamental that we all agree it is true and accept it without proof. Typically, it would be the logical underpinning that we would begin to build theorems upon. Some might refer to the ten properties of Definition VS as axioms, implying that a vector space is a very natural object and the ten properties are the essence of a vector space. We will instead emphasize that we will begin with a definition of a vector space. After studying the remainder of this chapter, you might return here and remind yourself how all our forthcoming theorems and definitions rest on this foundation.

As we will see shortly, the objects in \(V\) can be anything, even though we will call them vectors. We have been working with vectors frequently, but we should stress here that these have so far just been column vectors — scalars arranged in a columnar list of fixed length. In a similar vein, you have used the symbol “+” for many years to represent the addition of numbers (scalars). We have extended its use to the addition of column vectors and to the addition of matrices, and now we are going to recycle it even further and let it denote vector addition in any possible vector space. So when describing a new vector space, we will have to define exactly what “+” is. Similar comments apply to scalar multiplication. Conversely, we can define our operations any way we like, so long as the ten properties are fulfilled (see Example CVS).
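To make the point that “+” and juxtaposition are whatever we define them to be, here is a sketch of one nonstandard pair of operations on pairs of numbers, in the spirit of Example CVS. Treat the particular formulas below as illustrative assumptions (the operations in Example CVS itself should be checked against that example); the names `cadd` and `csmult` are ours.

```python
# Nonstandard "addition" and "scalar multiplication" on pairs of numbers.
# The formulas are illustrative assumptions in the spirit of Example CVS.

def cadd(u, v):
    """A shifted vector addition on pairs."""
    return (u[0] + v[0] + 1, u[1] + v[1] + 1)

def csmult(alpha, u):
    """A scalar multiplication compatible with cadd."""
    return (alpha * u[0] + alpha - 1, alpha * u[1] + alpha - 1)

zero = (-1, -1)  # with cadd, the zero vector is NOT (0, 0)

u, v = (3, 4), (1, 2)
assert cadd(u, zero) == u        # Property Z holds, with zero = (-1, -1)
assert cadd(u, v) == cadd(v, u)  # Property C still holds
assert csmult(1, u) == u         # Property O: 1u = u
```

Notice that the familiar-looking symbols conceal unfamiliar behavior: the zero vector here is \((-1,\,-1)\text{,}\) which is exactly why each new vector space must spell out its operations explicitly.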

In Definition VS, the scalars do not have to be complex numbers. They can come from what are called in more advanced mathematics, “fields”. Examples of fields are the set of complex numbers, the set of real numbers, the set of rational numbers, and even the finite set of “binary numbers”, \(\set{0,\,1}\text{.}\) There are many, many others. In this case we would call \(V\) a vector space over (the field) \(F\text{.}\)

A vector space is composed of three objects, a set and two operations. Some would explicitly state in the definition that \(V\) must be a nonempty set, but we can infer this from Property Z, since the set cannot be empty and contain a vector that behaves as the zero vector. Also, we usually use the same symbol for both the set and the vector space itself. Do not let this convenience fool you into thinking the operations are secondary!

This discussion has either convinced you that we are really embarking on a new level of abstraction, or it has seemed cryptic, mysterious or nonsensical. You might want to return to this section in a few days and give it another read then. In any case, let us look at some concrete examples now.

Subsection EVS Examples of Vector Spaces

Our aim in this subsection is to give you a storehouse of examples to work with, to become comfortable with the ten vector space properties and to convince you that the multitude of examples justifies (at least initially) making such a broad definition as Definition VS. Some of our claims will be justified by reference to previous theorems; we will prove some facts from scratch; and we will do one nontrivial example completely. In other places, our usual thoroughness will be neglected, so grab paper and pencil and play along.

So, the set of all matrices of a fixed size forms a vector space. That entitles us to call a matrix a vector, since a matrix is an element of a vector space. For example, if \(A,\,B\in M_{34}\) then we call \(A\) and \(B\) “vectors,” and we even use our previous notation for column vectors to refer to \(A\) and \(B\text{.}\) So we could legitimately write expressions like \begin{equation*} \vect{u}+\vect{v}=A+B=B+A=\vect{v}+\vect{u} \end{equation*} This could lead to some confusion, but it is not too great a danger. But it is worth comment.
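The expression above can be made concrete with a quick sketch treating \(3\times 4\) matrices as the “vectors,” modeled here as lists of lists; `madd` is an illustrative name for entry-wise matrix addition.

```python
# Matrices as "vectors": entry-wise addition of 3x4 matrices commutes,
# so u + v = v + u reads as A + B = B + A.

def madd(A, B):
    """Entry-wise matrix addition."""
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

A = [[1, 2, 0, -1],
     [0, 3, 1, 2],
     [4, 0, -2, 1]]
B = [[2, -1, 1, 0],
     [1, 1, 0, 3],
     [0, 2, 2, -1]]

assert madd(A, B) == madd(B, A)  # Property C, with matrices as the vectors
```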

The previous two examples may be less than satisfying. We made all the relevant definitions long ago. And the required verifications were all handled by quoting old theorems. However, it is important to consider these two examples first. We have been studying vectors and matrices carefully (Chapter V, Chapter M), and both objects, along with their operations, have certain properties in common, as you may have noticed in comparing Theorem VSPCV with Theorem VSPM. Indeed, it is these two theorems that motivate us to formulate the abstract definition of a vector space, Definition VS. Now, if we prove some general theorems about vector spaces (as we will shortly in Subsection VS.VSP), we can then instantly apply the conclusions to both \(\complex{m}\) and \(M_{mn}\text{.}\) Notice too, how we have taken six definitions and two theorems and reduced them down to two examples. With greater generalization and abstraction our old ideas get downgraded in stature.

Let us look at some more examples, now considering some new vector spaces.

Here is a unique example.

Perhaps some of the above definitions and verifications seem obvious or like splitting hairs, but the next example should convince you that they are necessary. We will study this one carefully. Ready? Check your preconceptions at the door.

Subsection VSP Vector Space Properties

Subsection VS.EVS has provided us with an abundance of examples of vector spaces, most of them containing useful and interesting mathematical objects along with natural operations. In this subsection we will prove some general properties of vector spaces. Some of these results will again seem obvious, but it is important to understand why it is necessary to state and prove them. A typical hypothesis will be “Let \(V\) be a vector space.” From this we may assume the ten properties of Definition VS, and nothing more. It is like starting over, as we learn about what can happen in this new algebra we are learning. But the power of this careful approach is that we can apply these theorems to any vector space we encounter — those in the previous examples, or new ones we have not yet contemplated. Or perhaps new ones that nobody has ever contemplated. We will illustrate some of these results with examples from the crazy vector space (Example CVS), but mostly we are stating theorems and doing proofs. These proofs do not get too involved, but are not trivial either, so these are good theorems to try proving yourself before you study the proof given here. (See Proof Technique P.)

First we show that there is just one zero vector. Notice that the properties only require there to be at least one, and say nothing about there possibly being more. That is because we can use the ten properties of a vector space (Definition VS) to learn that there can never be more than one. To require that this extra condition be stated as an eleventh property would make the definition of a vector space more complicated than it needs to be.
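The standard argument is short. A sketch: suppose \(\zerovector\) and \(\zerovector^{\prime}\) both behave as zero vectors, so each satisfies Property Z. Then

```latex
\begin{align*}
\zerovector^{\prime}
&=\zerovector^{\prime}+\zerovector&&\text{Property Z for }\zerovector\\
&=\zerovector+\zerovector^{\prime}&&\text{Property C}\\
&=\zerovector&&\text{Property Z for }\zerovector^{\prime}\text{.}
\end{align*}
```

So any two candidates for the zero vector are equal, using nothing beyond Properties Z and C.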

Proof
Proof

As obvious as the next three theorems appear, nowhere have we guaranteed that the zero scalar, scalar multiplication and the zero vector all interact this way. Until we have proved it, anyway.

Proof

Here is another theorem that looks like it should be obvious, but is still in need of a proof.

Proof

Here is another one that sure looks obvious. But understand that we have chosen to use certain notation because it makes the theorem's conclusion look so nice. The theorem is not true because the notation looks so good; it still needs a proof. If we had really wanted to make this point, we might have used notation like \(\vect{u}^\sharp\) for the additive inverse of \(\vect{u}\text{.}\) Then we would have written the defining property, Property AI, as \(\vect{u}+\vect{u}^\sharp=\zerovector\text{.}\) This theorem would become \(\vect{u}^\sharp=(-1)\vect{u}\text{.}\) Not really quite as pretty, is it?
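The heart of the argument is one computation showing that \((-1)\vect{u}\) does the job of an additive inverse, assuming the fact \(0\vect{u}=\zerovector\) has already been established:

```latex
\begin{align*}
\vect{u}+(-1)\vect{u}
&=1\vect{u}+(-1)\vect{u}&&\text{Property O}\\
&=\left(1+(-1)\right)\vect{u}&&\text{Property DSA}\\
&=0\vect{u}\\
&=\zerovector\text{.}
\end{align*}
```

Combined with the uniqueness of additive inverses, this forces \((-1)\vect{u}=-\vect{u}\text{.}\)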

Proof

Because of this theorem, we can now write linear combinations like \(6\vect{u}_1+(-4)\vect{u}_2\) as \(6\vect{u}_1-4\vect{u}_2\text{,}\) even though we have not formally defined an operation called vector subtraction.

Our next theorem is a bit different from several of the others in the list. Rather than making a declaration (“the zero vector is unique”) it is an implication (“if…, then…”) and so can be used in proofs to convert a vector equality into two possibilities, one a scalar equality and the other a vector equality. It should remind you of the situation for complex numbers. If \(\alpha,\,\beta\in\complexes\) and \(\alpha\beta=0\text{,}\) then \(\alpha=0\) or \(\beta=0\text{.}\) This critical property is the driving force behind using a factorization to solve a polynomial equation.
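One half of that implication, sketched: suppose \(\alpha\vect{u}=\zerovector\) and \(\alpha\neq 0\text{,}\) and assume the fact \(\alpha\zerovector=\zerovector\) (Theorem ZVSM) is available. Then

```latex
\begin{align*}
\vect{u}
&=1\vect{u}&&\text{Property O}\\
&=\left(\frac{1}{\alpha}\alpha\right)\vect{u}&&\alpha\neq 0\\
&=\frac{1}{\alpha}\left(\alpha\vect{u}\right)&&\text{Property SMA}\\
&=\frac{1}{\alpha}\zerovector&&\text{hypothesis}\\
&=\zerovector&&\text{Theorem ZVSM.}
\end{align*}
```

So whenever the scalar is nonzero, the vector must be the zero vector, which is exactly the two-possibility conclusion described above.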

Proof

Subsection RD Recycling Definitions

When we say that \(V\) is a vector space, we then know we have a set of objects (the “vectors”), but we also know we have been provided with two operations (“vector addition” and “scalar multiplication”) and these operations behave with these objects according to the ten properties of Definition VS. One combines two vectors and produces a vector, the other takes a scalar and a vector, producing a vector as the result. So if \(\vect{u}_1,\,\vect{u}_2,\,\vect{u}_3\in V\) then an expression like \begin{equation*} 5\vect{u}_1+7\vect{u}_2-13\vect{u}_3 \end{equation*} would be unambiguous in any of the vector spaces we have discussed in this section. And the resulting object would be another vector in the vector space. If you were tempted to call the above expression a linear combination, you would be right. Four of the definitions that were central to our discussions in Chapter V were stated in the context of vectors being column vectors, but were purposely kept broad enough that they could be applied in the context of any vector space. They only rely on the presence of scalars, vectors, vector addition and scalar multiplication to make sense. We will restate them shortly, unchanged, except that their titles and acronyms no longer refer to column vectors, and the hypothesis of being in a vector space has been added. Take the time now to look forward and review each one, and begin to form some connections to what we have done earlier and what we will be doing in subsequent sections and chapters. Specifically, compare the following pairs of definitions:

Subsection Reading Questions

2

In the crazy vector space \(C\) (Example CVS), compute the linear combination \begin{equation*} 2(3,\,4)+(-6)(1,\,2)\text{.} \end{equation*}

3

Suppose that \(\alpha\) is a scalar and \(\zerovector\) is the zero vector. Why should we prove anything as obvious as \(\alpha\zerovector=\zerovector\text{,}\) as we did in Theorem ZVSM?

Subsection Exercises

M10

Define a possibly new vector space by beginning with the set and vector addition from \(\complex{2}\) (Example VSCV) but change the definition of scalar multiplication to \begin{equation*} \alpha\vect{x}=\zerovector=\colvector{0\\0},\qquad\alpha\in\complexes,\ \vect{x}\in\complex{2}\text{.} \end{equation*} Prove that the first nine properties required for a vector space hold, but Property O does not hold.

This example shows us that we cannot expect to be able to derive Property O as a consequence of assuming the first nine properties. In other words, we cannot slim down our list of properties by jettisoning the last one, and still have the same collection of objects qualify as vector spaces.
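The failure described in M10 can be witnessed numerically. The sketch below (with illustrative names `vadd` and `smult_zero`) checks one distributivity property that survives trivially, and then checks that Property O does not:

```python
# M10's modified scalar multiplication: every scalar multiple is the
# zero vector. Names are illustrative, not from the text.

def vadd(u, v):
    """Usual vector addition on pairs."""
    return (u[0] + v[0], u[1] + v[1])

def smult_zero(alpha, u):
    """Modified scalar multiplication: always the zero vector."""
    return (0, 0)

x = (2, 5)
alpha, beta = 3, 4

# Property DSA holds trivially: both sides are (0, 0).
assert smult_zero(alpha + beta, x) == vadd(smult_zero(alpha, x),
                                           smult_zero(beta, x))

# Property O fails: 1x should equal x, but here it is (0, 0).
assert smult_zero(1, x) != x
```

Of course, one sample vector cannot prove the first nine properties hold; the exercise asks for those proofs in general.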

M11

Let \(V\) be the set \(\complex{2}\) with the usual vector addition, but with scalar multiplication defined by \begin{equation*} \alpha \colvector{x\\y} = \colvector{\alpha y \\ \alpha x} \end{equation*} Determine whether or not \(V\) is a vector space with these operations.

Solution
M12

Let \(V\) be the set \(\complex{2}\) with the usual scalar multiplication, but with vector addition defined by \begin{equation*} \colvector{x\\y} + \colvector{z\\w} = \colvector{y + w \\ x + z} \end{equation*} Determine whether or not \(V\) is a vector space with these operations.

Solution
M13

Let \(V\) be the set \(M_{22}\) with the usual scalar multiplication, but with addition defined by \(A + B = \zeromatrix_{2,2}\) for all \(2 \times 2\) matrices \(A\) and \(B\text{.}\) Determine whether or not \(V\) is a vector space with these operations.

Solution
M14

Let \(V\) be the set \(M_{22}\) with the usual addition, but with scalar multiplication defined by \(\alpha A= \zeromatrix_{2,2}\) for all \(2 \times 2\) matrices \(A\) and scalars \(\alpha\text{.}\) Determine whether or not \(V\) is a vector space with these operations.

Solution
M15

Consider the following sets of \(3 \times 3\) matrices, where the symbol \(*\) indicates the position of an arbitrary complex number. Determine whether or not these sets form vector spaces with the usual operations of addition and scalar multiplication for matrices.

  1. All matrices of the form \(\begin{bmatrix} * & * & 1\\ * & 1 & *\\ 1 & * & * \end{bmatrix}\)
  2. All matrices of the form \(\begin{bmatrix} * & 0 & *\\ 0 & * & 0\\ * & 0 & * \end{bmatrix}\)
  3. All matrices of the form \(\begin{bmatrix} * & 0 & 0\\ 0 & * & 0\\ 0 & 0 & * \end{bmatrix}\) (These are the diagonal matrices.)
  4. All matrices of the form \(\begin{bmatrix} * & * & *\\ 0 & * & *\\ 0 & 0 & * \end{bmatrix}\) (These are the upper triangular matrices.)
Solution
M20

Explain why we need to define the vector space \(P_n\) as the set of all polynomials with degree up to and including \(n\) instead of the more obvious set of all polynomials of degree exactly \(n\text{.}\)

Solution
M21

The set of integers is denoted \(\mathbb{Z}\text{.}\) Does the set \(\mathbb{Z}^2 = \setparts{\colvector{m\\n}}{m,n\in\mathbb{Z}}\) with the operations of standard addition and scalar multiplication of vectors form a vector space?

Solution

The next three problems suggest that under the right situations we can “cancel.” In practice, these techniques should be avoided in other proofs. Prove each of the following statements.

T21

Suppose that \(V\) is a vector space, and \(\vect{u},\,\vect{v},\,\vect{w}\in V\text{.}\) If \(\vect{w}+\vect{u}=\vect{w}+\vect{v}\text{,}\) then \(\vect{u}=\vect{v}\text{.}\)

Solution
T22

Suppose \(V\) is a vector space, \(\vect{u},\,\vect{v}\in V\) and \(\alpha\) is a nonzero scalar from \(\complexes\text{.}\) If \(\alpha\vect{u}=\alpha\vect{v}\text{,}\) then \(\vect{u}=\vect{v}\text{.}\)

Solution
T23

Suppose \(V\) is a vector space, \(\vect{u}\neq\zerovector\) is a vector in \(V\) and \(\alpha,\,\beta\in\complexes\text{.}\) If \(\alpha\vect{u}=\beta\vect{u}\text{,}\) then \(\alpha=\beta\text{.}\)

Solution
T30

Suppose that \(V\) is a vector space and \(\alpha\in\complexes\) is a scalar such that \(\alpha\vect{x}=\vect{x}\) for every \(\vect{x}\in V\text{.}\) Prove that \(\alpha = 1\text{.}\) In other words, Property O is not duplicated for any other scalar but the “special” scalar, 1. (This question was suggested by James Gallagher.)

Solution
T31

Construct an alternate proof of Theorem AISM by demonstrating that \((-1)\vect{x}\) is the additive inverse of \(\vect{x}\text{.}\)

Solution