Section VR Vector Representations
Subsection VR Vector Representation
We begin by establishing an invertible linear transformation between any vector space \(V\) of dimension \(n\) and \(\complex{n}\text{.}\) This will allow us to “go back and forth” between the two vector spaces, no matter how abstract the definition of \(V\) might be.
Definition VR. Vector Representation.
Suppose that \(V\) is a vector space with a basis \(B=\set{\vectorlist{v}{n}}\text{.}\) Define a function \(\ltdefn{\vectrepname{B}}{V}{\complex{n}}\) as follows. For \(\vect{w}\in V\) define the column vector \(\vectrep{B}{\vect{w}}\in\complex{n}\) by
\begin{equation*}
\vect{w}=\vectorentry{\vectrep{B}{\vect{w}}}{1}\vect{v}_1+\vectorentry{\vectrep{B}{\vect{w}}}{2}\vect{v}_2+\vectorentry{\vectrep{B}{\vect{w}}}{3}\vect{v}_3+\cdots+\vectorentry{\vectrep{B}{\vect{w}}}{n}\vect{v}_n\text{.}
\end{equation*}
Theorem VRLT. Vector Representation is a Linear Transformation.
The function \(\vectrepname{B}\) (Definition VR) is a linear transformation.
Proof.
We will take a novel approach in this proof. We will construct another function, which we will easily determine is a linear transformation, and then show that this second function is really \(\vectrepname{B}\) in disguise. Here we go.
Since \(B\) is a basis, we can define \(\ltdefn{T}{V}{\complex{n}}\) to be the unique linear transformation such that \(\lteval{T}{\vect{v}_i}=\vect{e}_i\text{,}\) \(1\leq i\leq n\text{,}\) as guaranteed by Theorem LTDB, and where the \(\vect{e}_i\) are the standard unit vectors (Definition SUV). Then, for an arbitrary \(\vect{w}\in V\text{,}\) we have
\begin{align*}
\lteval{T}{\vect{w}}&=\lteval{T}{\vectorentry{\vectrep{B}{\vect{w}}}{1}\vect{v}_1+\vectorentry{\vectrep{B}{\vect{w}}}{2}\vect{v}_2+\cdots+\vectorentry{\vectrep{B}{\vect{w}}}{n}\vect{v}_n}&&\text{Definition VR}\\
&=\vectorentry{\vectrep{B}{\vect{w}}}{1}\lteval{T}{\vect{v}_1}+\vectorentry{\vectrep{B}{\vect{w}}}{2}\lteval{T}{\vect{v}_2}+\cdots+\vectorentry{\vectrep{B}{\vect{w}}}{n}\lteval{T}{\vect{v}_n}&&\text{Theorem LTLC}\\
&=\vectorentry{\vectrep{B}{\vect{w}}}{1}\vect{e}_1+\vectorentry{\vectrep{B}{\vect{w}}}{2}\vect{e}_2+\cdots+\vectorentry{\vectrep{B}{\vect{w}}}{n}\vect{e}_n
\end{align*}
As column vectors, Definition CVE implies that \(\lteval{T}{\vect{w}}=\vectrep{B}{\vect{w}}\text{.}\) Since \(\vect{w}\) was an arbitrary element of \(V\text{,}\) as functions \(T=\vectrepname{B}\text{.}\) Now, since \(T\) is known to be a linear transformation, it must follow that \(\vectrepname{B}\) is also a linear transformation.
Example VRC4. Vector representation in \(\complex{4}\).
Consider the vector \(\vect{y}\in\complex{4}\)
We will find several vector representations of \(\vect{y}\) in this example. Notice that \(\vect{y}\) never changes, but the representations of \(\vect{y}\) do change. One basis for \(\complex{4}\) is
as can be seen by making these vectors the columns of a matrix, checking that the matrix is nonsingular and applying Theorem CNMB. To find \(\vectrep{B}{\vect{y}}\text{,}\) we need to find scalars, \(a_1,\,a_2,\,a_3,\,a_4\) such that
By Theorem SLSLC the desired scalars are a solution to the linear system of equations with a coefficient matrix whose columns are the vectors in \(B\) and with a vector of constants \(\vect{y}\text{.}\) With a nonsingular coefficient matrix, the solution is unique, but this is no surprise as this is the content of Theorem VRRB. This unique solution is
Then by Definition VR, we have
Suppose now that we construct a representation of \(\vect{y}\) relative to another basis of \(\complex{4}\text{,}\)
As with \(B\text{,}\) it is easy to check that \(C\) is a basis. Writing \(\vect{y}\) as a linear combination of the vectors in \(C\) leads to solving a system of four equations in the four unknown scalars with a nonsingular coefficient matrix. The unique solution can be expressed as
so that Definition VR gives
We often perform representations relative to standard bases, but for vectors in \(\complex{m}\) this is a little silly. Let us find the vector representation of \(\vect{y}\) relative to the standard basis (Theorem SUVB),
Then, without any computation, we can check that
so by Definition VR,
which is not very exciting. Notice however that the order in which we place the vectors in the basis is critical to the representation. Let us keep the standard unit vectors as our basis, but rearrange the order we place them in the basis. So a fourth basis is
Then,
so by Definition VR,
So for every possible basis of \(\complex{4}\) we could construct a different representation of \(\vect{y}\text{.}\)
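In Sage, each representation of \(\vect{y}\) amounts to solving a linear system whose coefficient matrix has the basis vectors as columns. A minimal sketch, with illustrative entries standing in for the basis \(B\) and the vector \(\vect{y}\) (the example's actual entries are not reproduced here):
# hypothetical basis vectors supplied as the columns of A
A = column_matrix(QQ, [[1, 1, 0, 1], [0, 1, 1, 0], [1, 0, 1, 1], [0, 0, 1, 1]])
y = vector(QQ, [2, 3, 1, 4])    # hypothetical vector to represent
A.solve_right(y)                # the entries of rho_B(y), unique since A is nonsingular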
Example VRP2. Vector representations in P2.
Consider the vector \(\vect{u}=15+10x-6x^2\in P_2\) from the vector space of polynomials with degree at most 2 (Example VSP). A nice basis for \(P_2\) is
\begin{equation*}
B=\set{1,\,x,\,x^2}
\end{equation*}
so that
\begin{equation*}
\vect{u}=15+10x-6x^2=15(1)+10(x)+(-6)(x^2)
\end{equation*}
so by Definition VR
\begin{equation*}
\vectrep{B}{\vect{u}}=\colvector{15\\10\\-6}\text{.}
\end{equation*}
Another nice basis for \(P_2\) is
so that now it takes a bit of computation to determine the scalars for the representation. We want \(a_1,\,a_2,\,a_3\) so that
Performing the operations in \(P_2\) on the right-hand side, and equating coefficients, gives the three equations in the three unknown scalars,
The coefficient matrix of this system is nonsingular, leading to a unique solution (no surprise there, see Theorem VRRB),
so by Definition VR
While we often form vector representations relative to “nice” bases, nothing prevents us from forming representations relative to “nasty” bases. For example, the set
can be verified as a basis of \(P_2\) by checking linear independence with Definition LI and then arguing that 3 vectors from \(P_2\text{,}\) a vector space of dimension 3 (Theorem DP), must also be a spanning set (Theorem G).
Now we desire scalars \(a_1,\,a_2,\,a_3\) so that
Performing the operations in \(P_2\) on the right-hand side, and equating coefficients, gives the three equations in the three unknown scalars,
The coefficient matrix of this system is nonsingular, leading to a unique solution (no surprise there, see Theorem VRRB),
so by Definition VR
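Such computations are easy to check in Sage by coordinatizing each polynomial by its coefficients relative to \(\set{1,\,x,\,x^2}\) and solving the resulting system. A minimal sketch, with an illustrative basis standing in for the bases above:
R.<x> = QQ[]
u = 15 + 10*x - 6*x^2
basis = [1 + x, x + x^2, 1 + x^2]    # hypothetical basis of P2, not the text's
# coordinatize each basis polynomial by its coefficients, as columns of A
A = column_matrix(QQ, [[p[0], p[1], p[2]] for p in basis])
A.solve_right(vector(QQ, [u[0], u[1], u[2]]))    # the scalars a1, a2, a3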
Theorem VRI. Vector Representation is Injective.
The function \(\vectrepname{B}\) (Definition VR) is an injective linear transformation.
Proof.
We will appeal to Theorem KILT. Suppose \(U\) is a vector space of dimension \(n\text{,}\) so vector representation is \(\ltdefn{\vectrepname{B}}{U}{\complex{n}}\text{.}\) Let \(B=\set{\vectorlist{u}{n}}\) be the basis of \(U\) used in the definition of \(\vectrepname{B}\text{.}\) Suppose \(\vect{u}\in\krn{\vectrepname{B}}\text{.}\) We write \(\vect{u}\) as a linear combination of the vectors in the basis \(B\) where the scalars are the components of the vector representation, \(\lteval{\vectrepname{B}}{\vect{u}}\text{.}\) We have
\begin{align*}
\vect{u}&=\vectorentry{\vectrep{B}{\vect{u}}}{1}\vect{u}_1+\vectorentry{\vectrep{B}{\vect{u}}}{2}\vect{u}_2+\cdots+\vectorentry{\vectrep{B}{\vect{u}}}{n}\vect{u}_n&&\text{Definition VR}\\
&=\vectorentry{\zerovector}{1}\vect{u}_1+\vectorentry{\zerovector}{2}\vect{u}_2+\cdots+\vectorentry{\zerovector}{n}\vect{u}_n&&\text{Definition KLT}\\
&=0\vect{u}_1+0\vect{u}_2+\cdots+0\vect{u}_n&&\text{Definition ZCV}\\
&=\zerovector&&\text{Theorem ZSSM, Property Z}
\end{align*}
Thus an arbitrary vector, \(\vect{u}\text{,}\) from the kernel, \(\krn{\vectrepname{B}}\text{,}\) must equal the zero vector of \(U\text{.}\) So \(\krn{\vectrepname{B}}=\set{\zerovector}\) and by Theorem KILT, \(\vectrepname{B}\) is injective.
Theorem VRS. Vector Representation is Surjective.
The function \(\vectrepname{B}\) (Definition VR) is a surjective linear transformation.
Proof.
We will appeal to Theorem RSLT. Suppose \(U\) is a vector space of dimension \(n\text{,}\) so vector representation is \(\ltdefn{\vectrepname{B}}{U}{\complex{n}}\text{.}\) Let \(B=\set{\vectorlist{u}{n}}\) be the basis of \(U\) used in the definition of \(\vectrepname{B}\text{.}\) Suppose \(\vect{v}\in\complex{n}\text{.}\) Define the vector \(\vect{u}\) by
\begin{equation*}
\vect{u}=\vectorentry{\vect{v}}{1}\vect{u}_1+\vectorentry{\vect{v}}{2}\vect{u}_2+\cdots+\vectorentry{\vect{v}}{n}\vect{u}_n\text{.}
\end{equation*}
Then for \(1\leq i\leq n\text{,}\) by Definition VR,
\begin{equation*}
\vectorentry{\vectrep{B}{\vect{u}}}{i}=\vectorentry{\vect{v}}{i}
\end{equation*}
so the entries of vectors \(\vectrep{B}{\vect{u}}\) and \(\vect{v}\) are equal and Definition CVE yields the vector equality \(\vectrep{B}{\vect{u}}=\vect{v}\text{.}\) This demonstrates that \(\vect{v}\in\rng{\vectrepname{B}}\text{,}\) so \(\complex{n}\subseteq\rng{\vectrepname{B}}\text{.}\) Since \(\rng{\vectrepname{B}}\subseteq\complex{n}\) by Definition RLT, we have \(\rng{\vectrepname{B}}=\complex{n}\) and Theorem RSLT says \(\vectrepname{B}\) is surjective.
Theorem VRILT. Vector Representation is an Invertible Linear Transformation.
The function \(\vectrepname{B}\) (Definition VR) is an invertible linear transformation.
Proof.
The function \(\vectrepname{B}\) (Definition VR) is a linear transformation (Theorem VRLT) that is injective (Theorem VRI) and surjective (Theorem VRS) with domain \(V\) and codomain \(\complex{n}\text{.}\) By Theorem ILTIS we then know that \(\vectrepname{B}\) is an invertible linear transformation.
Sage VR. Vector Representations.
Vector representation is described in the text in a fairly abstract fashion. Sage will support this view (which will be useful in the next section), as well as provide a more practical approach. We will explain both approaches. We begin with an arbitrarily chosen basis. We then create an alternate version of QQ^4 with this basis as a “user basis”, namely V.
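A sketch of this construction follows; the basis entries are illustrative stand-ins, not the ones chosen in the text.
v1 = vector(QQ, [1, 2, -1, 0])
v2 = vector(QQ, [0, 1, 1, 2])
v3 = vector(QQ, [1, 0, 3, -1])
v4 = vector(QQ, [2, -1, 0, 1])
B = [v1, v2, v3, v4]
V = (QQ^4).subspace_with_basis(B)   # all of QQ^4, but carrying B as its user basis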
Now, the construction of a linear transformation will use the basis provided for V. In the proof of Theorem VRLT we defined a linear transformation \(T\) that equaled \(\vectrepname{B}\text{.}\) \(T\) was defined by taking the basis vectors of \(B\) to the basis composed of standard unit vectors (Definition SUV). This is exactly what we will accomplish in the following construction. Note how the basis associated with the domain is automatically paired with the elements of the basis for the codomain.
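Continuing the sketch, one plausible construction sends the user basis of V to the standard basis of QQ^4 (linear_transformation accepts a list of images for the domain's basis vectors):
# each vi is sent to the corresponding standard unit vector of QQ^4
rho = linear_transformation(V, QQ^4, list((QQ^4).basis()))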
First, we verify Theorem VRILT:
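Continuing the sketch, the verification might read:
rho.is_invertible()    # True
rho.matrix()           # the 4 x 4 identity matrix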
Notice that the matrix of the linear transformation is the identity matrix. This might look odd now, but we will have a full explanation soon. Let us see if this linear transformation behaves as it should. We will “coordinatize” an arbitrary vector, w.
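In the sketch, coordinatizing w and rebuilding it as a linear combination could look like this (entries illustrative):
w = vector(QQ, [3, -1, 2, 5])
c = rho(w)                                         # coordinates of w relative to B
lincombo = c[0]*v1 + c[1]*v2 + c[2]*v3 + c[3]*v4
lincombo == w                                      # True, by Definition VR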
Notice how the expression for lincombo is exactly the messy expression displayed in Definition VR. More precisely, we could even write this as:
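# in the notation of the running sketch (a reconstruction, not the text's cell)
w == sum([rho(w)[i]*B[i] for i in range(4)])    # True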
Or we can test this equality repeatedly with random vectors.
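For example, continuing the sketch:
u = V.random_element()
u == sum([rho(u)[i]*B[i] for i in range(4)])    # True, for every random u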
Finding a vector representation is such a fundamental operation that Sage has an easier command, bypassing the need to create a linear transformation. It does still require constructing a vector space with the alternate basis. Here goes, repeating the prior example.
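The command is coordinate_vector, which returns coordinates relative to the user basis; a sketch continuing the example above:
c = V.coordinate_vector(w)
c == rho(w)                                    # True: same representation as before
V.linear_combination_of_basis(list(c)) == w    # True: travel in the opposite direction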
Boom!
Subsection CVS Characterization of Vector Spaces
Limiting our attention to vector spaces with finite dimension, we now describe every possible vector space. All of them. Really.
Theorem CFDVS. Characterization of Finite Dimensional Vector Spaces.
Suppose that \(V\) is a vector space with dimension \(n\text{.}\) Then \(V\) is isomorphic to \(\complex{n}\text{.}\)
Proof.
Since \(V\) has dimension \(n\) we can find a basis of \(V\) of size \(n\) (Definition D) which we will call \(B\text{.}\) The linear transformation \(\vectrepname{B}\) is an invertible linear transformation from \(V\) to \(\complex{n}\text{,}\) so by Definition IVS, we have that \(V\) and \(\complex{n}\) are isomorphic.
Example TIVS. Two isomorphic vector spaces.
The vector space of polynomials with degree 8 or less, \(P_8\text{,}\) has dimension 9 (Theorem DP). By Theorem CFDVS, \(P_8\) is isomorphic to \(\complex{9}\text{.}\)
Example CVSR. Crazy vector space revealed.
The crazy vector space, \(C\) of Example CVS, has dimension 2 by Example DC. By Theorem CFDVS, \(C\) is isomorphic to \(\complex{2}\text{.}\) Hmmmm. Not really so crazy after all?
Example ASC. A subspace characterized.
In Example DSP4 we determined that a certain subspace \(W\) of \(P_4\) has dimension \(4\text{.}\) By Theorem CFDVS, \(W\) is isomorphic to \(\complex{4}\text{.}\)
Theorem IFDVS. Isomorphism of Finite Dimensional Vector Spaces.
Suppose \(U\) and \(V\) are both finite-dimensional vector spaces. Then \(U\) and \(V\) are isomorphic if and only if \(\dimension{U}=\dimension{V}\text{.}\)
Proof.
(⇒)
This is just the statement proved in Theorem IVSED.
(⇐)
This is the advertised converse of Theorem IVSED. We will assume \(U\) and \(V\) have equal dimension and discover that they are isomorphic vector spaces. Let \(n\) be the common dimension of \(U\) and \(V\text{.}\) Then by Theorem CFDVS there are isomorphisms \(\ltdefn{T}{U}{\complex{n}}\) and \(\ltdefn{S}{V}{\complex{n}}\text{.}\)
\(T\) is therefore an invertible linear transformation by Definition IVS. Similarly, \(S\) is an invertible linear transformation, and so \(\ltinverse{S}\) is an invertible linear transformation (Theorem IILT). The composition of invertible linear transformations is again invertible (Theorem CIVLT) so the composition of \(\ltinverse{S}\) with \(T\) is invertible. Then \(\ltdefn{\left(\compose{\ltinverse{S}}{T}\right)}{U}{V}\) is an invertible linear transformation from \(U\) to \(V\) and Definition IVS says \(U\) and \(V\) are isomorphic.
Example MIVS. Multiple isomorphic vector spaces.
\(\complex{10}\text{,}\) \(P_{9}\text{,}\) \(M_{25}\) and \(M_{52}\) are all vector spaces and each has dimension 10. By Theorem IFDVS each is isomorphic to any other.
The subspace of \(M_{44}\) that contains all the symmetric matrices (Definition SYM) has dimension \(10\text{,}\) so this subspace is also isomorphic to each of the four vector spaces above.
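The dimension count here is quick to verify: a symmetric matrix is determined by its entries on or above the diagonal, so for \(M_{44}\)
\begin{equation*}
4+3+2+1=\frac{4(4+1)}{2}=10\text{.}
\end{equation*}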
Subsection CP Coordinatization Principle
With \(\vectrepname{B}\) available as an invertible linear transformation, we can translate between vectors in a vector space \(U\) of dimension \(m\) and \(\complex{m}\text{.}\) Furthermore, as a linear transformation, \(\vectrepname{B}\) respects the addition and scalar multiplication in \(U\text{,}\) while \(\ltinverse{\vectrepname{B}}\) respects the addition and scalar multiplication in \(\complex{m}\text{.}\) Since our definitions of linear independence, spans, bases and dimension are all built up from linear combinations, we will finally be able to translate fundamental properties between abstract vector spaces (\(U\)) and concrete vector spaces (\(\complex{m}\)).
Theorem CLI. Coordinatization and Linear Independence.
Suppose that \(U\) is a vector space with a basis \(B\) of size \(n\text{.}\) Then
\begin{equation*}
S=\set{\vectorlist{u}{k}}
\end{equation*}
is a linearly independent subset of \(U\) if and only if
\begin{equation*}
R=\set{\vectrep{B}{\vect{u}_1},\,\vectrep{B}{\vect{u}_2},\,\vectrep{B}{\vect{u}_3},\,\ldots,\,\vectrep{B}{\vect{u}_k}}
\end{equation*}
is a linearly independent subset of \(\complex{n}\text{.}\)
Proof.
The linear transformation \(\vectrepname{B}\) is an isomorphism between \(U\) and \(\complex{n}\) (Theorem VRILT). As an invertible linear transformation, \(\vectrepname{B}\) is an injective linear transformation (Theorem ILTIS), and \(\ltinverse{\vectrepname{B}}\) is also an injective linear transformation (Theorem IILT, Theorem ILTIS).
(⇒)
Since \(\vectrepname{B}\) is an injective linear transformation and \(S\) is linearly independent, Theorem ILTLI says that \(R\) is linearly independent.
(⇐)
If we apply \(\ltinverse{\vectrepname{B}}\) to each element of \(R\text{,}\) we will create the set \(S\text{.}\) Since we are assuming \(R\) is linearly independent and \(\ltinverse{\vectrepname{B}}\) is injective, Theorem ILTLI says that \(S\) is linearly independent.
Theorem CSS. Coordinatization and Spanning Sets.
Suppose that \(U\) is a vector space with a basis \(B\) of size \(n\text{.}\) Then
\begin{equation*}
\vect{u}\in\spn{\set{\vectorlist{u}{k}}}
\end{equation*}
if and only if
\begin{equation*}
\vectrep{B}{\vect{u}}\in\spn{\set{\vectrep{B}{\vect{u}_1},\,\vectrep{B}{\vect{u}_2},\,\vectrep{B}{\vect{u}_3},\,\ldots,\,\vectrep{B}{\vect{u}_k}}}\text{.}
\end{equation*}
Proof.
(⇒)
Suppose \(\vect{u}\in\spn{\set{\vectorlist{u}{k}}}\text{.}\) Then we know there are scalars, \(\scalarlist{a}{k}\text{,}\) such that
\begin{equation*}
\vect{u}=a_1\vect{u}_1+a_2\vect{u}_2+a_3\vect{u}_3+\cdots+a_k\vect{u}_k\text{.}
\end{equation*}
Then, by Theorem LTLC,
\begin{equation*}
\vectrep{B}{\vect{u}}=a_1\vectrep{B}{\vect{u}_1}+a_2\vectrep{B}{\vect{u}_2}+a_3\vectrep{B}{\vect{u}_3}+\cdots+a_k\vectrep{B}{\vect{u}_k}
\end{equation*}
which says that \(\vectrep{B}{\vect{u}}\in\spn{\set{\vectrep{B}{\vect{u}_1},\,\vectrep{B}{\vect{u}_2},\,\vectrep{B}{\vect{u}_3},\,\ldots,\,\vectrep{B}{\vect{u}_k}}}\text{.}\)
(⇐)
Suppose that \(\vectrep{B}{\vect{u}}\in\spn{\set{\vectrep{B}{\vect{u}_1},\,\vectrep{B}{\vect{u}_2},\,\vectrep{B}{\vect{u}_3},\,\ldots,\,\vectrep{B}{\vect{u}_k}}}\text{.}\) Then there are scalars \(\scalarlist{b}{k}\) such that
\begin{equation*}
\vectrep{B}{\vect{u}}=b_1\vectrep{B}{\vect{u}_1}+b_2\vectrep{B}{\vect{u}_2}+b_3\vectrep{B}{\vect{u}_3}+\cdots+b_k\vectrep{B}{\vect{u}_k}\text{.}
\end{equation*}
Recall that \(\vectrepname{B}\) is invertible (Theorem VRILT), so
\begin{align*}
\vect{u}&=\lteval{\ltinverse{\vectrepname{B}}}{\vectrep{B}{\vect{u}}}\\
&=\lteval{\ltinverse{\vectrepname{B}}}{b_1\vectrep{B}{\vect{u}_1}+b_2\vectrep{B}{\vect{u}_2}+\cdots+b_k\vectrep{B}{\vect{u}_k}}\\
&=b_1\lteval{\ltinverse{\vectrepname{B}}}{\vectrep{B}{\vect{u}_1}}+b_2\lteval{\ltinverse{\vectrepname{B}}}{\vectrep{B}{\vect{u}_2}}+\cdots+b_k\lteval{\ltinverse{\vectrepname{B}}}{\vectrep{B}{\vect{u}_k}}&&\text{Theorem LTLC}\\
&=b_1\vect{u}_1+b_2\vect{u}_2+\cdots+b_k\vect{u}_k
\end{align*}
which says that \(\vect{u}\in\spn{\set{\vectorlist{u}{k}}}\text{.}\)
Example CP2. Coordinatizing in \(P_2\).
In Example VRP2 we needed to know that
is a basis for \(P_2\text{.}\) With Theorem CLI and Theorem CSS this task is much easier.
First, choose a known basis for \(P_2\text{,}\) a basis that forms vector representations easily. We will choose
\begin{equation*}
B=\set{1,\,x,\,x^2}\text{.}
\end{equation*}
Now, form the subset of \(\complex{3}\) that is the result of applying \(\vectrepname{B}\) to each element of \(D\text{,}\)
and ask if \(F\) is a linearly independent spanning set for \(\complex{3}\text{.}\) This is easily seen to be the case by forming a matrix \(A\) whose columns are the vectors of \(F\text{,}\) row-reducing \(A\) to the identity matrix \(I_3\text{,}\) and then using the nonsingularity of \(A\) to assert that \(F\) is a basis for \(\complex{3}\) (Theorem CNMB). Now, since \(F\) is a basis for \(\complex{3}\text{,}\) Theorem CLI and Theorem CSS tell us that \(D\) is also a basis for \(P_2\text{.}\)
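A sketch of this nonsingularity check in Sage, with illustrative coordinate vectors standing in for the true elements of \(F\text{:}\)
F = [vector(QQ, [2, 1, -2]), vector(QQ, [1, 3, 0]), vector(QQ, [3, -1, 1])]
A = column_matrix(F)
A.is_invertible()    # True exactly when F is a basis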
Principle CP. The Coordinatization Principle.
Suppose that \(U\) is a vector space with a basis \(B\) of size \(n\text{.}\) Then any question about \(U\text{,}\) or its elements, which ultimately depends on the vector addition or scalar multiplication in \(U\text{,}\) or depends on linear independence or spanning, may be translated into the same question in \(\complex{n}\) by application of the linear transformation \(\vectrepname{B}\) to the relevant vectors. Once the question is answered in \(\complex{n}\text{,}\) the answer may be translated back to \(U\) through application of the inverse linear transformation \(\ltinverse{\vectrepname{B}}\) (if necessary).
Example CM32. Coordinatization in \(M_{32}\).
This is a simple example of the Coordinatization Principle, depending only on the fact that coordinatizing is an invertible linear transformation (Theorem VRILT). Suppose we have a linear combination to perform in \(M_{32}\text{,}\) the vector space of \(3\times 2\) matrices, but we are averse to doing the operations of \(M_{32}\) (Definition MA, Definition MSM). More specifically, suppose we are faced with the computation
We choose a nice basis for \(M_{32}\) (or a nasty basis if we are so inclined),
and apply \(\vectrepname{B}\) to each vector in the linear combination. This gives us a new computation, now in the vector space \(\complex{6}\text{,}\) which we can compute with operations in \(\complex{6}\) (Definition CVA, Definition CVSM),
We are after the result of a computation in \(M_{32}\text{,}\) so we now can apply \(\ltinverse{\vectrepname{B}}\) to obtain a \(3\times 2\) matrix,
which is exactly the matrix we would have computed had we just performed the matrix operations in the first place. So this was not meant to be an easier way to compute a linear combination of two matrices, just a different way.
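The round trip of this example is easy to sketch in Sage, with illustrative matrices and scalars standing in for those of the example; flattening a matrix row by row plays the role of \(\vectrepname{B}\) relative to a standard basis of \(M_{32}\text{.}\)
A = matrix(QQ, 3, 2, [1, 2, 3, 4, 5, 6])    # hypothetical 3 x 2 matrices
M = matrix(QQ, 3, 2, [0, 1, 1, 0, 2, -1])
vA = vector(QQ, A.list())                   # coordinatize: rho_B(A) in QQ^6
vM = vector(QQ, M.list())
combo = 3*vA - 2*vM                         # the linear combination, now in QQ^6
result = matrix(QQ, 3, 2, list(combo))      # apply the inverse of rho_B
result == 3*A - 2*M                         # True: matches direct matrix arithmetic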
Sage SUTH2. Sage Under The Hood, Round 2.
You will have noticed that we have never constructed examples involving our favorite abstract vector spaces, such as the vector space of polynomials with fixed maximum degree, the vector space of matrices of a fixed size, or even the crazy vector space. There is nothing to stop us (or you) from implementing these examples in Sage as vector spaces. Maybe someday it will happen. But since Sage is built to be a tool for serious mathematical research, the designers recognize that this is not necessary.
Theorem CFDVS tells us that every finite-dimensional vector space can be described (loosely speaking) by just a field of scalars (for us, \(\complexes\) in the text, QQ
in Sage) and the dimension. You can study whatever wacky vector space you might dream up, or whatever very complicated vector space that is important for particle physics, and through vector representation (“coordinatization”), you can convert your calculations to and from Sage.
Reading Questions VR Reading Questions
1.
The vector space of \(3\times 5\) matrices, \(M_{35}\text{,}\) is isomorphic to what fundamental vector space?
2.
A basis for \(\complex{3}\) is
Compute \(\vectrep{B}{\colvector{5\\8\\-1}}\text{.}\)
3.
What is the first “surprise,” and why is it surprising?
Exercises VR Exercises
C10.
In the vector space \(\complex{3}\text{,}\) compute the vector representation \(\vectrep{B}{\vect{v}}\) for the basis \(B\) and vector \(\vect{v}\text{,}\)
We need to express the vector \(\vect{v}\) as a linear combination of the vectors in \(B\text{.}\) Theorem VRRB tells us we will be able to do this, and do it uniquely. The vector equation
becomes (via Theorem SLSLC) a system of linear equations with augmented matrix,
This system has the unique solution \(a_1=2\text{,}\) \(a_2=-2\text{,}\) \(a_3=3\text{.}\) So by Definition VR,
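A quick Sage check, with hypothetical basis vectors chosen here so that the solution matches the one stated (the exercise's actual entries are not reproduced above):
b1 = vector(QQ, [1, 1, 0])    # hypothetical stand-ins for the basis B
b2 = vector(QQ, [0, 1, 1])
b3 = vector(QQ, [1, 0, 1])
v = vector(QQ, [5, 0, 1])
A = column_matrix([b1, b2, b3])
A.solve_right(v)    # (2, -2, 3)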
C20.
Rework Example CM32 replacing the basis \(B\) by the basis
The following computations replicate the computations given in Example CM32, only using the basis \(C\text{,}\)
M10.
Prove that the set \(S\) below is a basis for the vector space of \(2\times 2\) matrices, \(M_{22}\text{.}\) Do this by choosing a natural basis for \(M_{22}\) and coordinatizing the elements of \(S\) with respect to this basis. Examine the resulting set of column vectors from \(\complex{4}\) and apply the Coordinatization Principle,
M20.
The set \(B=\set{\vect{v}_1,\,\vect{v}_2,\,\vect{v}_3,\,\vect{v}_4}\) is a basis of the vector space \(P_3\text{,}\) polynomials with degree 3 or less. Therefore \(\vectrepname{B}\) is a linear transformation, according to Theorem VRLT. Find a “formula” for \(\vectrepname{B}\text{.}\) In other words, find an expression for \(\vectrep{B}{a+bx+cx^2+dx^3}\text{.}\)
Our strategy is to determine the values of the linear transformation on a βniceβ basis for the domain, and then apply the ideas of Theorem LTDB to obtain our formula. \(\vectrepname{B}\) is a linear transformation of the form \(\ltdefn{\vectrepname{B}}{P_3}{\complex{4}}\text{,}\) so for a basis of the domain we choose a very simple one: \(C=\set{1,\,x,\,x^2,\,x^3}\text{.}\) We now give the vector representations of the elements of \(C\text{,}\) which are obtained by solving the relevant systems of equations obtained from linear combinations of the elements of \(B\text{.}\)
This is enough information to determine the linear transformation uniquely, and in particular, to allow us to use Theorem LTLC to construct a formula. We have
\begin{equation*}
\vectrep{B}{a+bx+cx^2+dx^3}=a\,\vectrep{B}{1}+b\,\vectrep{B}{x}+c\,\vectrep{B}{x^2}+d\,\vectrep{B}{x^3}\text{.}
\end{equation*}