Section CB Change of Basis
We have seen in Section MR that a linear transformation can be represented by a matrix, once we pick bases for the domain and codomain. How does the matrix representation change if we choose different bases? Which bases lead to especially nice representations? From the infinite possibilities, what is the best possible representation? This section will begin to answer these questions. But first we need to define eigenvalues for linear transformations and the change-of-basis matrix.
Subsection EELT Eigenvalues and Eigenvectors of Linear Transformations
We now define the notion of an eigenvalue and eigenvector of a linear transformation. It should not be too surprising, especially if you remind yourself of the close relationship between matrices and linear transformations.
Definition EELT Eigenvalue and Eigenvector of a Linear Transformation
Suppose that $\ltdefn{T}{V}{V}$ is a linear transformation. Then a nonzero vector $\vect{v}\in V$ is an eigenvector of $T$ for the eigenvalue $\lambda$ if $\lt{T}{\vect{v}}=\lambda\vect{v}$.
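To see the definition in action, here is a minimal Sage sketch; the transformation, the vector, and the eigenvalue are invented purely for illustration.

A = matrix(QQ, [[2, 1], [1, 2]])                        # matrix that defines T by T(x) = A*x
T = linear_transformation(QQ^2, QQ^2, A, side='right')
v = vector(QQ, [1, 1])                                  # a nonzero candidate eigenvector
T(v) == 3*v                                             # True, so v is an eigenvector of T for lambda = 3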
We will see shortly the best method for computing the eigenvalues and eigenvectors of a linear transformation, but for now, here are some examples to verify that such things really do exist.
Example ELTBM Eigenvectors of linear transformation between matrices
Here is another.
Example ELTBP Eigenvectors of linear transformation between polynomials
Of course, these examples are meant only to illustrate the definition of eigenvectors and eigenvalues for linear transformations, and therefore raise the question, “How would I find eigenvectors?” We will have an answer before we finish this section. We need one more construction first.
Sage ENDO Endomorphisms
Subsection CBM Change-of-Basis Matrix
Given a vector space, we know we can usually find many different bases for the vector space, some nice, some nasty. If we choose a single vector from this vector space, we can build many different representations of the vector by constructing the representations relative to different bases. How are these different representations related to each other? A change-of-basis matrix answers this question.
Definition CBM Change-of-Basis Matrix
Suppose that $V$ is a vector space, and $\ltdefn{I_V}{V}{V}$ is the identity linear transformation on $V$. Let $B=\set{\vectorlist{v}{n}}$ and $C$ be two bases of $V$. Then the change-of-basis matrix from $B$ to $C$ is the matrix representation of $I_V$ relative to $B$ and $C$, \begin{align*} \cbm{B}{C}&=\matrixrep{I_V}{B}{C}\\ &=\matrixrepcolumns{I_V}{C}{v}{n}\\ &=\left\lbrack \left.\vectrep{C}{\vect{v}_1}\right| \left.\vectrep{C}{\vect{v}_2}\right| \left.\vectrep{C}{\vect{v}_3}\right| \ldots \left|\vectrep{C}{\vect{v}_n}\right. \right\rbrack \end{align*}
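As a concrete sketch of this construction, the following Sage fragment builds a change-of-basis matrix column by column; the two bases of $\complex{2}$ (realized over the rationals) are invented for illustration.

vB = [vector(QQ, [1, 1]), vector(QQ, [1, -1])]              # basis B
vC = [vector(QQ, [1, 0]), vector(QQ, [1, 1])]               # basis C
VC = (QQ^2).subspace_with_basis(vC)                         # the ambient space, carrying C as its user basis
CB = column_matrix([VC.coordinate_vector(b) for b in vB])   # column j is the representation of B's jth vector relative to C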
Notice that this definition is primarily about a single vector space ($V$) and two bases of $V$ ($B$, $C$). The linear transformation ($I_V$) is necessary but not critical. As you might expect, this matrix has something to do with changing bases. Here is the theorem that gives the matrix its name (not the other way around).
Theorem CB Change-of-Basis
Suppose that $\vect{v}$ is a vector in the vector space $V$ and $B$ and $C$ are bases of $V$. Then \begin{equation*} \vectrep{C}{\vect{v}}=\cbm{B}{C}\vectrep{B}{\vect{v}} \end{equation*}
So the change-of-basis matrix can be used with matrix multiplication to convert a vector representation of a vector ($\vect{v}$) relative to one basis ($\vectrep{B}{\vect{v}}$) to a representation of the same vector relative to a second basis ($\vectrep{C}{\vect{v}}$).
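The next Sage sketch checks Theorem CB on a single vector, using the same two illustrative bases as in the sketch above.

VB = (QQ^2).subspace_with_basis([vector(QQ, [1, 1]), vector(QQ, [1, -1])])   # basis B
VC = (QQ^2).subspace_with_basis([vector(QQ, [1, 0]), vector(QQ, [1, 1])])    # basis C
CB = column_matrix([VC.coordinate_vector(b) for b in VB.basis()])            # CBM from B to C
v  = vector(QQ, [3, 5])
VC.coordinate_vector(v) == CB * VB.coordinate_vector(v)                      # True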
Theorem ICBM Inverse of Change-of-Basis Matrix
Suppose that $V$ is a vector space, and $B$ and $C$ are bases of $V$. Then the change-of-basis matrix $\cbm{B}{C}$ is nonsingular and \begin{equation*} \inverse{\cbm{B}{C}}=\cbm{C}{B} \end{equation*}
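A quick Sage check of Theorem ICBM, again with illustrative bases: the change-of-basis matrix in one direction is the inverse of the change-of-basis matrix in the other.

VB = (QQ^2).subspace_with_basis([vector(QQ, [1, 1]), vector(QQ, [1, -1])])
VC = (QQ^2).subspace_with_basis([vector(QQ, [1, 0]), vector(QQ, [1, 1])])
BtoC = column_matrix([VC.coordinate_vector(b) for b in VB.basis()])          # CBM from B to C
CtoB = column_matrix([VB.coordinate_vector(c) for c in VC.basis()])          # CBM from C to B
BtoC.inverse() == CtoB                                                       # True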
Example CBP Change of basis with polynomials
The computations of the previous example are not meant to present any labor-saving devices, but instead are meant to illustrate the utility of the change-of-basis matrix. However, you might have noticed that $\cbm{C}{B}$ was easier to compute than $\cbm{B}{C}$. If you needed $\cbm{B}{C}$, then you could first compute $\cbm{C}{B}$ and then compute its inverse, which, by Theorem ICBM, equals $\cbm{B}{C}$.
Here is another illustrative example. We have been concentrating on working with abstract vector spaces, but all of our theorems and techniques apply just as well to $\complex{m}$, the vector space of column vectors. We only need to use more complicated bases than the standard unit vectors (Theorem SUVB) to make things interesting.
Example CBCV Change of basis with column vectors
Sage CBM Change-of-Basis Matrix
Subsection MRS Matrix Representations and Similarity
Here is the main theorem of this section. It looks a bit involved at first glance, but the proof should make you realize it is not all that complicated. In any event, we are more interested in a special case.
Theorem MRCB Matrix Representation and Change of Basis
Suppose that $\ltdefn{T}{U}{V}$ is a linear transformation, $B$ and $C$ are bases for $U$, and $D$ and $E$ are bases for $V$. Then \begin{equation*} \matrixrep{T}{B}{D}=\cbm{E}{D}\matrixrep{T}{C}{E}\cbm{B}{C} \end{equation*}
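Here is a matrix-algebra sketch of Theorem MRCB in Sage. Every matrix below is invented for illustration: A is the representation of a transformation $\ltdefn{T}{\complex{3}}{\complex{2}}$ relative to the standard bases, while the bases $B$, $C$ (of the domain) and $D$, $E$ (of the codomain) are stored as the columns of MB, MC, MD, ME. Under these assumptions, a representation relative to bases $X$ and $Y$ is MY.inverse() * A * MX, and a change-of-basis matrix from $X$ to $Y$ is MY.inverse() * MX.

A  = matrix(QQ, [[1, 2, 0], [0, 1, 3]])                     # T relative to the standard bases
MB = column_matrix(QQ, [[1, 0, 1], [1, 1, 0], [0, 1, 1]])   # basis B of the domain
MC = column_matrix(QQ, [[1, 0, 0], [1, 1, 0], [1, 1, 1]])   # basis C of the domain
MD = column_matrix(QQ, [[1, 0], [1, 1]])                    # basis D of the codomain
ME = column_matrix(QQ, [[0, 1], [1, 1]])                    # basis E of the codomain
rep = lambda MX, MY: MY.inverse() * A * MX                  # representation relative to X (domain) and Y (codomain)
cbm = lambda MX, MY: MY.inverse() * MX                      # change-of-basis matrix from X to Y
rep(MB, MD) == cbm(ME, MD) * rep(MC, ME) * cbm(MB, MC)      # True, as Theorem MRCB promises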
We will be most interested in a special case of this theorem (Theorem SCB), but here is an example that illustrates the full generality of Theorem MRCB.
Example MRCM Matrix representations and change-of-basis matrices
Here is a special case of the previous theorem, where we choose $U$ and $V$ to be the same vector space, so the matrix representations and the change-of-basis matrices are all square of the same size.
Theorem SCB Similarity and Change of Basis
Suppose that $\ltdefn{T}{V}{V}$ is a linear transformation and $B$ and $C$ are bases of $V$. Then \begin{equation*} \matrixrep{T}{B}{B}=\inverse{\cbm{B}{C}}\matrixrep{T}{C}{C}\cbm{B}{C} \end{equation*}
This is the third surprise of this chapter. Theorem SCB considers the special case where a linear transformation has the same vector space for the domain and codomain ($V$). We build a matrix representation of $T$ using the basis $B$ simultaneously for both the domain and codomain ($\matrixrep{T}{B}{B}$), and then we build a second matrix representation of $T$, now using the basis $C$ for both the domain and codomain ($\matrixrep{T}{C}{C}$). Then these two representations are related via a similarity transformation (Definition SIM) using a change-of-basis matrix ($\cbm{B}{C}$)!
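A small Sage sketch of Theorem SCB, taking $C$ to be the standard basis of $\complex{2}$ so that $\cbm{B}{C}$ is simply the matrix whose columns are the vectors of $B$; the transformation and the basis are invented for illustration.

A  = matrix(QQ, [[4, 1], [2, 3]])            # representation of T relative to the standard basis C
CB = column_matrix(QQ, [[1, 2], [1, 3]])     # CBM from B to C: the columns are the vectors of B
repB = CB.inverse() * A * CB                 # Theorem SCB: representation of T relative to B
repB.is_similar(A)                           # True: the two representations are similar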
Example MRBE Matrix representation with basis of eigenvectors
Sage MRCB Matrix Representation and Change-of-Basis
We can now return to the question of computing an eigenvalue or eigenvector of a linear transformation. For a linear transformation of the form $\ltdefn{T}{V}{V}$, we know that representations relative to different bases are similar matrices. We also know that similar matrices have equal characteristic polynomials by Theorem SMEE. We will now show that eigenvalues of a linear transformation $T$ are precisely the eigenvalues of any matrix representation of $T$. Since the choice of a different matrix representation leads to a similar matrix, there will be no “new” eigenvalues obtained from this second representation. Similarly, the change-of-basis matrix can be used to show that eigenvectors obtained from one matrix representation will be precisely those obtained from any other representation. So we can determine the eigenvalues and eigenvectors of a linear transformation by forming one matrix representation, using any basis we please, and analyzing the matrix in the manner of Chapter E.
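For instance, in Sage, conjugating an illustrative representation by any nonsingular matrix (playing the role of a change-of-basis matrix) leaves the characteristic polynomial, and hence the eigenvalues, untouched.

A = matrix(QQ, [[4, 1], [2, 3]])
S = column_matrix(QQ, [[1, 1], [0, 1]])                  # any nonsingular change-of-basis matrix
(S.inverse() * A * S).charpoly() == A.charpoly()         # True, by Theorem SMEE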
Theorem EER Eigenvalues, Eigenvectors, Representations
Suppose that $\ltdefn{T}{V}{V}$ is a linear transformation and $B$ is a basis of $V$. Then $\vect{v}\in V$ is an eigenvector of $T$ for the eigenvalue $\lambda$ if and only if $\vectrep{B}{\vect{v}}$ is an eigenvector of $\matrixrep{T}{B}{B}$ for the eigenvalue $\lambda$.
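The following Sage sketch illustrates both directions of Theorem EER at once; the transformation and the basis are invented for the occasion.

A  = matrix(QQ, [[4, 1], [2, 3]])                           # T(x) = A*x
T  = linear_transformation(QQ^2, QQ^2, A, side='right')
VB = (QQ^2).subspace_with_basis([vector(QQ, [1, 0]), vector(QQ, [1, 1])])   # basis B
MB = column_matrix(QQ, [[1, 0], [1, 1]])                    # columns are the vectors of B
M  = MB.inverse() * A * MB                                  # representation of T relative to B, B
v  = vector(QQ, [1, 1])
T(v) == 5*v                                                 # True: v is an eigenvector of T for lambda = 5
M * VB.coordinate_vector(v) == 5 * VB.coordinate_vector(v)  # True: rho_B(v) is an eigenvector of M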
Subsection CELT Computing Eigenvectors of Linear Transformations
Theorem EER tells us that the eigenvalues of a linear transformation are the eigenvalues of any representation, no matter what the choice of the basis $B$ might be. So we could now unambiguously define items such as the characteristic polynomial of a linear transformation, which we would define as the characteristic polynomial of any matrix representation. We will say that again — eigenvalues, eigenvectors, and characteristic polynomials are intrinsic properties of a linear transformation, independent of the choice of a basis used to construct a matrix representation.
As a practical matter, how does one compute the eigenvalues and eigenvectors of a linear transformation of the form $\ltdefn{T}{V}{V}$? Choose a nice basis $B$ for $V$, one where the vector representations of the outputs of the linear transformation needed for the matrix representation are easy to compute. Construct the matrix representation relative to this basis, and find the eigenvalues and eigenvectors of this matrix using the techniques of Chapter E. The resulting eigenvalues of the matrix are precisely the eigenvalues of the linear transformation. The eigenvectors of the matrix are column vectors that need to be converted to vectors in $V$ through an application of $\ltinverse{\vectrepname{B}}$ (this is part of the content of Theorem EER).
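In Sage the whole procedure fits in a few lines. Here the transformation and the basis are again invented; the final step converts each eigenvector of the representation back into an eigenvector of $T$ by forming the corresponding linear combination of the basis vectors, which is exactly an application of $\ltinverse{\vectrepname{B}}$.

A  = matrix(QQ, [[0, -2], [1, 3]])            # standard-basis matrix of an illustrative T
MB = column_matrix(QQ, [[1, 0], [1, 1]])      # a chosen basis B, stored as columns
M  = MB.inverse() * A * MB                    # representation of T relative to B
for lam, vecs, mult in M.eigenvectors_right():
    for w in vecs:                            # w is an eigenvector of the representation
        v = MB * w                            # the inverse vector representation of w
        print(lam, v, A*v == lam*v)           # each line ends in True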
Now consider the case where the matrix representation of a linear transformation is diagonalizable. The $n$ linearly independent eigenvectors that must exist for the matrix (Theorem DC) can be converted (via $\ltinverse{\vectrepname{B}}$) into eigenvectors of the linear transformation. A matrix representation of the linear transformation relative to a basis of eigenvectors will be a diagonal matrix — an especially nice representation! Though we did not know it at the time, the diagonalizations of Section SD were really about finding especially pleasing matrix representations of linear transformations.
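For example, Sage's eigenmatrix_right() returns a diagonal matrix together with a matrix whose columns are eigenvectors; when the (illustrative) matrix below is diagonalizable, that eigenvector matrix is exactly the change-of-basis matrix producing the diagonal representation.

A = matrix(QQ, [[4, 1], [2, 3]])
D, S = A.eigenmatrix_right()        # A*S == S*D, columns of S are eigenvectors
S.inverse() * A * S == D            # True: relative to the eigenvector basis, T is diagonal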
Here are some examples.
Example ELTT Eigenvectors of a linear transformation, twice
Another example, this time a bit larger and with complex eigenvalues.