\(\newcommand{\orderof}[1]{\sim #1} \newcommand{\Z}{\mathbb{Z}} \newcommand{\reals}{\mathbb{R}} \newcommand{\real}[1]{\mathbb{R}^{#1}} \newcommand{\complexes}{\mathbb{C}} \newcommand{\complex}[1]{\mathbb{C}^{#1}} \newcommand{\conjugate}[1]{\overline{#1}} \newcommand{\modulus}[1]{\left\lvert#1\right\rvert} \newcommand{\zerovector}{\vect{0}} \newcommand{\zeromatrix}{\mathcal{O}} \newcommand{\innerproduct}[2]{\left\langle#1,\,#2\right\rangle} \newcommand{\norm}[1]{\left\lVert#1\right\rVert} \newcommand{\dimension}[1]{\dim\left(#1\right)} \newcommand{\nullity}[1]{n\left(#1\right)} \newcommand{\rank}[1]{r\left(#1\right)} \newcommand{\ds}{\oplus} \newcommand{\detname}[1]{\det\left(#1\right)} \newcommand{\detbars}[1]{\left\lvert#1\right\rvert} \newcommand{\trace}[1]{t\left(#1\right)} \newcommand{\sr}[1]{#1^{1/2}} \newcommand{\spn}[1]{\left\langle#1\right\rangle} \newcommand{\nsp}[1]{\mathcal{N}\!\left(#1\right)} \newcommand{\csp}[1]{\mathcal{C}\!\left(#1\right)} \newcommand{\rsp}[1]{\mathcal{R}\!\left(#1\right)} \newcommand{\lns}[1]{\mathcal{L}\!\left(#1\right)} \newcommand{\per}[1]{#1^\perp} \newcommand{\augmented}[2]{\left\lbrack\left.#1\,\right\rvert\,#2\right\rbrack} \newcommand{\linearsystem}[2]{\mathcal{LS}\!\left(#1,\,#2\right)} \newcommand{\homosystem}[1]{\linearsystem{#1}{\zerovector}} \newcommand{\rowopswap}[2]{R_{#1}\leftrightarrow R_{#2}} \newcommand{\rowopmult}[2]{#1R_{#2}} \newcommand{\rowopadd}[3]{#1R_{#2}+R_{#3}} \newcommand{\leading}[1]{\boxed{#1}} \newcommand{\rref}{\xrightarrow{\text{RREF}}} \newcommand{\elemswap}[2]{E_{#1,#2}} \newcommand{\elemmult}[2]{E_{#2}\left(#1\right)} \newcommand{\elemadd}[3]{E_{#2,#3}\left(#1\right)} \newcommand{\scalarlist}[2]{{#1}_{1},\,{#1}_{2},\,{#1}_{3},\,\ldots,\,{#1}_{#2}} \newcommand{\vect}[1]{\mathbf{#1}} \newcommand{\colvector}[1]{\begin{bmatrix}#1\end{bmatrix}} \newcommand{\vectorcomponents}[2]{\colvector{#1_{1}\\#1_{2}\\#1_{3}\\\vdots\\#1_{#2}}} \newcommand{\vectorlist}[2]{\vect{#1}_{1},\,\vect{#1}_{2},\,\vect{#1}_{3},\,\ldots,\,\vect{#1}_{#2}} \newcommand{\vectorentry}[2]{\left\lbrack#1\right\rbrack_{#2}} \newcommand{\matrixentry}[2]{\left\lbrack#1\right\rbrack_{#2}} \newcommand{\lincombo}[3]{#1_{1}\vect{#2}_{1}+#1_{2}\vect{#2}_{2}+#1_{3}\vect{#2}_{3}+\cdots +#1_{#3}\vect{#2}_{#3}} \newcommand{\matrixcolumns}[2]{\left\lbrack\vect{#1}_{1}|\vect{#1}_{2}|\vect{#1}_{3}|\ldots|\vect{#1}_{#2}\right\rbrack} \newcommand{\transpose}[1]{#1^{t}} \newcommand{\inverse}[1]{#1^{-1}} \newcommand{\submatrix}[3]{#1\left(#2|#3\right)} \newcommand{\adj}[1]{\transpose{\left(\conjugate{#1}\right)}} \newcommand{\adjoint}[1]{#1^\ast} \newcommand{\set}[1]{\left\{#1\right\}} \newcommand{\setparts}[2]{\left\lbrace#1\,\middle|\,#2\right\rbrace} \newcommand{\card}[1]{\left\lvert#1\right\rvert} \newcommand{\setcomplement}[1]{\overline{#1}} \newcommand{\charpoly}[2]{p_{#1}\left(#2\right)} \newcommand{\eigenspace}[2]{\mathcal{E}_{#1}\left(#2\right)} \newcommand{\eigensystem}[3]{\lambda&=#2&\eigenspace{#1}{#2}&=\spn{\set{#3}}} \newcommand{\geneigenspace}[2]{\mathcal{G}_{#1}\left(#2\right)} \newcommand{\algmult}[2]{\alpha_{#1}\left(#2\right)} \newcommand{\geomult}[2]{\gamma_{#1}\left(#2\right)} \newcommand{\indx}[2]{\iota_{#1}\left(#2\right)} \newcommand{\ltdefn}[3]{#1\colon #2\rightarrow#3} \newcommand{\lteval}[2]{#1\left(#2\right)} \newcommand{\ltinverse}[1]{#1^{-1}} \newcommand{\restrict}[2]{{#1}|_{#2}} \newcommand{\preimage}[2]{#1^{-1}\left(#2\right)} \newcommand{\rng}[1]{\mathcal{R}\!\left(#1\right)} \newcommand{\krn}[1]{\mathcal{K}\!\left(#1\right)} 
\newcommand{\compose}[2]{{#1}\circ{#2}} \newcommand{\vslt}[2]{\mathcal{LT}\left(#1,\,#2\right)} \newcommand{\isomorphic}{\cong} \newcommand{\similar}[2]{\inverse{#2}#1#2} \newcommand{\vectrepname}[1]{\rho_{#1}} \newcommand{\vectrep}[2]{\lteval{\vectrepname{#1}}{#2}} \newcommand{\vectrepinvname}[1]{\ltinverse{\vectrepname{#1}}} \newcommand{\vectrepinv}[2]{\lteval{\ltinverse{\vectrepname{#1}}}{#2}} \newcommand{\matrixrep}[3]{M^{#1}_{#2,#3}} \newcommand{\matrixrepcolumns}[4]{\left\lbrack \left.\vectrep{#2}{\lteval{#1}{\vect{#3}_{1}}}\right|\left.\vectrep{#2}{\lteval{#1}{\vect{#3}_{2}}}\right|\left.\vectrep{#2}{\lteval{#1}{\vect{#3}_{3}}}\right|\ldots\left|\vectrep{#2}{\lteval{#1}{\vect{#3}_{#4}}}\right.\right\rbrack} \newcommand{\cbm}[2]{C_{#1,#2}} \newcommand{\jordan}[2]{J_{#1}\left(#2\right)} \newcommand{\hadamard}[2]{#1\circ #2} \newcommand{\hadamardidentity}[1]{J_{#1}} \newcommand{\hadamardinverse}[1]{\widehat{#1}} \newcommand{\lt}{<} \newcommand{\gt}{>} \newcommand{\amp}{&} \)

Section ILT Injective Linear Transformations

Some linear transformations possess one, or both, of two key properties, which go by the names injective and surjective. We will see that they are closely related to ideas like linear independence and spanning, and subspaces like the null space and the column space. In this section we will define an injective linear transformation and analyze the resulting consequences. The next section will do the same for the surjective property. In the final section of this chapter we will see what happens when we have the two properties simultaneously.

Subsection ILT Injective Linear Transformations

As usual, we lead with a definition.

Definition ILT Injective Linear Transformation

Suppose \(\ltdefn{T}{U}{V}\) is a linear transformation. Then \(T\) is injective if whenever \(\lteval{T}{\vect{x}}=\lteval{T}{\vect{y}}\text{,}\) then \(\vect{x}=\vect{y}\text{.}\)

Given an arbitrary function, it is possible for two different inputs to yield the same output (think about the function \(f(x)=x^2\) and the inputs \(x=3\) and \(x=-3\)). For an injective function, this never happens. If we have equal outputs (\(\lteval{T}{\vect{x}}=\lteval{T}{\vect{y}}\)) then we must have achieved those equal outputs by employing equal inputs (\(\vect{x}=\vect{y}\)). Some authors prefer the term one-to-one where we use injective, and we will sometimes refer to an injective linear transformation as an injection.

Subsection EILT Examples of Injective Linear Transformations

It is perhaps most instructive to first examine a linear transformation that is not injective.

Here is a cartoon of a non-injective linear transformation. Notice that the central feature of this cartoon is that \(\lteval{T}{\vect{u}}=\vect{v}=\lteval{T}{\vect{w}}\text{.}\) Even though this happens again with some unnamed vectors, it takes only one occurrence to destroy the possibility of injectivity. Note also that the two vectors displayed at the bottom of \(V\) have no bearing, either way, on the injectivity of \(T\text{.}\)

Figure 7.39 Non-Injective Linear Transformation

To show that a linear transformation is not injective, it is enough to find a single pair of inputs that get sent to the identical output, as in Example NIAQ. However, to show that a linear transformation is injective we must establish that this coincidence of outputs never occurs. Here is an example that shows how to establish this.
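When a non-injective transformation is given by a matrix, one quick way to manufacture such a pair of inputs is to add a nonzero null space vector of the matrix to any chosen input. The following is a minimal Sage sketch of that idea; the matrix and vectors are invented for illustration and are not any of the text's archetypes.

A = matrix(QQ, [[1, 2, 3],
                [4, 5, 6]])                 # an invented 2 x 3 matrix defining T(x) = A*x
x = vector(QQ, [1, 0, 2])
n = A.right_kernel().basis()[0]             # a nonzero vector with A*n = 0
y = x + n                                   # a different input built from x and the kernel
A*x == A*y, x == y                          # (True, False): equal outputs from unequal inputs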

Here is the cartoon for an injective linear transformation. It is meant to suggest that we never have two inputs associated with a single output. Again, the two lonely vectors at the bottom of \(V\) have no bearing either way on the injectivity of \(T\text{.}\)

Figure 7.41 Injective Linear Transformation

Let us now examine an injective linear transformation between abstract vector spaces.

Subsection KLT Kernel of a Linear Transformation

For a linear transformation \(\ltdefn{T}{U}{V}\text{,}\) the kernel is a subset of the domain \(U\text{.}\) Informally, it is the set of all inputs that the transformation sends to the zero vector of the codomain. It will have some natural connections with the null space of a matrix, so we will keep the same notation, and if you think about your objects, then there should be little confusion. Here is the careful definition.

Definition KLT Kernel of a Linear Transformation

Suppose \(\ltdefn{T}{U}{V}\) is a linear transformation. Then the kernel of \(T\) is the set \begin{equation*} \krn{T}=\setparts{\vect{u}\in U}{\lteval{T}{\vect{u}}=\zerovector}\text{.} \end{equation*}

Notice that the kernel of \(T\) is just the preimage of \(\zerovector\text{,}\) \(\preimage{T}{\zerovector}\) (Definition PI). Here is an example.
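When \(T\) is defined by matrix-vector multiplication, the kernel is exactly the solution set of the corresponding homogeneous system, so it can be computed with the same tools used for null spaces of matrices. The following is a minimal Sage sketch with an invented matrix (it is not the text's Example NKAO).

A = matrix(QQ, [[1, -1, 2],
                [2, -2, 4]])                # an invented matrix defining T(x) = A*x
K = A.right_kernel()                        # the kernel of T, as a vector space
K.dimension(), K.basis()                    # dimension and a basis for the kernel
all((A*z).is_zero() for z in K.basis())     # every basis vector is sent to the zero vector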

We know that the span of a set of vectors is always a subspace (Theorem SSS), so the kernel computed in Example NKAO is also a subspace. This is no accident; the kernel of a linear transformation is always a subspace.

Proof

Let us compute another kernel, now that we know in advance that it will be a subspace.

Our next theorem says that if a preimage is a nonempty set then we can construct it by picking any one element and adding on elements of the kernel.

Proof

This theorem, and its proof, should remind you very much of Theorem PSPHS. Additionally, you might go back and review Example SPIAS. Can you now tell which preimage is the only one that is a subspace?
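Theorem KPI can also be seen numerically: fix one element \(\vect{w}\) of a preimage, add kernel elements to it, and observe that the output never changes. Here is a minimal Sage sketch, with an invented matrix and vector.

A = matrix(QQ, [[1, 1, 0],
                [0, 1, 1]])                 # an invented matrix defining T(x) = A*x
w = vector(QQ, [1, 2, 3])
v = A*w                                     # so w is one element of the preimage of v
K = A.right_kernel()
shifted = [w + z for z in K.basis()]        # copies of w shifted by kernel elements
all(A*u == v for u in shifted)              # every shifted input still maps to v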

Here is the cartoon that describes the “many-to-one” behavior of a typical linear transformation. Presume that \(\lteval{T}{\vect{u}_i}=\vect{v}_i\text{,}\) for \(i=1,2,3\text{,}\) and, as guaranteed by Theorem LTTZZ, \(\lteval{T}{\zerovector_U}=\zerovector_V\text{.}\) Then four pre-images are depicted, each labeled slightly differently. \(\preimage{T}{\vect{v}_2}\) is the most general, employing Theorem KPI to provide two equal descriptions of the set. The most unusual is \(\preimage{T}{\zerovector_V}\text{,}\) which is equal to the kernel, \(\krn{T}\text{,}\) and hence is a subspace (by Theorem KLTS). The subdivisions of the domain, \(U\text{,}\) are meant to suggest the partitioning of the domain by the collection of pre-images. It also suggests that each pre-image is of similar size or structure, since each is a “shifted” copy of the kernel. Notice that we cannot speak of the dimension of a pre-image, since it is almost never a subspace. Also notice that \(\vect{x},\,\vect{y}\in V\) are elements of the codomain with empty pre-images.

Figure 7.48 Kernel and Pre-Image

The next theorem is one we will cite frequently, as it characterizes injections by the size of the kernel.

Proof

You might begin to think about how Figure 7.48 would change if the linear transformation is injective, which would make the kernel trivial by Theorem KILT.
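In practice, Theorem KILT is the test to use: rather than comparing all possible pairs of inputs, simply check whether the kernel is the trivial subspace. Here is a minimal Sage sketch for matrix-defined transformations, using invented matrices.

B = matrix(QQ, [[1, 0],
                [1, 1],
                [0, 2]])                    # invented: the kernel turns out to be trivial
C = matrix(QQ, [[1, 2],
                [2, 4]])                    # invented: rank 1, so the kernel is nontrivial
B.right_kernel().dimension() == 0           # True:  T(x) = B*x is injective
C.right_kernel().dimension() == 0           # False: T(x) = C*x is not injective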

Subsection ILTLI Injective Linear Transformations and Linear Independence

There is a connection between injective linear transformations and linearly independent sets that we will make precise in the next two theorems. However, more informally, we can get a feel for this connection when we think about how each property is defined. A set of vectors is linearly independent if the only relation of linear dependence is the trivial one. A linear transformation is injective if the only way two input vectors can produce the same output is in the trivial way, when both input vectors are equal.

Proof
Proof
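These theorems can also be checked numerically: push a linearly independent set through an injective transformation and confirm that the images remain linearly independent. Here is a minimal Sage sketch, using an invented matrix with trivial kernel and testing independence with a rank computation.

A = matrix(QQ, [[1, 0, 1],
                [0, 1, 1],
                [1, 1, 0],
                [2, 0, 1]])                 # invented 4 x 3 matrix with trivial kernel
assert A.right_kernel().dimension() == 0    # so T(x) = A*x is injective
vecs = [vector(QQ, [1, 0, 0]),
        vector(QQ, [1, 1, 0]),
        vector(QQ, [1, 1, 1])]              # a linearly independent set in the domain
images = [A*v for v in vecs]
M = matrix(QQ, images)                      # rows are the images of the set
M.rank() == len(images)                     # True: the images are linearly independent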

Subsection ILTD Injective Linear Transformations and Dimension

Proof

Notice that the previous example made no use of the actual formula defining the function. Merely a comparison of the dimensions of the domain and codomain is enough to conclude that the linear transformation is not injective. Archetype M and Archetype N are two more examples of linear transformations with “big” domains and “small” codomains; the resulting “collisions” of outputs make them non-injective linear transformations.
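The dimension count can be confirmed computationally: whenever the domain's dimension exceeds the codomain's, rank-nullity forces a nontrivial kernel, whatever the entries of the matrix happen to be. A minimal Sage sketch with a randomly generated matrix:

A = random_matrix(QQ, 3, 5)                            # an arbitrary 3 x 5 matrix: T maps C^5 to C^3
A.right_kernel().dimension() == A.ncols() - A.rank()   # rank-nullity
A.right_kernel().dimension() > 0                       # always True here, so T(x) = A*x is never injective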

Subsection CILT Composition of Injective Linear Transformations

In Subsection LT.NLTFO we saw how to combine linear transformations to build new linear transformations, specifically, how to build the composition of two linear transformations (Definition LTC). It will be useful later to know that the composition of injective linear transformations is again injective, so we prove that here.

Proof
Sage CILT Composition of Injective Linear Transformations
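For matrix-defined transformations the composition corresponds to a matrix product, so the theorem above can be illustrated by checking kernels. The following is only a minimal sketch with invented matrices; it is not the text's own Sage cell.

S = matrix(QQ, [[1, 0, 1],
                [0, 1, 1],
                [1, 1, 3]])                 # invented 3 x 3 matrix with trivial kernel
T = matrix(QQ, [[1, 2],
                [0, 1],
                [1, 0]])                    # invented 3 x 2 matrix with trivial kernel
ST = S*T                                    # matrix of the composition (S o T)(x) = S(T(x))
[M.right_kernel().dimension() for M in (S, T, ST)]    # [0, 0, 0]: each map is injective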

Subsection Reading Questions

1

Suppose \(\ltdefn{T}{\complex{8}}{\complex{5}}\) is a linear transformation. Why is \(T\) not injective?

2

Describe the kernel of an injective linear transformation.

Subsection Exercises

C10

Each archetype below is a linear transformation. Compute the kernel for each.

Archetype M, Archetype N, Archetype O, Archetype P, Archetype Q, Archetype R, Archetype S, Archetype T, Archetype U, Archetype V, Archetype W, Archetype X

C20

The linear transformation \(\ltdefn{T}{\complex{4}}{\complex{3}}\) is not injective. Find two inputs \(\vect{x},\,\vect{y}\in\complex{4}\) that yield the same output (that is \(\lteval{T}{\vect{x}}=\lteval{T}{\vect{y}}\)). \begin{equation*} \lteval{T}{\colvector{x_1\\x_2\\x_3\\x_4}}= \colvector{ 2x_1+x_2+x_3\\ -x_1+3x_2+x_3-x_4\\ 3x_1+x_2+2x_3-2x_4 }\text{.} \end{equation*}

Solution
C25

Define the linear transformation \begin{equation*} \ltdefn{T}{\complex{3}}{\complex{2}},\quad \lteval{T}{\colvector{x_1\\x_2\\x_3}}=\colvector{2x_1-x_2+5x_3\\-4x_1+2x_2-10x_3}\text{.} \end{equation*} Find a basis for the kernel of \(T\text{,}\) \(\krn{T}\text{.}\) Is \(T\) injective?

Solution
C26

Let \begin{equation*} A = \begin{bmatrix} 1 & 2 & 3 & 1 & 0\\ 2 & -1 & 1 & 0 & 1\\ 1 & 2 & -1 & -2 & 1\\ 1 & 3 & 2 & 1 & 2 \end{bmatrix} \end{equation*} and let \(\ltdefn{T}{\complex{5}}{\complex{4}}\) be given by \(\lteval{T}{\vect{x}}=A\vect{x}\text{.}\) Is \(T\) injective? (Hint: No calculation is required.)

Solution
C27

Let \(\ltdefn{T}{\complex{3}}{\complex{3}}\) be given by \(\lteval{T}{\colvector{x\\y\\z}} = \colvector{2x + y + z\\ x - y + 2z\\ x + 2y - z}\text{.}\) Find \(\krn{T}\text{.}\) Is \(T\) injective?

Solution
C28

Let \begin{equation*} A = \begin{bmatrix} 1 & 2 & 3 & 1 \\ 2 & -1 & 1 & 0 \\ 1 & 2 & -1 & -2 \\ 1 & 3 & 2 & 1 \end{bmatrix} \end{equation*} and let \(\ltdefn{T}{\complex{4}}{\complex{4}}\) be given by \(\lteval{T}{\vect{x}}=A\vect{x}\text{.}\) Find \(\krn{T}\text{.}\) Is \(T\) injective?

Solution
C29

Let \begin{equation*} A = \begin{bmatrix} 1 & 2 & 1 & 1 \\ 2 & 1 & 1 & 0 \\ 1 & 2 & 1 & 2 \\ 1 & 2 & 1 & 1 \end{bmatrix} \end{equation*} and let \(\ltdefn{T}{\complex{4}}{\complex{4}}\) be given by \(\lteval{T}{\vect{x}}=A\vect{x}\text{.}\) Find \(\krn{T}\text{.}\) Is \(T\) injective?

Solution
C30

Let \(\ltdefn{T}{M_{22}}{P_2}\) be given by \(\lteval{T}{\begin{bmatrix} a & b \\ c & d \end{bmatrix}} = (a + b) + (a + c)x + (a + d)x^2\text{.}\) Is \(T\) injective? Find \(\krn{T}\text{.}\)

Solution
C31

Given that the linear transformation \(\ltdefn{T}{\complex{3}}{\complex{3}}\text{,}\) \(\lteval{T}{\colvector{x\\y\\z}} = \colvector{2x + y\\2y + z\\x + 2z}\) is injective, show directly that \begin{equation*} \set{ \lteval{T}{\vect{e}_1},\, \lteval{T}{\vect{e}_2},\, \lteval{T}{\vect{e}_3} } \end{equation*} is a linearly independent set.

Solution
C32

Given that the linear transformation \(\ltdefn{T}{\complex{2}}{\complex{3}}\text{,}\) \(\lteval{T}{\colvector{x\\y}} = \colvector{x+y\\2x + y\\x + 2y}\) is injective, show directly that \begin{equation*} \set{ \lteval{T}{\vect{e}_1},\, \lteval{T}{\vect{e}_2} } \end{equation*} is a linearly independent set.

Solution
C33

Given that the linear transformation \(\ltdefn{T}{\complex{3}}{\complex{5}}\text{,}\) \begin{equation*} \lteval{T}{\colvector{x\\y\\z}} = \begin{bmatrix} 1 & 3 & 2\\ 0 & 1 & 1\\ 1 & 2 & 1\\ 1 & 0 & 1\\ 3 & 1 & 2 \end{bmatrix} \colvector{x\\y\\z} \end{equation*} is injective, show directly that \begin{equation*} \set{ \lteval{T}{\vect{e}_1},\, \lteval{T}{\vect{e}_2},\, \lteval{T}{\vect{e}_3} } \end{equation*} is a linearly independent set.

Solution
C40

Show that the linear transformation \(R\) is not injective by finding two different elements of the domain, \(\vect{x}\) and \(\vect{y}\text{,}\) such that \(\lteval{R}{\vect{x}}=\lteval{R}{\vect{y}}\text{.}\) (\(S_{22}\) is the vector space of symmetric \(2\times 2\) matrices.) \begin{equation*} \ltdefn{R}{S_{22}}{P_1}\quad \lteval{R}{\begin{bmatrix}a&b\\b&c\end{bmatrix}}=(2a-b+c)+(a+b+2c)x\text{.} \end{equation*}

Solution
M60

Suppose \(U\) and \(V\) are vector spaces. Define the function \(\ltdefn{Z}{U}{V}\) by \(\lteval{Z}{\vect{u}}=\zerovector_{V}\) for every \(\vect{u}\in U\text{.}\) Then by Exercise LT.M60, \(Z\) is a linear transformation. Formulate a condition on \(U\) that is equivalent to \(Z\) being an injective linear transformation. In other words, fill in the blank to complete the following statement (and then give a proof): \(Z\) is injective if and only if \(U\) is . (See Exercise SLT.M60, Exercise IVLT.M60.)

T10

Suppose \(\ltdefn{T}{U}{V}\) is a linear transformation. For which vectors \(\vect{v}\in V\) is \(\preimage{T}{\vect{v}}\) a subspace of \(U\text{?}\)

Solution
T15

Suppose that \(\ltdefn{T}{U}{V}\) and \(\ltdefn{S}{V}{W}\) are linear transformations. Prove the following relationship between kernels. \begin{equation*} \krn{T}\subseteq\krn{\compose{S}{T}}\text{.} \end{equation*}

Solution
T20

Suppose that \(A\) is an \(m\times n\) matrix. Define the linear transformation \(T\) by \begin{equation*} \ltdefn{T}{\complex{n}}{\complex{m}},\quad \lteval{T}{\vect{x}}=A\vect{x}\text{.} \end{equation*} Prove that the kernel of \(T\) equals the null space of \(A\text{,}\) \(\krn{T}=\nsp{A}\text{.}\)

Solution