In the next linear algebra course you take, the first lecture might be a reminder of what a vector space is (Definition VS), its ten properties, some basic theorems, and then some examples. The second lecture would likely be all about linear transformations. While it may seem we have waited a long time to present what must be a central topic, in truth we have already been working with linear transformations for some time.
Functions are important objects in the study of calculus, but have been absent from this course until now (well, not really, it just seems that way). In your study of more advanced mathematics it is nearly impossible to escape the use of functions; they are as fundamental as sets are.
You give me an $m\times n$ matrix and I will give you a linear transformation $T\colon\mathbb{C}^n\to\mathbb{C}^m$. This is our first hint that there is some relationship between linear transformations and matrices.
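To make this first hint concrete, here is a minimal sketch in Python with NumPy (neither of which this text uses; the matrix and test vectors are invented for illustration): define $T(x)=Ax$ and spot-check the two defining properties of Definition LT numerically.

```python
import numpy as np

# Any m x n matrix A yields a function T : C^n -> C^m via T(x) = A x.
A = np.array([[1.0, 2.0, 0.0],
              [3.0, -1.0, 4.0]])       # 2 x 3, so T : C^3 -> C^2

def T(x):
    return A @ x

rng = np.random.default_rng(0)
x, y, alpha = rng.random(3), rng.random(3), 2.5
assert np.allclose(T(x + y), T(x) + T(y))        # preserves addition
assert np.allclose(T(alpha * x), alpha * T(x))   # preserves scalar multiples
```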
You give me a linear transformation $T\colon\mathbb{C}^n\to\mathbb{C}^m$ and I will give you an $m\times n$ matrix. This is our second hint that there is some relationship between linear transformations and matrices. Generalizing this relationship to arbitrary vector spaces (i.e. not just $\mathbb{C}^n$ and $\mathbb{C}^m$) will be the most important idea of Chapter R.
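And the second hint, as a sketch under the same assumed Python/NumPy setup, with an invented linear $T$: evaluating $T$ at the standard unit vectors and gluing the outputs together as columns recovers an $m\times n$ matrix that reproduces $T$.

```python
import numpy as np

# A concrete linear T : C^3 -> C^2, given by formulas rather than a matrix.
def T(x):
    return np.array([x[0] + 2 * x[1], 3 * x[0] - x[2]])

n = 3
B = np.column_stack([T(e) for e in np.eye(n)])   # column i is T(e_i)

x = np.array([1.0, -2.0, 5.0])
assert np.allclose(B @ x, T(x))   # the matrix B reproduces T on any input
```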
A simple idea, and as described in Exercise LT.T20, equivalent to Definition LT. The statement is really just a matter of convenience, as we will quote this one often.
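For the record, the idea in question is that a linear transformation respects linear combinations; paraphrasing (this display is our rendering, not a verbatim quote of the theorem),

$$T\bigl(a_1u_1+a_2u_2+\cdots+a_tu_t\bigr)=a_1T(u_1)+a_2T(u_2)+\cdots+a_tT(u_t).$$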
Another simple idea, but a powerful one. “It is enough to know what a linear transformation does to a basis.” At the outset of Chapter R, Theorem VRRB will help us define a very important function, and then Theorem LTDB will allow us to understand that this function is also a linear transformation.
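A small sketch of the slogan in Python with NumPy (the basis of $\mathbb{C}^2$ and the prescribed outputs are invented for illustration): once we know $T$ on a basis, linearity pins down $T$ everywhere.

```python
import numpy as np

# A basis {b1, b2} of C^2, with prescribed outputs T(b1), T(b2) in C^3.
b1, b2 = np.array([1.0, 1.0]), np.array([1.0, -1.0])
Tb1, Tb2 = np.array([1.0, 0.0, 2.0]), np.array([0.0, 3.0, 1.0])

def T(w):
    # Theorem VRRB: w is a unique linear combination a1*b1 + a2*b2,
    # so linearity forces T(w) = a1*T(b1) + a2*T(b2).
    a = np.linalg.solve(np.column_stack([b1, b2]), w)
    return a[0] * Tb1 + a[1] * Tb2

print(T(np.array([3.0, 1.0])))   # [2. 3. 5.], determined by the data above
```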
The preimage will be an important construction in this chapter, and this is one of the most important descriptions of the preimage. It should remind you very much of Theorem PSPHS. Also see Theorem RPI, which is described below.
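As a concrete illustration (a sketch in Python with NumPy, using an invented matrix; we take $T(x)=Ax$ so the kernel is easy to exhibit by hand), every element of the preimage of $v$ is one particular vector plus an element of the kernel, exactly in the spirit of Theorem PSPHS:

```python
import numpy as np

# T(x) = A x with a 2 x 3 matrix A; v is a vector in the codomain.
A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 2.0]])
v = np.array([3.0, 4.0])

u = np.array([3.0, 4.0, 0.0])     # one particular vector with T(u) = v
z = np.array([-1.0, -2.0, 1.0])   # spans the kernel: A @ z = 0

# Every vector of the form u + t*z lands on v, i.e. lies in the preimage.
for t in (-2.0, 0.0, 5.0):
    assert np.allclose(A @ (u + t * z), v)
```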
This theorem provides the most direct way of forming the range of a linear transformation. The resulting spanning set might well be linearly dependent, and beg for some clean-up, but that does not stop us from having very quickly formed a reasonable description of the range. If you find the determination of spanning sets or ranges difficult, this is one worth remembering. You can view this as the analogue of forming a column space by a direct application of Definition CSM.
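Here is the idea as a sketch (Python with NumPy, invented matrix): for $T(x)=Ax$ the images of the standard unit vectors are the columns of $A$, so the spanning set this theorem produces is precisely the one Definition CSM uses for the column space, dependencies and all.

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0]])   # second row is twice the first

def T(x):
    return A @ x

# Images of a spanning set of the domain span the range; with the unit
# vectors as the spanning set, the images are just the columns of A.
images = [T(e) for e in np.eye(3)]

# The spanning set may be linearly dependent and invite clean-up:
# here the range has dimension 1, not 3.
print(np.linalg.matrix_rank(np.column_stack(images)))   # prints 1
```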
Injectivity and surjectivity are independent concepts. You can have one without the other. But when you have both, you get invertibility, a linear transformation that can be run “backwards.” This result might explain the entire structure of the four sections in this chapter.
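A minimal sketch of “running backwards” (Python with NumPy, invented matrix): when $T(x)=Ax$ for an invertible $A$, the transformation is both injective and surjective, and its inverse simply solves $Ax=y$.

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 1.0]])        # invertible, so T(x) = A x is both
                                  # injective and surjective
def T(x):
    return A @ x

def T_inv(y):
    return np.linalg.solve(A, y)  # runs T "backwards"

x = np.array([3.0, -4.0])
assert np.allclose(T_inv(T(x)), x)   # composing gives the identity
```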
This is the promised generalization of Theorem RPNC about matrices. So the number of columns of a matrix is the analogue of the dimension of the domain. This will become even more precise in Chapter R. For now, this can be a powerful result for determining dimensions of kernels and ranges, and consequently, the injectivity or surjectivity of linear transformations. Never underestimate a theorem that counts something.
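A numerical spot-check of the count (a sketch leaning on NumPy and SciPy's null_space, neither of which this text uses; the matrix is invented): for $T(x)=Ax$ the rank is the dimension of the range, the nullity is the dimension of the kernel, and they sum to the dimension of the domain.

```python
import numpy as np
from scipy.linalg import null_space

A = np.array([[1.0, 2.0, 0.0, 1.0],
              [0.0, 1.0, 1.0, 1.0],
              [1.0, 3.0, 1.0, 2.0]])   # row 3 = row 1 + row 2, so rank 2

rank = np.linalg.matrix_rank(A)        # dim of the range of T(x) = A x
nullity = null_space(A).shape[1]       # dim of the kernel of T
assert rank + nullity == A.shape[1]    # 2 + 2 = 4, the dimension of C^4
```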