
Section 1.3 Orthogonal Complements

The theorem above on repeated sums shows that direct sums of several subspaces are of interest. However, when we begin with a vector space \(V\) and a single subspace \(U\text{,}\) we can ask about the existence of another subspace \(W\) such that \(V=U\ds W\text{.}\) The answer is that such a \(W\) always exists, and we then refer to it as a complement of \(U\text{.}\)

Definition 1.3.1. Subspace Complement.

Suppose that \(V\) is a vector space with a subspace \(U\text{.}\) If \(W\) is a subspace such that \(V=U\ds W\text{,}\) then \(W\) is a complement of \(U\text{.}\)

Every subspace has a complement, and generally it is not unique.

Suppose that \(\dimension{V}=n\) and \(\dimension{U}=k\text{,}\) and let \(B=\set{\vectorlist{u}{k}}\) be a basis of \(U\text{.}\) With \(n-k\) applications of Theorem ELIS we obtain vectors \(\vectorlist{v}{n-k}\) that successively create bases \(B_i=\set{\vectorlist{u}{k},\,\vectorlist{v}{i}}\text{,}\) \(0\leq i\leq n-k\text{,}\) for subspaces \(U=U_0, U_1,\dots,U_{n-k}=V\text{,}\) where \(\dimension{U_i}=k+i\text{.}\)

Define \(W=\spn{\set{\vectorlist{v}{n-k}}}\text{.}\) Since \(\set{\vectorlist{u}{k},\,\vectorlist{v}{n-k}}\) is a basis for \(V\) and \(\set{\vectorlist{u}{k}}\) is a basis for \(U\text{,}\) we can apply the theorem above on obtaining a direct sum from a basis to see that \(V=U\ds W\text{,}\) so \(W\) is a complement of \(U\text{.}\) (Compare with the theorem above on a direct sum from one subspace, which has identical content, but a different write-up.)
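
This construction is easy to carry out in coordinates. Below is a minimal sketch in Python with SymPy; the helper extend_to_basis is hypothetical (not from any library), and it simply tries standard basis vectors, keeping each one that enlarges the span.

```python
import sympy as sp

def extend_to_basis(vectors, n):
    """Hypothetical helper (not from any library): greedily extend a
    linearly independent list of column vectors in C^n to a basis of
    C^n, mirroring repeated applications of Theorem ELIS."""
    vecs = [sp.Matrix(v) for v in vectors]
    for i in range(n):
        if len(vecs) == n:
            break
        e = sp.zeros(n, 1)
        e[i] = 1  # candidate: i-th standard basis vector
        # keep the candidate only if it enlarges the span
        if sp.Matrix.hstack(*vecs, e).rank() == len(vecs) + 1:
            vecs.append(e)
    return vecs

# the vectors appended beyond the original list span a complement W of U
```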

The freedom we have when we “extend” a linearly independent set (or basis) to create a basis of \(V\) means that we can create a complement in many ways, so it is not unique.
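
For example, in \(\complex{2}\) the subspace \(U=\spn{\set{\colvector{1\\0}}}\) has, among many others, the two distinct complements

\begin{equation*} W_1=\spn{\set{\colvector{0\\1}}} \qquad\text{and}\qquad W_2=\spn{\set{\colvector{1\\1}}}\text{,} \end{equation*}

since in each case the two vectors involved form a basis of \(\complex{2}\text{,}\) while \(\colvector{1\\1}\) is an element of \(W_2\) but not of \(W_1\text{.}\)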

Consider the subspace \(U\) of \(V=\complex{3}\text{,}\)

\begin{equation*} U=\spn{\set{\colvector{1\\-6\\-8},\colvector{1\\-5\\-7}}}. \end{equation*}

Create two different complements of \(U\text{,}\) being sure to prove that your complements are unequal (and not simply that they have unequal bases). Before reading ahead, can you think of an ideal (or “canonical”) choice for the complement?

Consider the subspace \(U\) of \(V=\complex{5}\text{,}\)

\begin{equation*} U=\spn{\set{\colvector{1\\-4\\-2\\6\\-5},\colvector{1\\-4\\-1\\4\\-3}}}. \end{equation*}

Create a complement of \(U\text{.}\) (If you have read ahead, do not create an orthogonal complement for this exercise.)
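
Should you want to check a candidate complement computationally, here is a minimal sketch (an optional aid, not part of the exercise); the helper is_complement is hypothetical. It tests that a basis of \(U\) together with a basis of your candidate \(W\) forms a basis of \(\complex{5}\text{.}\)

```python
import sympy as sp

# spanning vectors for U, taken from the exercise above
U_basis = [sp.Matrix([1, -4, -2, 6, -5]),
           sp.Matrix([1, -4, -1, 4, -3])]

def is_complement(U_basis, W_basis, n):
    """Hypothetical check that C^n = U (+) W: the combined columns
    must number exactly n and be linearly independent."""
    cols = U_basis + W_basis
    return len(cols) == n and sp.Matrix.hstack(*cols).rank() == n

# supply your own candidate basis W_basis, then:
# print(is_complement(U_basis, W_basis, 5))
```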

With an inner product, and a notion of orthogonality, we can define a canonical, and useful, complement for every subspace.

Definition 1.3.5. Orthogonal Complement.

Suppose that \(V\) is a vector space with a subspace \(U\text{.}\) Then the orthogonal complement of \(U\) (relative to \(V\)) is

\begin{equation*} \per{U}=\setparts{\vect{v}\in V}{\innerproduct{\vect{v}}{\vect{u}}=0\text{ for every }\vect{u}\in U}\text{.} \end{equation*}

A matrix formulation of the orthogonal complement will help us establish that the moniker “complement” is deserved.

Theorem 1.3.6. Orthogonal Complement as a Null Space.

Suppose that \(U\) is a subspace of \(\complex{m}\text{,}\) and \(A\) is a matrix whose columns form a spanning set for \(U\text{.}\) Then \(\per{U}=\nsp{\adjoint{A}}\text{.}\)

Membership in the orthogonal complement requires a vector to be orthogonal to every vector of \(U\text{.}\) However, because of the linearity of the inner product (Theorem IPVA, Theorem IPSM), it is equivalent to require that a vector be orthogonal to each member of a spanning set for \(U\text{.}\) So membership in the orthogonal complement is equivalent to being orthogonal to each column of \(A\text{.}\) We obtain the desired set equality from the equivalences,

\begin{equation*} \vect{v}\in\per{U} \iff \adjoint{\vect{v}}A=\adjoint{\zerovector} \iff \adjoint{A}\vect{v}=\zerovector \iff \vect{v}\in\nsp{\adjoint{A}}. \end{equation*}
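
In coordinates this characterization is directly computable. Here is a minimal sketch in Python with SymPy, using illustrative spanning vectors that do not come from the text:

```python
import sympy as sp

# columns of A span U (illustrative data)
A = sp.Matrix([[1, 0],
               [0, 1],
               [1, 1]])

# the orthogonal complement of U is the null space of the
# adjoint (conjugate transpose) of A
perp_basis = A.H.nullspace()
print(perp_basis)  # [Matrix([[-1], [-1], [1]])]
```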

Theorem 1.3.7. Orthogonal Complement is a Complement.

Suppose that \(U\) is a subspace of the vector space \(V\text{.}\) Then \(V=U\ds\per{U}\text{.}\)

We first establish that \(U\cap\per{U}=\set{\zerovector}\text{.}\) Suppose \(\vect{u}\in U\) and \(\vect{u}\in\per{U}\text{.}\) Then \(\innerproduct{\vect{u}}{\vect{u}}=0\) and by Theorem PIP we conclude that \(\vect{u}=\zerovector\text{.}\)

We now show that an arbitrary vector \(\vect{v}\) can be written as a sum of vectors from \(U\) and \(\per{U}\text{.}\) Without loss of generality, we can assume we have an orthonormal basis for \(U\text{,}\) for if not, we can apply the Gram-Schmidt process to any basis of \(U\) to create an orthogonal spanning set, whose individual vectors can be scaled to have norm one (Theorem GSP). Denote this basis as \(B=\set{\vectorlist{u}{k}}\text{.}\)

Define the vector \(\vect{v}_1\) as the following linear combination of the vectors of \(B\text{,}\) so that \(\vect{v}_1\in U\text{.}\)

\begin{equation*} \vect{v}_1 = \sum_{i=1}^k\innerproduct{\vect{u}_i}{\vect{v}}\,\vect{u}_i\text{.} \end{equation*}

Define \(\vect{v}_2 = \vect{v}-\vect{v}_1\text{,}\) so trivially by construction, \(\vect{v}=\vect{v}_1+\vect{v}_2\text{.}\) It remains to show that \(\vect{v}_2\in\per{U}\text{.}\) We repeatedly use properties of the inner product. This construction and proof may remind you of the Gram-Schmidt process. For \(1\leq j\leq k\text{,}\)

\begin{align*} \innerproduct{\vect{v}_2}{\vect{u}_j} &=\innerproduct{\vect{v}}{\vect{u}_j}-\innerproduct{\vect{v}_1}{\vect{u}_j}\\ &=\innerproduct{\vect{v}}{\vect{u}_j}-\sum_{i=1}^k\innerproduct{\innerproduct{\vect{u}_i}{\vect{v}}\,\vect{u}_i}{\vect{u}_j}\\ &=\innerproduct{\vect{v}}{\vect{u}_j}-\sum_{i=1}^k\conjugate{\innerproduct{\vect{u}_i}{\vect{v}}}\innerproduct{\vect{u}_i}{\vect{u}_j}\\ &=\innerproduct{\vect{v}}{\vect{u}_j}-\conjugate{\innerproduct{\vect{u}_j}{\vect{v}}}\innerproduct{\vect{u}_j}{\vect{u}_j}\\ &=\innerproduct{\vect{v}}{\vect{u}_j}-\innerproduct{\vect{v}}{\vect{u}_j}\\ &=0 \end{align*}

We have fulfilled the hypotheses of Theorem 1.2.5 and so can say \(V=U\ds\per{U}\text{.}\)

Theorem 1.3.7 gives us a canonical choice of a complementary subspace, one with useful orthogonality properties. It also allows us to decompose any vector (uniquely) into an element of a subspace plus an orthogonal vector. This might remind you of “resolving a vector into components” if you have studied some physics.
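
Here is a sketch of this decomposition in Python with NumPy, assuming the columns of ortho_basis form an orthonormal basis of the subspace \(U\) (illustrative data). Note that np.vdot conjugates its first argument, matching the convention for \(\innerproduct{\vect{u}}{\vect{v}}\) used above.

```python
import numpy as np

# columns form an orthonormal basis of U (illustrative data)
ortho_basis = np.array([[1.0, 0.0],
                        [0.0, 1.0],
                        [0.0, 0.0]])

v = np.array([2.0, -3.0, 7.0])

# v1 = sum_i <u_i, v> u_i is the piece of v lying in U
v1 = sum(np.vdot(u, v) * u for u in ortho_basis.T)
v2 = v - v1  # the piece lying in the orthogonal complement

print(v1, v2)  # [ 2. -3.  0.] [0. 0. 7.]
print(np.vdot(ortho_basis.T[0], v2))  # 0.0, as the proof guarantees
```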

Given a matrix, we get a natural vector space decomposition.
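
Specifically, if \(A\) is an \(m\times n\) matrix, then its columns span the column space \(\csp{A}\text{,}\) and combining Theorem 1.3.6 (applied with \(U=\csp{A}\)) with Theorem 1.3.7 gives

\begin{equation*} \complex{m}=\csp{A}\ds\per{\csp{A}}=\csp{A}\ds\nsp{\adjoint{A}}\text{.} \end{equation*}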

Compute the orthogonal complement of the subspace \(U\subset\complex{3}\text{.}\)

\begin{equation*} U=\spn{\set{\colvector{1\\-1\\5}, \colvector{3\\1\\3} }} \end{equation*}
Solution

Form the matrix \(A\) whose columns are the two basis vectors given for \(U\text{,}\) and compute the null space \(\nsp{\adjoint{A}}\) by row-reducing the matrix (Theorem 1.3.6).

\begin{equation*} \adjoint{A}=\begin{bmatrix} 1 & -1 & 5\\ 3 & 1 & 3\end{bmatrix} \rref \begin{bmatrix} 1 & 0 & 2\\ 0 & 1 & -3\end{bmatrix} \end{equation*}

So

\begin{equation*} \per{U}=\nsp{\adjoint{A}}=\spn{\set{\colvector{-2\\3\\1}}} \end{equation*}
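
As an optional check in SymPy (mirroring the sketch after Theorem 1.3.6 above), the null space computation reproduces this basis vector:

```python
import sympy as sp

A = sp.Matrix([[1, 3],
               [-1, 1],
               [5, 3]])  # columns span U

print(A.H.nullspace())  # [Matrix([[-2], [3], [1]])]
```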

Compute the orthogonal complements of the two subspaces from Checkpoint 1.3.3 and Checkpoint 1.3.4. For the subspace of \(\complex{5}\text{,}\) verify that your first complement was not the orthogonal complement (or return to the exercise and find a complement that is not orthogonal).