Section 2.1 LU (Triangular) Decomposition
Subsection 2.1.1 LU Decomposition, Nonsingular Case
Theorem 2.1.1. LU (Triangular) Decomposition.
Suppose \(A\) is a square matrix of size \(n\text{.}\) Let \(A_k\) be the \(k\times k\) matrix formed from \(A\) by taking the first \(k\) rows and the first \(k\) columns. Suppose that \(A_k\) is nonsingular for all \(1\leq k\leq n\text{.}\) Then there is a lower triangular matrix \(L\) with all of its diagonal entries equal to 1 and an upper triangular matrix \(U\) such that \(A=LU\text{.}\) Furthermore, this decomposition is unique.
Proof.
We will row reduce \(A\) to a row-equivalent upper triangular matrix through a series of row operations, forming intermediate matrices \(A^\prime_j\text{,}\) \(1\leq j\leq n\text{,}\) that denote the state of the conversion after working on column \(j\text{.}\) First, the lone entry of \(A_1\) is \(\matrixentry{A}{11}\) and this scalar must be nonzero if \(A_1\) is nonsingular (Theorem SMZD). We can use row operations (Definition RO) of the form \(\rowopadd{\alpha}{1}{k}\text{,}\) \(2\leq k\leq n\text{,}\) where \(\alpha=-\matrixentry{A}{k1}/\matrixentry{A}{11}\text{,}\) to place zeros in the first column below the diagonal. The first two rows and columns of \(A^\prime_1\) are a \(2\times 2\) upper triangular matrix whose determinant is equal to the determinant of \(A_2\text{,}\) since the matrices are row-equivalent through a sequence of row operations strictly of the third type (Theorem DRCMA). As such, the diagonal entries of this \(2\times 2\) submatrix of \(A^\prime_1\) are nonzero. We can employ this nonzero diagonal element with row operations of the form \(\rowopadd{\alpha}{2}{k}\text{,}\) \(3\leq k\leq n\text{,}\) to place zeros below the diagonal in the second column. We can continue this process, column by column. The key observations are that our hypothesis on the nonsingularity of the \(A_k\) will guarantee a nonzero diagonal entry for each column when we need it, that the row operations employed are always of the third type using a multiple of a row to transform another row with a greater row index, and that the final result will be a nonsingular upper triangular matrix. This is the desired matrix \(U\text{.}\)
Each row operation described in the previous paragraph can be accomplished with matrix multiplication by the appropriate elementary matrix (Theorem EMDRO). Since every row operation employed is adding a multiple of a row to a subsequent row, these elementary matrices are of the form \(\elemadd{\alpha}{j}{k}\) with \(j<k\text{.}\) By Definition ELEM, these matrices are lower triangular with every diagonal entry equal to 1. We know that the product of two such matrices will again be lower triangular (Theorem PTMT), but also, as you can easily check with a proof in a style similar to the one above, that the product maintains all 1's on the diagonal. Let \(E_1,\,E_2,\,E_3,\,\dots,\,E_m\) denote the elementary matrices for this sequence of row operations. Then
\begin{equation*}
E_mE_{m-1}\cdots E_3E_2E_1A=L^\prime A=U
\end{equation*}
where \(L^\prime\) is the product of the elementary matrices, and we know \(L^\prime\) is lower triangular with all 1's on the diagonal. Our desired matrix \(L\) is then \(L=\inverse{\left(L^\prime\right)}\text{.}\) By Theorem ITMT, \(L\) is lower triangular with all 1's on the diagonal and \(A=LU\text{,}\) as desired.
The process just described is deterministic. That is, the proof is constructive, with no freedom for each of us to walk through it differently. But could there be other matrices with the same properties as \(L\) and \(U\) that give such a decomposition of \(A\text{?}\) In other words, is the decomposition unique (Proof Technique U)? Suppose that we have two triangular decompositions, \(A=L_1U_1\) and \(A=L_2U_2\text{.}\) Since \(A\) is nonsingular, two applications of Theorem NPNT imply that \(L_1,\,L_2,\,U_1,\,U_2\) are all nonsingular. We have
\begin{equation*}
\inverse{L_2}L_1
=\inverse{L_2}L_1U_1\inverse{U_1}
=\inverse{L_2}L_2U_2\inverse{U_1}
=U_2\inverse{U_1}\text{.}
\end{equation*}
Theorem ITMT tells us that \(\inverse{L_2}\) is lower triangular and has 1's as the diagonal entries. By Theorem PTMT, the product \(\inverse{L_2}L_1\) is again lower triangular, and it is simple to check (as before) that the diagonal entries of the product are again all 1's. By an entirely similar process we can conclude that the product \(U_2\inverse{U_1}\) is upper triangular. Because these two products are equal, their common value is a matrix that is both lower triangular and upper triangular, with all 1's on the diagonal. The only matrix meeting these three requirements is the identity matrix (Definition IM). So we have
\begin{equation*}
\inverse{L_2}L_1=I_n=U_2\inverse{U_1}
\end{equation*}
and therefore \(L_1=L_2\) and \(U_1=U_2\text{,}\)
which establishes the uniqueness of the decomposition.
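The constructive process in the proof translates directly into an algorithm. The following sketch (with a small hypothetical matrix, not one from the text) eliminates below the diagonal column by column with row operations of the third type, recording each multiplier; the recorded multipliers form exactly the matrix \(L=\inverse{\left(L^\prime\right)}\text{.}\)

```python
import numpy as np

def lu_nopivot(A):
    """LU decomposition without pivoting, mirroring the constructive proof:
    zero out each column below the diagonal with row operations of the
    third type, recording every multiplier in L.  Assumes each leading
    principal submatrix A_k is nonsingular, so no diagonal entry is zero."""
    A = np.array(A, dtype=float)
    n = A.shape[0]
    L = np.eye(n)
    U = A.copy()
    for j in range(n - 1):              # work on column j
        for i in range(j + 1, n):       # zero out entries below the diagonal
            alpha = U[i, j] / U[j, j]   # multiplier: R_i <- R_i - alpha * R_j
            U[i, :] -= alpha * U[j, :]
            L[i, j] = alpha             # inverse of L' accumulates the multipliers
    return L, U

# a hypothetical nonsingular 3x3 matrix (not from the text)
A = [[2.0, 1.0, 1.0],
     [4.0, 3.0, 3.0],
     [8.0, 7.0, 9.0]]
L, U = lu_nopivot(A)
assert np.allclose(L @ U, A)                               # A = LU
assert np.allclose(np.diag(L), 1.0)                        # unit diagonal on L
assert np.allclose(np.tril(L), L) and np.allclose(np.triu(U), U)
```

Note that no pivoting is performed; the nonsingularity hypothesis on the \(A_k\) is precisely what guarantees the divisions by \(U[j,j]\) succeed.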
Example 2.1.2. Triangular decomposition, size 4.
In this example, we will illustrate the process for computing a triangular decomposition, as described in the previous paragraphs. Consider the nonsingular square matrix \(A\) of size 4,
We form \(M\) by augmenting \(A\) with the size 4 identity matrix \(I_4\text{.}\) We will perform the allowed operations, column by column, only reporting intermediate results as we finish converting each column. It is easy to determine exactly which row operations we perform, since the final four columns contain a record of each such operation. We will not verify our hypotheses about the nonsingularity of the \(A_k\text{,}\) since if we do not have these conditions, we will reach a stage where a diagonal entry is zero and we cannot create the row operations we need to zero out the bottom portion of the associated column. In other words, we can boldly proceed and the necessity of our hypotheses will become apparent.
So at this point, we have \(U\) and \(L^\prime\text{,}\)
Then by whatever procedure we like (such as Theorem CINM), we find
It is instructive to verify that indeed \(LU=A\text{.}\)
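The augmented-matrix bookkeeping of this example can be checked numerically. The sketch below uses a small hypothetical matrix (the size-4 matrix of the example is not reproduced here): row-reducing \(M=[A\mid I]\) with third-type row operations only leaves \(U\) in the left half and \(L^\prime\) in the right half, and \(L\) is then the inverse of \(L^\prime\text{.}\)

```python
import numpy as np

# Row-reduce the augmented matrix [A | I] using only third-type row
# operations; the left half becomes U and the right half records L'.
A = np.array([[2., 1., 1.],
              [4., 3., 3.],
              [8., 7., 9.]])          # hypothetical matrix, not the one from the text
n = A.shape[0]
M = np.hstack([A, np.eye(n)])        # M = [A | I]
for j in range(n - 1):
    for i in range(j + 1, n):
        M[i, :] -= (M[i, j] / M[j, j]) * M[j, :]
U, L_prime = M[:, :n], M[:, n:]
L = np.linalg.inv(L_prime)           # recover L by inverting L' (cf. Theorem CINM)
assert np.allclose(L @ U, A)         # verify that indeed LU = A
```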
Subsection 2.1.2 Solving Systems with Triangular Decompositions
In this section we give an explanation of why you might be interested in a triangular decomposition for a matrix. Many of the computational problems in linear algebra revolve around solving large systems of equations, or nearly equivalently, finding inverses of large matrices. Suppose we have a system of equations with coefficient matrix \(A\) and vector of constants \(\vect{b}\text{,}\) and suppose further that \(A\) has the triangular decomposition \(A=LU\text{.}\)

Let \(\vect{y}\) be the solution to the linear system \(\linearsystem{L}{\vect{b}}\text{,}\) so that by Theorem SLEMM, we have \(L\vect{y}=\vect{b}\text{.}\) Notice that since \(L\) is nonsingular, this solution is unique, and the form of \(L\) makes it trivial to solve the system. The first component of \(\vect{y}\) is determined easily, and we can continue on through the remaining components of \(\vect{y}\text{,}\) without ever dividing. Now, with \(\vect{y}\) in hand, consider the linear system \(\linearsystem{U}{\vect{y}}\text{.}\) Let \(\vect{x}\) be the unique solution to this system, so by Theorem SLEMM we have \(U\vect{x}=\vect{y}\text{.}\) Notice that a system of equations with \(U\) as a coefficient matrix is also straightforward to solve, though we will compute the bottom entries of \(\vect{x}\) first, and we will need to divide. The upshot of all this is that \(\vect{x}\) is a solution to \(\linearsystem{A}{\vect{b}}\text{,}\) as we now show,
\begin{equation*}
A\vect{x}=LU\vect{x}=L\vect{y}=\vect{b}\text{.}
\end{equation*}

Example 2.1.3. Triangular decomposition solves a system of equations.
Here we illustrate the previous discussion, recycling the decomposition found previously in Example TD4. Consider the linear system \(\linearsystem{A}{\vect{b}}\) with
First we solve the system \(\linearsystem{L}{\vect{b}}\) (see Example TD4 for \(L\)),
Then
so
Then we solve the system \(\linearsystem{U}{\vect{y}}\) (see Example TD4 for \(U\)),
Then
And so
is the solution to \(\linearsystem{U}{\vect{y}}\) and consequently is the unique solution to \(\linearsystem{A}{\vect{b}}\text{,}\) as you can easily verify.
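The two-stage solve just illustrated, forward substitution with \(L\) (no division, thanks to the unit diagonal) followed by back substitution with \(U\), can be sketched as follows. The matrices and vector here are small hypothetical stand-ins, since Example TD4's data is not reproduced in this excerpt.

```python
import numpy as np

def forward_substitute(L, b):
    """Solve Ly = b for unit lower triangular L, top entry first.
    No division is needed since every diagonal entry of L is 1."""
    n = len(b)
    y = np.zeros(n)
    for i in range(n):
        y[i] = b[i] - L[i, :i] @ y[:i]
    return y

def back_substitute(U, y):
    """Solve Ux = y for nonsingular upper triangular U, bottom entry
    first, dividing by the diagonal entries."""
    n = len(y)
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (y[i] - U[i, i + 1:] @ x[i + 1:]) / U[i, i]
    return x

# hypothetical L, U, b (not the matrices of Example TD4)
L = np.array([[1., 0., 0.], [2., 1., 0.], [4., 3., 1.]])
U = np.array([[2., 1., 1.], [0., 1., 1.], [0., 0., 2.]])
b = np.array([4., 10., 26.])
y = forward_substitute(L, b)
x = back_substitute(U, y)
assert np.allclose(L @ U @ x, b)    # x solves LS(A, b) where A = LU
```

Once \(L\) and \(U\) are in hand, each new vector of constants \(\vect{b}\) costs only these two cheap triangular solves, which is the practical payoff of computing the decomposition once.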
Subsection 2.1.3 Computing Triangular Decompositions
It would be a simple matter to adjust the algorithm for converting a matrix to reduced row-echelon form and obtain an algorithm to compute the triangular decomposition of the matrix, along the lines of Example TD4 and the discussion preceding this example. However, it is possible to obtain relatively simple formulas for the entries of the decomposition, and if computed in the proper order, an implementation will be straightforward. We will state the result as a theorem and then give an example of its use.

Theorem 2.1.4. Triangular Decomposition, Entry by Entry.
Suppose that \(A\) is a square matrix of size \(n\) with a triangular decomposition \(A=LU\text{,}\) where \(L\) is lower triangular with diagonal entries all equal to 1, and \(U\) is upper triangular. Then
\begin{align*}
\matrixentry{U}{ij}&=\matrixentry{A}{ij}-\sum_{k=1}^{i-1}\matrixentry{L}{ik}\matrixentry{U}{kj}&1&\leq i\leq j\leq n\\
\matrixentry{L}{ij}&=\frac{1}{\matrixentry{U}{jj}}\left(\matrixentry{A}{ij}-\sum_{k=1}^{j-1}\matrixentry{L}{ik}\matrixentry{U}{kj}\right)&1&\leq j<i\leq n\text{.}
\end{align*}
Proof.
Consider the entry in row \(i\) and column \(j\) of the product \(LU\text{.}\) Because \(L\) is lower triangular with unit diagonal and \(U\) is upper triangular, the sum defining \(\matrixentry{LU}{ij}\) truncates after \(\min(i,j)\) terms; setting \(\matrixentry{LU}{ij}=\matrixentry{A}{ij}\) and solving for \(\matrixentry{U}{ij}\) when \(i\leq j\text{,}\) or for \(\matrixentry{L}{ij}\) when \(i>j\text{,}\) yields the stated formulas.
Example 2.1.5. Triangular decomposition, entry by entry, size 6.
We illustrate the application of the formulas in Theorem TDEE for the \(6\times 6\) matrix \(A\text{.}\)
Using the notational convenience of packaging the two triangular matrices into one matrix, and using the ordering of the computations mentioned above, we display the results after computing a single row and column of each of the two triangular matrices.
Splitting out the pieces of this matrix, we have the decomposition,
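The entry-by-entry formulas of Theorem 2.1.4, computed in the order just described (a row of \(U\text{,}\) then a column of \(L\text{,}\) alternating), can be sketched as follows. The matrix here is a small hypothetical stand-in, not the \(6\times 6\) matrix of the example.

```python
import numpy as np

def lu_entry_by_entry(A):
    """Compute L and U from the entry-by-entry formulas of Theorem 2.1.4,
    alternately filling a row of U and then a column of L, so every value
    on the right-hand side of a formula is available when it is needed."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    L = np.eye(n)
    U = np.zeros((n, n))
    for i in range(n):
        for j in range(i, n):        # row i of U
            U[i, j] = A[i, j] - L[i, :i] @ U[:i, j]
        for k in range(i + 1, n):    # column i of L
            L[k, i] = (A[k, i] - L[k, :i] @ U[:i, i]) / U[i, i]
    return L, U

A = np.array([[2., 1., 1.],
              [4., 3., 3.],
              [8., 7., 9.]])          # hypothetical matrix, not the 6x6 from the text
L, U = lu_entry_by_entry(A)
assert np.allclose(L @ U, A)
```

Each entry of \(A\) is read exactly once and overwritten by an entry of \(L\) or \(U\text{,}\) which is why this ordering lends itself to an in-place implementation that packages both triangular matrices into a single array, as in the display above.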