Section MMA Mathematica

From A First Course in Linear Algebra
Version 2.10
© 2004.
Licensed under the GNU Free Documentation License.
http://linear.ups.edu/

Computation Note ME.MMA: Matrix Entry

Matrices are input as lists of lists, since a list is a basic data structure in Mathematica. A matrix is a list of rows, with each row entered as a list. Mathematica uses braces, { and }, to delimit lists. So the input

a = {{1, 2, 3, 4}, {5, 6, 7, 8}, {9, 10, 11, 12}}

would create a 3 × 4 matrix named a that is equal to

\begin{bmatrix} 1 & 2 & 3 & 4 \\ 5 & 6 & 7 & 8 \\ 9 & 10 & 11 & 12 \end{bmatrix}

To display a matrix named a “nicely” in Mathematica, type MatrixForm[a], and the output will be displayed with rows and columns. If you just type a, then you will get a list of lists, exactly as you entered it in the first place.
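For instance, a minimal sketch of a short session with this matrix (the name a is just a convenient choice) might be

a = {{1, 2, 3, 4}, {5, 6, 7, 8}, {9, 10, 11, 12}}
Dimensions[a]     (* should return {3, 4}, the number of rows and columns *)
MatrixForm[a]     (* displays the entries arranged in rows and columns *)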

Computation Note RR.MMA: Row Reduce

If a is the name of a matrix in Mathematica, then the command RowReduce[a] will output the reduced row-echelon form of the matrix.
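For example, if a is the 3 × 4 matrix entered in Computation ME.MMA, then a sketch of the computation is

RowReduce[a]

which should return {{1, 0, -1, -2}, {0, 1, 2, 3}, {0, 0, 0, 0}}, the reduced row-echelon form given as a list of rows.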

Computation Note LS.MMA: Linear Solve

Mathematica will solve a linear system of equations using the LinearSolve[] command. The inputs are a matrix with the coefficients of the variables (but not the column of constants), and a list containing the constant terms of each equation. This will look a bit odd, since the lists in the matrix are rows, but the column of constants is also input as a list and so looks like a row rather than a column. The result will be a single solution (even if there are infinitely many), reported as a list, or the statement that there is no solution. When there are infinitely many, the single solution reported is exactly that solution used in the proof of Theorem RCLS, where the free variables are all set to zero, and the dependent variables come along with values from the final column of the row-reduced matrix.

As an example, Archetype A is

\begin{aligned}
x_1 - x_2 + 2x_3 &= 1\\
2x_1 + x_2 + x_3 &= 8\\
x_1 + x_2 &= 5
\end{aligned}

To ask Mathematica for a solution, enter

LinearSolve[{{1, -1, 2}, {2, 1, 1}, {1, 1, 0}}, {1, 8, 5}]

and you will get back the single solution

{3, 2, 0}

We will see later how to coax Mathematica into giving us infinitely many solutions for this system (Computation VFSS.MMA).
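A quick check of this answer, a sketch using a matrix-vector product (Computation MM.MMA), is the comparison

{{1, -1, 2}, {2, 1, 1}, {1, 1, 0}}.{3, 2, 0} == {1, 8, 5}

which should return True, since multiplying the coefficient matrix by the solution reproduces the column of constants.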

Computation Note VLC.MMA: Vector Linear Combinations

Contributed by Robert Beezer
Vectors in Mathematica are represented as lists, written and displayed horizontally. For example, the vector

v = \begin{bmatrix} 1 \\ 2 \\ 3 \\ 4 \end{bmatrix}

would be entered and named via the command

v = {1, 2, 3, 4}

Vector addition and scalar multiplication are then very natural. If u and v are two lists of equal length, then

2u + (-3)v

will compute the correct vector and return it as a list. If u and v have different sizes, then Mathematica will complain about “objects of unequal length.”
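As a small sketch (the names u and v are arbitrary), the session

u = {1, 2, 3, 4}
v = {2, 0, -1, 3}
2u + (-3)v

should return {-4, 4, 9, -1}, the linear combination computed entry by entry.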

Computation Note NS.MMA: Null Space

Given a matrix A, Mathematica will compute a set of column vectors whose span is the null space of the matrix with the NullSpace[] command. Perhaps not coincidentally, this set is exactly { z_j ∣ 1 ≤ j ≤ n − r }. However, Mathematica prefers to output the vectors in the opposite order from the one we have chosen. Here is a small example.

Begin with the 3 × 4 matrix A, and its row-reduced version B,

A = \begin{bmatrix} 1 & 2 & -1 & 0 \\ 3 & 4 & 1 & -2 \\ -1 & 1 & -5 & 3 \end{bmatrix}
\xrightarrow{\text{RREF}}
B = \begin{bmatrix} \mathbf{1} & 0 & 3 & -2 \\ 0 & \mathbf{1} & -2 & 1 \\ 0 & 0 & 0 & 0 \end{bmatrix}

We could extract entries from B to build the vectors z_1 and z_2 according to Theorem SSNS and describe N(A) as a span of the set {z_1, z_2}. Instead, if a has been set to A, then executing the command NullSpace[a] yields the list of lists (column vectors),

{{2, -1, 0, 1}, {-3, 2, 1, 0}}

Notice how our z_1 is second in the list. To “correct” this we can use a list-processing command from Mathematica, Reverse[], as follows,

Reverse[NullSpace[a]]

and receive the output in our preferred order. Give it a try yourself.
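To gain a bit of confidence in the output, you can check that the matrix sends each of these vectors to the zero vector. A minimal sketch, assuming a still holds the matrix A:

(a . #) & /@ NullSpace[a]

Each entry of the resulting list should be {0, 0, 0}, confirming that every vector returned lies in the null space.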

Computation Note VFSS.MMA: Vector Form of Solution Set

Suppose that A is an m × n matrix and b ∈ ℂ^m is a column vector. We might wish to find all of the solutions to the linear system ℒS(A, b). Mathematica’s LinearSolve[A, b] will return at most one solution (Computation LS.MMA). When the system is consistent, this one solution is exactly the vector c described in the statement of Theorem VFSLS.

The vectors u_j, 1 ≤ j ≤ n − r, of Theorem VFSLS are exactly the output of Mathematica’s NullSpace[] command, though Mathematica lists them in the opposite order from the order we have chosen. These are the same vectors listed as z_j, 1 ≤ j ≤ n − r, in Theorem SSNS. With c produced from the LinearSolve[] command, and the u_j coming from the NullSpace[] command, we can use Mathematica’s symbolic manipulation commands to create an expression that describes all of the solutions.

Begin with the system ℒS(A, b). Row-reduce A (Computation RR.MMA) and identify the free variables by determining the non-pivot columns. Suppose, for the sake of argument, that we have the three free variables x_3, x_7 and x_8. Then the following command will build an expression for an arbitrary solution:

LinearSolve[A, b] + {x8, x7, x3}.NullSpace[A]

Be sure to include the “dot” right before the NullSpace[] command — it has the effect of creating a linear combination of the vectors in the null space, using scalars that are symbols reminiscent of the variables.

A concrete example should help here. Suppose we want a solution set for the linear system with coefficient matrix A and vector of constants b,

A = \begin{bmatrix} 1 & 2 & 3 & -5 & 1 & -1 & 2 \\ 2 & 4 & 0 & 8 & -4 & 1 & -8 \\ 3 & 6 & 4 & 0 & -2 & 5 & 7 \end{bmatrix}
\qquad
b = \begin{bmatrix} 8 \\ 1 \\ -5 \end{bmatrix}

If we were to apply Theorem VFSLS, we would extract the components of c and the u_j from the row-reduced version of the augmented matrix of the system (obtained with Mathematica, Computation RR.MMA),

\begin{bmatrix} \mathbf{1} & 2 & 0 & 4 & -2 & 0 & -5 & 2 \\ 0 & 0 & \mathbf{1} & -3 & 1 & 0 & 3 & 1 \\ 0 & 0 & 0 & 0 & 0 & \mathbf{1} & 2 & -3 \end{bmatrix}

Instead, we will use this augmented matrix in reduced row-echelon form only to identify the free variables. In this example, we locate the non-pivot columns and see that x_2, x_4, x_5 and x_7 are free. If we have set a to the coefficient matrix and b to the vector of constants, then we execute the Mathematica command,

LinearSolve[a, b] + {x7, x5, x4, x2}.NullSpace[a]

As output we obtain the column vector (list),

\begin{bmatrix} 2 - 2x_2 - 4x_4 + 2x_5 + 5x_7 \\ x_2 \\ 1 + 3x_4 - x_5 - 3x_7 \\ x_4 \\ x_5 \\ -3 - 2x_7 \\ x_7 \end{bmatrix}
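A quick way to build confidence in this expression is to choose values for the free variables and check that the result really does solve the system. A minimal sketch, assuming a and b are still set as above (here % stands for the most recent output):

sol = LinearSolve[a, b] + {x7, x5, x4, x2}.NullSpace[a]
sol /. {x2 -> 0, x4 -> 0, x5 -> 0, x7 -> 0}
a . % == b

The replacement sets every free variable to zero, giving the particular solution {2, 0, 1, 0, 0, -3, 0}, and the final test should return True.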

Computation Note GSP.MMA: Gram-Schmidt Procedure

Mathematica has a built-in routine that will do the Gram-Schmidt procedure (Theorem GSP). The input is a set of vectors, which must be linearly independent. This is written as a list, containing lists that are the vectors. Let a be such a list of lists, containing the vectors v_i, 1 ≤ i ≤ p, from the statement of the theorem. You will need to first load the right Mathematica package: execute <<LinearAlgebra`Orthogonalization` to make this happen. Then execute GramSchmidt[a]. The output will be another list of lists containing the vectors u_i, 1 ≤ i ≤ p, from the statement of the theorem. Mathematica will complain if you do not provide a linearly independent set as input (try it!).

An example. Suppose our linearly independent set (check this!) is

S = \left\{
\begin{bmatrix} -1 \\ 4 \\ 1 \\ 0 \\ 3 \end{bmatrix},\;
\begin{bmatrix} 0 \\ 3 \\ 0 \\ 3 \\ -3 \end{bmatrix},\;
\begin{bmatrix} -1 \\ 2 \\ 0 \\ -1 \\ -2 \end{bmatrix},\;
\begin{bmatrix} -1 \\ -2 \\ -3 \\ 1 \\ 4 \end{bmatrix},\;
\begin{bmatrix} 1 \\ 6 \\ -1 \\ 4 \\ 6 \end{bmatrix}
\right\}

The output of the GramSchmidt[] command will be the set,

T = \left\{
\begin{bmatrix} -\frac{1}{3\sqrt{3}} \\ \frac{4}{3\sqrt{3}} \\ \frac{1}{3\sqrt{3}} \\ 0 \\ \frac{1}{\sqrt{3}} \end{bmatrix},\;
\begin{bmatrix} \frac{1}{12\sqrt{15}} \\ \frac{23}{12\sqrt{15}} \\ -\frac{1}{12\sqrt{15}} \\ \frac{3}{4}\sqrt{\frac{3}{5}} \\ -\frac{1}{2}\sqrt{\frac{5}{3}} \end{bmatrix},\;
\begin{bmatrix} -\frac{37}{4\sqrt{685}} \\ \frac{29}{4\sqrt{685}} \\ -\frac{3}{4\sqrt{685}} \\ -\frac{79}{4\sqrt{685}} \\ -\frac{5}{2}\sqrt{\frac{5}{137}} \end{bmatrix},\;
\begin{bmatrix} -\frac{337}{2\sqrt{120423}} \\ -\frac{37}{6\sqrt{120423}} \\ -\frac{1763}{6\sqrt{120423}} \\ \frac{337}{6\sqrt{120423}} \\ \frac{50}{\sqrt{120423}} \end{bmatrix},\;
\begin{bmatrix} \frac{23}{\sqrt{879}} \\ \frac{26}{3\sqrt{879}} \\ -\frac{44}{3\sqrt{879}} \\ -\frac{23}{3\sqrt{879}} \\ \frac{1}{\sqrt{879}} \end{bmatrix}
\right\}

Ugly, but true. At this stage, you might as well think of the Gram-Schmidt procedure as a computational black box: linearly independent set in, orthogonal span-preserving set out.

To check that the output set is orthogonal, we can easily check the orthogonality of individual pairs of vectors. Suppose the output was set equal to b (say via b = GramSchmidt[a]). We can extract the individual vectors of b as “parts” with syntax like b[[3]], which would return the third vector in the set. When our vectors have only real number entries, we can accomplish an inner product with a “dot.” So, for example, you should discover that b[[3]].b[[5]] will return zero. Try it yourself with another pair of vectors.
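Putting the pieces together, a minimal sketch of a whole session with the set S above might look like the following (the package-loading line applies to the older versions of Mathematica assumed here; in later versions the built-in Orthogonalize[] command plays a similar role):

<<LinearAlgebra`Orthogonalization`
a = {{-1, 4, 1, 0, 3}, {0, 3, 0, 3, -3}, {-1, 2, 0, -1, -2}, {-1, -2, -3, 1, 4}, {1, 6, -1, 4, 6}}
b = GramSchmidt[a]
Simplify[b[[3]].b[[5]]]    (* should simplify to 0, confirming orthogonality *)

Wrapping the dot product in Simplify[] is a small precaution, since the entries involve square roots and may not reduce to zero automatically.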

Computation Note TM.MMA: Transpose of a Matrix

Contributed by Robert Beezer
Suppose a is the name of a matrix stored in Mathematica. Then Transpose[a] will create the transpose of a .
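For example, the command

Transpose[{{1, 2, 3}, {4, 5, 6}}]

should return {{1, 4}, {2, 5}, {3, 6}}, the rows of the original matrix becoming the columns of the result.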

Computation Note MM.MMA: Matrix Multiplication

If A and B are matrices defined in Mathematica, then A.B will return the product of the two matrices (notice the dot between the matrices). If A is a matrix and v is a vector, then A.v will return the vector that is the matrix-vector product of A and v. In every case the sizes of the matrices and vectors need to be correct.

Some examples:

{{1, 2}, {3, 4}}.{{5, 6, 7}, {8, 9, 10}} = {{21, 24, 27}, {47, 54, 61}}
{{1, 2}, {3, 4}}.{{5}, {6}} = {{17}, {39}}
{{1, 2}, {3, 4}}.{5, 6} = {17, 39}

Understanding the difference between the last two examples (a 2 × 1 matrix versus a vector) will go a long way toward explaining how some Mathematica constructs work.

Computation Note MI.MMA: Matrix Inverse

If A is a matrix defined in Mathematica, then Inverse[A] will return the inverse of A, should it exist. In the case where A does not have an inverse, Mathematica will tell you the matrix is singular (see Theorem NI).
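As a small sketch, with matrices chosen just for illustration,

Inverse[{{1, 2}, {3, 4}}]

should return {{-2, 1}, {3/2, -1/2}}, while

Inverse[{{1, 2}, {2, 4}}]

should produce a message that the matrix is singular, since the second row is a multiple of the first.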