From A First Course in Linear Algebra

Version 2.00

© 2004.

Licensed under the GNU Free Documentation License.

http://linear.ups.edu/

This Section is a Draft, Subject to Changes

Alexandre-Théophile Vandermonde was a French mathematician in the 1700’s who was among the first to write about basic properties of the determinant (such as the effect of swapping two rows). However, the determinant that bears his name (Theorem DVM) does not appear in any of his four published mathematical papers.

Definition VM

Vandermonde Matrix

A square matrix $A$ of size $n$ is a Vandermonde matrix if there are scalars $x_1,\,x_2,\,x_3,\,\dots,\,x_n$ such that $[A]_{ij}=x_i^{j-1}$, $1\le i\le n$, $1\le j\le n$.
$\triangle$
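The definition translates directly into a short computation. Here is a minimal Python sketch (the function name `vandermonde` is ours, not from the text) that builds the matrix $[A]_{ij}=x_i^{j-1}$ from a list of scalars:

```python
def vandermonde(xs):
    # Build the square Vandermonde matrix with [A]_ij = x_i^(j-1):
    # row i lists successive powers of x_i, starting with x_i^0 = 1.
    n = len(xs)
    return [[x ** (j - 1) for j in range(1, n + 1)] for x in xs]

A = vandermonde([2, -3, 1, 4])
for row in A:
    print(row)
```

Each row is $(1,\,x_i,\,x_i^2,\,\dots,\,x_i^{n-1})$; the first column is all ones.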

Example VM4

Vandermonde matrix of size 4

The matrix

$$A=\begin{bmatrix}1&2&4&8\\1&-3&9&-27\\1&1&1&1\\1&4&16&64\end{bmatrix}$$

is a Vandermonde matrix since it meets the definition with $x_1=2$, $x_2=-3$, $x_3=1$, $x_4=4$. $\boxtimes$

Vandermonde matrices are not very interesting as numerical matrices, but instead appear more often in proofs and applications where the scalars ${x}_{i}$ are carried as symbols. Two such applications are in the sections on secret-sharing (Section SAS) and curve-fitting (Section CF). Principally, we would like to know when Vandermonde matrices are nonsingular, and the most convenient way to check this is by determining when the determinant is nonzero (Theorem SMZD). As a bonus, the determinant of a Vandermonde matrix has an especially pleasing formula.

Theorem DVM

Determinant of a Vandermonde Matrix

Suppose that $A$ is a Vandermonde matrix of size $n$ built with the scalars $x_1,\,x_2,\,x_3,\,\dots,\,x_n$. Then

$$\det(A)=\prod_{1\le i<j\le n}\left(x_j-x_i\right)$$
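Before working through the proof, the product formula $\det(A)=\prod_{1\le i<j\le n}(x_j-x_i)$ can be checked numerically. The following Python sketch (helper names `vandermonde` and `det` are our own) compares a cofactor-expansion determinant against the product, using the scalars of Example VM4:

```python
from itertools import combinations

def vandermonde(xs):
    n = len(xs)
    return [[x ** (j - 1) for j in range(1, n + 1)] for x in xs]

def det(M):
    # Determinant by cofactor expansion about the first row;
    # exact integer arithmetic, fine for small matrices.
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([r[:j] + r[j + 1:] for r in M[1:]])
               for j in range(len(M)))

xs = [2, -3, 1, 4]
lhs = det(vandermonde(xs))

rhs = 1
for i, j in combinations(range(len(xs)), 2):   # all pairs i < j
    rhs *= xs[j] - xs[i]

print(lhs, rhs)
```

Both computations agree, as the theorem predicts.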

Proof The proof is by induction (Technique I) on $n$, the size of the matrix. An empty product for a $1\times 1$ matrix might make a good base case, but we’ll start at $n=2$ instead. For a $2\times 2$ Vandermonde matrix, we have

$$\det(A)=\begin{vmatrix}1&x_1\\1&x_2\end{vmatrix}=x_2-x_1=\prod_{1\le i<j\le 2}\left(x_j-x_i\right)$$

For the induction step we will perform row operations on $A$ to obtain the determinant of $A$ as a multiple of the determinant of an $(n-1)\times(n-1)$ Vandermonde matrix. The notation in this theorem tends to obscure your intuition about the changes effected by the various row and column manipulations. Construct a $4\times 4$ Vandermonde matrix with four symbols as the scalars ($x_1$, $x_2$, $x_3$, $x_4$, or perhaps $a$, $b$, $c$, $d$) and play along with the example as you study the proof.

First we convert most of the first column to zeros. Subtract row $n$ from each of the other $n-1$ rows to form a matrix $B$. By Theorem DRCMA, $B$ has the same determinant as $A$. The entries of $B$, in the first $n-1$ rows, i.e. for $1\le i\le n-1$, $1\le j\le n-1$, are
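This first step can be played out numerically. A small Python sketch (all helper names are our own) forms $B$ by subtracting row $n$ from each of the other rows and confirms that the determinant is unchanged, as Theorem DRCMA asserts:

```python
def vandermonde(xs):
    n = len(xs)
    return [[x ** (j - 1) for j in range(1, n + 1)] for x in xs]

def det(M):
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([r[:j] + r[j + 1:] for r in M[1:]])
               for j in range(len(M)))

A = vandermonde([2, -3, 1, 4])
# Subtract row n from each of the first n-1 rows; by Theorem DRCMA
# (adding a multiple of one row to another) the determinant is unchanged.
B = [[a - b for a, b in zip(row, A[-1])] for row in A[:-1]] + [A[-1]]

print([row[0] for row in B])   # first column: zeros above a final 1
print(det(A) == det(B))
```

The first column of $B$ is now zero except for the $1$ in row $n$, setting up the column expansion used below.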

$$[B]_{ij}=x_i^{j-1}-x_n^{j-1}=\left(x_i-x_n\right)\sum_{k=0}^{j-2}x_i^{j-2-k}x_n^{k}$$

As the elements of row $i$, $1\le i\le n-1$, have the common factor $(x_i-x_n)$, we form the new matrix $C$ that differs from $B$ by the removal of this factor from each of the first $n-1$ rows. This will change the determinant, as we will track carefully in a moment. We also have a first column with zeros in each location except row $n$, so we can use it for a column expansion computation of the determinant. We now know,
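The factorization used here is the standard identity $x^m-y^m=(x-y)\sum_{k=0}^{m-1}x^{m-1-k}y^{k}$, with $m=j-1$. A quick Python check over a grid of small integers (the function name is illustrative):

```python
def split_power_difference(x, y, m):
    # x^m - y^m = (x - y) * sum_{k=0}^{m-1} x^(m-1-k) * y^k
    return (x - y) * sum(x ** (m - 1 - k) * y ** k for k in range(m))

# Verify the identity on a grid of small integers (m = j - 1 in the proof).
ok = all(split_power_difference(x, y, m) == x ** m - y ** m
         for x in range(-3, 4) for y in range(-3, 4) for m in range(1, 6))
print(ok)
```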

$$\begin{aligned}\det(A)&=\det(B)&&\text{Theorem DRCMA}\\&=\left(x_1-x_n\right)\left(x_2-x_n\right)\cdots\left(x_{n-1}-x_n\right)\det(C)&&\text{Theorem DRCM}\\&=\left(x_1-x_n\right)\left(x_2-x_n\right)\cdots\left(x_{n-1}-x_n\right)(1)\left(-1\right)^{n+1}\det\left(C\left(n|1\right)\right)&&\text{Theorem DEC}\\&=\left(x_1-x_n\right)\left(x_2-x_n\right)\cdots\left(x_{n-1}-x_n\right)\left(-1\right)^{n-1}\det\left(C\left(n|1\right)\right)\\&=\left(x_n-x_1\right)\left(x_n-x_2\right)\cdots\left(x_n-x_{n-1}\right)\det\left(C\left(n|1\right)\right)\end{aligned}$$

For convenience, denote $D=C(n|1)$. Entries of this matrix are similar to those of $B$, but the factors used to build $C$ are gone, and since the first column is gone, there is a slight re-indexing relative to the columns. For $1\le i\le n-1$, $1\le j\le n-1$,

$$[D]_{ij}=\sum_{k=0}^{j-1}x_i^{j-1-k}x_n^{k}$$

We will perform many column operations on the matrix $D$, always of the type where we multiply a column by a scalar and add the result to another column. As such, Theorem DRCMA ensures that the determinant will remain constant. We will work column by column, left to right, to convert $D$ into a Vandermonde matrix with scalars $x_1,\,x_2,\,x_3,\,\dots,\,x_{n-1}$. More precisely, we will build a sequence of matrices $D=D_1$, $D_2$, …, $D_{n-1}$, where each is obtainable from the previous by a sequence of determinant-preserving column operations and the first $\ell$ columns of $D_{\ell}$ are the first $\ell$ columns of a Vandermonde matrix with scalars $x_1,\,x_2,\,x_3,\,\dots,\,x_{n-1}$. We could establish this claim by induction (Technique I) on $\ell$ if we were to expand the claim to specify the exact values of the final $n-1-\ell$ columns as well. Since the claim is that matrices with certain properties exist, we will instead establish the claim by constructing the desired matrices one-by-one procedurally. The extension to an inductive proof should be clear, but not especially illuminating.

Set ${D}_{1}=D$ to begin, and note that the entries of the first column of ${D}_{1}$ are, for $1\le i\le n-1$,

$$[D_1]_{i1}=\sum_{k=0}^{1-1}x_i^{1-1-k}x_n^{k}=1=x_i^{1-1}$$

So the first column of $D_1$ has the properties we desire. We will use this column of all 1’s to remove the highest power of $x_n$ from each of the remaining columns and so build $D_2$. Precisely, perform the $n-2$ column operations where column 1 is multiplied by $x_n^{j-1}$ and subtracted from column $j$, for $2\le j\le n-1$. Call the result $D_2$, and examine its entries in columns $2$ through $n-1$. For $1\le i\le n-1$, $2\le j\le n-1$,

$$\begin{aligned}[D_2]_{ij}&=-x_n^{j-1}[D_1]_{i1}+[D_1]_{ij}\\&=-x_n^{j-1}(1)+\sum_{k=0}^{j-1}x_i^{j-1-k}x_n^{k}\\&=-x_n^{j-1}+x_i^{j-1-(j-1)}x_n^{j-1}+\sum_{k=0}^{j-2}x_i^{j-1-k}x_n^{k}\\&=\sum_{k=0}^{j-2}x_i^{j-1-k}x_n^{k}\end{aligned}$$

In particular, we examine column 2 of $D_2$. For $1\le i\le n-1$,

$$[D_2]_{i2}=\sum_{k=0}^{2-2}x_i^{2-1-k}x_n^{k}=x_i^{1}=x_i^{2-1}$$

Now, form $D_3$. Perform the $n-3$ column operations where column 2 of $D_2$ is multiplied by $x_n^{j-2}$ and subtracted from column $j$, for $3\le j\le n-1$. The result is $D_3$, whose entries we now compute. For $1\le i\le n-1$, $3\le j\le n-1$,

$$\begin{aligned}[D_3]_{ij}&=-x_n^{j-2}[D_2]_{i2}+[D_2]_{ij}\\&=-x_n^{j-2}x_i^{1}+\sum_{k=0}^{j-2}x_i^{j-1-k}x_n^{k}\\&=-x_n^{j-2}x_i^{1}+x_i^{j-1-(j-2)}x_n^{j-2}+\sum_{k=0}^{j-3}x_i^{j-1-k}x_n^{k}\\&=\sum_{k=0}^{j-3}x_i^{j-1-k}x_n^{k}\end{aligned}$$

Specifically, we examine column 3 of $D_3$. For $1\le i\le n-1$,

$$[D_3]_{i3}=\sum_{k=0}^{3-3}x_i^{3-1-k}x_n^{k}=x_i^{2}=x_i^{3-1}$$

We could continue this procedure $n-4$ more times, eventually totaling $\frac{1}{2}\left(n^2-3n+2\right)$ column operations, and arriving at $D_{n-1}$, the Vandermonde matrix of size $n-1$ built from the scalars $x_1,\,x_2,\,x_3,\,\dots,\,x_{n-1}$. Informally, we chop off the last term of every sum, until a single term is left in a column, and it is of the right form for the Vandermonde matrix. This desired column is then used in the next iteration to chop off some more final terms for columns to the right. Now we can apply our induction hypothesis to the determinant of $D_{n-1}$ and arrive at an expression for $\det(A)$,
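The whole sweep of column operations can be simulated. The following Python sketch (our own code, not from the text) builds $D$ for the scalars of Example VM4, performs the determinant-preserving column operations left to right, and watches $D$ collapse into the Vandermonde matrix built from $x_1,\,x_2,\,\dots,\,x_{n-1}$:

```python
def vandermonde(xs):
    n = len(xs)
    return [[x ** (j - 1) for j in range(1, n + 1)] for x in xs]

xs = [2, -3, 1, 4]
n = len(xs)
xn = xs[-1]

# Entries of D (0-indexed; column c here is column c+1 in the text):
# [D]_ij = sum_{k=0}^{j-1} x_i^(j-1-k) * x_n^k for 1 <= i, j <= n-1.
D = [[sum(xs[i] ** (c - k) * xn ** k for k in range(c + 1))
      for c in range(n - 1)]
     for i in range(n - 1)]

# Sweep left to right: multiply column ell by x_n^(j-ell) and subtract it
# from each column j to its right, chopping the last term of every sum.
for ell in range(n - 2):
    for j in range(ell + 1, n - 1):
        factor = xn ** (j - ell)
        for i in range(n - 1):
            D[i][j] -= factor * D[i][ell]

print(D == vandermonde(xs[:-1]))
```

The result matches the size-$(n-1)$ Vandermonde matrix exactly, which is the key structural fact the induction hypothesis needs.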

$$\begin{aligned}\det(A)&=\prod_{k=1}^{n-1}\left(x_n-x_k\right)\det(D)\\&=\prod_{k=1}^{n-1}\left(x_n-x_k\right)\det\left(D_{n-1}\right)&&\text{Theorem DRCMA}\\&=\prod_{k=1}^{n-1}\left(x_n-x_k\right)\prod_{1\le i<j\le n-1}\left(x_j-x_i\right)&&\text{Induction Hypothesis}\\&=\prod_{1\le i<j\le n}\left(x_j-x_i\right)\end{aligned}$$

which is the desired result. $\blacksquare$

Before we had Theorem DVM we could see that if two of the scalar values were equal, then the Vandermonde matrix would have two equal rows and hence be singular (Theorem DERC, Theorem SMZD). But with this expression for the determinant, we can establish the converse.

Theorem NVM

Nonsingular Vandermonde Matrix

A Vandermonde matrix of size $n$ with scalars $x_1,\,x_2,\,x_3,\,\dots,\,x_n$ is nonsingular if and only if the scalars are all different.
$\square $

Proof Let $A$ denote the Vandermonde matrix with scalars $x_1,\,x_2,\,x_3,\,\dots,\,x_n$. By Theorem SMZD, $A$ is nonsingular if and only if the determinant of $A$ is nonzero. The determinant is given by Theorem DVM, and this product is nonzero if and only if each factor of the product is nonzero. This condition translates to $x_i-x_j\ne 0$ whenever $i\ne j$. In other words, the matrix is nonsingular if and only if the scalars are all different. $\blacksquare$
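Theorem NVM is easy to observe numerically: repeating a scalar forces two equal rows and a zero determinant, while distinct scalars give a nonzero one. A short Python sketch (helper names are ours):

```python
def vandermonde(xs):
    n = len(xs)
    return [[x ** (j - 1) for j in range(1, n + 1)] for x in xs]

def det(M):
    # Cofactor expansion about the first row, in exact integer arithmetic.
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([r[:j] + r[j + 1:] for r in M[1:]])
               for j in range(len(M)))

print(det(vandermonde([2, -3, 1, 4])))   # distinct scalars: nonzero
print(det(vandermonde([2, -3, 2, 4])))   # x_1 = x_3 forces two equal rows: 0
```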