From A First Course in Linear Algebra

Version 2.20

© 2004.

Licensed under the GNU Free Documentation License.

http://linear.ups.edu/

This Section is Incomplete

Given two points in the plane with different $x$-coordinates, there is a unique line through them. Given three such points, there is a unique polynomial of degree 2 or less passing through them. Given four such points, there is a unique polynomial of degree 3 or less passing through them. And so on. We can prove this result, and give a procedure for finding the polynomial, with the help of Vandermonde matrices (Section VM).

Theorem IP

Interpolating Polynomial

Suppose $\left\{\left(x_{i},\,y_{i}\right)\mid 1\le i\le n+1\right\}$ is a set of $n+1$ points in the plane where the $x$-coordinates are all different. Then there is a unique polynomial of degree $n$ or less, $p\left(x\right)$, such that $p\left(x_{i}\right)=y_{i}$, $1\le i\le n+1$. $\square$

Proof Write $p\left(x\right)={a}_{0}+{a}_{1}x+{a}_{2}{x}^{2}+\cdots +{a}_{n}{x}^{n}$. To meet the conclusion of the theorem, we desire,

$$y_{i}=p\left(x_{i}\right)=a_{0}+a_{1}x_{i}+a_{2}x_{i}^{2}+\cdots+a_{n}x_{i}^{n}\qquad 1\le i\le n+1$$

This is a system of $n+1$ linear equations in the $n+1$ variables $a_{0},\,a_{1},\,a_{2},\,\dots,\,a_{n}$. The vector of constants in this system is the vector containing the $y$-coordinates of the points. More importantly, the coefficient matrix is a Vandermonde matrix (Definition VM) built from the $x$-coordinates $x_{1},\,x_{2},\,x_{3},\,\dots,\,x_{n+1}$. Since we have required that these scalars all be different, Theorem NVM tells us that the coefficient matrix is nonsingular, and Theorem NMUS says the solution for the coefficients of the polynomial exists and is unique. As a practical matter, Theorem SNCM provides an expression for the solution. $\blacksquare$

Example PTFP

Polynomial through five points

Suppose we have the following 5 points in the plane and we wish to pass a degree 4 polynomial through them.

| $i$ | 1 | 2 | 3 | 4 | 5 |
|-----|-----|-----|-----|-----|------|
| $x_{i}$ | $-3$ | $-1$ | $2$ | $3$ | $6$ |
| $y_{i}$ | $276$ | $16$ | $31$ | $144$ | $2319$ |

The required system of equations has a coefficient matrix that is the Vandermonde matrix whose row $i$ contains successive powers of $x_{i}$,

$$A=\begin{bmatrix} 1 & -3 & 9 & -27 & 81 \\ 1 & -1 & 1 & -1 & 1 \\ 1 & 2 & 4 & 8 & 16 \\ 1 & 3 & 9 & 27 & 81 \\ 1 & 6 & 36 & 216 & 1296 \end{bmatrix}$$

Theorem SNCM provides the solution as

$$\begin{bmatrix}a_{0}\\a_{1}\\a_{2}\\a_{3}\\a_{4}\end{bmatrix} =A^{-1}\begin{bmatrix}276\\16\\31\\144\\2319\end{bmatrix} =\begin{bmatrix} -\frac{1}{15} & \frac{9}{14} & \frac{9}{10} & -\frac{1}{2} & \frac{1}{42} \\ 0 & -\frac{3}{7} & \frac{3}{4} & -\frac{1}{3} & \frac{1}{84} \\ \frac{5}{108} & -\frac{1}{56} & -\frac{1}{4} & \frac{17}{72} & -\frac{11}{756} \\ -\frac{1}{54} & \frac{1}{21} & -\frac{1}{12} & \frac{1}{18} & -\frac{1}{756} \\ \frac{1}{540} & -\frac{1}{168} & \frac{1}{60} & -\frac{1}{72} & \frac{1}{756} \end{bmatrix} \begin{bmatrix}276\\16\\31\\144\\2319\end{bmatrix} =\begin{bmatrix}3\\-4\\5\\-2\\2\end{bmatrix}$$

So the polynomial is $p\left(x\right)=3-4x+5x^{2}-2x^{3}+2x^{4}$. $\boxtimes$
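This example can be reproduced numerically. Below is a minimal sketch in plain Python (the `gauss_jordan_solve` helper is ours, not part of the text) that builds the Vandermonde coefficient matrix from the $x$-coordinates and solves the linear system directly, in exact rational arithmetic, rather than computing $A^{-1}$:

```python
from fractions import Fraction

def gauss_jordan_solve(A, b):
    """Solve the square nonsingular system A a = b by Gauss-Jordan
    elimination with partial pivoting, in exact rational arithmetic."""
    n = len(A)
    # augmented matrix [A | b] of Fractions
    M = [[Fraction(v) for v in row] + [Fraction(b[i])]
         for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))  # pivot row
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c and M[r][c] != 0:
                f = M[r][c] / M[c][c]
                M[r] = [mr - f * mc for mr, mc in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

xs = [-3, -1, 2, 3, 6]
ys = [276, 16, 31, 144, 2319]
# row i of the Vandermonde matrix holds successive powers of x_i
A = [[x ** k for k in range(len(xs))] for x in xs]
coeffs = gauss_jordan_solve(A, ys)
print(coeffs)  # a_0, ..., a_4 of the interpolating polynomial
```

Because the arithmetic is exact, the result matches the coefficients $3,\,-4,\,5,\,-2,\,2$ found above, with no rounding error.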

The unique polynomial passing through a set of points is known as the interpolating polynomial and it has many uses. Unfortunately, when confronted with data from an experiment the situation may not be so simple or clear cut. Read on.

Suppose that we have $n$ real variables, $x_{1},\,x_{2},\,x_{3},\,\dots,\,x_{n}$, that we can measure in an experiment. We believe that these variables combine, in a linear fashion, to equal another real variable, $y$. In other words, we have reason to believe from our understanding of the experiment, that

$$y=a_{1}x_{1}+a_{2}x_{2}+a_{3}x_{3}+\cdots+a_{n}x_{n}$$

where the scalars $a_{1},\,a_{2},\,a_{3},\,\dots,\,a_{n}$ are not known to us, but are what we wish to determine. We would call this our model of the situation. Then we run the experiment $m$ times, collecting sets of values for the variables of the experiment. For run number $k$ we might denote these values as $y_{k}$, $x_{k1}$, $x_{k2}$, $x_{k3}$, …, $x_{kn}$. If we substitute these values into the model equation, we get $m$ linear equations in the unknown coefficients $a_{1},\,a_{2},\,a_{3},\,\dots,\,a_{n}$. If $m=n$, then we have a square coefficient matrix that might happen to be nonsingular, and there would then be a unique solution.

However, it is more likely that $m>n$ (the more data we collect, the greater our confidence in the results) and the resulting system is inconsistent. It may be that our model is only an approximate understanding of the relationship between the $x_{i}$ and $y$, or our measurements are not completely accurate. Still, we would like to understand the situation we are studying, and would like some best answer for $a_{1},\,a_{2},\,a_{3},\,\dots,\,a_{n}$.

Let $y$ denote the vector with ${\left[y\right]}_{i}=y_{i}$, $1\le i\le m$, let $a$ denote the vector with ${\left[a\right]}_{j}=a_{j}$, $1\le j\le n$, and let $X$ denote the $m\times n$ matrix with ${\left[X\right]}_{ij}=x_{ij}$, $1\le i\le m$, $1\le j\le n$. Then the model equation, evaluated with each run of the experiment, translates to $Xa=y$. With the presumption that this system has no solution, we can try to minimize the difference between the two sides of the equation, $y-Xa$. As a vector, it is hard to say what a minimum might mean, so we instead minimize the square of its norm,

$$S={\left(y-Xa\right)}^{t}\left(y-Xa\right)$$

To keep the logical flow accurate, we will define the minimizing value and then give the proof that it behaves as desired.

Definition LSS

Least Squares Solution

Given the equation $Xa=y$, where $X$ is an $m\times n$ matrix of rank $n$, the least squares solution for $a$ is ${\left({X}^{t}X\right)}^{-1}{X}^{t}y$. $\triangle$
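Definition LSS can be tried out on a tiny example. The sketch below is plain Python with made-up data ($m=4$ runs, $n=2$ coefficients, chosen so that $Xa=y$ is inconsistent); rather than forming ${\left({X}^{t}X\right)}^{-1}$ explicitly, it solves the algebraically equivalent system ${X}^{t}Xa={X}^{t}y$ by Cramer's rule, in exact rational arithmetic:

```python
from fractions import Fraction as F

# made-up overdetermined data: m = 4 runs, n = 2 unknown coefficients,
# chosen so that X a = y has no exact solution
X = [[1, 0], [0, 1], [1, 1], [1, 2]]
y = [2, 3, 5, 9]

m, n = len(X), len(X[0])
# assemble the normal equations: (X^t X) a = X^t y
XtX = [[sum(X[i][j] * X[i][k] for i in range(m)) for k in range(n)]
       for j in range(n)]
Xty = [sum(X[i][j] * y[i] for i in range(m)) for j in range(n)]

# n = 2, so the 2x2 system can be solved by Cramer's rule exactly
det = XtX[0][0] * XtX[1][1] - XtX[0][1] * XtX[1][0]
a = [F(Xty[0] * XtX[1][1] - XtX[0][1] * Xty[1], det),
     F(XtX[0][0] * Xty[1] - Xty[0] * XtX[1][0], det)]
print(a)  # the least squares solution a' = (X^t X)^{-1} X^t y
```

The residual $y-Xa$ is nonzero at this solution, but no other choice of $a$ makes its norm smaller, which is the content of Theorem LSMR below.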

Theorem LSMR

Least Squares Minimizes Residuals

Suppose that $X$ is an $m\times n$ matrix of rank $n$. The least squares solution of $Xa=y$, ${a}^{\prime}={\left({X}^{t}X\right)}^{-1}{X}^{t}y$, minimizes the expression

$$S={\left(y-Xa\right)}^{t}\left(y-Xa\right)$$

$\square$
Proof We begin by finding the critical points of $S$. In preparation, let ${X}_{j}$ denote column $j$ of $X$, for $1\le j\le n$ and compute partial derivatives with respect to ${a}_{j}$, $1\le j\le n$. A matrix product of the form ${x}^{t}y$ is a sum of products, so a derivative is a sum of applications of the product rule,

$$\begin{aligned}
\frac{\partial S}{\partial a_{j}} &=\frac{\partial}{\partial a_{j}}\left({\left(y-Xa\right)}^{t}\left(y-Xa\right)\right)\\
&=\sum_{i=1}^{m}\frac{\partial}{\partial a_{j}}\left({\left[y-Xa\right]}_{i}\right){\left[y-Xa\right]}_{i}+{\left[y-Xa\right]}_{i}\frac{\partial}{\partial a_{j}}\left({\left[y-Xa\right]}_{i}\right)\\
&=2\sum_{i=1}^{m}\frac{\partial}{\partial a_{j}}\left({\left[y-Xa\right]}_{i}\right){\left[y-Xa\right]}_{i}\\
&=2\sum_{i=1}^{m}\frac{\partial}{\partial a_{j}}\left({\left[y\right]}_{i}-\sum_{k=1}^{n}{\left[X\right]}_{ik}{\left[a\right]}_{k}\right){\left[y-Xa\right]}_{i}\\
&=2\sum_{i=1}^{m}\left(-{\left[X\right]}_{ij}\right){\left[y-Xa\right]}_{i}\\
&=-2{\left({X}_{j}\right)}^{t}\left(y-Xa\right)
\end{aligned}$$

The first partial derivatives will allow us to find critical points, while the second partial derivatives will be needed to confirm that a critical point yields a minimum. Return to the next-to-last expression for the first partial derivative of $S$,

$$\begin{aligned}
\frac{\partial^{2} S}{\partial a_{\ell}\,\partial a_{j}} &=\frac{\partial}{\partial a_{\ell}}\left(-2\sum_{i=1}^{m}{\left[X\right]}_{ij}{\left[y-Xa\right]}_{i}\right)\\
&=-2\sum_{i=1}^{m}{\left[X\right]}_{ij}\frac{\partial}{\partial a_{\ell}}\left({\left[y\right]}_{i}-\sum_{k=1}^{n}{\left[X\right]}_{ik}{\left[a\right]}_{k}\right)\\
&=-2\sum_{i=1}^{m}{\left[X\right]}_{ij}\left(-{\left[X\right]}_{i\ell}\right)\\
&=2\sum_{i=1}^{m}{\left[X\right]}_{ij}{\left[X\right]}_{i\ell}\\
&=2\sum_{i=1}^{m}{\left[{X}^{t}\right]}_{ji}{\left[X\right]}_{i\ell}\\
&=2{\left[{X}^{t}X\right]}_{j\ell}
\end{aligned}$$

For $1\le j\le n$, set $\frac{\partial S}{\partial a_{j}}=0$. This results in the $n$ scalar equations

$${\left({X}_{j}\right)}^{t}Xa={\left({X}_{j}\right)}^{t}y\qquad 1\le j\le n$$

These $n$ scalar equations can be summarized in the single vector equation,

$${X}^{t}Xa={X}^{t}y$$

${X}^{t}X$ is an $n\times n$ matrix and, since we have assumed that $X$ has rank $n$, ${X}^{t}X$ will also have rank $n$. Since ${X}^{t}X$ is invertible, we have a critical point at

$${a}^{\prime}={\left({X}^{t}X\right)}^{-1}{X}^{t}y$$

Is this lone critical point really a minimum? The matrix of second partial derivatives is constant, and a positive multiple of ${X}^{t}X$. Theorem CPSM tells us that this matrix is positive semi-definite, and since ${X}^{t}X$ is also nonsingular, it is in fact positive definite. In an advanced course on multivariable calculus, it is shown that a minimum occurs exactly where the matrix of second partial derivatives is positive definite. You may have seen this in the two-variable case, where the check on positive definiteness is disguised as a determinant condition on the $2\times 2$ matrix of second partial derivatives. $\blacksquare$
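The formula for $\frac{\partial S}{\partial a_{j}}$ derived in the proof can be sanity-checked numerically. The sketch below is plain Python; the small $X$, $y$, and $a$ are made-up test data. It compares $-2{\left({X}_{j}\right)}^{t}\left(y-Xa\right)$ against a central finite difference of $S$:

```python
def S(X, y, a):
    """S = (y - Xa)^t (y - Xa), the squared norm of the residual."""
    r = [yi - sum(Xij * aj for Xij, aj in zip(row, a))
         for row, yi in zip(X, y)]
    return sum(ri * ri for ri in r)

def grad(X, y, a):
    """The derived formula: dS/da_j = -2 (X_j)^t (y - Xa)."""
    r = [yi - sum(Xij * aj for Xij, aj in zip(row, a))
         for row, yi in zip(X, y)]
    m, n = len(X), len(a)
    return [-2 * sum(X[i][j] * r[i] for i in range(m)) for j in range(n)]

# made-up test data
X = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
y = [7.0, 8.0, 9.0]
a = [0.5, -1.5]

h = 1e-6
for j in range(len(a)):
    ap, am = a[:], a[:]
    ap[j] += h
    am[j] -= h
    fd = (S(X, y, ap) - S(X, y, am)) / (2 * h)  # central difference
    assert abs(fd - grad(X, y, a)[j]) < 1e-4
```

Since $S$ is quadratic in $a$, the central difference agrees with the exact derivative up to floating-point rounding, so the tight tolerance holds.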

T20 Theorem IP constructs a unique polynomial through a set of $n+1$ points in the plane, $\left\{\left({x}_{i},\phantom{\rule{0.3em}{0ex}}{y}_{i}\right)\mid 1\le i\le n+1\right\}$, where the $x$-coordinates are all different. Prove that the expression below is the same polynomial and include an explanation of the necessity of the hypothesis that the $x$-coordinates are all different.

$$p\left(x\right)=\sum_{i=1}^{n+1}y_{i}\prod_{\substack{j=1\\ j\ne i}}^{n+1}\frac{x-x_{j}}{x_{i}-x_{j}}$$

This is known as the Lagrange form of the interpolating polynomial.
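Before attempting the proof, the Lagrange form can be checked experimentally against Example PTFP. A minimal sketch in plain Python, using exact rational arithmetic (the `lagrange` helper is ours):

```python
from fractions import Fraction

def lagrange(points, x):
    """Evaluate the Lagrange form of the interpolating polynomial at x:
    p(x) = sum_i y_i * prod_{j != i} (x - x_j) / (x_i - x_j)."""
    total = Fraction(0)
    for i, (xi, yi) in enumerate(points):
        term = Fraction(yi)
        for j, (xj, _) in enumerate(points):
            if j != i:
                term *= Fraction(x - xj, xi - xj)
        total += term
    return total

# the five points of Example PTFP
points = [(-3, 276), (-1, 16), (2, 31), (3, 144), (6, 2319)]

# agrees everywhere with p(x) = 3 - 4x + 5x^2 - 2x^3 + 2x^4,
# the polynomial found in the example
for x in range(-5, 6):
    assert lagrange(points, x) == 3 - 4*x + 5*x**2 - 2*x**3 + 2*x**4
```

Agreement at eleven points is no accident: both expressions are polynomials of degree 4 or less through the same five points, so by the uniqueness in Theorem IP they are the same polynomial, which is what the exercise asks you to prove.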

Contributed by Robert Beezer