
Section 4.2 Curve Fitting

Subsection 4.2.1 Interpolating Polynomials

Given two points in the plane with distinct \(x\)-coordinates, there is a unique line through them. Given three such points, not all on a line, there is a unique parabola through them. Given four such points, there is a unique polynomial of degree 3 or less passing through them. And so on. We can prove this result, and give a procedure for finding the polynomial, with the help of Vandermonde matrices ((((section on vandermonde matrices)))).

Write \(p(x)=a_0+a_1x+a_2x^2+\cdots+a_nx^n\text{.}\) For the polynomial to pass through the \(n+1\) points \((x_i,y_i)\text{,}\) we desire,

\begin{align*} y_i&=p(x_i)=a_0+a_1x_i+a_2x_i^2+\cdots+a_nx_i^n && 1\leq i\leq n+1 \end{align*}

This is a system of \(n+1\) linear equations in the \(n+1\) variables \(a_0,\,a_1,\,a_2,\,\ldots,\,a_n\text{.}\) The vector of constants in this system contains the \(y\)-coordinates of the points. More importantly, the coefficient matrix is a Vandermonde matrix ((((definition-vandermonde)))) built from the \(x\)-coordinates \(\scalarlist{x}{n+1}\text{.}\) Since we have required these scalars to be distinct, (((theorem-NVM))) tells us that the coefficient matrix is nonsingular, and Theorem NMUS says that a solution for the coefficients of the polynomial exists and is unique. As a practical matter, Theorem SNCM provides an expression for the solution.
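As a concrete illustration, the procedure can be sketched in Python with NumPy (an assumption; the text itself prescribes no software): build the Vandermonde coefficient matrix from the \(x\)-coordinates and solve the resulting system.

```python
import numpy as np

def interpolating_poly_coeffs(xs, ys):
    """Coefficients a_0, ..., a_n of the unique polynomial of degree
    at most n passing through the n+1 points (x_i, y_i)."""
    xs = np.asarray(xs, dtype=float)
    ys = np.asarray(ys, dtype=float)
    # Coefficient matrix: row i holds the successive powers 1, x_i, ..., x_i^n.
    V = np.vander(xs, increasing=True)
    # Distinct x_i make V nonsingular, so the solution exists and is unique.
    return np.linalg.solve(V, ys)

# Three points, not on a line, determine a parabola: these lie on y = 1 + x^2.
print(interpolating_poly_coeffs([0, 1, 2], [1, 2, 5]))
```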

Suppose we have the following 5 points in the plane and we wish to pass a degree 4 polynomial through them.

\begin{equation*} \begin{array}{r|rrrrr} i & 1 & 2 & 3 & 4 & 5 \\ \hline x_i & -3 & -1 & 2 & 3 & 6 \\ y_i & 276 & 16 & 31 & 144 & 2319 \end{array} \end{equation*}

Table 4.2.3. Points on a polynomial

The required system of equations has as its coefficient matrix the Vandermonde matrix whose row \(i\) contains the successive powers of \(x_i\text{:}\)

\begin{equation*} A= \begin{bmatrix} 1 & -3 & 9 & -27 & 81 \\ 1 & -1 & 1 & -1 & 1 \\ 1 & 2 & 4 & 8 & 16 \\ 1 & 3 & 9 & 27 & 81 \\ 1 & 6 & 36 & 216 & 1296 \end{bmatrix} \end{equation*}

Theorem SNCM provides the solution as

\begin{align*} \colvector{a_0\\a_1\\a_2\\a_3\\a_4} &=\inverse{A}\colvector{276 \\ 16 \\ 31 \\ 144 \\ 2319}\\ &= \begin{bmatrix} -\frac{1}{15} & \frac{9}{14} & \frac{9}{10} & -\frac{1}{2} & \frac{1}{42} \\ 0 & -\frac{3}{7} & \frac{3}{4} & -\frac{1}{3} & \frac{1}{84} \\ \frac{5}{108} & -\frac{1}{56} & -\frac{1}{4} & \frac{17}{72} & -\frac{11}{756} \\ -\frac{1}{54} & \frac{1}{21} & -\frac{1}{12} & \frac{1}{18} & -\frac{1}{756} \\ \frac{1}{540} & -\frac{1}{168} & \frac{1}{60} & -\frac{1}{72} & \frac{1}{756} \end{bmatrix} \colvector{276 \\ 16 \\ 31 \\ 144 \\ 2319} =\colvector{3 \\ -4 \\ 5 \\ -2 \\ 2} \end{align*}

So the polynomial is \(p(x)=3 - 4x + 5x^2 - 2x^3 + 2x^4\text{.}\)
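The computation above can be checked numerically; here is a short sketch using NumPy (an assumption, not part of the text), solving the same Vandermonde system and confirming the polynomial passes through all five points.

```python
import numpy as np

xs = np.array([-3.0, -1.0, 2.0, 3.0, 6.0])
ys = np.array([276.0, 16.0, 31.0, 144.0, 2319.0])

# Vandermonde coefficient matrix: row i holds 1, x_i, x_i^2, x_i^3, x_i^4.
A = np.vander(xs, increasing=True)
coeffs = np.linalg.solve(A, ys)  # [a_0, a_1, a_2, a_3, a_4] = [3, -4, 5, -2, 2]

# Confirm the polynomial passes through every point.
assert np.allclose(np.polyval(coeffs[::-1], xs), ys)
```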

The unique polynomial passing through a set of points is known as the interpolating polynomial, and it has many uses. Unfortunately, when confronted with data from an experiment, the situation may not be so simple or clear-cut. Read on.

Subsection 4.2.2 Fitting Curves Through Data

To construct an interpolating polynomial, we presumed we wanted a curve passing through a set of points exactly. Sometimes we have a similar, but distinctly different, situation. We have a set of data points \(x_i\text{,}\) \(1\leq i\leq n\text{,}\) where the \(x_i\) are \(m\)-tuples. We have a model or a physical law which suggests that each \(m\)-tuple satisfies some linear equation with \(k\) unknown parameters. We wish to estimate the parameters. If we can formulate a linear system with the parameters as the variables, then we can use a least-squares estimate (Section 4.1). We illustrate with two examples.

Suppose we believe the twelve data points below are related by a degree three polynomial, \(y=p(x)=a_0+a_1x+a_2x^2+a_3x^3\text{.}\) We have four unknown parameters, the coefficients of the polynomial. For each point we can create a \(5\)-tuple, \((1, x_i, x_i^2, x_i^3, y_i)\text{,}\) with entries that are related by the linear equation \(a_0+a_1x_i+a_2x_i^2+a_3x_i^3=y_i\text{.}\) So we have \(12\) linear equations in \(4\) variables. The coefficient matrix \(A\) has \(12\) rows and \(4\) columns, similar in spirit to a Vandermonde matrix (Section 5.1), though not square. The vector of constants is the \(12\) values of \(y_i\text{.}\)

\begin{equation*} \begin{array}{rr} x_i & y_i \\ \hline 0.142 & -10.867 \\ 0.645 & 10.120 \\ 0.953 & 8.1728 \\ 2.958 & 11.693 \\ 2.975 & 18.931 \\ 3.167 & 16.215 \\ 3.413 & 3.863 \\ 4.301 & -7.971 \\ 5.552 & -24.108 \\ 6.576 & -31.217 \\ 7.957 & 0.719 \\ 8.027 & 9.550 \end{array} \end{equation*}

Table 4.2.5. Points on a polynomial

Here are the relevant pieces of the system, the normal equations, and the solution.

\begin{align*} A&= \begin{bmatrix} 1 & 0.142 & 0.020 & 0.003 \\ 1 & 0.646 & 0.417 & 0.269 \\ 1 & 0.954 & 0.909 & 0.867 \\ 1 & 2.958 & 8.751 & 25.886 \\ 1 & 2.975 & 8.851 & 26.332 \\ 1 & 3.167 & 10.032 & 31.775 \\ 1 & 3.413 & 11.649 & 39.757 \\ 1 & 4.302 & 18.504 & 79.595 \\ 1 & 5.552 & 30.830 & 171.180 \\ 1 & 6.576 & 43.247 & 284.403 \\ 1 & 7.958 & 63.325 & 503.917 \\ 1 & 8.028 & 64.444 & 517.341 \end{bmatrix} & \vect{b}&= \colvector{-10.867 \\ 10.120 \\ 8.172 \\ 11.693 \\ 18.931 \\ 16.215 \\ 3.863 \\ -7.971 \\ -24.108 \\ -31.217 \\ 0.719 \\ 9.550 } \end{align*}
\begin{align*} \adjoint{A}A&= \begin{bmatrix} 12.000 & 46.671 & 260.978 & 1681.324 \\ 46.671 & 260.978 & 1681.324 & 11718.472 \\ 260.978 & 1681.324 & 11718.472 & 85542.108 \\ 1681.324 & 11718.472 & 85542.108 & 642050.755 \end{bmatrix} \end{align*}
\begin{align*} \adjoint{A}\vect{b}&= \colvector{5.102 \\ -122.81 \\ -1090.783 \\ -6856.475} & \vect{x}&= \colvector{-17.726 \\ 47.157 \\ -16.122 \\ 1.323} \end{align*}

So the polynomial obtained from a least-squares fit is

\begin{equation*} \hat{p}(x)=1.323x^3-16.122x^2+47.157x-17.726 \end{equation*}
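The same estimate can be reproduced numerically. A sketch with NumPy (assumed, not part of the text): `numpy.linalg.lstsq` minimizes the residual norm directly, which yields the same solution the normal equations \((A^TA)\vect{x}=A^T\vect{b}\) produce. Because the tabulated values above are rounded slightly differently from the matrix entries shown, the computed coefficients agree with the quoted ones only to a few decimal places.

```python
import numpy as np

xs = np.array([0.142, 0.645, 0.953, 2.958, 2.975, 3.167,
               3.413, 4.301, 5.552, 6.576, 7.957, 8.027])
ys = np.array([-10.867, 10.120, 8.172, 11.693, 18.931, 16.215,
               3.863, -7.971, -24.108, -31.217, 0.719, 9.550])

# 12x4 coefficient matrix with columns 1, x, x^2, x^3.
A = np.vander(xs, 4, increasing=True)

# Least-squares solution; equivalent to solving (A^T A) a = A^T y.
coeffs, *_ = np.linalg.lstsq(A, ys, rcond=None)
# coeffs holds [a_0, a_1, a_2, a_3], approximately [-17.7, 47.2, -16.1, 1.3]
```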

With other models, it may be necessary to rearrange the equation to “linearize” it. For example, if the relationship between \(x\) and \(y\) is exponential, say \(y=ae^{bx}\text{,}\) then applying the natural logarithm to both sides yields \(\log(y)=\log(a) + bx\text{.}\) Then, using the pairs \((x_i, \log(y_i))\text{,}\) a least-squares solution provides estimates of \(\log(a)\) and \(b\text{,}\) which are easily converted to estimates of \(a\) and \(b\text{.}\)
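A sketch of this linearization in Python with NumPy (the data here are synthetic, generated from known values of \(a\) and \(b\) purely for illustration; nothing in the text supplies exponential data):

```python
import numpy as np

# Synthetic data from y = a e^{bx} with a = 2.0, b = 0.5 (illustrative values).
a_true, b_true = 2.0, 0.5
xs = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
ys = a_true * np.exp(b_true * xs)

# Linearize: log(y) = log(a) + b x, then fit by least squares.
A = np.column_stack([np.ones_like(xs), xs])
(log_a, b), *_ = np.linalg.lstsq(A, np.log(ys), rcond=None)
a = np.exp(log_a)  # recover a from the estimate of log(a)
```

Since the synthetic data satisfy the model exactly, the fit recovers \(a\) and \(b\) to machine precision; with noisy experimental data the estimates would only approximate the true parameters.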