Numerical experiments, Tips, Tricks and Gotchas
Polynomial Tricks
Choosing appropriate polynomials for the problem
Introduction
Polynomials are just about the simplest mathematical functions that
exist, requiring only multiplications and additions for their evaluation.
Yet they also have the flexibility to represent very general nonlinear
relationships [1, 2]. Moreover, the Weierstrass approximation
theorem states that every continuous function defined on a closed
interval ${[}a,b{]}$ can be uniformly approximated as closely as desired
by a polynomial function [3]. Polynomials can be evaluated efficiently,
without actually raising their arguments to powers, by using Horner's
method [4]. However, additional tricks can further increase
the efficiency of the polynomial technique.
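For concreteness, here is a minimal sketch of Horner's method in Python (the function name and the coefficient ordering are my own choices):
```python
def horner(coeffs, x):
    """Evaluate a_0 + a_1*x + ... + a_N*x**N with only N multiplications.

    coeffs -- coefficients [a_0, a_1, ..., a_N], lowest order first
    """
    result = 0.0
    for a in reversed(coeffs):    # start from the highest-order coefficient
        result = result * x + a  # one multiply and one add per coefficient
    return result

# f(x) = 1 + 2x + 3x^2 at x = 2: 1 + 4 + 12 = 17
print(horner([1.0, 2.0, 3.0], 2.0))  # 17.0
```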
Choosing appropriate polynomials for the problem
A polynomial function is a sum of monomials, i.e., special polynomials
with only one nonzero coefficient
\begin{equation}
f(x)=\sum_{n=0}^{N}a_{n}x^{n}=\sum_{n=0}^{N}a_{n}M_{n}(x)\label{eq:fxm}
\end{equation}
where
\begin{equation}
M_{n}(x)=x^{n}\label{eq:mono}
\end{equation}
On the other hand, the same polynomial function can equally be expressed
in terms of any $N+1$ linearly independent polynomials of degree at
most $N$:
\begin{equation}
f(x)=\sum_{n=0}^{N}b_{n}P_{n}(x)\label{eq:fxp}
\end{equation}
One should try to find a polynomial basis that simplifies the problem.
This trick can simplify both the derivation and the use of a polynomial
approximation.
Example
Let's consider the power sum problem:
\begin{equation}
S_{n}(m)=\sum_{k=1}^{n}k^{m}\label{eq:snm1}
\end{equation}
For the first two values of $m$ the solutions are:
\begin{eqnarray*}
S_{n}(0) & = & n\\
S_{n}(1) & = & \frac{n(n+1)}{2}
\end{eqnarray*}
For an arbitrary $m$ we can estimate
\[
S_{n}(m)\approx\intop_{0}^{n}k^{m}dk=\frac{n^{m+1}}{m+1}
\]
This gives the leading term. Therefore we conclude that $S_{n}(m)$
must be a polynomial in $n$ of degree $m+1$. In particular,
\begin{equation}
S_{n}(2)=b_{0}+b_{1}\, n+b_{2}\, n^{2}+b_{3}\, n^{3}\label{eq:sn2}
\end{equation}
We can find the coefficients by equating (\ref{eq:sn2}) to the known
values of the sum for four values of $n$, e.g. $n=1,2,3,4$.
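Before doing so, a quick numerical check of the degree estimate above (a minimal sketch; `power_sum` is my own helper):
```python
# Sanity check of the leading term: S_n(m) / (n**(m+1)/(m+1)) -> 1 as n grows.
def power_sum(n, m):
    return sum(k**m for k in range(1, n + 1))

m = 2
for n in (10, 100, 1000):
    print(n, power_sum(n, m) * (m + 1) / n**(m + 1))
# Prints ratios 1.155, 1.015, 1.0015 -- approaching 1, so S_n(2) ~ n**3/3.
```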
A small trick
Note that for $m>0$ the summation in (\ref{eq:snm1}) can start with
$k=0$:
\begin{equation}
S_{n}(m)=\sum_{k=0}^{n}k^{m}\label{eq:snm}
\end{equation}
Therefore we can use the fact that $S_{0}(m)=0$ and take the smaller
values $n=0,1,2,3$.
The direct solution
The basis polynomials for (\ref{eq:sn2}) are
\begin{equation}
\vec{P}(n)\equiv\{P_{0}(n),P_{1}(n),P_{2}(n),P_{3}(n)\}=\{1,n,n^{2},n^{3}\}
\end{equation}
For $n=0,1,2,3$ we have
\begin{eqnarray*}
0 & = & b_{0}\\
1 & = & b_{0}+b_{1}+b_{2}+b_{3}\\
5 & = & b_{0}+2b_{1}+4b_{2}+8b_{3}\\
14 & = & b_{0}+3b_{1}+9b_{2}+27b_{3}
\end{eqnarray*}
This is a system of linear equations with a dense matrix ($(m+2)\times(m+2)$
in general, here $4\times4$). It can be solved, e.g., by $LU$ decomposition.
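A minimal sketch of this direct approach with NumPy (assuming NumPy is available; `numpy.linalg.solve` calls an LU-based LAPACK routine):
```python
import numpy as np

n_values = np.arange(4)                    # n = 0, 1, 2, 3
rhs = np.array([0.0, 1.0, 5.0, 14.0])      # S_n(2) at those n
# Dense Vandermonde matrix for the monomial basis {1, n, n^2, n^3}
A = np.vander(n_values, 4, increasing=True)
b = np.linalg.solve(A, rhs)
print(b)  # [0.  0.16666667  0.5  0.33333333], i.e. n/6 + n^2/2 + n^3/3
```
Note that the result is only a floating-point approximation of the exact coefficients $0,\,1/6,\,1/2,\,1/3$.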
Trick 1
Let's choose the following basis polynomials
\begin{equation}
\vec{P}(n)=\{1,\, n,\, n(n-1),\, n(n-1)(n-2)\}\label{eq:H}
\end{equation}
or
\begin{equation}
S_{n}(2)=b_{0}+b_{1}\, n+b_{2}\, n(n-1)+b_{3}\, n(n-1)(n-2)\label{eq:sn2-1}
\end{equation}
Now the matrix is already triangular
\begin{eqnarray*}
0 & = & b_{0}\\
1 & = & b_{0}+b_{1}\\
5 & = & b_{0}+2b_{1}+2b_{2}\\
14 & = & b_{0}+3b_{1}+6b_{2}+6b_{3}
\end{eqnarray*}
and the system can be easily solved by forward substitution
\begin{eqnarray*}
0 & = & b_{0}\\
1 & = & 0+b_{1}\\
3 & = & 0+0+2b_{2}\\
2 & = & 0+0+0+6b_{3}
\end{eqnarray*}
so that $b_{0}=0$, $b_{1}=1$, $b_{2}=3/2$, $b_{3}=1/3$. After some
algebra we get the well-known result:
\begin{equation}
S_{n}(2)=\frac{n(n+1)(2n+1)}{6}\label{eq:sn2-2}
\end{equation}
Note that expressions like (\ref{eq:sn2-1}) can be efficiently
evaluated using a Horner-type form:
\begin{equation}
S_{n}(2)=b_{0}+n\left\{ b_{1}+(n-1)\left[b_{2}+b_{3}(n-2)\right]\right\} \label{eq:sn2-3}
\end{equation}
Trick 2
Using Lagrange's approach [5] we can get the solution immediately.
In this case the basis polynomials are
\begin{equation}
\vec{P}(n)=\left\{ \frac{(n-1)(n-2)(n-3)}{(-1)\cdot(-2)\cdot(-3)},\,\frac{n(n-2)(n-3)}{1\cdot(-1)\cdot(-2)},\,\frac{n(n-1)(n-3)}{2\cdot1\cdot(-1)},\,\frac{n(n-1)(n-2)}{3\cdot2\cdot1}\right\} \label{eq:Lp}
\end{equation}
They satisfy the following relation
\begin{equation}
P_{i}(n)=\delta_{n,i},\quad n,i=0,1,2,3\label{eq:delta}
\end{equation}
Therefore $b_{n}=S_{n}(2)$, $n=0,1,2,3$, and finally
\begin{equation}
S_{n}(2)=0\cdot P_{0}(n)+1\cdot P_{1}(n)+5\cdot P_{2}(n)+14\cdot P_{3}(n)\label{eq:sn2-4}
\end{equation}
with $P_{i}(n)$ from (\ref{eq:Lp}). The drawback of this ``quick''
solution is that the polynomials (\ref{eq:Lp}) are computationally more expensive.
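For comparison, a sketch of Trick 2: evaluating Eq. (\ref{eq:sn2-4}) directly through the Lagrange basis (\ref{eq:Lp}), again with exact rationals (the function name is mine):
```python
from fractions import Fraction

nodes = (0, 1, 2, 3)
values = (0, 1, 5, 14)                 # S_n(2) at the nodes

def s2_lagrange(n):
    """Evaluate S_n(2) via the Lagrange basis polynomials (eq:Lp)."""
    total = Fraction(0)
    for i, xi in enumerate(nodes):
        basis = Fraction(1)            # build P_i(n)
        for j, xj in enumerate(nodes):
            if j != i:
                basis *= Fraction(n - xj, xi - xj)
        total += values[i] * basis
    return total

print(s2_lagrange(10))                 # 385 = 10*11*21/6
```
Each evaluation rebuilds the basis products from scratch, which is exactly the extra cost mentioned above.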
Sources of inspiration
The basis set (\ref{eq:Lp}) can be called the polynomial delta-function
because of Eq. (\ref{eq:delta}). The Heaviside step function [6]
is the analog for (\ref{eq:H}). The Chebyshev polynomials [7, 8, 9]
are the polynomial counterparts of harmonic functions. The analogy
with other elementary and special functions can be very fruitful.
In my article [10] I discuss several options for calculating smoothing
polynomials. The use of orthogonal polynomials is discussed in the
tutorial [11].
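The Chebyshev analogy can be made concrete: $T_{n}(\cos\theta)=\cos(n\theta)$, which a few lines of Python confirm (a sketch; the recurrence-based helper is my own):
```python
import math

def cheb_T(n, x):
    """Chebyshev polynomial T_n(x) via the recurrence T_{n+1} = 2x T_n - T_{n-1}."""
    t0, t1 = 1.0, x
    for _ in range(n):
        t0, t1 = t1, 2.0 * x * t1 - t0
    return t0

theta = 0.7
print(cheb_T(5, math.cos(theta)), math.cos(5 * theta))  # both ~ -0.9365
```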
References
1. Smyth, G. K. (2004). Polynomial approximation. In: Encyclopedia of Biostatistics, 2nd ed., Vol. 6, P. Armitage and T. Colton (eds.), Wiley, London, pp. 4136-4140.
2. Abramowitz, Milton; Stegun, Irene A., eds. (1972). Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables. New York: Dover Publications.
3. Wikipedia, Stone-Weierstrass theorem.
4. Wikipedia, Horner's method.
5. Wikipedia, Lagrange polynomial.
6. Wikipedia, Heaviside step function.
7. Wikipedia, Chebyshev polynomials.
8. C. Lanczos, Applied Analysis, Prentice Hall, Englewood Cliffs, N.J., 1956 (later reprints exist).
9. R. Hamming, Numerical Methods for Scientists and Engineers, Dover, 1987.
10. N. Shokhirev, Least squares polynomial approximation, www.numericalexpert.com, 2013.
11. N. Shokhirev, The power of orthogonality, www.numericalexpert.com, 2012.