# Polynomial fitting pitfalls

The dangers of power series

A common task in research is to represent trends in a dataset by a smooth analytic function. If a physical model for the trend exists, then the goal is to fit for the parameters of the model. In other situations, however, no good model is available, so one tries instead to represent the trend with a general functional form or a basis function expansion. Polynomials are a popular choice, partially thanks to their general familiarity and their role in Taylor series. What many practitioners might not realize, however, is that there is more than one way to represent a polynomial fit.

Intuition does us a disservice in this context. When we think of a polynomial, we almost invariably picture the form $${a}_{0}+{a}_{1}x+{a}_{2}{x}^{2}+\dots $$ In the context of fitting data, then, it seems natural to fit for the coefficients ${a}_{n}$, which can be done efficiently using linear least squares. Fitting in this basis can have serious consequences, though, especially if the order of the polynomial is high (which is likely precisely when you care about accuracy the most!). When implemented with finite-precision arithmetic, the power series form is very ill-conditioned (in fact, in the context of spectral methods, the power series basis gives rise to the notoriously ill-conditioned Hilbert matrix; see “Method of moments” in Boyd, §3.1). In practice, this means that small errors in the fitted coefficients can produce large errors when the polynomial is evaluated.
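The ill-conditioning is easy to see by comparing the condition numbers of the least-squares design matrices for the two bases. Here is a minimal sketch using NumPy; the sample grid and degree are illustrative assumptions, not taken from any particular fit:

```python
import numpy as np

# Illustrative assumptions: 100 equispaced samples on [-1, 1], degree 15
x = np.linspace(-1.0, 1.0, 100)
deg = 15

# Design matrix in the power series basis: columns are x^n
V_power = np.vander(x, deg + 1)

# Design matrix in the Chebyshev basis: columns are T_n(x)
V_cheb = np.polynomial.chebyshev.chebvander(x, deg)

cond_power = np.linalg.cond(V_power)  # grows exponentially with degree
cond_cheb = np.linalg.cond(V_cheb)    # stays modest
print(cond_power, cond_cheb)
```

The condition number of the power-basis matrix grows exponentially with the degree, while the Chebyshev-basis matrix remains well conditioned, which is exactly why the least-squares solution in the power basis is so fragile.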

So what are the alternatives? Fortunately, one has to give up neither the familiarity of polynomials nor the convenience of linear least squares. One must simply choose a different polynomial basis from the “power series” basis of ${x}^{n}$. In particular, orthogonal polynomials such as the Chebyshev and Legendre polynomials produce much more robust results. So instead of solving for the coefficients of ${x}^{n}$, we solve for the coefficients of, for example, the Chebyshev polynomials ${T}_{n}\left(x\right)$.
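Concretely, with NumPy this is nearly a one-line change. A sketch fitting synthetic data in the Chebyshev basis (the data, noise level, and degree here are assumptions for illustration):

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Synthetic data: a smooth trend plus a little noise (illustrative only)
rng = np.random.default_rng(0)
x = np.linspace(-1.0, 1.0, 200)
y = np.exp(x) + 0.01 * rng.standard_normal(x.size)

# Linear least squares for the coefficients of T_n(x) instead of x^n
c = C.chebfit(x, y, deg=7)

# Evaluate the Chebyshev series at the sample points
y_fit = C.chebval(x, c)
resid = np.max(np.abs(y_fit - y))  # residuals at roughly the noise level
```

The workflow is identical to an ordinary polynomial fit; only the basis in which the coefficients are expressed has changed.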

And lest you think this is of merely theoretical importance, here is an example from the astronomical literature. In 1996, Flower published a set of polynomial fits for determining intrinsic properties of stars from observations made on Earth. In particular, they presented a 7th-order fit for determining the effective temperature ($T_{\mathrm{eff}}$) of a star from its color ($B-V$). Unfortunately, you can’t use these fits as they appear in the original publication: not only are they riddled with typos, but the precision of the coefficients as printed is insufficient to faithfully reproduce the results, for the reason discussed above. Fortunately, Torres published corrections to these fits in 2007, enabling us to explore the robustness of an alternative polynomial representation.

For this comparison, I first computed the Chebyshev coefficients of this fitting polynomial (this was done symbolically, resulting in almost perfect agreement when the two forms are evaluated numerically; see figure above). When evaluating these polynomials, I used Horner’s method for the power series form and Clenshaw’s algorithm for the Chebyshev series. With these pieces in place, we can compare how the two representations fare in different circumstances. First, let’s examine how much accuracy is lost when the functions are evaluated in single-precision:
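For reference, both evaluation schemes are only a few lines each. Here is a hedged sketch of the two recurrences (not the exact code used for the figures; coefficient ordering is lowest degree first):

```python
def horner(a, x):
    """Evaluate a_0 + a_1*x + ... + a_n*x^n by Horner's method."""
    acc = 0.0
    for coef in reversed(a):  # start from the highest-order coefficient
        acc = acc * x + coef
    return acc

def clenshaw(c, x):
    """Evaluate c_0*T_0(x) + c_1*T_1(x) + ... + c_n*T_n(x) by
    Clenshaw's recurrence for Chebyshev polynomials."""
    b1 = b2 = 0.0
    for coef in reversed(c[1:]):  # backward recurrence b_k = c_k + 2x*b_{k+1} - b_{k+2}
        b1, b2 = 2.0 * x * b1 - b2 + coef, b1
    return x * b1 - b2 + c[0]
```

Both schemes avoid forming the (potentially huge) basis function values explicitly, which is part of what makes them numerically well behaved.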

While the Chebyshev form remains accurate over the whole domain, the power series form exhibits growing errors as $B-V$ moves farther from zero.

Next, let’s see what we get when the coefficients are truncated to 6 decimal places (the same precision as in the original paper, but without the typos).

Again, the magnitude of the error may be small, but it is poorly controlled as $B-V$ deviates from zero; the Chebyshev results, by contrast, remain robust across the domain under this perturbation of the coefficients.

Finally, let’s consider the situation where we would like to reduce the order of the fit. Intuitively, we expect high-order terms to be less important than low-order ones, since we expect the coefficients of a convergent series to approach zero as the order approaches infinity. However, dropping terms from a power series fit is disastrous away from $x=0$. The truncated Chebyshev series, on the other hand, remains a good approximation:
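The truncation experiment is easy to reproduce in miniature. In this sketch, Runge’s function on $[-1, 1]$ stands in for the $T_{\mathrm{eff}}$ fit (an assumption for illustration); we fit it once, express the same polynomial in both bases, and drop the two highest-order terms from each:

```python
import numpy as np
from numpy.polynomial import chebyshev as C, polynomial as P

x = np.linspace(-1.0, 1.0, 500)
y = 1.0 / (1.0 + 25.0 * x**2)  # Runge's function, a stand-in test case

c = C.chebfit(x, y, 15)  # Chebyshev coefficients of a degree-15 fit
p = C.cheb2poly(c)       # the *same* polynomial in the power basis

# Drop the two highest-order terms from each representation
err_cheb = np.max(np.abs(C.chebval(x, c[:-2]) - y))
err_power = np.max(np.abs(P.polyval(x, p[:-2]) - y))
print(err_cheb, err_power)
```

Because the high-order Chebyshev coefficients are genuinely small, dropping them barely changes the approximation; the high-order power series coefficients, by contrast, are large numbers that cancel against each other, so dropping them destroys the fit away from $x=0$.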

So the next time you’re considering a polynomial fit to represent a trend in your data, choose an orthogonal basis when performing linear least squares; the trustworthiness of your results depends on it.