Saturday, December 30, 2017

Approximating a function with Taylor series vs. continued fraction

A Taylor series is a polynomial that approximates a given (differentiable) function within a certain range (which, depending on the function in question, may extend indefinitely). The more terms in the polynomial, the more accurate the approximation. If the Taylor series is extended to infinity, it is theoretically exactly equal to the original function within its range of validity; in other words, the Taylor series converges towards the original function (with some exceptions).

For example, the first three terms of the Taylor series for the function sin(x) around x=0 are:

    sin(x) ≈ x - x^3/6 + x^5/120

In this graph the sin(x) function is drawn in orange, and the above polynomial in blue. As you can see, the polynomial matches the original function decently well for x values close to 0:
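The polynomial can also be checked numerically against a library implementation of sine. A minimal Python sketch (the function name is my own):

```python
import math

def sin_taylor3(x):
    """Three-term Taylor polynomial of sin(x) around x = 0."""
    return x - x**3/6 + x**5/120

for x in (0.1, 0.5, 1.0, 2.0):
    print(f"x = {x}: sin(x) = {math.sin(x):.9f}, taylor = {sin_taylor3(x):.9f}")
```

Near zero the two agree to several decimal places; the error grows as x moves away from 0.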


What is the point of this? Is this just math for fun and curiosity?

Polynomial approximations of more complex functions are actually very useful in many situations. When such approximations are enough for the task at hand, polynomials are often much easier to handle than the original function (e.g. when using these polynomials in physics problems requiring differentiation or integration, or other such applications).

Moreover, notice that these polynomials require only simple arithmetic operations to calculate: addition and multiplication (integer powers can be calculated as multiplications, and the divisions are merely multiplications by a constant). In many applications, such as in microprocessors, it's much easier to approximate things like trigonometric functions using only addition and multiplication, which are much easier to implement in hardware than actual trigonometric functions. (And even if trigonometric functions were implemented, they would probably still internally use an approximation similar to this.)
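Such a polynomial is typically evaluated with Horner's method (not mentioned above, but the standard technique), which rewrites it so that only one addition and one multiplication are needed per coefficient. A sketch in Python:

```python
def horner(coeffs, x):
    """Evaluate a polynomial using only additions and multiplications.
    coeffs are ordered from the highest power down to the constant term."""
    result = 0.0
    for c in coeffs:
        result = result * x + c
    return result

# sin(x) ~ (1/120)x^5 + 0x^4 - (1/6)x^3 + 0x^2 + 1x + 0
print(horner([1/120, 0.0, -1/6, 0.0, 1.0, 0.0], 0.5))
```

For a degree-5 polynomial this costs five multiplications and five additions, no matter how the coefficients were obtained.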

Note that with functions like sin(x) we can keep adding terms to the Taylor series in order to get better and better approximations around x=0, for an increasingly larger range. (It's thus only a question of how much accuracy is required for the application at hand, and how much time we are willing to spend adding and multiplying more and more terms.)

The Taylor series is indeed often used, and cited, as a good way to approximate such functions.

However, this doesn't mean that it's the best possible approximation that can be done with arithmetic operations alone! Sometimes there are other arithmetic expressions that approximate the desired function much better, with far fewer operations required.

For example, the first three terms of the Taylor series of the function tan(x) around x=0 are:

    tan(x) ≈ x + x^3/3 + 2x^5/15

The tan(x) function can also be approximated using a continued fraction. Its first three terms are:

    tan(x) ≈ x / (1 - x^2 / (3 - x^2/5))

It looks slightly more complicated, but it still requires only simple arithmetic operations (although in this case actual division needs to be added to the set of required operations).
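One way to evaluate such a continued fraction is to start from the innermost denominator and work outward. A Python sketch (the function name is mine), assuming the standard continued fraction for tan(x) whose partial denominators are the odd numbers 1, 3, 5, ...:

```python
def tan_contfrac(x, n_terms=3):
    """Evaluate tan(x) ~ x / (1 - x^2 / (3 - x^2 / (5 - ...)))
    truncated to n_terms levels, working from the innermost term outward."""
    x2 = x * x
    value = float(2 * n_terms - 1)   # innermost denominator: the n-th odd number
    for k in range(n_terms - 1, 0, -1):
        value = (2 * k - 1) - x2 / value
    return x / value
```

Note that each additional term costs only one extra subtraction and one extra division.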

How do these two expressions compare in terms of accuracy? In the following graph the tan(x) function is drawn in red, the polynomial expression above in blue, and the continued fraction expression in green:


As you can see, while the polynomial (blue line) deviates from the function quite clearly, the continued fraction (green line) is so close to it that it covers it entirely in this image, over the relevant range from -pi/2 to pi/2. And that is using only the first three terms!

(Also, curiously, and potentially usefully, while the polynomial is valid only within the range -pi/2 to pi/2, the continued fraction continues to approximate the function outside this range. In fact, if we add more and more terms to it, it will approximate the tan(x) function over a wider and wider range.)

How well do the two methods approximate the function in terms of actual values? Here are some numbers:

x = 0.00, tan(x) = 0.000000000000, taylor = 0.000000000000, contf = 0.000000000000
x = 0.10, tan(x) = 0.100334672085, taylor = 0.100334666667, contf = 0.100334672021
x = 0.25, tan(x) = 0.255341921221, taylor = 0.255338541667, contf = 0.255341880342
x = 0.50, tan(x) = 0.546302489844, taylor = 0.545833333333, contf = 0.546296296296
x = 0.75, tan(x) = 0.931596459944, taylor = 0.922265625000, contf = 0.931451612903
x = 1.00, tan(x) = 1.557407724655, taylor = 1.466666666667, contf = 1.555555555556
x = 1.25, tan(x) = 3.009569673863, taylor = 2.307942708333, contf = 2.986111111111
x = 1.50, tan(x) = 14.101419947172, taylor = 3.637500000000, contf = 12.750000000000

As seen from these values, even with just three terms the continued fraction approximates the function much better. For example, at x=0.75 the Taylor series (with three terms) is only accurate to one decimal place, while the continued fraction is accurate to three.

As mentioned earlier, to improve the accuracy of either method, we can add more terms to them. Let's try adding just one more term to both, in other words:

    tan(x) ≈ x + x^3/3 + 2x^5/15 + 17x^7/315

and

    tan(x) ≈ x / (1 - x^2 / (3 - x^2 / (5 - x^2/7)))

How well do they compare now?

x = 0.00, tan(x) = 0.000000000000, taylor = 0.000000000000, contf = 0.000000000000
x = 0.10, tan(x) = 0.100334672085, taylor = 0.100334672063, contf = 0.100334672085
x = 0.25, tan(x) = 0.255341921221, taylor = 0.255341835627, contf = 0.255341921180
x = 0.50, tan(x) = 0.546302489844, taylor = 0.546254960317, contf = 0.546302465023
x = 0.75, tan(x) = 0.931596459944, taylor = 0.929469517299, contf = 0.931595136956
x = 1.00, tan(x) = 1.557407724655, taylor = 1.520634920635, contf = 1.557377049180
x = 1.25, tan(x) = 3.009569673863, taylor = 2.565283396887, contf = 3.008942661757
x = 1.50, tan(x) = 14.101419947172, taylor = 4.559598214286, contf = 14.042553191489

The Taylor series does not show a lot of improvement, but the continued fraction improved quite significantly. For example, at the x=0.75 value we looked at before, the Taylor series is still only accurate to one decimal place, while the continued fraction is now accurate to 5 decimal places.

Even at the rather extreme value x=1.5, the continued fraction became accurate to two significant digits (while the Taylor series is still far, far off).
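The convergence of both methods can be explored numerically with small helper functions; a Python sketch (function names are mine; 17/315 and 62/2835 are the standard next Taylor coefficients of tan(x)):

```python
import math

# Taylor series: tan(x) ~ x + x^3/3 + 2x^5/15 + 17x^7/315 + 62x^9/2835 + ...
TAN_COEFFS = [1.0, 1/3, 2/15, 17/315, 62/2835]

def tan_taylor(x, n_terms):
    """Sum the first n_terms terms of the tan(x) Taylor series."""
    return sum(c * x**(2*k + 1) for k, c in enumerate(TAN_COEFFS[:n_terms]))

def tan_contfrac(x, n_terms):
    """Truncated continued fraction tan(x) ~ x/(1 - x^2/(3 - x^2/(5 - ...)))."""
    x2 = x * x
    value = float(2 * n_terms - 1)
    for k in range(n_terms - 1, 0, -1):
        value = (2 * k - 1) - x2 / value
    return x / value

x = 1.5
for n in range(3, 6):
    print(n, tan_taylor(x, n), tan_contfrac(x, n), math.tan(x))
```

With each added term, the continued fraction closes in on tan(1.5) far faster than the Taylor series does.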

While the Taylor series is usually presented as the archetypal method for approximating functions, it's not always the method that does so in the most efficient way. Although, to be fair, the continued fraction method requires actual divisions (which the Taylor series does not need), which may or may not be a problem depending on the application.