## Sunday, June 22, 2008

### Taking N to the limit

A simple numerical algorithm to compute lim(x→a) f(x) in the case when f(a) is undefined is to evaluate f(a+ε) for some fixed small value ε (or f(1/ε) in case a = ∞). The expression f(a+ε) or f(1/ε) is typically extremely poorly conditioned and hence a challenge for a numerical evaluation routine.
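
The idea can be sketched in a few lines. This toy version (not the actual nlimit implementation) uses exact rational arithmetic, which sidesteps the conditioning problem for rational functions; the real algorithm relies on adaptive binary precision instead:

```python
from fractions import Fraction

def nlimit_sketch(f, a, eps=Fraction(1, 10**50)):
    # Approximate lim_{x -> a} f(x) by evaluating f at a + eps.
    # Exact Fractions avoid ill-conditioning for this rational example;
    # for transcendental f one needs adaptive-precision floats instead.
    return f(a + eps)

# lim_{x -> 1} (x^2 - 1)/(x - 1) = 2, even though f(1) itself is 0/0.
val = nlimit_sketch(lambda x: (x*x - 1)/(x - 1), Fraction(1))
print(float(val))  # 2.0
```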

As a stress test for N, I have tried numerically evaluating all the limits in test_demidovich.py, a set of tests for SymPy's symbolic limit function.

I chose to evaluate each limit to 10 accurate digits, using ε = 10^-50. Some simplifications were necessary:

• Since the numerical limit algorithm described above cannot generally detect convergence to 0 or ∞ (giving pseudorandom tiny or huge values instead), I chose to interpret any magnitude outside the range 10^-10 to 10^10 as 0 or ∞.

• SymPy's limit function supports limits containing parameters, such as lim(x→0) (cos(mx)-cos(nx))/x^2 = (n^2-m^2)/2. In all such cases, I replaced the parameters with arbitrary values.
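
The magnitude-threshold rule from the first simplification is easy to state in code (the function name and cutoff arguments here are mine, not nlimit's API):

```python
def classify(value, tiny=1e-10, huge=1e10):
    # Interpret a raw numerical limit value: magnitudes below ``tiny``
    # are taken to mean convergence to 0, magnitudes above ``huge``
    # convergence to infinity; anything in between is kept as-is.
    mag = abs(value)
    if mag < tiny:
        return 0
    if mag > huge:
        return float('inf')
    return value
```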

The nlimit function with tests is available in the file limtest.py (requiring evalf.py, mpmath and the hg version of SymPy). A straightforward limit evaluation looks like this:

```
>>> nlimit(sin(x)/(3*x), x, 0, 10)
'0.3333333333'
```

The results of the tests? After fixing two minor bugs in N that the tests exposed, nlimit passes 50 out of 53 tests. The three failures all involve the functions log and asin, which are not yet implemented. SymPy's limit function fails 8 of the 53 tests; in each of those cases, nlimit gives the correct value to 10 digits.

nlimit is 100 times faster than limit, processing all the test cases in 0.18 seconds (versus 18 seconds).

Despite only 10 digits being requested, running N with verbose=True shows that upwards of 800 bits of working precision are required for some of the limits, indicating very clearly the need for adaptive numerical evaluation.

The heuristic of using a fixed, finite ε will not work when a limit converges extremely slowly. And of course, limit gives a nice, symbolic expression instead of an approximation (nlimit could give an exact answer in simple cases by passing its output through the number recognition functions in mpmath). Due to these limitations, a numerical limit algorithm is at best a complement to a symbolic algorithm. The point, at this moment, is just to test N, although providing a numerical limit function in SymPy would also be a good idea.

I should note that there are much more sophisticated algorithms for numerical limits than the brute force method described here. Such algorithms are necessary to use especially when evaluating limits of indexed sequences where each element is expensive to compute (e.g. for summation of infinite series). A few acceleration methods for sequences and series are available in mpmath.

## Friday, June 20, 2008

### How many digits would you like?

My last post discussed the implementation of a number type that tracks the propagation of initial numerical uncertainties under arithmetic operations. I have now begun implementing a function that in some sense does the reverse; given a fixed formula and desired final accuracy, it produces a numerical value through recursive evaluation. I've named the function N because it behaves much like Mathematica's function with the same name.

The file is available here (the code needs a lot of cleanup at this point, so please be considerate). It contains a small test suite that should pass if you try running it.

The input to N can be a SymPy expression or a string representing one. For simplicity, the returned value is currently just a string. The second argument to N specifies the desired precision as a number of base-10 digits:

```
>>> from sympy import *
>>> from evalf import N
>>> N(pi,30)
'3.14159265358979323846264338328'
>>> N('355/113',30)
'3.14159292035398230088495575221'
```

The set of supported expressions is currently somewhat limited; examples of what does work will be given below.

As I have said before, an important motivation for an adaptive algorithm for numerical evaluation is to distinguish integers from non-integers (or more simply, distinguishing nonzero numbers from zero). Numerical evaluation is, as far as I know, the only general method to evaluate functions such as x ≥ y, sign(x), abs(x) and floor(x). Due to the discontinuous nature of these functions, a tiny numerical error can cause a drastically wrong result if undetected, leading to complete nonsense in symbolic simplifications.

There are many known examples of "high-precision fraud", i.e. cases where an expression appears to be identical to another if evaluated to low numerical precision, but where there is in fact a small (and sometimes theoretically important) difference. See for example MathWorld's article, "Almost Integer". Some of these examples are rather conspicuous (e.g. any construction involving the floor function), but others are surprising and even involve elegant mathematical theory. In any case, they are a great way to test that the numerical evaluation works as intended.

#### Some algebraic examples

A neat way to derive almost-integers is based on Binet's formula for the Fibonacci numbers, F(n) = (φ^n - (-φ)^-n)/√5, where φ is the golden ratio (1+√5)/2. The (-φ)^-n term decreases exponentially in magnitude as n grows, meaning that φ^n/√5 alone is an excellent (although not exact) approximation of F(n). How good? Let's compute φ^n/√5 - F(n) for a few n (we can use SymPy's exact Fibonacci number function fibonacci(n) to make sure no symbolic simplification accidentally occurs):

```
>>> binet = lambda n: ((1+sqrt(5))/2)**n/sqrt(5)
>>> N(binet(10) - fibonacci(10), 10)
'3.636123247e-3'
>>> N(binet(100) - fibonacci(100), 10)
'5.646131293e-22'
>>> N(binet(1000) - fibonacci(1000), 10)
'4.60123853e-210'
>>> N(binet(10000) - fibonacci(10000), 10)
'5.944461218e-2091'
```

N works much better than the current fixed-precision evalf in SymPy:

```
>>> (fibonacci(1000) - binet(1000)).evalf()
-1.46910587887435e+195
```

With N, we find that the simplified Binet formula not only gives the correct Fibonacci number to the nearest integer; for F(10000), you have to look 2000 digits beyond the decimal point to spot the difference. A more direct approach, of course, is to simply evaluate the (-φ)n term; the beauty of the implementation of N is that it works automatically, and it will still work in case there are a hundred terms contributing to cancel each other out (a much harder situation for a human to analyze).

Another, related, well-known result is that F(n+1)/F(n) is a close approximation of the golden ratio. To see how close, we can just compute the difference:

```
>>> N(fibonacci(10001)/fibonacci(10000) - (1+sqrt(5))/2, 10)
'3.950754128e-4180'
>>> N(fibonacci(10002)/fibonacci(10001) - (1+sqrt(5))/2, 10)
'-1.509053796e-4180'
```

The approximation is good to over 4000 digits. Note also the signs; based on the numerical results, we could compute the exact value of the function sign(F(n+1)/F(n) - φ) for any specific value of n (and find that it is positive for odd n and negative for even n). Indeed, I will later implement the sign function (and related functions) in SymPy precisely this way: just try to call N() asking for 10 digits (or 3, it doesn't make much of a difference), and use the sign of the computed result if no error occurs.

Let's also revisit Rump's example of an ill-conditioned function, which was mentioned in my previous blog post. I have given N the ability to substitute numerical values for symbols (this required roughly two lines of code), in effect allowing it to be used for function evaluation. When asked for 15 digits, N gives the correct value right away:

```
>>> var('x y')
>>> a = 1335*y**6/4+x**2*(11*x**2*y**2-y**6-121*y**4-2) + \
...     11*y**8/2+x/(2*y)
>>> N(a, 15, subs={x:77617, y:33096})
'-0.827396059946821'
```

With the "verbose" flag set, N shows that it encounters cancellations during the addition and has to restart twice:

```
>>> N(a, 15, subs={x:77617, y:33096}, verbose=1)
ADD: wanted 54 accurate bits, got -7 -- restarting with prec 115
ADD: wanted 54 accurate bits, got 2 -- restarting with prec 167
'-0.827396059946821'
```

#### Transcendental functions

N currently supports the constants π and e, and the functions x^y, exp, cos and sin. I refer to my previous post for a discussion of the issues involved in (real) exponentiation. Suffice it to say, N figures out that in order to compute 10 mantissa digits of π to the power of 1 googol, it needs 110 digits of precision:

```
>>> N(pi ** (10**100), 10)
'4.946362032e+4971498726941338543512682882908988736516783243804424461340534999249471120895526746555473864642912223'
```

It is also able to cope with cancellation of exponentials close to unity:

```
>>> N('2**(1/10**50) - 2**(-1/10**50)',15)
'1.38629436111989e-50'
```

The trigonometric functions are a bit more interesting. Basically, to compute cos(x) or sin(x) to n accurate digits, you need to first evaluate x with an absolute error of 10-n. In order to calculate x to within a given absolute error, the magnitude of x must be known first, so two evaluations are generally required. N avoids the problem by evaluating x to a few extra bits the first time; if it turns out that |x| < C for the appropriate constant (say C = 1000), a second evaluation is not necessary. By appropriately increasing the internal precision, correct evaluations such as the following are possible:

```
>>> N('sin(exp(1000))',15)
'-0.906874170721915'
```
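
A back-of-the-envelope estimate shows why an evaluation like sin(exp(1000)) needs so much working precision (the digit counts below are my own rough arithmetic, not N's internal bookkeeping):

```python
import math

# To get sin(x) to 15 digits, x must be known to an absolute error of
# about 10**-15. Here x = exp(1000) has magnitude ~10**434, so x itself
# must be evaluated to roughly 434 + 15 significant digits.
magnitude_digits = int(1000 / math.log(10))  # decimal exponent of exp(1000)
needed_digits = magnitude_digits + 15
print(needed_digits)  # 449
```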

There is an additional complication associated with evaluating trigonometric functions. If the argument is very close to a root (i.e. a multiple of π for sin, or offset by π/2 for cos), the precision must be increased further. N detects when this is necessary, and is for example able to deduce the following:

```
>>> N(sin(pi*10**1000 + Rational(1,10**1000), evaluate=False), 10)
'1.0e-1000'
```

The test shows that there is no difference between evaluating sin(2πn + x) and sin(x), except of course for speed. The evaluate=False was added to prevent SymPy from removing the full-period term π · 10^1000. This automatic simplification is of course a SymPy feature; indeed, it makes the cleverness in N redundant in many cases by automatically reducing the argument to a region close to zero. However, the symbolic simplification is not of much help when x happens to be close to a multiple of 2π without having that explicit symbolic form. To demonstrate, let's combine a trigonometric function with the Fibonacci number approximation from before:

```
>>> phi = (1+sqrt(5))/2
>>> N(sin(phi**3000 / sqrt(5) * pi), 15)
'1.53018563496763e-627'
```

Running with verbose=True shows that N sets the working precision to over 6000 bits before it arrives at those 15 digits.

#### The problem with zero

I have so far neglected to mention the issue of zero detection. Although adaptive numerical evaluation can identify a nonzero value in finite time, it cannot detect a zero.

Suppose we try to compute the difference between the explicit Fibonacci number F(n) and the expression for the same in terms of Binet's exact formula (φ^n - (-φ)^-n)/√5. N(...,10) will in effect attempt to find 10 nonzero digits of 0, and of course fail, getting stuck in an infinite loop:

```
>>> phi = (1+sqrt(5))/2
>>> binet = lambda n: (phi**n - (-phi)**(-n))/sqrt(5)
>>> N(binet(100) - fibonacci(100), 10, verbose=1)
ADD: wanted 56 accurate bits, got -1 -- restarting with prec 113
ADD: wanted 56 accurate bits, got 0 -- restarting with prec 169
ADD: wanted 56 accurate bits, got -3 -- restarting with prec 228
...
ADD: wanted 56 accurate bits, got 0 -- restarting with prec 524753
ADD: wanted 56 accurate bits, got -1 -- restarting with prec 1049051
...
```

To deal with this problem, it will be necessary to set a threshold precision or maximum number of iterations, with a reasonable default value. This number should be possible to override by the user, either globally or by providing a keyword argument to N (perhaps both).
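
The resulting control flow might look roughly like this (a sketch under my own naming; the real N works on SymPy expressions rather than a callback):

```python
def adaptive_eval(evaluate, target_bits, maxprec=10000):
    # evaluate(prec) is assumed to return (value, accurate_bits).
    # Double the working precision until enough bits are accurate,
    # giving up at maxprec -- at which point the value may simply be 0.
    prec = target_bits + 10
    while prec <= maxprec:
        value, accurate_bits = evaluate(prec)
        if accurate_bits >= target_bits:
            return value
        prec *= 2
    raise ValueError("precision limit reached; value may be zero")

# A fake evaluator that becomes accurate once prec is large enough:
print(adaptive_eval(lambda prec: (1.0, prec - 150), 53))  # 1.0
```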

There are several possible courses of action in case the threshold is reached. A cheap and practical solution is to simply consider any smaller quantity to be zero. This can be done either silently or with a printed warning message. Rigorists would perhaps find it more satisfactory if an exception were raised. A final possibility is to prompt the user. I think all these options should be available; the question is what to do by default.

Fortunately, in a computer algebra system, most cancellations can be detected and turned into explicit zeros before they reach the N function (1-1, sin(πn), etc). I am not entirely sure about the terminology here, but I think this ability to symbolically detect zeros is the essential difference between computer algebra systems and what is sometimes called "lazy infinite-precision reals" (or something similar).

#### Complex numbers

Currently, N does not know how to deal with complex numbers (I have so far only written some placeholder code for this).

Addition should be relatively easy to implement: just add the real and imaginary parts separately and check the accuracy of each.

Multiplication is the simplest of all operations in the purely real case, because there are no cancellation effects whatsoever; all that is needed is a few guard bits to deal with rounding. In fact, multiplying purely real and purely imaginary quantities already works (this is just a matter of keeping an extra boolean variable around to keep track of whether the product is imaginary; in effect a pseudo-polar representation):

```
>>> N('3*pi*I',10)
'9.424777961*I'
```

With general complex numbers, however, multiplication in rectangular form translates into addition, and I think cancellation effects may come into play so that it will be a little more complicated to implement correctly. For multiplication, it would be much nicer to use polar complex numbers, but that makes addition harder. There's just no escape...

One thing I'm wondering about is how to define accuracy for complex numbers. One could either consider the accuracy of the number as a whole, or of the real and imaginary parts separately.

It is very common to encounter sums of complex numbers with conjugate imaginary parts, i.e. (a+bi) + (c-bi). What should N do if it obtains the requested number of digits, say 15, for the real part, but is unable to deduce anything about the imaginary part except that it is smaller than 10^-15 times the real part? By the definition of relative error as z·(1+error), N should arguably be satisfied with that. But matters become more complicated if the number is subsequently passed to a function that is very sensitive to changes in the imaginary part alone (in the extreme case, the imaginary part function Im(z)).

#### Performance

What about speed? In the best case (i.e. no restarts), N seems to be about as fast as direct evaluation with mpmath. This might be surprising, since N both keeps track of errors and manually traverses the expression tree. I had actually expected N to come out faster, since much of the need for instance creation and floating-point normalization is eliminated, but it turns out that a good deal of that is still needed and the additional error bookkeeping largely makes up for the remaining advantage.

There are some potential optimizations that could be exploited. One would be to take advantage of the TERMS/FACTORS representation employed in SympyCore. In a sum like a·x + b·y + ... where the coefficients a, b, ... are integers or simple fractions (a very common situation), the coefficients can be absorbed into the sum on the fly instead of recursively evaluating each term as a full product. Another optimization would be to save the magnitude (exponent + width of mantissa) and sign of each term in the numerical evaluation of sums. This way, an optimal precision can be chosen for each term in case the evaluation has to be restarted at higher precision.

#### Project status

This completes an important part of my GSoC project, namely to implement a reliable numerical expression evaluation algorithm. What remains now is first of all to add support for complex numbers and more mathematical functions, and to provide alternatives to the infinite loop for dealing with zeros. The code should also work with something like the num class I posted previously; N currently assumes that expressions are exact and can be approximated to any accuracy, but it should also be able to accommodate expressions initially containing approximate numbers. And of course, it all has to be implemented in SymPy and debugged.

Post scriptum: it seems that I forgot to provide an example of applying N to Ramanujan's constant. I'll leave this as an entertaining exercise to the reader.

## Wednesday, June 11, 2008

### Basic implementation of significance arithmetic

This .py file contains a work-in-progress implementation of significance arithmetic, using mpmath as a base. The main class is currently called "num" (sorry, I just haven't come up with a good name yet). An instance can be created from a float or int value. A second number that specifies the accuracy (measured in decimal digits) can be passed; floats are assumed by default to have an accuracy of 53 bits or 15.95 decimal digits. The method .accuracy(b) gives the estimated number of accurate digits in the given base (default b = 10).

```
>>> num(1.2345678901234567890123456789)
1.23456789012346
>>> _.accuracy()
15.954589770191001
>>> num(1.2345678901234567890123456789, 5)
1.2346
>>> _.accuracy()
5.0
>>> num(1.2345678901234567890123456789, 25)
1.234567890123456690432135
>>> _.accuracy()
25.0
```

In the last example, the fact that the input float has a limited accuracy as a representation of the entered decimal literal becomes visible. (Support for passing an exact string as input to avoid this problem will be added.)

The accuracy is a measure of relative accuracy (or relative error). A num instance with value u and accuracy a represents the interval u · (1 + ξ·2^-a) for values of ξ between -1 and 1. The relative error, in the traditional numerical analysis meaning of the term, is given by 2^-a and the absolute error is given by |u| · 2^-a. In other words, the accuracy is a logarithmic (base 2) measure of error; in some cases, it is more natural to consider the (logarithmic) error, given by -a.
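
Concretely, the interval represented by a (value, accuracy) pair can be written out as follows (a two-line sketch, with names of my own choosing):

```python
def as_interval(u, a):
    # A num with value u and accuracy a (in bits) represents
    # u * (1 + xi * 2**-a) for xi in [-1, 1]; return the endpoints.
    radius = abs(u) * 2.0**-a
    return (u - radius, u + radius)

print(as_interval(1.0, 1))  # (0.5, 1.5)
```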

It can usually be assumed that a > 0. When this property does not hold, not even the sign of the number it is supposed to represent is known (unless additional assumptions are made). Such extremely inaccurate numbers will need special treatment in various places; when they result from higher operations (such as when the user asks for N digits of an expression in SymPy), they should probably signal errors.

How do errors propagate? The simplest case is that of multiplication. If x = u · (1+e) and y = v · (1+f), then x · y = u · v · (1+e) · (1+f) = (u · v) · (1 + e + f + e·f). Therefore the cumulative error is given by e + f + e·f. Expressed in terms of the logarithmic representation of the errors, 2^-a = e and 2^-b = f, the final accuracy is given by c = -log2(2^-a + 2^-b + 2^-(a+b)). This expression can usually be approximated as c = min(a,b), which is the rule I have implemented so far. (Further testing will show whether it needs to be refined.)
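
The exact expression and its min(a,b) approximation are easy to compare numerically (my own helper, not num's internal code):

```python
import math

def mul_accuracy(a, b):
    # Combined accuracy of a product: c = -log2(2**-a + 2**-b + 2**-(a+b)).
    # When a and b differ substantially this is close to min(a, b); when
    # a == b it is about min(a, b) - 1, so min(a, b) is a reasonable
    # first-order rule.
    return -math.log2(2.0**-a + 2.0**-b + 2.0**-(a + b))

print(mul_accuracy(10, 30))  # just under 10
```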

This analysis does not account for the error due to roundoff in the floating-point product u · v. Such errors can be accounted for either by adding an extra term to the error sum or by setting the arithmetic precision slightly higher than the estimated accuracy. I'm currently taking the latter approach.

Division is similar to multiplication.

Addition and subtraction are a bit harder, as these operations need to translate the relative errors of all the terms into a combined absolute error, and then translate that absolute error back into a relative accuracy (of the final sum). It is important to note that the accuracy of a sum can be much greater or much smaller than the minimum accuracy of all the terms.

Translating between relative and absolute error associated with a number involves knowing its magnitude, or rather the base-2 logarithm thereof. This is very easy with mpmath numbers, which are represented as tuples x = (sign, man, exp, bc) where bc is the bit size of the mantissa. The exact magnitude of x is given by log2 x = log2 man + exp, and this quantity can be approximated closely as exp+bc (although doing so admittedly does not pay off much since a call to math.log is inexpensive in Python terms).
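
In code, the bound reads as follows (a sketch; mpmath's actual tuples carry the bit count bc precomputed):

```python
def magnitude_bound(man, exp):
    # For x = man * 2**exp with man > 0, log2(x) = log2(man) + exp.
    # Since 2**(bc-1) <= man < 2**bc for bc = man.bit_length(),
    # exp + bc overestimates log2(x) by at most one bit.
    return exp + man.bit_length()

print(magnitude_bound(5, -1))  # 2, while log2(5 * 2**-1) = log2(2.5) ~ 1.32
```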

Perhaps the most important type of error that significance arithmetic catches is loss of significance due to cancellation of terms with like magnitude but different sign. For example, 355/113 is an excellent approximation of pi, with an error of less than one part in a million. Subtracting the numbers from each other, with each given an initial accuracy of ~15 digits, leaves a number with only 9 accurate digits:

```
>>> pi = num(3.1415926535897932)
>>> 355./113 - pi
2.66764189e-7
>>> _.accuracy()
9.0308998699194341
```

Compare this with what happens when adding the terms:

```
>>> 355./113 + pi
6.28318557394378
>>> _.accuracy()
15.954589770191001
```

Adding an inaccurate number to an accurate number does not greatly reduce accuracy if the less accurate number has small magnitude:

```
>>> pi + num(1e-12,2)
3.1415926535908
>>> _.accuracy()
14.342229822223226
```

Total cancellation currently raises an exception. Significance arithmetic requires special treatment of numbers with value 0, because the relative error of 0 is meaningless; instead some form of absolute error has to be used.

I have also implemented real-valued exponentiation (x**y, exp(x) and sqrt(x)). These operations are currently a little buggy, but the basics work.

Exponentiation reduces accuracy proportionally to the magnitude of the exponent. For example, exponentiation by 10^10 removes 10 digits of accuracy:
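
This is just first-order error propagation: if y = x^k, then dy/y = k·dx/x, so the relative error is amplified by a factor k and log10(k) decimal digits are lost. A quick check of the arithmetic (my own, matching the 15.95 → 5.95 drop in the example below):

```python
import math

initial_accuracy = 15.954589770191001   # digits in the input operand
digits_lost = math.log10(10**10)        # relative error amplified by 10**10
print(initial_accuracy - digits_lost)   # ~5.9546
```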

```
>>> from devel.num import *
>>> pi = num(3.1415926535897932)
>>> pi.accuracy()
15.954589770191001
>>> pi ** (10**10)
8.7365e+4971498726
>>> _.accuracy()
5.954589770191002
```

Contrast this with the behavior of mpmath. With the working precision set to 15 digits, it will print 15 (plus two or three guard) digits of mantissa. Re-computing at a higher precision however verifies that only the first five digits were correct:

```
>>> from mpmath import mp, pi
>>> mp.dps = 15
>>> pi ** (10**10)
mpf('8.7365179634758897e+4971498726')
>>> mp.dps = 35
>>> pi ** (10**10)
mpf('8.7365213691213779435688568850099288527e+4971498726')
```

One might wonder where all the information in the input operand has gone. Powering is of course implemented using binary exponentiation, so the rounding errors from repeated multiplications are insignificant.

The answer is that exponentiation transfers information between mantissa and exponent (contrast with a single floating-point multiplication, which works by separately multiplying mantissas and adding exponents). So to speak, exponentiation by 10^n moves n digits of the mantissa into the exponent and then fiddles a little with whatever remains of the mantissa. Logarithms do the reverse, moving information from the exponent to the mantissa.

This is not something you have to think of often in ordinary floating-point arithmetic, because the exponent of a number is limited to a few bits and anything larger is an overflow. But when using mpmath numbers, exponents are arbitrary-precision integers, treated as exact. If you compute pi raised to 1 googol, you get:

```
>>> num(3.1415926535897932) ** (10**100)
1.0e+4971498726941338374217231245523794013693982835177254302984571553024909360917306608664397050705368816
>>> _.accuracy()
-84.045410229808965
```

The exponent, although printed as if accurate to 100 digits, is only accurate to 15. Although the accuracy is reported as negative, the number does have a positive "logarithmic accuracy". So in contexts where extremely large numbers are used, some extra care is needed.

A counterintuitive property of arithmetic, that a direct implementation of precision-tracking floating-point arithmetic fails to capture, is that some operations increase accuracy. Significance arithmetic can recognize these cases and automatically adjust the precision to ensure that no information is lost. For example, although each input operand is accurate to only 15 decimal places, the result of the following operation is accurate to 65:

```
>>> num(2.0) ** num(1e-50)
1.0000000000000000000000000000000000000000000000000069314718055995
>>> _.accuracy()
65.783713593702743
```

This permits one to do things like

```
>>> pi = num(3.1415926535897932)
>>> (pi / (exp(pi/10**50) - 1)) / 10**50
1.0
>>> _.accuracy()
15.65355977452702
```

which in ordinary FP or interval arithmetic have a tendency to cause divisions by zero or catastrophic loss of significance, unless the precision is manually set high enough (here 65 digits) from the start.

Automatically increasing precision is of course a bit dangerous, since a calculation can become unexpectedly slow in case the precision is increased to a level much higher than will be needed subsequently (e.g. when computing 15 digits of exp(10^-100000) rather than exp(10^-100000)-1). This feature therefore needs to be combined with some user-settable precision limit.

I'll finish this post with a neat example of why significance or interval arithmetic is important for reliable numerical evaluation. The example is due to Siegfried M. Rump and discussed further in the paper by Sofroniou and Spaletta mentioned in an earlier post. The problem is to evaluate the following function for x = 77617 and y = 33096:

```
def f(x,y):
    return 1335*y**6/4 + x**2*(11*x**2*y**2-y**6-121*y**4-2) + \
        11*y**8/2 + x/(2*y)
```

Directly with mpmath, we get (at various levels of precision):

```
>>> from mpmath import mp, mpf, nstr
>>> for i in range(2,51):
...     mp.dps = i
...     print i, nstr(f(mpf(77617),mpf(33096)),50)
...
2 1.171875
3 2596148429267413814265248164610048.0
4 1.172607421875
5 1.172603607177734375
6 1.1726038455963134765625
7 1.1726039350032806396484375
8 -9903520314283042199192993792.0
9 -1237940039285380274899124224.0
10 -154742504910672534362390528.0
11 9671406556917033397649408.0
12 1.17260394005324997124262154102325439453125
13 75557863725914323419136.0
14 -9444732965739290427392.0
15 -1180591620717411303424.0
16 1.1726039400531786394132893747155321761965751647949
17 -18446744073709551616.0
18 1152921504606846977.25
19 144115188075855873.171875
20 1.1726039400531786318594495511016817523852751037339
21 1125899906842625.1726038455963134765625
22 1.1726039400531786318588407461708427240165697469365
23 1.172603940053178631858834128725942299795170775667
24 1.1726039400531786318588349559065548528228456470757
25 1.1726039400531786318588349042077665682586159676126
26 8589934593.1726039400531786255355015669010754209012
27 1073741825.1726039400531786318238741673170011381444
28 1.1726039400531786318588349045106891558634845008907
29 1.1726039400531786318588349045201554867261366425557
30 1.1726039400531786318588349045201554867261366425557
31 1.1726039400531786318588349045201801386294247991746
32 1.172603940053178631858834904520183220117335818752
33 1.1726039400531786318588349045201837978963191349227
34 1.1726039400531786318588349045201837015998219155609
35 1.172603940053178631858834904520183707618352991771
36 -0.82739605994682136814116509547981629200548881596584
37 -0.82739605994682136814116509547981629200548881596584
38 -0.82739605994682136814116509547981629199961134421173
39 -0.82739605994682136814116509547981629199906033123478
40 -0.82739605994682136814116509547981629199903737236074
41 -0.82739605994682136814116509547981629199903306757186
42 -0.82739605994682136814116509547981629199903306757186
43 -0.82739605994682136814116509547981629199903311241341
44 -0.82739605994682136814116509547981629199903311521601
45 -0.82739605994682136814116509547981629199903311574149
46 -0.82739605994682136814116509547981629199903311578528
47 -0.82739605994682136814116509547981629199903311578528
48 -0.82739605994682136814116509547981629199903311578443
49 -0.82739605994682136814116509547981629199903311578439
50 -0.82739605994682136814116509547981629199903311578438
```

Remarkably, the evaluations are not only wildly inaccurate at low precision; up to 35 digits, they seem to be converging to the value 1.1726..., which is wrong!

Significance arithmetic saves the day:

```
>>> for i in range(2,51):
...     try:
...         r = f(num(77617,i),num(33096,i))
...         s = str(r)
...         a = r.accuracy()
...         print i, str(s), a
...     except (NotImplementedError):
...         continue
...
2 2.0e+33 -2.51544993496
5 6.0e+29 -2.82677988726
6 -8.0e+28 -2.72986987426
7 -1.0e+28 -2.63295986125
8 -1.0e+27 -2.53604984824
9 -2.0e+26 -2.43913983523
10 1.0e+25 -2.64325981789
12 2.0e+23 -2.44943979187
13 -9.0e+21 -2.65355977453
14 -1.0e+21 -2.55664976152
16 -2.0e+19 -2.3628297355
17 1.0e+18 -2.56694971816
18 -7.0e+16 -2.77106970081
20 1.0e+15 -2.5772496748
21 -1.0e+14 -2.48033966179
25 9.0e+9 -2.69475960109
26 1.0e+9 -2.59784958808
27 -1.0e+8 -2.50093957507
35 -8.0e-1 -2.92977945366
36 -8.0e-1 -1.92977945366
37 -8.0e-1 -0.929779453662
38 -8.0e-1 0.0702205463384
39 -8.0e-1 1.07022054634
40 -8.3e-1 2.07022054634
41 -8.27e-1 3.07022054634
42 -8.274e-1 4.07022054634
43 -8.274e-1 5.07022054634
44 -0.827396 6.07022054634
45 -0.8273961 7.07022054634
46 -0.82739606 8.07022054634
47 -0.82739606 9.07022054634
48 -0.8273960599 10.0702205463
49 -0.82739605995 11.0702205463
50 -0.827396059947 12.0702205463
```

I had to wrap the code in try/except clauses due to num(0) not yet being implemented (at many of the low-precision stages, though fortunately for this demonstration not all of them, there is, not unexpectedly, complete cancellation).

The accuracy is reported as negative until the arguments to the function are specified as accurate to over 38 digits, and at that point we see that the printed digits are indeed correct. (The values do not exactly match those of mpmath due to slightly different use of guard digits.)

Interestingly, the num class seems to greatly overestimate the error in this case, and I'm not yet sure if that's inherent to evaluating Rump's particular function with significance arithmetic or due to implementation flaws. It's of course better to overestimate errors than to underestimate them.

Numerical evaluation of a SymPy expression can be seen as converting the expression to a function of its constants (integers and special numbers like pi) and evaluating that function with the constants replaced by significance floating-point numbers. In practice, I will probably implement numerical evaluation using a recursive expression tree walker. This way forward error analysis can be used to efficiently obtain the user-requested level of accuracy at the top of the expression. Subevaluations can be repeated with increased precision until they become accurate enough; precision only needs to be increased for parts of an expression where it actually matters.
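
In outline, such a tree walker is just a recursive switch on the node type. The toy below (tuple-based nodes of my own invention, with no accuracy tracking) shows the shape of the recursion:

```python
def evaluate(node, prec):
    # Nodes are ('num', value), ('add', left, right) or ('mul', left, right).
    # The real walker would also return an accuracy estimate and re-walk
    # subtrees at higher prec when cancellation is detected.
    op = node[0]
    if op == 'num':
        return node[1]
    if op == 'add':
        return evaluate(node[1], prec) + evaluate(node[2], prec)
    if op == 'mul':
        return evaluate(node[1], prec) * evaluate(node[2], prec)
    raise ValueError("unknown node type: %r" % (op,))

expr = ('add', ('num', 1.5), ('mul', ('num', 2.0), ('num', 3.0)))
print(evaluate(expr, 53))  # 7.5
```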

My next goal, besides polishing the existing features (and implementing num(0)), is to implement more functions (log, trigonometric functions) and comparison operators, and to write some tests. With trigonometric functions, all the arithmetic operations for complex numbers will be straightforward to implement.

## Tuesday, June 3, 2008

### The significance of arithmetic

One week of GSoC has passed. Since school didn't end until today, I've so far only had time to do a simple edit to SymPy's Real type, replacing decimal with mpmath. Although this wasn't much work, the improvement is substantial. If you download the latest version of SymPy from the hg repository, you will find that for example cos(100).evalf() not only is fast (finishing in milliseconds rather than seconds), but actually returns the right value (!).

Switching to mpmath immediately solves a few problems, but others remain. What is the best way to handle the accuracy of approximate numbers in a computer algebra system? Different CASes use different strategies, each with pros and cons. I would like to arrive at some kind of conclusion by the end of this summer, but this is far from a solved problem. Common approaches roughly fall into three categories, which I will discuss briefly below.

The simplest approach, from the implementation point of view, is to approximate a number by a single (arbitrary-precision) floating-point number. It is up to the user to know whether any results become inaccurate due to cancellations and other errors. SymPy currently works this way (with the floating-point working precision adjusted via a global parameter).

The second approach is interval arithmetic: any exact real number can be approximated by bounding it between a pair of floating-point numbers. If all operations are implemented with proper rounding, interval arithmetic deals rigorously with approximation errors, in fact rigorously enough for formal proof generation.

Interval arithmetic is unfortunately sometimes needlessly conservative, greatly exaggerating the estimate of the true error. It also has some counterintuitive properties (e.g., x · x is not the same as x^2), and there is no unambiguous way to define comparisons. Interval arithmetic also comes with a considerable speed penalty, usually a factor of the order 2–8.
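
The x · x versus x^2 point (the dependency problem) is easy to demonstrate with a toy interval product (outward rounding omitted from this sketch):

```python
def iv_mul(x, y):
    # Interval product: the result is bounded by the extreme
    # products of the endpoints.
    products = (x[0]*y[0], x[0]*y[1], x[1]*y[0], x[1]*y[1])
    return (min(products), max(products))

# For x = [-1, 2], x*x yields [-2, 4], although the true range of x**2
# on that interval is [0, 4]: the two factors are treated as independent,
# so the error estimate comes out needlessly wide.
print(iv_mul((-1, 2), (-1, 2)))  # (-2, 4)
```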

The third approach is what might be called significance arithmetic (although this term has other meanings). It can be viewed as a tradeoff between direct floating-point arithmetic and interval arithmetic. A number is approximated as a single floating-point number, but with an error estimate attached to it. This representation is in principle equivalent to that of interval arithmetic (the error estimate can be seen as the width of an interval), but operations are defined differently. Instead of propagating the estimated error rigorously, one settles for a first-order approximation (this often works well in practice).

The error can be stored as a machine-precision number (representing the logarithm of the absolute or relative error), making arithmetic only a little slower than direct arbitrary-precision floating-point arithmetic. A downside with significance arithmetic is that it is not so straightforward to represent the concept of a "completely inaccurate" number, e.g. a number with unknown sign (the number 0 requires special treatment). It is not entirely clear to me whether it is best to work with the absolute or relative error (or both). As with interval arithmetic, there are still multiple ways to define comparisons.

I'm leaning towards implementing significance arithmetic in SymPy. Mpmath already has some support for interval arithmetic (which I will work on improving later this summer), and this should be accessible in SymPy as well, but I think some looser form of significance arithmetic is the right approach for the common representation of approximate numbers. Right now, I'm studying the implementation of significance arithmetic in Mathematica (Mark Sofroniou & Giulia Spaletta: Precise Numerical Computation) and writing some test code, to get to grips with the details.