My last post discussed the implementation of a number type that tracks the propagation of initial numerical uncertainties under arithmetic operations. I have now begun implementing a function that in some sense does the reverse: given a fixed formula and a desired final accuracy, it produces a numerical value through recursive evaluation. I've named the function N because it behaves much like Mathematica's function of the same name.
The file is available here (the code needs a lot of cleanup at this point, so please be considerate). It contains a small test suite that should pass if you try running it.
The input to N can be a SymPy expression or a string representing one. For simplicity, the returned value is currently just a string. The second argument to N specifies the desired precision as a number of base-10 digits:
>>> from sympy import *
>>> from evalf import N
>>> N(pi,30)
'3.14159265358979323846264338328'
>>> N('355/113',30)
'3.14159292035398230088495575221'
The set of supported expressions is currently somewhat limited; examples of what does work will be given below.
As I have said before, an important motivation for an adaptive numerical evaluation algorithm is to distinguish integers from non-integers (or, more simply, to distinguish nonzero numbers from zero). Numerical evaluation is, as far as I know, the only general method for evaluating functions such as x ≥ y, sign(x), abs(x) and floor(x). Due to the discontinuous nature of these functions, a tiny numerical error can, if undetected, cause a drastically wrong result, leading to complete nonsense in symbolic simplifications.
There are many known examples of "high-precision fraud", i.e. cases where an expression appears to be identical to another when evaluated to low numerical precision, but where there is in fact a small (and sometimes theoretically important) difference. See for example MathWorld's article "Almost Integer". Some of these examples are rather conspicuous (e.g. any construction involving the floor function), but others are surprising and even involve elegant mathematical theory. In any case, they are a great way to test that the numerical evaluation works as intended.
Some algebraic examples
A neat way to derive almost-integers is based on Binet's formula for the Fibonacci numbers, F(n) = (φⁿ - (-φ)⁻ⁿ)/√5, where φ is the golden ratio (1+√5)/2. The (-φ)⁻ⁿ term decreases exponentially as n grows, meaning that φⁿ/√5 alone is an excellent (although not exact) approximation of F(n). How good? Let's compute φⁿ/√5 - F(n) for a few n (we can use SymPy's exact Fibonacci number function fibonacci(n) to make sure no symbolic simplification accidentally occurs):
>>> binet = lambda n: ((1+sqrt(5))/2)**n/sqrt(5)
>>> N(binet(10) - fibonacci(10), 10)
'3.636123247e-3'
>>> N(binet(100) - fibonacci(100), 10)
'5.646131293e-22'
>>> N(binet(1000) - fibonacci(1000), 10)
'4.60123853e-210'
>>> N(binet(10000) - fibonacci(10000), 10)
'5.944461218e-2091'
N works much better than the current fixed-precision evalf in SymPy:
>>> (fibonacci(1000) - binet(1000)).evalf()
-1.46910587887435e+195
With N, we find that the simplified Binet formula not only gives the correct Fibonacci number to the nearest integer; for F(10000), you have to look more than 2000 digits beyond the decimal point to spot the difference. (The fixed-precision result above, by contrast, is pure noise: both terms are of order 10²⁰⁸, so at 15 significant digits the entire difference drowns in rounding error.) A more direct approach, of course, is to simply evaluate the (-φ)⁻ⁿ term; the beauty of the implementation of N is that it works automatically, and it will still work in case there are a hundred terms contributing to cancel each other out (a much harder situation for a human to analyze).
Another related and well-known result is that F(n+1)/F(n) is a close approximation of the golden ratio. To see how close, we can just compute the difference:
>>> N(fibonacci(10001)/fibonacci(10000) - (1+sqrt(5))/2, 10)
'3.950754128e-4180'
>>> N(fibonacci(10002)/fibonacci(10001) - (1+sqrt(5))/2, 10)
'-1.509053796e-4180'
The approximation is good to over 4000 digits. Note also the signs; based on the numerical results, we could compute the exact value of the function sign(F(n+1)/F(n) - φ) for any specific value of n (and find, as the outputs above show, that it is positive for even n and negative for odd n). Indeed, I will later implement the sign function (and related functions) in SymPy precisely this way: just call N() asking for 10 digits (or 3; it doesn't make much of a difference), and use the sign of the computed result if no error occurs.
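A minimal sketch of how that could look, assuming the N from evalf.py above (numeric_sign is a hypothetical name; as discussed later in this post, a zero input would make N loop forever, so a real version also needs a safeguard):

from evalf import N

def numeric_sign(expr, digits=10):
    # If N succeeds, it has found `digits` nonzero leading digits,
    # so the first character of the string determines the sign.
    s = N(expr, digits)
    return -1 if s.startswith('-') else 1

For example, numeric_sign(fibonacci(10001)/fibonacci(10000) - (1+sqrt(5))/2) would return 1, in agreement with the computation above.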
Let's also revisit Rump's example of an ill-conditioned function, which was mentioned in my previous blog post. I have given N the ability to substitute numerical values for symbols (this required roughly two lines of code), in effect allowing it to be used for function evaluation. When asked for 15 digits, N gives the correct value right away:
>>> var('x y')
>>> a = 1335*y**6/4+x**2*(11*x**2*y**2-y**6-121*y**4-2) + \
... 11*y**8/2+x/(2*y)
>>> N(a, 15, subs={x:77617, y:33096})
'-0.827396059946821'
With the "verbose" flag set,
N shows that it encounters cancellations during the addition and has to restart twice:
>>> N(a, 15, subs={x:77617, y:33096}, verbose=1)
ADD: wanted 54 accurate bits, got -7 -- restarting with prec 115
ADD: wanted 54 accurate bits, got 2 -- restarting with prec 167
'-0.827396059946821'
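As a sanity check of my own (not part of the test suite): substituting the integers exactly in SymPy reduces Rump's expression to a single rational number, since all the intermediate arithmetic stays exact:

>>> from sympy import Integer
>>> x, y = Integer(77617), Integer(33096)
>>> 1335*y**6/4 + x**2*(11*x**2*y**2 - y**6 - 121*y**4 - 2) + 11*y**8/2 + x/(2*y)
-54767/66192

And -54767/66192 = -0.827396059946821... indeed matches the value N produced.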
Transcendental functions
N currently supports the constants π and e, and the functions x^y, exp, cos and sin. I refer to my previous post for a discussion of the issues involved in (real) exponentiation. Suffice it to say, N figures out that in order to compute 10 mantissa digits of π to the power of 1 googol, it needs 110 digits of precision (of those 110 digits, 100 are consumed by the integer part of the base-10 logarithm, i.e. by the decimal exponent of the result):
>>> N(pi ** (10**100), 10)
'4.946362032e+497149872694133854351268288290898873651678324
3804424461340534999249471120895526746555473864642912223'
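The choice of 110 digits can be checked by hand: the base-10 logarithm of the result is 10¹⁰⁰·log10(π) ≈ 4.97·10⁹⁹, whose integer part is the 100-digit decimal exponent; only the digits beyond those 100 contribute to the mantissa. A manual cross-check with mpmath (my own computation, not part of N):

>>> from mpmath import mp, mpf, log10, pi, floor, nstr
>>> mp.dps = 115                   # 100 exponent digits + 10 mantissa digits + guard
>>> e = mpf(10)**100 * log10(pi)   # base-10 logarithm of pi**(10**100)
>>> len(str(int(floor(e))))        # the decimal exponent has 100 digits
100
>>> nstr(mpf(10)**(e - floor(e)), 10)   # the mantissa
'4.946362032'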
It is also able to cope with cancellation of exponentials close to unity:
>>> N('2**(1/10**50) - 2**(-1/10**50)',15)
'1.38629436111989e-50'
The trigonometric functions are a bit more interesting. Basically, to compute cos(x) or sin(x) to n accurate digits, you need to first evaluate x with an absolute error of 10⁻ⁿ. In order to calculate x to within a given absolute error, the magnitude of x must be known first, so two evaluations are generally required. N avoids the problem by evaluating x to a few extra bits the first time; if it turns out that |x| < C for an appropriate constant (say C = 1000), a second evaluation is not necessary. By appropriately increasing the internal precision, correct evaluations such as the following are possible:
>>> N('sin(exp(1000))',15)
'-0.906874170721915'
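Here is roughly the same computation done by hand with mpmath, to show the kind of precision bookkeeping N automates (the precision arithmetic is mine): exp(1000) ≈ 1.97·10⁴³⁴, so knowing it to 15 digits past the decimal point takes about 434 + 15 significant digits, plus a few guard digits.

>>> from mpmath import mp, exp, sin, nstr
>>> mp.dps = 434 + 15 + 5
>>> x = exp(1000)        # now accurate to ~15 digits past the decimal point
>>> nstr(sin(x), 15)     # sin reduces the argument modulo 2*pi internally
'-0.906874170721915'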
There is an additional complication associated with evaluating trigonometric functions: if the argument is very close to a root (i.e. a multiple of π for sin, or offset by π/2 for cos), the precision must be increased further. N detects when this is necessary, and is for example able to deduce the following:
>>> N(sin(pi*10**1000 + Rational(1,10**1000), evaluate=False), 10)
'1.0e-1000'
The test shows that there is no difference between evaluating sin(2πn + x) and sin(x), except of course for speed. The evaluate=False was added to prevent SymPy from removing the full-period term π·10¹⁰⁰⁰. This automatic simplification is of course a SymPy feature; indeed, it makes the cleverness in N redundant in many cases by automatically reducing the argument to a region close to zero. However, the symbolic simplification is not of much help when x happens to be close to a multiple of 2π without having that explicit symbolic form. To demonstrate, let's combine a trigonometric function with the Fibonacci number approximation from before:
>>> phi = (1+sqrt(5))/2
>>> N(sin(phi**3000 / sqrt(5) * pi), 15)
'1.53018563496763e-627'
Running with verbose=True shows that N sets the working precision to over 6000 bits before it arrives at those 15 digits.
The problem with zero
I have so far neglected to mention the issue of zero detection. Although adaptive numerical evaluation can identify a nonzero value in finite time, it cannot detect a zero.
Suppose we try to compute the difference between the explicit Fibonacci number F(n) and the expression for the same number in terms of Binet's exact formula (φⁿ - (-φ)⁻ⁿ)/√5. N(..., 10) will in effect attempt to find 10 nonzero digits of 0, and of course fail, getting stuck in an infinite loop:
>>> phi = (1+sqrt(5))/2
>>> binet = lambda n: (phi**n - (-phi)**(-n))/sqrt(5)
>>> N(binet(100) - fibonacci(100), 10, verbose=1)
ADD: wanted 56 accurate bits, got -1 -- restarting with prec 113
ADD: wanted 56 accurate bits, got 0 -- restarting with prec 169
ADD: wanted 56 accurate bits, got -3 -- restarting with prec 228
...
ADD: wanted 56 accurate bits, got 0 -- restarting with prec 524753
ADD: wanted 56 accurate bits, got -1 -- restarting with prec 1049051
...
To deal with this problem, it will be necessary to set a threshold precision or a maximum number of iterations, with a reasonable default value. The user should be able to override this number, either globally or by providing a keyword argument to N (perhaps both).
There are several possible courses of action in case the threshold is reached. A cheap and practical solution is to simply consider any smaller quantity to be zero. This can be done either silently or with a printed warning message. Rigorists would perhaps find it more satisfactory if an exception were raised. A final possibility is to prompt the user. I think all these options should be available; the question is what to do by default.
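A toy version of the control flow I have in mind, with mpmath standing in for the internal evaluation (the names find_digits, PrecisionExhausted, maxprec and strict are all hypothetical, and comparing printed digits between precisions is only a crude stand-in for the real error tracking described in my previous post):

from mpmath import mp, nstr

class PrecisionExhausted(Exception):
    pass

def find_digits(f, digits=10, maxprec=10000, strict=False):
    # Evaluate f() at doubling precision until `digits` significant
    # digits agree between two consecutive evaluations.
    mp.prec = 10 * digits
    last = None
    while mp.prec < maxprec:
        s = nstr(f(), digits)
        if s == last:
            return s
        last = s
        mp.prec *= 2
    # Threshold reached: declare zero, or complain, depending on policy.
    if strict:
        raise PrecisionExhausted("no stable digits below the threshold")
    return '0'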
Fortunately, in a computer algebra system, most cancellations can be detected and turned into explicit zeros before they ever reach the N function (1-1, sin(πn), etc.). I am not entirely sure about the terminology here, but I think this ability to symbolically detect zeros is the essential difference between computer algebra systems and what is sometimes called "lazy infinite-precision reals" (or something similar).
Complex numbers
Currently, N does not know how to deal with complex numbers (I have so far only written some placeholder code for this).
Addition should be relatively easy to implement: just add the real and imaginary parts separately and check the accuracy of each.
Multiplication is the simplest of all operations in the purely real case, because there are no cancellation effects whatsoever; all that is needed is a few guard bits to deal with rounding. In fact, multiplying purely real and purely imaginary quantities already works (this is just a matter of keeping an extra boolean variable around to track whether the product is imaginary; in effect a pseudo-polar representation):
>>> N('3*pi*I',10)
'9.424777961*I'
With general complex numbers, however, multiplication in rectangular form translates into addition, and I think cancellation effects may come into play, so it will be a little more complicated to implement correctly. For multiplication it would be much nicer to use polar complex numbers, but that makes addition harder. There's just no escape...
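To see where the addition sneaks in: the real part of (a+bi)(c+di) is ac - bd, a genuine subtraction, and it can cancel catastrophically as soon as the parts are themselves approximations. A small mpmath experiment of my own (here the true real part is √2·√3 - √6·1 = 0 exactly):

from mpmath import mp, mpc, sqrt

mp.dps = 15
z1 = mpc(sqrt(2), sqrt(6))
z2 = mpc(sqrt(3), 1)
# The computed real part comes out as rounding noise on the order of
# 1e-16 rather than 0; exactly the situation N would have to detect
# and handle by restarting at higher precision.
print((z1 * z2).real)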
One thing I'm wondering about is how to define accuracy for complex numbers. One could either consider the accuracy of the number as a whole, or of the real and imaginary parts separately.
It is very common to encounter sums of complex numbers with conjugate imaginary parts, i.e. (a+bi) + (c-bi). What should N do if it obtains the requested number of digits, say 15, for the real part, but is unable to deduce anything about the imaginary part except that it is smaller than 10⁻¹⁵ times the real part? By the definition of relative error as z·(1+error), N should arguably be satisfied with that. But matters become more complicated if the number is subsequently passed to a function that is very sensitive to changes in the imaginary part alone (in the extreme case, the imaginary part function Im(z)).
Performance
What about speed? In the best case (i.e. no restarts), N seems to be about as fast as direct evaluation with mpmath. This might be surprising, since N both keeps track of errors and manually traverses the expression tree. I had actually expected N to come out faster, since much of the need for instance creation and floating-point normalization is eliminated, but it turns out that a good deal of that is still needed, and the additional error bookkeeping largely makes up for the remaining advantage.
There are some potential optimizations that could be exploited. One would be to take advantage of the TERMS/FACTORS representation employed in SympyCore. In a sum like a·x + b·y + ... where the coefficients a, b, ... are integers or simple fractions (a very common situation), the coefficients can be absorbed into the sum on the fly instead of recursively evaluating each term as a full product. Another optimization would be to save the magnitude (exponent + width of mantissa) and sign of each term in the numerical evaluation of sums. This way, an optimal precision can be chosen for each term in case the evaluation has to be restarted at higher precision.
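A sketch of the first idea, using the low-level number functions found in recent mpmath versions (mpmath.libmp; older versions arrange this module differently). The helper eval_sum_of_terms is hypothetical, and a real version would also need to track accuracy like the rest of N:

from mpmath.libmp import from_int, mpf_add, mpf_mul, fzero

def eval_sum_of_terms(terms, prec):
    # terms: a list of (integer coefficient, mpf value) pairs, so that
    # [(3, x), (-1, y)] stands for 3*x - y. Each coefficient is absorbed
    # as one scaled addition instead of a full recursive product node.
    wp = prec + 10                  # a few guard bits for the additions
    s = fzero
    for coef, val in terms:
        s = mpf_add(s, mpf_mul(val, from_int(coef), wp), wp)
    return s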
Project status
This completes an important part of my GSoC project, namely to implement a reliable numerical expression evaluation algorithm. What remains now is first of all to add support for complex numbers and more mathematical functions, and to provide alternatives to the infinite loop for dealing with zeros. The code should also work with something like the num class I posted previously; N currently assumes that expressions are exact and can be approximated to any accuracy, but it should also be able to accommodate expressions initially containing approximate numbers. And of course, it all has to be implemented in SymPy and debugged.
Post scriptum: it seems that I forgot to provide an example of applying N to Ramanujan's constant. I'll leave this as an entertaining exercise to the reader.