>>> from mpmath import mp, pi
>>> mp.dps = 10**6
>>> print pi
Right now, computing π with mpmath is actually faster than with SAGE (but knowing the SAGE people, this bug will be fixed soon :-).
As promised in the last post, I will now write in more detail about the most recently added features to my numerical evaluation code for SymPy. For the code itself, see evalf.py and test_evalf.py.
New functions
The functions atan (for real input) and log (for positive real input) have been added.
>>> N('log(2)', 50)
'0.69314718055994530941723212145817656807550013436026'
>>> N('16*atan(1/5) - 4*atan(1/239)', 50)
'3.1415926535897932384626433832795028841971693993751'
The working precision is increased automatically to evaluate log(1+ε) accurately:
>>> N('log(2**(1/10**20))',15)
'6.93147180559945e-21'
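The same effect can be reproduced by hand with mpmath. Here is a rough sketch of the idea, not the actual evalf.py logic: work temporarily with some extra guard digits so that the cancellation in 1 + ε does not eat into the requested accuracy.
>>> from mpmath import mp, mpf, log
>>> mp.dps = 15 + 25                      # 25 guard digits for the cancellation
>>> y = log(mpf(2) ** (mpf(1)/10**20))    # argument is 1 + eps with eps ~ 6.9e-21
>>> mp.dps = 15                           # back to the target precision
>>> print y
6.93147180559945e-21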
Integrals
A second important new feature is support for integrals. There are still some bugs to be sorted out with the implementation, but the basics work.
>>> from sympy import *
>>> var('x')
x
>>> gauss = Integral(exp(-x**2), (x, -oo, oo))
>>> N(gauss, 15)
'1.77245385090552'
Integrals can be used as part of larger expressions, and adaptive evaluation works as expected:
>>> N(gauss - sqrt(pi) + E*Rational(1,10**20), 15)
'2.71828182845904e-20'
For reasonably nice integrands, the integration routine in mpmath can provide several hundred digits fairly quickly. Of course, any numerical integration algorithm can be fooled by pathological input, so the user has to be aware of this limitation. In many common situations, however, numerical errors can be detected automatically (and doing this well is something I will look into further).
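To give an idea of what is available, here is a direct call to mpmath's quadrature, bypassing SymPy entirely (I am assuming a recent mpmath where the general routine is exposed as quad; the error estimate in the comment is only indicative):
>>> from mpmath import mp, quad, exp, inf, sqrt, pi
>>> mp.dps = 200
>>> gauss = quad(lambda x: exp(-x**2), [-inf, inf])
>>> err = abs(gauss - sqrt(pi))   # error should be on the order of 10**(-200)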
Complex arithmetic
The most important new feature is support for multiplication and addition of complex numbers.
In an earlier post, I posed the question of how to best track accuracy for complex numbers. This turns out not to be such a difficult problem; as soon as I got started with the implementation, I realized that there is only one reasonable solution. I have decided to track the accuracy of the real and imaginary parts separately, but to count the accuracy of a computed result as the accuracy of the number as a whole.
In other words, a computed result z denotes a point in the complex plane and the real and imaginary errors define a rectangular uncertainty region, centered around z. The other option would have been a circular disk, requiring only a single real error value (specifying radius). The rectangular representation is somewhat easier to work with, and much more powerful, because it is very common that the real part is known much more accurately than the imaginary part, and vice versa.
If the half-width and half-height of the error rectangle are defined by the complex number w, then the absolute error can be defined the usual way as |w| and the relative error as |w|/|z|. (For computational purposes, the complex norm can be approximated accurately using the max norm. This is wrong by at most a factor √2, or logarithmically by log₂(√2) = 0.5 bits ≈ 0.15 decimal digits.)
In other words, if |w|/|z| = 10⁻¹⁵, the result is considered accurate to 15 digits. This can either mean that both the real and imaginary parts are accurate to 15 digits, or that just one of them is, provided that the other is smaller in magnitude. For example, with a target accuracy of 15 digits: if the real part is fully accurate and the imaginary part is a factor 10³ smaller than the real part, then the latter need only be accurate to 15 − 3 = 12 digits. The advantage of this approach is that accuracy is preserved exactly under multiplication, so no restarting is needed for multiplications.
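To make the bookkeeping concrete, here is a small worked check in plain Python (the numbers are just illustrative): the real part has magnitude 1 and is fully accurate, the imaginary part has magnitude 10⁻³, and both parts carry an absolute error of 10⁻¹⁵.
>>> from math import log10
>>> z = (1.0, 1e-3)       # magnitudes of the real and imaginary parts
>>> w = (1e-15, 1e-15)    # absolute errors of the two parts
>>> z_mag = max(abs(z[0]), abs(z[1]))   # max-norm approximation of |z|
>>> w_mag = max(abs(w[0]), abs(w[1]))   # max-norm approximation of |w|
>>> print -log10(w_mag / z_mag)         # accuracy of the number as a whole
15.0
>>> print -log10(w[1] / abs(z[1]))      # accuracy of the imaginary part on its own
12.0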
As an example, consider the following multiplication in which the real parts completely cancel:
>>> N('(1/3+2*I)*(2+1/3*I)', 10)
'.0e-12 + 4.111111111*I'
As noted in earlier posts, numerical evaluation cannot detect a quantity being exactly zero. The ".0e-12" is a scaled zero, indicating a real quantity of unknown sign and magnitude at most equal to 1e-12. (To clarify its meaning, it could perhaps be printed with a "±" sign in front.) If we treat it as 10⁻¹², its relative accuracy is 0 digits (because 0 nonzero digits are known). But the result as a whole is accurate to 10 digits, due to the imaginary part being more than 10¹⁰ times larger and accurate to 10 digits.
In an earlier post, I speculated about the total accuracy being problematic to use for complex results, because of subsequent additions potentially causing cancellations. This was rather stupid, because the problem already existed for purely real numbers, and was already solved by the existing adaptive addition code. To implement complex addition, I basically just needed to refactor the code to first evaluate all the terms and then add up the real and imaginary parts separately.
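Schematically, with all the precision handling stripped out, the refactored addition does something like the following (this is only a sketch of the idea, not the actual evalf.py code; the function name and the crude cancellation estimate are mine):
from math import log10

def add_complex_terms(terms):
    # terms: list of (re, im) pairs, each already evaluated at the working precision
    re_sum = sum(re for re, im in terms)
    im_sum = sum(im for re, im in terms)

    # Estimate the digits lost to cancellation in each part by comparing the
    # sum against the largest contributing term; a large loss (or a total
    # cancellation) is what triggers a retry at higher precision.
    def digits_lost(total, parts):
        largest = max(abs(p) for p in parts)
        if largest == 0:
            return 0
        if total == 0:
            return float("inf")
        return max(0.0, log10(largest / abs(total)))

    re_lost = digits_lost(re_sum, [re for re, im in terms])
    im_lost = digits_lost(im_sum, [im for re, im in terms])
    return (re_sum, im_sum), (re_lost, im_lost)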
Building on the previous example, where the imaginary part held all the accuracy, we can try to subtract the entire imaginary part:
>>> N('(1/3+2*I)*(2+1/3*I) - 37/9*I + pi/10**50', 10)
'3.141592654e-50 + .0e-62*I'
The addition routine finds that both the real and the imaginary parts are inaccurate and retries at higher precision until it locates the tiny real part that has been added. Alternatively, the following test also works (the imaginary part is displayed as 1.0e-15 despite having been computed accurately to 10 digits, because mpmath strips trailing zeros — this could perhaps be changed):
>>> N('(1/3+2*I)*(2+1/3*I) - (37/9 - 1/10**15)*I', 10)
'.0e-30 + 1.0e-15*I'
The computation
>>> N('(1/3+2*I)*(2+1/3*I) - 37/9*I', 10)
hangs, as it should, because I have still not implemented any stopping criterion for apparent total cancellations.
So there is still work to do :-)