Holiday season means more free time for coding (yay!). In
this commit, I've fixed the remaining major accuracy issues with hypergeometric functions in mpmath. The effect, generally speaking, is that mpmath will now deal much more robustly with large parameters of hypergeometric-type functions.
Previously, you would obtain an accurate value with parameters of magnitude ≈ 10^-1 to 10^1 (say), and often for large parameters as well, but for some functions the accuracy would quickly deteriorate if parameters were increased (or moved closer to singularities). You could still obtain any accuracy by simply increasing the working precision, but you had to manually figure out the amount of extra precision required; the news is that mpmath now automatically gives a value with full accuracy even for large parameters (or screams out loud if it fails to compute an accurate value).
This doesn't mean that you can't trick mpmath into computing a wrong value by choosing sufficiently evil parameters, but it's much harder now than simply plugging in large values. Abnormally large values will now mainly accomplish abnormal slowness (while mpmath implements asymptotic expansions with respect to the argument of hypergeometric functions, evaluation for asymptotically large parameters is a much harder problem as far as I'm aware -- so mpmath in effect handles it by brute force).
The most trivial change was to make the series summation aim for control of the relative error instead of the absolute error. This affects alternating series, where large parameters lead to very small function values. Previously, something like the following would happen:
>>> mp.dps=5; hyp1f1(-5000, 1500, 100)
1.6353e-14
>>> mp.dps=15; hyp1f1(-5000, 1500, 100)
8.09813050863682e-25
>>> mp.dps=30; hyp1f1(-5000, 1500, 100)
-1.38318247777802583583082760724e-39
This isn't disastrously bad since the absolute error is controlled (summing this series naively in floating-point arithmetic might give something like 1e+234 due to catastrophic cancellation). But now, you get an answer with full relative accuracy right away, which is much nicer:
>>> mp.dps = 5; hyp1f1(-5000, 1500, 100)
9.1501e-185
>>> mp.dps = 15; hyp1f1(-5000, 1500, 100)
9.15012134245639e-185
>>> mp.dps = 30; hyp1f1(-5000, 1500, 100)
9.15012134245639443114220499541e-185
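To get a feel for how much cancellation is involved in this example, here is a rough standalone measurement (not mpmath's internal algorithm; the 600-digit guard precision is simply assumed to be ample): sum the terminating 1F1 series directly and compare the largest term to the final sum.

from mpmath import mp, mpf, fabs

mp.dps = 600                      # assumed ample guard precision
a, b, z = mpf(-5000), mpf(1500), mpf(100)
term = total = largest = mpf(1)   # k = 0 term of the 1F1 series
k = 0
while term:                       # the series terminates once a + k == 0
    term = term * (a + k) / (b + k) * z / (k + 1)
    total += term
    largest = max(largest, fabs(term))
    k += 1
print(mp.nstr(total, 15))                     # ~9.15012134245639e-185
print(int(mp.log10(largest / fabs(total))))   # decimal digits lost to cancellation (several hundred)

The gap between those two magnitudes is the extra working precision that the summation now supplies automatically.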
Of course, if the value is too small, the evaluation loop must abort eventually. The default behavior now is to raise an exception if relative accuracy cannot be obtained. The user can force a return value by either permitting a higher internal precision before aborting, or specifying a size 2^(-zeroprec) below which the value is small enough to be considered zero:
>>> hyp1f1(-8000, 4000, 1500)
Traceback (most recent call last):
...
ValueError: hypsum() failed to converge to the requested 53 bits of accuracy
using a working precision of 3568 bits. Try with a higher maxprec,
maxterms, or set zeroprec.
>>> hyp1f1(-8000, 4000, 1500, maxprec=10000)
4.36754212717293e-1350
>>> hyp1f1(-8000, 4000, 1500, accurate_small=False)
-1.99286380611911e-25
>>> hyp1f1(-8000, 4000, 1500, zeroprec=1000)
0.0
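Since the failure is signaled with an ordinary Python exception, it is easy to wrap your own retry logic around it. A hedged sketch (the wrapper name and the retry schedule are mine, and the exception class may differ between versions):

from mpmath import hyp1f1

def hyp1f1_retry(a, b, z, maxprec=2000, limit=100000):
    # Keep raising the internal precision cap until the summation
    # succeeds; as a last resort, let tiny results round to zero.
    while maxprec <= limit:
        try:
            return hyp1f1(a, b, z, maxprec=maxprec)
        except ValueError:
            maxprec *= 4
    return hyp1f1(a, b, z, zeroprec=1000)

With the parameters above, hyp1f1_retry(-8000, 4000, 1500) should end up taking the maxprec route and return the 4.36754212717293e-1350 value rather than 0.0.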
Since exceptions are inconvenient, it will be necessary to add some more symbolic tests for exact zeros in certain functions (particularly orthogonal polynomials), where exact zeros arise as important special cases.
Also, cancellation detection in hypercomb is now the default for all functions. This fixes, among other things, a bug in the Meijer G-function reported by a user via email. Before:
>>> meijerg([[],[]],[[0],[]],0.1)
0.90483741803596
>>> meijerg([[],[]],[[0,0],[]],0.1)
1.47347419562219
>>> meijerg([[],[]],[[0,0,0],[]],0.1)
1.61746025085449
>>> meijerg([[],[]],[[0,0,0,0],[]],0.1) # suspicious
13194139533312.0
After:
>>> meijerg([[],[]],[[0],[]],0.1)
0.90483741803596
>>> meijerg([[],[]],[[0,0],[]],0.1)
1.47347419562219
>>> meijerg([[],[]],[[0,0,0],[]],0.1)
1.61745972140487
>>> meijerg([[],[]],[[0,0,0,0],[]],0.1)
1.56808223438324
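The idea behind the cancellation detection is roughly the following (a simplified sketch with names of my own, not the actual hypercomb code): compare the largest term of the linear combination to the final sum, and if too many bits have cancelled, redo the whole evaluation at a higher working precision.

from mpmath import mp, fsum, fabs

def careful_sum(make_terms, target_prec=53, maxprec=4000):
    # make_terms() is assumed to evaluate the terms of the linear
    # combination at the current working precision mp.prec.
    orig_prec = mp.prec
    wp = target_prec + 30
    try:
        while wp <= maxprec:
            mp.prec = wp
            terms = make_terms()
            s = fsum(terms)
            largest = max(fabs(t) for t in terms)
            # Bits lost to cancellation: log2(largest / |s|).  Accept the
            # result only if enough working bits remain after the loss.
            if s and largest <= fabs(s) * 2**(wp - target_prec - 10):
                return +s
            wp *= 2
        raise ValueError("cancellation too severe; raise maxprec")
    finally:
        mp.prec = orig_prec

Doubling the working precision each round is crude, but it captures the principle: a result that is many orders of magnitude smaller than the individual terms it came from cannot be trusted at the original working precision.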
Another important change is more correct handling of parameters very close to negative integers, particularly those appearing in the denominators of series terms. Previously, unless the integer n was small enough or the precision was set high enough, this situation would yield a bogus value (that of a prematurely truncated series):
>>> n = -20
>>> nh = fadd(n, 1e-100, exact=True)
>>> hyp0f1(n, 0.25) # nonsense
0.987581857511351
>>> hyp0f1(nh, 0.25) # same nonsense
0.987581857511351
Now, correctly:
>>> n = -20
>>> nh = fadd(n, 1e-100, exact=True)
>>> hyp0f1(n, 0.25)
Traceback (most recent call last):
...
ZeroDivisionError: pole in hypergeometric series
>>> hyp0f1(nh, 0.25)
1.85014429040103e+49
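One way to express the distinction mpmath now makes (a hedged illustration with a function name of my own; the internal detection may well work differently) is a pre-check on the lower parameter before summing:

from mpmath import isint, hyper

def hyp0f1_checked(b, z):
    # An exact nonpositive integer b makes a denominator factor (b)_k
    # vanish, so the series has a genuine pole; a nearby non-integer
    # value such as nh above gives a huge but finite result instead.
    if isint(b) and b <= 0:
        raise ZeroDivisionError("pole in hypergeometric series")
    return hyper([], [b], z)

(For nh, the correct value is dominated by the first series term whose denominator picks up the tiny factor nh + 20 ≈ 1e-100, which is why it is of order 10^49.)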
Finally, and probably most importantly, the limit evaluation code has been changed to adaptively decrease the size of the perturbation until convergence. Under fairly general assumptions, the maximum accurate perturbation at precision p can easily be shown to be 2^(-(p+C)); the problem is that the parameter-dependent constant C isn't generally known. Previously C was just set to a small value (10), and naturally this would break down for some functions when parameters were increased beyond roughly that magnitude.
I briefly considered the idea of estimating C analytically in terms of the parameters, and while I think this can be done rigorously, it seems difficult -- especially to do it tightly (grossly overestimating C would murder performance). The algorithm implemented now is quasi-rigorous, and although there is some slowdown (sometimes by a fair amount), the improved reliability is definitely worth it. A user can also manually supply a perturbation size of their own taste, thereby overriding adaptivity. If the user supplies an intelligent value, this gives both the speed of the old code and full rigor. Probably some intelligent choices for particular functions could be made automatically by mpmath too, to recover the old speed in common cases.
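Here is a minimal sketch of the adaptive idea (my own simplification, not the actual hypercomb implementation; the convergence test and the doubling schedule in particular are assumptions): perturb the parameter by 2^(-(p+hmag)) and keep shrinking the perturbation until two successive evaluations agree to the target precision.

from mpmath import mp, mpf, workprec, almosteq

def limit_eval(f, a, prec=53, hmag=10, maxprec=3000):
    # Evaluate lim f(x) as x -> a by perturbing a, adaptively shrinking
    # the perturbation (i.e. increasing hmag) until the value stabilizes.
    last = None
    while prec + hmag <= maxprec:
        with workprec(2 * (prec + hmag)):   # guard bits for the cancellation
            v = f(a + mpf(2) ** (-(prec + hmag)))
        if last is not None and almosteq(v, last, rel_eps=mpf(2) ** (-prec)):
            return v
        last = v
        hmag *= 2
    raise ValueError("perturbation did not converge; raise maxprec")

Supplying hmag manually, as in the legenp example further down, skips the stabilization loop and simply trusts the given perturbation size.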
An example of a situation where this becomes necessary is for Legendre functions with certain large parameter combinations. With the old code:
>>> mp.dps = 15; legenp(0.5, 300, 0.25) # bad
4.19317029788723e+626
>>> mp.dps = 30; legenp(0.5, 300, 0.25) # bad
3.72428336871098039162972441784e+611
>>> mp.dps = 60; legenp(0.5, 300, 0.25) # bad
2.93794154954090326636196697693611381787845107728298382876544e+581
>>> mp.dps = 100; legenp(0.5, 300, 0.25) # only accurate to a few digits
-1.717508549497252387888262037127786870364383653749456259962461459248023660615596579520831362887006032e+578
With the new code:
>>> mp.dps = 15; legenp(0.5, 300, 0.25)
-1.71750854949725e+578
>>> mp.dps = 30; legenp(0.5, 300, 0.25)
-1.71750854949725238788826203713e+578
>>> mp.dps = 60; legenp(0.5, 300, 0.25)
-1.71750854949725238788826203712778687063419097363472262860567e+578
>>> mp.dps = 15; legenp(0.5, 300, 0.25, hmag=300) # faster, with manual perturbation
-1.71750854949725e+578
Another example is for Bessel functions differentiated to high order. With the old code:
>>> mp.dps = 15; besseli(2, 10, derivative=100) # bad
2.63560662943646e+34
>>> mp.dps = 30; besseli(2, 10, derivative=100) # bad
23408889310840499424.9813614712
>>> mp.dps = 60; besseli(2, 10, derivative=100) # only accurate to a few digits
821.238621006501815018537509753672810563116338269219460709828
With the new code:
>>> mp.dps = 15; besseli(2, 10, derivative=100)
821.238621006483
>>> mp.dps = 30; besseli(2, 10, derivative=100)
821.238621006483348660925541651
>>> mp.dps = 60; besseli(2, 10, derivative=100)
821.238621006483348660925541650744338655411830854860813048862
In related news, I've given all hypergeometric-type functions the ability to accept kwargs and pass them on to hypercomb and hypsum. This means that tuning and error control options will be available for a large number of functions (Bessel functions, etc.), which could be useful for some users who need to do advanced calculations. I have yet to document these features thoroughly (the interface will also perhaps be tweaked before the next release).
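For example (a hypothetical call; the keyword names are the ones appearing in the messages and examples above, and the interface may still be tweaked), the same options can be passed through a wrapper like besseli:

from mpmath import mp, besseli

mp.dps = 15
# Forwarded to hypercomb/hypsum: cap the internal working precision and
# the number of series terms for a high-order Bessel derivative.
val = besseli(2, 10, derivative=100, maxprec=8000, maxterms=10**5)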