For example, the classical orthogonal polynomials (Legendre, Chebyshev, Jacobi) are instances of 2F1 with certain integer parameters, and 2F1 with noninteger parameters allows for generalization of these functions to noninteger orders. Other functions that can be reduced to 2F1 include elliptic integrals (though mpmath uses AGM for these). With a good implementation of 2F1, these functions can be implemented very straightforwardly without a lot of special-purpose code to handle all their corner cases.
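As a concrete illustration of such a reduction (this snippet is my own check, not from mpmath's internals): the complete elliptic integral of the first kind satisfies K(m) = (π/2)·2F1(1/2, 1/2; 1; m), where m = k² is the parameter that mpmath's `ellipk` takes, so the two routes can be compared directly:

```python
from mpmath import mp, hyp2f1, ellipk, pi

mp.dps = 25
m = 0.3  # parameter m = k^2
# K(m) = (pi/2) * 2F1(1/2, 1/2; 1; m)
via_2f1 = pi/2 * hyp2f1(0.5, 0.5, 1, m)
print(via_2f1)
print(ellipk(m))  # mpmath's dedicated AGM-based routine should agree
```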

Numerical evaluation of 2F1 is far from straightforward, and the `hyp2f1` function in mpmath used to be quite fragile. The hypergeometric series only converges for |z| < 1, and converges rapidly only for |z| << 1. There is a transformation that replaces z with 1/z, but this leaves arguments close to the unit circle, which must be handled using further transformations. As if things weren't complicated enough, the transformations involve gamma function factors that often become singular even when the value of 2F1 itself is finite, and obtaining the correct finite value involves appropriately cancelling the singularities against each other.
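To see why the series alone is not enough, here is a minimal sketch of direct summation (the helper `hyp2f1_series` is my own illustration, not mpmath code): it works fine for small |z|, needs thousands of terms as |z| approaches 1, and diverges outright for |z| ≥ 1.

```python
from mpmath import mp, mpf, hyp2f1

mp.dps = 15

def hyp2f1_series(a, b, c, z, maxterms=10000):
    """Naive direct summation of the 2F1 series (illustration only)."""
    s = term = mpf(1)
    for k in range(maxterms):
        # ratio of consecutive terms: (a+k)(b+k) z / ((c+k)(k+1))
        term = term * (a+k) * (b+k) * z / ((c+k) * (k+1))
        s += term
        if abs(term) < mp.eps * abs(s):
            return s
    raise ValueError("series did not converge")

print(hyp2f1_series(2, 3, 4, 0.25))  # fast convergence for |z| << 1
print(hyp2f1(2, 3, 4, 0.25))         # agrees with mpmath
# hyp2f1_series(2, 3, 4, 0.99) would need thousands of terms,
# and for |z| >= 1 the series diverges.
```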

After about two days of work, I've patched the 2F1 function in mpmath to the point where it should finally work for all complex values of a, b, c, z (see commits here). I'm not going to bet money that there isn't some problematic case left unhandled, but I've done tests for many of the special cases now.

The following is a very simple example that previously triggered a division by zero but now works:

```
>>> print hyp2f1(3,-1,-1,0.5)
2.5
```

The following previously returned something like -inf + nan*j, due to incorrect handling of gamma function poles, but now works:

```
>>> print hyp2f1(1,1,4,3+4j)
(0.492343840009635 + 0.60513406166124j)
>>> print (717./1250-378j/625)-(6324./15625-4032j/15625)*log(-2-4j) # Exact
(0.492343840009635 + 0.60513406166124j)
```

Evaluation close to the unit circle used to be completely broken, but should be fine now. A simple test is to integrate along the unit circle:

```
>>> mp.dps = 25
>>> a, b, c = 1.5, 2, -4.25
>>> print quad(lambda z: hyp2f1(a,b,c,exp(j*z)), [pi/2, 3*pi/2])
(14.97223917917104676241015 + 1.70735170126956043188265e-24j)
```

Mathematica gives the same value:

```
In[17]:= NIntegrate[Hypergeometric2F1[3/2,2,-17/4,Exp[I z]],
         {z, Pi/2, 3Pi/2}, WorkingPrecision->25]

Out[17]= 14.97223917917104676241014 - 3.514976640925973851950882*10^-26 I
```

Finally, evaluation at the singular point z = 1 now works and knows whether the result is finite or infinite:

```
>>> print hyp2f1(1, 0.5, 3, 1)
1.333333333333333333333333
>>> print hyp2f1(1, 4.5, 3, 1)
+inf
```
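The finite case follows Gauss's summation theorem, 2F1(a, b; c; 1) = Γ(c)Γ(c−a−b)/(Γ(c−a)Γ(c−b)), valid when Re(c−a−b) > 0. A quick sanity check of the first value above (my own verification, not part of mpmath):

```python
from mpmath import mp, hyp2f1, gamma

mp.dps = 25
a, b, c = 1, 0.5, 3
# Gauss: 2F1(a,b;c;1) = G(c)G(c-a-b)/(G(c-a)G(c-b)) for Re(c-a-b) > 0
gauss = gamma(c)*gamma(c-a-b)/(gamma(c-a)*gamma(c-b))
print(gauss)               # should be exactly 4/3 = 1.3333...
print(hyp2f1(a, b, c, 1))
```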

As a consequence of these improvements, several mpmath functions (such as the orthogonal polynomials) should now work for almost all complex parameters as well.
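For instance, the Legendre function of (possibly noninteger or complex) degree n reduces to 2F1 via P_n(x) = 2F1(−n, n+1; 1; (1−x)/2), so `legendre` with complex parameters can be checked against `hyp2f1` directly (a quick sketch of my own):

```python
from mpmath import mp, hyp2f1, legendre, mpc

mp.dps = 25
n, x = mpc(2.5, 1.0), mpc(0.3, -0.2)   # complex degree and argument
# P_n(x) = 2F1(-n, n+1; 1; (1-x)/2)
via_2f1 = hyp2f1(-n, n+1, 1, (1-x)/2)
print(via_2f1)
print(legendre(n, x))  # should agree
```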

The improvements to 2F1 also pave the way for some new functions. One of the many functions that can be reduced to 2F1 is the generalized incomplete beta function

B(a, b; x1, x2) = ∫ from x1 to x2 of t^(a-1) (1-t)^(b-1) dt.

An implementation of this function (`betainc(a,b,x1,x2)`) is now available in mpmath trunk. I wrote the basics of this implementation a while back, but it was nearly useless without the recent upgrades to 2F1. Evaluating the incomplete beta function with various choices of parameters proved useful for identifying and fixing some corner cases in 2F1.
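As a basic consistency check (my own, not from the mpmath test suite), integrating over the full interval should reproduce the complete beta function B(a, b) = Γ(a)Γ(b)/Γ(a+b), and both should match direct numerical integration:

```python
from mpmath import mp, betainc, beta, quad

mp.dps = 25
a, b = 2.5, 3.5
print(betainc(a, b, 0, 1))
print(beta(a, b))                                     # complete beta function
print(quad(lambda t: t**(a-1)*(1-t)**(b-1), [0, 1]))  # direct integration
```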

One important application of the incomplete beta integral is that, when regularized, it is the cumulative distribution function of the beta distribution. As a sanity check, the following code successfully reproduces the plot of several beta CDFs on the Wikipedia page for the beta distribution (I even got the same colors!):

```
from mpmath import betainc, plot

def B(a,b):
    return lambda t: betainc(a,b,0,t,regularized=True)

plot([B(1,3),B(0.5,0.5),B(5,1),B(2,2),B(2,5)], [0,1])
```

The `betainc` function is superior to manual numerical integration because of the numerically hairy singularities that occur at x = 0 and x = 1 for some choices of parameters. Thanks to having a good 2F1 implementation, `betainc` gives accurate results even in those cases.

The `betainc` function also provides an appropriate analytic continuation of the beta integral, internally via the analytic continuation of 2F1. Thus the beta integral can be evaluated outside of the standard interval [0,1]; for parameters where the integrand is singular at 0 or 1, this is in the sense of a contour that avoids the singularity.

It is interesting to observe how the integration introduces branch cuts; for example, in the following plot, you can see that 0 is a branch point when the first parameter is fractional and 1 is a branch point when the second parameter is fractional (when both are positive integers, the beta integral is just a polynomial, so it then behaves nicely):

```
# blue, red, green
plot([B(2.5,2), B(3,1.5), B(3,2)], [-0.5,1.5], [-0.5,1.5])
```

To check which integration path `betainc` "uses", we can compare with numerical integration. For example, to integrate from 0 to 1.5, we can choose a contour that passes through +i (in the upper half plane) or through -i (in the lower half plane):

```
>>> mp.dps = 25
>>> print betainc(3, 1.5, 0, 1.5)
(0.152380952380952380952381 + 0.4023774302466306150757186j)
>>> print quad(lambda x: x**2*(1-x)**0.5, [0, j, 1.5])
(0.152380952380952380952381 - 0.4023774302466306150757186j)
>>> print quad(lambda x: x**2*(1-x)**0.5, [0, -j, 1.5])
(0.152380952380952380952381 + 0.4023774302466306150757186j)
```

The sign of the imaginary part shows that `betainc` gives the equivalent of a contour through the lower half plane. The convention turns out to agree with that used by Mathematica:

```
In[10]:= Beta[0, 1.5, 3, 1.5]

Out[10]= 0.152381 + 0.402377 I
```

I'll round things off by noting that I've also implemented the generalized exponential integral (the En-function) in mpmath as `expint(n,z)`. A sample:

```
>>> print expint(2, 3.5)
0.005801893920899125522331056
>>> print quad(lambda t: exp(-3.5*t)/t**2, [1,inf])
0.005801893920899125522331056
```

The En-function is based on the incomplete gamma function, which is based on the hypergeometric series 1F1. These functions are still slow and/or inaccurate for certain arguments (in particular, for large ones), so they will require improvements along the lines of those for 2F1. Stay tuned for progress.
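The reduction in question is the identity E_n(z) = z^(n−1) Γ(1−n, z), where Γ(s, z) is the upper incomplete gamma function, available in mpmath as `gammainc(s, z)`. A quick check of my own against the sample above:

```python
from mpmath import mp, expint, gammainc

mp.dps = 25
n, z = 2, 3.5
# E_n(z) = z**(n-1) * Gamma(1-n, z)  (upper incomplete gamma)
via_gamma = z**(n-1) * gammainc(1-n, z)
print(via_gamma)
print(expint(n, z))  # should agree
```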

In other news, mpmath 0.12 should be in both SymPy and Sage soon. With this announcement I'm just looking for an excuse to tag this post with both 'sympy' and 'sage' so it will show up on both Planet SymPy and Planet Sage :-) Posts purely about mpmath development should be relevant to both audiences though, I hope.

## 1 comment:

Excellent stuff! mpmath is shaping along nicely.
