Friday, September 24, 2010

Announcing mpmath 0.16

I'm happy to announce the release of mpmath 0.16, which contains the usual bugfixes as well as a slew of new features!

The main focus has been to improve coverage of special functions. Additions include inhomogeneous Bessel functions, Bessel function zeros, incomplete elliptic integrals, and parabolic cylinder functions. As of 0.16, mpmath implements essentially everything listed in the NIST Digital Library of Mathematical Functions chapters 1-20, as well as 21, 24, 27 and 33. (For chapters 25 and 26 -- combinatorial and number-theoretic functions -- see also my post about FLINT 2.)

Another major change is that mpmath 0.16 running in Sage will be much faster thanks to new extension code (currently awaiting review for inclusion in Sage). I've clocked speedups between 1.3x and 2x for various nontrivial pieces of code (such as the mpmath test suite and the torture test programs).

Thanks to William Stein, my work on mpmath during the summer was funded using resources from NSF grant DMS-0757627. This support is gratefully acknowledged.

Most of the new features are described in previous posts on this blog. For convenience, here is a short summary:

Assorted special functions update

  • The documentation now includes plots to illustrate several of the special functions.

  • Airy functions have been rewritten for improved speed and accuracy and to support evaluation of derivatives.

  • Functions airyaizero(), airybizero() for computation of Airy function zeros have been implemented.

  • Inhomogeneous Airy (Scorer) functions scorergi() and scorerhi() have been implemented.

  • Four inhomogeneous Bessel functions have been added (lommels1(), lommels2(), angerj(), webere()).

  • The Lambert W function has been rewritten to fix various bugs and numerical issues.


Incomplete elliptic integrals complete

  • The Legendre and Carlson incomplete elliptic integrals for real and complex arguments have been implemented (ellipf(), ellipe(), ellippi(), elliprf(), elliprc(), elliprj(), elliprd(), elliprg()); a quick example follows below.

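As a quick illustration (the example values here are arbitrary, not from the original announcement):

from mpmath import mp, ellipf, elliprf
mp.dps = 25
print(ellipf(1, 0.5))     # Legendre incomplete integral of the first kind F(phi, m)
print(elliprf(1, 2, 4))   # Carlson symmetric form R_F(x, y, z)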

Sage Days 23, and Bessel function zeros

  • Functions besseljzero() and besselyzero() have been implemented for computing the m-th zero of Jν(z), J'ν(z), Yν(z), or Y'ν(z) for any positive integer index m and real order ν ≥ 0.


Post Sage Days 24 report

  • The parabolic cylinder functions pcfd(), pcfu(), pcfv(), pcfw() have been implemented.


Euler-Maclaurin summation of hypergeometric series

  • Hypergeometric functions pFp-1(...; ...; z) now support accurate evaluation close to the singularity at z = 1.

  • A function sumap() has been added for summation of infinite series using the Abel-Plana formula.

  • Functions diffs_prod() and diffs_exp() have been added for generating high-order derivatives of products or exponentials of functions with known derivatives.


Again, mpmath in Sage is about to get faster

  • New Cython extension code has been written for Sage to speed up various operations in mpmath, including elementary functions and hypergeometric series.



There are various other changes as well, such as support for matrix slice indexing (contributed by Ioannis Tziakos -- thanks!). As usual, details are available in the changelog and the Changes page on the Google Code project site.

Wednesday, September 22, 2010

Again, mpmath in Sage is about to get faster

My summer project on special functions in mpmath and Sage, generously supported by William Stein with funds from NSF grant DMS-0757627, is nearing completion. I will soon release mpmath-0.16, which contains lots of new special functions and bugfixes. Sage users will also benefit from ~1500 lines of new Cython code (preliminary patch here) that speeds up various basic operations. Executing mpmath.runtests() in Sage on my laptop now takes 10.47 seconds (8.60 from a warm cache), compared to 14.21 (11.84) seconds with the new extensions disabled -- a global speedup of 30%.

For comparison, pure-Python mpmath with gmpy as the backend takes 21.46 (18.72) seconds to execute the unit tests and pure-Python mpmath with the pure-Python backend takes 52.33 (45.92) seconds.

Specifically, the new extension code implements exp for real and complex arguments, cos, sin and ln for real arguments, complex exponentiation in some cases, and summation of hypergeometric series, entirely in Cython.

Timings before (new extensions disabled):

sage: import mpmath
sage: x = mpmath.mpf(0.37)
sage: y = mpmath.mpf(0.49)
sage: %timeit mpmath.exp(x)
625 loops, best of 3: 14.5 µs per loop
sage: %timeit mpmath.ln(x)
625 loops, best of 3: 23.2 µs per loop
sage: %timeit mpmath.cos(x)
625 loops, best of 3: 17.2 µs per loop
sage: %timeit x ^ y
625 loops, best of 3: 39.9 µs per loop
sage: %timeit mpmath.hyp1f1(2r,3r,4r)
625 loops, best of 3: 90.3 µs per loop
sage: %timeit mpmath.hyp1f1(x,y,x)
625 loops, best of 3: 83.6 µs per loop
sage: %timeit mpmath.hyp1f1(x,y,mpmath.mpc(x,y))
625 loops, best of 3: 136 µs per loop


Timings after (new extensions enabled):

sage: import mpmath
sage: x = mpmath.mpf(0.37)
sage: y = mpmath.mpf(0.49)
sage: %timeit mpmath.exp(x)
625 loops, best of 3: 2.72 µs per loop
sage: %timeit mpmath.ln(x)
625 loops, best of 3: 7.25 µs per loop
sage: %timeit mpmath.cos(x)
625 loops, best of 3: 4.13 µs per loop
sage: %timeit x ^ y
625 loops, best of 3: 10.5 µs per loop
sage: %timeit mpmath.hyp1f1(2r,3r,4r)
625 loops, best of 3: 47.1 µs per loop
sage: %timeit mpmath.hyp1f1(x,y,x)
625 loops, best of 3: 59.4 µs per loop
sage: %timeit mpmath.hyp1f1(x,y,mpmath.mpc(x,y))
625 loops, best of 3: 83.1 µs per loop


The new elementary functions use a combination of custom algorithms and straightforward MPFR wrappers. Why not just wrap MPFR for everything? There are two primary reasons:

Firstly, because MPFR numbers have a limited range, custom code still needs to be used in the overflowing cases, and this is almost as much work as an implementation from scratch. (There are also some other minor incompatibilities, like the lack of round-away-from-zero in MPFR, that result in a lot of extra work.)

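For example, mpmath exponents are arbitrary-precision integers, so the following works directly, while the result's binary exponent (about 1.44e+20) lies far outside MPFR's default exponent range (a small illustration of mine, not from the original post):

from mpmath import mp, mpf, exp, mag
mp.dps = 15
y = exp(mpf(10)**20)   # would overflow with MPFR's default exponent limits
print(mag(y))          # binary order of magnitude, about 1.44e+20
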
Secondly, MPFR is not always fast (or as fast as it could be), so it pays off to write custom code. In fact, some of the ordinary Python implementations of functions in mpmath are faster than their MPFR counterparts in various cases, although that is rather exceptional (atan is an example). But generally, at low to mid precisions, it is possible to be perhaps 2-4x faster than MPFR with carefully optimized C code (see fastfunlib). This is a longer-term goal.

Already now, with the new extension code, the mpmath exponential function becomes faster than the Sage RealNumber version (based on MPFR) at low precision:

sage: %timeit mpmath.exp(x)
625 loops, best of 3: 2.75 µs per loop
sage: w = RealField(53)(x)
sage: %timeit w.exp()
625 loops, best of 3: 5.57 µs per loop


As the timings above indicate, hypergeometric series have gotten up to 2x faster. The speedup of the actual summation is much larger, but much of that gain is lost in various Python overheads (more work can be done on this). There should be a noticeable speedup for some hypergeometric function computations, while others will not benefit as much, for the moment.

Another benchmark is the extratest_zeta.py script in mpmath, which exercises the mpmath implementation of the Riemann-Siegel formula for evaluation of ζ(s) for complex s with large imaginary part. Such computations largely depend on elementary function performance (cos, sin, exp, log).

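The flavor of computation involved, in miniature (these particular calls are my illustration, not the test script itself):

from mpmath import mp, mpc, zeta, zetazero
mp.dps = 20
print(zeta(mpc(0.5, 1000000)))   # critical-line evaluation via Riemann-Siegel
print(zetazero(1000))            # the 1000th nontrivial zero (imag part ~1419.4)
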
Here are the new timings for mpmath in Sage:

fredrik@scv:~/sage$ ./sage /home/fredrik/mp/mpmath/tests/extratest_zeta.py
399999999 156762524.675 ok = True (time = 1.144)
241389216 97490234.2277 ok = True (time = 9.271)
526196239 202950727.691 ok = True (time = 1.671)
542964976 209039046.579 ok = True (time = 1.189)
1048449112 388858885.231 ok = True (time = 1.774)
1048449113 388858885.384 ok = True (time = 1.604)
1048449114 388858886.002 ok = True (time = 2.096)
1048449115 388858886.002 ok = True (time = 2.587)
1048449116 388858886.691 ok = True (time = 1.546)

This is mpmath in Sage with the new extension code disabled:

fredrik@scv:~/sage$ ./sage /home/fredrik/mp/mpmath/tests/extratest_zeta.py
399999999 156762524.675 ok = True (time = 2.352)
241389216 97490234.2277 ok = True (time = 14.088)
526196239 202950727.691 ok = True (time = 3.036)
542964976 209039046.579 ok = True (time = 2.104)
1048449112 388858885.231 ok = True (time = 3.707)
1048449113 388858885.384 ok = True (time = 3.283)
1048449114 388858886.002 ok = True (time = 4.444)
1048449115 388858886.002 ok = True (time = 5.592)
1048449116 388858886.691 ok = True (time = 3.101)

This is mpmath in ordinary Python mode, using gmpy:

fredrik@scv:~/sage$ python /home/fredrik/mp/mpmath/tests/extratest_zeta.py
399999999 156762524.675 ok = True (time = 2.741)
241389216 97490234.2277 ok = True (time = 13.842)
526196239 202950727.691 ok = True (time = 3.124)
542964976 209039046.579 ok = True (time = 2.143)
1048449112 388858885.231 ok = True (time = 3.257)
1048449113 388858885.384 ok = True (time = 2.912)
1048449114 388858886.002 ok = True (time = 3.953)
1048449115 388858886.002 ok = True (time = 4.964)
1048449116 388858886.691 ok = True (time = 2.762)

With the new extension code, it appears that zeta computations are up to about twice as fast. This speedup could be made much larger as there still is a significant amount of Python overhead left to remove -- also a project for the future.

Sunday, September 5, 2010

Fast combinatorial and number-theoretic functions with FLINT 2

Time for a development update! Recently, I've done only a limited amount of work on mpmath (I have some almost-finished Cython code for sage.libs.mpmath and new code for numerical integration in mpmath, both to be committed fairly soon -- within a couple of weeks, hopefully).

The last few weeks, I've mostly been contributing to FLINT 2. For those unfamiliar with it, FLINT is a fast C library for computational number theory developed by Bill Hart and others (the other active developers right now are Sebastian Pancratz and Andy Novocin). In particular, FLINT implements ridiculously fast multiprecision integer vectors and polynomials. It also provides very fast primality testing and factorization for word-size integers (32 or 64 bits), among other things. FLINT 2 is an in-progress rewrite of FLINT 1.x, a current standard component in Sage.

What does this have to do with numerical evaluation of special functions (the usual theme of this blog)? In short, my goal is to add code to FLINT 2 for exact special function computations -- combinatorial and number-theoretic functions, special polynomials and the like. Such functions benefit tremendously from the fast integer and polynomial arithmetic available in FLINT 2.

All my code can be found in my public GitHub repository (the most recent commits as of this writing are in the 'factor' branch).

Functions I've implemented so far include:

  • Möbius μ and Euler φ (totient) functions for word-size and arbitrary-size integers

  • Divisor sum function σₖ for arbitrary-size integers

  • Ramanujan τ function (Δ-function q-expansion)

  • Harmonic numbers 1 + 1/2 + 1/3 + ... + 1/n

  • Primorials 2 · 3 · 5 · ... · pₙ

  • Stirling numbers (1st and 2nd kind)



The versions in FLINT 2 of these functions should now be faster than all other implementations I've tried (GAP, Pari, Mathematica, the Sage library) for all ranges of arguments, except for those requiring factorization of large integers.

Some of these functions depend fundamentally on the ability to factorize integers efficiently. So far I've only implemented trial division for large integers in FLINT 2, with some clever code to extract large powers of small factors quickly. Sufficiently small cofactors are handled by calling Bill Hart's single-word factoring routines. The resulting code is very fast for "artificial" numbers like factorials, and will eventually be complemented with prime and perfect power detection code, plus fast implementations of Brent's algorithm and other methods. Later on the quadratic sieve from FLINT 1 will probably be ported to FLINT 2, so that FLINT 2 will be able to factor any reasonable number reasonably quickly.

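In Python, the factoring strategy looks roughly like this (a sketch only; the real FLINT 2 code is C, uses precomputed prime tables, and strips large powers of small factors much more cleverly):

def trial_factor(n, primes):
    # primes: small primes in increasing order
    factors = {}
    for p in primes:
        if p * p > n:
            break
        e = 0
        while n % p == 0:   # naive power extraction; FLINT 2 batches this
            n //= p
            e += 1
        if e:
            factors[p] = e
    if n > 1:
        factors[n] = 1      # leftover cofactor: prime, or passed to other code
    return factors

print(trial_factor(2**10 * 3**5 * 101, [2, 3, 5, 7, 11]))   # {2: 10, 3: 5, 101: 1}
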
Below, I've posted some benchmark results. A word of caution: all Mathematica timings were done on a different system, which is faster than my own laptop (typically by 30% or so). So in reality, Mathematica performs somewhat worse relative to FLINT 2 than the numbers below indicate. Everything else is timed on my laptop. I have not included test code for the FLINT 2 functions (but it's just straightforward C code -- a function call or two between timeit_start and timeit_stop using FLINT 2's profiler module).

Möbius function (the following is basically a raw exercise of the small-integer factoring code):

Pari:
sage: %time pari('sum(n=1,10^6,moebius(n))');
CPU times: user 1.04 s, sys: 0.00 s, total: 1.04 s
Wall time: 1.04 s

Mathematica:
In[1]:= Timing[Sum[MoebiusMu[n], {n,1,10^6}];]
Out[1]= {0.71, Null}

flint2:
650 ms


Divisor sum:

Sage (uses Cython code):
sage: %time sigma(factorial(1000),1000);
CPU times: user 0.47 s, sys: 0.00 s, total: 0.47 s
Wall time: 0.46 s

Mathematica:
In[1]:= Timing[DivisorSigma[1000,1000!];]
Out[1]= {3.01, Null}

flint2:
350 ms


Ramanujan τ function:

Sage (uses FLINT 1):
sage: %time delta_qexp(100000);
CPU times: user 0.42 s, sys: 0.01 s, total: 0.43 s
Wall time: 0.42 s
sage: %time delta_qexp(1000000);
CPU times: user 6.02 s, sys: 0.37 s, total: 6.39 s
Wall time: 6.40 s

flint2:
100000: 230 ms
1000000: 4500 ms


An isolated value (Mathematica seems to be the only other software that knows how to compute this):

Mathematica:
In[1]:= Timing[RamanujanTau[10000!];]
Out[1]= {8.74, Null}

flint2:
280 ms


Harmonic numbers (again, only Mathematica seems to implement these). See also my old blog post How (not) to compute harmonic numbers. I've included the fastest version from there, harmonic5:


Mathematica:
In[1]:= Timing[HarmonicNumber[100000];]
Out[1]= {0.22, Null}
In[2]:= Timing[HarmonicNumber[1000000];]
Out[2]= {6.25, Null}
In[3]:= Timing[HarmonicNumber[10000000];]
Out[3]= {129.13, Null}

harmonic5:
100000: 0.471 s
1000000: 8.259 s
10000000: 143.639 s

flint2:
100000: 100 ms
1000000: 2560 ms
10000000: 49400 ms


The FLINT 2 function benefits from an improved algorithm that eliminates terms and reduces the size of the temporary numerators and denominators, as well as low-level optimization (the basecase summation directly uses the MPIR mpn interface).

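The splitting idea can be sketched in a few lines of Python (my sketch; the FLINT 2 version additionally eliminates terms and does the basecase summation directly on mpn arrays):

from fractions import Fraction

def hsum(a, b):
    # (p, q) with p/q = 1/a + 1/(a+1) + ... + 1/(b-1); splitting the range
    # keeps the temporary numerators and denominators balanced in size
    if b - a == 1:
        return 1, a
    m = (a + b) // 2
    p1, q1 = hsum(a, m)
    p2, q2 = hsum(m, b)
    return p1 * q2 + p2 * q1, q1 * q2

def harmonic(n):
    p, q = hsum(1, n + 1)
    return Fraction(p, q)   # reduces to lowest terms

print(harmonic(10))   # 7381/2520
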
Isolated Stirling numbers of the first kind:


Mathematica:
In[1]:= Timing[StirlingS1[1000,500];]
Out[1]= {0.24, Null}
In[2]:= Timing[StirlingS1[2000,1000];]
Out[2]= {1.79, Null}
In[3]:= Timing[StirlingS1[3000,1500];]
Out[3]= {5.13, Null}

flint2:
1000,500: 100 ms
2000,1000: 740 ms
3000,1500: 1520 ms


Isolated Stirling numbers of the second kind:

Mathematica:
In[1]:= Timing[StirlingS2[1000,500];]
Out[1]= {0.21, Null}
In[2]:= Timing[StirlingS2[2000,1000];]
Out[2]= {1.54, Null}
In[3]:= Timing[StirlingS2[3000,1500];]
Out[3]= {4.55, Null}
In[4]:= Timing[StirlingS2[5000,2500];]
Out[4]= {29.25, Null}

flint2:
1000,500: 2 ms
2000,1000: 17 ms
3000,1500: 50 ms
5000,2500: 240 ms


In addition, fast functions are provided for computing a whole row or matrix of Stirling numbers. For example, computing the triangular matrix of ~1.1 million Stirling numbers of the first kind up to S(1500,1500) takes only 1.3 seconds. In Mathematica (again, on the faster system):

In[1]:= Timing[Table[StirlingS1[n,k], {n,0,1500}, {k,0,n}];]
Out[1]= {2.13, Null}

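For reference, a row can be prototyped with the standard recurrence s(n,k) = s(n-1,k-1) - (n-1)·s(n-1,k) for the signed Stirling numbers of the first kind (naive Python; the FLINT 2 code is vastly faster but computes the same table):

def stirling1_row(n):
    # row n of the signed Stirling numbers of the first kind
    row = [1]                      # s(0, 0) = 1
    for m in range(1, n + 1):
        new = [0] * (m + 1)
        for k in range(1, m + 1):
            new[k] = row[k - 1] - (m - 1) * (row[k] if k < m else 0)
        row = new
    return row

print(stirling1_row(4))   # [0, -6, 11, -6, 1]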

The benchmarks above mostly demonstrate performance for large inputs. Another nice aspect of the FLINT 2 functions is that there typically is very little overhead for small inputs. The high performance is due to a combination of algorithms, low-level optimization, and (most importantly) the fast underlying arithmetic in FLINT 2. I will perhaps write some more about the algorithms (for e.g. Stirling numbers) in a later post.

Wednesday, July 28, 2010

Euler-Maclaurin summation of hypergeometric series

I recently fixed another corner case (it's always the corner cases that are hard!) in the hypergeometric function evaluation code in mpmath.

The difficulty in question concerns the functions pFp-1(...; ...; z), of which the Gauss hypergeometric function 2F1 is an important special case. These functions can be thought of as generalizations of the geometric series

[; \frac{1}{1-z} = \sum_{k=0}^{\infty} z^{k} = \,_{2}F_{1}(1,1;1;z) ;]

and the natural logarithm

[; -\log(1-z) = \sum_{k=1}^{\infty} \frac{z^{k}}{k} = z \,_{2}F_{1}(1,1;2;z) ;]

and share the property that the hypergeometric (Maclaurin) series has radius of convergence 1, with a singularity (either algebraic or logarithmic, as above) at z = 1.

Numerical evaluation is fairly easy for z in most of the complex plane. The series for pFp-1(...; ...; z) converges like the geometric series, so direct summation works well for, say, |z| ≤ 0.9. Likewise, there is an analytic continuation formula which transforms z to 1/z, allowing evaluation for (say) |z| ≥ 1.1.

The remaining shell around the unit circle 0.9 < |z| < 1.1 is the difficult region. In fact, 2F1 can still (essentially) be computed using exact transformations here, but the higher cases remain difficult. As mentioned in an earlier post, mpmath handles this by calling nsum(), which applies convergence acceleration to the slowly converging series. In particular, nsum implements the (generalized) Shanks transformation, which is extremely efficient for this particular type of series – in fact, it works outside the radius of convergence (the Shanks transformation effectively computes a Padé approximant), so it can be used anywhere in the difficult shell.

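A quick illustration using nsum directly (my example, not mpmath's internal code path): the logarithmic series 2F1(1,1;2;z) near z = -1, where direct summation would need tens of thousands of terms:

from mpmath import mp, mpf, nsum, inf, log

mp.dps = 25
z = mpf('-0.999')   # near the unit circle with arg(z) = pi: ideal for Shanks
s = nsum(lambda k: z**k / (k + 1), [0, inf], method='s')   # 's' = Shanks
print(s)
print(-log(1 - z) / z)   # closed form of this series, for comparison
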
Still, the Shanks transformation is most efficient when arg(z) = ±π and unfortunately degenerates as arg(z) → 0, i.e. close to z = 1. nsum also implements Richardson extrapolation, which sometimes works at z = 1, but only for special parameter combinations in the hypergeometric series (although those special combinations happen to be quite important – they include the case where the hypergeometric series reduces to a polygamma function or polylogarithm).

Thus, we have the following evaluation strategy for pFp-1:

Blue: direct summation (with respect to z or 1/z)
Yellow: Shanks transformation
Red: ???



A commenter suggested trying some other convergence acceleration techniques (like the Levin transformation). But it seems that they will still fail in some cases.

Thus I've finally implemented the Euler-Maclaurin summation formula for the remaining case. The E-M formula differs from other convergence acceleration techniques in that it is based on analyticity of the summand and does not require a particular rate of convergence. It is implemented in mpmath as sumem(), but requires a little work to apply efficiently.

To apply the E-M formula to a hypergeometric series Σ T(k), the term T(k) must first be extended to an analytic function of k, which is done in the straightforward way by considering the rising factorials as quotients of gamma functions: a(a+1)...(a+k−1) = Γ(a+k) / Γ(a).

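For instance, the terms of a 2F1 series extend to noninteger k like this (illustrative code, with arbitrary parameter values):

from mpmath import mp, mpf, gamma

mp.dps = 25
a, b, c, z = mpf(1)/3, mpf(1), mpf(3)/2, mpf('0.99')

def T(k):
    # rising factorials as gamma quotients; gamma(1+k) plays the role of k!
    return (gamma(a+k)/gamma(a)) * (gamma(b+k)/gamma(b)) / \
           (gamma(c+k)/gamma(c)) / gamma(1+k) * z**k

print(T(3))      # agrees with the ordinary discrete term
print(T(3.5))    # now also defined between the integers
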
We are left with the difficulty of having to integrate T(k) and also to compute n-th order derivatives of this function for the tail expansion in the E-M formula (neither of which can be done in closed form). Fortunately, numerical integration with quad() works very well, but the derivatives remain a problem.

By default, sumem() computes the derivatives using finite differences. Although this works, it gets extremely expensive at high precision for hypergeometric series due to the rapidly increasing cost of evaluating the gamma function as the precision increases. But T(k) is a product of functions that individually can be differentiated logarithmically in closed form (in terms of polygamma functions). I therefore implemented the equivalent of symbolic high-order differentiation for products and the exponential function using generators.

diffs_prod generates the high-order derivatives of a product, given generators for the derivatives of the individual factors:

>>> f = lambda x: exp(x)*cos(x)*sin(x)
>>> u = diffs(f, 1)
>>> v = diffs_prod([diffs(exp,1), diffs(cos,1), diffs(sin,1)])
>>> next(u); next(v)
1.23586333600241
1.23586333600241
>>> next(u); next(v)
0.104658952245596
0.104658952245596
>>> next(u); next(v)
-5.96999877552086
-5.96999877552086
>>> next(u); next(v)
-12.4632923122697
-12.4632923122697


diffs_exp generates the high-order derivatives of exp(f(x)), given a generator for the derivatives of f(x):

>>> def diffs_loggamma(x):
...     yield loggamma(x)
...     i = 0
...     while 1:
...         yield psi(i,x)
...         i += 1
...
>>> u = diffs_exp(diffs_loggamma(3))
>>> v = diffs(gamma, 3)
>>> next(u); next(v)
2.0
2.0
>>> next(u); next(v)
1.84556867019693
1.84556867019693
>>> next(u); next(v)
2.49292999190269
2.49292999190269
>>> next(u); next(v)
3.44996501352367
3.44996501352367


Actually only differentiation of the exponential function is necessary, which I didn't realize at first, but the product differentiation is still a nice feature to have.

Exponential differentiation amounts to constructing a polynomial of n+1 variables (the derivatives of f), which has P(n) (partition function) ≈ exp(√n) terms. Despite the rapid growth, it is indeed faster to use this approach to differentiate gamma function products at high precision, since the precision can be kept constant. In fact, this should even work in double precision (fp arithmetic) although I think it's broken right now due to some minor bug.

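When only numeric values at a point are needed, the underlying identity can also be run as a simple recurrence, obtained by differentiating g' = f'·g with the Leibniz rule (a plain floating-point sketch of mine; not necessarily the exact scheme used in mpmath):

from math import comb, exp

def exp_diffs(fd):
    # fd = [f(x), f'(x), f''(x), ...]; returns derivatives of g = exp(f(x))
    # via g^(n) = sum_{j<n} C(n-1, j) * g^(j) * f^(n-j)
    g = [exp(fd[0])]
    for n in range(1, len(fd)):
        g.append(sum(comb(n - 1, j) * g[j] * fd[n - j] for j in range(n)))
    return g

# derivatives of exp(sin(x)) at x = 0, from those of sin: [0, 1, 0, -1, 0]
print(exp_diffs([0.0, 1.0, 0.0, -1.0, 0.0]))   # [1.0, 1.0, 1.0, 0.0, -3.0]
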
As a result of all this, here are two examples that now work (the results have full accuracy):


>>> mp.dps = 25
>>> hyper(['1/3',1,'3/2',2], ['1/5','11/6','41/8'], 1)
2.219433352235586121250027

>>> hyper(['1/3',1,'3/2',2], ['1/5','11/6','5/4'], 1)
+inf
>>> eps1 = extradps(6)(lambda: 1 - mpf('1e-6'))()
>>> hyper(['1/3',1,'3/2',2], ['1/5','11/6','5/4'], eps1)
2923978034.412973409330956


Application: roots of polynomials



A neat application is to compute the roots of quintic polynomials. Such roots can be expressed in closed form using hypergeometric functions, as outlined in the Wikipedia article Bring radical. This is more interesting in theory than in practice, since polynomial roots can be calculated very efficiently using iterative methods.

In particular, the roots of x⁵ - x + t are given (in some order) by



where



Just verifying one case:


>>> mp.dps = 25; mp.pretty = True
>>> t = mpf(3)
>>> for r in polyroots([1,0,0,0,-1,t]):
... print r
...
-1.341293531690699250693537
(0.9790618691253385511579325 - 0.6252190807348055061205923j)
(0.9790618691253385511579325 + 0.6252190807348055061205923j)
(-0.3084151032799889258111639 + 1.249926913731033699905407j)
(-0.3084151032799889258111639 - 1.249926913731033699905407j)
>>> -t*hyper(['1/5','2/5','3/5','4/5'], ['1/2','3/4','5/4'], 3125*t**4/256)
(-0.9790618691253385511579325 + 0.6252190807348055061205923j)


Now, we can choose t such that the hypergeometric argument becomes unity:


>>> t = root(mpf(256)/3125,4)
>>> for r in polyroots([1,0,0,0,-1,t]):
... print r
...
-1.103842268886161371052433
(0.6687403049764220240032331 + 3.167963942396665205386494e-14j)
(0.6687403049764220240032331 - 3.172324181973539074536523e-14j)
(-0.1168191705333413384770166 + 1.034453571405662514459057j)
(-0.1168191705333413384770166 - 1.034453571405662514459057j)

>>> -t*hyper(['1/5','2/5','3/5','4/5'], ['1/2','3/4','5/4'], 1)
-0.6687403049764220240032331


The last line would previously fail.

It is interesting to note that the value at the singular point z = 1 corresponds to a double root. This also makes polyroots return inaccurate values (note the imaginary parts), a known deficiency that should be fixed...

Better methods



In fact, there may be better methods than the Euler-Maclaurin formula. One such approach is to use the Abel-Plana formula, which I've also recently implemented as sumap(). The Abel-Plana formula gives the exact value for an infinite series (subject to some special growth conditions on the analytically extended summand) as a sum of two integrals.

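For reference, for f analytic in the right half-plane and suitably decaying, the formula reads

[; \sum_{k=0}^{\infty} f(k) = \int_{0}^{\infty} f(x)\,dx + \frac{f(0)}{2} + i \int_{0}^{\infty} \frac{f(i x) - f(-i x)}{e^{2 \pi x} - 1}\,dx ;]
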
The Abel-Plana formula is particularly useful when the summand decreases like a power of k; for example when the sum is a pure zeta function:


>>> sumap(lambda k: 1/k**2.5, [1,inf])
1.34148725725091717975677
>>> zeta(2.5)
1.34148725725091717975677

>>> sumap(lambda k: 1/(k+1j)**(2.5+2.5j), [1,inf])
(-3.385361068546473342286084 - 0.7432082105196321803869551j)
>>> zeta(2.5+2.5j, 1+1j)
(-3.385361068546473342286084 - 0.7432082105196321803869551j)


It actually works very well for the generalized hypergeometric series at the z = 1 singularity (where the value, as mentioned earlier, is a generalized polygamma function, polylogarithm, or zeta function of integer argument), and near the singularity as well. It should even be slightly faster than the Euler-Maclaurin formula since no derivatives are required. Yet another possibility is to integrate a Mellin-Barnes type contour for the hypergeometric function directly.

But at present, I don't want to modify the existing hypergeometric code because it works and would only get more complicated. Rather, I want to improve nsum so it can handle all of this more easily without external hacks. The numerical integration code should also be improved first, because there are still certain parameter combinations where the hypergeometric function evaluation fails due to slow convergence in the numerical integration (this is due to an implementation issue and not an inherent limitation of the integration algorithm).

Tuesday, July 27, 2010

Post Sage Days 24 report

Now that I've overcome the immediate effects of SDWS (Sage Days Withdrawal Syndrome), I should write a brief report about Sage Days 24, which took place last week at the Research Institute for Symbolic Computation (RISC), located in a small town called Hagenberg just outside Linz in Austria. Many thanks to the organizers for inviting me!

Since I'm starting as a Ph.D. student at RISC in October, it was nice to get an early view of the site and meet some current students and staff. The location is quite beautiful, my only complaint being the extremely high temperature last week (they say the weather is more normal most of the year!).

SD24 was titled "Symbolic Computation in Differential Algebra and Special Functions", which pretty accurately describes the topics covered. Some points of interest:

William Stein talked about his vision for Sage (short and long term), including goals for special functions support.

Frédéric Chyzak gave a talk about the Dynamic Dictionary of Mathematical Functions. I really like the approach of starting from a minimal definition of a special function (a differential equation with initial conditions) and generating series expansions, Chebyshev approximations (etc.) algorithmically.

Nico Temme talked about numerical evaluation of special functions, which was familiar ground to me, but with much food for thought.

I met Manuel Kauers, my future supervisor, who gave a neat presentation of some recent research.

There were many other interesting talks and discussions as well, and I gave a talk about special function evaluation in mpmath (based on my SD23 talk). And of course, I met various other Sage developers and got some coding done as well.

I had hoped to get generalized hypergeometric functions into Sage during SD24. There is now a patch, but it still needs some more work.

I also implemented parabolic cylinder functions (PCFs) in mpmath, the last remaining chapter of hypergeometric functions listed in the DLMF (mpmath now covers chapters 1-20, 22, 24-25, partially 26-27, and 33). Code commit here.

Technically, parabolic cylinder functions are just scaled (linear combinations of) confluent hypergeometric functions or Whittaker functions, but they are a bit tricky to compute due to cancellation and branch cuts.

An example from Nico Temme's talk was to evaluate Dν(z) with ν = -300.14 and z = 300.15 (Maple 13 takes 5 seconds to give an answer that has the wrong sign and is off by 692 orders of magnitude). The new pcfd function in mpmath instantaneously (in about a millisecond) gives the correct result:

>>> pcfd(-300.14, 300.15)
6.83322814925713e-10526
>>> mp.dps = 30
>>> pcfd('-300.14', '300.15')
6.83322814923312844480669009487e-10526


All the curve plots from DLMF 12.3 can be reproduced as follows:

f1 = lambda x: pcfu(0.5,x)
f2 = lambda x: pcfu(2,x)
f3 = lambda x: pcfu(3.5,x)
f4 = lambda x: pcfu(5,x)
f5 = lambda x: pcfu(8,x)
plot([f1,f2,f3,f4,f5], [-3,3], [0,3])



f1 = lambda x: pcfv(0.5,x)
f2 = lambda x: pcfv(2,x)
f3 = lambda x: pcfv(3.5,x)
f4 = lambda x: pcfv(5,x)
f5 = lambda x: pcfv(8,x)
plot([f1,f2,f3,f4,f5], [-3,3], [-3,3])



f1 = lambda x: pcfu(-0.5,x)
f2 = lambda x: pcfu(-2,x)
f3 = lambda x: pcfu(-3.5,x)
f4 = lambda x: pcfu(-5,x)
plot([f1,f2,f3,f4], [-8,8], [-6,6])



f1 = lambda x: pcfv(-0.5,x)
f2 = lambda x: pcfv(-2,x)
f3 = lambda x: pcfv(-3.5,x)
f4 = lambda x: pcfv(-5,x)
plot([f1,f2,f3,f4], [-8,8], [-6,6])



f1 = lambda x: pcfu(-8,x)
f2 = lambda x: pcfv(-8,x)*gamma(0.5-(-8))
f3 = lambda x: hypot(f1(x), f2(x))
plot([f1,f2,f3], [-6,6], [-120,120])



f1 = lambda x: diff(pcfu,(-8,x),(0,1))
f2 = lambda x: diff(pcfv,(-8,x),(0,1))*gamma(0.5-(-8))
f3 = lambda x: hypot(f1(x), f2(x))
plot([f1,f2,f3], [-6,6], [-220,220])



A little more information can be found in the mpmath documentation section for PCFs.

Thanks again to the NSF grant enabling this work.

Saturday, July 10, 2010

Sage Days 23, and Bessel function zeros

Yesterday I came back home from Sage Days 23 in Leiden. I don't really have time to write a detailed travel report right now, but overall it was very, very nice. Those interested can check out William Stein's photos (1, 2, 3, 4) of the event (I can be seen in a few of the photos, wearing a blue t-shirt during days 1-3 and a yellow t-shirt during day 4).

I didn't get quite as much coding done as I probably had time for (in part because I had a talk to prepare, and in part because I spent a bunch of time playing with MPIR and FLINT 2, gaining some insights but otherwise not accomplishing much).

The code I did write includes a complete Cython implementation of real exp (giving up to a 7x speedup) and complex sqrt for mpmath, as well as code for fast evaluation of the gamma function at rational arguments. I will submit the Cython code to Sage soon; possibly after rewriting the remaining power code in Cython as well.

Anyway, prompted by a recent question on sage-support (and because I finally got inspiration to do it while at the airport), I've now implemented Bessel function zeros in mpmath.

Bessel function zeros are very important in applied mathematics because they are needed for Fourier–Bessel series, used to solve partial differential equations in cylindrical coordinates. I'd like to write down a concrete computational example, but it'd be slightly involved and I don't really have time right now. Perhaps in a later blog post.

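In the meantime, here is at least a compressed sketch (my illustration, with arbitrary choices: f(r) = r on [0, 1] expanded in a J1 basis, using the new besseljzero function described below; the partial sums converge slowly):

from mpmath import mp, besselj, besseljzero, quad

mp.dps = 15
v = 1   # basis functions J_1(j_k r), where j_k is the k-th zero of J_1

def coeff(k):
    j = besseljzero(v, k)
    # c_k = 2 / J_{v+1}(j_k)^2 * integral_0^1 r*f(r)*J_v(j_k*r) dr, with f(r) = r
    return 2 * quad(lambda r: r * r * besselj(v, j * r), [0, 1]) / besselj(v + 1, j)**2

r0 = 0.5
partial = sum(coeff(k) * besselj(v, besseljzero(v, k) * r0) for k in range(1, 30))
print(partial)   # slowly approaches f(r0) = 0.5
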
The code permits computing the m-th nonnegative simple zero of Jν(z), J'ν(z), Yν(z), or Y'ν(z) for any positive integer index m and real order ν ≥ 0. One can use it for large m, large ν, or both:

>>> besseljzero(1,100)
314.9434728377671624580656
>>> besseljzero(100,1)
108.836165898409774363098
>>> besseljzero(100,100)
459.5295465754674695983379


A word of caution: I've chosen the Abramowitz & Stegun convention of including z = 0 as the first zero of J'0, which is incompatible with SciPy and the Mathematica BesselZeros package. The A&S convention makes much more sense to me because it makes the zeros a continuous function of the order. This case (ν=0, derivative=1) should be the only incompatibility.

It's a bit tricky to compute the zeros of Bessel functions, because (as far as I know) no globally valid approximations are known; not any reasonably simple ones anyway.

Asymptotic expansions are known with respect to either the order or the index. In my implementation, I've chosen to simply use the asymptotic expansion with respect to the index n. If n is too small, the code simply chooses a larger index m for which the asymptotic expansion does work, and then isolates all the zeros between 0 and the m-th by counting sign changes. By caching this result, all the consecutive zeros for the same function order can then be computed quickly as well.

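To make this concrete, here is a toy version of the strategy for Jν (my sketch, using only the leading terms of McMahon's asymptotic expansion followed by root-polishing; the real implementation also handles Yν, derivatives, and the small-index fallback described above):

from mpmath import mp, pi, besselj, findroot

mp.dps = 25

def mcmahon(v, m):
    # leading terms of McMahon's expansion for the m-th positive zero of J_v
    b = (m + 0.5 * v - 0.25) * pi
    mu = 4 * v**2
    return b - (mu - 1) / (8 * b)

v, m = 2, 10
print(findroot(lambda x: besselj(v, x), mcmahon(v, m)))   # compare besseljzero(2, 10)
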
This approach is simple and quite robust (up to any silly implementation errors on my part), although the code could certainly be optimized more for large orders.

The following graphic shows the derivative of Y₅₀(z) with its zeros marked:



A neat property of Bessel functions is that they can be expanded as infinite products over their zeros:

[; J_{\nu}(z) = \frac{(z/2)^{\nu}}{\Gamma(\nu+1)} \prod_{k=1}^{\infty} \left(1 - \frac{z^{2}}{j_{\nu,k}^{2}}\right), \qquad J_{\nu}'(z) = \frac{(z/2)^{\nu-1}}{2\,\Gamma(\nu)} \prod_{k=1}^{\infty} \left(1 - \frac{z^{2}}{(j_{\nu,k}')^{2}}\right) ;]

Here jν,k and j'ν,k denote the k-th positive zeros of Jν and J'ν, as computed by besseljzero. These products converge slowly, but can be evaluated with the aid of convergence acceleration:

>>> v, z = mpf(3), mpf(0.75)
>>> (z/2)**v/gamma(v+1)*\
... nprod(lambda k: 1-(z/besseljzero(v,k))**2, [1,inf])
0.008484383423274108843927552
>>> besselj(v,z)
0.008484383423274108843927552

>>> (z/2)**(v-1)/2/gamma(v)*\
... nprod(lambda k: 1-(z/besseljzero(v,k,1))**2, [1,inf])
0.0331364636065541211306414
>>> besselj(v,z,1)
0.0331364636065541211306414


I will do more work on Bessel functions in mpmath and Sage, so watch for future updates.

Friday, July 2, 2010

Symbolic infinite series

I've written a little module for symbolically evaluating hypergeometric infinite series. Technically, a series is hypergeometric if the ratio between successive terms is a rational function of the index k; more intuitively, most "nice-looking" series (such as power series of most special functions) involving powers, factorials, gamma functions, binomial coefficients (etc.) are hypergeometric.

The code currently uses SymPy, but it should be possible to convert it to Sage's symbolics (it mostly requires basic pattern matching, and evaluation of standard special functions). Perhaps "rewrite" is a better word than "convert" because the current code is a messy hack. It's buggy, and only handles special cases in places where a general approach could be used (such as for rational functions).

The basic idea behind the code is to perform the evaluation in two steps. Firstly, the given series is converted into canonical form. I've used the standard generalized hypergeometric series pFq. This is not always an optimal representation, but usually does the job.

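The recognition step can be illustrated in a few lines (my sketch; the real code must handle shifts and messier terms): the ratio T(k+1)/T(k) of a hypergeometric term is rational in k, and its roots and poles give the pFq parameters.

from sympy import Symbol, factorial, combsimp

k = Symbol('k', integer=True, nonnegative=True)
T = factorial(k)**2 / factorial(2*k)      # term of sum 1/binomial(2k, k)
ratio = combsimp(T.subs(k, k + 1) / T)
print(ratio)   # rational in k, so the series is hypergeometric
# matching against z*(k+a1)*(k+a2)/((k+b1)*(k+1)) gives z = 1/4, a1 = a2 = 1,
# b1 = 1/2, i.e. the sum is 2F1(1, 1; 1/2; 1/4)
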
In the second step, parameter transformations and table lookups are used to reduce the pFq series into elementary (or simpler) functions when possible. If this step fails, the pFq function is returned, giving a closed-form solution that can be manipulated or evaluated numerically.

The second step is the most difficult, because the transformation algorithms that need to be used for hypergeometric functions are very complicated in general (and there are all kinds of special cases). So far I've only implemented some simple transformations. For the final lookup stage, it would be straightforward to also implement expansion into Bessel functions, incomplete gamma functions, etc.

Below is a sample of some working test cases, with numerical evaluation of both the evaluated result and the original series as verification. Some of the sums don't quite simplify fully, i.e. there is an unevaluated pFq function left. The formulas were mostly generated with SymPy's latex function, so they are not always as pretty as they could be.

[;   \sum_{k=0}^{\infty} k!^{-1} = e ;]

>>> hypsum(1 / fac(k), k)
E
>>> _.evalf()
2.71828182845905
>>> mpmath.nsum(lambda k: 1 / mpmath.fac(k), [0,mpmath.inf])
2.71828182845905

[; \sum_{k=0}^{\infty}   \frac{k^{6}}{k!} = 203 e ;]

>>> hypsum(k**6 / fac(k), k)
203*E
>>> _.evalf()
551.811211177186
>>> mpmath.nsum(lambda k: k**6 / mpmath.fac(k), [0,mpmath.inf])
551.811211177186

[; \sum_{k=0}^{\infty} 4   \frac{d k z^{2 + k}}{k!} = 4 d z^{3} e^{z} ;]

>>> hypsum(4 * d * k * z**(k+2) / fac(k), k)
4*d*z**3*exp(z)
>>> _.evalf(subs={d:'1/2',z:'1/4'})
0.0401257942714919
>>> mpmath.nsum(lambda k: 4 * 0.5 * k * 0.25**(k+2) / mpmath.fac(k), [0,mpmath.inf])
0.0401257942714919

[; \sum_{k=0}^{\infty}   \left({\left(2\right)}_{k}\right)^{-1} = -1 + e ;]

>>> hypsum(1 / rf(2,k), k)
-1 + E
>>> _.evalf()
1.71828182845905
>>> mpmath.nsum(lambda k: 1 / mpmath.rf(2,k), [0,mpmath.inf])
1.71828182845905

>>> simplify(hypsum(k**3 * z**(k) / fac(k), k))
z*exp(z) + z**3*exp(z) + 3*z**2*exp(z)
>>> _.evalf(subs={d:'1/2',z:'1/4'})
0.581824016936633
>>> mpmath.nsum(lambda k: k**3 * 0.25**(k) / mpmath.fac(k), [0,mpmath.inf])
0.581824016936633

[; \sum_{k=0}^{\infty} {{2   k}\choose{k}}^{-1} = \frac{4}{3} + \frac{2}{27} \pi \sqrt{3} ;]

>>> simplify(hypsum(1 / binomial(2*k,k), k))
4/3 + 2*pi*3**(1/2)/27
>>> _.evalf()
1.73639985871872
>>> mpmath.nsum(lambda k: 1 / mpmath.binomial(2*k,k), [0,mpmath.inf])
1.73639985871872

[; \sum_{k=0}^{\infty} {{3   k}\choose{k}}^{-1} = \frac{2}{3} \frac{\pi \sqrt{3}   \,_{3}F_{2}\left(\frac{1}{2},1,1; \frac{1}{3},\frac{2}{3};   \frac{4}{27}\right)}{\operatorname{\Gamma}\left(\frac{1}{3}\right)   \operatorname{\Gamma}\left(\frac{2}{3}\right)} ;]

>>> hypsum(1 / binomial(3*k,k), k)
2*pi*3**(1/2)*3F2([1/2, 1, 1], [1/3, 2/3], 4/27)/(3*gamma(1/3)*gamma(2/3))
>>> _.evalf()
1.41432204432182
>>> mpmath.nsum(lambda k: 1 / mpmath.binomial(3*k,k), [0,mpmath.inf])
1.41432204432182

[; \sum_{k=0}^{\infty}   \frac{k^{4}}{{{2 k}\choose{k}}} = \frac{32}{3} + \frac{238}{243} \pi   \sqrt{3} ;]

>>> hypsum(k**4 / binomial(2*k,k), k)
32/3 + 238*pi*3**(1/2)/243
>>> _.evalf()
15.9961018356512
>>> mpmath.nsum(lambda k: k**4 / mpmath.binomial(2*k,k), [0,mpmath.inf])
15.9961018356512

[; \sum_{k=0}^{\infty}   \frac{\left(1 + k\right)! \left(2 + k\right)!}{\left(3 + k\right)!   \left(4 + k\right)!} = \frac{1}{72} \,_{3}F_{2}\left(1,2,3; 4,5;   1\right) ;]

>>> hypsum(fac(k+1)*fac(k+2)/fac(k+3)/fac(k+4), k)
3F2([1, 2, 3], [4, 5], 1)/72
>>> _.evalf()
0.0217325998184402
>>> mpmath.nsum(lambda k: mpmath.fac(k+1)*mpmath.fac(k+2)/mpmath.fac(k+3)/mpmath.fac(k+4), [0,mpmath.inf])
0.0217325998184402

[; \sum_{k=0}^{\infty}   \frac{1}{\left(3 + k\right)^{2} \left(8 + 6 k + k^{2}\right)} =   \frac{1}{72} \,_{3}F_{2}\left(1,2,3; 4,5; 1\right) ;]

>>> hypsum(1/((3+k)**2*(8+6*k+k**2)), k)
3F2([1, 2, 3], [4, 5], 1)/72
>>> _.evalf()
0.0217325998184402
>>> mpmath.nsum(lambda k: 1/((3+k)**2*(8+6*k+k**2)), [0,mpmath.inf])
0.0217325998184402

[; \sum_{k=0}^{\infty}   \frac{\left(1 + 2 k\right)!}{\left(1 + k\right) \left(2 + 2 k\right)!} =   \frac{1}{12} \pi^{2} ;]

>>> hypsum(fac(2*k+1)/(fac(2*k+2)*(k+1)), k)
pi**2/12
>>> _.evalf()
0.822467033424113
>>> mpmath.nsum(lambda k: mpmath.fac(2*k+1)/(mpmath.fac(2*k+2)*(k+1)), [0,mpmath.inf])
0.822467033424113

[; \sum_{k=0}^{\infty}   \frac{\left(1 + 2 k\right)!}{\left(2 + 3 k\right)!} = \frac{2}{27}   \frac{\pi \sqrt{3} \,_{2}F_{2}\left(1,\frac{3}{2};   \frac{4}{3},\frac{5}{3};   \frac{4}{27}\right)}{\operatorname{\Gamma}\left(\frac{4}{3}\right)   \operatorname{\Gamma}\left(\frac{5}{3}\right)} ;]

>>> hypsum(fac(2*k+1)/(fac(3*k+2)), k)
2*pi*3**(1/2)*2F2([1, 3/2], [4/3, 5/3], 4/27)/(27*gamma(4/3)*gamma(5/3))
>>> _.evalf()
0.553106730441975
>>> mpmath.nsum(lambda k: mpmath.fac(2*k+1)/(mpmath.fac(3*k+2)), [0,mpmath.inf])
0.553106730441975

[; \sum_{k=0}^{\infty}   \frac{4^{k} \left(\frac{3}{2} + k\right)! \left(\frac{5}{2} +   k\right)!}{\left(6 + 2 k\right)!} = \frac{3}{256} \pi ;]

>>> hypsum(4**k*fac(k+R32)*fac(k+R52)/(fac(2*k+6)), k)
3*pi/256
>>> _.evalf()
0.0368155389092554
>>> mpmath.nsum(lambda k: 4**k*mpmath.fac(k+1.5)*mpmath.fac(k+2.5)/(mpmath.fac(2*k+6)), [0,mpmath.inf], method='e')
0.0368155389092358

[; \sum_{k=0}^{\infty}   \frac{\left(-1\right)^{k} z^{1 + 2 k}}{\left(1 + 2 k\right) k!} =   \frac{1}{2} \sqrt{\pi} \operatorname{erf}\left(z\right) ;]

>>> hypsum((-1)**k * z**(2*k+1) / fac(k) / (2*k+1), k)
pi**(1/2)*erf(z)/2
>>> _.evalf(subs={d:'1/2',z:'1/4'})
0.244887887180256
>>> mpmath.nsum(lambda k: (-1)**k * 0.25**(2*k+1) / mpmath.fac(k) / (2*k+1), [0,mpmath.inf])
0.244887887180256

[; \sum_{k=0}^{\infty}   \frac{z^{1 + 2 k}}{1 + 2 k} = \operatorname{atanh}\left(z\right) ;]

>>> hypsum(z**(2*k+1) / (2*k+1), k)
atanh(z)
>>> _.evalf(subs={d:'1/2',z:'1/4'})
0.255412811882995
>>> mpmath.nsum(lambda k: 0.25**(2*k+1) / (2*k+1), [0,mpmath.inf])
0.255412811882995

[; \sum_{k=0}^{\infty}   \frac{z^{k}}{1 + 2 k} =   \frac{\operatorname{atanh}\left(\sqrt{z}\right)}{\sqrt{z}} ;]

>>> hypsum(z**k / (2*k+1), k)
atanh(z**(1/2))/z**(1/2)
>>> _.evalf(subs={d:'1/2',z:'1/4'})
1.09861228866811
>>> mpmath.nsum(lambda k: 0.25**k / (2*k+1), [0,mpmath.inf])
1.09861228866811

[; \sum_{k=0}^{\infty}   \frac{\left(- z\right)^{k}}{1 + k} = \frac{\operatorname{log}\left(1 +   z\right)}{z} ;]

>>> hypsum((-z)**k / (k+1), k)
log(1 + z)/z
>>> _.evalf(subs={d:'1/2',z:'1/4'})
0.892574205256839
>>> mpmath.nsum(lambda k: (-0.25)**k / (k+1), [0,mpmath.inf])
0.892574205256839

>>> hypsum(fac(k-R12)/((1+2*k)*fac(k))*z**(2*k), k)
pi**(1/2)*asin(z)/z
>>> _.evalf(subs={d:'1/2',z:'1/4'})
1.79145636509746
>>> mpmath.nsum(lambda k: mpmath.fac(k-0.5)/((1+2*k)*mpmath.fac(k))*0.25**(2*k), [0,mpmath.inf])
1.79145636509746

[; \sum_{k=0}^{\infty}   \frac{z^{k}}{\left(2 + k\right)!} = - \frac{1 + z - e^{z}}{z^{2}} ;]

>>> hypsum(z**k / fac(k+2), k)
-(1 + z - exp(z))/z**2
>>> _.evalf(subs={d:'1/2',z:'1/4'})
0.544406667003864
>>> mpmath.nsum(lambda k: 0.25**k / mpmath.fac(k+2), [0,mpmath.inf])
0.544406667003864

[; \sum_{k=0}^{\infty}   \left(-5 + 3 z^{2}\right)^{3 - 4 k} = - \frac{\left(5 - 3   z^{2}\right)^{3}}{1 - \frac{1}{\left(5 - 3 z^{2}\right)^{4}}} ;]

>>> hypsum((3*z**2-5)**(-4*k+3), k)
-(5 - 3*z**2)**3/(1 - 1/(5 - 3*z**2)**4)
>>> _.evalf(subs={d:'1/2',z:'1/4'})
-111.666432272586
>>> mpmath.nsum(lambda k: (3*0.25**2-5)**(-4*k+3), [0,mpmath.inf])
-111.666432272586

[; \sum_{k=0}^{\infty}   \frac{z^{1 + 2 k}}{\left(1 + 2 k\right)!} =   \operatorname{sinh}\left(z\right) ;]

>>> hypsum(z**(2*k+1) / fac(2*k+1), k)
sinh(z)
>>> _.evalf(subs={d:'1/2',z:'1/4'})
0.252612316808168
>>> mpmath.nsum(lambda k: 0.25**(2*k+1) / mpmath.fac(2*k+1), [0,mpmath.inf])
0.252612316808168

[; \sum_{k=0}^{\infty}   \frac{\left(-1\right)^{k} z^{1 + 2 k}}{\left(1 + 2 k\right)!} =   \operatorname{sin}\left(z\right) ;]

>>> hypsum((-1)**k * z**(2*k+1) / fac(2*k+1), k)
sin(z)
>>> _.evalf(subs={d:'1/2',z:'1/4'})
0.247403959254523
>>> mpmath.nsum(lambda k: (-1)**k * 0.25**(2*k+1) / mpmath.fac(2*k+1), [0,mpmath.inf])
0.247403959254523

[; \sum_{k=0}^{\infty}   \frac{\left(-1\right)^{k} d^{k} \left(1 - z\right)^{1 + 2 k}}{\left(2   k\right)!} = \left(1 - z\right) \operatorname{cos}\left(\sqrt{d} \left(1   - z\right)\right) ;]

>>> hypsum((-1)**k * d**k * (1-z)**(2*k+1) / fac(2*k), k)
(1 - z)*cos(d**(1/2)*(1 - z))
>>> _.evalf(subs={d:'1/2',z:'1/4'})
0.646980115568008
>>> mpmath.nsum(lambda k: (-1)**k * 0.5**k * (1-0.25)**(2*k+1) / mpmath.fac(2*k), [0,mpmath.inf])
0.646980115568008

[; \sum_{k=0}^{\infty}   \frac{k z^{2 k}}{\left(1 + 2 k\right)!} = \frac{1}{2}   \operatorname{cosh}\left(z\right) -   \frac{\operatorname{sinh}\left(z\right)}{2 z} ;]

>>> hypsum(k * z**(2*k) / fac(2*k+1), k)
cosh(z)/2 - sinh(z)/(2*z)
>>> _.evalf(subs={d:'1/2',z:'1/4'})
0.0104819163234500
>>> mpmath.nsum(lambda k: k * 0.25**(2*k) / mpmath.fac(2*k+1), [0,mpmath.inf])
0.01048191632345

[; \sum_{k=0}^{\infty}   \frac{\operatorname{\Gamma}^{2}\left(- \frac{1}{2} + k\right)}{k!^{2}} =   16 ;]

>>> hypsum(gamma(k-R12)**2/(fac(k)**2), k)
16
>>> _.evalf()
16.0000000000000
>>> mpmath.nsum(lambda k: mpmath.gamma(k-0.5)**2/(mpmath.fac(k)**2), [0,mpmath.inf])
16.0

[; \sum_{k=0}^{\infty}   \left(1 + k\right)^{-2} = \frac{1}{6} \pi^{2} ;]

>>> hypsum(1/(k+1)**2, k)
pi**2/6
>>> _.evalf()
1.64493406684823
>>> mpmath.nsum(lambda k: 1/(k+1)**2, [0,mpmath.inf])
1.64493406684823

[; \sum_{k=0}^{\infty}   \frac{k}{\left(1 + k\right)^{3}} = - \operatorname{\zeta}\left(3\right) +   \frac{1}{6} \pi^{2} ;]

>>> hypsum(k/(k+1)**3, k)
-zeta(3) + pi**2/6
>>> _.evalf()
0.442877163688632
>>> mpmath.nsum(lambda k: k/(k+1)**3, [0,mpmath.inf])
0.442877163688632

[; \sum_{k=0}^{\infty}   \frac{z^{k} \left(3 + k\right)}{\left(3 + 3 k\right) k!} = - \frac{2 - 2   e^{z} - z e^{z}}{3 z} ;]

>>> simplify(hypsum((3+k)/(3+3*k)*z**k/fac(k), k))
-(2 - 2*exp(z) - z*exp(z))/(3*z)
>>> _.evalf(subs={d:'1/2',z:'1/4'})
1.18540958339656
>>> mpmath.nsum(lambda k: (3+k)/(3+3*k)*0.25**k/mpmath.fac(k), [0,mpmath.inf])
1.18540958339656

[; \sum_{k=0}^{\infty}   \frac{k^{2} z^{1 + 2 k}}{\left(9 + 2 k\right)!} = \frac{1}{39916800}   z^{3} \left(- \,_{1}F_{2}\left(2; 6,\frac{13}{2}; \frac{1}{4}   z^{2}\right) + 2 \,_{1}F_{2}\left(3; 6,\frac{13}{2}; \frac{1}{4}   z^{2}\right)\right) ;]

>>> hypsum(k**2 * z**(2*k+1) / fac(2*k+9), k)
z**3*(-1F2([2], [6, 13/2], z**2/4) + 2*1F2([3], [6, 13/2], z**2/4))/39916800
>>> _.evalf(subs={d:'1/2',z:'1/4'})
3.92066920165299e-10
>>> mpmath.nsum(lambda k: k**2 * 0.25**(2*k+1) / mpmath.fac(2*k+9), [0,mpmath.inf])
3.92066920165299e-10

[; \sum_{k=0}^{\infty} z^{k}   \left(1 + k\right) \left(1 + 2 k\right) \left(2 + k\right) \left(3 +   k\right) = \frac{6 + 42 z}{1 - 5 z + 10 z^{2} - 10 z^{3} + 5 z^{4} -   z^{5}} ;]

>>> simplify(hypsum((k+1)*(k+2)*(k+3)*(1+2*k)*z**k, k))
(6 + 42*z)/(1 - 5*z + 10*z**2 - 10*z**3 + 5*z**4 - z**5)
>>> _.evalf(subs={d:'1/2',z:'1/4'})
69.5308641975309
>>> mpmath.nsum(lambda k: (k+1)*(k+2)*(k+3)*(1+2*k)*0.25**k, [0,mpmath.inf])
69.5308641975309

>>> hypsum(k**3 * (-z)**k / (k+1), k)
-z*(2*(1 - z)/(1 + z)**3 - ((2 + 2*z)/(1 + z) - (2 + 2*z)*log(1 + z)/z)/(z*(1 + z)) - 2/(1 + z)**2)/2
>>> _.evalf(subs={d:'1/2',z:'1/4'})
-0.0285742052568390
>>> mpmath.nsum(lambda k: k**3 * (-0.25)**k / (k+1), [0,mpmath.inf])
-0.028574205256839

[; \sum_{k=0}^{\infty}   \frac{e^{k}}{k!} = e^{e} ;]

>>> hypsum(E**k / fac(k), k)
exp(E)
>>> _.evalf()
15.1542622414793
>>> mpmath.nsum(lambda k: mpmath.e**k / mpmath.fac(k), [0,mpmath.inf])
15.1542622414793

[; \sum_{k=0}^{\infty}   \frac{\operatorname{cos}\left(k\right)}{k!} = \frac{1}{2}   e^{e^{\mathbf{\imath}}} + \frac{1}{2} e^{e^{- \mathbf{\imath}}} ;]

>>> hypsum(cos(k) / fac(k), k)
exp(exp(I))/2 + exp(exp(-I))/2
>>> _.evalf()
1.14383564379164 + .0e-20*I
>>> mpmath.nsum(lambda k: mpmath.cos(k) / mpmath.fac(k), [0,mpmath.inf])
1.14383564379164

[; \sum_{k=0}^{\infty}   \frac{\operatorname{cos}\left(k\right)}{1 + 2 k} = \frac{1}{2}   \operatorname{atanh}\left(e^{- \frac{1}{2} \mathbf{\imath}}\right)   e^{\frac{1}{2} \mathbf{\imath}} + \frac{1}{2}   \operatorname{atanh}\left(e^{\frac{1}{2} \mathbf{\imath}}\right) e^{-   \frac{1}{2} \mathbf{\imath}} ;]

>>> hypsum(cos(k) / (2*k+1), k)
atanh(exp(-I/2))*exp(I/2)/2 + atanh(exp(I/2))*exp(-I/2)/2
>>> _.evalf()
0.975556628913311
>>> mpmath.nsum(lambda k: mpmath.cos(k) / (2*k+1), [0,mpmath.inf])
0.975556628913311

[; \sum_{k=0}^{\infty}   \frac{k \operatorname{cos}\left(k\right)}{k!} = \frac{1}{2}   e^{\mathbf{\imath} + e^{\mathbf{\imath}}} + \frac{1}{2} e^{-   \mathbf{\imath} + e^{- \mathbf{\imath}}} ;]

>>> simplify(hypsum(cos(k) * k / fac(k), k))
exp(I + exp(I))/2 + exp(-I + exp(-I))/2
>>> _.evalf()
-0.458967373729452 + .0e-20*I
>>> mpmath.nsum(lambda k: mpmath.cos(k) * k / mpmath.fac(k), [0,mpmath.inf])
-0.458967373729452

[; \sum_{k=0}^{\infty}   \frac{\left(-1\right)^{k}}{1 + 2 k} = \frac{1}{4} \pi ;]

>>> hypsum((-1)**k / (2*k+1), k)
pi/4
>>> _.evalf()
0.785398163397448
>>> mpmath.nsum(lambda k: (-1)**k / (2*k+1), [0,mpmath.inf])
0.785398163397448

[; \sum_{k=0}^{\infty}   \frac{\left(-1\right)^{k}}{6 + 2 k} = - \frac{1}{4} + \frac{1}{2}   \operatorname{log}\left(2\right) ;]

>>> hypsum((-1)**k / (2*k+6), k)
-1/4 + log(2)/2
>>> _.evalf()
0.0965735902799727
>>> mpmath.nsum(lambda k: (-1)**k / (2*k+6), [0,mpmath.inf])
0.0965735902799727

[; \sum_{k=0}^{\infty}   \frac{\left(-1\right)^{k}}{\left(1 + 2 k\right)^{2}} = C ;]

>>> hypsum((-1)**k / (2*k+1)**2, k)
Catalan
>>> _.evalf()
0.915965594177219
>>> mpmath.nsum(lambda k: (-1)**k / (2*k+1)**2, [0,mpmath.inf])
0.915965594177219

[; \sum_{k=0}^{\infty}   \frac{\left(-1\right)^{k}}{\left(2 + 2 k\right)^{2}} = \frac{1}{48}   \pi^{2} ;]

>>> hypsum((-1)**k / (2*k+2)**2, k)
pi**2/48
>>> _.evalf()
0.205616758356028
>>> mpmath.nsum(lambda k: (-1)**k / (2*k+2)**2, [0,mpmath.inf])
0.205616758356028

[; \sum_{k=0}^{\infty}   \frac{\left(-1\right)^{k} \left(1 + k\right)}{\left(3 + 2 k\right)^{2}} =   \frac{1}{9} \,_{3}F_{2}\left(\frac{3}{2},\frac{3}{2},2;   \frac{5}{2},\frac{5}{2}; -1\right) ;]

>>> hypsum((-1)**k * (k+1) / (2*k+3)**2, k)
3F2([3/2, 3/2, 2], [5/2, 5/2], -1)/9
>>> _.evalf()
0.0652837153898853
>>> mpmath.nsum(lambda k: (-1)**k * (k+1) / (2*k+3)**2, [0,mpmath.inf])
0.0652837153898854

[; \sum_{k=0}^{\infty}   \frac{1}{4 + 2 k + k^{2}} = \frac{1}{6} \mathbf{\imath} \sqrt{3} \left(-   \operatorname{\psi}\left(0,1 + \mathbf{\imath} \sqrt{3}\right) +   \operatorname{\psi}\left(0,1 - \mathbf{\imath} \sqrt{3}\right)\right)   ;]

>>> hypsum(1/(k**2+2*k+4), k)
I*3**(1/2)*(-polygamma(0, 1 + I*3**(1/2)) + polygamma(0, 1 - I*3**(1/2)))/6
>>> _.evalf()
0.740267076581851
>>> mpmath.nsum(lambda k: 1/(k**2+2*k+4), [0,mpmath.inf])
0.740267076581851


This work was possible thanks to the support of NSF grant DMS-0757627, which is gratefully acknowledged.