I didn't get quite as much coding done as I probably had time for (in part because I had a talk to prepare, and in part because I spent a bunch of time playing with MPIR and FLINT 2 and gaining some insights but otherwise not accomplishing much productive).

The code I did write includes a complete Cython implementation of real exp (giving up to a 7x speedup) and complex sqrt for mpmath, as well as code for fast evaluation of the gamma function at rational arguments. I will submit the Cython code to Sage soon; possibly after rewriting the remaining power code in Cython as well.

Anyway, prompted by a recent question on sage-support (and because I finally got inspiration to do it while at the airport), I've now implemented Bessel function zeros in mpmath.

Bessel function zeros are very important in applied mathematics because they are needed for Fourier–Bessel series, used to solve partial differential equations in cylindrical coordinates. I'd like to write down a concrete computational example, but it'd be slightly involved and I don't really have time right now. Perhaps in a later blog post.
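As a minimal illustration of the idea (a sketch only, not the full worked example; `fourier_bessel_coeff` is an ad hoc helper invented for this post, not part of mpmath), one can expand f(r) = 1 − r² in a J₀ Fourier–Bessel series on [0, 1] and check a partial sum against the function itself:

```python
from mpmath import mp, mpf, besselj, besseljzero, quad

mp.dps = 20

def fourier_bessel_coeff(f, m):
    # Ad hoc helper (not part of mpmath): the m-th coefficient of the
    # J_0 Fourier-Bessel series of f on [0, 1],
    #   c_m = 2 / J_1(j_m)^2 * int_0^1 r f(r) J_0(j_m r) dr,
    # where j_m is the m-th positive zero of J_0.
    j = besseljzero(0, m)
    integral = quad(lambda r: r * f(r) * besselj(0, j * r), [0, 1])
    return 2 * integral / besselj(1, j)**2

f = lambda r: 1 - r**2
r0 = mpf('0.5')
# Partial sum of the series at r = 0.5; it should approach f(0.5) = 0.75
approx = sum(fourier_bessel_coeff(f, m) * besselj(0, besseljzero(0, m) * r0)
             for m in range(1, 11))
```

With ten terms the partial sum already agrees with f(0.5) = 0.75 to a few decimal places, since the coefficients decay like m^(−5/2) for this smooth f.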

The code permits computing the *m*-th nonnegative simple zero of *J*_{ν}(*z*), *J′*_{ν}(*z*), *Y*_{ν}(*z*), or *Y′*_{ν}(*z*) for any positive integer index *m* and real order ν ≥ 0. One can use it for large *m*, large ν, or both:

```python
>>> besseljzero(1,100)
314.9434728377671624580656
>>> besseljzero(100,1)
108.836165898409774363098
>>> besseljzero(100,100)
459.5295465754674695983379
```

A word of caution: I've chosen the Abramowitz & Stegun convention of counting *z* = 0 as a zero of *J′*_{0}, which is incompatible with SciPy and the Mathematica BesselZeros package. The A&S convention makes much more sense to me because it makes the zeros a continuous function of the order. This case (ν = 0, *derivative=1*) should be the only incompatibility.
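For instance, with this convention the first "zero" of *J′*_{0} is *z* = 0 itself, and the first positive zero (which is also the first zero of *J*_{1}) becomes the second one:

```python
from mpmath import mp, besseljzero

mp.dps = 25

# With the A&S convention, z = 0 counts as the first zero of J0',
# and 3.8317... (the first zero of J1) becomes the second one.
print(besseljzero(0, 1, derivative=1))
print(besseljzero(0, 2, derivative=1))
```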

It's a bit tricky to compute the zeros of Bessel functions, because (as far as I know) no globally valid approximations are known; at least none that are reasonably simple.

Asymptotic expansions are known with respect to either the order or the index. In my implementation, I've chosen to simply use the asymptotic expansion with respect to the index *m*. If *m* is too small, the code simply chooses a larger index *n* for which the asymptotic expansion does work, and then isolates *all* the zeros between 0 and the *n*-th by counting sign changes. By caching this result, all the consecutive zeros for the same function order can then be computed quickly as well.
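The sign-change step can be sketched roughly like this (a toy version for illustration, not the actual mpmath internals; `isolate_bessel_zeros` is a made-up name):

```python
from mpmath import mp, besselj, findroot

mp.dps = 25

def isolate_bessel_zeros(v, upper, steps=1000):
    # Toy sketch: bracket the zeros of J_v on (0, upper] by scanning
    # for sign changes on a uniform grid, then refine each bracket
    # with bisection.
    zeros = []
    h = mp.mpf(upper) / steps
    prev_x, prev_f = h, besselj(v, h)
    for i in range(2, steps + 1):
        x = i * h
        f = besselj(v, x)
        if prev_f * f < 0:  # sign change: a simple zero lies inside
            zeros.append(findroot(lambda t: besselj(v, t),
                                  (prev_x, x), solver='bisect'))
        prev_x, prev_f = x, f
    return zeros

# The zeros of J_0 below 20 (near 2.4048, 5.5201, 8.6537, ...)
zeros = isolate_bessel_zeros(0, 20)
```

The grid spacing must of course be finer than the minimum distance between consecutive zeros (which for Bessel functions approaches π far out on the real line), or a pair of zeros could be missed.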

This approach is simple and quite robust (up to any silly implementation errors on my part), although the code could certainly be optimized more for large orders.

The following graphic shows the derivative of *Y*_{50}(*z*) with its zeros marked:
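The marked zeros can be recomputed directly with `besselyzero` and its derivative flag; here is a quick sanity check (not the plotting code itself) that the derivative really vanishes at each computed point:

```python
from mpmath import mp, bessely, besselyzero

mp.dps = 20

# The m-th zero of Y_50'(z), for m = 1, 2, 3
zeros = [besselyzero(50, m, derivative=1) for m in range(1, 4)]

for z in zeros:
    # Y_50'(z) should vanish (to working precision) at each zero
    print(z, bessely(50, z, derivative=1))
```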

A neat property of Bessel functions is that they can be expanded as infinite products over their zeros:
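Written out for reference (these are the standard formulas, with j_{ν,k} and j′_{ν,k} denoting the zeros of *J*_{ν} and *J′*_{ν} as computed by `besseljzero`):

```latex
J_\nu(z) = \frac{(z/2)^\nu}{\Gamma(\nu+1)}
    \prod_{k=1}^{\infty} \left(1 - \frac{z^2}{j_{\nu,k}^2}\right),
\qquad
J_\nu'(z) = \frac{(z/2)^{\nu-1}}{2\,\Gamma(\nu)}
    \prod_{k=1}^{\infty} \left(1 - \frac{z^2}{(j'_{\nu,k})^2}\right)
```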

These products converge slowly, but they can be evaluated with the aid of convergence acceleration:

```python
>>> v, z = mpf(3), mpf(0.75)
>>> (z/2)**v/gamma(v+1)*
... nprod(lambda k: 1-(z/besseljzero(v,k))**2, [1,inf])
0.008484383423274108843927552
>>> besselj(v,z)
0.008484383423274108843927552
>>> (z/2)**(v-1)/2/gamma(v)*
... nprod(lambda k: 1-(z/besseljzero(v,k,1))**2, [1,inf])
0.0331364636065541211306414
>>> besselj(v,z,1)
0.0331364636065541211306414
```

I will do more work on Bessel functions in mpmath and Sage, so watch for future updates.

## 2 comments:

I think you are plotting -Y(50,x) in that plot, because Y -> -oo as x ->0

Kris: it's the derivative.
