Saturday, February 21, 2009

How (not) to compute harmonic numbers

The nth harmonic number is the nth partial sum of the divergent harmonic series, Hn = 1 + 1/2 + 1/3 + ... + 1/n. The simplest way to compute this quantity is to add it directly the way it is written: 1, 1+1/2, 1+1/2+1/3, and so on. For n greater than about 10 or 100, this algorithm is not a very good one. Why?

Firstly, in floating-point arithmetic, it is much better to use the asymptotic expansion

Hn ~ ln(n) + γ + 1/(2n) - 1/(12n^2) + 1/(120n^4) - ...

which requires fewer terms the larger n is. Combining this with the recurrence Hn+1 = Hn + 1/(n+1), used to shift the argument upward when n is too small for the series, gives a fast numerical algorithm for fixed-precision or arbitrary-precision computation of harmonic numbers (especially large ones). It allows one to determine, for example, that it takes exactly 15092688622113788323693563264538101449859497 terms until the partial sums of the harmonic series exceed 100. (This algorithm is implemented in mpmath, and in most other comparable software.)
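
To make the expansion concrete, here is a minimal sketch (not mpmath's actual implementation) that sums the first few terms and compares the result against direct summation:

from mpmath import mp, mpf, log, euler

def harmonic_asymptotic(n):
    # First few terms of the expansion above (euler is the Euler-Mascheroni constant)
    n = mpf(n)
    return log(n) + euler + 1/(2*n) - 1/(12*n**2) + 1/(120*n**4)

mp.dps = 25
print harmonic_asymptotic(1000)                  # truncation error is about 1/(252*1000^6)
print sum(mpf(1)/k for k in xrange(1, 1001))     # direct summation, for comparison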

Secondly, the naive algorithm is especially bad for exact computation of harmonic numbers. This might not be obvious at a glance. We are just adding n numbers: what's to improve? The catch is that rational arithmetic has hidden costs: a single addition requires three multiplications and usually a GCD reduction. In contrast with integer or floating-point addition, rational addition gets very slow when the numerators and denominators get large. Computing harmonic numbers the naive way triggers worst-case behavior of rational arithmetic: the denominators grow like n! (if no GCD is performed, the denominator of Hn literally becomes n!).
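
As a reminder of where the cost comes from, here is the schoolbook formula for a single exact rational addition written out as code (a sketch for illustration, not how Fraction or gmpy works internally):

from fractions import gcd   # in Python 2.6, gcd lives in the fractions module

def rat_add(p, q, r, s):
    # p/q + r/s = (p*s + r*q)/(q*s): three big-integer multiplications...
    num, den = p*s + r*q, q*s
    # ...plus a GCD to keep the result reduced -- this is the hidden cost
    g = gcd(num, den)
    return num // g, den // g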

It should be well known that computing n! as 1, 1·2, 1·2·3, ... is a poor method when n is large. A much better algorithm, assuming that multiplication is subquadratic, is to recursively split the products in half, i.e. compute (1·2·...·n/2) · ((n/2+1)·...·n) and so on. This algorithm has complexity O(log(n) M(n log(n))) compared to O(n^2 log(n)) for the naive algorithm, where M(n) is the complexity of multiplying two n-bit integers. This approach is called binary splitting, and has many other applications.
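
For illustration, a minimal binary-splitting factorial might look as follows (a sketch; the helper name is mine):

def _product(a, b):
    # Product of the integers in the range [a, b)
    if b - a == 1:
        return a
    m = (a + b) // 2
    # Both halves involve numbers of roughly equal size, which is what
    # makes subquadratic multiplication pay off
    return _product(a, m) * _product(m, b)

def factorial_bsplit(n):
    return _product(1, n + 1) if n > 0 else 1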

The example of factorials directly suggests an analogous algorithm for computing harmonic numbers with reduced complexity: just recursively split the summation in half.

A few more variations are also possible. Instead of working with rational numbers, which require a GCD reduction after each addition, one can work with the unreduced numerators and denominators as integers, and form a rational number at the end. Another way to obtain a pure-integer algorithm for Hn = p/q is to compute the denominator q = n! and then use the formula

p = n!/1 + n!/2 + n!/3 + ... + n!/n

for the numerator.

Below, I have implemented five different algorithms for computing harmonic numbers based on the preceding remarks. The code works either with plain Python 2.6, using the Fraction type from the standard library, or with gmpy, depending on which of the following header blocks is used:


# python
from math import factorial as fac
from fractions import Fraction as mpq
one = 1
mpz = int

# gmpy
from gmpy import mpz, mpq, fac
one = mpz(1)


Algorithm 1: directly add 1, 1/2, 1/3, ..., 1/n:

def harmonic1(n):
    return sum(mpq(1,k) for k in xrange(1,n+1))


Algorithm 2: like algorithm 1, GCD reduction postponed to the end:

def harmonic2(n):
    p, q = mpz(1), mpz(1)
    for k in xrange(2,n+1):
        p, q = p*k+q, k*q
    return mpq(p,q)


Algorithm 3: compute denominator; compute numerator series:

def harmonic3(n):
    q = fac(n)
    p = sum(q//k for k in xrange(1,n+1))
    return mpq(p,q)


Algorithm 4: binary splitting:

def _harmonic4(a, b):
    if b-a == 1:
        return mpq(1,a)
    m = (a+b)//2
    return _harmonic4(a,m) + _harmonic4(m,b)

def harmonic4(n):
    return _harmonic4(1,n+1)


Algorithm 5: binary splitting, GCD reduction postponed to the end:

def _harmonic5(a, b):
    if b-a == 1:
        return one, mpz(a)
    m = (a+b)//2
    p, q = _harmonic5(a,m)
    r, s = _harmonic5(m,b)
    return p*s+q*r, q*s

def harmonic5(n):
    return mpq(*_harmonic5(1,n+1))


I did some benchmarking on my laptop (32-bit, 1.6 GHz Intel Celeron M), with n up to 10^4 for the Python version and 10^6 for the gmpy version. Here are the results (times in seconds):


python:
n harmonic1 harmonic2 harmonic3 harmonic4 harmonic5
10 0.000159 0.000009 0.000010 0.000160 0.000025
100 0.004060 0.000233 0.000234 0.002211 0.000358
1000 0.889937 0.028822 0.029111 0.048106 0.026434
10000 769.109402 3.813702 3.823027 2.710409 3.475280

gmpy:
n harmonic1 harmonic2 harmonic3 harmonic4 harmonic5
10 0.000028 0.000010 0.000010 0.000033 0.000021
100 0.000284 0.000097 0.000111 0.000365 0.000226
1000 0.003543 0.001870 0.004986 0.004280 0.002651
10000 0.103110 0.142100 0.588723 0.059249 0.052265
100000 7.861546 17.002986 74.712460 1.115333 1.200865
1000000 994.548226 - - 27.927289 24.425421


Algorithm 1 clearly does not do well asymptotically. For the largest n measured, it is 284 times slower than the fastest algorithm with Python arithmetic and 40 times slower with gmpy arithmetic. Interestingly, algorithms 2 and 3 both substantially improve on algorithm 1 with Python arithmetic at least up to n = 10000, but both are worse than algorithm 1 with gmpy. This is presumably because GMP uses a much better GCD algorithm that reduces the relative cost of rational arithmetic.

Algorithms 4 and 5 are the clear winners for large n, but it is hard to tell which is better. The timings seem to fluctuate a bit when repeated, so small differences in the table above are unreliable. I think algorithm 5 could be optimized a bit if implemented in another language, by accumulating temporary values in machine integers.

It is amusing to note that computing the 10000th harmonic number with the recursive algorithm and an optimized implementation of rational arithmetic is 14,700 times faster than the direct algorithm with a non-optimized implementation of rational arithmetic. Choosing a larger n (say, 10^5) gives an even more sensational ratio, of course, which I'm not patient enough to try. The moral is: Moore's law is not an excuse for doing it wrong.

Algorithm 1 is not all bad, of course. With memoization (or implemented as a generator), it is good enough for sequential generation of the numbers H1, H2, H3, ..., and this tends to be needed much more commonly than isolated large harmonic numbers. SymPy therefore uses the memoized version of algorithm 1.
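
A sequential version might look something like this (a sketch, not SymPy's actual code; it reuses the mpq alias from the header above):

def harmonic_numbers():
    # Yields H1, H2, H3, ... by adding one term to the previous partial sum
    h, k = mpq(0, 1), 1
    while True:
        h += mpq(1, k)
        yield h
        k += 1

# Example: the first five harmonic numbers
# from itertools import islice
# print list(islice(harmonic_numbers(), 5))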

The algorithms above can be adapted for the task of computing generalized harmonic numbers, i.e. numbers of the form 1 + 1/2^k + 1/3^k + ... + 1/n^k for some integer k. Also worth noting is that algorithm 5 allows for computation of Stirling numbers of the first kind. Since |S(n+1,2)| = n! · Hn, algorithm 5 can be viewed as an efficient algorithm for simultaneous computation of S(n,2) and n!. Expressing Stirling numbers in terms of generalized harmonic numbers, this extends to an algorithm for S(n,k) for any (large) n and (small) k.
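
A hedged sketch of both generalizations, reusing _harmonic5 and the header definitions from above (the function names here are mine, not mpmath's or SymPy's):

def _gen_harmonic(a, b, k):
    # Unreduced (numerator, denominator) of sum(1/j**k for j in range(a, b))
    if b - a == 1:
        return one, mpz(a)**k
    m = (a + b) // 2
    p, q = _gen_harmonic(a, m, k)
    r, s = _gen_harmonic(m, b, k)
    return p*s + q*r, q*s

def gen_harmonic(n, k):
    return mpq(*_gen_harmonic(1, n + 1, k))

def stirling_and_factorial(n):
    # Since _harmonic5 never reduces, its denominator is exactly n! and its
    # numerator is exactly n!*Hn = |S(n+1,2)|, so we get both at once.
    p, q = _harmonic5(1, n + 1)
    return p, q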

A final remark: binary splitting is not the fastest algorithm for computing factorials. The fastest known method is to decompose n! into a product of prime powers and perform balanced multiplication-exponentiation. Can this idea be applied to harmonic numbers and/or Stirling numbers? It's not obvious to me how it would be done. Are there any other, possibly even faster algorithms for harmonic numbers? I've looked around, but been unable to find anything on the subject.
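
To illustrate the prime-power idea for n!, here is a crude sketch (nowhere near an optimized implementation of the fast method alluded to above):

def prime_exponent(n, p):
    # Legendre's formula: the exponent of the prime p in n! is sum(n // p**i)
    e, pk = 0, p
    while pk <= n:
        e += n // pk
        pk *= p
    return e

def factorial_primes(n):
    # Sieve the primes up to n
    sieve = [True] * (n + 1)
    primes = []
    for p in xrange(2, n + 1):
        if sieve[p]:
            primes.append(p)
            for m in xrange(p*p, n + 1, p):
                sieve[m] = False
    # Raise each prime to its exponent, then multiply in a balanced fashion
    factors = [p**prime_exponent(n, p) for p in primes]
    if not factors:
        return 1
    while len(factors) > 1:
        paired = [factors[i]*factors[i+1] for i in xrange(0, len(factors)-1, 2)]
        if len(factors) % 2:
            paired.append(factors[-1])
        factors = paired
    return factors[0]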

Thursday, February 19, 2009

Python 3.1 twice as fast as 3.0

For large-integer arithmetic, that is. Mark Dickinson has been working on a patch that changes the internal representation of long integers from 15-bit to 30-bit digits, and there is an additional patch, written by Mark and Mario Pernici, that adds a few algorithmic optimizations. Likely both patches will pass review and be included in Python 3.1. I asked Mark to run the mpmath unit tests to find out how big a difference the patches make, which he kindly did. Here are the results:

32-bit build:
unpatched (15-bit digits): 42.91 seconds
patched (30-bit digits): 40.57 seconds
patched+optimized: 30.15 seconds

64-bit build:
unpatched: 38.39 seconds
patched: 22.36 seconds
patched+optimized: 21.55 seconds
patched+optimized+bitlength: 20.20 seconds

The last result includes the use of the new int.bit_length() method (which I had a small part in implementing) instead of the pure-Python version in mpmath.
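
For comparison, a pure-Python fallback of the kind that int.bit_length() makes unnecessary might look like this (an illustration, not mpmath's exact bitcount code):

def bit_length(n):
    if n == 0:
        return 0
    h = hex(abs(n)).rstrip('L')   # Python 2 longs end with 'L'
    # Four bits per hex digit, minus the unused bits of the leading digit
    correction = {'1': 3, '2': 2, '3': 2, '4': 1, '5': 1, '6': 1, '7': 1}
    return 4*(len(h) - 2) - correction.get(h[2], 0)

# With the patch, this simply becomes n.bit_length().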

It's not surprising that the patches make high-precision arithmetic much faster, but most of the mpmath tests actually work at low precision and depend mostly on the general speed of the Python interpreter. There the speedup is perhaps of order 0-30%. For the tests working at very high precision, the improvement is a factor of 4 or 5 with the 64-bit build. Then there are a number of tests in between. With some imagination, all unit tests together provide a plausible estimate of actual performance for a wide range of applications.

An excerpt of before/after results for some particular tests, comparing 64-bit unpatched and 64-bit patched+optimized+bitlength:

# Only tests 53-bit precision
double_compatibility ok 1.3149240 s
double_compatibility ok 1.1979949 s

# Logarithms up to 10,000 digits
log_hp ok 4.0845711 s
log_hp ok 0.7967579 s

# Large Bernoulli numbers
bernoulli ok 6.8625491 s
bernoulli ok 1.4261260 s

# Gamma function, high precision
gamma_huge_2 ok 2.4907949 s
gamma_huge_2 ok 0.5781031 s

# Numerical quadrature, 15 to 100 digit precision
basic_integrals ok 3.0117619 s
basic_integrals ok 1.8687689 s


Mpmath does not yet run on Python 3.x; the benchmarking was done with a version of mpmath hacked slightly to make the tests pass. It would be nice to provide a 3.x compatible version of mpmath soon. The 2to3 tool fortunately handles almost all of the necessary patching; only a handful of fairly simple additional changes need to be made. The most annoying problem is the lack of cmp (and particularly the cmp argument for sorted), which has no trivial workaround, but it still should not be too hard to fix. In any case, it seems likely that the 30-bit patch will also be backported to Python 2.7, so most users should be able to take advantage of it.
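
For the cmp issue, a workaround along the following lines should do (a sketch; functools.cmp_to_key, which does the same thing, only appeared later, in Python 2.7/3.2):

def cmp_to_key(cmp_func):
    # Wrap an old-style three-way comparison function as a sort key class
    class Key(object):
        def __init__(self, obj, *args):
            self.obj = obj
        def __lt__(self, other):
            return cmp_func(self.obj, other.obj) < 0
        def __eq__(self, other):
            return cmp_func(self.obj, other.obj) == 0
    return Key

# sorted(data, cmp=old_cmp) then becomes sorted(data, key=cmp_to_key(old_cmp))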

It would be interesting to compare these benchmarks with the GMPY mode in mpmath. GMPY too provides a ~2x speedup for the unit tests. (This factor would be much larger if tests at even higher precision were included. At a million digits, GMPY is orders of magnitude faster than pure Python.) Originally GMPY only sped up the very high precision tests, and was otherwise slower than pure Python, probably due to Python's internal optimizations for int and long instances. This disadvantage was eliminated by implementing some helper functions for mpmath in C in GMPY. Mario Pernici has recently worked on further optimizations along the same lines, which should substantially improve low-precision performance.

Tuesday, February 17, 2009

Approximate prime counting

I'm continuing my quest to make the special functions coverage in mpmath match that of Mathematica. Comparisons are not quite fair, because mpmath is (with a few exceptions) numerical-only, and it is missing some heavy-duty functions such as the Meijer G-function (and will be for the foreseeable future). Many functions in mpmath are also implemented on smaller domains and less rigorously. On the other hand, Mathematica is more restricted in that it only supports machine-precision exponents. And most importantly, the Mathematica source code is very difficult to read for most people.

In any case, I'm trying to catch up with Mathematica 7, which added a whole slew of functions (not that I've caught up with Mathematica 6 or even 5). I did add a few functions in version 0.11 of mpmath as a consequence of seeing them in Mathematica 7 (as I recall right now, the Barnes G-function, hyperfactorial and generalized Stieltjes constants). See also the "Fun with zeta functions" post, which discussed the addition of the Riemann-Siegel functions in mpmath.

I just committed an implementation of the Riemann R function R(x) (also discovered in the "new in Mathematica 7" list), which is an analytic function that closely approximates the prime counting function π(x). The incredible accuracy of the Riemann R function approximation can be visualized by plotting it against the exact prime counting function (I added a naive implementation of π(x) as primepi, mainly to facilitate this kind of comparison -- SymPy contains a slightly more optimized primepi which could be used instead):

>>> from mpmath import *
>>> plot([primepi, riemannr], [0,100])




The accuracy for small x is not a fluke, either:

>>> plot([primepi, riemannr], [100000,105000])



The "classical" approximation based on the logarithmic integral is not nearly as good:

>>> plot([primepi, lambda x: li(x) - li(2)], [100000,105000])



The largest exact value of the prime counting function ever computed was π(10^23). The Riemann R function is quite easy to evaluate for x far larger than that, and the relative accuracy only improves:

>>> mp.dps = 50
>>> print riemannr(10**1000)
4.3448325764011974554109300516481778421781159070188e+996

It is quite likely that no one will ever compute the exact integer that is π(10^1000), but the value shown above is correct to every indicated digit. In fact, R(10^1000) can be shown using simple estimates to be accurate to about 500 digits.
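
For reference, one way to evaluate R(x) is via the Gram series; a minimal sketch (not necessarily how mpmath's riemannr is implemented internally):

from mpmath import mp, mpf, log, factorial, zeta, riemannr

def riemann_r_gram(x, terms=200):
    # Gram series: R(x) = 1 + sum_{k>=1} (ln x)^k / (k * k! * zeta(k+1))
    L = log(x)
    s = mpf(1)
    for k in range(1, terms + 1):
        s += L**k / (k * factorial(k) * zeta(k + 1))
    return s

mp.dps = 30
print riemann_r_gram(10**6)
print riemannr(10**6)    # mpmath's built-in function, for comparison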

For the fun of it, I implemented a version of the prime counting function that quickly returns a strict bounding interval for π(x), using an inequality by Lowell Schoenfeld:

|π(x) - li(x)| < sqrt(x)·ln(x) / (8π)    (valid for x ≥ 2657)

This gives for example:

>>> mp.dps = 15
>>> print primepi2(10**9)
[50823160.0, 50875310.0]
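
For illustration, such an interval can be formed directly from the inequality above; a rough sketch (this is not mpmath's actual primepi2 code, and it ignores the small-x regime where the bound does not apply):

from mpmath import mp, mpf, li, sqrt, log, pi, mpi

def primepi_bounds(x):
    # Schoenfeld's bound: |pi(x) - li(x)| < sqrt(x)*ln(x)/(8*pi) for x >= 2657
    x = mpf(x)
    err = sqrt(x)*log(x)/(8*pi)
    return mpi(li(x) - err, li(x) + err)

mp.dps = 15
print primepi_bounds(10**9)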


Comparing the Riemann R function and the exact prime counting function with the lower and upper bounds of the Schoenfeld inequality:

>>> pi1 = lambda x: primepi2(x).a
>>> pi2 = lambda x: primepi2(x).b
>>> plot([primepi, riemannr, pi1, pi2], [10000,12000])



Returning to the example x = 101000, the bounds obtained are:

>>> mp.dps = 1000
>>> v = primepi2(10**1000)
>>> print v.a
43448325764011974554109300516481778421781159070187535206418774067892028956297421
41091652903064607718165071467289250162191865437538175693583349276176983162304817
20028842283993044722261194636177370816661364951186507714125556442852887672847627
37639487556791903286839317671011195810576335127780015765296354832441209488958529
40270929343735968992442702495339548061518980068329967842108151105502256086735702
70840137536441416124939228421984939684274707774026258195286398483405141793241664
48771389205215847692233236114877473504187398786485377346384217825087356612222721
64076496151428324862762772509621406939237565543669854423809507199075669125082789
94753472799321385531846190616064670701763597463167501715697285784529582495283729
84726395093948427825392438044975721449167967563521761421630319216447872905300292
34804577579707557749789078595077525265053645288083485194663875002648919670801486
00507106899091429169697567568603093512170870024325220369457870653525185029701437
3844901850360974589491752273505884338.0
>>> print v.b
43448325764011974554109300516481778421781159070187535206418774067892028956297421
41091652903064607718165071467289250162191865437538175693583349276176983162304817
20028842283993044722261194636177370816661364951186507714125556442852887672847627
37639487556791903286839317671011195810576335127780015765296354832441209488958529
40270929343735968992442702495339548061518980068329967842108151105502256086735702
70840137536441416124939228421984939684274707774026258195286398483405141793241664
48771389205216030926132955971812691701044333600154515219652233647424144532724506
59125521224590219319931601764655406966919866611954863306151010025269888571358859
44632455086296390810620688247562865809602361895967001311288611164573841496810109
47459439386009668941560164405255365722936716670915493742153084879238459437035932
04613524632469949425914711588697691782286908198073032336714125348156636382686152
39874926303969403597018131954944580668821382841922209788091197293915488520205360
8277109394003350202108994842650400209.0
>>> nprint((v.delta)/v.a)
4.21728e-495
>>> nprint((riemannr(10**1000)-v.a)/v.a)
2.10863e-495


and so it turns out that "accurate to about 500 digits" can be replaced by "accurate to at least 494 digits".

Actually, this is not quite true. Schoenfeld's inequality depends on the truth of the Riemann hypothesis. Thus, it is conceivable that primepi2 might return an erroneous interval. If you find an output from primepi2 that you can prove to be wrong, this must mean either that 1) you've found a bug in mpmath, or 2) you've disproved the Riemann hypothesis. (One of these options is more likely than the other, and reports should therefore be submitted to the mpmath issue tracker rather than Annals of Mathematics.)

On a final note, I've also added the function zetazero for computing the nth nontrivial zero of the Riemann zeta function:

>>> mp.dps = 50
>>> print zetazero(1)
(0.5 + 14.134725141734693790457251983562470270784257115699j)
>>> print zetazero(2)
(0.5 + 21.022039638771554992628479593896902777334340524903j)
>>> print zetazero(10)
(0.5 + 49.773832477672302181916784678563724057723178299677j)
>>> nprint(zeta(zetazero(10)))
(2.14411e-50 - 4.15422e-50j)

It just uses a table to determine correct initial values for the rootfinder, so out of the box zetazero only works up to n = 100. It does, however, work up to n = 100,000 on an internet-connected computer: zetazero will automatically try to download the table of 100,000 zeros from Andrew Odlyzko's website if called with n > 100. Or it can load a custom url/file if specified by the user. This is one reason why I love Python: downloading a data file, parsing it and converting it to an appropriate internal representation takes virtually a single line of code, easily done in the middle of some algorithmic code.

More information about the Riemann R function can be found in "The encoding of the prime distribution by the zeta zeros", part of Matthew R. Watkins' amazing website. The moral of that document is that the error between the exact prime counting function and the Riemann R function can be represented by a kind of "Fourier series" involving a summation over the zeros of the Riemann zeta function. I have not gotten around to looking at that yet.

Wednesday, February 4, 2009

Galerkin's method in SymPy

I'm currently taking a PDE course, and for this reason I am trying to come to terms with the Galerkin method. The Galerkin method is conceptually simple: one chooses a basis (for example polynomials up to degree q, or piecewise linear functions) and assumes that the solution can be approximated as a linear combination of the basis functions. One then chooses the coefficients so as to make the error orthogonal to all basis functions; this amounts to computing a sequence of integrals (the inner products determining orthogonality) and then solving a linear system of equations.

Doing the calculations by hand can get very messy. Fortunately, it is straightforward to implement the Galerkin method, e.g. for an ODE IVP, in SymPy and have the ugly parts done automatically. The following function implements the global Galerkin method on an interval [x0, x1], using polynomials up to degree q as the basis. The coefficient of the 0th-degree basis function is fixed by the initial value u0.


from sympy import *
sum = __builtins__.sum

def galerkin(ode, x, x0, x1, u0, q):
    basis = [x**k for k in range(q+1)]
    # Coefficients for the basis monomials
    xi = [Symbol("xi_%i" % k) for k in range(q+1)]
    # Solution function ansatz
    u = u0 + sum(xi[k]*basis[k] for k in range(1,q+1))
    # Form system of linear equations
    equations = [integrate(ode(u)*basis[k], (x, x0, x1))
                 for k in range(1,q+1)]
    coeffs = solve(equations, xi[1:])
    return u.subs(coeffs)


In general, the integrals can get very complex, if not unsolvable, and one must fall back to numerical methods. But for a simple problem, SymPy can calculate the integrals and the symbolic solution provides insight. Solving the standard problem u' - u = 0 (with solution u(x) = exp(x)):


x = Symbol('x')
ode = lambda u: u.diff(x) - u
for q in range(1,6):
    pprint(galerkin(ode, x, 0, 1, 1, q))


This outputs the following sequence of approximations:

1 + 3*x
1 + 8*x/11 + 10*x**2/11
1 + 30*x/29 + 45*x**2/116 + 35*x**3/116
1 + 1704*x/1709 + 882*x**2/1709 + 224*x**3/1709 + 126*x**4/1709
1 + 31745*x/31739 + 15820*x**2/31739 + 5460*x**3/31739 + 1050*x**4/31739 + 462*x**5/31739


One can see that the coefficients rapidly approach those of the Maclaurin polynomials of exp. The fast global convergence is also readily apparent upon visualization:


from mpmath import plot
u1 = Lambda(x, galerkin(ode, x, 0, 1, 1, 1))
u2 = Lambda(x, galerkin(ode, x, 0, 1, 1, 2))
u3 = Lambda(x, galerkin(ode, x, 0, 1, 1, 3))
plot([exp, u1, u2, u3], [0, 2])




The blue curve is exp(x). Note that the plotted interval is larger than the solution interval; even so, the q = 3 solution (purple) is already accurate to within graphical tolerance. With a moderately high degree, one gets a very accurate solution:


>>> galerkin(ode, x, 0, 1, 1, 10).subs(x,'7/10').evalf()
2.01375270747023
>>> exp('7/10').evalf()
2.01375270747048


The Galerkin method might prove useful as an alternative algorithm for high-precision ODE solving in mpmath. Mpmath has no problem with the nonlinear integrations, nor with solving the linear system in case it gets ill-conditioned at higher degree.

I'll perhaps post some more code on this topic in the future, e.g. for solving the ODE with trigonometric and piecewise polynomial basis functions.

Fun with zeta functions

In mpmath-trunk there is now an implementation of the Riemann-Siegel Z function as well as the related Riemann-Siegel theta function. There is also a function for computing Gram points.

A picture is worth a thousand words. The Z function is a kind of real-valued version of the Riemann zeta function on the critical line (compare to this previous plot):

>>> from mpmath import *
>>> plot(siegelz, [0,80])



The implementation is entirely generic, so it works in the complex numbers:

>>> cplot(siegelz, points=50000)



>>> cplot(siegelz, [-25,25], [-25,25], points=50000)



Gram points are useful for locating zeros of the zeta/Z functions. Huge Gram points themselves are easily located:

>>> mp.dps = 50
>>> print grampoint(10**10)
3293531632.7283354545611526800803306343201329980271


Unfortunately, evaluation of the Riemann zeta or Riemann-Siegel Z functions is not feasible for such large inputs with the present zeta function implementation in mpmath. Lowering expectations a bit, one can still compute some fairly large zeros:

>>> g1 = grampoint(1000)
>>> g2 = grampoint(1001)
>>> r = findroot(siegelz, [g1, g2], solver='illinois')
>>> print r
1421.8505671870486539107068075509847506037846486061
>>> print g1 < r < g2
True
>>> nprint(siegelz(r))
-1.30567e-51

(I chose the Illinois solver here because it combines bracketing with the fast convergence of the secant method.)

A plot of the initial couple of Gram points:

>>> mp.dps = 15
>>> plot(lambda x: grampoint(floor(x)), [0, 30])



The following visualization of Gram points is perhaps more illustrative. It is usually the case that the Gram points and sign changes (i.e. zeros) of Z alternate with each other:


>>> gs = [grampoint(n) for n in range(25)]
>>> def marker(x):
... for g in gs:
... if abs(g-x) < 0.2:
... return 1
... return 0
...
>>> plot([siegelz, marker], [0,50])




Put another way, it is usually the case that (-1)^n Z(gn) is positive (this is Gram's law):

>>> (-1)**10 * siegelz(grampoint(10)) > 0
True
>>> (-1)**11 * siegelz(grampoint(11)) > 0
True
>>> (-1)**12 * siegelz(grampoint(12)) > 0
True


The first exception occurs at n = 126 (more exceptions are listed in OEIS A114856). The Gram point is very close to a root:

>>> (-1)**126 * siegelz(grampoint(126)) > 0
False
>>> mp.dps = 50
>>> print grampoint(126)
282.4547208234621746108397940690599354048302480008
>>> print findroot(siegelz, grampoint(126))
282.46511476505209623302720118650102420550683628035
>>> print siegelz(grampoint(126))
-0.02762949885719993669875120344077657187547768743854


Now, it takes only a few lines of code to enumerate exceptions to Gram's law, up to say n = 2000:


>>> mp.dps = 10
>>> for k in range(2000):
... g = grampoint(k)
... s = siegelz(g)
... if (-1)**k * s < 0:
... print k, g, s
...
126 282.4547208 -0.02762949482
134 295.583907 -0.01690039157
195 391.4482021 0.0232894207
211 415.60146 0.3828895679
232 446.8057559 -0.1410432574
254 478.9568293 -0.0600779492
288 527.6973416 -0.6654588176
367 637.320354 0.1579920088
377 650.8910448 0.8376010676
379 653.5978317 0.217352745
397 677.8523216 0.1342801302
400 681.8765522 -0.06717029681
461 762.6678783 0.1525747623
507 822.4194896 0.7419389942
518 836.5739092 -0.3982959071
529 850.6795334 0.1756176097
567 899.0502587 0.8296209263
578 912.9534756 -0.3537611108
595 934.3572317 0.121967239
618 963.1605748 -0.04025206217
626 973.1388241 -0.1578260824
637 986.8259207 0.2852904149
654 1007.905352 -0.5547383392
668 1025.199826 -0.2608153866
692 1054.715528 -0.07836895313
694 1057.167832 -0.1696327251
703 1068.189532 1.004550445
715 1082.850818 0.1608682601
728 1098.690513 -0.6875362205
766 1144.741881 -0.1678183794
777 1158.005614 9.646277741e-3
793 1177.246581 0.7011668936
795 1179.647457 0.3577242914
807 1194.033228 0.7072513907
819 1208.386043 0.2285739236
848 1242.939633 -0.6161899199
857 1253.626041 0.2131228775
869 1267.84791 1.271208219
887 1289.124631 0.1830773844
964 1379.419269 -1.392200147
992 1411.979146 -0.06783933868
995 1415.459427 0.2623456354
1016 1439.777518 -0.9324891715
1028 1453.639612 -0.3106139874
1034 1460.561547 -0.6501820215
1043 1470.933184 0.0987483831
1046 1474.387414 -0.5680195111
1071 1503.115534 0.1530310857
1086 1520.304266 -0.106037306
1094 1529.457103 -0.1839654392
1111 1548.873932 0.02102960258
1113 1551.155349 0.8003510427
1135 1576.211048 0.2276419942
1156 1600.060766 -0.02925581253
1165 1610.262397 8.575848564e-3
1178 1624.977566 -0.2655885013
1207 1657.717899 0.8645603918
1209 1659.971552 0.36625445
1231 1684.725787 0.6156424059
1250 1706.052177 -0.8285966558
1263 1720.616514 0.8288043998
1276 1735.158918 -0.02734148029
1290 1750.795762 -0.1953269334
1294 1755.258868 -0.2675606503
1307 1769.750095 0.5163671897
1319 1783.107961 0.09216880716
1329 1794.225992 0.3012117179
1331 1796.448134 0.1319941485
1342 1808.661254 -0.1258506495
1344 1810.880254 -0.03061304567
1402 1875.026023 -0.5758575812
1430 1905.854681 -1.184795762
1443 1920.138276 1.294805086
1456 1934.403327 -0.03436404474
1485 1966.159589 0.3192688646
1487 1968.346369 0.864509178
1495 1977.089273 0.1463922251
1498 1980.366127 -0.01464596441
1513 1996.736314 0.2570944738
1517 2001.097757 0.8247075447
1532 2017.438527 -0.1691755392
1543 2029.407189 0.1807441973
1545 2031.581995 1.207382688
1600 2091.233527 -0.03995365882
1613 2105.289905 0.4695228803
1620 2112.852035 -0.3058765971
1643 2137.666434 0.8692760726
1646 2140.899441 -0.2679836804
1656 2151.670095 -0.4251702923
1669 2165.658156 0.9247164091
1672 2168.883971 -0.01601726917
1684 2181.779042 -0.1420385576
1702 2201.097281 -1.274939957
1704 2203.241961 -0.2540991172
1722 2222.528116 -0.3804543179
1744 2246.06145 -0.1531821977
1747 2249.267282 1.366292187
1760 2263.150267 -0.7349734815
1773 2277.0188 0.4817831199
1787 2291.938132 0.06919712065
1796 2301.520437 -0.1687993299
1804 2310.032371 -0.687368415
1816 2322.790332 -0.1377354318
1819 2325.977969 0.1645731507
1843 2351.452601 0.01004227096
1850 2358.873909 -0.10089997
1856 2365.231898 -0.1952129943
1863 2372.645911 1.120344703
1876 2386.404453 -0.1397386302
1892 2403.31974 -0.6220070934
1902 2413.881627 -0.367836015
1921 2433.927875 0.01043043105
1933 2446.574386 1.413878285
1935 2448.681071 1.881639813
1953 2467.627613 0.05088637184
1969 2484.448555 0.2965372534
1982 2498.101553 -0.3858252138


(Note: the Z values are not quite accurate to the indicated precision, due to the magnitude of the Gram points. Increasing the precision, one finds siegelz(grampoint(1982)) = -0.385825235416...)

For those interested in this topic, there is a very nice book called "Riemann's Zeta Function" by H. M. Edwards. There is just as much information to be found on the web, e.g. on the amazing "Numbers, constants and computation" site by Xavier Gourdon and Pascal Sebah.