Wednesday, June 8, 2011

Some FLINT 2.2 highlights

Version 2.2 of FLINT (Fast Library for Number Theory) was released last weekend. Some updated benchmarks are available.

In this blog post, I'm going to talk a bit about features I contributed in this version. With apologies to Sebastian Pancratz, who wrote a whole lot of code as well -- in particular, a new module for computing with p-adic numbers, and a module for rational functions! Bill Hart also implemented a faster polynomial GCD, quite an important update since GCD is crucial for most polynomial business. Anyhow...

Polynomial interpolation

I've added various functions for polynomial interpolation to the fmpz_poly module. In general, these can be used to speed up computations involving integer or rational polynomials by mapping a given problem to (Z/nZ)[x], Z or even Z/nZ, taking advantage of fast arithmetic in those rings, and then recovering the result in Z[x] or Q[x] via interpolation.

Firstly, there are some new Chinese Remainder Theorem functions for integer polynomials, allowing you to reconstruct an integer polynomial from a bunch of polynomials with coefficients modulo different primes. Straightforward code (the actual work is done by functions in the fmpz module), but useful to have. The CRT functions are used by the new modular GCD code.
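For instance, reconstructing a polynomial from its images modulo two word-size primes might look like the sketch below. I'm going by the function names in the FLINT documentation (fmpz_poly_get_nmod_poly, fmpz_poly_set_nmod_poly_unsigned and fmpz_poly_CRT_ui), so check your version for the exact signatures:

#include "flint.h"
#include "fmpz.h"
#include "fmpz_poly.h"
#include "nmod_poly.h"

int main(void)
{
    fmpz_poly_t f, g, h;
    nmod_poly_t f1, f2;
    fmpz_t m;

    fmpz_poly_init(f);
    fmpz_poly_init(g);
    fmpz_poly_init(h);
    nmod_poly_init(f1, 1000003UL);
    nmod_poly_init(f2, 1000033UL);
    fmpz_init(m);

    /* f = 789012*x^2 - 123456 */
    fmpz_poly_set_coeff_si(f, 2, 789012);
    fmpz_poly_set_coeff_si(f, 0, -123456);

    fmpz_poly_get_nmod_poly(f1, f);    /* f mod 1000003 */
    fmpz_poly_get_nmod_poly(f2, f);    /* f mod 1000033 */

    /* Lift the first image to Z[x] with coefficients in [0, p1),
       then fold in the second image. sign = 1 requests symmetric
       residues, so negative coefficients are recovered. */
    fmpz_poly_set_nmod_poly_unsigned(g, f1);
    fmpz_set_ui(m, 1000003UL);
    fmpz_poly_CRT_ui(h, g, m, f2, 1);

    /* h now equals f, since 1000003*1000033 exceeds twice the
       largest absolute value of a coefficient */

    fmpz_poly_clear(f);
    fmpz_poly_clear(g);
    fmpz_poly_clear(h);
    nmod_poly_clear(f1);
    nmod_poly_clear(f2);
    fmpz_clear(m);
    return 0;
}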

There are also functions for evaluating an integer polynomial on a set of points, and forming the interpolating polynomial (generally with rational coefficients) given a set of points and the values at those points.

Finally, user-friendly functions for evaluation and interpolation at a power of two (Kronecker segmentation) have been added. The code for this is actually a very old part of FLINT, and possibly some of the most complicated code in the library (packing bits efficiently is surprisingly hard). The new functions just wrap this functionality, but take care of memory management and various special cases, so you can now just safely do something like:


fmpz_t z;
fmpz_init(z);
long bits = FLINT_ABS(fmpz_poly_max_bits(poly)) + 1; /* +1 for signs; max_bits is negative if any coefficient is negative */
fmpz_poly_bit_pack(z, poly, bits);
fmpz_poly_bit_unpack(poly, z, bits); /* recover poly */
fmpz_clear(z);


Apart from Kronecker segmentation, these functions are not currently asymptotically fast. Fast multi-modulus CRT for coefficient reconstruction is probably not all that important in most circumstances, because it's more common to use evaluation-interpolation techniques for polynomials of large degree and small coefficients than the other way around. Nonetheless, polynomials with large coefficients do arise as well. For example, the vector Bernoulli number code in FLINT relies on fast CRT, and currently uses custom code for this.

Polynomial interpolation uses Lagrange interpolation with barycentric weights, with a few tricks to avoid fractions. This is implemented as an O(n^2) algorithm, but the actual bit complexity is higher, since over the integers the intermediate values typically grow large, on the order of n! in magnitude.
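For reference, the textbook barycentric form of the Lagrange interpolant looks as follows (FLINT's fraction-avoiding variant rearranges this to stay over Z, so take it as an illustration rather than the literal code path):

p(x) = l(x) * sum_{j=0}^{n-1} w_j y_j / (x - x_j), where l(x) = prod_{k=0}^{n-1} (x - x_k) and w_j = 1 / prod_{k != j} (x_j - x_k)

For equally spaced points such as those used below, 1/w_j = +/- j! (n-1-j)!, which is exactly where the factorial-sized numbers come from.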

Here are some timing examples, evaluating and recovering a length-n polynomial with +/-1 coefficients basically as follows:

x = _fmpz_vec_init(n);
y = _fmpz_vec_init(n);
fmpz_poly_init(P);
fmpz_poly_init(Q);

/* evaluation points -n/2, -n/2 + 1, ..., n/2 - 1 */
for (i = 0; i < n; i++)
    fmpz_set_si(x + i, -n/2 + i);

/* random polynomial of length n with +/-1 coefficients */
fmpz_poly_randtest(P, state, n, 1);

fmpz_poly_evaluate_fmpz_vec(y, P, x, n);      /* y[i] = P(x[i]) */
fmpz_poly_interpolate_fmpz_vec(Q, x, y, n);   /* recover Q = P */


The bits column below measures the size of the largest value in y; the negative sign for larger n is just the max_bits convention mentioned above, indicating that some of the values are negative. These values grow quite large despite the input polynomial having small coefficients:


n=8 eval=762 ns interp=13 us bits=8 ok=1
n=16 eval=3662 ns interp=61 us bits=42 ok=1
n=32 eval=29 us interp=673 us bits=-113 ok=1
n=64 eval=136 us interp=4951 us bits=-316 ok=1
n=128 eval=625 us interp=45 ms bits=-762 ok=1
n=256 eval=2500 us interp=792 ms bits=-1779 ok=1
n=512 eval=12 ms interp=10 s bits=-4089 ok=1


As you can see, the interpolation speed is not too bad for small n, but eventually grows out of control. How to do better?

Naive Lagrange interpolation is not optimal: it is possible to do n-point evaluation and interpolation in essentially O(n log^2 n) operations. Such algorithms do not necessarily lead to an improvement over the integers (you still have to deal with coefficient explosion), but they should win over finite fields. So the right solution will perhaps be to add polynomial evaluation/interpolation functions based on modular arithmetic.

Rational numbers and matrices

A new module fmpq is provided for computing with arbitrary-precision rational numbers. For the user, the fmpq_t type essentially behaves identically to the MPIR mpq_t type. However, an fmpq_t only takes up two words of memory when the numerator and denominator are small (less than 2^62), whereas an mpq_t always requires six words plus additional heap-allocated space for the actual number data.

The fmpq functions are a bit faster than mpq functions in many cases when the numerator and/or denominator is small. But the main improvement should come for vectors, matrices or polynomials of rational numbers, due to the significantly reduced memory usage and memory management overhead (especially when many entries are zero or integer-valued).
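Basic usage is what you'd expect coming from mpq_t. A minimal sketch (standard fmpq names, as in the FLINT documentation):

#include <stdio.h>
#include "fmpq.h"

int main(void)
{
    fmpq_t a, b, c;

    fmpq_init(a);
    fmpq_init(b);
    fmpq_init(c);

    fmpq_set_si(a, 1, 3);    /* a = 1/3 */
    fmpq_set_si(b, 1, 6);    /* b = 1/6 */
    fmpq_add(c, a, b);       /* c = 1/2, kept in canonical form */

    fmpq_print(c);
    printf("\n");

    fmpq_clear(a);
    fmpq_clear(b);
    fmpq_clear(c);
    return 0;
}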

Some higher-level functionality is also provided in the fmpq module, e.g. for rational reconstruction. The functions for computing special rational numbers (like Bernoulli numbers) have also been switched over to the fmpq type. Another supported feature is enumeration of the rationals (using the Calkin-Wilf sequence or ordered by height). Generating the 100 million "first" positive rational numbers takes 9.6 seconds in order of height, or 2.6 seconds in Calkin-Wilf order.
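As a quick illustration of the enumeration interface, the sketch below walks the start of the Calkin-Wilf sequence (1, 1/2, 2, 1/3, 3/2, ...). I'm assuming the fmpq_next_calkin_wilf and fmpq_swap names from the FLINT documentation:

#include <stdio.h>
#include "fmpq.h"

int main(void)
{
    fmpq_t q, r;
    long i;

    fmpq_init(q);
    fmpq_init(r);
    fmpq_set_si(q, 1, 1);    /* the sequence starts at 1 */

    for (i = 0; i < 10; i++)
    {
        fmpq_print(q);
        printf(" ");
        fmpq_next_calkin_wilf(r, q);    /* successor of q */
        fmpq_swap(q, r);
    }

    printf("\n");
    fmpq_clear(q);
    fmpq_clear(r);
    return 0;
}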

FLINT actually does not use fmpq's to represent polynomials over Q (fmpq_poly), and probably never will. The fmpq_poly module represents a polynomial over Q as an integer polynomial with a single common denominator, which is usually faster. The reason for adding the fmpq_t type is that it enabled developing the new fmpq_mat module, which implements dense matrices of rational numbers. For matrices, a common-denominator representation would be less convenient and in many cases completely impractical.

The new FLINT fmpq_mat module is very fast, or at least very non-slow. It is easy to find examples where it does a simple computation a thousand times faster than the rational matrices in Sage.

There's not actually much code in the fmpq_mat module itself; it does almost all "level 3" linear algebra (computations requiring matrix multiplication or Gaussian elimination) by clearing denominators and computing over the integers. This approach is in fact stolen shamelessly from Sage, but the functions in Sage are highly unoptimized in many cases. The code in Sage still wins for many sufficiently large problems as it has asymptotically fast algorithms for many things we do not (like computing null spaces). See the benchmarks page for more details.

I should not forget to mention that I've implemented Dixon's p-adic algorithm for solving Ax = b for nonsingular square A. (I wish I had a good link for Dixon's algorithm here, but sadly it doesn't appear to be described conveniently anywhere on the web. The original paper is "Exact solution of linear equations using P-adic expansions", if you have the means to get through the Springer paywall.)

This is now used for solving over both Z and Q. The solver in FLINT is competitive with Sage (which uses IML+ATLAS) up to systems of dimension somewhere between perhaps 100 and 1000 (depending greatly on the size of the entries in the inputs and in the solution!). There's much to do here -- we should eventually have BLAS support in FLINT, which will speed up core matrix arithmetic, but there's room for a lot of algorithmic tuning as well.
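In fmpq_mat terms, solving a small system looks something like the following sketch; fmpq_mat_solve_dixon is the name used in the FLINT documentation, so adjust for your version:

#include "flint.h"
#include "fmpq.h"
#include "fmpq_mat.h"

int main(void)
{
    fmpq_mat_t A, B, X;

    fmpq_mat_init(A, 2, 2);
    fmpq_mat_init(B, 2, 1);
    fmpq_mat_init(X, 2, 1);

    /* A = [[1, 1/2], [1/3, 1]], b = (1, 2)^T */
    fmpq_set_si(fmpq_mat_entry(A, 0, 0), 1, 1);
    fmpq_set_si(fmpq_mat_entry(A, 0, 1), 1, 2);
    fmpq_set_si(fmpq_mat_entry(A, 1, 0), 1, 3);
    fmpq_set_si(fmpq_mat_entry(A, 1, 1), 1, 1);
    fmpq_set_si(fmpq_mat_entry(B, 0, 0), 1, 1);
    fmpq_set_si(fmpq_mat_entry(B, 1, 0), 2, 1);

    /* returns zero if A is singular */
    if (fmpq_mat_solve_dixon(X, A, B))
        fmpq_mat_print(X);

    fmpq_mat_clear(A);
    fmpq_mat_clear(B);
    fmpq_mat_clear(X);
    return 0;
}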

There are some other minor new matrix features as well... they can be found in the changelog.

Polynomial matrices

A new module (fmpz_poly_mat) is provided for dense matrices over Z[x], i.e. matrices whose entries are polynomials with integer coefficients. The available functionality includes matrix multiplication, row reduction, and determinants. Matrix multiplication is particularly fast, as it uses the Kronecker segmentation evaluation/interpolation technique described above. (A similar algorithm is provided for determinants, but it's not really optimal at this point.)
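Here's a sketch of the module in use, multiplying two random 4x4 matrices over Z[x] and taking a determinant (names as in the FLINT documentation; fmpz_poly_mat_mul dispatches to the KS-based multiplication internally):

#include "flint.h"
#include "fmpz_poly.h"
#include "fmpz_poly_mat.h"

int main(void)
{
    fmpz_poly_mat_t A, B, C;
    fmpz_poly_t det;
    flint_rand_t state;

    flint_randinit(state);
    fmpz_poly_mat_init(A, 4, 4);
    fmpz_poly_mat_init(B, 4, 4);
    fmpz_poly_mat_init(C, 4, 4);
    fmpz_poly_init(det);

    /* random entries: length-8 polynomials, 32-bit coefficients */
    fmpz_poly_mat_randtest(A, state, 8, 32);
    fmpz_poly_mat_randtest(B, state, 8, 32);

    fmpz_poly_mat_mul(C, A, B);    /* Kronecker segmentation */
    fmpz_poly_mat_det(det, C);

    fmpz_poly_mat_clear(A);
    fmpz_poly_mat_clear(B);
    fmpz_poly_mat_clear(C);
    fmpz_poly_clear(det);
    flint_randclear(state);
    return 0;
}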

The benchmarks page has some detailed timings, so I won't repeat them here -- but generally speaking, the FLINT implementation is an order of magnitude faster than Sage or Magma for matrices of manageable size.

There's much more to be done for polynomial matrices. Row reduction is implemented quite efficiently, but the algorithm itself is too slow for many tasks, such as computing null spaces of very large matrices. A future goal is to implement asymptotically fast algorithms (see the paper on x-adic lifting by Burçin Eröcal and Arne Storjohann, for example).