One week of GSoC has passed. Since school didn't end until today, I've so far only had time to make a simple change to SymPy's Real type, replacing Python's decimal module with mpmath. Although this wasn't much work, the improvement is substantial. If you download the latest version of SymPy from the hg repository, you will find that, for example, cos(100).evalf() is not only fast (finishing in milliseconds rather than seconds) but actually returns the right value (!).
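For the curious, this is roughly what it looks like (assuming a recent checkout and the current mpmath interface; the exact digits printed depend on the precision settings):

    from sympy import cos
    print(cos(100).evalf())        # ≈ 0.862318872287684, computed in milliseconds

    # or, going through mpmath directly at higher precision
    from mpmath import mp, cos as mpcos
    mp.dps = 50                    # ask for 50 significant digits
    print(mpcos(100))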
Switching to mpmath immediately solves a few problems, but others remain. What is the best way to handle the accuracy of approximate numbers in a computer algebra system? Different CASes use different strategies, each with pros and cons. I would like to arrive at some kind of conclusion by the end of this summer, but this is far from a solved problem. Common approaches roughly fall into three categories, which I will discuss briefly below.
The simplest approach, from the implementation point of view, is to approximate a number by a single (arbitrary-precision) floating-point number. It is up to the user to know whether any results become inaccurate due to cancellations and other errors. SymPy currently works this way (with the floating-point working precision adjusted via a global parameter).
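Here is a minimal illustration of the kind of cancellation the user has to watch out for (using mpmath directly for brevity):

    from mpmath import mp, mpf
    mp.dps = 15                    # 15 significant digits of working precision
    x = mpf(1) + mpf('1e-30')      # the tiny term is entirely lost at this precision
    print(x - 1)                   # prints 0.0, although the true value is 1e-30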
The second approach is interval arithmetic: any exact real number can be approximated by bounding it between a pair of floating-point numbers. If all operations are implemented with proper rounding, interval arithmetic deals rigorously with approximation errors, in fact rigorously enough for formal proof generation.
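As a toy illustration of the idea, here is interval addition with outward rounding, using plain Python floats and math.nextafter (Python 3.9+); a real implementation would of course use proper directed rounding at arbitrary precision:

    import math

    def iv_add(x, y):
        """Add two intervals (lo, hi), widening each endpoint by one ulp
        so that the exact sum is guaranteed to be enclosed."""
        lo = math.nextafter(x[0] + y[0], -math.inf)
        hi = math.nextafter(x[1] + y[1], math.inf)
        return (lo, hi)

    # enclose 1/3, which is not exactly representable in binary
    third = (math.nextafter(1/3, 0.0), math.nextafter(1/3, 1.0))
    print(iv_add(third, third))    # a tiny interval guaranteed to contain 2/3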
Interval arithmetic is unfortunately sometimes needlessly conservative, greatly exaggerating the estimate of the true error. It also has some counterintuitive properties (e.g., x·x is not the same as x²), and there is no unambiguous way to define comparisons. Interval arithmetic also comes with a considerable speed penalty, usually by a factor on the order of 2–8.
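The x·x versus x² issue (the "dependency problem") is easy to demonstrate with a toy interval class with exact endpoints:

    from fractions import Fraction

    class Interval:
        """Toy interval with exact rational endpoints (no rounding concerns)."""
        def __init__(self, a, b):
            self.a, self.b = Fraction(a), Fraction(b)
        def __mul__(self, other):
            ps = [self.a*other.a, self.a*other.b, self.b*other.a, self.b*other.b]
            return Interval(min(ps), max(ps))
        def square(self):
            # the exact range of x**2 over the interval
            hi = max(self.a*self.a, self.b*self.b)
            lo = 0 if self.a <= 0 <= self.b else min(self.a*self.a, self.b*self.b)
            return Interval(lo, hi)
        def __repr__(self):
            return "[%s, %s]" % (self.a, self.b)

    x = Interval(-1, 2)
    print(x * x)        # [-2, 4]: the product forgets that both factors are the same number
    print(x.square())   # [0, 4]: the true range of x**2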
The third approach is what might be called significance arithmetic (although this term has other meanings). It can be viewed as a tradeoff between direct floating-point arithmetic and interval arithmetic. A number is approximated as a single floating-point number, but with an error estimate attached to it. This representation is in principle equivalent to that of interval arithmetic (the error estimate can be seen as the width of an interval), but operations are defined differently. Instead of propagating the estimated error rigorously, one settles for a first-order approximation (this often works well in practice).
The error can be stored as a machine-precision number (representing the logarithm of the absolute or relative error), making arithmetic only a little slower than direct arbitrary-precision floating-point arithmetic. A downside of significance arithmetic is that it is not so straightforward to represent the concept of a "completely inaccurate" number, e.g. a number with unknown sign (the number 0 requires special treatment). It is not entirely clear to me whether it is best to work with the absolute or relative error (or both). As with interval arithmetic, there are still multiple ways to define comparisons.
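To make this concrete, here is a rough sketch of the idea (my own toy version, not how Mathematica or SymPy actually does it), storing an estimated relative error next to the value and propagating it to first order:

    import math

    class Sig:
        """Toy significance arithmetic: a float plus an estimated relative error,
        propagated to first order only (an estimate, not a rigorous bound)."""
        def __init__(self, value, rel_err):
            self.value = value
            self.rel_err = rel_err
        def __mul__(self, other):
            # to first order, relative errors add under multiplication
            return Sig(self.value * other.value, self.rel_err + other.rel_err)
        def __add__(self, other):
            v = self.value + other.value
            abs_err = abs(self.value)*self.rel_err + abs(other.value)*other.rel_err
            # an exact zero would need special treatment, as noted above
            return Sig(v, abs_err/abs(v) if v else math.inf)
        def __repr__(self):
            digits = -math.log10(self.rel_err) if self.rel_err else math.inf
            return "%r (~%.1f significant digits)" % (self.value, digits)

    x = Sig(1.0, 1e-16)
    y = Sig(-1.0 + 1e-12, 1e-16)
    print(x * x)    # multiplication loses almost nothing
    print(x + y)    # cancellation: only a few significant digits survive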
I'm leaning towards implementing significance arithmetic in SymPy. Mpmath already has some support for interval arithmetic (which I will work on improving later this summer), and this should be accessible in SymPy as well, but I think some looser form of significance arithmetic is the right approach for the common representation of approximate numbers. Right now, I'm studying the implementation of significance arithmetic in Mathematica (Mark Sofroniou & Giulia Spaletta: Precise Numerical Computation) and writing some test code to get to grips with the details.
Comments:
Nice post and nice work already. Keep going!
There's another option, which I sort of consider the standard approach to computable reals, though for some reason it seems not to be used so much: represent a number as a function from some representation of an error bound (you could use the number of digits of precision, or an explicit bound on the error) to a representation of a number with that level of precision (an arbitrary-precision rational or floating-point number, perhaps).
Then the various basic arithmetic functions that work with these will produce functions that know to what precision they need their inputs to be in order to produce the requested output precision.
This is the approach taken by the CReal module in Lennart Augustsson's numbers package for Haskell, if you'd like to see a reference implementation.
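A rough Python sketch of this idea (the names and error conventions below are just illustrative, not taken from that package):

    from fractions import Fraction

    def const(q):
        """A computable real as a function: given n, return a rational within 2**-n of the value."""
        q = Fraction(q)
        return lambda n: q

    def add(x, y):
        # to make the sum accurate to 2**-n, request each operand to accuracy 2**-(n+1)
        return lambda n: x(n + 1) + y(n + 1)

    def e():
        """Euler's number: sum 1/k! until the remaining tail is below 2**-n."""
        def approx(n):
            total, term, k = Fraction(0), Fraction(1), 0
            while term >= Fraction(1, 2**(n + 1)):
                total += term
                k += 1
                term /= k
            return total
        return approx

    x = add(e(), const(1))
    print(float(x(50)))    # 1 + e, accurate to well within 2**-50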