Damian Vicino wrote:
I'm interested in developing basic support for "real numbers" in computable calculus applications.
In computable calculus, reals have infinite precision (certain restrictions may apply). The usual approach is to represent numbers as functions that produce digits on demand.
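
(As a concrete illustration of that "function producing digits/approximations" idea, here is a minimal sketch; the name LazyReal, the 2^-n error convention and the use of double in place of an exact rational type are assumptions made purely for brevity, not part of the proposal.)

#include <functional>
#include <iostream>

// Sketch only: a real is modelled as a function that, for a requested
// precision n, returns an approximation within 2^-n.  A real
// implementation would return an exact rational or a digit stream,
// not a double; double is used here only to keep the sketch short.
using LazyReal = std::function<double(int)>;

LazyReal constant(double v)
{
    return [v](int /*n*/) { return v; };
}

// Addition asks each operand for one extra bit of precision so that
// the combined error of the two approximations stays within 2^-n.
LazyReal add(LazyReal a, LazyReal b)
{
    return [a, b](int n) { return a(n + 1) + b(n + 1); };
}

int main()
{
    LazyReal x = add(constant(0.25), constant(0.5));
    std::cout << x(20) << '\n';   // evaluated on demand, to about 2^-20
}
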
Considering, for example, the Delaunay condition:

template <typename T>
bool delaunay(T ax, T ay, T bx, T by,
              T cx, T cy, T dx, T dy)
{
    // Dot products: proportional to the cosines of the angles at b and d.
    T cos_abc = (ax-bx)*(cx-bx) + (ay-by)*(cy-by);
    T cos_cda = (cx-dx)*(ax-dx) + (cy-dy)*(ay-dy);

    if (cos_abc >= 0 && cos_cda >= 0) return false;
    if (cos_abc < 0  && cos_cda < 0)  return true;

    // Cross products: proportional to the sines of the same angles.
    T sin_abc = (ax-bx)*(cy-by) - (cx-bx)*(ay-by);
    T sin_cda = (cx-dx)*(ay-dy) - (ax-dx)*(cy-dy);

    // Sign of sin(abc + cda) decides the remaining case.
    T sin_sum = sin_abc * cos_cda + cos_abc * sin_cda;
    return sin_sum < 0;
}

I'd be interested to know how this approach of effectively "lazy evaluation" compares to using eagerly-evaluated arbitrary-precision types (e.g. Boost.Multiprecision's floating-point types), i.e. in terms of performance and any complexity introduced into the code. For what sorts of problems is it better? How much work is actually saved by the laziness in practice?

Regards, Phil.
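
(For reference, the eager alternative mentioned above would simply instantiate the same template with an arbitrary-precision type. A minimal sketch, assuming Boost.Multiprecision's cpp_dec_float_50 and the delaunay<T>() template quoted above; the coordinate values are placeholders.)

#include <boost/multiprecision/cpp_dec_float.hpp>
#include <iostream>

// Assumes the delaunay<T>() template quoted above is in scope.
int main()
{
    // 50 decimal digits, computed eagerly at every intermediate step.
    using Real = boost::multiprecision::cpp_dec_float_50;

    // Quadrilateral a, b, c, d; the coordinates are illustrative only.
    bool result = delaunay<Real>(Real(0), Real(0),
                                 Real(2), Real(0),
                                 Real(2), Real(1),
                                 Real(0), Real(1));
    std::cout << std::boolalpha << result << '\n';
}
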