On Jan 15, 2016, at 3:18 PM, Alexander Lauser
wrote: Reals in this context are functions generating digits. Arithmetic operations are compositions of these functions to obtain digits, and comparison operations iterate over the digits of the operands to decide. This has (in theory) infinite precision; in practice we can limit it to a few thousand or a few million digits, a lot more than what double can provide.
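A minimal sketch of that representation (my illustration, not the poster's code), assuming numbers in [0, 1) and a hypothetical DigitFn that yields the i-th decimal digit on demand; general addition and multiplication need carry lookahead, which this sketch deliberately avoids:

#include <functional>

// A real in [0, 1) as a function from digit index to decimal digit.
using DigitFn = std::function<int(int)>;

// 1/3 = 0.333...: every digit is 3, nothing is ever stored.
const DigitFn one_third = [](int) { return 3; };

// One operation built purely by composing generators: dividing by 10 is a
// digit shift, so no carries are involved.
DigitFn div10(DigitFn x) {
    return [x](int i) { return i == 0 ? 0 : x(i - 1); };
}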
In practice, however, I don't think you will reach infinite precision with the approach as you stated it. Consider equality, for example: you would iterate over the digits and compare them one by one. If they are unequal, then you win (eventually). But if they are equal, then you will have to abort at some point and state that they are "probably equal" (more precisely, that they are equal up to an error of a given epsilon). I think the problem is inherent; in other words, you can only semi-decide inequality and thus have to fall back to some best-effort, up-to-some-precision answer.
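To make that concrete, here is a sketch (assuming the hypothetical DigitFn view above and canonical digit expansions, so cases like 0.1999... vs 0.2000... are ignored) of a comparison that can only report "no difference found so far" when it hits its digit budget:

#include <functional>

using DigitFn = std::function<int(int)>;  // i -> i-th decimal digit

// Returns -1, 0 or +1. A result of 0 only means "equal in the first
// max_digits digits", not proven equality: equality is only semi-decidable.
int compare(const DigitFn& a, const DigitFn& b, int max_digits) {
    for (int i = 0; i < max_digits; ++i) {
        int da = a(i), db = b(i);
        if (da != db) return da < db ? -1 : 1;
    }
    return 0;
}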
Yes, agreed; I stated that in a reply to another email. Three things are provably undecidable when arbitrary algorithms generate the numbers: irrationality, equality comparison, and comparison against 0. However, if you restrict the set of numbers participating in an operation to well-known sets, you can answer interesting questions.
Boost.Multiprecision also has infinite precision (in theory). The difference with your suggestion would be that, since you do lazy evaluation, you don't have to specify the precision a priori.
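For contrast, a small illustrative example (mine, not from the thread) using Boost.Multiprecision's cpp_dec_float_100, where the precision is fixed up front in the type and every value carries the full 100-digit representation regardless of how many digits a comparison actually needs:

#include <boost/multiprecision/cpp_dec_float.hpp>
#include <iostream>

int main() {
    using boost::multiprecision::cpp_dec_float_100;   // 100 decimal digits, chosen a priori
    cpp_dec_float_100 a = cpp_dec_float_100(1) / 3;
    cpp_dec_float_100 b = a / 10;
    std::cout << (a == b) << "\n";  // compares fully materialized representations
}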
Here, multiprecision needs the whole representation to be available for every operation. When comparing with algorithms, you only keep the representation needed for the digits currently being decided. For example, if you have compared the first few digits in an operation and they are equal, you don't need to keep them around to compare the following thousands (unless the algorithm generating the number needs earlier digits to produce later ones, but that is a property of that particular algorithm, not of the approach). Take the binary number whose digits are all zeros except at positions p where p is a power of 2: deciding each digit only requires a counter. The limit on how many digits can be expanded is imposed by time, with almost no space usage at all; it is a trade-off between time and space.
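A sketch of that example (illustrative only): the binary number whose digit at position p is 1 exactly when p is a power of two needs no stored digits at all, so expanding more digits costs only time:

#include <iostream>

// Digit at position p of the number described above: 1 iff p is a power of two.
int power_of_two_digit(unsigned long long p) {
    return (p != 0 && (p & (p - 1)) == 0) ? 1 : 0;
}

int main() {
    // Expand the first 32 binary digits on demand; no digit is ever stored.
    for (unsigned long long p = 0; p < 32; ++p)
        std::cout << power_of_two_digit(p);
    std::cout << "\n";
}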
I have a good use case for this in the area of simulation, but I'm sure there are many more places you can use them, e.g. physics computations, geometric computations, etc.
The performance of the arithmetic operations would heavily depend on the "history" of the numbers (i.e., the sequence of "axiom reals" and arithmetic operations you used to generate them) involved, right? For example, in simulation the numbers come from iterative solvers and often propagate over several time steps. The "history" of the numbers would thus rapidly be huge. (You could eager-evaluate up to a certain precision intermittently to start with new "axiom reals", but then you're back to Boost.Multiprecision.)
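To illustrate the concern (my own sketch; Real, add and flatten are made-up names, and the error analysis is omitted): each lazy operation keeps its operands alive, so a value built over many solver iterations drags its whole expression history along, and every digit query walks that history. The intermittent eager evaluation mentioned above corresponds to the flatten step:

#include <cstdint>
#include <functional>

// A non-negative real viewed as: n -> floor(x * 10^n), i.e. an approximation to n digits.
using Real = std::function<std::int64_t(int)>;

std::int64_t pow10(int n) { std::int64_t s = 1; while (n-- > 0) s *= 10; return s; }

Real constant(std::int64_t k) {             // an exact small integer
    return [k](int n) { return k * pow10(n); };
}

Real add(Real a, Real b) {                  // lazy: the closure stores both operands
    // Ask for one extra digit to absorb truncation error (sketch only).
    return [a, b](int n) { return (a(n + 1) + b(n + 1)) / 10; };
}

Real flatten(Real x, int n) {               // eager-evaluate once, cutting the history off
    std::int64_t v = x(n);
    return [v, n](int m) { return m >= n ? v * pow10(m - n) : v / pow10(n - m); };
}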
Yes, they always depend on the generators. Some speedup can be obtained by caching known digits, since the numbers never change. However, we expand only the digits we need, while with multiprecision you would have to generate all the digits of the predefined precision to obtain the same results.
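A small sketch of the caching idea (illustrative; cached is a made-up helper): since a number's digits never change, a generator can memoize every digit it has produced, so repeated comparisons only pay for digits that were never expanded before:

#include <functional>
#include <memory>
#include <vector>

using DigitFn = std::function<int(int)>;

DigitFn cached(DigitFn f) {
    // Shared cache, so copies of the returned generator reuse the same digits.
    auto cache = std::make_shared<std::vector<int>>();
    return [f, cache](int i) {
        while (static_cast<int>(cache->size()) <= i)
            cache->push_back(f(static_cast<int>(cache->size())));
        return (*cache)[i];
    };
}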