Is there interest in Computable Calculus?
Hi,

An arithmetic operation produces a new function using the functions in the operands and some simple expression manipulations.

If I gather it right, then what you propose is basically a Computer Algebra System (i.e., a system for manipulating reals by symbolic expressions). What specifically would the goal of the library be?

Best,
Alex
If I gather it right, then what you propose is basically a Computer Algebra System (i.e., a system for manipulating reals by symbolic expressions). What specifically would the goal of the library be?
It is not the arithmetic that is different from other approaches, but the way the reals are defined. Reals in this context are functions generating digits. Arithmetic operations are compositions of these functions to obtain new digits. Comparison operations iterate over the digits of the operands to decide. This has (in theory) infinite precision; in practice we can limit it to a few thousand or a few million digits, a lot more than what double can provide.

Notice I'm not including any transcendental functions in my proposal. My scope for now is only the four basic arithmetic operations and the comparison operators. The algorithms are not something new; they were researched and published decades ago. I just want to reimplement them because I have never seen them in the form of open-source, modern C++.

I have a good use case for this in the area of simulation, but I'm sure there are a lot more places you could use them, e.g. physics computations, geometric computations, etc.
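To make the digit-function view concrete, here is a minimal sketch in C++; the names digit_fn, one_third and make_rational are illustrative only, not part of the proposal.

#include <cstddef>
#include <cstdint>
#include <functional>

// A real in [0, 1) viewed as a function from digit position (0-based,
// first fractional digit) to its decimal digit.
using digit_fn = std::function<int(std::size_t)>;

// 1/3 = 0.333... : every decimal digit is 3.
digit_fn one_third() {
    return [](std::size_t) { return 3; };
}

// Fractional part of p/q, digit by digit, via long division.
// Sketch only: assumes q != 0 and no overflow in r * 10.
digit_fn make_rational(std::uint64_t p, std::uint64_t q) {
    return [p, q](std::size_t n) {
        std::uint64_t r = p % q;                 // remainder before digit 0
        for (std::size_t i = 0; i < n; ++i)
            r = (r * 10) % q;                    // advance to digit n
        return static_cast<int>((r * 10) / q);   // digit at position n
    };
}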
Reals in this context are functions generating digits. Arithmetic operations are compositions of these functions to obtain new digits. Comparison operations iterate over the digits of the operands to decide. This has (in theory) infinite precision; in practice we can limit it to a few thousand or a few million digits, a lot more than what double can provide.
In practice, however, I don't think you will reach infinite precision with the approach as you stated it. Consider equality, for example: you would iterate over the digits and compare them one by one. If they are unequal, then you win (eventually). But if they are equal, then you will have to abort at some point and state that they are "probably equal" (more precisely, that they are equal up to an error of a given epsilon). I think the problem is inherent; in other words, you can only semi-decide inequality and thus have to fall back to some best-effort, up-to-some-precision comparison. Boost.Multiprecision also has infinite precision (in theory). The difference with your suggestion would be that, since you do lazy evaluation, you don't have to specify the precision a priori.
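A sketch of that best-effort comparison under the same digit-function view (digit_fn and probably_equal are illustrative names, not an actual interface):

#include <cstddef>
#include <functional>

using digit_fn = std::function<int(std::size_t)>;

// Compare two reals digit by digit up to max_digits positions.
// If all inspected digits agree, the values differ by at most
// 10^(-max_digits); equality itself is only ever "probable".
// (A mismatch proves inequality only for canonical expansions:
// 0.1999... and 0.2000... denote the same real but differ everywhere.)
bool probably_equal(const digit_fn& a, const digit_fn& b, std::size_t max_digits) {
    for (std::size_t i = 0; i < max_digits; ++i)
        if (a(i) != b(i))
            return false;   // digits differ at position i
    return true;            // equal up to max_digits digits
}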
I have a good use case for this in the area of simulation, but I'm sure there are a lot more places you could use them, e.g. physics computations, geometric computations, etc.
The performance of the arithmetic operations would heavily depend on the "history" of the numbers involved (i.e., the sequence of "axiom reals" and arithmetic operations you used to generate them), right? For example, in simulation the numbers come from iterative solvers and often propagate over several time steps. The "history" of the numbers would thus rapidly become huge. (You could intermittently eager-evaluate up to a certain precision to start with new "axiom reals", but then you're back to Boost.Multiprecision.)

Best,
Alex

PS: Don't take my doubts the wrong way; I think the idea is interesting and in some cases it might certainly prove useful.
On Jan 15, 2016, at 3:18 PM, Alexander Lauser wrote:

Reals in this context are functions generating digits. Arithmetic operations are compositions of these functions to obtain new digits. Comparison operations iterate over the digits of the operands to decide. This has (in theory) infinite precision; in practice we can limit it to a few thousand or a few million digits, a lot more than what double can provide.
In practice, however, I don't think you will reach infinite precision with the approach as you stated it. Consider equality, for example: you would iterate over the digits and compare them one by one. If they are unequal, then you win (eventually). But if they are equal, then you will have to abort at some point and state that they are "probably equal" (more precisely, that they are equal up to an error of a given epsilon). I think the problem is inherent; in other words, you can only semi-decide inequality and thus have to fall back to some best-effort, up-to-some-precision comparison.
Yes, I agree; I stated that in a reply to another email. Three things are proven undecidable when arbitrary algorithms are used to generate the numbers: irrationality, equality comparison, and comparison against 0. However, if you restrict the set of numbers participating in an operation to well-known sets, you can answer interesting questions.
Boost.Multiprecision also has infinite precision (in theory). The difference with your suggestion would be that, since you do lazy evaluation, you don't have to specify the precision a priori.
Here, multiprecision needs the whole representation to be available for every operation. In the case of comparing using algorithms, you only need a representation of the current digit's dependencies. For example, if you have compared the first few digits in an operation and they are equal, then for comparing the following thousands you don't need them (unless the algorithm generating the number requires them to compute the following digits, but that's a problem with that algorithm, not with the approach). Take for example the binary number composed of all zeros except at positions p where p is a power of 2: deciding each digit only requires a counter. The limit on how many digits can be expanded is imposed by time, with almost no space usage at all. It is a time/space tradeoff.
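A sketch of that example digit generator (power_of_two_digit is an illustrative name; binary positions are counted from 1, so the expansion reads 0.1101000100000001...):

#include <cstddef>

// Binary digit at position p (p >= 1) of the number whose expansion has a 1
// exactly at the power-of-two positions. Deciding any digit needs only the
// position itself -- no stored digits, constant space.
int power_of_two_digit(std::size_t p) {
    return (p != 0 && (p & (p - 1)) == 0) ? 1 : 0;
}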
I have a good use case for this in the area of simulation, but I'm sure there are a lot more places you could use them, e.g. physics computations, geometric computations, etc.
The performance of the arithmetic operations would heavily depend on the "history" of the numbers involved (i.e., the sequence of "axiom reals" and arithmetic operations you used to generate them), right? For example, in simulation the numbers come from iterative solvers and often propagate over several time steps. The "history" of the numbers would thus rapidly become huge. (You could intermittently eager-evaluate up to a certain precision to start with new "axiom reals", but then you're back to Boost.Multiprecision.)
Yes, they always depend on the generators; some speedup can be obtained by caching known digits, since numbers never change. However, we expand only the digits we need, while in multiprecision you would have to generate all the digits of the predefined precision to obtain the same results.
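Such caching could be a thin memoizing wrapper around a digit generator; a sketch under the same digit-function assumption (cached is an illustrative name):

#include <cstddef>
#include <functional>
#include <memory>
#include <vector>

using digit_fn = std::function<int(std::size_t)>;

// Memoize a digit generator: since a number never changes, every digit is
// computed at most once and reused by later comparisons.
digit_fn cached(digit_fn f) {
    auto known = std::make_shared<std::vector<int>>();
    return [f = std::move(f), known](std::size_t n) {
        while (known->size() <= n)
            known->push_back(f(known->size()));   // expand only on demand
        return (*known)[n];
    };
}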
... snip ...
Three things are proven undecidable when arbitrary algorithms are used to generate the numbers: irrationality, equality comparison, and comparison against 0.
However, if you restrict the set of numbers participating in an operation to well-known sets, you can answer interesting questions.
What do you mean by restricting the set of numbers? Restricting the expressibility of the digit-generating functions, e.g., providing some basic reals (pi, e, ...) where the only other way to generate new ones is by using arithmetic operations? In that setting, equality might indeed be decidable, but not by the algorithm you proposed. I think you would have to restrict yourself to symbolic manipulations, most likely by finding some normal form for arithmetic expressions and comparing them. (Btw, that's what I meant by Computer Algebra System in my earlier message.) For clarity, is the goal infinite precision equality?
Boost.Multiprecision also has infinite precision (in theory). The difference with your suggestion would be that, since you do lazy evaluation, you don't have to specify the precision a priori.
Here, multiprecision needs the whole representation to be available for every operation. In the case of comparing using algorithms, you only need a representation of the current digit's dependencies. For example, if you have compared the first few digits in an operation and they are equal, then for comparing the following thousands you don't need them (unless the algorithm generating the number requires them to compute the following digits, but that's a problem with that algorithm, not with the approach). Take for example the binary number composed of all zeros except at positions p where p is a power of 2: deciding each digit only requires a counter. The limit on how many digits can be expanded is imposed by time, with almost no space usage at all. It is a time/space tradeoff.
I don't really understand. I see that for certain numbers the approach might work -- if, together with the number, you also provide implementations for equality and all the other relations and operations.
I have a good use case for this in the area of simulation, but I'm sure there are a lot more places you could use them, e.g. physics computations, geometric computations, etc.
The performance of the arithmetic operations would heavily depend on the "history" of the numbers involved (i.e., the sequence of "axiom reals" and arithmetic operations you used to generate them), right? For example, in simulation the numbers come from iterative solvers and often propagate over several time steps. The "history" of the numbers would thus rapidly become huge. (You could intermittently eager-evaluate up to a certain precision to start with new "axiom reals", but then you're back to Boost.Multiprecision.)
Yes, they always depend on the generators; some speedup can be obtained by caching known digits, since numbers never change. However, we expand only the digits we need, while in multiprecision you would have to generate all the digits of the predefined precision to obtain the same results.
Let's make it more specific: You can generate pi, for example, by different series, some converging faster than others. Do you mean that by generator? Then, of course, the speed of the generators is the basis for the performance, but that's not my point. Maybe I don't understand, so let's consider an example: Take any natural i and put y = x_i, where x_0 = pi and x_i = (x_{i-1} + 1) - 1 for natural i > 0. How do you implement deciding y == pi?

Alex
Thanks for all the comments, Alex; we are still at an early stage and every input is useful for us. Sorry for the long delay in replying.
I'm adding some context about our current work before getting to the replies.
My motivation for the first implementation of this library was in the domain of Discrete-Event Simulation (DES), looking for a representation for Time variables.
The main problem with using floating point or other approximate data types in (some) DES is that the approximation errors cannot be bounded.
In the case of Time, only 3 operations were interesting for me: (+, <, =).
Each number was represented by a vector of rational coefficients over a set of irrational constants.
What do you mean by restricting the set of numbers? Restricting the expressibility of the digit-generating functions, e.g., providing some basic reals (pi, e, ...) where the only other way to generate new ones is by using arithmetic operations?
Yes, but leaving the set open to be introduced by the user as template parameters. For example, {e, pi} is not proven to be a linearly independent set of constants over Q, so it should not be used. However, {pi, sqrt(2)} and {e, sqrt(2)} are both valid sets of irrational constants.
In that setting, equality might indeed be decidable, but not by the algorithm you proposed. I think you would have to restrict yourself to symbolic manipulations, most likely by finding some normal form for arithmetic expressions and comparing them. (Btw, that's what I meant by Computer Algebra System in my earlier message.)
Yes, it is a CAS then :)
For clarity, is the goal infinite precision equality?
Actually, my motivation was infinite precision for the < comparison, but equality was required to get that (at least using this solution).
Boost.Multiprecision also has infinite precision (in theory). The difference with your suggestion would be that, since you do lazy evaluation, you don't have to specify the precision a priori.
Here, multiprecision needs the whole representation to be available for every operation. In the case of comparing using algorithms, you only need a representation of the current digit's dependencies. For example, if you have compared the first few digits in an operation and they are equal, then for comparing the following thousands you don't need them (unless the algorithm generating the number requires them to compute the following digits, but that's a problem with that algorithm, not with the approach). Take for example the binary number composed of all zeros except at positions p where p is a power of 2: deciding each digit only requires a counter. The limit on how many digits can be expanded is imposed by time, with almost no space usage at all. It is a time/space tradeoff.
I don't really understand. I see that for certain numbers the approach might work -- if, together with the number, you also provide implementations for equality and all the other relations and operations.
I don't see how you distinguish (in multiprecision) a very large rational from an irrational. You cannot tell whether the digits were exhausted because of an approximation of the true rational value. Also, you have to operate on every digit in every operation. In our described approach, equality and addition only work with the coefficients, with no need for a large representation (0.1*pi == 0.1*pi only checks 0.1 == 0.1, not every digit that would be used if represented in MP, which we expect to be a lot of digits). In our case, expansion is only used for the < comparison (and only after checking that != is true).
Let's make it more specific: You can generate pi, for example, by different series, some converging faster than others. Do you mean that by generator? Then, of course, the speed of the generators is the basis for the performance, but that's not my point.
Yes, I meant that.
Maybe I don't understand, so let's consider an example: Take any natural i and put y = x_i, where x_0 = pi and x_i = (x_{i-1} + 1) - 1 for natural i > 0. How do you implement deciding y == pi?
We take a set of constants including pi, for example <1, pi, sqrt2> (the 1 is always implicitly included for representing rationals).
Something like this:
class pi : public irrational_constant{
…
};
class sqrt2 : public irrational_constant{
…
};
real
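A rough, self-contained sketch of how a value over the basis <1, pi, sqrt2> might be stored and compared (rational, real3 and the operators below are illustrative, not the proposed interface):

#include <array>

// A rational coefficient, kept deliberately simple; a real implementation
// would use arbitrary-precision integers to avoid overflow.
struct rational { long long num = 0, den = 1; };

bool operator==(rational x, rational y) {
    return x.num * y.den == y.num * x.den;   // exact cross-multiplication
}

// A value c[0]*1 + c[1]*pi + c[2]*sqrt2 with rational coefficients.
struct real3 {
    std::array<rational, 3> c{};
};

// Addition works coefficient by coefficient; no digits are expanded.
real3 operator+(const real3& a, const real3& b) {
    real3 r;
    for (int i = 0; i < 3; ++i) {
        r.c[i].num = a.c[i].num * b.c[i].den + b.c[i].num * a.c[i].den;
        r.c[i].den = a.c[i].den * b.c[i].den;   // left unreduced for brevity
    }
    return r;
}

// Because <1, pi, sqrt2> is linearly independent over Q, equality reduces
// to comparing coefficients; digit expansion is only ever needed for <,
// and only once != has been established.
bool operator==(const real3& a, const real3& b) {
    for (int i = 0; i < 3; ++i)
        if (!(a.c[i] == b.c[i])) return false;
    return true;
}

Under this kind of representation, Alex's example y = (pi + 1) - 1 ends up with coefficients <0, 1, 0> once the rational parts cancel, so deciding y == pi reduces to a coefficient comparison and never expands the digits of pi.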