Reals in this context are functions that generate digits. Arithmetic operations are compositions of these functions that yield the digits of the result, and comparison operations iterate over the digits of the operands to decide. In theory this gives infinite precision; in practice we can limit it to a few thousand or a few million digits, which is still far more than what double can provide.
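A minimal sketch of what that could look like in C++ (all names here are illustrative, not from any existing library): a real is a function that, given a requested number of decimal digits n, returns an integer approximation scaled by 10^n, and the arithmetic operations just compose these functions lazily.

```cpp
#include <cstdint>
#include <functional>

// Hypothetical sketch: a real x is a function that, for a requested digit
// count n, returns an integer A with |A * 10^-n - x| <= 10^-n.
// (A production version would return an arbitrary-precision integer.)
using Real = std::function<std::int64_t(int /*n = requested decimal digits*/)>;

// A rational p/q as a primitive ("axiom") real: floor(p * 10^n / q).
Real make_rational(std::int64_t p, std::int64_t q) {
    return [p, q](int n) {
        std::int64_t scale = 1;
        for (int i = 0; i < n; ++i) scale *= 10;  // 10^n; overflows for large n,
                                                  // which bigints would avoid
        return p * scale / q;
    };
}

// Addition is composition of the two digit generators: nothing is computed
// until someone actually asks the sum for digits.
Real add(Real a, Real b) {
    return [a, b](int n) {
        // Request one extra digit from each operand and round, so the combined
        // error still fits into 10^-n. Rounding shown for non-negative values
        // only; the full error analysis is elided in this sketch.
        return (a(n + 1) + b(n + 1) + 5) / 10;
    };
}
```

For example, `add(make_rational(1, 3), make_rational(1, 6))(6)` returns 500000, i.e. 0.500000, and the intermediate 1/3 is never materialized at any fixed precision.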
In practice, however, I don't think you will reach infinite precision with the approach as you stated it. Consider equality, for example: you would iterate over the digits and compare them one by one. If they are unequal, you will eventually detect it. But if they are equal, you will have to abort at some point and state that they are "probably equal" (more precisely, that they are equal up to an error of a given epsilon). I think the problem is inherent; in other words, you can only semi-decide inequality and thus have to fall back to a best-effort, up-to-some-precision comparison. Boost.Multiprecision also has infinite precision (in theory). The difference with your suggestion is that, because you evaluate lazily, you don't have to specify the precision a priori.
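To make the semi-decidability concrete, here is a rough sketch of such a bounded comparison, reusing the hypothetical `Real` type from the sketch above: inequality can be confirmed as soon as the approximations differ by more than their combined error, while equality can only ever be reported up to 10^-max_digits.

```cpp
enum class Cmp { Less, Greater, EqualUpToEpsilon };

// Compare two lazy reals digit by digit, giving up after max_digits.
Cmp compare(const Real& a, const Real& b, int max_digits) {
    for (int n = 1; n <= max_digits; ++n) {
        const std::int64_t da = a(n);
        const std::int64_t db = b(n);
        // Each approximation may be off by one unit at this scale, so a gap
        // of more than two units proves a genuine difference.
        if (da > db + 2) return Cmp::Greater;
        if (db > da + 2) return Cmp::Less;
    }
    // We had to abort: the values are equal up to 10^-max_digits, but we
    // cannot rule out that they differ further down.
    return Cmp::EqualUpToEpsilon;
}
```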
I have a good use case for this in the area of simulation, but I'm sure there are many more places where you could use them, e.g. physics computations, geometrical computations, etc.
The performance of the arithmetic operations would heavily depend on the "history" of the numbers involved (i.e., the sequence of "axiom reals" and arithmetic operations used to generate them), right? For example, in simulation the numbers come from iterative solvers and often propagate over several time steps, so the "history" of the numbers would rapidly become huge. (You could eagerly evaluate up to a certain precision intermittently and start over with new "axiom reals", but then you're back to Boost.Multiprecision.)

Best, Alex

PS: Don't get my doubts wrong; I think the idea is interesting, and in some cases it might certainly prove useful.
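For concreteness, the "collapse the history" workaround I mean could look roughly like this (again reusing the hypothetical `Real` type from the earlier sketch): force the lazy expression down to a fixed number of digits once, cache the result, and hand out a fresh "axiom real" that later operations can build on without dragging the old expression tree along. At that point the precision is fixed a priori again, which is exactly the Boost.Multiprecision trade-off.

```cpp
// Eagerly evaluate x to 'digits' decimal digits and wrap the cached result
// as a new primitive real. Information beyond 'digits' is lost for good.
Real collapse(const Real& x, int digits) {
    const std::int64_t cached = x(digits);  // the one eager evaluation
    return [cached, digits](int n) {
        std::int64_t value = cached;
        for (int i = n; i < digits; ++i) value /= 10;  // truncate to n digits
        for (int i = digits; i < n; ++i) value *= 10;  // pad; no real precision gained
        return value;
    };
}
```

After something like `Real y = collapse(state, 1000);` (where `state` is the accumulated lazy value of a time step), subsequent steps would operate on the cached 1000 digits of `y` instead of re-walking the whole expression history.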