On Wednesday, May 15, 2024, Matt Borland wrote:
Matt, do we have benchmarks comparing yours to the decimal64 implementations in GCC? That is, the C++ std::decimal::decimal64 (from GCC's <decimal/decimal> header)
as well as the C _Decimal64 (https://gcc.gnu.org/onlinedocs/gcc/Decimal-Float.html). Those are backed by Intel's BID library.
Not yet, but I will add them. Our focus has been on ensuring correctness first, and we only recently began optimizing routines for performance.
Your implementation of decimal64 also stores a uint64_t of the IEEE 754 Decimal64 (presumably BID not DPD) format, correct?
Correct, we use the BID format.
This also means that on every operation you have to decode the significand, exponent, and sign out of this format, which isn't trivial.
It's a series of bit-fiddling operations: https://github.com/cppalliance/decimal/blob/develop/include/boost/decimal/decimal64.hpp#L1006
I know, because I also implemented them at one point. :) In real usage the repeated decoding cost wasn't negligible, which is why I want to think about (or compare against) an implementation where a decimal64 would contain { significand; exponent; } instead of { bid64; }.

Comparison to Intel's BID-based implementation, which is used both by GCC's decimal64 and at Bloomberg, is especially important because it uses some impressively large lookup tables to get performance (which also bloat binaries).

Glen