On 28/01/2021 08:16, Alexander Grund via Boost wrote:
((uint32_t) -1) assumed equivalent to 0xFFFFFFFFu (which is not guaranteed)
Is it not? IIRC, by the C++ standard the above is equivalent to `(uint32_t)((uint32_t)0u - 1)`, which must give 0xFFFFFFFF.
It is. To be exact, `(uint32_t) -1` is defined to be `2^32 - 1` by the C++ standard, and that is indeed `0xFFFFFFFFu`. Note that this is only guaranteed for conversions *to* unsigned types, though C++20 might have changed that too.
Just to be clear here, and to summarise a discussion about this on Slack: `(uint32_t) -1` generates a 32 bit unsigned integer with the object VALUE of 0xffffffff, but not necessarily the object storage REPRESENTATION of 0xffffffff. In other words:

    uint32_t *x = ..., *y = ...;
    *x = (uint32_t) -1;
    assert(*x == 0xffffffff);     // always true
    memset(y, 0xff, 4);
    assert(0 == memcmp(x, y, 4)); // not necessarily true

That last assertion is true on all the major CPU architectures, and, if I'm blunt, on any that I personally care about supporting. But it may not be true according to the standard: e.g. one could theoretically implement a C++ abstract machine which encrypts all storage, so no object representation in storage ever has a one-to-one correspondence to object value. Such a C++ implementation would be very interesting, if only to see how badly all my C++ code breaks, but otherwise would not be useful, I suspect.

Niall