28 Jan 2021, 8:16 a.m.
> `((uint32_t) -1)` assumed equivalent to `0xFFFFFFFFu` (which is not guaranteed)
Is it not? IIRC, per the C++ standard the above is equivalent to `(uint32_t)((uint32_t)0u - 1)`, which must give `0xFFFFFFFF`.
It is. To be exact, `(uint32_t) -1` is defined by the C++ standard to be `-1` reduced modulo `2^32`, i.e. `2^32 - 1`, and that is indeed `0xFFFFFFFFu`. Note that this guarantee only applies to conversions *to* unsigned types; conversions to signed types were implementation-defined, though C++20 might have changed that too.
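For anyone who wants to check, both claims can be verified at compile time. A minimal sketch (the `static_assert`s are my own illustration, not from the article being quoted):

```cpp
#include <cstdint>

using std::uint32_t;

// Conversion to an unsigned type is defined as reduction modulo 2^N,
// so both assertions hold on every conforming implementation.
static_assert((uint32_t) -1 == 0xFFFFFFFFu,
              "(uint32_t) -1 is 2^32 - 1");
static_assert((uint32_t) -1 == (uint32_t)((uint32_t) 0u - 1),
              "same value as the 0u - 1 formulation above");

int main() {}
```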