Andrey Semashev wrote:
void write_be32( uint_least32_t n, unsigned char p[4] ) // I assume CHAR_BIT of 8 here
{
    /* assert( n < 2^32 ); */
    p[0] = ( n >> 24 ) & 0xFF;
    p[1] = ( n >> 16 ) & 0xFF;
    p[2] = ( n >>  8 ) & 0xFF;
    p[3] = n & 0xFF;
}
For example, uint_least32_t could have a non-base-2 representation, as could unsigned char, so the bit patterns stored to p would also not be base-2.
The representation of uint_least32_t doesn't matter. The expression n & 0xFF gives you a number between 0 and 255 that contains the lowest 8 VALUE bits of n, which is not the same as the STORAGE bits of n. In no possible representation would a uint_least32_t n with the value (m * 256 + 232) give you anything other than 232 from (n & 0xFF).
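To make that concrete, here's the inverse, reconstructed purely from values (a sketch; the name read_be32 is mine, for illustration). It recovers the original value on any conforming implementation, whatever the storage representation, because it only ever does arithmetic on the byte values:

#include <stdint.h>

/* Sketch of the inverse of write_be32; read_be32 is a hypothetical
   name, not an established interface. Assumes CHAR_BIT == 8,
   like the original. */
uint_least32_t read_be32( unsigned char const p[4] )
{
    return ( (uint_least32_t)p[0] << 24 ) |
           ( (uint_least32_t)p[1] << 16 ) |
           ( (uint_least32_t)p[2] <<  8 ) |
             (uint_least32_t)p[3];
}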
I agree that byte swapping is rarely the end goal. But in practice, with some restrictions, like assuming base-2 integers and 8-bit bytes across all systems you're planning to run your code on, endian conversions can serve as a means of data serialization.
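For example, a minimal sketch under exactly those restrictions (the helper and the two-field wire format are invented for illustration):

#include <stdint.h>

/* Hypothetical wire format: two big-endian 32-bit fields.
   Every 8-bit-byte, base-2 platform that uses write_be32 as
   quoted above produces (and can parse) the same 8 bytes,
   regardless of its own endianness or integer width. */
void serialize_pair( uint_least32_t a, uint_least32_t b,
                     unsigned char out[8] )
{
    write_be32( a, out );     /* bytes 0..3 */
    write_be32( b, out + 4 ); /* bytes 4..7 */
}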
And my point is that the interface you gave handles representational differences with the same ease as it handles differences that are limited to a byte swap. It's actually the same for float. If your base is void write_le32( float x, unsigned char p[4] );, the interface remains the same no matter what the representation of x is; what matters is that you get a little-endian 32-bit IEEE float in p. If, however, you go for float byteswap( float x );, things get hairy even when restricted to IEEE, because an ordinary number may turn into a signaling NaN when byteswapped.
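Here's a sketch of that float version, under the restrictions above plus the extra assumptions spelled out in the comments:

#include <stdint.h>
#include <string.h>

/* Sketch, not a definitive implementation. Additionally assumes
   that uint32_t exists, that float is 32-bit IEEE 754, and that
   float and integer byte orders agree (true on common platforms). */
void write_le32( float x, unsigned char p[4] )
{
    uint32_t n;
    memcpy( &n, &x, 4 );       /* copy the storage bits into an integer */
    p[0] = n & 0xFF;           /* then emit the value bits little-endian, */
    p[1] = ( n >> 8 ) & 0xFF;  /* independent of host endianness */
    p[2] = ( n >> 16 ) & 0xFF;
    p[3] = ( n >> 24 ) & 0xFF;
}

The detour through an unsigned integer is the point: no float object with a byteswapped representation is ever formed, so the signaling NaN hazard never arises.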