On 2016-04-04 18:35, Peter Dimov wrote:
Andrey Semashev wrote:
IMHO, if you need something more than a byte swap then you need a different library.
You're just repeating what you said. My answer remains the same. You never need a byte swap. What you need is always something else, to which a byte swap is sometimes the answer. Specifically, what you need is to be able to read or write integers or floats in non-native formats.
I agree that byte swapping is rarely the end goal. But in practice, with some restrictions, such as assuming base-2 integers and 8-bit bytes across all systems you plan to run your code on, endian conversion can serve as a means of data serialization. What I'm saying above is that if these restrictions don't apply to your case, then you are no longer concerned with endianness, and what you're doing is something else. I would even say that an endianness conversion library won't be useful to you at all, because you would be concerned with bit patterns and not just bytes. A library for portable binary serialization would be extremely useful, no argument about that, but it would have nothing to do with endianness. Sorry if I failed to communicate my point more clearly or if I misunderstood you earlier.
Well, in some of the projects I work on we have something as simple as:
void write_be32(std::uint32_t n, void* p);
std::uint32_t read_be32(const void* p);
// etc.
template< typename T, std::size_t Size = sizeof(T) >
struct big_endian;

// Specializations for different values of Size
template< typename T >
struct big_endian< T, 4 >
{
    static void write(T n, void* p) { write_be32(n, p); }
};
That's more or less what I have, too. Where do we disagree?
Then we probably don't. :) My reply was partially triggered by the use of uint32_t to store endian-converted bits.
Note that
void write_be32(std::uint32_t n, void* p);
need not assume the presence of uint32_t or a particular representation of n. Given any integer n in any representation, it can still portably write a big-endian 32-bit integer into p.
void write_be32( uint_least32_t n, unsigned char p[4] )
{
    // I assume CHAR_BIT of 8 here
    /* assert( n < 2^32 ); */
    p[0] = ( n >> 24 ) & 0xFF;
    p[1] = ( n >> 16 ) & 0xFF;
    p[2] = ( n >> 8 ) & 0xFF;
    p[3] = n & 0xFF;
}
Not sure, I'll have to dig into the standard. For example, uint_least32_t could have a non-base-2 representation, and so could unsigned char, so the bit patterns stored to p would also not be base-2. OTOH, uint32_t and uint8_t are guaranteed to have a base-2 representation, so no surprises in that case.