The Endian library was going to support floating point in 1.58, but 1.61 is about to be released and there is still no news about the feature. I'm not saying that the library should have been rejected, but since it was accepted, the maintainer should at least complete the work. And what's the situation now? The floating-point feature is a year overdue, and the project looks almost abandoned: the last commit was four months ago, and the library still isn't complete. I hope the creator, Beman, will take responsibility for this library.
On Sun, Apr 3, 2016 at 4:02 AM, Tatsuyuki Ishi
The Endian library was going to support floating point in 1.58, but 1.61 is about to be released and there is still no news about the feature.
Good point. The floating point support turned out to be difficult to specify precisely, so there is a new plan. See below.
I'm not saying that the library should have been rejected, but since it was accepted, the maintainer should at least complete the work.
And what's the situation now? The floating-point feature is a year overdue, and the project looks almost abandoned: the last commit was four months ago, and the library still isn't complete.
I hope the creator, Beman, will take responsibility for this library.
The new plan is to go ahead with an Endian proposal to the C++ committee. They have the expertise to know whether it is even possible to convert between big and little endian floating point in any useful (and non-dangerous) way. The committee's March and June meetings are aimed at shipping a C++17 committee draft. The November meeting, in Issaquah, WA, US, is likely the first meeting at which an Endian proposal will get any airtime. I'll update the Boost library docs accordingly. Thanks, --Beman
On 4/3/16 7:21 AM, Beman Dawes wrote:
The new plan is to go ahead with an Endian proposal to the C++ committee. They have the expertise to know whether it is even possible to convert between big and little endian floating point in any useful (and non-dangerous) way.
Hmmmm - then there is the SG-6 committee, which has a number of pending proposals dealing with integer arithmetic (fixed binary, arbitrary-length integers, safe integers, and some more). Seems related to me. I'm thinking that maybe this should be included.
The committee's March and June meetings are aimed at shipping a C++17 committee draft. The November meeting, in Issaquah, WA, US, is likely the first meeting at which an Endian proposal will get any airtime.
I'll update the Boost library docs accordingly.
But I'm not sure that this is a good reason not to maintain a library which has been accepted into Boost. We've created an expectation that everything in Boost meets a certain standard, and I'm wary of setting that aside for any reason. I'm sure those who recommended the acceptance of the library had every expectation that it would be maintained if accepted. Do you think that the acceptance should be reversed and the library withdrawn? Robert Ramey
But I'm not sure that this is a good reason not to maintain a library which has been accepted into Boost. We've created an expectation that everything in Boost meets a certain standard, and I'm wary of setting that aside for any reason. I'm sure those who recommended the acceptance of the library had every expectation that it would be maintained if accepted. Do you think that the acceptance should be reversed and the library withdrawn?
It's not unmaintained, and the library "as is" is very useful: it's just that a planned new feature has been dropped pending further investigation. At least that's my reading of things. And yes, floating point formats are a nightmare ;) John.
On 04/03/2016 06:07 PM, Robert Ramey wrote:
But I'm not sure that this is a good reason not to maintain a library which has been accepted into Boost. We've created an expectation that everything in Boost meets a certain standard, and I'm wary of setting that aside for any reason. I'm sure those who recommended the acceptance of the library had every expectation that it would be maintained if accepted. Do you think that the acceptance should be reversed and the library withdrawn?
Let us not blow things out of proportion. Boost.Endian is very useful even without floating-point support, and significantly better designed than the byte-order conversions coming from SG 4. The other issues and pull requests for Boost.Endian all appear to be very low priority (e.g. documentation typos), so it may be too early to claim that the library is unmaintained.
On 4/04/2016 02:21, Beman Dawes wrote:
The new plan is to go ahead with an Endian proposal to the C++ committee. They have the expertise to know whether it is even possible to convert between big and little endian floating point in any useful (and non-dangerous) way.
If you have an API that can take a void* or char* (or other block-of-bytes-of-appropriate-size) and then reinterpret this as a big-endian or little-endian float/double/long double (returning the native representation), then this can work and is a useful function. Similarly taking a native float/double/long double and writing it to a block-of-bytes in a specified endian format.

If you have an API that takes a float/double/long double that was already set with the "wrong" endian and you want to byteswap it, then this cannot work and is a bad API. (This was the problem with the prior version, IIRC.)

As a Boost.Endian library user, until the first style of API is implemented you can work around it by doing endian swaps on unsigned integers only, and using a bitwise_cast-equivalent to convert between the floating-point values and unsigned integers of appropriate size. (Although bitwise_cast is itself a debated topic, since doing it via unions typically has better codegen but is technically UB, and doing it without unions can get you into strict aliasing trouble if you're not careful.)
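A minimal sketch of that workaround, assuming an IEEE 754 binary32 native float and the unsigned-integer conversion functions from boost/endian/conversion.hpp; the helper names are illustrative, and the memcpy-based bit cast sidesteps both the union UB and the strict-aliasing trouble mentioned above:

#include <boost/endian/conversion.hpp>
#include <cstdint>
#include <cstring>

// Write/read a float as big-endian IEEE 754 binary32.  Assumes the native
// float is already IEEE binary32 (true on most current platforms, but an
// assumption nonetheless).
void write_float_be( float x, unsigned char* p )
{
    std::uint32_t bits;
    std::memcpy( &bits, &x, sizeof bits );        // bit cast without aliasing UB
    bits = boost::endian::native_to_big( bits );  // swaps only on little-endian hosts
    std::memcpy( p, &bits, sizeof bits );
}

float read_float_be( const unsigned char* p )
{
    std::uint32_t bits;
    std::memcpy( &bits, p, sizeof bits );
    bits = boost::endian::big_to_native( bits );
    float x;
    std::memcpy( &x, &bits, sizeof x );
    return x;
}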
Gavin Lambert wrote:
If you have an API that can take a void* or char* (or other block-of-bytes-of-appropriate-size) and then reinterpret this as a big-endian or little-endian float/double/long double (returning the native representation), then this can work and is a useful function. Similarly taking a native float/double/long double and writing it to a block-of-bytes in a specified endian format.

"Little-endian float" is not enough information. Floating point formats are not fully described by their endianness. (Neither are integer formats in principle, but in practice they are.) Apart from that, I agree. char[4] /* IEEE 32 bit little endian float */ <-> float is a correct interface. Or even uint32_t <-> float.

uint32_t float_to_bits( float x );
float float_from_bits( uint32_t v );

If you have that, you can then byteswap the uint32_t to your heart's content, even though the correct interface there is also char[4] /* little endian 32 bit */ <-> uint32_t.
As a Boost.Endian library user, until the first style of API is implemented you can work around it by doing endian swaps on unsigned integers only, and using a bitwise_cast-equivalent to convert between the floating-point values and unsigned integers of appropriate size.
Something like that, yes, but what I'm suggesting is not a bitwise cast. There need not be any correspondence between the uint32_t bits and the float bits, because the uint32_t is IEEE, and the float is native. Technically, this could also be true for integers; the char[4] representation is 32 bit with no padding and no trapping, and the uint32_t (which would have to be uint_least32_t) may have them. (Although this assumes CHAR_BIT == 8. I'm not sure what is the right interface when CHAR_BIT is 32.)
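A sketch of the float_to_bits / float_from_bits pair above, under the simplifying assumption that the native float already is IEEE 754 binary32, in which case the mapping degenerates to a bit copy; on a platform with a different float representation these functions would instead have to assemble and decode the IEEE fields (e.g. via std::frexp / std::ldexp) while keeping the same signatures:

#include <cstdint>
#include <cstring>

// The uint32_t always holds the IEEE 754 binary32 encoding; the float is
// whatever the platform uses natively (assumed to be IEEE here).
std::uint32_t float_to_bits( float x )
{
    std::uint32_t v;
    std::memcpy( &v, &x, sizeof v );
    return v;
}

float float_from_bits( std::uint32_t v )
{
    float x;
    std::memcpy( &x, &v, sizeof x );
    return x;
}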
On 2016-04-04 14:54, Peter Dimov wrote:
Gavin Lambert wrote:
If you have an API that can take a void* or char* (or other block-of-bytes-of-appropriate-size) and then reinterpret this as a big-endian or little-endian float/double/long double (returning the native representation), then this can work and is a useful function. Similarly taking a native float/double/long double and writing it to a block-of-bytes in a specified endian format.
"Little-endian float" is not enough information. Floating point formats are not fully described by their endianness. (Neither are integer formats in principle, but in practice they are.)
Do you mean word-granular endianness here? I would say it's just another type of endianness.
Apart from that, I agree. char[4] /* IEEE 32 bit little endian float */ <-> float is a correct interface. Or even uint32_t <-> float.
uint32_t float_to_bits( float x ); float float_from_bits( uint32_t v );
If you have that, you can then byteswap the uint32_t to your heart's content, even though the correct interface there is also char[4] /* little endian 32 bit */ <-> uint32_t.
As a Boost.Endian library user, until the first style of API is implemented you can work around it by doing endian swaps on unsigned integers only, and using a bitwise_cast-equivalent to convert between the floating-point values and unsigned integers of appropriate size.
Something like that, yes, but what I'm suggesting is not a bitwise cast. There need not be any correspondence between the uint32_t bits and the float bits, because the uint32_t is IEEE, and the float is native.
I think it's not Boost.Endian's job to deal with FP implementations. Or integer implementations, for that matter. The interface can be defined to require a certain implementation of float or to simply ignore the implementation and consider the input as an opaque sequence of bytes. The implementation of that interface should just reorder bytes in that sequence, as requested, and do nothing more than that.
Technically, this could also be true for integers; the char[4] representation is 32 bit with no padding and no trapping, and the uint32_t (which would have to be uint_least32_t) may have them.
uint32_t cannot have traps or padding bits, so no problem there. If the interface uses this type then it restricts the library to architectures that can provide such a type, which realistically should be fine, but probably would not fit the C++ standard. A more generic interface would just avoid using integers as storage for bits and use a byte buffer instead (i.e. an array of char/unsigned char/signed char, or input/output iterators of such a value type, or some such).
(Although this assumes CHAR_BIT == 8. I'm not sure what is the right interface when CHAR_BIT is 32.)
If a raw buffer is used to store bits then there is no problem, as any C++ type has a size that is an integral number of chars. This may pose interoperability issues between platforms with different values of CHAR_BIT, but I'm inclined to think this is not a problem of endianness per se, but a problem of portable (de)serialization, which is a much wider problem than just endianness management (which, by definition, operates on whole bytes - which are chars in C++).
Andrey Semashev wrote:
I think it's not Boost.Endian's job to deal with FP implementations. Or integer implementations, for that matter.
Dealing with different implementations is technically outside the charter of an "endianness" library. The end goal, however, is to let one read and write integers and floats in non-native formats. And when those non-native formats have a representation that is not a simple byteswap away from the native one, it would be kind of useful if the library still worked.
The interface can be defined to require a certain implementation of float or to simply ignore the implementation and consider the input as an opaque sequence of bytes. The implementation of that interface should just reorder bytes in that sequence, as requested, and do nothing more than that.
The interface could obviously be defined in various ways. What specific interface do you suggest?
On 2016-04-04 16:55, Peter Dimov wrote:
Andrey Semashev wrote:
I think it's not Boost.Endian's job to deal with FP implementations. Or integer implementations, for that matter.
Dealing with different implementations is technically outside the charter of an "endianness" library. The end goal, however, is to let one read and write integers and floats in non-native formats. And when those non-native formats have a representation that is not a simple byteswap away from the native one, it would be kind of useful if the library still worked.
IMHO, if you need something more than a byte swap then you need a different library.
The interface can be defined to require a certain implementation of float or to simply ignore the implementation and consider the input as an opaque sequence of bytes. The implementation of that interface should just reorder bytes in that sequence, as requested, and do nothing more than that.
The interface could obviously be defined in various ways. What specific interface do you suggest?
Well, in some of the projects I work on we have something as simple as:

void write_be32(std::uint32_t n, void* p);
std::uint32_t read_be32(const void* p);
// etc.

template< typename T, std::size_t Size = sizeof(T) >
struct big_endian;

// Specializations for different values of Size
template< typename T >
struct big_endian< T, 4 >
{
    static void write(T n, void* p) { write_be32(n, p); }
    static T read(const void* p) { return read_be32(p); }
};

// ditto for little endian

As simplistic and error prone as it may look, this basic interface is enough to build higher level and more specialized tools upon.
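A hypothetical usage sketch of the big_endian trait above (it relies on the declared write_be32 / read_be32 being defined elsewhere):

unsigned char buf[4];
big_endian< std::uint32_t >::write(0xDEADBEEFu, buf);       // buf now holds DE AD BE EF
std::uint32_t n = big_endian< std::uint32_t >::read(buf);   // n == 0xDEADBEEF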
On 2016-04-04 18:25, degski wrote:
IMHO, if you need something more than a byte swap then you need a different library.
IMEHO, if all you need is a byte swap, you don't need a library.
Well, you do want to abstract away from the compiler-specific intrinsics, platform-specific macros and CPU-specific instructions in asm blocks, don't you?
Andrey Semashev wrote:
IMHO, if you need something more than a byte swap then you need a different library.
You're just repeating what you said. My answer remains the same. You never need a byte swap. What you need is always something else, to which a byte swap is sometimes the answer. Specifically, what you need is to be able to read or write integers or floats in non-native formats.
Well, in some of the projects I work on we have something as simple as:
void write_be32(std::uint32_t n, void* p);
std::uint32_t read_be32(const void* p);
// etc.
template< typename T, std::size_t Size = sizeof(T) > struct big_endian;
// Specializations for different values of Size
template< typename T >
struct big_endian< T, 4 >
{
    static void write(T n, void* p) { write_be32(n, p); }
That's more or less what I have, too. Where do we disagree? Note that
void write_be32(std::uint32_t n, void* p);
need not assume the presence of uint32_t or a particular representation of n. Given any integer n in any representation, it could still portably write a big-endian 32 bit integer into p.

void write_be32( uint_least32_t n, unsigned char p[4] )
{
    // I assume CHAR_BIT of 8 here
    /* assert( n < 2^32 ); */
    p[0] = ( n >> 24 ) & 0xFF;
    p[1] = ( n >> 16 ) & 0xFF;
    p[2] = ( n >> 8 ) & 0xFF;
    p[3] = n & 0xFF;
}

This is, however, not what Boost.Endian does.
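A matching read counterpart in the same spirit (not part of the original post, and again assuming CHAR_BIT == 8) would reassemble the value from the big-endian bytes using shifts on values rather than on storage:

#include <cstdint>

std::uint_least32_t read_be32( const unsigned char p[4] )
{
    // Value arithmetic only, so the host representation of
    // uint_least32_t does not matter.
    return ( std::uint_least32_t( p[0] ) << 24 )
         | ( std::uint_least32_t( p[1] ) << 16 )
         | ( std::uint_least32_t( p[2] ) <<  8 )
         |   std::uint_least32_t( p[3] );
}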
On 2016-04-04 18:35, Peter Dimov wrote:
Andrey Semashev wrote:
IMHO, if you need something more than a byte swap then you need a different library.
You're just repeating what you said. My answer remains the same. You never need a byte swap. What you need is always something else, to which a byte swap is sometimes the answer. Specifically, what you need is to be able to read or write integers or floats in non-native formats.
I agree that byte swapping is rarely the end goal. But in practice, with some restrictions, like assuming base-2 integers and 8-bit bytes across all systems you're planning to run your code on, endian conversions can serve as a means of data serialization. What I'm saying with my answer above is that if these restrictions don't apply to your case then you are no longer concerned with endianness, and what you're doing is something else. I would even say that an endianness conversion library won't be useful to you at all, because you would be concerned with bit patterns, and not just bytes. A library for portable binary serialization would be extremely useful, no argument about that, but it would have nothing to do with endianness. Sorry if I failed to communicate my point more clearly or if I misunderstood you earlier.
Well, in some of the projects I work on we have something as simple as:
void write_be32(std::uint32_t n, void* p);
std::uint32_t read_be32(const void* p);
// etc.
template< typename T, std::size_t Size = sizeof(T) > struct big_endian;
// Specializations for different values of Size
template< typename T >
struct big_endian< T, 4 >
{
    static void write(T n, void* p) { write_be32(n, p); }
That's more or less what I have, too. Where do we disagree?
Then we probably don't. :) My reply was partially triggered by the use of uint32_t to store endian-converted bits.
Note that
void write_be32(std::uint32_t n, void* p);
need not assume the presence of uint32_t or a particular representation of n. Given any integer n in any representation, it could still portably write a big-endian 32 bit integer into p.
void write_be32( uint_least32_t n, unsigned char p[4] )
{
    // I assume CHAR_BIT of 8 here
    /* assert( n < 2^32 ); */
    p[0] = ( n >> 24 ) & 0xFF;
    p[1] = ( n >> 16 ) & 0xFF;
    p[2] = ( n >> 8 ) & 0xFF;
    p[3] = n & 0xFF;
}
Not sure, I'll have to dig into the standard. For example, uint_least32_t could have a non-base-2 representation, and so could unsigned char, so the bit patterns that are stored to p would also not be base-2. OTOH, uint32_t and uint8_t are guaranteed to have a base-2 representation, so no surprises in this case.
Andrey Semashev wrote:
void write_be32( uint_least32_t n, unsigned char p[4] )
{
    // I assume CHAR_BIT of 8 here
    /* assert( n < 2^32 ); */
    p[0] = ( n >> 24 ) & 0xFF;
    p[1] = ( n >> 16 ) & 0xFF;
    p[2] = ( n >> 8 ) & 0xFF;
    p[3] = n & 0xFF;
}
For example, uint_least32_t could have a non-base-2 representation, and so could unsigned char, so the bit patterns that are stored to p would also not be base-2.
The representation of uint_least32_t doesn't matter. The expression n & 0xFF gives you a number between 0 and 255 that contains the lowest 8 VALUE bits of n, which is not the same as the STORAGE bits of n. In no possible representation would an uint_least32_t n with a value (m * 256 + 232) give you something other than 232 from (n & 0xFF).
I agree that byte swapping is rarely the end goal. But in practice, with some restrictions, like assuming base-2 integers and 8-bit bytes across all systems you're planning to run your code on, endian conversions can serve as a means of data serialization.
And my point is that the interface you gave handles representational differences with the same ease as it handles differences that are limited to a byte swap.

It's actually the same for float. If your base is

void write_le32( float x, unsigned char p[4] );

the interface remains the same no matter what the representation of x. What matters is that you get a little-endian 32 bit IEEE float in p.

If however you go for

float byteswap( float x );

then things get hairy even if restricted to IEEE, because an ordinary number may turn into a signaling NaN when byteswapped.
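For concreteness, a sketch of write_le32 under the simplifying assumption that the native float is IEEE 754 binary32; a platform with a different float representation would have to build the IEEE encoding itself, but the signature and the bytes written would stay the same:

#include <cstdint>
#include <cstring>

void write_le32( float x, unsigned char p[4] )
{
    std::uint32_t bits;
    std::memcpy( &bits, &x, sizeof bits );  // assumes native float is IEEE binary32
    p[0] = bits & 0xFF;                     // least significant byte first
    p[1] = ( bits >> 8 ) & 0xFF;
    p[2] = ( bits >> 16 ) & 0xFF;
    p[3] = ( bits >> 24 ) & 0xFF;
}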
On 2016-04-04 19:37, Peter Dimov wrote:
Andrey Semashev wrote:
void write_be32( uint_least32_t n, unsigned char p[4] )
{
    // I assume CHAR_BIT of 8 here
    /* assert( n < 2^32 ); */
    p[0] = ( n >> 24 ) & 0xFF;
    p[1] = ( n >> 16 ) & 0xFF;
    p[2] = ( n >> 8 ) & 0xFF;
    p[3] = n & 0xFF;
}
For example, uint_least32_t could have a non-base-2 representation, and so could unsigned char, so the bit patterns that are stored to p would also not be base-2.
The representation of uint_least32_t doesn't matter. The expression n & 0xFF gives you a number between 0 and 255 that contains the lowest 8 VALUE bits of n, which is not the same as the STORAGE bits of n. In no possible representation would an uint_least32_t n with a value (m * 256 + 232) give you something other than 232 from (n & 0xFF).
With (n & 0xFF) you get a bit pattern which on the current architecture is interpreted as the number 232. This bit pattern may be interpreted differently on another architecture. If you say that write_be32 is formally portable then it is imperative that the bit pattern it produces is interpreted equivalently on all architectures. To guarantee that, write_be32 might have to do something other than what is written above, or other than what we call byte swapping. Otherwise you have to force the representation of the input.
I agree that byte swapping is rarely the end goal. But in practice, with some restrictions, like assuming base-2 integers and 8-bit bytes across all systems you're planning to run your code on, endian conversions can serve as a means of data serialization.
And my point is that the interface you gave handles representational differences with the same ease as it handles differences that are limited to a byte swap.
It's actually the same for float. If your base is
void write_le32( float x, unsigned char p[4] );
the interface remains the same no matter what the representation of x. What matters is that you get a little-endian 32 bit IEEE float in p.
On a platform with non-IEEE floats, does write_le32 have to convert to IEEE format before producing the bits in p? What if the x value on the current platform is not representable in IEEE float?
If however you go for
float byteswap( float x );
then things get hairy even if restricted to IEEE, because an ordinary number may turn into a signaling NaN when byteswapped.
Absolutely agreed.
Andrey Semashev wrote:
With (n & 0xFF) you get a bit pattern which on the current architecture is interpreted as the number 232. This bit pattern may be interpreted differently on another architecture.
That's not very likely. unsigned char with CHAR_BIT == 8 is guaranteed to represent the values from 0 to 255. Arbitrary bit permutations or 0:255 -> 0:255 mappings at the byte level can reasonably be assumed to not exist, or if they do, that writing the value of 232 to a file or socket would write something there that would be interpreted as 232 on the receiving end.
If you say that write_be32 is formally portable then it is imperative that the bit pattern it produces is interpreted equivalently on all architectures.
The _bytes_ it produces have the specified values on all architectures. As long as writing the byte 232 reads the byte 232, we're clear.
On a platform with non-IEEE floats, does write_le32 have to convert to IEEE format before producing the bits in p?
Yes.
What if the x value on the current platform is not representable in IEEE float?
It would put the closest representable IEEE value into the bits in p. Not much different from passing a double for x.
On 2016-04-04 20:21, Peter Dimov wrote:
Andrey Semashev wrote:
On a platform with non-IEEE floats, does write_le32 have to convert to IEEE format before producing the bits in p?
Yes.
What if the x value on the current platform is not representable in IEEE float?
It would put the closest representable IEEE value into the bits in p. Not much different from passing a double for x.
Since you convert the input value, possibly even losing information, this is no longer about endian conversion but rather serialization. At least, that's not what I expect from an endian library.
Andrey Semashev wrote:
Since you convert the input value, possibly even losing information, this is no longer about endian conversion but rather serialization. At least, that's not what I expect from an endian library.
Once again, my point is that this interface handles representational differences as well as mere endianness differences. There is no (interface-imposed) need to make the library not work. Yes, if your platform's float can't be represented exactly in IEEE 32 bit, or if IEEE 32 bit can't be represented exactly in your platform's float, the roundtrip will not be perfect. Making the library not work purely out of spite is of no help though. The file still has 32 bit IEEE little-endian floats in it and you're expected to read or produce it. This is the task that the library is supposed to solve.
On 5/04/2016 05:59, Peter Dimov wrote:
Andrey Semashev wrote:
Since you convert the input value, possibly even losing information, this is no longer about endian conversion but rather serialization. At least, that's not what I expect from an endian library.
[...] Yes, if your platform's float can't be represented exactly in IEEE 32 bit, or if IEEE 32 bit can't be represented exactly in your platform's float, the roundtrip will not be perfect. Making the library not work purely out of spite is of no help though. The file still has 32 bit IEEE little-endian floats in it and you're expected to read or produce it. This is the task that the library is supposed to solve.
I think Andrey is correct that this is essentially a serialisation problem. But I agree with Peter that this is exactly the purpose of an Endian library -- to perform portable serialisation/deserialisation of values from a defined serialised format (which mostly consists of the endianness, but for floats can include other factors such as specifying IEEE format vs. some other format) to native memory format and back again. There's no particular reason why the library couldn't also contain conversions for some specific non-IEEE floating point format, if one is sufficiently popular or appears in well-known file formats such that it might be useful. And it could be extensible to other formats not representable in basic C++ types, such as rational numbers or quad precision floats. They just need a well-defined block-of-bytes representation and an equivalent C++ class. But those things are probably beyond the scope of the initial release.
Gavin Lambert wrote:
There's no particular reason why the library couldn't also contain conversions for some specific non-IEEE floating point format, if one is sufficiently popular or appears in well-known file formats such that it might be useful.
Yes in principle, but that's not the case for which I was arguing, it's the opposite. I was discussing the scenario in which the file format contains IEEE (little-endian, say) floats, and the platform's floats are not IEEE (as opposed to merely not little endian).
On 5/04/2016 13:41, Peter Dimov wrote:
Yes in principle, but that's not the case for which I was arguing, it's the opposite. I was discussing the scenario in which the file format contains IEEE (little-endian, say) floats, and the platform's floats are not IEEE (as opposed to merely not little endian).
Yes, I know, and I was agreeing with that (in the first part of my reply, not the part you quoted). My apologies if that was unclear.
On 4/4/16 18:36, Gavin Lambert wrote:
There's no particular reason why the library couldn't also contain conversions for some specific non-IEEE floating point format, if one is sufficiently popular or appears in well-known file formats such that it might be useful.
And it could be extensible to other formats not representable in basic C++ types, such as rational numbers or quad precision floats. They just need a well-defined block-of-bytes representation and an equivalent C++ class.
Except that we would no longer be talking about endianness, and those conversions in a Boost.Endian library would be odd at best. michael -- Michael Caisse Ciere Consulting ciere.com
On 5/04/2016 15:48, Michael Caisse wrote:
Except that we would no longer be talking about endianness, and those conversions in a Boost.Endian library would be odd at best.
I think you missed my point about "endianness" just being a way to define a particular storage format (byte layout) for integers. Other types have other properties (floats have both endianness and IEEE layout vs. other layouts, for example). (Even endianness itself isn't limited to big vs. little as some people think -- some architectures use a mixed format, although hopefully those are less likely to end up in serialisation formats.)

Ultimately though it's all just specifying a host-architecture-independent storage format (bit layout) for a given type. The purpose of Boost.Endian is to be able to take a block-of-bytes in such an explicitly specified format and convert it to the native representation of the value, and the reverse. (Note that this process might be more complex than just a byte swap in some cases.)

The primary focus is on native types (int types initially, and we're discussing extending that to float types as well) and that's all I'd expect from an initial version of the library. But it would be useful (not mandatory, just convenient) if another library or application could hook into the design to extend it to support additional types as I mentioned. But that was just an aside, a "would be nice to keep this in mind".
On April 5, 2016 12:18:53 AM EDT, Gavin Lambert
On 5/04/2016 15:48, Michael Caisse wrote:
Except that we would no longer be talking about endianness, and those conversions in a Boost.Endian library would be odd at best.
I think you missed my point about "endianness" just being a way to define a particular storage format (byte layout) for integers.
Other types have other properties (floats have both endianness and IEEE layout vs. other layouts, for example).
(Even endianness itself isn't limited to big vs. little as some people think -- some architectures use a mixed format, although hopefully those are less likely to end up in serialisation formats.)
Ultimately though it's all just specifying a host-architecture-independent storage format (bit layout) for a given type.
The purpose of Boost.Endian is to be able to take a block-of-bytes in such an explicitly specified format and convert it to the native representation of the value, and the reverse. (Note that this process might be more complex than just a byte swap in some cases.)
That's quite a leap from what I see in the docs and from what has been meant traditionally by such facilities. In my experience, endianness was all about byte swapping of otherwise compatible binary data representations. Anything more is serialization or marshalling. The difference is performance. I'll grant that a well-defined wire format would increase portability, and a well-chosen format would imply very little data manipulation for common platforms, but that exceeds the scope of an endian library. At that point, we're talking Boost.Exchange or something. ___ Rob (Sent from my portable computation engine)
Gavin Lambert wrote:
Ultimately though it's all just specifying a host-architecture-independent storage format (bit layout) for a given type.
It is, but my main point was that if you have an "endianness" library that gives you

void write_ieee32le( float x, unsigned char * p );

this interface does not change when x is not IEEE 32 bit float. So while it's indeed true that the library no longer does endianness conversion, making another library that is not called "Endian" but has the exact same interface and works in the exact same way as the first one when x is IEEE 32 bit float would be mighty silly. If we start extending the interface with various other external formats, the analogy no longer works.
On Tue, Apr 5, 2016 at 4:44 PM, Peter Dimov
Gavin Lambert wrote:
Ultimately though it's all just specifying a host-architecture-independent storage format (bit layout) for a given type.
It is, but my main point was that if you have an "endianness" library that gives you
void write_ieee32le( float x, unsigned char * p );
this interface does not change when x is not IEEE 32 bit float.
Boost.Endian and let's call it Boost.BinarySerialization may have similar interfaces, but they are not required to work similarly. Boost.Endian has to convert byte order and doesn't have to deal with portable data representation.
So while it's indeed true that the library no longer does endianness conversion, making another library that is not called "Endian" but has the exact same interface and works in the exact same way as the first one when x is IEEE 32 bit float would be mighty silly.
There is already overlap between different libraries in Boost; there's nothing silly about that. IMHO, stuffing unrelated functionality in a library is much worse.
On Tue, Apr 5, 2016 at 5:09 PM, Peter Dimov
Andrey Semashev wrote:
Boost.Endian and let's call it Boost.BinarySerialization may have similar interfaces, but they are not required to work similarly.
Not "similar". The same. And not "similarly". In the exact same way.
Remarkable how you deliberately miss my point.
I guess I just don't understand why you think the two libraries must work the same way. To me endian conversion and serialization are two different things, even if sometimes they seem to produce the same result.
On 6/04/2016 01:44, Peter Dimov wrote:
Gavin Lambert wrote:
Ultimately though it's all just specifying a host-architecture-independent storage format (bit layout) for a given type.
It is, but my main point was that if you have an "endianness" library that gives you
void write_ieee32le( float x, unsigned char * p );
this interface does not change when x is not IEEE 32 bit float.
So while it's indeed true that the library no longer does endianness conversion, making another library that is not called "Endian" but has the exact same interface and works in the exact same way as the first one when x is IEEE 32 bit float would be mighty silly.
Yes, I was trying to make the exact same point as well. Perhaps I worded it badly.
participants (11)
- Andrey Semashev
- Beman Dawes
- Bjorn Reese
- degski
- Gavin Lambert
- John Maddock
- Michael Caisse
- Peter Dimov
- Rob Stewart
- Robert Ramey
- Tatsuyuki Ishi