On Mon, 1 Jul 2013 05:16:22 -0400, Rob Stewart wrote:
On Jun 30, 2013, at 12:14 PM, Paul Long wrote:

uint32_t u;
char networkByteOrder[sizeof u];

networkByteOrder[3] = u;
networkByteOrder[2] = u >> CHAR_BIT;
networkByteOrder[1] = u >> CHAR_BIT * 2;
networkByteOrder[0] = u >> CHAR_BIT * 3;
ibitstream does not do this specific thing, but it shows how one can write portable code without regard to platform endianness. Therefore, ibitstream does not "translate," at least IMO.

If you do the same thing on all platforms, then you have no portability. If you do that only on little-endian platforms, then you've reinvented the network/host byte-ordering mechanism. If something else, then I don't understand you.
Unless you're referring to my mistake, this should be portable because integers are operated on as _values_, not according to their representation in memory. It's only when one aliases them as a char array that their representation, e.g., endianness, becomes relevant.
You are aliasing a value as a char array, so you are relying on the endianness, and if you do that for every value on every platform, then the bitstream's order depends upon the endianness of the host that creates it and isn't portable.
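The aliasing point can be seen in a small sketch (not from either message; the value 0x12345678 is just an illustration): inspecting the same uint32_t through its in-memory bytes gives a host-dependent answer, which is exactly why a format built that way is tied to the machine that produced it.

```c
#include <stdint.h>
#include <string.h>

/* Return the first in-memory byte of u. The result depends on the
   host's endianness: for u = 0x12345678 it is 0x78 on a little-endian
   host but 0x12 on a big-endian one. */
int first_byte(uint32_t u)
{
    unsigned char b[sizeof u];
    memcpy(b, &u, sizeof u);  /* copy the representation, not the value */
    return b[0];
}
```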
If you only do such things on little endian hosts, and not on big endian hosts, then you're doing the normal host->network->host order swaps of network communications, albeit with your own reordering code.
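The swaps described here are what the standard conversion functions already do; a minimal sketch using the POSIX htonl/ntohl pair (assumed available via <arpa/inet.h>):

```c
#include <arpa/inet.h>  /* htonl, ntohl (POSIX) */
#include <stdint.h>

/* htonl reorders bytes on little-endian hosts and is a no-op on
   big-endian ones, so the wire value is big endian either way;
   ntohl undoes it on the receiving side. */
uint32_t roundtrip(uint32_t host_value)
{
    uint32_t wire = htonl(host_value);  /* host -> network order */
    return ntohl(wire);                 /* network -> host order */
}
```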
I'm still not sure what you mean. I wrote encode and decode functions at http://codepad.org/nml6RjX5, which apparently runs on a big-endian platform. Maybe that'll help our understanding. I'm saying that, using this technique, one can write portable code--without the hton family of functions--that makes no assumptions about the underlying endianness of the platform. Of course, one has to choose an endianness for the encoding; in this example, I have chosen big endian.

Is that what you're concerned about--that any decoder will have to know the endianness of the encoded value and therefore must "translate"? I agree that a decoder has to know the endianness of the encoded value, but it does not need to know the endianness of the platform on which it is running; see my decode function at http://codepad.org/nml6RjX5.

As an aside, I bet there is a dichotomy of developers. Some, like me, mostly write code that supports a standardized encoding or protocol, in which the endianness is prescribed. Others are just interested in communicating data between proprietary entities and don't much care which endianness is used, so it might as well be the endianness of one, and hopefully all, of the platforms.

Paul
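The codepad paste itself isn't reproduced in this thread, but the technique described can be sketched like this (an illustration, not the actual paste): both functions operate on the value with shifts, so neither depends on the host's byte order, only on the agreed big-endian encoding.

```c
#include <limits.h>
#include <stdint.h>

/* Encode u most-significant byte first (big endian) using only
   value arithmetic; the host's in-memory layout never matters. */
void encode(uint32_t u, unsigned char buf[4])
{
    buf[0] = (unsigned char)(u >> CHAR_BIT * 3);
    buf[1] = (unsigned char)(u >> CHAR_BIT * 2);
    buf[2] = (unsigned char)(u >> CHAR_BIT);
    buf[3] = (unsigned char)u;
}

/* Reverse it, again purely with value arithmetic. */
uint32_t decode(const unsigned char buf[4])
{
    return ((uint32_t)buf[0] << CHAR_BIT * 3) |
           ((uint32_t)buf[1] << CHAR_BIT * 2) |
           ((uint32_t)buf[2] << CHAR_BIT) |
            (uint32_t)buf[3];
}
```

Run on either kind of host, encode produces the same byte sequence and decode recovers the same value, which is the portability claim being made.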