Andrey Semashev wrote:
With (n & 0xFF) you get a bit pattern which on the current architecture is interpreted as the number 232. This bit pattern may be interpreted differently on another architecture.
That's not very likely. unsigned char with CHAR_BIT == 8 is guaranteed to represent the values from 0 to 255. Arbitrary bit permutations, or 0:255 -> 0:255 mappings at the byte level, can reasonably be assumed not to exist; or, if they do exist, we can assume that writing the value 232 to a file or socket writes something that is interpreted as 232 on the receiving end.
If you say that write_be32 is formally portable, then it is imperative that the bit pattern it produces is interpreted equivalently on all architectures.
The _bytes_ it produces have the specified values on all architectures. As long as writing the byte 232 reads the byte 232, we're clear.
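To make the point concrete, here is a minimal sketch of what a write_be32 along these lines might look like (the exact signature is an assumption, not taken from the library under discussion). Each shift-and-mask yields a value in 0..255, which unsigned char with CHAR_BIT == 8 is guaranteed to represent exactly, so the output bytes are the same on every conforming platform regardless of the host's native byte order:

```cpp
#include <cstdint>

// Hypothetical sketch: store n as four big-endian bytes at p.
// (n >> k) & 0xFF is a value in 0..255, so the bytes written are
// the same on all architectures, independent of host endianness.
void write_be32(unsigned char* p, std::uint32_t n)
{
    p[0] = static_cast<unsigned char>((n >> 24) & 0xFF);
    p[1] = static_cast<unsigned char>((n >> 16) & 0xFF);
    p[2] = static_cast<unsigned char>((n >> 8) & 0xFF);
    p[3] = static_cast<unsigned char>(n & 0xFF);
}
```

For example, write_be32(p, 0x010203E8) stores the bytes 1, 2, 3, 232, in that order, on every host.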
On a platform with non-IEEE floats, does write_le32 have to convert to IEEE format before producing the bits in p?
Yes.
What if the value of x on the current platform is not representable as an IEEE float?
It would put the closest representable IEEE value into the bits in p. Not much different from passing a double for x.
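As a sketch of the common case: on a platform whose float already uses the IEEE 754 binary32 encoding (virtually all current hardware), write_le32 for a float reduces to copying the object representation and emitting its bytes in little-endian order. On a non-IEEE platform, the value would instead have to be explicitly converted to the IEEE encoding first, rounding to the nearest representable value as described above. The signature below is an assumption for illustration:

```cpp
#include <cstdint>
#include <cstring>

// Hypothetical sketch, ASSUMING the host float is IEEE 754 binary32.
// A non-IEEE platform would need an explicit conversion to the IEEE
// bit pattern here, rounding to the closest representable value.
void write_le32(unsigned char* p, float x)
{
    static_assert(sizeof(float) == 4, "float must be 32 bits");
    std::uint32_t bits;
    std::memcpy(&bits, &x, sizeof bits); // well-defined type pun
    p[0] = static_cast<unsigned char>(bits & 0xFF);
    p[1] = static_cast<unsigned char>((bits >> 8) & 0xFF);
    p[2] = static_cast<unsigned char>((bits >> 16) & 0xFF);
    p[3] = static_cast<unsigned char>((bits >> 24) & 0xFF);
}
```

With this, write_le32(p, 1.0f) stores the bytes 0x00, 0x00, 0x80, 0x3F (the little-endian form of the binary32 pattern 0x3F800000) on any IEEE host.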