On 5/17/2017 3:07 AM, Pete Bartlett via Boost wrote:
On 17 May 2017, at 07:54, degski via Boost wrote:
On 17 May 2017 at 07:01, Edward Diener via Boost wrote:
On 5/16/2017 8:46 PM, Peter Dimov via Boost wrote:
Edward Diener wrote:
An alternative algorithm I have been considering, for when the bit sizes are unequal, is:
a) The bits comprising the common length are the low order bits.
The fact that bits are part of the common length does not mean that those bits have anything in common semantically. It seems nonsensical to me (as Edward D. himself seemed to think originally), not to mention overly expensive.
d) If still equal, the larger bit size > the smaller bit size.
This makes 000111 > 111 instead of equal, so that equality (which I presume checks sizes) and equivalence match.
This means you can just as well do the length comparison first. The algorithm (see the sketch below):
1. Compare lengths: shorter < longer.
2. If the lengths are equal, compare bitwise (in some way, e.g. memcmp()), but ALL the bits.
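A minimal sketch of that length-first comparison, assuming boost::dynamic_bitset; length_first_less is a hypothetical helper name, not library code, and comparing from the highest index down is just one consistent choice for step 2:

#include <boost/dynamic_bitset.hpp>
#include <cstddef>

// Hypothetical helper: shorter < longer; equal lengths fall back to
// comparing all the bits, from the highest index down.
bool length_first_less(boost::dynamic_bitset<> const& a,
                       boost::dynamic_bitset<> const& b)
{
    if (a.size() != b.size())           // 1. compare lengths:
        return a.size() < b.size();     //    shorter < longer
    for (std::size_t i = a.size(); i-- > 0; )
        if (a[i] != b[i])               // 2. first differing bit decides
            return b[i];                //    (a has 0 where b has 1 => a < b)
    return false;                       // all bits equal
}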
Any use of to_ulong(), as suggested, or e.g. hashing, will not result in a usable strict weak ordering, i.e. it's useless: to_ulong() throws for bitsets wider than unsigned long, and it maps e.g. 0 and 00 to the same key even though operator== distinguishes them.
The documentation already states that the ordering is lexicographic - there's no fuss or choice to be had here. The ordering MUST be the same as for std::string except the alphabet is now two characters (0/1) instead of 256 chars.
I hope no one thinks it is hard to compare std::strings of non-equal lengths...!
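For comparison, a minimal sketch of the lexicographic ordering described above, defined over the to_string() representation (highest bit printed first); lex_less is a hypothetical name, not the library's actual operator<:

#include <boost/dynamic_bitset.hpp>
#include <algorithm>
#include <cstddef>

// Hypothetical helper: lexicographic "less" over the string forms of
// two bitsets, exactly as std::string compares over the alphabet {0,1}.
bool lex_less(boost::dynamic_bitset<> const& a,
              boost::dynamic_bitset<> const& b)
{
    std::size_t const n = std::min(a.size(), b.size());
    for (std::size_t i = 0; i < n; ++i) {
        bool const x = a[a.size() - 1 - i];  // highest bit first
        bool const y = b[b.size() - 1 - i];
        if (x != y) return y;                // first mismatch: 0 < 1
    }
    return a.size() < b.size();              // equal prefix: shorter < longer
}

With this, lex_less(000111, 111) is true, since the first character '0' sorts before '1', just as "000111" < "111" for std::string.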
I implemented it with a lexicographic compare and pushed it to develop after testing it. I also added some more tests. I honestly still do not see the value of having an ordering for bitsets, and I do not know the initial reasoning for why it was implemented. But since it was there, there might as well be an ordering for bitsets of different sizes.
NB: there are other member functions that might be affected by this issue (the is_subset_of group). Do they demand equal sizes too? The docs don't impose a precondition that they do...
I did not take a look at any of this.