On 3/03/2020 03:52, degski wrote:
Could you please try and explain why you think that signed is not a good type for a size (other than stating that "size cannot be negative")? What I am saying is that "valid size cannot be negative". You could also make the size a complex number; that would be an analogue. The imaginary component would have to be zero, but otherwise it would work just fine. The fact that that set is larger than the problem domain is IMO orthogonal to that.
Using a signed type for a size introduces more potential for bugs than using an unsigned type. In addition to what Alexander said about intent (negative values are never valid sizes, by definition), there are a few other reasons why signed sizes are a bad idea:

1. If you do range checking, you now have to write "index < 0 || index >= size()" instead of only "index >= size()". This is both more work and easily forgotten, which can introduce bugs. (See the first sketch at the end of this message.)
2. If you actually do end up wrapping around the type's range somehow, with unsigned values this is well defined, while with signed values it is UB. Compilers react increasingly poorly to UB, in many weird ways, so it's a bad idea to increase the probability of it occurring. (See the second sketch below.)

There is exactly one reason why a signed size type is better: if you are subtracting indexes for any reason, it is usually more convenient to deal with getting a -1 than getting a max size_t. However, it's easy to recognise that you've hit that case and to cast explicitly yourself to a signed type and back as needed (with appropriate sanity checking); the third sketch below illustrates this. This is also usually free, as it's simply a reinterpretation of an existing bit pattern without any actual change to the bit pattern, or just using a different assembly instruction.

Yes, there are some kinds of code (notably std::string::substr) that might be less surprising if they used signed indexing, because they tend to be involved in index subtraction and can end up doing the wrong thing if you don't externally check for improper conditions. But as usual, C++ aims for performance by default and trusts you to do any necessary sanity checks externally, or to omit checks if you think you know better.

BUT: if you really want signed types in the interface, nothing stops you from wrapping the standard type in your own type that uses signed indexing. (I actually use this technique a fair bit when I want vectors and arrays that are indexed by enums or by typesafe integers, or that model an index range that doesn't start at 0; the last sketch below gives the flavour.) And the compiler will usually inline everything for free anyway.
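
To make point 1 concrete, here is a minimal sketch; the function names are illustrative, not from any library:

    #include <cstddef>
    #include <vector>

    // Signed index: two comparisons, and the "< 0" half is easy to forget.
    bool in_range_signed(std::ptrdiff_t index, const std::vector<int>& v) {
        return index >= 0 && index < static_cast<std::ptrdiff_t>(v.size());
    }

    // Unsigned index: one comparison, because no negative values exist.
    bool in_range_unsigned(std::size_t index, const std::vector<int>& v) {
        return index < v.size();
    }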
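
For point 2, a sketch of the difference (the signed line is deliberately broken; a compiler is allowed to do anything with it):

    #include <cstddef>
    #include <limits>

    int main() {
        std::size_t u = 0;
        --u; // well defined: wraps to the maximum value of std::size_t

        int s = std::numeric_limits<int>::max();
        ++s; // undefined behaviour: signed overflow, anything may happen
    }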
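
The cast-to-signed-and-back approach might look like the following; index_distance is a hypothetical helper, not a standard function:

    #include <cassert>
    #include <cstddef>
    #include <limits>

    // Signed difference of two unsigned indexes; may legitimately be -1.
    std::ptrdiff_t index_distance(std::size_t from, std::size_t to) {
        // Sanity check: both values must be representable as ptrdiff_t
        // for the casts below to be meaningful.
        constexpr auto limit = static_cast<std::size_t>(
            std::numeric_limits<std::ptrdiff_t>::max());
        assert(from <= limit && to <= limit);
        return static_cast<std::ptrdiff_t>(to)
             - static_cast<std::ptrdiff_t>(from);
    }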
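
And the wrapper idea, sketched for the enum-indexed case (the names are made up; a signed-index variant would look much the same):

    #include <array>
    #include <cstddef>

    enum class Colour { red, green, blue, count_ };

    // A std::array addressed by an enum instead of a raw integer.
    template <typename T, typename Enum>
    struct enum_array {
        std::array<T, static_cast<std::size_t>(Enum::count_)> data{};

        T& operator[](Enum e) {
            return data[static_cast<std::size_t>(e)];
        }
        const T& operator[](Enum e) const {
            return data[static_cast<std::size_t>(e)];
        }
    };

    // Usage: counts[Colour::red] = 1; -- inlines to a plain array index.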