---------- Forwarded message ---------
From: degski
Because if you use a signed type you have no compile-time guarantee that the value is unsigned (using "type" and "value" to differentiate those 2).
You do indeed have the compile-time guarantee that it is non-negative. The problem is that, although you now know it is non-negative, you have lost any compile-time means of determining whether the value is correct at run-time; the compile-time guarantee is just tautological. I don't really understand what you have against it; one can always use an int64_t if worried about int32_t overflow. Same as with a pointer: it can be NULL.
This value of null is in itself nothing special: it points at the top/bottom of the address space, where there is nothing, so we hit UB. I don't think this is a good parallel. Pointers are not numbers; adding them is meaningless. Adding up sizes is not meaningless, and we do it every day. Adding (or subtracting) a size and a pointer difference is not meaningless either; that is why size needs to be signed.
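A minimal sketch of the kind of arithmetic in question (the function names are illustrative, not from any real API):

```cpp
#include <cstddef>
#include <vector>

// Pointer differences are signed (std::ptrdiff_t) and may legitimately be
// negative, e.g. when q points past p within the same array.
std::ptrdiff_t offset_between(const int* p, const int* q) {
    return p - q;
}

// Combining an index with such a difference: with a signed index the result
// can still be range-checked afterwards; with an unsigned index a negative
// result has already wrapped around before any check can see it.
int element_at(const std::vector<int>& v, std::ptrdiff_t base, std::ptrdiff_t delta) {
    std::ptrdiff_t i = base + delta;
    if (i < 0 || i >= static_cast<std::ptrdiff_t>(v.size()))
        return -1;                            // out of range, detectable here
    return v[static_cast<std::size_t>(i)];
}
```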
If you want an interface that guarantees at compile time that the value passed over an API is never NULL, you use e.g. a reference, which can never be NULL (or not_null<T>).
Herb's most recent post puts another angle on this: http://herbsutter.com/2020/02/23/references-simply/ .
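As a rough sketch of that idea (the wrapper below is hand-rolled for illustration; the real gsl::not_null from the Guidelines Support Library is the usual spelling):

```cpp
#include <cassert>
#include <cstddef>
#include <string>

// A reference parameter cannot be null, so the non-null requirement is
// carried by the type itself and needs no run-time check inside the callee.
std::size_t length_of(const std::string& s) {
    return s.size();
}

// A minimal, hand-rolled stand-in for not_null<T>: the check happens once,
// at construction, instead of at every use site.
template <typename T>
class NotNull {
public:
    explicit NotNull(T* p) : ptr_(p) { assert(p != nullptr); }
    T& operator*()  const { return *ptr_; }
    T* operator->() const { return ptr_; }
private:
    T* ptr_;
};
```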
Didn't you argue in the mail before that there will never be anything of size 2^32, and hence certainly nothing like 2^64? How could you overflow that, then?
If you are manipulating pointers (subtracting them, adding differences). I thought I wrote that. The pointers might be pointers into virtual memory and can have any value in [1, 2^56].
Not sure I understand that. Can we agree that a 64-bit unsigned type is big enough to store any size of any container and hence no overflow is possible?
Yes, certainly. What about a 63-bit value (the positive part of a signed 64-bit int): is that too small? ... a signed type which may be an unsigned value. You'll have to check.
No, a signed value is never an unsigned value in disguise (that would result in overflow); the other way around, yes. I disagree. And as mentioned, you can do things like `int difference = int(obj.size()) - 1` any time you want to do operations that are not fully defined on unsigned types (as in: may result in values outside the range), in the same way that you can't do `int foo = sqrt(integer)` because you might get an imaginary number (if sqrt could do that, but I think you get the gist).
C++ has questionable maths. How is that any different from `assert(a <= size); return size - a;`?
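Spelled out, the two idioms quoted in this exchange look something like this (obj, a and size are the placeholders from the quotes; both rely on a run-time precondition, and neither is enforced at compile time):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Signed style: convert once, then use ordinary integer arithmetic; an empty
// container simply yields -1, which later code can test for.
int last_index(const std::vector<int>& obj) {
    int difference = int(obj.size()) - 1;
    return difference;
}

// Unsigned style: guard the subtraction so it cannot wrap.
std::size_t remaining(std::size_t size, std::size_t a) {
    assert(a <= size);   // without this check, a > size would wrap to a huge value
    return size - a;
}
```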
That's not what you wrote earlier. There you introduced a branch in release builds and made the function throw, all because it upsets you that something which should not occur in correct code in the first place can occur if one is writing code that (now that one has observed it going negative) is known to have a bug.
Again: how is that different from using a signed type for "size"? You have exactly the same potential for bugs. You always have to make sure you stay in your valid domain, and a negative size is outside that valid domain. Hence you have to check somewhere, or use control flow to make sure this doesn't happen. So there is no difference between a signed and an unsigned size in that regard.
That the unsigned cannot overflow is not a bonus, it's a problem!
The use of unsigned is false security (actually no security) and serves nothing. In the end, you still need to write correct code (so that signed ints WON'T BE negative where they shouldn't be), but this practice makes your code less flexible, more verbose (the unavoidable casts add to that) and probably slower than using signed. All that because of this 'natural' way of looking at sizes.
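A concrete instance of the problem (the example is illustrative, not from the thread): iterating backwards over a container.

```cpp
#include <cstddef>
#include <cstdio>
#include <vector>

void print_backwards(const std::vector<int>& v) {
    // With an unsigned index the natural-looking loop is broken:
    //   for (std::size_t i = v.size() - 1; i >= 0; --i) ...
    // i >= 0 is always true, --i wraps at zero, and for an empty v the
    // initial v.size() - 1 has already wrapped to SIZE_MAX.

    // With a signed index (and the unavoidable cast) the condition means
    // exactly what it says:
    for (std::ptrdiff_t i = static_cast<std::ptrdiff_t>(v.size()) - 1; i >= 0; --i)
        std::printf("%d\n", v[static_cast<std::size_t>(i)]);
}
```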
It serves as a contract on the API level: "This value is unsigned. Period."
No, it says: if the value is out of the problem domain, don't worry, I'll cover it up (behind your back, so your app appears to be working; I make sure of that at compile time). If the type were signed you'd need something else to enforce that the value is unsigned. So yes, you still need to write correct code, and passing a negative value to an API expecting an unsigned value is in any case a bug.
Of course; I never said, quite the opposite, that one does not need to write correct code.
Surely at runtime. How else could you guarantee that your value isn't negative after you subtract something from it? It can be compile-time if you only add something to it and ignore overflow but you already do that when using signed values anyway.
So why does it need to be guaranteed positive at compile time? It just needs to be positive (a size) at run time, and, as per the above, you'll have to check.
But all this arguing doesn't solve much: What piece of code would actually benefit from having a signed size?
I have already spelt out the reasons in an earlier post in this thread.
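One small example of the kind of code in question (the example itself is illustrative): the difference of two sizes is naturally a signed quantity.

```cpp
#include <cstdint>
#include <vector>

// With unsigned sizes, a.size() - b.size() silently wraps to an enormous
// value whenever b is the longer one, so the caller must branch or cast
// before subtracting. With a signed result the negative gap is directly
// usable (e.g. as "how many elements a is short of b").
std::int64_t length_gap(const std::vector<int>& a, const std::vector<int>& b) {
    return static_cast<std::int64_t>(a.size()) -
           static_cast<std::int64_t>(b.size());
}
```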
And not only the part where you request the size and use it, but also the part where you give that size back to the object, so you'll need to ensure an unsigned value. And yes, `for (int i = 0; i < obj.size(); ++i)` ...
Thou shalt not compare signed and unsigned values; my compilers insist, and keep on telling me that. So now you need a cast to shut the thing up, and I would write: `for (int i = 0, back = static_cast<int>(obj.size()) - 1; i < back; ++i)`. I would like: `for (int i = 0, back = obj.size() - 1; i < back; ++i)`.

This is my last post on the subject (bar new arguments). Thank you for the discussion.

degski

--
@systemdeg
"We value your privacy, click here!" Sod off! - degski
"Anyone who believes that exponential growth can go on forever in a finite world is either a madman or an economist" - Kenneth E. Boulding
"Growth for the sake of growth is the ideology of the cancer cell" - Edward P. Abbey